[must-gather ] OUT 2026-02-25T12:48:49.035034782Z Using must-gather plug-in image: quay.io/openstack-k8s-operators/openstack-must-gather:latest
W0225 12:48:49.035099 21889 mustgather.go:390] volume percentage greater than or equal to 80 might cause filling up the disk space and have an impact on other components running on master
When opening a support case, bugzilla, or issue please include the following summary data along with any other requested information:
ClusterID: 7dec01f8-89a2-484c-b000-0ae6c3570049
ClientVersion: 4.18.1
ClusterVersion: Stable at "4.18.1"
ClusterOperators:
    clusteroperator/authentication is not available (
        OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.
        OAuthServerRouteEndpointAccessibleControllerAvailable: Get "https://oauth-openshift.apps-crc.testing/healthz": EOF
        OAuthServerServiceEndpointAccessibleControllerAvailable: Get "https://10.217.4.222:443/healthz": dial tcp 10.217.4.222:443: connect: connection refused
        WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"5d4ec001-2800-496a-8ad0-e8c9d0daf296", ResourceVersion:"37082", Generation:0, CreationTimestamp:time.Date(2025, time.February, 23, 5, 11, 12, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"endpointslice.kubernetes.io/skip-mirror":"true"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-apiserver", Operation:"Update", APIVersion:"v1", Time:time.Date(2026, time.February, 25, 12, 38, 26, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0042aac60), Subresource:""}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)
    ) because
        OAuthServerServiceEndpointAccessibleControllerDegraded: Get "https://10.217.4.222:443/healthz": dial tcp 10.217.4.222:443: connect: connection refused
        OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready
        IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server
        OAuthServerRouteEndpointAccessibleControllerDegraded: Get "https://oauth-openshift.apps-crc.testing/healthz": EOF
    clusteroperator/kube-apiserver is degraded because NodeInstallerDegraded: 1 nodes are failing on revision 11:
        NodeInstallerDegraded: installer: ) (len=17) "user-serving-cert",
        NodeInstallerDegraded: (string) (len=21) "user-serving-cert-000",
        NodeInstallerDegraded: (string) (len=21) "user-serving-cert-001",
        NodeInstallerDegraded: (string) (len=21) "user-serving-cert-002",
        NodeInstallerDegraded: (string) (len=21) "user-serving-cert-003",
        NodeInstallerDegraded: (string) (len=21) "user-serving-cert-004",
        NodeInstallerDegraded: (string) (len=21) "user-serving-cert-005",
        NodeInstallerDegraded: (string) (len=21) "user-serving-cert-006",
        NodeInstallerDegraded: (string) (len=21) "user-serving-cert-007",
        NodeInstallerDegraded: (string) (len=21) "user-serving-cert-008",
        NodeInstallerDegraded: (string) (len=21) "user-serving-cert-009"
        NodeInstallerDegraded: },
        NodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {
        NodeInstallerDegraded: (string) (len=20) "aggregator-client-ca",
        NodeInstallerDegraded: (string) (len=9) "client-ca",
        NodeInstallerDegraded: (string) (len=29) "control-plane-node-kubeconfig",
        NodeInstallerDegraded: (string) (len=26) "check-endpoints-kubeconfig"
        NodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {
        NodeInstallerDegraded: (string) (len=17) "trusted-ca-bundle"
        NodeInstallerDegraded: CertDir: (string) (len=57) "/etc/kubernetes/static-pod-resources/kube-apiserver-certs",
        NodeInstallerDegraded: ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources",
        NodeInstallerDegraded: PodManifestDir: (string) (len=25) "/etc/kubernetes/manifests",
        NodeInstallerDegraded: Timeout: (time.Duration) 2m0s,
        NodeInstallerDegraded: StaticPodManifestsLockFile: (string) "",
        NodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,
        NodeInstallerDegraded: KubeletVersion: (string) ""
        NodeInstallerDegraded: })
        NodeInstallerDegraded: I0225 12:41:02.084851 1 cmd.go:413] Getting controller reference for node crc
        NodeInstallerDegraded: I0225 12:41:02.100550 1 cmd.go:426] Waiting for installer revisions to settle for node crc
        NodeInstallerDegraded: I0225 12:41:02.100649 1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
        NodeInstallerDegraded: I0225 12:41:02.100661 1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
        NodeInstallerDegraded: I0225 12:41:02.182199 1 cmd.go:518] Waiting additional period after revisions have settled for node crc
        NodeInstallerDegraded: I0225 12:41:32.182450 1 cmd.go:524] Getting installer pods for node crc
        NodeInstallerDegraded: F0225 12:41:46.186769 1 cmd.go:109] Get "https://10.217.4.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%!D(MISSING)installer": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
    clusteroperator/machine-config is not upgradeable because One or more machine config pools are degraded, please see `oc get mcp` for further details and resolve before upgrading
    clusteroperator/cloud-credential is missing
    clusteroperator/cluster-autoscaler is missing
    clusteroperator/insights is missing
    clusteroperator/monitoring is missing
    clusteroperator/storage is missing
[must-gather ] OUT 2026-02-25T12:48:49.360763693Z namespace/openshift-must-gather-pgxsx created
[must-gather ] OUT 2026-02-25T12:48:49.401208781Z clusterrolebinding.rbac.authorization.k8s.io/must-gather-gk4gv created
Warning: spec.nodeSelector[node-role.kubernetes.io/master]: use "node-role.kubernetes.io/control-plane" instead
[must-gather ] OUT 2026-02-25T12:48:49.664432901Z pod for plug-in image quay.io/openstack-k8s-operators/openstack-must-gather:latest created
[must-gather-mds6f] OUT 2026-02-25T12:49:49.670241339Z gather did not start: unable to pull image: ImagePullBackOff: Back-off pulling image "quay.io/openstack-k8s-operators/openstack-must-gather:latest"
[must-gather ] OUT 2026-02-25T12:49:49.681510023Z namespace/openshift-must-gather-pgxsx deleted

Reprinting Cluster State:
When opening a support case, bugzilla, or issue please include the following summary data along with any other requested information:
ClusterID: 7dec01f8-89a2-484c-b000-0ae6c3570049
ClientVersion: 4.18.1
ClusterVersion: Stable at "4.18.1"
ClusterOperators:
    clusteroperator/machine-config is not upgradeable because One or more machine config pools are degraded, please see `oc get mcp` for further details and resolve before upgrading
    clusteroperator/cloud-credential is missing
    clusteroperator/cluster-autoscaler is missing
    clusteroperator/insights is missing
    clusteroperator/monitoring is missing
    clusteroperator/storage is missing

error: gather did not start for pod must-gather-mds6f: unable to pull image: ImagePullBackOff: Back-off pulling image "quay.io/openstack-k8s-operators/openstack-must-gather:latest"
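The gather itself never ran: the node could not pull the plug-in image, so the only actionable item in the final `error:` line is the image reference. A minimal sketch of pulling that reference back out of the log (the `err` string below is copied verbatim from the error above; the variable names are illustrative, not part of the tooling):

```shell
# The final error line from the must-gather run above, verbatim.
err='error: gather did not start for pod must-gather-mds6f: unable to pull image: ImagePullBackOff: Back-off pulling image "quay.io/openstack-k8s-operators/openstack-must-gather:latest"'

# Extract the quoted image reference from the message.
image=$(printf '%s\n' "$err" | sed -n 's/.*pulling image "\([^"]*\)".*/\1/p')

echo "$image"
# -> quay.io/openstack-k8s-operators/openstack-must-gather:latest
```

Once the pull failure itself is resolved (for example by mirroring the image into a registry the node can reach, or fixing the pull secret), the gather can be retried against that reference with `oc adm must-gather --image="$image"`.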