:_mod-docs-content-type: CONCEPT

[id="ceph-daemon-cardinality_{context}"]
= {Ceph} daemon cardinality

[role="_abstract"]
{Ceph} {CephVernum} and later applies strict constraints on the way daemons can be colocated within the same node.
ifeval::["{build}" != "upstream"]
For more information, see the Red Hat Knowledgebase article link:https://access.redhat.com/articles/1548993[Red Hat Ceph Storage: Supported configurations].
endif::[]
Your topology depends on the available hardware and on the number of {Ceph} services that run on the Controller nodes that you retire. The number of services that you can migrate depends on the number of available nodes in the cluster.

The following diagrams show the distribution of {Ceph} daemons across {Ceph} nodes. The simplest scenario requires at least 3 nodes. An example placement specification for the first scenario follows the diagrams.

* The following scenario includes only RGW and RBD, without the {Ceph} dashboard:
+
----
|     |               |             |
|-----|---------------|-------------|
| osd | mon/mgr/crash | rgw/ingress |
| osd | mon/mgr/crash | rgw/ingress |
| osd | mon/mgr/crash | rgw/ingress |
----

* With the {Ceph} dashboard, but without {rhos_component_storage_file_first_ref}, at least 4 nodes are required. The {Ceph} dashboard has no failover:
+
----
|     |               |                   |
|-----|---------------|-------------------|
| osd | mon/mgr/crash | rgw/ingress       |
| osd | mon/mgr/crash | rgw/ingress       |
| osd | mon/mgr/crash | dashboard/grafana |
| osd | rgw/ingress   | (free)            |
----

* With the {Ceph} dashboard and the {rhos_component_storage_file}, a minimum of 5 nodes is required, and the {Ceph} dashboard has no failover:
+
----
|     |                     |                     |
|-----|---------------------|---------------------|
| osd | mon/mgr/crash       | rgw/ingress         |
| osd | mon/mgr/crash       | rgw/ingress         |
| osd | mon/mgr/crash       | mds/ganesha/ingress |
| osd | rgw/ingress         | mds/ganesha/ingress |
| osd | mds/ganesha/ingress | dashboard/grafana   |
----
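
The following placement specification is a minimal sketch of how the first scenario can be expressed with the {Ceph} orchestrator (cephadm). The host names `ceph-node-1` through `ceph-node-3` and the RGW service ID `default` are examples only; substitute the values for your environment. The `crash` daemon is typically deployed on every host by default and is not listed here.

[source,yaml]
----
# Example cephadm service specification (hypothetical host names).
# Apply with: ceph orch apply -i placement.yaml
service_type: mon
placement:
  hosts:
    - ceph-node-1
    - ceph-node-2
    - ceph-node-3
---
service_type: mgr
placement:
  hosts:
    - ceph-node-1
    - ceph-node-2
    - ceph-node-3
---
service_type: rgw
service_id: default
placement:
  hosts:
    - ceph-node-1
    - ceph-node-2
    - ceph-node-3
----

With this specification, each node runs an OSD, a Monitor, a Manager, and an RGW daemon, which matches the first diagram. If you also deploy an ingress service in front of RGW, you can give it the same placement so that the ingress daemons are colocated with the RGW daemons.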