and daemon name. For example, if the cluster name is ``ceph`` (it is by default)
and you want to retrieve the configuration for ``osd.0``, use the following::

	ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show | less


Running Multiple Clusters
=========================

With Ceph, you can run multiple clusters on the same hardware. Running multiple
clusters provides a higher level of isolation compared to using different pools
on the same cluster with different CRUSH rulesets. A separate cluster will have
separate monitor, OSD and metadata server processes. When running Ceph with
default settings, the default cluster name is ``ceph``, which means you would
save your Ceph configuration file with the file name ``ceph.conf`` in the
``/etc/ceph`` default directory.

When you run multiple clusters, you must name your cluster and save the Ceph
configuration file with the name of the cluster. For example, a cluster named
``openstack`` will have a Ceph configuration file with the file name
``openstack.conf`` in the ``/etc/ceph`` default directory.

.. important:: Cluster names must consist of letters a-z and digits 0-9 only.

Separate clusters imply separate data disks and journals, which are not shared
between clusters. Referring to `Metavariables`_, the ``$cluster`` metavariable
evaluates to the cluster name (i.e., ``openstack`` in the foregoing example).
Various settings use the ``$cluster`` metavariable, including:

- ``keyring``
- ``admin socket``
- ``log file``
- ``pid file``
- ``mon data``
- ``mon cluster log file``
- ``osd data``
- ``osd journal``
- ``mds data``
- ``rgw data``

See `General Settings`_, `OSD Settings`_, `Monitor Settings`_, `MDS Settings`_,
`RGW Settings`_ and `Log Settings`_ for relevant path defaults that use the
``$cluster`` metavariable.

.. _General Settings: ../general-config-ref
.. _OSD Settings: ../osd-config-ref
.. _Monitor Settings: ../mon-config-ref
.. _MDS Settings: ../../../cephfs/mds-config-ref
.. _RGW Settings: ../../../radosgw/config-ref/
.. _Log Settings: ../log-and-debug-ref
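
As an illustration, a few of the usual path defaults shown with the
``$cluster`` metavariable in place (a sketch of typical built-in values, which
may vary by release; you do not normally need to set these explicitly)::

	[global]
	# With a cluster named "openstack", $cluster expands to "openstack",
	# so the admin socket becomes /var/run/ceph/openstack-$name.asok
	# and the log file becomes /var/log/ceph/openstack-$name.log.
	admin socket = /var/run/ceph/$cluster-$name.asok
	log file = /var/log/ceph/$cluster-$name.log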

When deploying the Ceph configuration file, ensure that you use the cluster name
in your command line syntax. For example::

	ssh myserver01 sudo tee /etc/ceph/openstack.conf < /etc/ceph/openstack.conf

When creating default directories or files, you should also use the cluster
name at the appropriate places in the path. For example::

	sudo mkdir /var/lib/ceph/osd/openstack-0
	sudo mkdir /var/lib/ceph/mon/openstack-a

.. important:: When running monitors on the same host, you should use
   different ports. By default, monitors use port 6789. If you already
   have monitors using port 6789, use a different port for your other cluster(s).
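
For example, the second cluster's monitor section in ``openstack.conf`` might
pin a non-default port (the host name, IP address and port ``6790`` below are
illustrative assumptions, not prescribed values)::

	[mon.a]
	host = myserver01
	# Avoids colliding with a default-cluster monitor on port 6789.
	mon addr = 10.0.0.1:6790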

To invoke a cluster other than the default ``ceph`` cluster, use the
``--cluster=clustername`` option with the ``ceph`` command. For example::

	ceph --cluster=openstack health
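
Other Ceph command-line tools accept the same ``--cluster`` option; for
example (a sketch, assuming your keyring permits these operations)::

	rados --cluster=openstack lspools
	rbd --cluster=openstack ls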