The software described in this documentation is either no longer supported or is in extended support.
Oracle recommends that you upgrade to a current supported release.
In the example used in the following steps, the deployment node is ceph-node1.example.com (192.168.1.51), which is the same as the administration node.
Perform the following steps on the deployment node:
Install the ceph-deploy package:

# yum install ceph-deploy
Create a Ceph configuration directory for the Storage Cluster and change to this directory, for example:

# mkdir /var/mydom_ceph
# cd /var/mydom_ceph
Note: This is the working configuration directory that the Ceph deployment node uses to roll out configuration changes to the cluster and to client and gateway nodes. If you need to change the Ceph configuration files in future, make the changes in this directory and then use the ceph-deploy config push command to update the configuration on the other nodes in the cluster.
Use the ceph-deploy command to define the members of the Storage Cluster, for example:

# ceph-deploy --cluster mydom new ceph-node{1..4}
Tip: In this example, the bash shell brace-expansion shorthand is used to add the nodes named ceph-node1, ceph-node2, ceph-node3, and ceph-node4. You can equally specify these hostnames manually as a space-separated list. You may see this and similar notation used throughout this document.
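If you are unsure what a brace expression will expand to, you can preview it with echo before passing it to ceph-deploy. A quick sketch (this relies on bash; plain sh does not perform brace expansion):

```shell
# Preview the brace expansion that ceph-deploy will receive as arguments.
echo ceph-node{1..4}
# Prints: ceph-node1 ceph-node2 ceph-node3 ceph-node4
```

The same notation works for any contiguous numeric range, for example ceph-node{2..3} to address only a subset of nodes.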
Note: If you do not intend to run more than one Storage Cluster on the same hardware, you do not need to specify a cluster name using the --cluster option.
Edit /var/mydom_ceph/ceph.conf and set the default number of replicas, for example:

osd pool default size = 2
Edit /var/mydom_ceph/ceph.conf and add the workaround for the issue described in Section 1.8.3, “RBD kernel module fails to map an image to a block device”:

rbd default features = 3
You can now install Ceph on the remaining Storage Cluster nodes. See Section 1.3.3, “Installing and Configuring Ceph on Participating Storage Cluster Nodes”.