The software described in this documentation is either no longer supported or is in extended support.
Oracle recommends that you upgrade to a current supported release.
Having installed and configured the Ceph deployment node, you can use this node to install Ceph on the other nodes participating in the Storage Cluster.
To install Ceph on all the Storage Cluster nodes, run the following command on the deployment node:
# ceph-deploy install ceph-node{1..4}
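As an optional sanity check that is not part of the original procedure, you can confirm that the Ceph packages are present on each node by querying the installed version. This is a minimal sketch, assuming the passwordless SSH access that ceph-deploy requires is already in place; repeat for each node:

# ssh ceph-node1 ceph --version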
To configure the Storage Cluster, perform the following steps on the administration node:

1. Initialize Ceph monitoring and deploy a Ceph Monitor on one or more nodes in the Storage Cluster, for example:
# ceph-deploy mon create-initial
# ceph-deploy mon create ceph-node{2,3,4}

Note: For high availability, Oracle recommends that you configure at least three nodes as Ceph Monitors.
2. Gather the monitor keys and the OSD and MDS bootstrap keyrings from one of the Ceph Monitors, for example:
# ceph-deploy gatherkeys ceph-node3
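ceph-deploy typically writes the gathered keyring files to the current working directory on the administration node. As a quick, hedged check that the keys were collected, you can list them; the file names shown here assume the default cluster name (ceph):

# ls -l ceph.*.keyring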
3. Use the following command to prepare the back-end storage devices for each node in the Storage Cluster:

# ceph-deploy osd create --zap-disk --fs-type fstype node:device

Note: This command deletes all data on the specified device and alters the partitioning of the device.
Replace node with the node name or hostname of the host where the disk is located. Replace device with the device name for the disk as reported when you run lsblk on that host (see the sketch after the example below). The supported file system types (fstype) are btrfs and xfs. This command repartitions the disk, usually creating two partitions: one to contain the data and the other to contain the journal.

For example, to prepare a btrfs file system as the back-end storage device on /dev/sdb for a node in the Storage Cluster:
# ceph-deploy osd create --zap-disk --fs-type btrfs ceph-node1:sdb
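If you are unsure which device name to pass to the osd create command, you can inspect the disks on the target node first. The following is a sketch, assuming the node is reachable over SSH from the administration node and that your ceph-deploy release includes the disk list subcommand:

# ssh ceph-node1 lsblk
# ceph-deploy disk list ceph-node1

The device argument (sdb in the example above) is the bare device name reported by lsblk, without the /dev/ prefix.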
4. Use the following commands to check the health and status of the Storage Cluster:

# ceph health
# ceph status

It usually takes several minutes for the Storage Cluster to stabilize before its health is shown as HEALTH_OK. You can also check the cluster quorum status to get an indication of the quorum status of the cluster monitors:
# ceph quorum_status --format json-pretty

Refer to the upstream Ceph documentation for help troubleshooting any issues with the health or status of your Storage Cluster.
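While waiting for the cluster to settle, you can also watch cluster events as they happen. The command below is standard Ceph tooling rather than part of this procedure; it prints the current status followed by a running log of cluster messages (press Ctrl+C to exit):

# ceph -w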

