The software described in this documentation is either no longer supported or is in extended support.
Oracle recommends that you upgrade to a current supported release.
Having installed and configured the Ceph deployment node, you can use this node to install Ceph on the other nodes participating in the Storage Cluster.
To install Ceph on all the Storage Cluster nodes, run the following command on the deployment node:
# ceph-deploy install ceph-node{1..4}
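The ceph-node{1..4} argument is shell brace expansion shorthand for four hostnames, so the single command above installs Ceph on every node in one run. An equivalent portable loop makes the target list explicit (ceph-node1 through ceph-node4 are the example hostnames used throughout this procedure):

```shell
# Build the list of Storage Cluster node names targeted by the
# "ceph-deploy install" command above. Equivalent to the Bash brace
# expansion ceph-node{1..4}, but portable to any POSIX shell.
nodes=""
for i in 1 2 3 4; do
  nodes="$nodes ceph-node$i"
done
nodes=${nodes# }   # trim the leading space
echo "$nodes"
```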
To configure the Storage Cluster, perform the following steps on the administration node:
Initialize Ceph monitoring and deploy a Ceph Monitor on one or more nodes in the Storage Cluster, for example:

# ceph-deploy mon create-initial
# ceph-deploy mon create ceph-node{2,3,4}

Note: For high availability, Oracle recommends that you configure at least three nodes as Ceph Monitors.
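The three-monitor recommendation follows from how Ceph Monitors form quorum: a strict majority of the configured monitors must be up, so a cluster with n monitors tolerates the loss of the remainder. A small sketch of the majority arithmetic (not a Ceph command, just the rule behind the recommendation):

```shell
# Monitor quorum requires a strict majority of configured monitors:
# majority = n/2 + 1 (integer division).
# Failures tolerated = n - majority, so with 3 monitors, 2 must be up
# and one monitor can fail without losing quorum.
quorum_majority() {
  n=$1
  echo $(( n / 2 + 1 ))
}

quorum_majority 3
```

Note that with a single monitor, or with two monitors, quorum is lost as soon as a majority is unavailable, which is why an odd count of three or more is the usual choice.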
Gather the monitor keys and the OSD and MDS bootstrap keyrings from one of the Ceph Monitors, for example:

# ceph-deploy gatherkeys ceph-node3
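The gatherkeys subcommand writes the keyring files into the current working directory on the administration node. A small helper that reports which of the usual files are present can confirm the step succeeded (the file names below are the typical set for this ceph-deploy release; verify against your own directory listing):

```shell
# Report which of the keyring files typically written by
# "ceph-deploy gatherkeys" are present in the current directory.
# File names are the usual set for this ceph-deploy release; your
# release may write additional keyrings.
check_keyrings() {
  for keyring in ceph.client.admin.keyring \
                 ceph.bootstrap-osd.keyring \
                 ceph.bootstrap-mds.keyring \
                 ceph.mon.keyring; do
    if [ -f "$keyring" ]; then
      echo "found: $keyring"
    else
      echo "missing: $keyring"
    fi
  done
}

check_keyrings
```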
Use the following command to prepare the back-end storage devices for each node in the Storage Cluster:

# ceph-deploy osd create --zap-disk --fs-type fstype node:device

Note: This command deletes all data on the specified device and alters the partitioning of the device.

Replace node with the node name or hostname where the disk is located. Replace device with the device name for the disk, as reported when you run lsblk on the host where the disk is located. The supported file system types (fstype) are btrfs and xfs. This command repartitions the disk, usually creating two partitions: one to contain the data and the other to contain the journal.

For example, to prepare a btrfs file system as the back-end storage device on /dev/sdb for a node in the Storage Cluster:

# ceph-deploy osd create --zap-disk --fs-type btrfs ceph-node1:sdb

Use the following commands to check the health and status of the Storage Cluster:
# ceph health
# ceph status
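The osd create step above is typically repeated once per node and disk. A dry-run sketch that only prints the commands it would issue can help review the full set before destroying any data (the hostnames and device name are the examples from this document; substitute the devices reported by lsblk on each host):

```shell
# Print, rather than run, one "ceph-deploy osd create" command per
# node:device pair. Remove the echo to execute for real; remember that
# --zap-disk destroys all existing data on the target device.
fstype=btrfs
for target in ceph-node1:sdb ceph-node2:sdb ceph-node3:sdb ceph-node4:sdb; do
  echo ceph-deploy osd create --zap-disk --fs-type "$fstype" "$target"
done
```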
It usually takes several minutes for the Storage Cluster to stabilize before its health is shown as HEALTH_OK. You can also check the cluster quorum status to get an indication of the quorum status of the cluster monitors:

# ceph quorum_status --format json-pretty
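The quorum_status output includes a quorum_names array listing the monitors currently in quorum. A sketch of extracting that field from saved output using only sed and tr (the JSON fragment below is a hand-written illustration, not captured from a live cluster; if jq is available, a filter such as '.quorum_names' is simpler and more robust against pretty-printed, multi-line output):

```shell
# Extract the quorum_names array from saved "ceph quorum_status" output.
# The sample is an illustrative single-line fragment, not real cluster
# output; real json-pretty output spans multiple lines, where jq is a
# better fit than sed.
sample='{"election_epoch": 8, "quorum": [0, 1, 2], "quorum_names": ["ceph-node2", "ceph-node3", "ceph-node4"]}'
quorum_names=$(printf '%s' "$sample" \
  | sed -n 's/.*"quorum_names": \[\([^]]*\)\].*/\1/p' \
  | tr -d '" ')
echo "$quorum_names"
```

All three example monitors appearing in the list indicates the monitors are in quorum.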
Refer to the upstream Ceph documentation for help troubleshooting any issues with the health or status of your Storage Cluster.