1.3.3 Installing and Configuring Ceph on Participating Storage Cluster Nodes

Having installed and configured the Ceph deployment node, you can use this node to install Ceph on the other nodes participating in the Storage Cluster.

To install Ceph on all the Storage Cluster nodes, run the following command on the deployment node:

# ceph-deploy install ceph-node{1..4}
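
As an optional check, you can confirm that the packages installed correctly by querying the Ceph version on each node, assuming the passwordless SSH access that ceph-deploy relies on is already in place. The node name below follows the examples used in this section:

# ssh ceph-node1 ceph --version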

To configure the Storage Cluster, perform the following steps on the deployment (administration) node:

  1. Initialize Ceph monitoring and deploy a Ceph Monitor on one or more nodes in the Storage Cluster, for example:

    # ceph-deploy mon create-initial
    # ceph-deploy mon create ceph-node{2,3,4}
    Note

    For high availability, Oracle recommends that you configure at least three nodes as Ceph Monitors.

  2. Gather the monitor keys and the OSD and MDS bootstrap keyrings from one of the Ceph Monitors, for example:

    # ceph-deploy gatherkeys ceph-node3
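
    The gathered keyrings are written to the current working directory on the deployment node. As a quick check, you can list them; the file names shown below are typical, but the exact set can vary between Ceph releases:

    # ls ceph.*.keyring
    ceph.bootstrap-mds.keyring  ceph.bootstrap-osd.keyring  ceph.client.admin.keyring  ceph.mon.keyring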
  3. Use the following command to prepare the back-end storage devices for each node in the Storage Cluster:

    # ceph-deploy osd create --zap-disk --fs-type fstype node:device
    Note

    This command deletes all data on the specified device and alters the partitioning of the device.

    Replace node with the host name of the node where the disk is located. Replace device with the device name of the disk, as reported when you run lsblk on that host. The supported file system types (fstype) are btrfs and xfs. The command repartitions the disk, typically creating two partitions: one for the data and one for the journal.

    For example, to prepare the disk /dev/sdb on ceph-node1 as a back-end storage device with a btrfs file system:

    # ceph-deploy osd create --zap-disk --fs-type btrfs ceph-node1:sdb
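
    To identify the device name to pass to this command, you can first list the block devices on the target node. The output below is illustrative only; the device names and sizes will differ on your hardware:

    # lsblk
    NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
    sda      8:0    0   50G  0 disk
    ├─sda1   8:1    0    1G  0 part /boot
    └─sda2   8:2    0   49G  0 part /
    sdb      8:16   0  100G  0 disk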
  4. Use the following commands to check the health and status of the Storage Cluster:

    # ceph health
    # ceph status

    It usually takes several minutes for the Storage Cluster to stabilize before its health is reported as HEALTH_OK. You can also query the quorum status to check that the Ceph Monitors have formed a quorum:

    # ceph quorum_status --format json-pretty
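
    When the Storage Cluster is healthy, the ceph health command reports HEALTH_OK, as shown in the illustrative output below; while the cluster is still settling, additional warning detail is reported instead:

    # ceph health
    HEALTH_OK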

    Refer to the upstream Ceph documentation for help troubleshooting any issues with the health or status of your Storage Cluster.