1.8 Configuring the Storage Cluster

To configure the Storage Cluster, perform the following steps on the administration node:

  1. Initialize Ceph monitoring and deploy a Ceph Monitor on one or more nodes in the Storage Cluster, for example:

    # ceph-deploy mon create-initial
    # ceph-deploy mon create ceph-node{2,3,4}
    Note

    For high availability, Oracle recommends that you configure at least three nodes as Ceph Monitors so that the Storage Cluster can still achieve a quorum if a Monitor fails.

  2. Gather the monitor keys and the OSD and MDS bootstrap keyrings from one of the Ceph Monitors, for example:

    # ceph-deploy gatherkeys ceph-node3
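
    The keyrings gathered by this command are typically written to the directory on the administration node from which you run ceph-deploy. If you want to confirm that they were collected, you can list them, for example (the exact set of keyring files depends on the Ceph release):

    # ls -l *.keyring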
  3. Use the following command to prepare the back-end storage devices for each node in the Storage Cluster:

    # ceph-deploy osd --zap-disk --fs-type fstype create node:device
    Note

    This command deletes all data on the specified device.

    The supported file system types (fstype) are btrfs and xfs.

    For example, prepare a btrfs file system as the back-end storage device on /dev/sdb1 for all nodes in a Storage Cluster:

    # ceph-deploy osd --zap-disk --fs-type btrfs create ceph-node{1,2,3,4}:sdb1
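
    If you are not sure which storage devices are available on a node before you prepare them, you can list the disks that ceph-deploy can see on that node, for example (using the same example node name as above):

    # ceph-deploy disk list ceph-node1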
  4. When you have configured the Storage Cluster and established that it works correctly, re-enable SELinux in enforcing mode on each of the nodes where you previously disabled it and then reboot each node.

    # sed -i '/SELINUX/s/disabled/enforcing/' /etc/selinux/config
    # reboot
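
    After each node has rebooted, you can confirm that SELinux is running in enforcing mode; the getenforce command should report Enforcing:

    # getenforce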
  5. Restart, re-enable, and reconfigure the firewall service on each of the nodes where you previously disabled it.

    For Oracle Linux 6:

    1. Restart and re-enable the firewall service.

      # service iptables start
      # service ip6tables start
      # chkconfig iptables on
      # chkconfig ip6tables on
    2. Allow access to TCP ports 6800 through 7300, which are used by the Ceph OSD daemons, for example:

      # iptables -A INPUT -i interface -p tcp -s network-address/netmask \
        --match multiport --dports 6800:7300 -j ACCEPT
    3. If a node runs a Ceph Monitor, allow access to TCP port 6789, for example:

      # iptables -A INPUT -i interface -p tcp -s network-address/netmask \
        --dport 6789 -j ACCEPT
    4. If a node is configured as an Object Gateway, allow access to port 7480 (or an alternate port that you have configured), for example:

      # iptables -A INPUT -i interface -p tcp -s network-address/netmask \
        --dport 7480 -j ACCEPT
    5. Save the configuration:

      # service iptables save
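
    If you want to verify the rules that are now active on a node, you can list the INPUT chain, for example:

      # iptables -L INPUT -n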

    For Oracle Linux 7:

    1. Restart and re-enable the firewall service.

      # systemctl start firewalld
      # systemctl enable firewalld

    2. Allow access to TCP ports 6800 through 7300, which are used by the Ceph OSD daemons, for example:

      # firewall-cmd --permanent --zone=zone --add-port=6800-7300/tcp
    3. If a node runs a Ceph Monitor, allow access to TCP port 6789, for example:

      # firewall-cmd --permanent --zone=zone --add-port=6789/tcp
    4. If a node is configured as an Object Gateway, allow access to port 7480 (or an alternate port that you have configured), for example:

      # firewall-cmd --permanent --zone=zone --add-port=7480/tcp
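
    Note that ports added with the --permanent option do not take effect in the running firewall until it is reloaded. To apply the rules, reload the firewall and, if required, list the open ports to verify the configuration, for example:

      # firewall-cmd --reload
      # firewall-cmd --zone=zone --list-ports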
  6. Use the following command to check the status of the Storage Cluster:

    # ceph status

    It usually takes several minutes for the Storage Cluster to stabilize before its health is shown as HEALTH_OK.
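
    You can also display just the health summary or the layout of the OSDs across the nodes in the Storage Cluster, for example:

    # ceph health
    # ceph osd tree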