The software described in this documentation is either no longer supported or is in extended support.
Oracle recommends that you upgrade to a current supported release.

1.3 Installing and Configuring a Ceph Storage Cluster

A Ceph Storage Cluster consists of several systems, known as nodes, each running the Ceph OSD (Object Storage Device) daemon. The Ceph Storage Cluster must also run the Ceph Monitor daemon on one or more nodes, and can optionally run the Ceph Object Gateway on one or more nodes. One node is selected as the administration node, from which commands can be run to control the cluster. Typically, the administration node is also used as the deployment node, from which the other systems can automatically be set up and configured as additional nodes within the cluster.

Note

For data integrity, a Storage Cluster should contain two or more nodes for storing copies of an object.

For high availability, a Storage Cluster should contain three or more nodes that store copies of an object.

In the example used in the following steps, the administration and deployment node is ceph-node1.example.com (192.168.1.51).

1.3.1 Preparing the Storage Cluster Nodes Before Installing Ceph

There are some basic requirements for each Oracle Linux system that you intend to use as a Storage Cluster node. These include the following items, for which some preparatory work may be required before you can begin your deployment:

  1. Time must be accurate and synchronized across the nodes within the storage cluster. This is achieved by installing and configuring NTP on each system that you wish to run as a node in the cluster. If the NTP service is not already configured, install and start it. See the Oracle Linux 7 Administrator's Guide for more information on configuring NTP.

    Note

    Use the hwclock --show command to ensure that all nodes agree on the time. By default, the Ceph monitors report a HEALTH_WARN clock skew detected on mon warning if the clocks on the nodes differ by more than 50 milliseconds.
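
    For example, one way to install, enable, and start the NTP service, assuming that the ntp package is available from your configured yum repositories, is:

      # yum install ntp
      # systemctl enable ntpd
      # systemctl start ntpd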

  2. Cluster network communications must be able to take place between nodes within the cluster. If firewall software is running on any of the nodes, it must either be disabled or, preferably, configured to facilitate network traffic on the required ports.

    To stop and disable the firewall daemon on Oracle Linux 7, you can do the following:

    # systemctl stop firewalld
    # systemctl disable firewalld

    Preferably, however, leave the firewall running and configure the following rules:

    1. Allow TCP traffic on port 6789 to enable the Ceph Monitor:

       # firewall-cmd --zone=public --add-port=6789/tcp --permanent

    2. Allow TCP traffic on ports 6800 to 7300 to enable traffic for the Ceph OSD daemons:

       # firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent

    3. Allow TCP traffic on port 7480 to enable the Ceph Object Gateway:

       # firewall-cmd --zone=public --add-port=7480/tcp --permanent

    4. After modifying the firewall rules, restart the firewall daemon service:

       # systemctl restart firewalld.service
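
    To confirm that the changes have taken effect, you can list the ports that are open in the public zone, for example:

      # firewall-cmd --zone=public --list-ports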
  3. Cluster nodes must be able to resolve the fully qualified domain name for each node within the cluster. You may either use DNS for this purpose, or provide entries within /etc/hosts for each system. If you choose to rely on DNS, it must have sufficient redundancy to ensure that the cluster can perform name resolution at any time. If you choose to edit /etc/hosts, add entries for the IP address and host name of all of the nodes in the Storage Cluster, for example:

    192.168.1.51    ceph-node1.example.com ceph-node1
    192.168.1.52    ceph-node2.example.com ceph-node2
    192.168.1.53    ceph-node3.example.com ceph-node3
    192.168.1.54    ceph-node4.example.com ceph-node4

    Note

    Although you can use DNS to configure host name to IP address mapping, Oracle recommends that you also configure /etc/hosts in case the DNS service becomes unavailable.
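
    As a quick check, you can verify on each node that the names of the other nodes resolve correctly, for example:

      # getent hosts ceph-node2.example.com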

  4. The Ceph Storage Cluster deployment node must be able to connect to each prospective node in the cluster over SSH, to facilitate deployment. To do this, you must generate an SSH key on the deployment node and copy the public key to each of the other nodes in the Storage Cluster.

    1. On the deployment node, generate the SSH key, specifying an empty passphrase:

       # ssh-keygen

    2. From the deployment node, copy the key to the other nodes in the Storage Cluster, for example:

       # ssh-copy-id root@ceph-node2
       # ssh-copy-id root@ceph-node3
       # ssh-copy-id root@ceph-node4
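
    To confirm that key-based authentication is working, you can run a simple command on each node from the deployment node, for example:

      # ssh root@ceph-node2 hostname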
  5. To prevent errors when running ceph-deploy as a user with passwordless sudo privileges, use visudo to comment out the Defaults requiretty setting in /etc/sudoers or change it to Defaults:ceph !requiretty.
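
    For example, assuming the deployment user is named ceph, you could run visudo and either comment out the existing setting, so that the relevant line reads:

      # Defaults    requiretty

    or scope the setting to that user:

      Defaults:ceph !requiretty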

You can now install and configure the Storage Cluster deployment node, which is usually the same system as the administration node. See Section 1.3.2, “Installing and Configuring Ceph on the Storage Cluster Deployment Node”.