1.7 Ceph FS

Important

The Ceph FS feature is available as a technology preview. Support for this feature is limited, and it should not be used in production environments. If you intend to experiment with this feature, refer to the upstream documentation for more detailed information: http://docs.ceph.com/docs/master/cephfs/.

To set up a Ceph FS and mount it on a client system, perform the following steps:

  1. Deploy a Ceph Metadata Server (MDS).

    At least one metadata server must be active within your environment to use Ceph FS. If you do not have a dedicated server available for this purpose, you can install the MDS service on an existing Monitor node within your Storage Cluster, as the service does not have significant resource requirements. To deploy Ceph MDS in your environment, run the following command from your deployment node, replacing ceph-node3 with the hostname of the node that is to run the MDS service:

    # ceph-deploy mds create ceph-node3
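
    Once the deployment completes, you can confirm that the new MDS daemon has registered with the cluster. Until a filesystem is created in a later step, the daemon is reported as a standby rather than as active. For example, run the following from a cluster node:

    # ceph mds stat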
  2. Deploy the Ceph Client.

    The Ceph Client must be deployed on the system where you intend to mount the Ceph FS and this system must have the appropriate network access and authentication keyring to access the Storage Cluster. See Section 1.5.1, “Installing the Ceph Client” for more information on deploying and setting up the Ceph Client.
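
    Once the Ceph Client is installed and has its copy of the cluster configuration and keyring, a quick way to confirm that it can reach the Storage Cluster is to check the cluster health from the Ceph Client node:

    # ceph health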

  3. Create storage pools and a new Ceph Filesystem.

    A Ceph Filesystem requires at least two storage pools to function. The first pool is used to store the file data, while the second is used to store metadata. Although the Ceph command line includes commands for creating and removing Ceph filesystems, only one filesystem can exist at a time.

    To create the storage pools, run the following commands from the Ceph Client node. The final parameter in each command specifies the number of placement groups for the pool:

    # ceph osd pool create cephfs_data 1
    # ceph osd pool create cephfs_metadata 2

    To create the new Ceph Filesystem, run the following command from the Ceph Client node:

    # ceph fs new cephfs cephfs_metadata cephfs_data
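
    To confirm that the pools and the new filesystem exist, you can list them from the same node (the exact output depends on your Ceph release):

    # ceph osd lspools
    # ceph fs ls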
  4. Check the status of the Ceph MDS.

    After a filesystem is created, the Ceph MDS enters an active state. You can only mount the filesystem once the MDS is active. To check its status:

    # ceph mds stat
    e5: 1/1/1 up {0=ceph-node3=up:active}
  5. Store the secret key used for admin authentication into a file that can be used for mounting the Ceph FS.

    The Storage Cluster admin key is stored in /etc/ceph/ceph.client.admin.keyring on each node in the cluster and also on the Ceph Client node. When mounting a Ceph FS, the key is required to authenticate the mount request. To prevent the key from being visible in the process list, it is best practice to copy its value into a separate file that is secured with the appropriate permissions.

    For example:

    # echo $(sed -n 's/.*key *= *\([^ ]*.*\)/\1/p' < /etc/ceph/ceph.client.admin.keyring) > /etc/ceph/admin.secret
    # chmod 600 /etc/ceph/admin.secret
    
    # cat /etc/ceph/ceph.client.admin.keyring
    [client.admin]
            key = AQDIvtZXzBriJBAA+3HmoYkUmPFnKljqhxlYGw==
    # cat /etc/ceph/admin.secret 
    AQDIvtZXzBriJBAA+3HmoYkUmPFnKljqhxlYGw==
  6. To mount the filesystem, you can use either the Ceph kernel module or the Ceph FUSE (Filesystem in Userspace) tools.

    Mount using kernel module

    1. Create the mount point directory, if required:

      # mkdir -p /mnt/cephfs
    2. Mount the file system:

       # mount -t ceph ceph-node2:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret

      Replace ceph-node2 with the hostname or IP address of a Ceph Monitor within your Storage Cluster. You can specify multiple monitor addresses separated by commas, although only one active monitor is needed to mount the filesystem successfully. If you do not know which monitors are available, run ceph mon stat to list the available monitors along with their IP addresses and port numbers. A simple way to verify that the filesystem is mounted is shown after these steps.

      Mounting a Ceph Filesystem automatically loads the ceph and libceph kernel modules on the Ceph Client node.

    3. To unmount the filesystem:

      # umount /mnt/cephfs
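
    As noted above, a simple way to verify that the Ceph FS is mounted is to check the mount table on the Ceph Client node, for example:

      # mount | grep ceph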

    Mount using Ceph FUSE

    1. If you have not installed the ceph-fuse package already, install it on the client system:

      # yum install ceph-fuse
    2. Ceph FUSE can be used from any system, as long as that system has access to the Ceph Storage Cluster configuration and a copy of the Ceph Client admin keyring. If you have already configured this host as a Ceph Client, this information is already in place and you can skip this step. You can also point ceph-fuse at a specific monitor, as shown in the note after these steps.

      • On the client host, create the appropriate configuration directory:

        # mkdir -p /etc/ceph
      • From one of the Ceph Storage Cluster Monitor hosts, copy the configuration file and the admin keyring into this directory, replacing client_host with the hostname or IP address of the client system:

        # scp /etc/ceph/ceph.conf root@client_host:/etc/ceph
        # scp /etc/ceph/ceph.client.admin.keyring root@client_host:/etc/ceph
      • Ensure that the Ceph configuration file and the keyring have the appropriate permissions set on the client system:

        # chmod 644 /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
    3. Create a mount point for the file system, if required:

      # mkdir -p /mnt/cephfs
    4. To mount the Ceph Filesystem as FUSE, use the ceph-fuse command. For example:

      # ceph-fuse -c /etc/ceph/ceph.conf /mnt/cephfs
    5. To unmount a Ceph Filesystem mounted as FUSE, do the following:

      # fusermount -u /mnt/cephfs
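
    By default, ceph-fuse reads the monitor addresses from /etc/ceph/ceph.conf. You can also specify a monitor explicitly with the -m option. For example, assuming ceph-node2 is one of the Monitor hosts in your Storage Cluster:

      # ceph-fuse -m ceph-node2:6789 /mnt/cephfs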