Use Oracle Cluster File System Tools on Oracle Cloud Infrastructure

Introduction

OCFS2 is a general-purpose clustered file system that increases storage performance and availability in clustered environments. In Oracle Cloud Infrastructure, you can deploy OCFS2 file systems on Read/Write - shareable block storage attached to your instances.

Note: For optimal OCFS2 file system performance, keep the number of files within the OCFS2 file system low. Applications such as Oracle E-Business Suite (EBS) or Oracle WebCenter Content (WCC) should use a separate NFS file system for directories containing large numbers of temporary files. During runtime, system administrators should actively archive, purge, or move non-current files to one or more separate subdirectories or file systems, and regularly monitor the OCFS2 file system's file and inode usage.
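
For example, you can watch file and inode consumption on a mounted OCFS2 volume with df and find (a minimal sketch; /u01 is the mount point used later in this tutorial, so adjust the path to match your environment):

    # Report inode usage for the file system that contains /u01
    df -i /u01
    # Count regular files on that file system only (-xdev stays on one device)
    sudo find /u01 -xdev -type f | wc -l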

This tutorial provides instructions on using ocfs2-tools to deploy and test a two node Oracle Cluster File System version 2 (OCFS2) on Oracle Cloud Infrastructure.

Objectives

Prerequisites

Two Oracle Linux systems with the following configuration:

Configure a Security List Ingress Rule for the Virtual Cloud Network

Security lists control the traffic in and out of the various subnets associated with the VCN. When configuring an OCFS2 cluster, you need to add an ingress rule that allows the instances to communicate over TCP and UDP port 7777.

In this task, you add the required port to the stateful ingress rules of the default security list for the Virtual Cloud Network (VCN).

  1. Connect to ol-server by following the instructions in the Oracle Linux Lab Basics guide, which provides connection and usage instructions.

  2. From the server’s Instance details section of the Instance Information tab, click on the link beside Virtual cloud network to view the VCN details page.


  3. Under Resources, click on Security Lists.


  4. Click on the name of the default security list in the table.

    Note: Under Resources, be certain to click Ingress Rules to display the current list of ingress rules.

  5. Click the Add Ingress Rules button.


    1. Leave Stateless box unchecked.
    2. Source Type = CIDR
    3. Source CIDR = 10.0.0.0/16
    4. IP Protocol = TCP
    5. Destination Port Range = 7777


  6. Click the + Another Ingress Rule button and add the following for UDP access:

    1. Leave Stateless box unchecked.
    2. Source Type = CIDR
    3. Source CIDR = 10.0.0.0/16
    4. IP Protocol = UDP
    5. Destination Port Range = 7777


    Note: Carefully review your selections, and click Add Ingress Rules.

  7. Verify you see port 7777 listed in the Ingress Rules list.


Create and Attach a Block Volume

In this task, you create a block volume in the lab compartment and attach it to ol-server.

  1. Create a block volume.

    1. In the web console, select Storage > Block Storage.


      The Block Volumes page displays.

    2. Click the Create Block Volume button.


      The Create block volume dialog displays.

    3. In the dialog, provide the following information:


      • Name: ocfs2-bv01
      • Volume Size and Performance:
        • Custom
        • 50 for the size

      Click the Create Block Volume button. The volume becomes available (green icon) shortly.

  2. Attach the block volume to ol-server.

    1. Under the Resources section of the Block Volumes page, select Attached Instances.


      The web console displays the available block volumes in the compartment.

    2. Click the Attach to Instance button.

      The Attach to instance dialog displays.

    3. Select the following in the dialog:


      • Attachment type: iSCSI
      • Access Type: Read/Write Sharable, and then select the checkbox to acknowledge the risk of data corruption without a configured clustered file system.
      • Select Instance: ol-server

      Click Attach to continue. When done, ocfs2-bv01 appears in the list of block volumes in the compartment.

    4. Click the ol-server link in the table to verify that ocfs2-bv01 appears in the list of attached block volumes with read/write sharable access.

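      Note: With the iSCSI attachment type, the volume does not become visible as a block device until you log in to the iSCSI target from the instance. The console shows the exact commands for your volume under the attachment's iSCSI commands and information option; the sequence typically looks like the following sketch, where the IQN and IP address are placeholders you copy from the console:

      # Register the iSCSI target reported by the console (placeholders shown)
      sudo iscsiadm -m node -o new -T <volume-iqn> -p <iscsi-ip>:3260
      # Log in automatically on future boots
      sudo iscsiadm -m node -o update -T <volume-iqn> -n node.startup -v automatic
      # Log in now; the device then appears, for example as /dev/sdb
      sudo iscsiadm -m node -T <volume-iqn> -p <iscsi-ip>:3260 -l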

Install ocfs2-tools on the Cluster Members

The instructions use the following instance names for the cluster members: ol-server and ol-client01.

Substitute your own cluster name, OCFS2 volume names, disk labels, instance hostnames, and private IP addresses where appropriate.

In this practice, you:

  1. Install ocfs2-tools.

    1. Open terminals and connect to each cluster member.

    2. In the terminal window connected to ol-server, install ocfs2-tools using the dnf command.

      sudo dnf install -y ocfs2-tools
      
    3. In the terminal window connected to ol-client01, install ocfs2-tools using the dnf command.

      sudo dnf install -y ocfs2-tools
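
      You can optionally confirm the package is installed on each node before continuing (a quick check with rpm):

      rpm -q ocfs2-tools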
      
  2. Configure firewall rules using the firewall-cmd command on the cluster members. On each cluster member, run the following commands:

    1. Add the firewall rules:

      sudo firewall-cmd --permanent --add-port=7777/tcp --add-port=7777/udp --add-port=3260/tcp
      
    2. Reload the firewall rules:

      sudo firewall-cmd --complete-reload
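
      Optionally confirm that the ports are now open (the output should include 7777/tcp, 7777/udp, and 3260/tcp):

      sudo firewall-cmd --list-ports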
      
  3. Use the uname command on the cluster members to ensure both instances use the same kernel version.

    sudo uname -r
    

    An OCFS2 cluster requires that all cluster members run compatible kernels. The output should display the same kernel version on both instances, for example: 5.4.17-2136.302.7.2.1.el8uek.x86_64.

  4. Disable SELinux on the cluster nodes by setting SELINUX=disabled in /etc/selinux/config on each node.

    sudo vi /etc/selinux/config
    

    Sample text:

    # This file controls the state of SELinux on the system.
    # SELINUX= can take one of these three values:
    #     enforcing - SELinux security policy is enforced.
    #     permissive - SELinux prints warnings instead of enforcing.
    #     disabled - No SELinux policy is loaded.
    SELINUX=disabled
    # SELINUXTYPE= can take one of these three values:
    #     targeted - Targeted processes are protected,
    #     minimum - Modification of targeted policy. Only selected processes are protected. 
    #     mls - Multi Level Security protection.
    SELINUXTYPE=targeted
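
    The SELINUX=disabled setting takes effect only after a reboot. If a reboot is not convenient during the lab, you can switch SELinux to permissive mode for the current session as a stop-gap (permissive is not identical to disabled, but it stops policy enforcement):

    # Stop enforcing the SELinux policy for the current boot
    sudo setenforce 0
    # Confirm the current mode (Permissive)
    getenforce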
    

Configure the Cluster Layout

  1. From ol-server, use the o2cb command and create the ociocfs2 cluster.

    sudo o2cb add-cluster ociocfs2
    
  2. Use the o2cb command to list the ociocfs2 information in the cluster layout.

    sudo o2cb list-cluster ociocfs2
    

    Sample output:

    cluster:
            node_count = 0
            heartbeat_mode = local
            name = ociocfs2
    

    This information is written to the /etc/ocfs2/cluster.conf file.

    Note: The default heartbeat mode is local.

  3. Use the o2cb command and add ol-server and ol-client01 to ociocfs2.

    sudo o2cb add-node ociocfs2 ol-server --ip 10.0.0.150 
    
    sudo o2cb add-node ociocfs2 ol-client01 --ip 10.0.0.151 
    
  4. Use the o2cb command to list the ociocfs2 information.

    sudo o2cb list-cluster ociocfs2
    

    Sample output:

       node:
    	        number = 0
    	        name = ol-server
    	        ip_address = 10.0.0.150
    	        ip_port = 7777
    	        cluster = ociocfs2
    
       node:
    	        number = 1
    	        name = ol-client01
    	        ip_address = 10.0.0.151
    	        ip_port = 7777
    	        cluster = ociocfs2
    
       cluster:
    	        node_count = 2
    	        heartbeat_mode = local
    	        name = ociocfs2
    
  5. Use the cat command to display the contents of the /etc/ocfs2/cluster.conf file.

    cat /etc/ocfs2/cluster.conf
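
    Sample output (a sketch based on the cluster layout listed above; your node names and IP addresses may differ):

    node:
            number = 0
            name = ol-server
            ip_address = 10.0.0.150
            ip_port = 7777
            cluster = ociocfs2

    node:
            number = 1
            name = ol-client01
            ip_address = 10.0.0.151
            ip_port = 7777
            cluster = ociocfs2

    cluster:
            node_count = 2
            heartbeat_mode = local
            name = ociocfs2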
    
  6. Create a cluster configuration file on ol-client01.

    1. Use the mkdir command to create the /etc/ocfs2 directory on ol-client01.

      sudo mkdir /etc/ocfs2
      
    2. Use a text editor, like vim or vi, to create a cluster.conf file.

      sudo vim /etc/ocfs2/cluster.conf
      
    3. Insert the contents of the cluster.conf file on ol-server into the cluster.conf file on ol-client01.

    4. Save the file and exit the text editor.
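      Alternatively, you can copy the file over the network instead of pasting it by hand (a sketch that assumes the default opc user and SSH connectivity between the instances; run the first command on ol-server and the second on ol-client01):

      # On ol-server: copy the file to a staging location on ol-client01
      scp /etc/ocfs2/cluster.conf opc@ol-client01:/tmp/cluster.conf
      # On ol-client01: move it into place
      sudo cp /tmp/cluster.conf /etc/ocfs2/cluster.conf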

Configure and Start the O2CB Cluster Stack on the Cluster Members

  1. On ol-server, use the /sbin/o2cb.init command without any arguments to display its usage.

    sudo /sbin/o2cb.init
    

    Command usage output:

    Usage: /sbin/o2cb.init {start|stop|restart|force-reload|enable|disable|configure|load|unload|online|offline|force-offline|status|online-status}
    
  2. Add the configure argument to /sbin/o2cb.init and configure the cluster stack on both cluster nodes. Provide the following responses:

    • Answer y to “Load O2CB driver on boot”
    • Accept the default (press Enter), o2cb, as the cluster stack
    • Enter ociocfs2 as the cluster to start on boot
    • Accept the defaults (press Enter) for all other queries
    sudo /sbin/o2cb.init configure
    

    Command output:

    Load O2CB driver on boot (y/n) [n]: y
    Cluster stack backing O2CB [o2cb]: 
    Cluster to start on boot (Enter "none" to clear) [ocfs2]: ociocfs2
    Specify heartbeat dead threshold (>=7) [31]: 
    Specify network idle timeout in ms (>=5000) [30000]: 
    Specify network keepalive delay in ms (>=1000) [2000]: 
    Specify network reconnect delay in ms (>=2000) [2000]: 
    Writing O2CB configuration: OK
    checking debugfs...
    Loading stack plugin "o2cb": OK
    Loading filesystem "ocfs2_dlmfs": OK
    Creating directory '/dlm': OK
    Mounting ocfs2_dlmfs filesystem at /dlm: OK
    Setting cluster stack "o2cb": OK.
    
  3. Run the same command and enter the identical responses on ol-client01.

    sudo /sbin/o2cb.init configure
    
  4. Check the cluster status on both members using the /sbin/o2cb.init command.

    sudo /sbin/o2cb.init status
    

    The output shows that the O2CB cluster, ociocfs2, is online; however, the O2CB heartbeat is not active. The heartbeat becomes active after mounting a disk volume.

  5. Enable the o2cb and ocfs2 services using the systemctl command on the cluster members.

    sudo systemctl enable o2cb
    
    sudo systemctl enable ocfs2
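
    You can verify that both services are set to start at boot (a quick check):

    systemctl is-enabled o2cb ocfs2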
    
  6. Add the following kernel settings to the end of /etc/sysctl.d/99-sysctl.conf on both cluster members:

    • kernel.panic = 30
    • kernel.panic_on_oops = 1
    sudo vim /etc/sysctl.d/99-sysctl.conf
    

    Save the changes and exit the file.
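
    If you prefer a non-interactive edit, the same two settings can be appended with tee (a sketch; it assumes the lines are not already present in the file):

    echo "kernel.panic = 30" | sudo tee -a /etc/sysctl.d/99-sysctl.conf
    echo "kernel.panic_on_oops = 1" | sudo tee -a /etc/sysctl.d/99-sysctl.conf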

  7. Implement the changes immediately using the sysctl command on both members.

    sudo sysctl -p
    

Create OCFS2 Volumes

  1. Create different types of OCFS2 volumes on block volume /dev/sdb. Enter y when prompted to overwrite an existing ocfs2 partition.

    1. Use the mkfs.ocfs2 command to create a file system.

      sudo mkfs.ocfs2 /dev/sdb
      

      Note: Review the default values:

      • Features
      • Block size and cluster size
      • Node slots
      • Journal size
    2. Use the mkfs.ocfs2 command to create a file system with the -T mail option.

      • Specify this type when you intend to use the file system as a mail server.
      • Mail servers perform many metadata changes to many small files, which require the use of a large journal.
      sudo mkfs.ocfs2 -T mail /dev/sdb
      

      Note: Review the output and note the larger journal size.

    3. Use the mkfs.ocfs2 command to create a file system with the -T vmstore option.

      • Specify this type when you intend to store virtual machine images.
      • These file types are sparsely allocated large files and require moderate metadata updates.
      sudo mkfs.ocfs2 -T vmstore /dev/sdb
      

      Note: Review the output and note the differences from the default file system:

      • Cluster size
      • Cluster groups
      • Extent allocator size
      • Journal size
    4. Use the mkfs.ocfs2 command to create a file system with the -T datafiles option.

      • Specify this type when you intend to use the file system for database files.
      • These file types use fewer fully allocated large files, with fewer metadata changes, and do not benefit from a large journal.
      sudo mkfs.ocfs2 -T datafiles /dev/sdb
      

      Note: Review the output and note the differences in the journal size.

    5. Use the mkfs.ocfs2 command to create a file system with the label, ocfs2vol.

      sudo mkfs.ocfs2 -L "ocfs2vol" /dev/sdb
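
      Optionally confirm the label before mounting (a quick check; the output should show LABEL="ocfs2vol" and TYPE="ocfs2" — adjust the device name if your volume appears under a different one):

      sudo blkid /dev/sdb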
      

Mount and Test the OCFS2 Volume

In this practice, you mount the clustered OCFS2 volume on ol-server and ol-client01, and then create and modify files from one host and verify the changes from the other.

  1. From ol-server, mount the OCFS2 volume.

    1. Use the mkdir command to make a mount point, /u01, for the OCFS2 volume.

      sudo mkdir /u01
      
    2. Use the mount command to mount the OCFS2 volume by label ocfs2vol on the /u01 mount point.

      sudo mount -L ocfs2vol /u01
      
    3. Use the command /sbin/o2cb.init status to display the status of the O2CB heartbeat mode.

      sudo /sbin/o2cb.init status
      

      Note: After mounting the volume, the output shows that the heartbeat mode is now active.

    4. Create a test file in the /u01 directory.

      echo "File created on ol-server" | sudo tee /u01/shared.txt > /dev/null
      

      The tee command reads from standard input and writes the output to the shared.txt file.

    5. Use the cat command to view the contents of /u01/shared.txt.

      sudo cat /u01/shared.txt
      
  2. From ol-client01, mount the OCFS2 volume.

    1. Use the mkdir command to make a mount point, /u01, for the OCFS2 volume.

      sudo mkdir /u01
      
    2. Use the mount command to mount the OCFS2 volume by label ocfs2vol on the /u01 mount point.

      sudo mount -L ocfs2vol /u01
      

      If the mount command fails with a can’t find LABEL error message, then:

      1. Use the cat command to display the /proc/partitions file.

        sudo cat /proc/partitions
        

        The /proc/partitions file displays a table of partitioned devices.

      2. If the sdb partition is not listed, use the partprobe command on /dev/sdb to inform the OS of partition table changes.

        sudo partprobe /dev/sdb
        
    3. Rerun the command cat /proc/partitions to display the table.

      Confirm sdb appears in the table.

    4. Retry mounting the volume.

      sudo mount -L ocfs2vol /u01
      
    5. Use the ls command to list the contents of the /u01 directory.

      sudo ls /u01
      

      The output displays the shared.txt file. This verifies that OCFS2 shares the clustered file system between both cluster members.

    6. Use a text editor and modify the contents of the shared.txt file by adding Modified on ol-client01 to the end of the file.

  3. From ol-server, use the cat command to display the contents of the shared.txt file.

    sudo cat /u01/shared.txt
    

    Seeing the updated text file contents confirms both cluster members have read/write access.
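
To make the mount persistent across reboots, you can add an fstab entry on both cluster members (a sketch; the _netdev option defers the mount until networking is available, which is the usual practice for OCFS2 volumes together with the o2cb and ocfs2 services enabled earlier):

    # /etc/fstab entry on each cluster member
    LABEL=ocfs2vol  /u01  ocfs2  _netdev,defaults  0 0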

For More Information:

More Learning Resources

Explore other labs on docs.oracle.com/learn or access more free learning content on the Oracle Learning YouTube channel. Additionally, visit education.oracle.com/learning-explorer to become an Oracle Learning Explorer.

For product documentation, visit Oracle Help Center.