
Oracle® Solaris Cluster 4.3 System Administration Guide


Updated: June 2017
 
 

How to Add a Cluster File System

Perform this task for each cluster file system you create after your initial Oracle Solaris Cluster installation.


Caution  -  Be sure you specify the correct disk device name. Creating a cluster file system destroys any data on the disks. If you specify the wrong device name, you will erase data that you might not intend to delete.


Ensure that the following prerequisites are met before you add a cluster file system:

  • The root role privilege is established on a node in the cluster.

  • Volume manager software is installed and configured on the cluster.

  • A device group (such as a Solaris Volume Manager device group) or block disk slice exists on which to create the cluster file system.


Note -  You can also use the Oracle Solaris Cluster Manager browser interface to add a cluster file system to a zone cluster. For Oracle Solaris Cluster Manager log-in instructions, see How to Access Oracle Solaris Cluster Manager.

If you used Oracle Solaris Cluster Manager to install data services, one or more cluster file systems might already exist, provided that sufficient shared disks were available on which to create them.

The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.

  1. Assume the root role on any node in the cluster.

    Tip  -  For faster file system creation, become the root role on the current primary of the global device for which you create a file system.
  2. Create a UFS file system by using the newfs command.

    Caution  -  Any data on the disks is destroyed when you create a file system. Be sure that you specify the correct disk device name. If you specify the wrong device name, you might erase data that you did not intend to delete.


    phys-schost# newfs raw-disk-device

    The following table shows examples of names for the raw-disk-device argument. Note that naming conventions differ for each volume manager.

    Volume Manager            Sample Disk Device Name    Description
    Solaris Volume Manager    /dev/md/nfs/rdsk/d1        Raw disk device d1 within the nfs disk set
    None                      /dev/global/rdsk/d1s3      Raw disk device d1s3
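
    For example, using the Solaris Volume Manager sample device name from the preceding table (a hypothetical d1 raw device in an nfs disk set), the command might look like the following:

    phys-schost# newfs /dev/md/nfs/rdsk/d1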
  3. On each node in the cluster, create a mount-point directory for the cluster file system.

    A mount point is required on each node, even if the cluster file system is not accessed on that node.


    Tip  -  For ease of administration, create the mount point in the /global/device-group/ directory. This location enables you to easily distinguish cluster file systems, which are globally available, from local file systems.
    phys-schost# mkdir -p /global/device-group/mount-point/
    device-group

    Name of the directory that corresponds to the name of the device group that contains the device.

    mount-point

    Name of the directory on which to mount the cluster file system.
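    For example, for a hypothetical device group named nfs and a mount-point directory named d1, matching the sample device name used in the previous step, you might run the following on each node:

    phys-schost# mkdir -p /global/nfs/d1
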

  4. On each node in the cluster, add an entry to the /etc/vfstab file for the mount point.

    See the vfstab(4) man page for details. A sample entry is shown after the following list.

    1. In each entry, specify the required mount options for the type of file system that you use.
    2. To automatically mount the cluster file system, set the mount at boot field to yes.
    3. For each cluster file system, ensure that the information in its /etc/vfstab entry is identical on each node.
    4. Ensure that the entries in each node's /etc/vfstab file list devices in the same order.
    5. Check the boot order dependencies of the file systems.

      For example, consider the scenario where phys-schost-1 mounts disk device d0 on /global/oracle/ and phys-schost-2 mounts disk device d1 on /global/oracle/logs/. With this configuration, phys-schost-2 can boot and mount /global/oracle/logs/ only after phys-schost-1 boots and mounts /global/oracle/.
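    The following sample entry continues the hypothetical nfs device group and /global/nfs/d1 mount point from the earlier examples. The global mount option is what makes the file system available cluster-wide; logging is included only as an example of an additional UFS mount option.

    #device              device to fsck        mount point     FS type  fsck pass  mount at boot  mount options
    /dev/md/nfs/dsk/d1   /dev/md/nfs/rdsk/d1   /global/nfs/d1  ufs      2          yes            global,logging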

  5. On any node in the cluster, run the configuration check utility.
    phys-schost# cluster check -k vfstab

    The configuration check utility verifies that the mount points exist. The utility also verifies that /etc/vfstab file entries are correct on all nodes of the cluster. If no errors occur, no output is returned.

    For more information, see the cluster(1CL) man page.

  6. Mount the cluster file system from any node in the cluster.
    phys-schost# mount /global/device-group/mount-point/
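
    For example, with the hypothetical /global/nfs/d1 mount point from the earlier examples:
    phys-schost# mount /global/nfs/d1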
  7. On each node of the cluster, verify that the cluster file system is mounted.

    You can use either the df command or mount command to list mounted file systems. For more information, see the df(1M) man page or mount(1M) man page.
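
    For example, the following commands check the hypothetical /global/nfs/d1 mount point from the earlier examples. Run them on each node; the cluster file system should appear in the output on every node.
    phys-schost# df /global/nfs/d1
    phys-schost# mount | grep /global/nfs/d1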