Sun Cluster 3.0 System Administration Guide

3.4.1 How to Add an Additional Cluster File System

Perform this task for each cluster file system you create after your initial Sun Cluster installation.


Caution -

Be sure you have specified the correct disk device name. Creating a cluster file system destroys any data on the disks. If you specify the wrong device name, you will erase data that you may not intend to delete.


To add an additional cluster file system, perform the following steps:

  1. Become superuser on any node in the cluster.


    Tip -

    For faster file system creation, become superuser on the current primary of the global device for which you are creating a file system.
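
    To identify the current primary node for a device group, you can check device group status with the scstat(1M) command, for example:


    # scstat -D
    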


  2. Create a file system using the newfs(1M) command.


    # newfs raw-disk-device
    

    Table 3-3 shows examples of names for the raw-disk-device argument. Note that naming conventions differ for each volume manager.

    Table 3-3 Sample Raw Disk Device Names

    If Your Volume Manager Is ...   A Disk Device Name Might Be ...   Description
    Solstice DiskSuite              /dev/md/oracle/rdsk/d1            Raw disk device d1 within the oracle metaset.
    VERITAS Volume Manager          /dev/vx/rdsk/oradg/vol01          Raw disk device vol01 within the oradg disk group.
    None                            /dev/global/rdsk/d1s3             Raw disk device for block slice d1s3.
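
    For instance, using the VERITAS Volume Manager sample name from Table 3-3 (a hypothetical volume vol01 in the oradg disk group), the command would resemble:


    # newfs /dev/vx/rdsk/oradg/vol01
    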

  3. On each node in the cluster, create a mount point directory for the cluster file system.

    A mount point is required on each node, even if the cluster file system will not be accessed on that node.


    # mkdir -p /global/device-group/mount-point
    
    device-group

    Name of the directory that corresponds to the name of the device group which contains the device.

    mount-point

    Name of the directory on which to mount the cluster file system.


    Tip -

    For ease of administration, create the mount point in the /global/device-group directory. This enables you to easily distinguish cluster file systems, which are globally available, from local file systems.
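
    For example, continuing the hypothetical oradg/vol01 volume from Table 3-3 and mounting it under a /global/oradg directory, you would run the following on each node:


    # mkdir -p /global/oradg/vol01
    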


  4. On each node in the cluster, add an entry to the /etc/vfstab file for the mount point.

    1. To automatically mount a cluster file system, set the mount at boot field to yes.

    2. Use the following required mount options:

      • The global mount option is required for all cluster file systems. This option identifies the file system as a cluster file system.

      • File system logging is required for all cluster file systems. UFS logging can be implemented either through Solstice DiskSuite metatrans devices or directly through the Solaris UFS logging mount option, but the two approaches must not be combined. If you use Solaris UFS logging directly, specify the logging mount option. If the file system is built on a metatrans device, no additional mount option is needed. (A sample vfstab entry is shown after this list.)

    3. Ensure that, for each cluster file system, the information in its /etc/vfstab entry is identical on each node that has that entry.

    4. Pay attention to boot order dependencies of the file systems.

      Normally, you should not nest the mount points for cluster file systems. For example, consider the scenario where phys-schost-1 mounts disk device d0 on /global/oracle, and phys-schost-2 mounts disk device d1 on /global/oracle/logs. With this configuration, phys-schost-2 can boot up and mount /global/oracle/logs only after phys-schost-1 boots and mounts /global/oracle.

    5. Make sure the entries in each node's /etc/vfstab file list common devices in the same order.

      For example, if phys-schost-1 and phys-schost-2 have a physical connection to devices d0, d1, and d2, the entries in their respective /etc/vfstab files should be listed as d0, d1, and d2.

    Refer to the vfstab(4) man page for details.
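
    As an illustration only, a vfstab entry for the hypothetical oradg/vol01 volume used in the earlier examples (mounted on /global/oradg/vol01, with VERITAS Volume Manager block and raw device names) might look like this:


    #device                  device                    mount                FS   fsck  mount    mount
    #to mount                to fsck                   point                type pass  at boot  options
    /dev/vx/dsk/oradg/vol01  /dev/vx/rdsk/oradg/vol01  /global/oradg/vol01  ufs  2     yes      global,logging
    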

  5. On any node in the cluster, verify that mount points exist and /etc/vfstab file entries are correct on all nodes of the cluster.


    # sccheck
    

    If there are no errors, nothing is returned.

  6. From any node in the cluster, mount the cluster file system.


    # mount /global/device-group/mount-point
    
  7. On each node of the cluster, verify that the cluster file system is mounted.

    You can use either the df(1M) or mount(1M) command to list mounted file systems.
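
    For instance, to check a single cluster file system by its mount point (continuing the hypothetical example above), you might run the following on each node:


    # df -k /global/oradg/vol01
    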

3.4.1.1 Example--Adding a Cluster File System

The following example creates a UFS cluster file system on the Solstice DiskSuite metadevice /dev/md/oracle/rdsk/d1.


# newfs /dev/md/oracle/rdsk/d1
...
 
[on each node:]
# mkdir -p /global/oracle/d1
 
# vi /etc/vfstab
#device           device       mount   FS      fsck    mount   mount
#to mount        to fsck       point   type    pass    at boot options
#                       
/dev/md/oracle/dsk/d1 /dev/md/oracle/rdsk/d1 /global/oracle/d1 ufs 2 yes global,logging
[save and exit]
 
[on one node:]
# sccheck
 
# mount /global/oracle/d1
# mount
...
/global/oracle/d1 on /dev/md/oracle/dsk/d1 read/write/setuid/global/logging/
largefiles on Sun Oct 3 08:56:16 1999