Sun Cluster 3.1 System Administration Guide

Administering Cluster File Systems

Table 3–3 Task Map: Administering Cluster File Systems

Task: Add cluster file systems after the initial Sun Cluster installation

    - Use newfs(1M) and mkdir

    For instructions, go to "How to Add a Cluster File System"

Task: Remove a cluster file system

    - Use fuser(1M) and umount(1M)

    For instructions, go to "How to Remove a Cluster File System"

Task: Check global mount points in a cluster for consistency across nodes

    - Use sccheck(1M)

    For instructions, go to "How to Check Global Mounts in a Cluster"

How to Add a Cluster File System

Perform this task for each cluster file system you create after your initial Sun Cluster installation.


Caution –

Be sure you specify the correct disk device name. Creating a cluster file system destroys any data on the disks. If you specify the wrong device name, you will erase data that you may not intend to delete.
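
One way to reduce this risk is to inspect the device before you create the file system. As a sketch only, using the sample device names from Table 3–4, you might check a metadevice's status or a slice's label first:

# metastat -s oracle d1
# prtvtoc /dev/global/rdsk/d1s3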


The prerequisite for adding another cluster file system is as follows:

If you used SunPlex Manager to install data services, one or more cluster file systems already exist if there were sufficient shared disks on which to create them.

  1. Become superuser on any node in the cluster.


    Tip –

    For faster file system creation, become superuser on the current primary of the global device for which you are creating a file system.


  2. Create a file system using the newfs(1M) command.


    Note –

    The newfs(1M) command is valid only for creating new UFS file systems. To create a new VxFS file system, follow the procedures provided in your VxFS documentation; a sample invocation appears after Table 3–4.



    # newfs raw-disk-device
    

    The following table shows examples of names for the raw-disk-device argument. Note that naming conventions differ for each volume manager.

    Table 3–4 Sample Raw Disk Device Names

    If Your Volume Manager Is …                  A Disk Device Name Might Be …    Description
    Solstice DiskSuite/Solaris Volume Manager    /dev/md/oracle/rdsk/d1           Raw disk device d1 within the oracle diskset.
    VERITAS Volume Manager                       /dev/vx/rdsk/oradg/vol01         Raw disk device vol01 within the oradg disk group.
    None                                         /dev/global/rdsk/d1s3            Raw disk device for slice d1s3.
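
    As noted above, newfs does not create VxFS file systems. As a rough sketch only, a typical VxFS invocation on Solaris uses mkfs with the vxfs file system type; the volume name below is the VERITAS sample from Table 3–4, and you should check your VxFS documentation for the exact options your release requires.

    # mkfs -F vxfs /dev/vx/rdsk/oradg/vol01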

     

  3. On each node in the cluster, create a mount point directory for the cluster file system.

    A mount point is required on each node, even if the cluster file system will not be accessed on that node.


    Tip –

    For ease of administration, create the mount point in the /global/device-group directory. Using this location enables you to easily distinguish cluster file systems, which are globally available, from local file systems.



    # mkdir -p /global/device-group/mountpoint
    
    device-group

    Name of the directory that corresponds to the name of the device group that contains the device.

    mountpoint

    Name of the directory on which to mount the cluster file system.
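
    For example, for a device group named oracle with a mount point directory named d1 (the same hypothetical names used in the example at the end of this procedure), you would run the following on each node:

    # mkdir -p /global/oracle/d1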

  4. On each node in the cluster, add an entry to the /etc/vfstab file for the mount point.

    1. Use the following required mount options.


      Note –

      Logging is required for all cluster file systems.


      • Solaris UFS logging – Use the global,logging mount options. See the mount_ufs(1M) man page for more information about UFS mount options.


        Note –

        The syncdir mount option is not required for UFS cluster file systems. If you specify syncdir, you are guaranteed POSIX-compliant file system behavior. If you do not, you will have the same behavior that is seen with UFS file systems. When you do not specify syncdir, performance of writes that allocate disk blocks, such as when appending data to a file, can significantly improve. However, in some cases, without syncdir you would not discover an out-of-space condition until you close a file. The cases in which you could have problems if you do not specify syncdir are rare. With syncdir (and POSIX behavior), the out-of-space condition would be discovered before the close.


      • Solstice DiskSuite/Solaris Volume Manager trans metadevice or transactional volume – Use the global mount option (do not use the logging mount option). See your Solstice DiskSuite/Solaris Volume Manager documentation for information about setting up trans metadevices and transactional volumes.


        Note –

        Transactional volumes are scheduled to be removed from the Solaris operating environment in an upcoming Solaris release. Solaris UFS logging, available since the Solaris 8 release, provides the same capabilities but superior performance, as well as lower system administration requirements and overhead.


      • VxFS logging – Use the global, log mount options. See the mount_vxfs(1M) man page for more information about VxFS mount options.

    2. To automatically mount the cluster file system, set the mount at boot field to yes.

    3. Ensure that, for each cluster file system, the information in its /etc/vfstab entry is identical on each node.

    4. Ensure that the entries in each node's /etc/vfstab file list devices in the same order.

    5. Check the boot order dependencies of the file systems.

      For example, consider the scenario where phys-schost-1 mounts disk device d0 on /global/oracle, and phys-schost-2 mounts disk device d1 on /global/oracle/logs. With this configuration, phys-schost-2 can boot and mount /global/oracle/logs only after phys-schost-1 boots and mounts /global/oracle.

    See the vfstab(4) man page for details.
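
    For example, a vfstab entry for a VxFS cluster file system on the VERITAS volume shown in Table 3–4 might look like the following. The mount point here is a hypothetical name, and a complete UFS entry appears in the example at the end of this procedure.

    #device                 device                   mount               FS   fsck  mount    mount
    #to mount               to fsck                  point               type pass  at boot  options
    /dev/vx/dsk/oradg/vol01 /dev/vx/rdsk/oradg/vol01 /global/oradg/vol01 vxfs 2     yes      global,log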

  5. On any node in the cluster, verify that mount points exist and /etc/vfstab file entries are correct on all nodes of the cluster.


    # sccheck
    

    If there are no errors, nothing is returned.

  6. From any node in the cluster, mount the cluster file system.


    # mount /global/device-group/mountpoint
    

  7. On each node of the cluster, verify that the cluster file system is mounted.

    You can use either the df(1M) or mount(1M) command to list mounted file systems.

    To manage a VxFS cluster file system in a Sun Cluster environment, run administrative commands only from the primary node on which the VxFS cluster file system is mounted.
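
    For example, either of the following commands confirms the mount on the node where you run it; substitute your own mount point:

    # df -k /global/device-group/mountpoint
    # mount | grep /global/device-group/mountpoint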

Example—Adding a Cluster File System

The following example creates a UFS cluster file system on the Solstice DiskSuite/Solaris Volume Manager metadevice /dev/md/oracle/rdsk/d1.


# newfs /dev/md/oracle/rdsk/d1
...
 
[on each node:]
# mkdir -p /global/oracle/d1
 
# vi /etc/vfstab
#device                device                 mount            FS  fsck  mount          mount
#to mount              to fsck                point           type pass  at boot      options
#                       
/dev/md/oracle/dsk/d1 /dev/md/oracle/rdsk/d1 /global/oracle/d1 ufs  2    yes         global,logging
[save and exit]
 
[on one node:]
# sccheck
# mount /global/oracle/d1
# mount
...
/global/oracle/d1 on /dev/md/oracle/dsk/d1 read/write/setuid/global/logging/largefiles on Sun Oct 3 08:56:16 2001

How to Remove a Cluster File System

You remove a cluster file system by merely unmounting it. If you want to also remove or delete the data, remove the underlying disk device (or metadevice or volume) from the system.


Note –

Cluster file systems are automatically unmounted as part of the system shutdown that occurs when you run scshutdown(1M) to stop the entire cluster. A cluster file system is not unmounted when you run shutdown to stop a single node. However, if the node being shut down is the only node with a connection to the disk, any attempt to access the cluster file system on that disk results in an error.
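
For reference, the two shutdown paths mentioned in this note are typically invoked as follows (a zero-second grace period is shown; adjust the options as needed):

[from one node, to stop the entire cluster:]
# scshutdown -g0 -y

[on the node to be stopped, to stop only that node:]
# shutdown -g0 -y -i0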


To unmount and remove a cluster file system, perform the following steps.

  1. Become superuser on any node in the cluster.

  2. Determine which cluster file systems are mounted.


    # mount -v
    

  3. On each node, list all processes that are using the cluster file system, so you know which processes you are going to stop.


    # fuser -c [ -u ] mountpoint
    

    -c

    Reports on files that are mount points for file systems and any files within those mounted file systems.

    -u

    (Optional) Displays the user login name for each process ID.

    mountpoint

    Specifies the name of the cluster file system for which you want to stop processes.

  4. On each node, stop all processes for the cluster file system.

    Use your preferred method for stopping processes. If necessary, use the following command to force termination of processes associated with the cluster file system.


    # fuser -c -k mountpoint
    

    A SIGKILL is sent to each process using the cluster file system.

  5. On each node, verify that no processes are using the file system.


    # fuser -c mountpoint
    

  6. From just one node, unmount the file system.


    # umount mountpoint
    

    mountpoint

    Specifies the name of the cluster file system you want to unmount. This can be either the directory name where the cluster file system is mounted, or the device name path of the file system.
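
    For example, either of the following forms unmounts the cluster file system used in the example that follows this procedure:

    # umount /global/oracle/d1
    # umount /dev/md/oracle/dsk/d1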

  7. (Optional) Edit the /etc/vfstab file to delete the entry for the cluster file system being removed.

    Perform this step on each cluster node that has an entry for this cluster file system in its /etc/vfstab file.

  8. (Optional) Remove the disk device group/metadevice/plex.

    See your volume manager documentation for more information.
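
    For example, with Solstice DiskSuite/Solaris Volume Manager you might clear the metadevice used in the example that follows this procedure. Run a command like this only after you are certain the data on it is no longer needed.

    # metaclear -s oracle d1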

Example—Removing a Cluster File System

The following example removes a UFS cluster file system mounted on the Solstice DiskSuite/Solaris Volume Manager metadevice /dev/md/oracle/rdsk/d1.


# mount -v
...
/global/oracle/d1 on /dev/md/oracle/dsk/d1 read/write/setuid/global/logging/largefiles 
# fuser -c /global/oracle/d1
/global/oracle/d1: 4006c
# fuser -c -k /global/oracle/d1
/global/oracle/d1: 4006c
# fuser -c /global/oracle/d1
/global/oracle/d1:
# umount /global/oracle/d1
 
[on each node, remove the following entry:]
# vi /etc/vfstab
#device           device        mount   FS      fsck    mount   mount
#to mount         to fsck       point   type    pass    at boot options
#                       
/dev/md/oracle/dsk/d1 /dev/md/oracle/rdsk/d1 /global/oracle/d1 ufs 2 yes global,logging
[Save and exit.]

Note –

To remove the data on the cluster file system, remove the underlying device. See your volume manager documentation for more information.


How to Check Global Mounts in a Cluster

The sccheck(1M) utility verifies the syntax of the entries for cluster file systems in the /etc/vfstab file. If there are no errors, nothing is returned.


Note –

Run sccheck after making cluster configuration changes, such as removing a cluster file system, that have affected devices or volume management components.


  1. Become superuser on any node in the cluster.

  2. Check the cluster global mounts.


    # sccheck