Sun Cluster System Administration Guide for Solaris OS

ProcedureHow to Add a Cluster File System

Perform this task for each cluster file system you create after your initial Sun Cluster installation.


Caution –

Be sure you specify the correct disk device name. Creating a cluster file system destroys any data on the disks. If you specify the wrong device name, you will erase data that you might not intend to delete.


Ensure that the following prerequisites have been met before you add an additional cluster file system:

If you used Sun Cluster Manager to install data services, one or more cluster file systems already exist if there were sufficient shared disks on which to create them.

The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands.

  1. Become superuser on any node in the cluster.


    Tip –

    For faster file system creation, become superuser on the current primary of the global device for which you are creating a file system.
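
    If you are not sure which node is currently the primary for the device group that contains the device, one way to check is with the long-form Sun Cluster command shown here (the device-group name oracle is used only for illustration):

    # cldevicegroup status oracle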


  2. Create a file system by using the newfs command.


    Note –

    The newfs command is valid only for creating new UFS file systems. To create a new VxFS file system, follow the procedures provided in your VxFS documentation.



    # newfs raw-disk-device
    

    The following table shows examples of names for the raw-disk-device argument. Note that naming conventions differ for each volume manager.

    Volume Manager                  Disk Device Name            Description
    Solaris Volume Manager          /dev/md/oracle/rdsk/d1      Raw disk device d1 within the oracle disk set.
    SPARC: Veritas Volume Manager   /dev/vx/rdsk/oradg/vol01    Raw disk device vol01 within the oradg disk group.
    None                            /dev/global/rdsk/d1s3       Raw disk device for block slice d1s3.
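
    For example, a UFS file system might be created on a global raw device as follows (the device name d1s3 is illustrative only):

    # newfs /dev/global/rdsk/d1s3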

  3. On each node in the cluster, create a mount-point directory for the cluster file system.

    A mount point is required on each node, even if the cluster file system will not be accessed on that node.


    Tip –

    For ease of administration, create the mount point in the /global/devicegroup directory. Using this location enables you to easily distinguish cluster file systems, which are globally available, from local file systems.



    # mkdir -p /global/devicegroup/mountpoint
    
    devicegroup

    Name of the directory that corresponds to the name of the device group that contains the device.

    mountpoint

    Name of the directory on which to mount the cluster file system.
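
    For example, if the device group is named oracle and the mount point is named d1 (names used only for illustration), you would run the following on each node:

    # mkdir -p /global/oracle/d1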

  4. On each node in the cluster, add an entry to the /etc/vfstab file for the mount point. (A sample entry is shown after these substeps.)

    1. Use the following required mount options.


      Note –

      Logging is required for all cluster file systems.


      • Solaris UFS logging – Use the global,logging mount options. See the mount_ufs(1M) man page for more information about UFS mount options.


        Note –

        The syncdir mount option is not required for UFS cluster file systems. If you specify syncdir, you are guaranteed POSIX-compliant file system behavior. If you do not, you will experience the same behavior as with UFS file systems. When you do not specify syncdir, performance of writes that allocate disk blocks, such as when appending data to a file, can significantly improve. However, in some cases, without syncdir you would not discover an out-of-space condition until you close a file. The cases in which you could have problems if you do not specify syncdir are rare. With syncdir (and POSIX behavior), the out-of-space condition would be discovered before the close.


      • Solaris Volume Manager transactional volume – Use the global mount option (do not use the logging mount option). See your Solaris Volume Manager documentation for information about setting up transactional volumes.


        Note –

        Transactional volumes are scheduled to be removed from the Solaris OS in an upcoming Solaris software release. Solaris UFS logging provides the same capabilities but superior performance, as well as lower system administration requirements and overhead.


      • VxFS logging – Use the global and log mount options. See the mount_vxfs man page that is provided with VxFS software for more information.

    2. To automatically mount the cluster file system, set the mount at boot field to yes.

    3. Ensure that, for each cluster file system, the information in its /etc/vfstab entry is identical on each node.

    4. Ensure that the entries in each node's /etc/vfstab file list devices in the same order.

    5. Check the boot order dependencies of the file systems.

      For example, consider the scenario where phys-schost-1 mounts disk device d0 on /global/oracle, and phys-schost-2 mounts disk device d1 on /global/oracle/logs. With this configuration, phys-schost-2 can boot and mount /global/oracle/logs only after phys-schost-1 boots and mounts /global/oracle.

    See the vfstab(4) man page for details.
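
    For example, a vfstab entry for a UFS cluster file system on a Solaris Volume Manager volume (the device and mount-point names are illustrative) might look like the following:

    /dev/md/oracle/dsk/d1 /dev/md/oracle/rdsk/d1 /global/oracle/d1 ufs 2 yes global,logging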

  5. On any node in the cluster, verify that mount points exist and /etc/vfstab file entries are correct on all nodes of the cluster.


    # sccheck
    

    If no errors occur, nothing is returned.

  6. From any node in the cluster, mount the cluster file system.


    # mount /global/devicegroup/mountpoint
    
  7. On each node of the cluster, verify that the cluster file system is mounted.

    You can use either the df or mount command to list mounted file systems.

    To manage a VxFS cluster file system in a Sun Cluster environment, run administrative commands only from the primary node on which the VxFS cluster file system is mounted.
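
    For example, with the illustrative mount point used earlier in this procedure, the following command run on each node confirms that the cluster file system is mounted there:

    # df -k /global/oracle/d1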


Example 5–42 Adding a Cluster File System

The following example creates a UFS cluster file system on the Solaris Volume Manager metadevice or volume /dev/md/oracle/rdsk/d1.


# newfs /dev/md/oracle/rdsk/d1
...
 
[on each node:]
# mkdir -p /global/oracle/d1
 
# vi /etc/vfstab
#device                device                  mount              FS   fsck  mount    mount
#to mount              to fsck                 point              type pass  at boot  options
#
/dev/md/oracle/dsk/d1  /dev/md/oracle/rdsk/d1  /global/oracle/d1  ufs  2     yes      global,logging

[save and exit]
 
[on one node:]
# sccheck
# mount /global/oracle/d1
# mount
...
/global/oracle/d1 on /dev/md/oracle/dsk/d1 read/write/setuid/global/logging/largefiles 
on Sun Oct 3 08:56:16 2001