Sun Cluster Software Installation Guide for Solaris OS

How to Create Cluster File Systems

Perform this procedure to create a cluster file system. Unlike a local file system, a cluster file system is accessible from any node in the cluster. If you used SunPlex Installer to install data services, SunPlex Installer might have already created one or more cluster file systems.


Caution –

Any data on the disks is destroyed when you create a file system. Be sure that you specify the correct disk device name. If you specify the wrong device name, you might erase data that you did not intend to delete.


Perform this procedure for each cluster file system that you want to create.

  1. Ensure that volume-manager software is installed and configured.

    For volume-manager installation procedures, see Installing and Configuring Solstice DiskSuite or Solaris Volume Manager Software or SPARC: Installing and Configuring VxVM Software.

  2. Become superuser on any node in the cluster.


    Tip –

    For faster file-system creation, become superuser on the current primary of the global device for which you create a file system.
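    For example, one way to identify the current primary of the device group that contains the disk device is to run the scstat(1M) command and check the device group status section of its output. This is only a quick check; see the scstat(1M) man page for details about the output for your release.


    # scstat -D
    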


  3. Create a file system.

    • For a UFS file system, use the newfs(1M) command.


      # newfs raw-disk-device
      

      The following table shows examples of names for the raw-disk-device argument. Note that naming conventions differ for each volume manager.

      Volume Manager                                 Sample Disk Device Name     Description
      Solstice DiskSuite or Solaris Volume Manager   /dev/md/nfs/rdsk/d1         Raw disk device d1 within the nfs disk set
      SPARC: VERITAS Volume Manager                  /dev/vx/rdsk/oradg/vol01    Raw disk device vol01 within the oradg disk group
      None                                           /dev/global/rdsk/d1s3       Raw disk device d1s3

    • For a Sun StorEdge QFS file system, follow the procedures for defining the configuration in the Sun StorEdge QFS and Sun StorEdge SAM-FS Software Installation and Configuration Guide.

    • SPARC: For a VERITAS File System (VxFS) file system, follow the procedures that are provided in your VxFS documentation.
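    For example, to create a UFS file system on the Solstice DiskSuite or Solaris Volume Manager sample device that is shown in the preceding table, the command would be similar to the following. The nfs disk set and the d1 metadevice names are illustrative only.


    # newfs /dev/md/nfs/rdsk/d1
    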

  4. On each node in the cluster, create a mount-point directory for the cluster file system.

    A mount point is required on each node, even if the cluster file system is not accessed on that node.


    Tip –

    For ease of administration, create the mount point in the /global/device-group/ directory. This location enables you to easily distinguish cluster file systems, which are globally available, from local file systems.



    # mkdir -p /global/device-group/mountpoint/
    
    device-group

    Name of the directory that corresponds to the name of the device group that contains the device

    mountpoint

    Name of the directory on which to mount the cluster file system
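    For example, for a device group named oracle and a mount point named d1 (the same illustrative names that are used in the example at the end of this procedure), run the following command on each node:


    # mkdir -p /global/oracle/d1
    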

  5. On each node in the cluster, add an entry to the /etc/vfstab file for the mount point.

    See the vfstab(4) man page for details.

    1. In each entry, specify the required mount options for the type of file system that you use. See Table 2–10, Table 2–11, or Table 2–12 for the list of required mount options.


      Note –

      Do not use the logging mount option for Solstice DiskSuite trans metadevices or Solaris Volume Manager transactional volumes. Trans metadevices and transactional volumes provide their own logging.

      In addition, Solaris Volume Manager transactional-volume logging (formerly Solstice DiskSuite trans-metadevice logging) is scheduled to be removed from the Solaris OS in an upcoming Solaris release. Solaris UFS logging provides the same capabilities with superior performance, as well as lower system administration requirements and overhead.


      Table 2–10 Mount Options for UFS Cluster File Systems

      Mount Option 

      Description 

      global

      Required. This option makes the file system globally visible to all nodes in the cluster.

      logging

      Required. This option enables logging.

      forcedirectio

      Required for cluster file systems that will host Oracle Real Application Clusters RDBMS data files, log files, and control files.


      Note –

      Oracle Real Application Clusters is supported for use only in SPARC based clusters.


      onerror=panic

      Required. You do not have to explicitly specify the onerror=panic mount option in the /etc/vfstab file. This mount option is already the default value if no other onerror mount option is specified.


      Note –

      Only the onerror=panic mount option is supported by Sun Cluster software. Do not use the onerror=umount or onerror=lock mount options. These mount options are not supported on cluster file systems for the following reasons:

      • Use of the onerror=umount or onerror=lock mount option might cause the cluster file system to lock or become inaccessible. This condition might occur if the cluster file system experiences file corruption.

      • The onerror=umount or onerror=lock mount option might cause the cluster file system to become unmountable. As a result, applications that use the cluster file system might hang or become impossible to kill.

      A node might require rebooting to recover from these states.


      syncdir

      Optional. If you specify syncdir, you are guaranteed POSIX-compliant file system behavior for the write() system call. If a write() succeeds, this mount option ensures that sufficient space exists on the disk.

      If you do not specify syncdir, you see the same behavior as with UFS file systems. When you do not specify syncdir, performance of writes that allocate disk blocks, such as when appending data to a file, can improve significantly. However, in some cases, without syncdir you do not discover an out-of-space condition (ENOSPC) until you close a file.

      You see ENOSPC on close only during a very short time after a failover. With syncdir, as with POSIX behavior, the out-of-space condition is discovered before the close.

      See the mount_ufs(1M) man page for more information about UFS mount options.

      Table 2–11 SPARC: Mount Parameters for Sun StorEdge QFS Shared File Systems

      Mount Parameter 

      Description 

      shared

      Required. This option specifies that this is a shared file system, which is therefore globally visible to all nodes in the cluster.


      Caution –

      Ensure that settings in the /etc/vfstab file do not conflict with settings in the /etc/opt/SUNWsamfs/samfs.cmd file. Settings in the /etc/vfstab file override settings in the /etc/opt/SUNWsamfs/samfs.cmd file.


      Certain data services such as Sun Cluster Support for Oracle Real Application Clusters have additional requirements and guidelines for QFS mount parameters. See your data service manual for any additional requirements.

      See the mount_samfs(1M) man page for more information about QFS mount parameters.


      Note –

      Logging is not enabled by an /etc/vfstab mount parameter. To enable logging, follow procedures in the Sun StorEdge QFS and Sun StorEdge SAM-FS Software Installation and Configuration Guide.


      Table 2–12 SPARC: Mount Options for VxFS Cluster File Systems

      Mount Option 

      Description 

      global

      Required. This option makes the file system globally visible to all nodes in the cluster.

      log

      Required. This option enables logging.

      See the VxFS mount_vxfs man page and “Administering Cluster File Systems Overview” in Sun Cluster System Administration Guide for Solaris OS for more information about VxFS mount options.

    2. To automatically mount the cluster file system, set the mount at boot field to yes.

    3. Ensure that, for each cluster file system, the information in its /etc/vfstab entry is identical on each node.

    4. Ensure that the entries in each node's /etc/vfstab file list devices in the same order.

    5. Check the boot order dependencies of the file systems.

      For example, consider the scenario where phys-schost-1 mounts disk device d0 on /global/oracle/, and phys-schost-2 mounts disk device d1 on /global/oracle/logs/. With this configuration, phys-schost-2 can boot and mount /global/oracle/logs/ only after phys-schost-1 boots and mounts /global/oracle/.
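    As an illustration of the preceding sub-steps, the following /etc/vfstab entry (also shown in the example at the end of this procedure) mounts a UFS cluster file system at boot with the required global and logging mount options:


    /dev/md/oracle/dsk/d1 /dev/md/oracle/rdsk/d1 /global/oracle/d1 ufs 2 yes global,logging
    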

  6. On any node in the cluster, run the sccheck(1M) utility.

    The sccheck utility verifies that the mount points exist. The utility also verifies that /etc/vfstab file entries are correct on all nodes of the cluster.


    # sccheck
    

    If no errors occur, nothing is returned.

  7. Mount the cluster file system.


    # mount /global/device-group/mountpoint/
    

    • For UFS and QFS, mount the cluster file system from any node in the cluster.

    • SPARC: For VxFS, mount the cluster file system from the current master of device-group to ensure that the file system mounts successfully. In addition, unmount a VxFS file system from the current master of device-group to ensure that the file system unmounts successfully.


      Note –

      To manage a VxFS cluster file system in a Sun Cluster environment, run administrative commands only from the primary node on which the VxFS cluster file system is mounted.


  8. On each node of the cluster, verify that the cluster file system is mounted.

    You can use either the df(1M) or mount(1M) command to list mounted file systems.
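    For example, using the illustrative mount point from the example at the end of this procedure, you might run either of the following commands on each node:


    # df -k /global/oracle/d1
    # mount | grep /global/oracle/d1
    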

  9. Configure IP Network Multipathing groups.

    Go to How to Configure Internet Protocol (IP) Network Multipathing Groups.

Example – Creating a Cluster File System

The following example creates a UFS cluster file system on the Solstice DiskSuite metadevice /dev/md/oracle/rdsk/d1.


# newfs /dev/md/oracle/rdsk/d1
…
 
(on each node)
# mkdir -p /global/oracle/d1
# vi /etc/vfstab
#device           device        mount   FS      fsck    mount   mount
#to mount         to fsck       point   type    pass    at boot options
#
/dev/md/oracle/dsk/d1 /dev/md/oracle/rdsk/d1 /global/oracle/d1 ufs 2 yes global,logging
(save and exit)
 
(on one node)
# sccheck
# mount /global/oracle/d1
# mount
…
/global/oracle/d1 on /dev/md/oracle/dsk/d1 read/write/setuid/global/logging/largefiles
on Sun Oct 3 08:56:16 2000