Oracle® Solaris Cluster Concepts Guide

Updated: July 2014, E39575-01

Using Cluster File Systems

In the Oracle Solaris Cluster software, all multihost disks are placed into device groups, which can be Solaris Volume Manager disk sets, raw-disk groups, or individual disks that are not under control of a software-based volume manager.
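For example, a Solaris Volume Manager disk set that holds multihost disks is registered with the cluster as a device group and can be checked from any node. The disk set name oracle and the node names phys-schost-1 and phys-schost-2 in this sketch are illustrative only:

    # metaset -s oracle -a -h phys-schost-1 phys-schost-2
    # cldevicegroup status oracle

The metaset command creates the disk set and adds both nodes as hosts; cldevicegroup status then shows whether the resulting device group is online and which node is its current primary.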

For a cluster file system to be highly available, the underlying disk storage must be connected to more than one cluster node. Therefore, a local file system (a file system that is stored on a node's local disk) that is made into a cluster file system is not highly available.
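To confirm that a device is connected to more than one node before you build a cluster file system on it, you can list its device paths. The DID device name d4 below is only an example:

    # cldevice list -v d4

The verbose listing shows one full device path for each node that is physically connected to the device. A device that lists a path from only one node is local storage, and a file system built on it cannot be made highly available.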

You can mount cluster file systems as you would mount any other file system:

  • Manually. Use the mount command and the -g or -o global mount options to mount the cluster file system from the command line, for example:

    SPARC: # mount -g /dev/global/dsk/d0s0 /global/oracle/data
  • Automatically. Create an entry in the /etc/vfstab file with a global mount option to mount the cluster file system at boot. Then create a mount point under the /global directory on all nodes. The directory /global is a recommended location, not a requirement. Here's a sample line for a cluster file system from an /etc/vfstab file, followed by an example that puts the steps together:

    /dev/md/oracle/dsk/d1 /dev/md/oracle/rdsk/d1 /global/oracle/data ufs 2 yes global,logging
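The following sketch puts these steps together for the sample vfstab entry above. The mount point named in the entry must exist on every node, so run the mkdir command on each one:

    # mkdir -p /global/oracle/data

Then, on one node, mount the file system and verify that it was mounted with the global option:

    # mount /global/oracle/data
    # mount -v | grep /global/oracle/data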

Note - While Oracle Solaris Cluster software does not impose a naming policy for cluster file systems, you can ease administration by creating a mount point for all cluster file systems under the same directory, such as /global/disk-group. See the Oracle Solaris Cluster System Administration Guide for more information.