In the SunPlex system, all multihost disks are placed into disk device groups, which can be Solaris Volume Manager disksets, VxVM disk groups, or individual disks not under control of a software-based volume manager.
For a cluster file system to be highly available, the underlying disk storage must be connected to more than one node. Therefore, a local file system (a file system that is stored on a node's local disk) that is made into a cluster file system is not highly available.
As with normal file systems, you can mount cluster file systems in two ways:
Manually: Use the mount command with the -g or -o global mount option to mount the cluster file system from the command line, for example:
# mount -g /dev/global/dsk/d0s0 /global/oracle/data
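The -o global form of the command is equivalent; using the same hypothetical device path and mount point as above, it might look like:
# mount -o global /dev/global/dsk/d0s0 /global/oracle/data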
Automatically: Create an entry in the /etc/vfstab file with the global mount option to mount the cluster file system at boot. You must also create a mount point under the /global directory on all nodes; the /global directory is a recommended location, not a requirement. Here is a sample /etc/vfstab entry for a cluster file system:
/dev/md/oracle/dsk/d1 /dev/md/oracle/rdsk/d1 /global/oracle/data ufs 2 yes global,logging
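The mount point must exist on every node before the file system can be mounted there. A minimal sketch, assuming the same /global/oracle/data path used in the sample entry above, run on each cluster node:
# mkdir -p /global/oracle/data
# mount /global/oracle/data
The second command mounts the file system immediately using the vfstab entry; at the next boot it is mounted automatically.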
While Sun Cluster software does not impose a naming policy for cluster file systems, you can ease administration by creating a mount point for all cluster file systems under the same directory, such as /global/disk-device-group. See the Sun Cluster 3.1 Software Installation Guide and the Sun Cluster 3.1 System Administration Guide for more information.
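For example, if a cluster had disk device groups named oracle and nfs (hypothetical names used only for illustration), the corresponding mount points would be created on each node as:
# mkdir -p /global/oracle
# mkdir -p /global/nfs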