
Creating Cluster File Systems

This section provides procedures to create cluster file systems to support data services.

How to Create Cluster File Systems

Perform this procedure for each cluster file system that you want to create. Unlike a local file system, a cluster file system is accessible from any node in the global cluster.

Before You Begin

Perform the following tasks:

  • Ensure that the software packages for the Oracle Solaris OS, Oracle Solaris Cluster framework, and data services are installed, as described in Chapter 2, Installing Software on Global-Cluster Nodes.

  • Ensure that the global cluster is established, as described in Chapter 3, Establishing the Global Cluster.

  • If you use a volume manager, ensure that the volume management software is installed and configured, as described in Chapter 4, Configuring Solaris Volume Manager Software, or Chapter 5, Installing and Configuring Veritas Volume Manager.

  1. Become superuser on any node in the cluster.

    Perform this procedure from the global zone if non-global zones are configured in the cluster.


    Tip - For faster file-system creation, become superuser on the current primary of the global device for which you create a file system.
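
    For example, to identify the current primary of a device group (here the hypothetical device group oracle), you can run the cldevicegroup command from any node. See the cldevicegroup(1CL) man page for details.

    phys-schost# cldevicegroup status oracle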


  2. Create a file system.

    Caution - Any data on the disks is destroyed when you create a file system. Be sure that you specify the correct disk device name. If you specify the wrong device name, you might erase data that you did not intend to delete.


    • For a UFS file system, use the newfs(1M) command.
      phys-schost# newfs raw-disk-device

      The following table shows examples of names for the raw-disk-device argument. Note that naming conventions differ for each volume manager.


      Volume Manager           Sample Disk Device Name      Description
      Solaris Volume Manager   /dev/md/nfs/rdsk/d1          Raw disk device d1 within the nfs disk set
      Veritas Volume Manager   /dev/vx/rdsk/oradg/vol01     Raw disk device vol01 within the oradg disk group
      None                     /dev/global/rdsk/d1s3        Raw disk device d1s3
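
      For example, the following command creates a UFS file system on the Solaris Volume Manager sample device from this table:

      phys-schost# newfs /dev/md/nfs/rdsk/d1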
    • For a Veritas File System (VxFS) file system, follow the procedures that are provided in your VxFS documentation.
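
      As a sketch only, with the authoritative options defined in your VxFS documentation, VxFS file system creation on Solaris typically uses the generic mkfs command with the vxfs file system type. The device name below is the hypothetical sample from the preceding table:

      phys-schost# mkfs -F vxfs /dev/vx/rdsk/oradg/vol01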
  3. On each node in the cluster, create a mount-point directory for the cluster file system.

    A mount point is required on each node, even if the cluster file system is not accessed on that node.


    Tip - For ease of administration, create the mount point in the /global/device-group/ directory. This location enables you to easily distinguish cluster file systems, which are globally available, from local file systems.


    phys-schost# mkdir -p /global/device-group/mountpoint/
    device-group

    Name of the directory that corresponds to the name of the device group that contains the device.

    mountpoint

    Name of the directory on which to mount the cluster file system.
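
    For example, if the device group is named oracle and the mount-point directory is d1 (the hypothetical names that Example 6-1 uses), run the following command on each node:

    phys-schost# mkdir -p /global/oracle/d1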

  4. On each node in the cluster, add an entry to the /etc/vfstab file for the mount point.

    See the vfstab(4) man page for details.


    Note - If non-global zones are configured in the cluster, ensure that you mount cluster file systems in the global zone on a path in the global zone's root directory.


    1. In each entry, specify the required mount options for the type of file system that you use.
    2. To automatically mount the cluster file system, set the mount at boot field to yes.
    3. Ensure that, for each cluster file system, the information in its /etc/vfstab entry is identical on each node.
    4. Ensure that the entries in each node's /etc/vfstab file list devices in the same order.
    5. Check the boot order dependencies of the file systems.

      For example, consider the scenario where phys-schost-1 mounts disk device d0 on /global/oracle/, and phys-schost-2 mounts disk device d1 on /global/oracle/logs/. With this configuration, phys-schost-2 can boot and mount /global/oracle/logs/ only after phys-schost-1 boots and mounts /global/oracle/.
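
      The following vfstab entries sketch that scenario. The oracle device group and the d0 and d1 devices are hypothetical names, and the same two lines, in the same order, would appear in the /etc/vfstab file on both nodes:

      /dev/md/oracle/dsk/d0 /dev/md/oracle/rdsk/d0 /global/oracle      ufs 2 yes global,logging
      /dev/md/oracle/dsk/d1 /dev/md/oracle/rdsk/d1 /global/oracle/logs ufs 2 yes global,logging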

  5. On any node in the cluster, run the configuration check utility.
    phys-schost# cluster check -k vfstab

    The configuration check utility verifies that the mount points exist. The utility also verifies that /etc/vfstab file entries are correct on all nodes of the cluster. If no errors occur, nothing is returned.

    For more information, see the cluster(1CL) man page.

  6. Mount the cluster file system.
    phys-schost# mount /global/device-group/mountpoint/
    • For UFS, mount the cluster file system from any node in the cluster.
    • For VxFS, mount the cluster file system from the current master of device-group to ensure that the file system mounts successfully.

      In addition, unmount a VxFS file system from the current master of device-group to ensure that the file system unmounts successfully.


      Note - To manage a VxFS cluster file system in an Oracle Solaris Cluster environment, run administrative commands only from the primary node on which the VxFS cluster file system is mounted.
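
      As a minimal sketch, reusing the hypothetical oradg device group from Step 2 and a mount point that follows the naming tip in Step 3, you might confirm the current master of the device group and then mount from that node:

      phys-schost# cldevicegroup status oradg
      phys-schost# mount /global/oradg/vol01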


  7. On each node of the cluster, verify that the cluster file system is mounted.

    You can use either the df command or mount command to list mounted file systems. For more information, see the df(1M) man page or mount(1M) man page.

    Cluster file systems are accessible from both the global zone and the non-global zone.
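
    For example, to check the hypothetical mount point that Example 6-1 uses:

    phys-schost# df -k /global/oracle/d1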

Example 6-1 Creating a UFS Cluster File System

The following example creates a UFS cluster file system on the Solaris Volume Manager volume /dev/md/oracle/rdsk/d1. An entry for the cluster file system is added to the vfstab file on each node. Then the cluster check command is run from one node. After the configuration check completes successfully, the cluster file system is mounted from one node and verified on all nodes.

phys-schost# newfs /dev/md/oracle/rdsk/d1
…
phys-schost# mkdir -p /global/oracle/d1
phys-schost# vi /etc/vfstab
#device           device        mount   FS      fsck    mount   mount
#to mount         to fsck       point   type    pass    at boot options
#                     
/dev/md/oracle/dsk/d1 /dev/md/oracle/rdsk/d1 /global/oracle/d1 ufs 2 yes global,logging
…
phys-schost# cluster check -k vfstab
phys-schost# mount /global/oracle/d1
phys-schost# mount
…
/global/oracle/d1 on /dev/md/oracle/dsk/d1 read/write/setuid/global/logging/largefiles
on Sun Oct 3 08:56:16 2005

Next Steps

Determine from the following list the next task to perform that applies to your cluster configuration. If you need to perform more than one task from this list, go to the first of those tasks in this list.