Administering Cluster File Systems

The cluster file system is a globally available file system that can be read and accessed from any node of the cluster.

Table 5-5 Task Map: Administering Cluster File Systems

Task: Add cluster file systems after the initial Oracle Solaris Cluster installation
Instructions: How to Add a Cluster File System

Task: Remove a cluster file system
Instructions: How to Remove a Cluster File System

Task: Check global mount points in a cluster for consistency across nodes
Instructions: How to Check Global Mounts in a Cluster

How to Add a Cluster File System

Perform this task for each cluster file system you create after your initial Oracle Solaris Cluster installation.


Caution - Be sure you specify the correct disk device name. Creating a cluster file system destroys any data on the disks. If you specify the wrong device name, you will erase data that you might not intend to delete.


Ensure that the following prerequisites have been met before you add another cluster file system:

If you used Oracle Solaris Cluster Manager to install data services, one or more cluster file systems already exist if there were sufficient shared disks on which to create the cluster file systems.

The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.
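As an illustration only, the following two invocations are equivalent, assuming the cldevicegroup command and its short form cldg (typical of the cl* command set); substitute whichever command you are actually running:

phys-schost# cldevicegroup status
phys-schost# cldg status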

  1. Become superuser on any node in the cluster.

    Perform this procedure from the global zone if non-global zones are configured in the cluster.


    Tip - For faster file-system creation, become superuser on the current primary of the global device for which you create a file system.
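    If you are not sure which node is the current primary for the device group, you can check its status first. The following sketch assumes a device group named oracle; substitute your own device group name.

    phys-schost# cldevicegroup status oracle

    The output lists the current primary node for the device group, which is the node on which to create the file system.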


  2. Create a file system.

    Caution - Any data on the disks is destroyed when you create a file system. Be sure that you specify the correct disk device name. If you specify the wrong device name, you might erase data that you did not intend to delete.


    • For a UFS file system, use the newfs(1M) command.
      phys-schost# newfs raw-disk-device

      The following table shows examples of names for the raw-disk-device argument. Note that naming conventions differ for each volume manager.


      Volume Manager            Sample Disk Device Name    Description
      Solaris Volume Manager    /dev/md/nfs/rdsk/d1        Raw disk device d1 within the nfs disk set
      None                      /dev/global/rdsk/d1s3      Raw disk device d1s3
  3. On each node in the cluster, create a mount-point directory for the cluster file system.

    A mount point is required on each node, even if the cluster file system is not accessed on that node.


    Tip - For ease of administration, create the mount point in the /global/device-group/ directory. This location enables you to easily distinguish cluster file systems, which are globally available, from local file systems.


    phys-schost# mkdir -p /global/device-group/mountpoint/
    device-group

    Name of the directory that corresponds to the name of the device group that contains the device.

    mountpoint

    Name of the directory on which to mount the cluster file system.

  4. On each node in the cluster, add an entry to the /etc/vfstab file for the mount point.

    See the vfstab(4) man page for details.


    Note - If non-global zones are configured in the cluster, ensure that you mount cluster file systems in the global zone on a path in the global zone's root directory.


    1. In each entry, specify the required mount options for the type of file system that you use.
    2. To automatically mount the cluster file system, set the mount at boot field to yes.
    3. Ensure that, for each cluster file system, the information in its /etc/vfstab entry is identical on each node.
    4. Ensure that the entries in each node's /etc/vfstab file list devices in the same order.
    5. Check the boot order dependencies of the file systems.

      For example, consider the scenario where phys-schost-1 mounts disk device d0 on /global/oracle/, and phys-schost-2 mounts disk device d1 on /global/oracle/logs/. With this configuration, phys-schost-2 can boot and mount /global/oracle/logs/ only after phys-schost-1 boots and mounts /global/oracle/.
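      The following vfstab sketch illustrates that scenario. The device paths assume Solaris Volume Manager volumes d0 and d1 in a disk set named oracle; the entries are identical on every node, and the nested mount point /global/oracle/logs can be mounted only after /global/oracle is mounted.

      #device               device                 mount               FS   fsck  mount    mount
      #to mount             to fsck                point               type pass  at boot  options
      /dev/md/oracle/dsk/d0 /dev/md/oracle/rdsk/d0 /global/oracle      ufs  2     yes      global,logging
      /dev/md/oracle/dsk/d1 /dev/md/oracle/rdsk/d1 /global/oracle/logs ufs  2     yes      global,logging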

  5. On any node in the cluster, run the configuration check utility.
    phys-schost# cluster check -k vfstab

    The configuration check utility verifies that the mount points exist. The utility also verifies that /etc/vfstab file entries are correct on all nodes of the cluster. If no errors occur, nothing is returned.

    For more information, see the cluster(1CL) man page.

  6. Mount the cluster file system.

    For UFS and QFS, mount the cluster file system from any node in the cluster.

    phys-schost# mount /global/device-group/mountpoint/
  7. On each node of the cluster, verify that the cluster file system is mounted.

    You can use either the df command or mount command to list mounted file systems. For more information, see the df(1M) man page or mount(1M) man page.

    Cluster file systems are accessible from both the global zone and the non-global zone.
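    For example, a quick check on each node might look like the following, using the same placeholder mount point as in the earlier steps:

    phys-schost# df -k /global/device-group/mountpoint/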

Example 5-36 Creating a UFS Cluster File System

The following example creates a UFS cluster file system on the Solaris Volume Manager volume /dev/md/oracle/rdsk/d1. An entry for the cluster file system is added to the vfstab file on each node. Then, from one node, the cluster check command is run. After the configuration check completes successfully, the cluster file system is mounted from one node and verified on all nodes.

phys-schost# newfs /dev/md/oracle/rdsk/d1
…
phys-schost# mkdir -p /global/oracle/d1
phys-schost# vi /etc/vfstab
#device           device        mount   FS      fsck    mount   mount
#to mount         to fsck       point   type    pass    at boot options
#                     
/dev/md/oracle/dsk/d1 /dev/md/oracle/rdsk/d1 /global/oracle/d1 ufs 2 yes global,logging
…
phys-schost# cluster check -k vfstab
phys-schost# mount /global/oracle/d1
phys-schost# mount
…
/global/oracle/d1 on /dev/md/oracle/dsk/d1 read/write/setuid/global/logging/largefiles
on Sun Oct 3 08:56:16 2005

How to Remove a Cluster File System

You remove a cluster file system by merely unmounting it. To also remove or delete the data, remove the underlying disk device (or metadevice or volume) from the system.


Note - Cluster file systems are automatically unmounted as part of the system shutdown that occurs when you run cluster shutdown to stop the entire cluster. A cluster file system is not unmounted when you run shutdown to stop a single node. However, if the node being shut down is the only node with a connection to the disk, any attempt to access the cluster file system on that disk results in an error.
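For reference, the two cases that this note describes are typically invoked as follows; the grace period (-g0) shown here is a common choice, not a requirement.

To stop the entire cluster, which unmounts the cluster file systems automatically:

# cluster shutdown -g0 -y

To stop only a single node, leaving cluster file systems mounted on the remaining nodes:

# shutdown -g0 -y -i0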


Ensure that all prerequisite tasks have been completed before you unmount a cluster file system.

  1. Become superuser on any node in the cluster.
  2. Determine which cluster file systems are mounted.
    # mount -v
  3. On each node, list all processes that are using the cluster file system, so that you know which processes you are going to stop.
    # fuser -c [ -u ] mountpoint
    -c

    Reports on files that are mount points for file systems and any files within those mounted file systems.

    -u

    (Optional) Displays the user login name for each process ID.

    mountpoint

    Specifies the name of the cluster file system for which you want to stop processes.

  4. On each node, stop all processes for the cluster file system.

    Use your preferred method for stopping processes. If necessary, use the following command to force termination of processes associated with the cluster file system.

    # fuser -c -k mountpoint

    A SIGKILL is sent to each process that uses the cluster file system.

  5. On each node, verify that no processes are using the file system.
    # fuser -c mountpoint
  6. From just one node, unmount the file system.
    # umount mountpoint
    mountpoint

    Specifies the name of the cluster file system you want to unmount. This can be either the directory name where the cluster file system is mounted, or the device name path of the file system.

  7. (Optional) Edit the /etc/vfstab file to delete the entry for the cluster file system being removed.

    Perform this step on each cluster node that has an entry for this cluster file system in its /etc/vfstab file.

  8. (Optional) Remove the disk device group/metadevice/volume/plex.

    See your volume manager documentation for more information.
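    For example, with Solaris Volume Manager you might clear the underlying volume after the file system is unmounted. The disk set and volume names below are hypothetical; verify them with the metastat command before you clear anything.

    # metaclear -s oracle d1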

Example 5-37 Removing a Cluster File System

The following example removes a UFS cluster file system that is mounted on the Solaris Volume Manager metadevice or volume /dev/md/oracle/rdsk/d1.

# mount -v
...
/global/oracle/d1 on /dev/md/oracle/dsk/d1 read/write/setuid/global/logging/largefiles 
# fuser -c /global/oracle/d1
/global/oracle/d1: 4006c
# fuser -c -k /global/oracle/d1
/global/oracle/d1: 4006c
# fuser -c /global/oracle/d1
/global/oracle/d1:
# umount /global/oracle/d1
 
(On each node, remove the following entry:)
# vi /etc/vfstab
#device           device        mount   FS      fsck    mount   mount
#to mount         to fsck       point   type    pass    at boot options
#                       
/dev/md/oracle/dsk/d1 /dev/md/oracle/rdsk/d1 /global/oracle/d1 ufs 2 yes global,logging

[Save and exit.]

To remove the data on the cluster file system, remove the underlying device. See your volume manager documentation for more information.

How to Check Global Mounts in a Cluster

The cluster(1CL) utility verifies the syntax of the entries for cluster file systems in the /etc/vfstab file. If no errors occur, nothing is returned.


Note - Run the cluster check command after making cluster configuration changes, such as removing a cluster file system, that have affected devices or volume management components.


  1. Become superuser on any node in the cluster.
  2. Check the cluster global mounts.
    # cluster check -k vfstab