
Installing and Configuring VxVM Software

This section provides information and procedures to install and configure VxVM software on an Oracle Solaris Cluster configuration.

The following table lists the tasks to perform to install and configure VxVM software for Oracle Solaris Cluster configurations. Complete the procedures in the order that is indicated.

Table 5-1 Task Map: Installing and Configuring VxVM Software

Task: Plan the layout of your VxVM configuration.
Instructions: Planning Volume Management

Task: (Optional) Determine how you will create the root disk group on each node.
Instructions: Setting Up a Root Disk Group Overview

Task: Install VxVM software.
Instructions: How to Install Veritas Volume Manager Software; VxVM installation documentation

Task: (Optional) Create a root disk group. You can either encapsulate the root disk (UFS only) or create the root disk group on local, nonroot disks.
Instructions: SPARC: How to Encapsulate the Root Disk; How to Create a Root Disk Group on a Nonroot Disk

Task: (Optional) Mirror the encapsulated root disk.
Instructions: How to Mirror the Encapsulated Root Disk

Task: Create disk groups.
Instructions: Creating Disk Groups in a Cluster

Setting Up a Root Disk Group Overview

The creation of a root disk group is optional. If you do not intend to create a root disk group, proceed to How to Install Veritas Volume Manager Software.

Oracle Solaris Cluster software supports the following methods to configure the root disk group:

  • Encapsulate the root disk (UFS only) – This method groups the root disk into the root disk group. Perform the procedure in SPARC: How to Encapsulate the Root Disk.

  • Use local nonroot disks – This method creates the root disk group on local disks other than the root disk. Perform the procedure in How to Create a Root Disk Group on a Nonroot Disk.

See your VxVM installation documentation for more information.

How to Install Veritas Volume Manager Software

Perform this procedure to install Veritas Volume Manager (VxVM) software on each global-cluster node that you want to install with VxVM. You can install VxVM on all nodes of the cluster, or install VxVM just on the nodes that are physically connected to the storage devices that VxVM will manage.

Before You Begin

Perform the following tasks:

  • Have available the VxVM installation media and your VxVM installation documentation.

  • Obtain any VxVM license keys that you need to install, including the cluster feature license key if you plan to use the VxVM cluster feature (SPARC only).

  1. Become superuser on a cluster node that you intend to install with VxVM.
  2. Insert the VxVM CD-ROM in the CD-ROM drive on the node.
  3. Follow procedures in your VxVM installation guide to install and configure VxVM software and licenses.
  4. Run the clvxvm utility in noninteractive mode.
    phys-schost# clvxvm initialize

    The clvxvm utility performs necessary postinstallation tasks. The clvxvm utility also selects and configures a cluster-wide vxio driver major number. See the clvxvm(1CL) man page for more information.
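
    For example, you can confirm the cluster-wide vxio major number that clvxvm configured by checking the /etc/name_to_major file. The value 270 shown here is only illustrative; your number might differ.

    phys-schost# grep vxio /etc/name_to_major
    vxio 270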

  5. SPARC: To enable the VxVM cluster feature, supply the cluster feature license key, if you did not already do so.

    See your VxVM documentation for information about how to add a license.

  6. (Optional) Install the VxVM GUI.

    See your VxVM documentation for information about installing the VxVM GUI.

  7. Eject the CD-ROM.
  8. Install any VxVM patches to support Oracle Solaris Cluster software.

    See Patches and Required Firmware Levels in Oracle Solaris Cluster 3.3 5/11 Release Notes for the location of patches and installation instructions.

  9. Repeat Step 1 through Step 8 to install VxVM on any additional nodes.

    Note - SPARC: To enable the VxVM cluster feature, you must install VxVM on all nodes of the cluster.


  10. If you did not install VxVM on one or more nodes, modify the /etc/name_to_major file on each non-VxVM node.
    1. On a node that is installed with VxVM, determine the vxio major number setting.
      phys-schost# grep vxio /etc/name_to_major
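      Output looks similar to the following, where the major number 270 is only illustrative:

      vxio 270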
    2. Become superuser on a node that you do not intend to install with VxVM.
    3. Edit the /etc/name_to_major file and add an entry to set the vxio major number to NNN, the number derived in Step a.
      phys-schost# vi /etc/name_to_major
      vxio NNN
    4. Initialize the vxio entry.
      phys-schost# drvconfig -b -i vxio -m NNN
    5. Repeat Step a through Step d on all other nodes that you do not intend to install with VxVM.

      When you finish, each node of the cluster should have the same vxio entry in its /etc/name_to_major file.
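
      For example, you might run the same grep command on each node and compare the results. The node names and the major number shown are only illustrative.

      phys-schost-1# grep vxio /etc/name_to_major
      vxio 270
      phys-schost-2# grep vxio /etc/name_to_major
      vxio 270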

  11. To create a root disk group, go to SPARC: How to Encapsulate the Root Disk or How to Create a Root Disk Group on a Nonroot Disk.

    Otherwise, proceed to Step 12.


    Note - A root disk group is optional.


  12. Reboot each node on which you installed VxVM.
    phys-schost# shutdown -g0 -y -i6

Next Steps

To create a root disk group, go to SPARC: How to Encapsulate the Root Disk (UFS only) or How to Create a Root Disk Group on a Nonroot Disk.

Otherwise, create disk groups. Go to Creating Disk Groups in a Cluster.

SPARC: How to Encapsulate the Root Disk

Perform this procedure to create a root disk group by encapsulating the UFS root disk. Root disk groups are optional. See your VxVM documentation for more information.


Note - If your root disk uses ZFS, you can only create a root disk group on local nonroot disks. If you want to create a root disk group on nonroot disks, instead perform procedures in How to Create a Root Disk Group on a Nonroot Disk.


Before You Begin

Ensure that you have installed VxVM as described in How to Install Veritas Volume Manager Software.

  1. Become superuser on a node that you installed with VxVM.
  2. Encapsulate the UFS root disk.
    phys-schost# clvxvm encapsulate

    See the clvxvm(1CL) man page for more information.
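
    After encapsulation completes and the node reboots, you can verify that the root file system is now mounted from a VxVM volume. This check is a sketch: it assumes the default VxVM root volume name rootvol, and the output shown is only illustrative.

    phys-schost# df -k /
    Filesystem            kbytes    used   avail capacity  Mounted on
    /dev/vx/dsk/rootvol  8254390 4083083 4088754    50%    /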

  3. Repeat for any other node on which you installed VxVM.

Next Steps

To mirror the encapsulated root disk, go to How to Mirror the Encapsulated Root Disk.

Otherwise, go to Creating Disk Groups in a Cluster.

How to Create a Root Disk Group on a Nonroot Disk

Use this procedure to create a root disk group by encapsulating or initializing local disks other than the root disk. The creation of a root disk group is optional.


Note - If you want to create a root disk group on the root disk and the root disk uses UFS, instead perform procedures in SPARC: How to Encapsulate the Root Disk.


Before You Begin

If the disks are to be encapsulated, ensure that each disk has at least two slices with 0 cylinders. If necessary, use the format(1M) command to assign 0 cylinders to each VxVM slice.
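
A quick way to review a disk's existing slice layout before encapsulating it is to print the disk's volume table of contents. The device name c1t1d0 in this example is only illustrative.

phys-schost# prtvtoc /dev/rdsk/c1t1d0s2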

  1. Become superuser.
  2. Start the vxinstall utility.
    phys-schost# vxinstall
  3. When prompted by the vxinstall utility, make the following choices or entries.
    • SPARC: To enable the VxVM cluster feature, supply the cluster feature license key.

    • Choose Custom Installation.

    • Do not encapsulate the boot disk.

    • Choose any disks to add to the root disk group.

    • Do not accept automatic reboot.

  4. If the root disk group that you created contains one or more disks that connect to more than one node, ensure that fencing is disabled for such disks.

    Use the following command to disable fencing for each shared disk in the root disk group.

    phys-schost# cldevice set -p default_fencing=nofencing device
    -p

    Specifies a device property.

    default_fencing=nofencing

    Disables fencing for the specified device.

    Disabling fencing for the device prevents unintentional fencing of the node from the disk that is used by the root disk group if that disk is connected to multiple nodes.

    For more information about the default_fencing property, see the cldevice(1CL) man page.
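
    To confirm the new setting, you can display the device's properties. The DID device name d10 is only illustrative.

    phys-schost# cldevice show d10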

  5. Evacuate any resource groups or device groups from the node.
    phys-schost# clnode evacuate from-node
    from-node

    Specifies the name of the node from which to move resource or device groups.
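
    For example, to move all resource groups and device groups off the node phys-schost-1:

    phys-schost# clnode evacuate phys-schost-1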

  6. Reboot the node.
    phys-schost# shutdown -g0 -y -i6
  7. Use the vxdiskadm command to add multiple disks to the root disk group.

    The root disk group becomes tolerant of a disk failure when it contains multiple disks. See VxVM documentation for procedures.

Next Steps

Create disk groups. Go to Creating Disk Groups in a Cluster.

How to Mirror the Encapsulated Root Disk

After you install VxVM and encapsulate the root disk, perform this procedure on each node on which you mirror the encapsulated root disk.

Before You Begin

Ensure that you have encapsulated the root disk as described in SPARC: How to Encapsulate the Root Disk.

  1. Become superuser.
  2. List the devices.
    phys-schost# cldevice list -v

    Output looks similar to the following:

    DID Device          Full Device Path
    ----------          ----------------
    d1                  phys-schost-1:/dev/rdsk/c0t0d0
    d2                  phys-schost-1:/dev/rdsk/c0t6d0
    d3                  phys-schost-2:/dev/rdsk/c1t1d0
    d3                  phys-schost-1:/dev/rdsk/c1t1d0
  3. Mirror the encapsulated root disk.

    Follow the procedures in your VxVM documentation.

    For maximum availability and simplified administration, use a local disk for the mirror. See Guidelines for Mirroring the Root Disk for additional guidelines.


    Caution

    Caution - Do not use a quorum device to mirror a root disk. Using a quorum device to mirror a root disk might prevent the node from booting from the root-disk mirror under certain circumstances.


  4. View the node list of the raw-disk device group for the device that you used to mirror the root disk.

    The name of the device group is of the form dsk/dN, where dN is the DID device name.

    phys-schost# cldevicegroup list -v dsk/dN
    -v

    Displays verbose output.

    Output looks similar to the following.

    Device group        Type                Node list
    ------------        ----                ---------
    dsk/dN              Local_Disk          phys-schost-1, phys-schost-3
  5. If the node list contains more than one node name, remove from the node list all nodes except the node whose root disk you mirrored.

    Only the node whose root disk you mirrored should remain in the node list for the raw-disk device group.

    phys-schost# cldevicegroup remove-node -n node dsk/dN
    -n node

    Specifies the node to remove from the device-group node list.

  6. Disable fencing for all disks in the raw-disk device group that connect to more than one node.

    Disabling fencing for a device prevents unintentional fencing of the node from its boot device if the boot device is connected to multiple nodes.

    phys-schost# cldevice set -p default_fencing=nofencing device
    -p

    Sets the value of a device property.

    default_fencing=nofencing

    Disables fencing for the specified device.

    For more information about the default_fencing property, see the cldevice(1CL) man page.

  7. Repeat this procedure for each node in the cluster whose encapsulated root disk you want to mirror.

Example 5-1 Mirroring the Encapsulated Root Disk

The following example shows a mirror created of the root disk for the node phys-schost-1. The mirror is created on the disk c0t0d0, whose raw-disk device-group name is dsk/d2. Disk c0t0d0 is a multihost disk, so the node phys-schost-3 is removed from the disk's node list and fencing is disabled.

phys-schost# cldevice list -v
DID Device          Full Device Path
----------          ----------------
d2                  phys-schost-1:/dev/rdsk/c0t0d0
…
Create the mirror by using VxVM procedures
phys-schost# cldevicegroup list -v dsk/d2
Device group        Type                Node list
------------        ----                ---------
dsk/d2              Local_Disk          phys-schost-1, phys-schost-3
phys-schost# cldevicegroup remove-node -n phys-schost-3 dsk/d2
phys-schost# cldevice set -p default_fencing=nofencing c0t0d0

Next Steps

Create disk groups. Go to Creating Disk Groups in a Cluster.