
SPARC: Installing and Configuring VxVM Software

This section provides information and procedures to install and configure VxVM software on a Sun Cluster configuration.

The following table lists the tasks to perform to install and configure VxVM software for Sun Cluster configurations.

Table 4–1 SPARC: Task Map: Installing and Configuring VxVM Software

1. Plan the layout of your VxVM configuration.

   Instructions: Planning Volume Management

2. Determine how you will create the root disk group on each node. As of VxVM 4.0, the creation of a root disk group is optional.

   Instructions: SPARC: Setting Up a Root Disk Group Overview

3. Install VxVM software.

   Instructions: SPARC: How to Install VERITAS Volume Manager Software and your VxVM installation documentation

4. If necessary, create a root disk group. You can either encapsulate the root disk or create the root disk group on local, nonroot disks.

   Instructions: SPARC: How to Encapsulate the Root Disk or SPARC: How to Create a Root Disk Group on a Nonroot Disk

5. (Optional) Mirror the encapsulated root disk.

   Instructions: SPARC: How to Mirror the Encapsulated Root Disk

6. Create disk groups.

   Instructions: SPARC: Creating Disk Groups in a Cluster

SPARC: Setting Up a Root Disk Group Overview

As of VxVM 4.0, the creation of a root disk group is optional. If you do not intend to create a root disk group, proceed to SPARC: How to Install VERITAS Volume Manager Software.

For VxVM 3.5, each cluster node requires the creation of a root disk group after VxVM is installed. VxVM uses this root disk group to store configuration information. The root disk group is local to the node on which it is created and must not be accessed by other nodes.

Sun Cluster software supports the following methods to configure the root disk group:

  • Encapsulate the node's root disk. See SPARC: How to Encapsulate the Root Disk.

  • Create the root disk group on local, nonroot disks. See SPARC: How to Create a Root Disk Group on a Nonroot Disk.

See your VxVM installation documentation for more information.

SPARC: How to Install VERITAS Volume Manager Software

Perform this procedure to install VERITAS Volume Manager (VxVM) software on each node that you want to install with VxVM. You can install VxVM on all nodes of the cluster, or install VxVM just on the nodes that are physically connected to the storage devices that VxVM will manage.

Before You Begin

Perform the following tasks:

  • Ensure that the layout of your volume-manager configuration is planned, as described in Planning Volume Management.

  • Have available your VxVM installation documentation and any VxVM license keys that you need to install.

Steps
  1. Become superuser on a cluster node that you intend to install with VxVM.

  2. Insert the VxVM CD-ROM in the CD-ROM drive on the node.

  3. For VxVM 4.1, follow procedures in your VxVM installation guide to install and configure VxVM software and licenses.


    Note –

    For VxVM 4.1, the scvxinstall command no longer performs installation of VxVM packages and licenses, but does perform necessary postinstallation tasks.


  4. Run the scvxinstall utility in noninteractive mode.

    • For VxVM 4.0 and earlier, use the following command:


      # scvxinstall -i -L {license | none}
      -i

      Installs VxVM but does not encapsulate the root disk

      -L {license | none}

      Installs the specified license. The none argument specifies that no additional license key is being added.

    • For VxVM 4.1, use the following command:


      # scvxinstall -i
      
      -i

      For VxVM 4.1, verifies that VxVM is installed but does not encapsulate the root disk

    The scvxinstall utility also selects and configures a cluster-wide vxio driver major number. See the scvxinstall(1M) man page for more information.
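    For example, you can confirm the vxio major number that scvxinstall configured by checking the /etc/name_to_major file on the node. This is a minimal sketch; the major number 270 is only an example value, and your cluster might use a different number.


    # grep vxio /etc/name_to_major
    vxio 270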

  5. If you intend to enable the VxVM cluster feature, supply the cluster feature license key, if you did not already do so.

    See your VxVM documentation for information about how to add a license.
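    For VxVM 4.x, the VERITAS licensing utilities are typically used to add and verify license keys. The following sketch assumes that these utilities are available in your VxVM release; see your VxVM documentation for the authoritative procedure.


    (Add a license key; the utility prompts for the key)
    # vxlicinst
    (Display the licenses that are installed)
    # vxlicrep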

  6. (Optional) Install the VxVM GUI.

    See your VxVM documentation for information about installing the VxVM GUI.

  7. Eject the CD-ROM.

  8. Install any VxVM patches.

    See Patches and Required Firmware Levels in Sun Cluster 3.1 8/05 Release Notes for Solaris OS for the location of patches and installation instructions.
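    On Solaris, you typically apply each patch with the patchadd(1M) command. The patch ID and directory in the following sketch are hypothetical; use the patches that are listed in the release notes.


    # patchadd /var/tmp/123456-01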

  9. (Optional) For VxVM 4.0 and earlier, if you prefer not to have VxVM man pages reside on the cluster node, remove the man-page package.


    # pkgrm VRTSvmman
    
  10. Repeat Step 1 through Step 9 to install VxVM on any additional nodes.


    Note –

    If you intend to enable the VxVM cluster feature, you must install VxVM on all nodes of the cluster.


  11. If you do not install one or more nodes with VxVM, modify the /etc/name_to_major file on each non-VxVM node.

    1. On a node that is installed with VxVM, determine the vxio major number setting.


      # grep vxio /etc/name_to_major
      
    2. Become superuser on a node that you do not intend to install with VxVM.

    3. Edit the /etc/name_to_major file and add an entry to set the vxio major number to NNN, the number derived in Step a.


      # vi /etc/name_to_major
      vxio NNN
      
    4. Initialize the vxio entry.


      # drvconfig -b -i vxio -m NNN
      
    5. Repeat Step a through Step d on all other nodes that you do not intend to install with VxVM.

      When you finish, each node of the cluster should have the same vxio entry in its /etc/name_to_major file.
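    The following sketch shows the complete sequence for one non-VxVM node, assuming that the vxio major number found on a VxVM node is 270. The value 270 is only an example; always use the number from your own /etc/name_to_major file.


    (On a node that is installed with VxVM)
    # grep vxio /etc/name_to_major
    vxio 270

    (On each node that is not installed with VxVM)
    # vi /etc/name_to_major
    vxio 270
    # drvconfig -b -i vxio -m 270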

  12. To create a root disk group, go to SPARC: How to Encapsulate the Root Disk or SPARC: How to Create a Root Disk Group on a Nonroot Disk.

    Otherwise, proceed to Step 13.


    Note –

    VxVM 3.5 requires that you create a root disk group. For VxVM 4.0 and later, a root disk group is optional.


  13. Reboot each node on which you installed VxVM.


    # shutdown -g0 -y -i6
    
Next Steps

To create a root disk group, go to SPARC: How to Encapsulate the Root Disk or SPARC: How to Create a Root Disk Group on a Nonroot Disk.

Otherwise, create disk groups. Go to SPARC: Creating Disk Groups in a Cluster.

SPARC: How to Encapsulate the Root Disk

Perform this procedure to create a root disk group by encapsulating the root disk. Root disk groups are required for VxVM 3.5. For VxVM 4.0 and later, root disk groups are optional. See your VxVM documentation for more information.


Note –

If you want to create the root disk group on nonroot disks, perform the procedure in SPARC: How to Create a Root Disk Group on a Nonroot Disk instead.


Before You Begin

Ensure that you have installed VxVM as described in SPARC: How to Install VERITAS Volume Manager Software.

Steps
  1. Become superuser on a node that you installed with VxVM.

  2. Encapsulate the root disk.


    # scvxinstall -e
    
    -e

    Encapsulates the root disk

    See the scvxinstall(1M) man page for more information.
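    After the encapsulation process is complete, you can optionally confirm that the root disk group exists. This is a minimal sketch; in most configurations the root disk group is named rootdg.


    # vxdg list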

  3. Repeat for any other node on which you installed VxVM.

Next Steps

To mirror the encapsulated root disk, go to SPARC: How to Mirror the Encapsulated Root Disk.

Otherwise, go to SPARC: Creating Disk Groups in a Cluster.

SPARC: How to Create a Root Disk Group on a Nonroot Disk

Use this procedure to create a root disk group by encapsulating or initializing local disks other than the root disk. As of VxVM 4.0, the creation of a root disk group is optional.


Note –

If you want to create a root disk group on the root disk, perform the procedure in SPARC: How to Encapsulate the Root Disk instead.


Before You Begin

If the disks are to be encapsulated, ensure that each disk has at least two slices with 0 cylinders. If necessary, use the format(1M) command to assign 0 cylinders to each VxVM slice.
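You can check the existing slices on a disk by using the prtvtoc(1M) command before you run format. The device name in the following sketch is only an example.


# prtvtoc /dev/rdsk/c1t1d0s2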

Steps
  1. Become superuser on the node.

  2. Start the vxinstall utility.


    # vxinstall
    

    When prompted, make the following choices or entries.

    • If you intend to enable the VxVM cluster feature, supply the cluster feature license key.

    • Choose Custom Installation.

    • Do not encapsulate the boot disk.

    • Choose any disks to add to the root disk group.

    • Do not accept automatic reboot.

  3. If the root disk group that you created contains one or more disks that connect to more than one node, enable the localonly property.

    Use the following command to enable the localonly property of the raw-disk device group for each shared disk in the root disk group.


    # scconf -c -D name=dsk/dN,localonly=true
    

    When the localonly property is enabled, the raw-disk device group is used exclusively by the node in its node list. This usage prevents unintentional fencing of the node from the disk that is used by the root disk group if that disk is connected to multiple nodes.

    For more information about the localonly property, see the scconf_dg_rawdisk(1M) man page.
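    You can verify the setting by viewing the raw-disk device-group configuration. The device-group name dsk/d4 in the following sketch is only an example value; the output includes the device group's node list.


    # scconf -pvv | grep dsk/d4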

  4. Move any resource groups or device groups from the node.


    # scswitch -S -h from-node
    
    -S

    Moves all resource groups and device groups

    -h from-node

    Specifies the name of the node from which to move resource or device groups
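    For example, to evacuate all resource groups and device groups from the node phys-schost-1 (a node name used in examples elsewhere in this guide), you would run a command similar to the following.


    # scswitch -S -h phys-schost-1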

  5. Reboot the node.


    # shutdown -g0 -y -i6
    
  6. Use the vxdiskadm command to add multiple disks to the root disk group.

    The root disk group becomes tolerant of a disk failure when it contains multiple disks. See VxVM documentation for procedures.
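    After you add the disks, you can confirm their status and disk-group membership with the vxdisk utility. This is a minimal sketch; see your VxVM documentation for details.


    # vxdisk list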

Next Steps

Create disk groups. Go to SPARC: Creating Disk Groups in a Cluster.

SPARC: How to Mirror the Encapsulated Root Disk

After you install VxVM and encapsulate the root disk, perform this procedure on each node whose encapsulated root disk you want to mirror.

Before You Begin

Ensure that you have encapsulated the root disk as described in SPARC: How to Encapsulate the Root Disk.

Steps
  1. Mirror the encapsulated root disk.

    Follow the procedures in your VxVM documentation. For maximum availability and simplified administration, use a local disk for the mirror. See Guidelines for Mirroring the Root Disk for additional guidelines.


    Caution –

    Do not use a quorum device to mirror a root disk. Using a quorum device to mirror a root disk might prevent the node from booting from the root-disk mirror under certain circumstances.


  2. Display the DID mappings.


    # scdidadm -L
    
  3. From the DID mappings, locate the disk that is used to mirror the root disk.

  4. Extract the raw-disk device-group name from the device-ID name of the root-disk mirror.

    The name of the raw-disk device group follows the convention dsk/dN, where N is a number. In the following line of sample scdidadm output, the raw-disk device-group name dsk/dN is derived from the /dev/did/rdsk/dN portion of the device-ID name.


    N         node:/dev/rdsk/cNtXdY     /dev/did/rdsk/dN
    
  5. View the node list of the raw-disk device group.

    Output looks similar to the following.


    # scconf -pvv | grep dsk/dN
    Device group name:						dsk/dN
    …
     (dsk/dN) Device group node list:		phys-schost-1, phys-schost-3
    …
  6. If the node list contains more than one node name, remove from the node list all nodes except the node whose root disk you mirrored.

    Only the node whose root disk you mirrored should remain in the node list for the raw-disk device group.


    # scconf -r -D name=dsk/dN,nodelist=node
    
    -D name=dsk/dN

    Specifies the cluster-unique name of the raw-disk device group

    nodelist=node

    Specifies the name of the node or nodes to remove from the node list

  7. Enable the localonly property of the raw-disk device group.

    When the localonly property is enabled, the raw-disk device group is used exclusively by the node in its node list. This usage prevents unintentional fencing of the node from its boot device if the boot device is connected to multiple nodes.


    # scconf -c -D name=dsk/dN,localonly=true
    

    For more information about the localonly property, see the scconf_dg_rawdisk(1M) man page.

  8. Repeat this procedure for each node in the cluster whose encapsulated root disk you want to mirror.


Example 4–1 SPARC: Mirroring the Encapsulated Root Disk

The following example shows the creation of a mirror of the root disk for the node phys-schost-1. The mirror is created on the disk c1t1d0, whose raw-disk device-group name is dsk/d2. Disk c1t1d0 is a multihost disk, so the node phys-schost-3 is removed from the disk's node list and the localonly property is enabled.


(Display the DID mappings)
# scdidadm -L 
…
2        phys-schost-1:/dev/rdsk/c1t1d0 /dev/did/rdsk/d2   
2        phys-schost-3:/dev/rdsk/c1t1d0 /dev/did/rdsk/d2   
…
 
(Display the node list of the mirror disk's raw-disk device group)
# scconf -pvv | grep dsk/d2
Device group name:						dsk/d2
…
  (dsk/d2) Device group node list:		phys-schost-1, phys-schost-3
…
 
(Remove phys-schost-3 from the node list)
# scconf -r -D name=dsk/d2,nodelist=phys-schost-3
  
(Enable the localonly property)
# scconf -c -D name=dsk/d2,localonly=true

Next Steps

Create disk groups. Go to SPARC: Creating Disk Groups in a Cluster.