Sun Cluster Software Installation Guide for Solaris OS

Chapter 4 SPARC: Installing and Configuring VERITAS Volume Manager

Install and configure your local and multihost disks for VERITAS Volume Manager (VxVM) by using the procedures in this chapter, along with the planning information in Planning Volume Management. See your VxVM documentation for additional details.

The following sections are in this chapter:

• SPARC: Installing and Configuring VxVM Software
• SPARC: Creating Disk Groups in a Cluster
• SPARC: Unencapsulating the Root Disk

SPARC: Installing and Configuring VxVM Software

This section provides information and procedures to install and configure VxVM software on a Sun Cluster configuration.

The following table lists the tasks to perform to install and configure VxVM software for Sun Cluster configurations.

Table 4–1 SPARC: Task Map: Installing and Configuring VxVM Software

1. Plan the layout of your VxVM configuration.
   Instructions: Planning Volume Management

2. Determine how you will create the root disk group on each node. As of VxVM 4.0, the creation of a root disk group is optional.
   Instructions: SPARC: Setting Up a Root Disk Group Overview

3. Install VxVM software.
   Instructions: SPARC: How to Install VERITAS Volume Manager Software and your VxVM installation documentation

4. If necessary, create a root disk group. You can either encapsulate the root disk or create the root disk group on local, nonroot disks.
   Instructions: SPARC: How to Encapsulate the Root Disk or SPARC: How to Create a Root Disk Group on a Nonroot Disk

5. (Optional) Mirror the encapsulated root disk.
   Instructions: SPARC: How to Mirror the Encapsulated Root Disk

6. Create disk groups.
   Instructions: SPARC: Creating Disk Groups in a Cluster

SPARC: Setting Up a Root Disk Group Overview

As of VxVM 4.0, the creation of a root disk group is optional. If you do not intend to create a root disk group, proceed to SPARC: How to Install VERITAS Volume Manager Software.

For VxVM 3.5, each cluster node requires the creation of a root disk group after VxVM is installed. This root disk group is used by VxVM to store configuration information, and its use is restricted to the node on which it is created.

Sun Cluster software supports the following methods to configure the root disk group:

• Encapsulate the node's root disk.
• Create the root disk group on local, nonroot disks.

See your VxVM installation documentation for more information.

Procedure: SPARC: How to Install VERITAS Volume Manager Software

Perform this procedure to install VERITAS Volume Manager (VxVM) software on each node that you want to install with VxVM. You can install VxVM on all nodes of the cluster, or install VxVM just on the nodes that are physically connected to the storage devices that VxVM will manage.

Before You Begin

Perform the following tasks:

Steps
  1. Become superuser on a cluster node that you intend to install with VxVM.

  2. Insert the VxVM CD-ROM in the CD-ROM drive on the node.

  3. For VxVM 4.1, follow procedures in your VxVM installation guide to install and configure VxVM software and licenses.


    Note –

    For VxVM 4.1, the scvxinstall command no longer performs installation of VxVM packages and licenses, but does perform necessary postinstallation tasks.


  4. Run the scvxinstall utility in noninteractive mode.

    • For VxVM 4.0 and earlier, use the following command:


      # scvxinstall -i -L {license | none}
      -i

      Installs VxVM but does not encapsulate the root disk

      -L {license | none}

      Installs the specified license. The none argument specifies that no additional license key is being added.

    • For VxVM 4.1, use the following command:


      # scvxinstall -i
      
      -i

      For VxVM 4.1, verifies that VxVM is installed but does not encapsulate the root disk

    The scvxinstall utility also selects and configures a cluster-wide vxio driver major number. See the scvxinstall(1M) man page for more information.
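
    To confirm the cluster-wide vxio major number that scvxinstall configured, you can check the /etc/name_to_major file. The major number shown in this illustrative output is hypothetical:


    # grep vxio /etc/name_to_major
    vxio 270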

  5. If you intend to enable the VxVM cluster feature, supply the cluster feature license key, if you did not already do so.

    See your VxVM documentation for information about how to add a license.

  6. (Optional) Install the VxVM GUI.

    See your VxVM documentation for information about installing the VxVM GUI.

  7. Eject the CD-ROM.

  8. Install any VxVM patches.

    See Patches and Required Firmware Levels in Sun Cluster 3.1 8/05 Release Notes for Solaris OS for the location of patches and installation instructions.

  9. (Optional) For VxVM 4.0 and earlier, if you prefer not to have VxVM man pages reside on the cluster node, remove the man-page package.


    # pkgrm VRTSvmman
    
  10. Repeat Step 1 through Step 9 to install VxVM on any additional nodes.


    Note –

    If you intend to enable the VxVM cluster feature, you must install VxVM on all nodes of the cluster.


  11. If you do not install one or more nodes with VxVM, modify the /etc/name_to_major file on each non-VxVM node.

    1. On a node that is installed with VxVM, determine the vxio major number setting.


      # grep vxio /etc/name_to_major
      
    2. Become superuser on a node that you do not intend to install with VxVM.

    3. Edit the /etc/name_to_major file and add an entry to set the vxio major number to NNN, the number derived in Step a.


      # vi /etc/name_to_major
      vxio NNN
      
    4. Initialize the vxio entry.


      # drvconfig -b -i vxio -m NNN
      
    5. Repeat Step a through Step d on all other nodes that you do not intend to install with VxVM.

      When you finish, each node of the cluster should have the same vxio entry in its /etc/name_to_major file.
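
      For example, if the node installed with VxVM reports vxio 270 (a hypothetical major number), each node that is not installed with VxVM would end up with the same entry and the matching initialization:


      # grep vxio /etc/name_to_major
      vxio 270
      # drvconfig -b -i vxio -m 270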

  12. To create a root disk group, go to SPARC: How to Encapsulate the Root Disk or SPARC: How to Create a Root Disk Group on a Nonroot Disk.

    Otherwise, proceed to Step 13.


    Note –

    VxVM 3.5 requires that you create a root disk group. For VxVM 4.0 and later, a root disk group is optional.


  13. Reboot each node on which you installed VxVM.


    # shutdown -g0 -y -i6
    
Next Steps

To create a root disk group, go to SPARC: How to Encapsulate the Root Disk or SPARC: How to Create a Root Disk Group on a Nonroot Disk.

Otherwise, create disk groups. Go to SPARC: Creating Disk Groups in a Cluster.

Procedure: SPARC: How to Encapsulate the Root Disk

Perform this procedure to create a root disk group by encapsulating the root disk. Root disk groups are required for VxVM 3.5. For VxVM 4.0 and later, root disk groups are optional. See your VxVM documentation for more information.


Note –

If you want to create the root disk group on nonroot disks, instead perform procedures in SPARC: How to Create a Root Disk Group on a Nonroot Disk.


Before You Begin

Ensure that you have installed VxVM as described in SPARC: How to Install VERITAS Volume Manager Software.

Steps
  1. Become superuser on a node that you installed with VxVM.

  2. Encapsulate the root disk.


    # scvxinstall -e
    
    -e

    Encapsulates the root disk

    See the scvxinstall(1M) man page for more information.
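
    Root-disk encapsulation is completed across node reboots. After the node comes back up, you can optionally confirm that the root file system now resides on a VxVM volume. The following check and its output are illustrative only:


    # df -k /
    Filesystem            kbytes    used   avail capacity  Mounted on
    /dev/vx/dsk/rootvol      ...     ...     ...     ...    /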

  3. Repeat for any other node on which you installed VxVM.

Next Steps

To mirror the encapsulated root disk, go to SPARC: How to Mirror the Encapsulated Root Disk.

Otherwise, go to SPARC: Creating Disk Groups in a Cluster.

Procedure: SPARC: How to Create a Root Disk Group on a Nonroot Disk

Use this procedure to create a root disk group by encapsulating or initializing local disks other than the root disk. As of VxVM 4.0, the creation of a root disk group is optional.


Note –

If you want to create a root disk group on the root disk, instead perform procedures in SPARC: How to Encapsulate the Root Disk.


Before You Begin

If the disks are to be encapsulated, ensure that each disk has at least two slices with 0 cylinders. If necessary, use the format(1M) command to assign 0 cylinders to each VxVM slice.
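
One way to review a disk's current slice layout before you encapsulate it is the prtvtoc(1M) command, which prints the disk's partition map. The device name in this sketch is illustrative:


# prtvtoc /dev/rdsk/c1t1d0s2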

Steps
  1. Become superuser on the node.

  2. Start the vxinstall utility.


    # vxinstall
    

    When prompted, make the following choices or entries.

    • If you intend to enable the VxVM cluster feature, supply the cluster feature license key.

    • Choose Custom Installation.

    • Do not encapsulate the boot disk.

    • Choose any disks to add to the root disk group.

    • Do not accept automatic reboot.

  3. If the root disk group that you created contains one or more disks that connect to more than one node, enable the localonly property.

    Use the following command to enable the localonly property of the raw-disk device group for each shared disk in the root disk group.


    # scconf -c -D name=dsk/dN,localonly=true
    

    When the localonly property is enabled, the raw-disk device group is used exclusively by the node in its node list. This usage prevents unintentional fencing of the node from the disk that is used by the root disk group if that disk is connected to multiple nodes.

    For more information about the localonly property, see the scconf_dg_rawdisk(1M) man page.

  4. Move any resource groups or device groups from the node.


    # scswitch -S -h from-node
    
    -S

    Moves all resource groups and device groups

    -h from-node

    Specifies the name of the node from which to move resource or device groups
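
    For example, to evacuate all resource groups and device groups from a node named phys-schost-1 (an illustrative node name):


    # scswitch -S -h phys-schost-1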

  5. Reboot the node.


    # shutdown -g0 -y -i6
    
  6. Use the vxdiskadm command to add multiple disks to the root disk group.

    The root disk group becomes tolerant of a disk failure when it contains multiple disks. See VxVM documentation for procedures.
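
    If you prefer the command line over the vxdiskadm menus, a rough sketch of initializing a disk and adding it to the root disk group might look like the following. The disk group name, disk media name, and device name are all illustrative; consult your VxVM documentation for the exact syntax that your VxVM version supports:


    # /etc/vx/bin/vxdisksetup -i c1t2d0
    # vxdg -g rootdg adddisk rootdg02=c1t2d0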

Next Steps

Create disk groups. Go to SPARC: Creating Disk Groups in a Cluster.

Procedure: SPARC: How to Mirror the Encapsulated Root Disk

After you install VxVM and encapsulate the root disk, perform this procedure on each node on which you mirror the encapsulated root disk.

Before You Begin

Ensure that you have encapsulated the root disk as described in SPARC: How to Encapsulate the Root Disk.

Steps
  1. Mirror the encapsulated root disk.

    Follow the procedures in your VxVM documentation. For maximum availability and simplified administration, use a local disk for the mirror. See Guidelines for Mirroring the Root Disk for additional guidelines.


    Caution –

    Do not use a quorum device to mirror a root disk. Using a quorum device to mirror a root disk might prevent the node from booting from the root-disk mirror under certain circumstances.


  2. Display the DID mappings.


    # scdidadm -L
    
  3. From the DID mappings, locate the disk that is used to mirror the root disk.

  4. Extract the raw-disk device-group name from the device-ID name of the root-disk mirror.

    The name of the raw-disk device group follows the convention dsk/dN, where N is a number. In the following sample line of scdidadm output, you derive the raw-disk device-group name dsk/dN from the device-ID name /dev/did/rdsk/dN at the end of the line.


    N         node:/dev/rdsk/cNtXdY     /dev/did/rdsk/dN
    
  5. View the node list of the raw-disk device group.

    Output looks similar to the following.


    # scconf -pvv | grep dsk/dN
    Device group name:						dsk/dN
    …
     (dsk/dN) Device group node list:		phys-schost-1, phys-schost-3
    …
  6. If the node list contains more than one node name, remove from the node list all nodes except the node whose root disk you mirrored.

    Only the node whose root disk you mirrored should remain in the node list for the raw-disk device group.


    # scconf -r -D name=dsk/dN,nodelist=node
    
    -D name=dsk/dN

    Specifies the cluster-unique name of the raw-disk device group

    nodelist=node

    Specifies the name of the node or nodes to remove from the node list

  7. Enable the localonly property of the raw-disk device group.

    When the localonly property is enabled, the raw-disk device group is used exclusively by the node in its node list. This usage prevents unintentional fencing of the node from its boot device if the boot device is connected to multiple nodes.


    # scconf -c -D name=dsk/dN,localonly=true
    

    For more information about the localonly property, see the scconf_dg_rawdisk(1M) man page.

  8. Repeat this procedure for each node in the cluster whose encapsulated root disk you want to mirror.


Example 4–1 SPARC: Mirroring the Encapsulated Root Disk

The following example shows a mirror created of the root disk for the node phys-schost-1. The mirror is created on the disk c1t1d0, whose raw-disk device-group name is dsk/d2. Disk c1t1d0 is a multihost disk, so the node phys-schost-3 is removed from the disk's node list and the localonly property is enabled.


(Display the DID mappings)
# scdidadm -L 
…
2        phys-schost-1:/dev/rdsk/c1t1d0 /dev/did/rdsk/d2   
2        phys-schost-3:/dev/rdsk/c1t1d0 /dev/did/rdsk/d2   
…
 
(Display the node list of the mirror disk's raw-disk device group)
# scconf -pvv | grep dsk/d2
Device group name:						dsk/d2
…
  (dsk/d2) Device group node list:		phys-schost-1, phys-schost-3
…
 
(Remove phys-schost-3 from the node list)
# scconf -r -D name=dsk/d2,nodelist=phys-schost-3
  
(Enable the localonly property)
# scconf -c -D name=dsk/d2,localonly=true

Next Steps

Create disk groups. Go to SPARC: Creating Disk Groups in a Cluster.

SPARC: Creating Disk Groups in a Cluster

This section describes how to create VxVM disk groups in a cluster.

The following table lists the tasks to perform to create VxVM disk groups for Sun Cluster configurations.

Table 4–2 SPARC: Task Map: Creating VxVM Disk Groups

1. Create disk groups and volumes.
   Instructions: SPARC: How to Create and Register a Disk Group

2. If necessary, resolve any minor-number conflicts between disk device groups by assigning a new minor number.
   Instructions: SPARC: How to Assign a New Minor Number to a Disk Device Group

3. Verify the disk groups and volumes.
   Instructions: SPARC: How to Verify the Disk Group Configuration

Procedure: SPARC: How to Create and Register a Disk Group

Use this procedure to create your VxVM disk groups and volumes.


Note –

After a disk group is registered with the cluster as a disk device group, you should never import or deport a VxVM disk group by using VxVM commands. The Sun Cluster software can handle all cases where disk groups need to be imported or deported. See Administering Disk Device Groups in Sun Cluster System Administration Guide for Solaris OS for procedures about how to manage Sun Cluster disk device groups.


Perform this procedure from a node that is physically connected to the disks that make up the disk group that you add.

Before You Begin

Perform the following tasks:

Steps
  1. Become superuser on the node that will own the disk group.

  2. Create a VxVM disk group and volume.

    If you are installing Oracle Real Application Clusters, create shared VxVM disk groups by using the cluster feature of VxVM as described in the VERITAS Volume Manager Administrator's Reference Guide. Otherwise, create VxVM disk groups by using the standard procedures that are documented in the VxVM documentation.


    Note –

    You can use Dirty Region Logging (DRL) to decrease volume recovery time if a node failure occurs. However, DRL might decrease I/O throughput.
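
    As a minimal sketch of the standard (nonshared) case, initializing a disk, creating a disk group from it, and making a volume might look like the following. The disk group name, disk media name, volume name, device, and size are all illustrative; see your VxVM documentation for the authoritative procedure and options:


    # /etc/vx/bin/vxdisksetup -i c1t4d0
    # vxdg init dg1 dg1-01=c1t4d0
    # vxassist -g dg1 make vol01 2g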


  3. If the VxVM cluster feature is not enabled, register the disk group as a Sun Cluster disk device group.

    If the VxVM cluster feature is enabled, do not register a shared disk group as a Sun Cluster disk device group. Instead, go to SPARC: How to Verify the Disk Group Configuration.

    1. Start the scsetup(1M) utility.


      # scsetup
      
    2. Choose the menu item, Device groups and volumes.

    3. Choose the menu item, Register a VxVM disk group.

    4. Follow the instructions to specify the VxVM disk group that you want to register as a Sun Cluster disk device group.

    5. When finished, quit the scsetup utility.

    6. Verify that the disk device group is registered.

      Look for the disk device information for the new disk that is displayed by the following command.


      # scstat -D
      
Next Steps

Go to SPARC: How to Verify the Disk Group Configuration.

Troubleshooting

Failure to register the device group – If you encounter the error message scconf: Failed to add device group - in use when you attempt to register the disk device group, reminor the disk device group. Use the procedure SPARC: How to Assign a New Minor Number to a Disk Device Group. This procedure enables you to assign a new minor number that does not conflict with a minor number that is used by existing disk device groups.

Stack overflow – If a stack overflows when the disk device group is brought online, the default value of the thread stack size might be insufficient. On each node, add the entry set cl_comm:rm_thread_stacksize=0xsize to the /etc/system file, where size is a number greater than 8000, which is the default setting.
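
For example, an /etc/system entry that doubles the default thread stack size might look like the following. The value shown is illustrative; choose a size that suits your configuration:


set cl_comm:rm_thread_stacksize=0x10000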

Configuration changes – If you change any configuration information for a VxVM disk group or volume, you must register the configuration changes by using the scsetup utility. Configuration changes you must register include adding or removing volumes and changing the group, owner, or permissions of existing volumes. See Administering Disk Device Groups in Sun Cluster System Administration Guide for Solaris OS for procedures to register configuration changes to a disk device group.

Procedure: SPARC: How to Assign a New Minor Number to a Disk Device Group

If disk device group registration fails because of a minor-number conflict with another disk group, you must assign the new disk group a new, unused minor number. Perform this procedure to reminor a disk group.

Steps
  1. Become superuser on a node of the cluster.

  2. Determine the minor numbers in use.


    # ls -l /global/.devices/node@1/dev/vx/dsk/*
    
  3. Choose any other multiple of 1000 that is not in use to become the base minor number for the new disk group.

  4. Assign the new base minor number to the disk group.


    # vxdg reminor diskgroup base-minor-number
    

Example 4–2 SPARC: Assigning a New Minor Number to a Disk Device Group

This example uses the minor numbers 16000-16002 and 4000-4001. The vxdg reminor command reminors the new disk device group to use the base minor number 5000.


# ls -l /global/.devices/node@1/dev/vx/dsk/*
/global/.devices/node@1/dev/vx/dsk/dg1
brw-------   1 root     root      56,16000 Oct  7 11:32 dg1v1
brw-------   1 root     root      56,16001 Oct  7 11:32 dg1v2
brw-------   1 root     root      56,16002 Oct  7 11:32 dg1v3
 
/global/.devices/node@1/dev/vx/dsk/dg2
brw-------   1 root     root      56,4000 Oct  7 11:32 dg2v1
brw-------   1 root     root      56,4001 Oct  7 11:32 dg2v2
# vxdg reminor dg3 5000

Next Steps

Register the disk group as a Sun Cluster disk device group. Go to SPARC: How to Create and Register a Disk Group.

Procedure: SPARC: How to Verify the Disk Group Configuration

Perform this procedure on each node of the cluster.

Steps
  1. Verify that only the local disks are included in the root disk group, and disk groups are imported on the current primary node only.


    # vxdisk list
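
    Output similar to the following (the devices, disk names, and groups are illustrative) shows which disk group each disk belongs to; confirm that only local disks appear in the root disk group and that other disk groups are imported only on their current primary node:


    DEVICE       TYPE      DISK         GROUP        STATUS
    c0t0d0s2     sliced    rootdisk_1   rootdg       online
    c1t1d0s2     sliced    dg1-01       dg1          online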
    
  2. Verify that all volumes have been started.


    # vxprint
    
  3. Verify that all disk groups have been registered as Sun Cluster disk device groups and are online.


    # scstat -D
    
Next Steps

Go to Configuring the Cluster.

SPARC: Unencapsulating the Root Disk

This section describes how to unencapsulate the root disk in a Sun Cluster configuration.

Procedure: SPARC: How to Unencapsulate the Root Disk

Perform this procedure to unencapsulate the root disk.

Before You Begin

Perform the following tasks:

Steps
  1. Become superuser on the node that you intend to unencapsulate.

  2. Move all resource groups and device groups from the node.


    # scswitch -S -h from-node
    
    -S

    Moves all resource groups and device groups

    -h from-node

    Specifies the name of the node from which to move resource or device groups

  3. Determine the node-ID number of the node.


    # clinfo -n
    
  4. Unmount the global-devices file system for this node, where N is the node ID number that is returned in Step 3.


    # umount /global/.devices/node@N
    
  5. View the /etc/vfstab file and determine which VxVM volume corresponds to the global-devices file system.


    # vi /etc/vfstab
    #device        device        mount    FS     fsck    mount    mount
    #to mount      to fsck       point    type   pass    at boot  options
    #
    #NOTE: volume rootdiskxNvol (/global/.devices/node@N) encapsulated 
    #partition cNtXdYsZ
    
  6. Remove from the root disk group the VxVM volume that corresponds to the global-devices file system.


    # vxedit -g rootdiskgroup -rf rm rootdiskxNvol
    

    Caution –

    Do not store data other than device entries for global devices in the global-devices file system. All data in the global-devices file system is destroyed when you remove the VxVM volume. Only data that is related to global devices entries is restored after the root disk is unencapsulated.


  7. Unencapsulate the root disk.


    Note –

    Do not accept the shutdown request from the command.



    # /etc/vx/bin/vxunroot
    

    See your VxVM documentation for details.

  8. Use the format(1M) command to add a 512-Mbyte partition to the root disk to use for the global-devices file system.


    Tip –

    Use the same slice that was allocated to the global-devices file system before the root disk was encapsulated, as specified in the /etc/vfstab file.


  9. Set up a file system on the partition that you created in Step 8.


    # newfs /dev/rdsk/cNtXdYsZ
    
  10. Determine the DID name of the root disk.


    # scdidadm -l cNtXdY
    1        phys-schost-1:/dev/rdsk/cNtXdY   /dev/did/rdsk/dN 
    
  11. In the /etc/vfstab file, replace the path names in the global-devices file system entry with the DID path that you identified in Step 10.

    The original entry would look similar to the following.


    # vi /etc/vfstab
    /dev/vx/dsk/rootdiskxNvol /dev/vx/rdsk/rootdiskxNvol /global/.devices/node@N ufs 2 no global

    The revised entry that uses the DID path would look similar to the following.


    /dev/did/dsk/dNsX /dev/did/rdsk/dNsX /global/.devices/node@N ufs 2 no global
  12. Mount the global-devices file system.


    # mount /global/.devices/node@N
    
  13. From one node of the cluster, repopulate the global-devices file system with device nodes for any raw-disk devices and Solstice DiskSuite or Solaris Volume Manager devices.


    # scgdevs
    

    VxVM devices are recreated during the next reboot.

  14. Reboot the node.


    # reboot
    
  15. Repeat this procedure on each node of the cluster to unencapsulate the root disk on those nodes.