Sun Cluster Software Installation Guide for Solaris OS

SPARC: Installing and Configuring VxVM Software

This section provides information and procedures to install and configure VxVM software on a Sun Cluster configuration.

SPARC: Task Map: Installing and Configuring VxVM Software

The following table lists the tasks to perform to install and configure VxVM software for Sun Cluster configurations.

Table 4–1 SPARC: Task Map: Installing and Configuring VxVM Software

Task: 1. Plan the layout of your VxVM configuration.
Instructions: Planning Volume Management

Task: 2. Determine how you will create the root disk group on each node.
Instructions: SPARC: Setting Up a Root Disk Group Overview

Task: 3. Install VxVM software and create the root disk group:

  • Method 1 – Install VxVM software and encapsulate the root disk by using the scvxinstall command, and optionally mirror the encapsulated root disk.

    Instructions:
    1. SPARC: How to Install VERITAS Volume Manager Software and Encapsulate the Root Disk
    2. SPARC: How to Mirror the Encapsulated Root Disk

  • Method 2 – Install VxVM software and create the root disk group on local, nonroot disks.

    Instructions:
    1. SPARC: How to Install VERITAS Volume Manager Software Only
    2. SPARC: How to Create a Root Disk Group on a Nonroot Disk

Task: 4. Create disk groups and volumes.
Instructions: SPARC: How to Create and Register a Disk Group

Task: 5. If necessary, resolve any minor-number conflicts between disk device groups by assigning a new minor number.
Instructions: SPARC: How to Assign a New Minor Number to a Disk Device Group

Task: 6. Verify the disk groups and volumes.
Instructions: SPARC: How to Verify the Disk Group Configuration

Task: 7. Configure the cluster.
Instructions: Configuring the Cluster

SPARC: Setting Up a Root Disk Group Overview

Each cluster node requires the creation of a root disk group after VxVM is installed. VxVM uses this disk group to store configuration information, and the disk group is subject to restrictions on how it can be used in a Sun Cluster configuration.

Sun Cluster software supports the following methods to configure the root disk group.

  • Encapsulate the root disk of the node (Method 1 in the task map).

  • Create the root disk group on local, nonroot disks (Method 2 in the task map).

See your VxVM installation documentation for more information.

SPARC: Where to Go From Here

Install VxVM by using one of the following installation methods, depending on how you intend to create the root disk group.

  • To encapsulate the root disk, go to SPARC: How to Install VERITAS Volume Manager Software and Encapsulate the Root Disk.

  • To create the root disk group on local, nonroot disks, go to SPARC: How to Install VERITAS Volume Manager Software Only.

SPARC: How to Install VERITAS Volume Manager Software and Encapsulate the Root Disk

This procedure uses the scvxinstall(1M) command to install VxVM software and encapsulate the root disk in one operation.


Note –

If you intend to create the root disk group on local, nonroot disks, go instead to SPARC: How to Install VERITAS Volume Manager Software Only.


Perform this procedure on each node that you intend to install with VxVM. You can install VERITAS Volume Manager (VxVM) on all nodes of the cluster, or on only those nodes that are physically connected to the storage devices that VxVM will manage.

If you later need to unencapsulate the root disk, follow the procedures in SPARC: How to Unencapsulate the Root Disk.

  1. Ensure that the cluster meets the following prerequisites.

    • All nodes in the cluster are running in cluster mode.

    • The root disk of the node you install has two free (unassigned) partitions.

  2. Have available the following information.

  3. Become superuser on a node that you intend to install with VxVM.

  4. Insert the VxVM CD-ROM into the CD-ROM drive on the node.

  5. Start scvxinstall in interactive mode.

    Press Ctrl-C at any time to abort the scvxinstall command.


    # scvxinstall
    

    See the scvxinstall(1M) man page for more information.

  6. When prompted whether to encapsulate root, type yes.


    Do you want Volume Manager to encapsulate root [no]? y
    

  7. When prompted, provide the location of the VxVM CD-ROM.

    • If the appropriate VxVM CD-ROM is found, the location is displayed as part of the prompt within brackets. Press Enter to accept this default location.


      Where is the volume manager cdrom [default]?

    • If the VxVM CD-ROM is not found, the prompt is displayed without a default location. Type the location of the CD-ROM or CD-ROM image.


      Where is the volume manager cdrom?

  8. When prompted, type your VxVM license key.


    Please enter license key: license
    

    The scvxinstall command automatically performs the following tasks:

    • Installs the required VxVM software, licensing, and man-page packages, but does not install the GUI packages

    • Selects a cluster-wide vxio driver major number

    • Creates a root disk group by encapsulating the root disk

    • Updates the /global/.devices entry in the /etc/vfstab file

    See the scvxinstall(1M) man page for further details.


    Note –

    Two automatic reboots occur during installation. After all installation tasks are completed, scvxinstall automatically reboots the node the second time unless you press Ctrl-C when prompted. If you press Ctrl-C to abort the second reboot, you must reboot the node later to complete VxVM installation.


  9. If you intend to enable the VxVM cluster feature, supply the cluster feature license key.

    See your VxVM documentation for information about how to add a license.
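
    The exact licensing command depends on your VxVM release, so treat the following only as a sketch to verify against your VxVM documentation. VxVM 3.x releases add license keys through the vxlicense utility, which prompts for the key; later VERITAS releases replace it with the vxlicinst utility.

    # vxlicense -c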

  10. (Optional) Install the VxVM GUI.

    See your VxVM documentation for information about installing the VxVM GUI.

  11. Eject the CD-ROM.

  12. Install any VxVM patches.

    See “Patches and Required Firmware Levels” in Sun Cluster 3.1 Release Notes for the location of patches and installation instructions.
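
    As a sketch, assuming a patch has been downloaded and unpacked under /var/tmp (both the location and the patch ID 123456-01 are hypothetical), you apply VxVM patches with the standard Solaris patchadd(1M) command:

    # patchadd /var/tmp/123456-01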

  13. (Optional) If you prefer not to have VxVM man pages reside on the cluster node, remove the man-page package.


    # pkgrm VRTSvmman
    

  14. Do you intend to install VxVM on another node?

    • If yes, repeat Step 3 through Step 13 on that node.

    • If no, proceed to Step 15.

  15. Do you intend to leave one or more nodes without VxVM installed?

    • If yes, proceed to Step 16.

    • If no, skip to Step 17.


    Note –

    If you intend to enable the VxVM cluster feature, you must install VxVM on all nodes of the cluster.


  16. Modify the /etc/name_to_major file on each non-VxVM node.

    1. On a node installed with VxVM, determine the vxio major number setting.


      # grep vxio /etc/name_to_major
      

    2. Become superuser on a node that you do not intend to install with VxVM.

    3. Edit the /etc/name_to_major file and add an entry to set the vxio major number to NNN, the number derived in Step a.


      # vi /etc/name_to_major
      vxio NNN
      

    4. Initialize the vxio entry.


      # drvconfig -b -i vxio -m NNN
      

    5. Repeat Step b through Step d on all other nodes that you do not intend to install with VxVM.

      When you finish, each node of the cluster should have the same vxio entry in its /etc/name_to_major file.
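
    As a worked sketch of this step, assume that the VxVM node reports a vxio major number of 210 (a hypothetical value). After you complete Step b through Step d, each non-VxVM node has the same entry and an initialized driver:

    (On the node installed with VxVM)
    # grep vxio /etc/name_to_major
    vxio 210

    (On each node not installed with VxVM, after you edit /etc/name_to_major)
    # grep vxio /etc/name_to_major
    vxio 210
    # drvconfig -b -i vxio -m 210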

  17. Do you intend to mirror the encapsulated root disk?

    • If yes, go to SPARC: How to Mirror the Encapsulated Root Disk.

    • If no, go to SPARC: How to Create and Register a Disk Group.

SPARC: How to Mirror the Encapsulated Root Disk

After you install VxVM and encapsulate the root disk, perform this procedure on each node on which you mirror the encapsulated root disk.

  1. Mirror the encapsulated root disk.

    Follow the procedures in your VxVM documentation. For maximum availability and simplified administration, use a local disk for the mirror. See Guidelines for Mirroring the Root Disk for additional guidelines.


    Caution –

    Do not use a quorum device to mirror a root disk. Using a quorum device to mirror a root disk might prevent the node from booting from the root-disk mirror under certain circumstances.


  2. Display the DID mappings.


    # scdidadm -L
    

  3. From the DID mappings, locate the disk that is used to mirror the root disk.
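
    For example, assuming the mirror disk is c1t1d0 (a hypothetical device name), you can narrow the DID listing to that disk:

    # scdidadm -L | grep c1t1d0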

  4. Extract the raw-disk device-group name from the device-ID name of the root-disk mirror.

    The name of the raw-disk device group follows the convention dsk/dN, where N is a number. In the following line of scdidadm output, the raw-disk device-group name dsk/dN is derived from the device-ID name /dev/did/rdsk/dN at the end of the line.


    N         node:/dev/rdsk/cNtXdY     /dev/did/rdsk/dN
    

  5. View the node list of the raw-disk device group.

    Output looks similar to the following.


    # scconf -pvv | grep dsk/dN
    Device group name:						dsk/dN
    …
     (dsk/dN) Device group node list:		phys-schost-1, phys-schost-3
    …

  6. Does the node list contain more than one node name?

    • If yes, proceed to Step 7.

    • If no, skip to Step 8.

  7. Remove from the node list for the raw-disk device group all nodes except the node whose root disk you mirrored.

    Only the node whose root disk you mirrored should remain in the node list.


    # scconf -r -D name=dsk/dN,nodelist=node
    
    -D name=dsk/dN

    Specifies the cluster-unique name of the raw-disk device group

    nodelist=node

    Specifies the name of the node or nodes to remove from the node list

  8. Enable the localonly property of the raw-disk device group.

    When the localonly property is enabled, the raw-disk device group is used exclusively by the node in its node list. This usage prevents unintentional fencing of the node from its boot device if the boot device is connected to multiple nodes.


    # scconf -c -D name=dsk/dN,localonly=true
    

    For more information about the localonly property, see the scconf_dg_rawdisk(1M) man page.

  9. Repeat this procedure for each node in the cluster whose encapsulated root disk you intend to mirror.

  10. Create disk groups.

    Go to SPARC: How to Create and Register a Disk Group.

SPARC: Example—Mirroring the Encapsulated Root Disk

The following example shows the creation of a mirror of the root disk for the node phys-schost-1. The mirror is created on the disk c1t1d0, whose raw-disk device-group name is dsk/d2. Disk c1t1d0 is a multiported disk, so the node phys-schost-3 is removed from the disk's node list and the localonly property is enabled.


(Display the DID mappings)
# scdidadm -L 
…
2        phys-schost-1:/dev/rdsk/c1t1d0 /dev/did/rdsk/d2   
2        phys-schost-3:/dev/rdsk/c1t1d0 /dev/did/rdsk/d2   
…
 
(Display the node list of the mirror disk's raw-disk device group)
# scconf -pvv | grep dsk/d2
Device group name:						dsk/d2
…
  (dsk/d2) Device group node list:		phys-schost-1, phys-schost-3
…
 
(Remove phys-schost-3 from the node list)
# scconf -r -D name=dsk/d2,nodelist=phys-schost-3
  
(Enable the localonly property)
# scconf -c -D name=dsk/d2,localonly=true

SPARC: How to Install VERITAS Volume Manager Software Only

This procedure uses the scvxinstall command to install VERITAS Volume Manager (VxVM) software only.


Note –

To create the root disk group by encapsulating the root disk, do not use this procedure. Instead, go to SPARC: How to Install VERITAS Volume Manager Software and Encapsulate the Root Disk to install VxVM software and encapsulate the root disk in one operation.


Perform this procedure on each node that you want to install with VxVM. You can install VxVM on all nodes of the cluster, or on only those nodes that are physically connected to the storage devices that VxVM will manage.

  1. Ensure that all nodes in the cluster are running in cluster mode.

  2. Become superuser on a cluster node that you intend to install with VxVM.

  3. Insert the VxVM CD-ROM into the CD-ROM drive on the node.

  4. Start scvxinstall in noninteractive installation mode.


    # scvxinstall -i
    

    The scvxinstall command automatically performs the following tasks.

    • Installs the required VxVM software, licensing, and man-page packages, but does not install the GUI packages

    • Selects a cluster-wide vxio driver major number


    Note –

    You add VxVM licenses during the next procedure, SPARC: How to Create a Root Disk Group on a Nonroot Disk.


    See the scvxinstall(1M) man page for information.

  5. (Optional) Install the VxVM GUI.

    See your VxVM documentation for information about installing the VxVM GUI.

  6. Eject the CD-ROM.

  7. Install any VxVM patches.

    See “Patches and Required Firmware Levels” in Sun Cluster 3.1 Release Notes for the location of patches and installation instructions.

  8. (Optional) If you prefer not to have VxVM man pages reside on the cluster node, remove the man-page package.


    # pkgrm VRTSvmman
    

  9. Do you intend to install VxVM on another node?

    • If yes, repeat Step 2 through Step 8 on that node.

    • If no, proceed to Step 10.

  10. Do you intend to leave one or more nodes without VxVM installed?

    • If yes, proceed to Step 11.

    • If no, skip to Step 12.


    Note –

    If you intend to enable the VxVM cluster feature, you must install VxVM on all nodes of the cluster.


  11. Modify the /etc/name_to_major file on each non-VxVM node.

    1. On a node that is installed with VxVM, determine the vxio major number setting.


      # grep vxio /etc/name_to_major
      

    2. Become superuser on a node that you do not intend to install with VxVM.

    3. Edit the /etc/name_to_major file and add an entry to set the vxio major number to NNN, the number derived in Step a.


      # vi /etc/name_to_major
      vxio NNN
      

    4. Initialize the vxio entry.


      # drvconfig -b -i vxio -m NNN
      

    5. Repeat Step b through Step d on all other nodes that you do not intend to install with VxVM.

      When you finish, each node of the cluster should have the same vxio entry in its /etc/name_to_major file.

  12. Create a root disk group.

    Go to SPARC: How to Create a Root Disk Group on a Nonroot Disk.

SPARC: How to Create a Root Disk Group on a Nonroot Disk

Use this procedure to create a root disk group by encapsulating or initializing local disks other than the root disk.

  1. Have available the VERITAS Volume Manager (VxVM) license keys.

  2. Become superuser on the node.

  3. (Optional) If the disks are to be encapsulated, ensure that each disk has at least two slices with 0 cylinders.

    If necessary, use the format(1M) command to assign 0 cylinders to each VxVM slice.
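
    As a quick check before you run vxinstall, assuming a candidate disk c1t2d0 (a hypothetical device name), you can display the disk's slice table with the prtvtoc(1M) command. Slices that do not appear in the listing have no cylinders assigned:

    # prtvtoc /dev/rdsk/c1t2d0s2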

  4. Start the vxinstall utility.


    # vxinstall
    

    When prompted, make the following choices or entries.

    • Supply the VxVM license key.

    • If you intend to enable the VxVM cluster feature, supply the cluster feature license key.

    • Choose Custom Installation.

    • Do not encapsulate the boot disk.

    • Choose any disks to add to the root disk group.

    • Do not accept automatic reboot.

  5. Does the root disk group that you created contain one or more disks that connect to more than one node?

    • If yes, enable the localonly property of the raw-disk device group for each of these shared disks in the root disk group.

      When the localonly property is enabled, the raw-disk device group is used exclusively by the node in its node list. This usage prevents unintentional fencing of the node from the device that is used by the root disk group if that device is connected to multiple nodes.


      # scconf -c -D name=dsk/dN,localonly=true
      

      For more information about the localonly property, see the scconf_dg_rawdisk(1M) man page.

    • If no, proceed to Step 6.

  6. Move any resource groups or device groups from the node.


    # scswitch -S -h from-node
    
    -S

    Moves all resource groups and device groups

    -h from-node

    Specifies the name of the node from which to move resource or device groups

  7. Reboot the node.


    # shutdown -g0 -y -i6
    

  8. Use the vxdiskadm command to add multiple disks to the root disk group.

    The root disk group becomes tolerant of a disk failure when it contains multiple disks. See VxVM documentation for procedures.
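
    As a noninteractive alternative to the vxdiskadm menus, the following sketch adds one more local disk to the root disk group. The device name c1t4d0 and the disk media name rootdg02 are hypothetical, and the root disk group is assumed to use the VxVM default name rootdg.

    # /etc/vx/bin/vxdisksetup -i c1t4d0
    # vxdg -g rootdg adddisk rootdg02=c1t4d0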

  9. Create disk groups.

    Go to SPARC: How to Create and Register a Disk Group.

SPARC: How to Create and Register a Disk Group

Use this procedure to create your VxVM disk groups and volumes.


Note –

After a disk group is registered with the cluster as a disk device group, you should never import or deport a VxVM disk group by using VxVM commands. The Sun Cluster software can handle all cases where disk groups need to be imported or deported. See “Administering Disk Device Groups” in Sun Cluster System Administration Guide for Solaris OS for procedures on how to manage Sun Cluster disk device groups.


Perform this procedure from a node that is physically connected to the disks that make up the disk group that you add.

  1. Have available the following information.

  2. Become superuser on the node that will have ownership of the disk group.

  3. Create a VxVM disk group and volume.

    If you are installing Oracle Parallel Server/Real Application Clusters, create shared VxVM disk groups by using the cluster feature of VxVM as described in the VERITAS Volume Manager Administrator's Reference Guide. Otherwise, create VxVM disk groups by using the standard procedures that are documented in the VxVM documentation. A minimal sketch of the nonshared case follows the note below.


    Note –

    You can use Dirty Region Logging (DRL) to decrease volume recovery time if a node failure occurs. However, DRL might decrease I/O throughput.
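
    The following is a minimal sketch of the nonshared case. It assumes two hypothetical shared-storage disks, c2t3d0 and c2t4d0, a disk group named dg1, and a 2-Gbyte mirrored volume named vol01.

    # /etc/vx/bin/vxdisksetup -i c2t3d0
    # /etc/vx/bin/vxdisksetup -i c2t4d0
    # vxdg init dg1 dg101=c2t3d0 dg102=c2t4d0
    # vxassist -g dg1 make vol01 2g layout=mirror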


  4. Is the VxVM cluster feature enabled?

    • If no, proceed to Step 5.

    • If yes, skip to Step 7. If the VxVM cluster feature is enabled, do not register a shared disk group as a Sun Cluster disk device group.

  5. Register the disk group as a Sun Cluster disk device group.

    1. Start the scsetup(1M) utility.


      # scsetup
      

    2. To work with disk device groups, type 4 (Device groups and volumes).

    3. To register a disk device group, type 1 (Register a VxVM disk group).

      Follow the instructions and type the VxVM disk device group to be registered as a Sun Cluster disk device group.

    4. If you encounter the following error message when you attempt to register the disk device group, reminor the disk device group.


      scconf: Failed to add device group - in use

      To reminor the disk device group, use the procedure SPARC: How to Assign a New Minor Number to a Disk Device Group. This procedure enables you to assign a new minor number that does not conflict with a minor number used by existing disk device groups.

    5. When finished, type q (Quit) to leave the scsetup utility.

  6. Verify that the disk device group is registered.

    Look for the disk device information for the new disk group that is displayed by the following command.


    # scstat -D
    


    Tip –

    If you experience a stack overflow when the disk device group is brought online, the default value of the thread stack size might be insufficient. Add the following entry to the /etc/system file on each node, where size is a number greater than 8000, the default setting:


    set cl_comm:rm_thread_stacksize=0xsize
    



    Note –

    If you change any configuration information for a VxVM disk group or volume, you must register the configuration changes by using the scsetup utility. Configuration changes you must register include adding or removing volumes and changing the group, owner, or permissions of existing volumes. See “Administering Disk Device Groups” in Sun Cluster System Administration Guide for Solaris OS for procedures to register configuration changes to a disk device group.


  7. Verify the configuration of your VxVM disk groups and volumes.

    Go to SPARC: How to Verify the Disk Group Configuration.

SPARC: How to Assign a New Minor Number to a Disk Device Group

If disk device group registration fails because of a minor-number conflict with another disk group, you must assign the new disk group a new, unused minor number. Perform this procedure to reminor a disk group.

  1. Become superuser on a node of the cluster.

  2. Determine the minor numbers in use.


    # ls -l /global/.devices/node@1/dev/vx/dsk/*
    

  3. Choose any other multiple of 1000 that is not in use to become the base minor number for the new disk group.

  4. Assign the new base minor number to the disk group.


    # vxdg reminor diskgroup base-minor-number
    

  5. Go to Step 5 of SPARC: How to Create and Register a Disk Group to register the disk group as a Sun Cluster disk device group.

SPARC: Example—How to Assign a New Minor Number to a Disk Device Group

This example first lists the minor numbers that are already in use, 16000-16002 and 4000-4001. The vxdg reminor command then reminors the new disk device group, dg3, to use the base minor number 5000.


# ls -l /global/.devices/node@1/dev/vx/dsk/*
/global/.devices/node@1/dev/vx/dsk/dg1
brw-------   1 root     root      56,16000 Oct  7 11:32 dg1v1
brw-------   1 root     root      56,16001 Oct  7 11:32 dg1v2
brw-------   1 root     root      56,16002 Oct  7 11:32 dg1v3
 
/global/.devices/node@1/dev/vx/dsk/dg2
brw-------   1 root     root      56,4000 Oct  7 11:32 dg2v1
brw-------   1 root     root      56,4001 Oct  7 11:32 dg2v2
# vxdg reminor dg3 5000
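
As a follow-up check, you can list the device nodes of the reminored disk group again and confirm that its minor numbers now start at the new base of 5000:

# ls -l /global/.devices/node@1/dev/vx/dsk/dg3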

SPARC: How to Verify the Disk Group Configuration

Perform this procedure on each node of the cluster.

  1. Verify that only the local disks are included in the root disk group, and that disk groups are imported on the current primary node only.


    # vxdisk list
    

  2. Verify that all volumes have been started.


    # vxprint
    

  3. Verify that all disk groups have been registered as Sun Cluster disk device groups and are online.


    # scstat -D
    

  4. Configure the cluster.

    Go to Configuring the Cluster.

SPARC: How to Unencapsulate the Root Disk

Perform this procedure to unencapsulate the root disk.

  1. Ensure that only Solaris root file systems are present on the root disk.

    The Solaris root file systems are root (/), swap, the global devices namespace, /usr, /var, /opt, and /home. If any other file systems reside on the root disk, back them up and remove them from the root disk.

  2. Become superuser on the node that you intend to unencapsulate.

  3. Move all resource groups and device groups from the node.


    # scswitch -S -h from-node
    
    -S

    Moves all resource groups and device groups

    -h from-node

    Specifies the name of the node from which to move resource or device groups

  4. Determine the node-ID number of the node.


    # clinfo -n
    N
    

  5. Unmount the global-devices file system for this node, where N is the node ID number that is returned in Step 4.


    # umount /global/.devices/node@N
    

  6. View the /etc/vfstab file and determine which VxVM volume corresponds to the global-devices file system.


    # vi /etc/vfstab
    #device        device        mount    FS     fsck    mount    mount
    #to mount      to fsck       point    type   pass    at boot  options
    #
    #NOTE: volume rootdiskxNvol (/global/.devices/node@N) encapsulated 
    #partition cNtXdYsZ
    

  7. Remove the VxVM volume that corresponds to the global-devices file system from the root disk group.


    # vxedit -rf rm rootdiskxNvol
    


    Caution –

    Do not store data other than device entries for global devices in the global-devices file system. All data in the global-devices file system is destroyed when you remove the VxVM volume. Only data that is related to global devices entries is restored after the root disk is unencapsulated.


  8. Unencapsulate the root disk.


    Note –

    Do not accept the shutdown request from the command.



    # /etc/vx/bin/vxunroot
    

    See your VxVM documentation for details.

  9. Use the format(1M) command to add a 512-Mbyte partition to the root disk to use for the global-devices file system.


    Tip –

    Use the same slice that was allocated to the global-devices file system before the root disk was encapsulated, as specified in the /etc/vfstab file.


  10. Set up a file system on the partition that you created in Step 9.


    # newfs /dev/rdsk/cNtXdYsZ
    

  11. Determine the DID name of the root disk.


    # scdidadm -l cNtXdY
    1        phys-schost-1:/dev/rdsk/cNtXdY   /dev/did/rdsk/dN 
    

  12. In the /etc/vfstab file, replace the path names in the global-devices file system entry with the DID path that you identified in Step 11.

    The original entry would look similar to the following.


    # vi /etc/vfstab
    /dev/vx/dsk/rootdiskxNvol /dev/vx/rdsk/rootdiskxNvol /global/.devices/node@N ufs 2 no global

    The revised entry that uses the DID path would look similar to the following.


    /dev/did/dsk/dNsX /dev/did/rdsk/dNsX /global/.devices/node@N ufs 2 no global

  13. Mount the global-devices file system.


    # mount /global/.devices/node@N
    

  14. From one node of the cluster, repopulate the global-devices file system with device nodes for any raw-disk devices and Solstice DiskSuite/Solaris Volume Manager devices.


    # scgdevs
    

    VxVM devices are recreated during the next reboot.

  15. Reboot the node.


    # reboot
    

  16. Repeat this procedure on each node of the cluster to unencapsulate the root disk on those nodes.