Sun Cluster Software Installation Guide for Solaris OS

Chapter 5 Installing and Configuring Veritas Volume Manager

Install and configure your local and multihost disks for Veritas Volume Manager (VxVM) by using the procedures in this chapter, along with the planning information in Planning Volume Management. See your VxVM documentation for additional details.

The following sections are in this chapter:

  • Installing and Configuring VxVM Software

  • Unencapsulating the Root Disk

Installing and Configuring VxVM Software

This section provides information and procedures to install and configure VxVM software on a Sun Cluster configuration.

The following table lists the tasks to perform to install and configure VxVM software for Sun Cluster configurations. Complete the procedures in the order that is indicated.

Table 5–1 Task Map: Installing and Configuring VxVM Software

  1. Plan the layout of your VxVM configuration.
     Instructions: Planning Volume Management

  2. (Optional) Determine how you will create the root disk group on each node.
     Instructions: Setting Up a Root Disk Group Overview

  3. Install VxVM software.
     Instructions: How to Install Veritas Volume Manager Software and your VxVM installation documentation

  4. (Optional) Create a root disk group. You can either encapsulate the root disk or create the root disk group on local, nonroot disks.
     Instructions: SPARC: How to Encapsulate the Root Disk or How to Create a Root Disk Group on a Nonroot Disk

  5. (Optional) Mirror the encapsulated root disk.
     Instructions: How to Mirror the Encapsulated Root Disk

  6. Create disk groups.
     Instructions: Creating Disk Groups in a Cluster

Setting Up a Root Disk Group Overview

The creation of a root disk group is optional. If you do not intend to create a root disk group, proceed to How to Install Veritas Volume Manager Software.

Sun Cluster software supports the following methods to configure the root disk group:

  • Encapsulate the root disk. This method enables the root disk to be mirrored, which provides a boot alternative if the root disk becomes corrupted.

  • Create the root disk group on local, nonroot disks. This method provides an alternative to encapsulating the root disk.

See your VxVM installation documentation for more information.

How to Install Veritas Volume Manager Software

Perform this procedure to install Veritas Volume Manager (VxVM) software on each global-cluster node that you want to install with VxVM. You can install VxVM on all nodes of the cluster, or install VxVM just on the nodes that are physically connected to the storage devices that VxVM will manage.

Before You Begin

Perform the following tasks:

  1. Become superuser on a cluster node that you intend to install with VxVM.

  2. Insert the VxVM CD-ROM in the CD-ROM drive on the node.

  3. Follow procedures in your VxVM installation guide to install and configure VxVM software and licenses.
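
    For reference, a package-based installation might look similar to the following minimal sketch. The CD-ROM path and the package names VRTSvlic and VRTSvxvm are assumptions; your VxVM release might use different package names or provide an installer script, so defer to your VxVM installation guide.


    phys-schost# cd /cdrom/cdrom0/pkgs
    phys-schost# pkgadd -d . VRTSvlic VRTSvxvm
    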

  4. Run the clvxvm utility in noninteractive mode.


    phys-schost# clvxvm initialize
    

    The clvxvm utility performs necessary postinstallation tasks. The clvxvm utility also selects and configures a cluster-wide vxio driver major number. See the clvxvm(1CL) man page for more information.

  5. SPARC: To enable the VxVM cluster feature, supply the cluster feature license key, if you did not already do so.

    See your VxVM documentation for information about how to add a license.

  6. (Optional) Install the VxVM GUI.

    See your VxVM documentation for information about installing the VxVM GUI.

  7. Eject the CD-ROM.

  8. Install any VxVM patches to support Sun Cluster software.

    See Patches and Required Firmware Levels in Sun Cluster Release Notes for the location of patches and installation instructions.
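
    For example, if a required patch has been downloaded and unpacked under /var/tmp, you might apply it with the patchadd utility. The patch ID shown here is hypothetical; use the patch IDs that the release notes specify.


    phys-schost# patchadd /var/tmp/123456-01
    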

  9. Repeat Step 1 through Step 8 to install VxVM on any additional nodes.


    Note –

    SPARC: To enable the VxVM cluster feature, you must install VxVM on all nodes of the cluster.


  10. If you do not install one or more nodes with VxVM, modify the /etc/name_to_major file on each non-VxVM node.

    1. On a node that is installed with VxVM, determine the vxio major number setting.


      phys-schost# grep vxio /etc/name_to_major
      
    2. Become superuser on a node that you do not intend to install with VxVM.

    3. Edit the /etc/name_to_major file and add an entry to set the vxio major number to NNN, the number derived in Step a.


      phys-schost# vi /etc/name_to_major
      vxio NNN
      
    4. Initialize the vxio entry.


      phys-schost# drvconfig -b -i vxio -m NNN
      
    5. Repeat Step a through Step d on all other nodes that you do not intend to install with VxVM.

      When you finish, each node of the cluster should have the same vxio entry in its /etc/name_to_major file.

  11. To create a root disk group, go to SPARC: How to Encapsulate the Root Disk or How to Create a Root Disk Group on a Nonroot Disk.

    Otherwise, proceed to Step 12.


    Note –

    A root disk group is optional.


  12. Reboot each node on which you installed VxVM.


    phys-schost# shutdown -g0 -y -i6
    
Next Steps

To create a root disk group, go to SPARC: How to Encapsulate the Root Disk or How to Create a Root Disk Group on a Nonroot Disk.

Otherwise, create disk groups. Go to Creating Disk Groups in a Cluster.

SPARC: How to Encapsulate the Root Disk

Perform this procedure to create a root disk group by encapsulating the root disk. Root disk groups are optional. See your VxVM documentation for more information.


Note –

If you want to create a root disk group on nonroot disks, instead perform procedures in How to Create a Root Disk Group on a Nonroot Disk.


Before You Begin

Ensure that you have installed VxVM as described in How to Install Veritas Volume Manager Software.

  1. Become superuser on a node that you installed with VxVM.

  2. Encapsulate the root disk.


    phys-schost# clvxvm encapsulate
    

    See the clvxvm(1CL) man page for more information.

  3. Repeat for any other node on which you installed VxVM.

Next Steps

To mirror the encapsulated root disk, go to How to Mirror the Encapsulated Root Disk.

Otherwise, go to Creating Disk Groups in a Cluster.

How to Create a Root Disk Group on a Nonroot Disk

Use this procedure to create a root disk group by encapsulating or initializing local disks other than the root disk. The creation of a root disk group is optional.


Note –

If you want to create a root disk group on the root disk, instead perform procedures in SPARC: How to Encapsulate the Root Disk.


Before You Begin

If the disks are to be encapsulated, ensure that each disk has at least two slices with 0 cylinders. If necessary, use the format(1M) command to assign 0 cylinders to each VxVM slice.
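
For example, to review the current slice layout of a candidate disk before encapsulation, you might print its label with the prtvtoc command. The device name c1t1d0s2 is hypothetical.


phys-schost# prtvtoc /dev/rdsk/c1t1d0s2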

  1. Become superuser.

  2. Start the vxinstall utility.


    phys-schost# vxinstall
    
  3. When prompted by the vxinstall utility, make the following choices or entries.

    • SPARC: To enable the VxVM cluster feature, supply the cluster feature license key.

    • Choose Custom Installation.

    • Do not encapsulate the boot disk.

    • Choose any disks to add to the root disk group.

    • Do not accept automatic reboot.

  4. If the root disk group that you created contains one or more disks that connect to more than one node, ensure that fencing is disabled for such disks.

    Use the following command to disable fencing for each shared disk in the root disk group.


    phys-schost# cldevice set -p default_fencing=nofencing device
    
    -p

    Specifies a device property.

    default_fencing=nofencing

    Disables fencing for the specified device.

    Disabling fencing for the device prevents unintentional fencing of the node from the disk that is used by the root disk group if that disk is connected to multiple nodes.

    For more information about the default_fencing property, see the cldevice(1CL) man page.

  5. Evacuate any resource groups or device groups from the node.


    phys-schost# clnode evacuate from-node
    
    from-node

    Specifies the name of the node from which to move resource or device groups.

  6. Reboot the node.


    phys-schost# shutdown -g0 -y -i6
    
  7. Use the vxdiskadm command to add multiple disks to the root disk group.

    The root disk group becomes tolerant of a disk failure when it contains multiple disks. See VxVM documentation for procedures.
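
    As a noninteractive alternative to the vxdiskadm menus, the following minimal sketch initializes a disk and adds it to the root disk group. The disk group name rootdg, the disk media name rootdg02, and the device c1t2d0 are hypothetical; adjust them to your configuration.


    phys-schost# /etc/vx/bin/vxdisksetup -i c1t2d0
    phys-schost# vxdg -g rootdg adddisk rootdg02=c1t2d0
    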

Next Steps

Create disk groups. Go to Creating Disk Groups in a Cluster.

How to Mirror the Encapsulated Root Disk

After you install VxVM and encapsulate the root disk, perform this procedure on each node whose encapsulated root disk you want to mirror.

Before You Begin

Ensure that you have encapsulated the root disk as described in SPARC: How to Encapsulate the Root Disk.

  1. Become superuser.

  2. List the devices.


    phys-schost# cldevice list -v
    

    Output looks similar to the following:


    DID Device          Full Device Path
    ----------          ----------------
    d1                  phys-schost-1:/dev/rdsk/c0t0d0
    d2                  phys-schost-1:/dev/rdsk/c0t6d0
    d3                  phys-schost-2:/dev/rdsk/c1t1d0
    d3                  phys-schost-1:/dev/rdsk/c1t1d0
  3. Mirror the encapsulated root disk.

    Follow the procedures in your VxVM documentation.

    For maximum availability and simplified administration, use a local disk for the mirror. See Guidelines for Mirroring the Root Disk for additional guidelines.
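
    As a hedged sketch of one commonly documented approach, the vxrootmir utility mirrors the encapsulated root volumes onto another disk. The target disk c1t1d0 is hypothetical, and the exact invocation can vary by VxVM release, so confirm the syntax in your VxVM documentation before you use it. The vxdiskadm menu option for mirroring volumes on a disk is an equivalent interactive path.


    phys-schost# /etc/vx/bin/vxrootmir c1t1d0
    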


    Caution –

    Do not use a quorum device to mirror a root disk. Using a quorum device to mirror a root disk might prevent the node from booting from the root-disk mirror under certain circumstances.


  4. View the node list of the raw-disk device group for the device that you used to mirror the root disk.

    The name of the device group is of the form dsk/dN, where dN is the DID device name.


    phys-schost# cldevicegroup list -v dsk/dN
    
    -v

    Displays verbose output.

    Output looks similar to the following.


    Device group        Type                Node list
    ------------        ----                ---------
    dsk/dN              Local_Disk          phys-schost-1, phys-schost-3
  5. If the node list contains more than one node name, remove from the node list all nodes except the node whose root disk you mirrored.

    Only the node whose root disk you mirrored should remain in the node list for the raw-disk device group.


    phys-schost# cldevicegroup remove-node -n node dsk/dN
    
    -n node

    Specifies the node to remove from the device-group node list.

  6. Disable fencing for all disks in the raw-disk device group that connect to more than one node.

    Disabling fencing for a device prevents unintentional fencing of the node from its boot device if the boot device is connected to multiple nodes.


    phys-schost# cldevice set -p default_fencing=nofencing device
    
    -p

    Sets the value of a device property.

    default_fencing=nofencing

    Disables fencing for the specified device.

    For more information about the default_fencing property, see the cldevice(1CL) man page.

  7. Repeat this procedure for each node in the cluster whose encapsulated root disk you want to mirror.


Example 5–1 Mirroring the Encapsulated Root Disk

The following example shows the creation of a mirror of the root disk for the node phys-schost-1. The mirror is created on disk c0t0d0, whose raw-disk device-group name is dsk/d2. Disk c0t0d0 is a multihost disk, so the node phys-schost-3 is removed from the disk's node list and fencing is disabled for the disk.


phys-schost# cldevice list -v
DID Device          Full Device Path
----------          ----------------
d2                  phys-schost-1:/dev/rdsk/c0t0d0
…
Create the mirror by using VxVM procedures
phys-schost# cldevicegroup list -v dsk/d2
Device group        Type                Node list
------------        ----                ---------
dsk/d2              Local_Disk          phys-schost-1, phys-schost-3
phys-schost# cldevicegroup remove-node -n phys-schost-3 dsk/d2
phys-schost# cldevice set -p default_fencing=nofencing c0t0d0

Next Steps

Create disk groups. Go to Creating Disk Groups in a Cluster.

Creating Disk Groups in a Cluster

This section describes how to create VxVM disk groups in a cluster. The following list describes the types of VxVM disk groups you can configure in a Sun Cluster configuration and their characteristics.

  • VxVM disk group
    Use: Device groups for failover or scalable data services, global devices, or cluster file systems
    Registered with Sun Cluster: Yes
    Storage requirement: Shared storage

  • Local VxVM disk group
    Use: Applications that are not highly available and are confined to a single node
    Registered with Sun Cluster: No
    Storage requirement: Shared or unshared storage

  • VxVM shared disk group
    Use: Oracle Real Application Clusters (also requires the VxVM cluster feature)
    Registered with Sun Cluster: No
    Storage requirement: Shared storage

The following table lists the tasks to perform to create VxVM disk groups in a Sun Cluster configuration. Complete the procedures in the order that is indicated.

Table 5–2 Task Map: Creating VxVM Disk Groups

  1. Create disk groups and volumes.
     Instructions: How to Create a Disk Group

  2. Register as Sun Cluster device groups those disk groups that are not local and that do not use the VxVM cluster feature.
     Instructions: How to Register a Disk Group

  3. If necessary, resolve any minor-number conflicts between device groups by assigning a new minor number.
     Instructions: How to Assign a New Minor Number to a Device Group

  4. Verify the disk groups and volumes.
     Instructions: How to Verify the Disk Group Configuration

How to Create a Disk Group

Use this procedure to create your VxVM disk groups and volumes.

Perform this procedure from a node that is physically connected to the disks that make up the disk group that you add.

Before You Begin

Perform the following tasks:

  1. Become superuser on the node that will own the disk group.

  2. Create the VxVM disk groups and volumes.

    Observe the following special instructions:


    Note –

    You can use Dirty Region Logging (DRL) to decrease volume recovery time if a node failure occurs. However, DRL might decrease I/O throughput.
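
    The following minimal sketch creates one disk group with one volume from the command line. The disk group name oradg, the disk media name oradg01, the device c2t3d0, and the 2-Gbyte volume size are hypothetical; see your VxVM documentation for the options that apply to your configuration.


    phys-schost# /etc/vx/bin/vxdisksetup -i c2t3d0
    phys-schost# vxdg init oradg oradg01=c2t3d0
    phys-schost# vxassist -g oradg make vol01 2g
    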


  3. For local disk groups, set the localonly property and add a single node to the disk group's node list.


    Note –

    A disk group that is configured to be local only is not highly available or globally accessible.


    1. Start the clsetup utility.


      phys-schost# clsetup
      
    2. Choose the menu item, Device groups and volumes.

    3. Choose the menu item, Set localonly on a VxVM disk group.

    4. Follow the instructions to set the localonly property and to specify the single node that will exclusively master the disk group.

      Only one node at any time is permitted to master the disk group. You can later change which node is the configured master.

    5. When finished, quit the clsetup utility.

Next Steps

Determine your next step:

  • If the VxVM cluster feature is not enabled and the disk group is not a local disk group, register the disk group as a Sun Cluster device group. Go to How to Register a Disk Group.

  • If the VxVM cluster feature is enabled or you created a local disk group, go to How to Verify the Disk Group Configuration.

How to Register a Disk Group

If the VxVM cluster feature is not enabled, perform this procedure to register disk groups that are not local as Sun Cluster device groups.


Note –

SPARC: If the VxVM cluster feature is enabled or you created a local disk group, do not perform this procedure. Instead, proceed to How to Verify the Disk Group Configuration.


  1. Become superuser on a node of the cluster.

  2. Register the global disk group as a Sun Cluster device group.

    1. Start the clsetup utility.


      phys-schost# clsetup
      
    2. Choose the menu item, Device groups and volumes.

    3. Choose the menu item, Register a VxVM disk group.

    4. Follow the instructions to specify the VxVM disk group that you want to register as a Sun Cluster device group.

    5. When finished, quit the clsetup utility.

    6. Deport and re-import each local disk group.


      phys-schost# vxdg deport diskgroup
      phys-schost# vxdg import diskgroup
      
    7. Restart each local disk group.


      phys-schost# vxvol -g diskgroup startall
      
    8. Verify the local-only status of each local disk group.

      If the value of the flags property of the disk group is nogdl, the disk group is correctly configured for local-only access.


      phys-schost# vxdg list diskgroup | grep flags
      flags: nogdl
  3. Verify that the device group is registered.

    Confirm that the new device group appears in the output of the following command.


    phys-schost# cldevicegroup status
    
Next Steps

Go to How to Verify the Disk Group Configuration.

Troubleshooting

Stack overflow – If a stack overflows when the device group is brought online, the default value of the thread stack size might be insufficient. On each node, add the entry set cl_haci:rm_thread_stacksize=0xsize to the /etc/system file, where size is a number greater than 8000, which is the default setting.
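
For example, to raise the stack size to a hypothetical value of 0xa000, add a line such as the following to the /etc/system file on each node, then reboot the node so that the new setting takes effect.


set cl_haci:rm_thread_stacksize=0xa000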

Configuration changes – If you change any configuration information for a VxVM device group or its volumes, you must register the configuration changes by using the clsetup utility. Configuration changes that you must register include adding or removing volumes and changing the group, owner, or permissions of existing volumes. See Administering Device Groups in Sun Cluster System Administration Guide for Solaris OS for procedures to register configuration changes that are made to a VxVM device group.
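
Depending on your Sun Cluster release, a command-line alternative to the clsetup utility might be available. For example, the cldevicegroup sync subcommand synchronizes VxVM configuration changes for a registered device group; the device-group name shown is hypothetical, so check the cldevicegroup(1CL) man page for your release.


phys-schost# cldevicegroup sync dg-schost-1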

How to Assign a New Minor Number to a Device Group

If device group registration fails because of a minor-number conflict with another disk group, you must assign the new disk group a new, unused minor number. Perform this procedure to reminor a disk group.

  1. Become superuser on a node of the cluster.

  2. Determine the minor numbers in use.


    phys-schost# ls -l /global/.devices/node@1/dev/vx/dsk/*
    
  3. Choose any other multiple of 1000 that is not in use to become the base minor number for the new disk group.

  4. Assign the new base minor number to the disk group.


    phys-schost# vxdg reminor diskgroup base-minor-number
    

Example 5–2 How to Assign a New Minor Number to a Device Group

This example uses the minor numbers 16000-16002 and 4000-4001. The vxdg reminor command reminors the new device group to use the base minor number 5000.


phys-schost# ls -l /global/.devices/node@1/dev/vx/dsk/*
/global/.devices/node@1/dev/vx/dsk/dg1
brw-------   1 root     root      56,16000 Oct  7 11:32 dg1v1
brw-------   1 root     root      56,16001 Oct  7 11:32 dg1v2
brw-------   1 root     root      56,16002 Oct  7 11:32 dg1v3
 
/global/.devices/node@1/dev/vx/dsk/dg2
brw-------   1 root     root      56,4000 Oct  7 11:32 dg2v1
brw-------   1 root     root      56,4001 Oct  7 11:32 dg2v2
phys-schost# vxdg reminor dg3 5000

Next Steps

Register the disk group as a Sun Cluster device group. Go to How to Register a Disk Group.

How to Verify the Disk Group Configuration

Perform this procedure on each node of the cluster.

  1. Become superuser.

  2. List the disk groups.


    phys-schost# vxdisk list
    
  3. List the device groups.


    phys-schost# cldevicegroup list -v
    
  4. Verify that all disk groups are correctly configured.

    Ensure that the following requirements are met:

    • The root disk group includes only local disks.

    • All disk groups and any local disk groups are imported on the current primary node only.
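
    For example, to see which disk groups are currently imported on a node, you can run the vxdg list command on each node and confirm that each disk group appears only on its intended primary node.


    phys-schost# vxdg list
    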

  5. Verify that all volumes have been started.


    phys-schost# vxprint
    
  6. Verify that all disk groups have been registered as Sun Cluster device groups and are online.


    phys-schost# cldevicegroup status
    

    Output should not display any local disk groups.

  7. (Optional) Capture the disk partitioning information for future reference.


    phys-schost# prtvtoc /dev/rdsk/cNtXdYsZ > filename
    

    Store the file in a location outside the cluster. If you make any disk configuration changes, run this command again to capture the changed configuration. If a disk fails and needs replacement, you can use this information to restore the disk partition configuration. For more information, see the prtvtoc(1M) man page.

  8. (Optional) Make a backup of your cluster configuration.

    An archived backup of your cluster configuration makes it easier to recover the cluster configuration. For more information, see How to Back Up the Cluster Configuration in Sun Cluster System Administration Guide for Solaris OS.
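
    As one possible sketch, the cluster export command can write the cluster configuration to an XML file; the output path shown is hypothetical, and the referenced guide describes the supported backup procedure in full.


    phys-schost# cluster export -o /var/cluster/backup/config.xml
    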

Troubleshooting

If the output of the cldevicegroup status command includes any local disk groups, the displayed disk groups are not configured correctly for local-only access. Return to How to Create a Disk Group to reconfigure the local disk group.

Next Steps

Determine from the following list the next task to perform that applies to your cluster configuration. If you need to perform more than one task from this list, go to the first of those tasks in this list.

Guidelines for Administering VxVM Disk Groups

Observe the following guidelines for administering VxVM disk groups in a Sun Cluster configuration:

Unencapsulating the Root Disk

This section describes how to unencapsulate the root disk in a Sun Cluster configuration.

How to Unencapsulate the Root Disk

Perform this procedure to unencapsulate the root disk.

Before You Begin

Perform the following tasks:

  1. Become superuser on the node that you intend to unencapsulate.

  2. Evacuate all resource groups and device groups from the node.


    phys-schost# clnode evacuate from-node
    
    from-node

    Specifies the name of the node from which to move resource or device groups.

  3. Determine the node-ID number of the node.


    phys-schost# clinfo -n
    
  4. Unmount the global-devices file system for this node, where N is the node ID number that is returned in Step 3.


    phys-schost# umount /global/.devices/node@N
    
  5. View the /etc/vfstab file and determine which VxVM volume corresponds to the global-devices file system.


    phys-schost# vi /etc/vfstab
    #device        device        mount    FS     fsck    mount    mount
    #to mount      to fsck       point    type   pass    at boot  options
    #
    #NOTE: volume rootdiskxNvol (/global/.devices/node@N) encapsulated 
    #partition cNtXdYsZ
    
  6. Remove from the root disk group the VxVM volume that corresponds to the global-devices file system.


    phys-schost# vxedit -g rootdiskgroup -rf rm rootdiskxNvol
    

    Caution –

    Do not store data other than device entries for global devices in the global-devices file system. All data in the global-devices file system is destroyed when you remove the VxVM volume. Only data that is related to global devices entries is restored after the root disk is unencapsulated.


  7. Unencapsulate the root disk.


    Note –

    Do not accept the shutdown request from the command.



    phys-schost# /etc/vx/bin/vxunroot
    

    See your VxVM documentation for details.

  8. Use the format(1M) command to add a 512-Mbyte partition to the root disk to use for the global-devices file system.


    Tip –

    Use the same slice that was allocated to the global-devices file system before the root disk was encapsulated, as specified in the /etc/vfstab file.


  9. Set up a file system on the partition that you created in Step 8.


    phys-schost# newfs /dev/rdsk/cNtXdYsZ
    
  10. Determine the DID name of the root disk.


    phys-schost# cldevice list cNtXdY
    dN
    
  11. In the /etc/vfstab file, replace the path names in the global-devices file system entry with the DID path that you identified in Step 10.

    The original entry would look similar to the following.


    phys-schost# vi /etc/vfstab
    /dev/vx/dsk/rootdiskxNvol /dev/vx/rdsk/rootdiskxNvol /global/.devices/node@N ufs 2 no global

    The revised entry that uses the DID path would look similar to the following.


    /dev/did/dsk/dNsX /dev/did/rdsk/dNsX /global/.devices/node@N ufs 2 no global
  12. Mount the global-devices file system.


    phys-schost# mount /global/.devices/node@N
    
  13. From one node of the cluster, repopulate the global-devices file system with device nodes for any raw-disk devices and Solaris Volume Manager devices.


    phys-schost# cldevice populate
    

    VxVM devices are recreated during the next reboot.

  14. On each node, verify that the cldevice populate command has completed processing before you proceed to the next step.

    The cldevice populate command executes remotely on all nodes, even though the command is issued from just one node. To determine whether the cldevice populate command has completed processing, run the following command on each node of the cluster.


    phys-schost# ps -ef | grep scgdevs
    
  15. Reboot the node.


    phys-schost# shutdown -g0 -y -i6
    
  16. Repeat this procedure on each node of the cluster to unencapsulate the root disk on those nodes.