Sun Cluster 3.0 Installation Guide

Configuring VxVM for Sun Cluster Configurations

The following table lists the tasks to perform to configure VxVM for Sun Cluster configurations.

Table B-1 Task Map: Configuring VxVM for Sun Cluster Configurations

  • Plan the layout of your VxVM configuration.
    For instructions, go to "Planning Volume Management".

  • Verify that the pseudo-device major number is the same on each node.
    For instructions, go to "How to Verify the Pseudo-Device Major Number".

  • If necessary, change a node's pseudo-device major number.
    For instructions, go to "How to Change the Pseudo-Device Major Number".

  • Create the root disk group (rootdg).
    For instructions, go to "Setting Up a rootdg Disk Group Overview".

  • Create shared disk groups and volumes.
    For instructions, go to "How to Create and Register a Shared Disk Group".

  • If necessary, resolve minor number conflicts between disk device groups by assigning a new minor number.
    For instructions, go to "How to Assign a New Minor Number to a Disk Device Group".

  • Verify the shared disk groups and volumes.
    For instructions, go to "How to Verify the Disk Groups".

  • Create and mount cluster file systems.
    For instructions, go to "How to Add Cluster File Systems".

How to Verify the Pseudo-Device Major Number

The vxio driver must have the same pseudo-device major number on all cluster nodes. You can find this number in the /etc/name_to_major file after you complete the installation. Use the following procedure to verify the pseudo-device major numbers.

  1. Become superuser on a node in the cluster.

  2. On each cluster node, view the pseudo-device major number.

    For example, type the following.


    # grep vxio /etc/name_to_major
    vxio 45

  3. Compare the pseudo-device major numbers of all the nodes.

    The major numbers should be identical on each node. If numbers vary, you must change the major numbers that are different.
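
    If remote shell access between the cluster nodes is configured (an assumption, and phys-schost-2 is only an example node name), you can view another node's value from the current node. For example, type the following.


    # rsh phys-schost-2 grep vxio /etc/name_to_major
    vxio 45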

Where to Go From Here

To change a node's pseudo-device major number, go to "How to Change the Pseudo-Device Major Number".

Otherwise, to set up the root disk group (rootdg), go to "Setting Up a rootdg Disk Group Overview".

How to Change the Pseudo-Device Major Number

Perform this procedure if the pseudo-device major number is not the same for each node of the cluster.

  1. Become superuser on a node that has the major number you want to change.

  2. Edit the /etc/name_to_major file to make the number identical on all nodes.

    Be sure that the number you choose is not already in use in the /etc/name_to_major file on any node. A quick way to find a unique value is to inspect the /etc/name_to_major file on each node, determine the highest major number assigned on any node, add one, and assign the result to the vxio driver.
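
    As a rough check, the following command prints one more than the highest major number currently assigned in a node's /etc/name_to_major file. Run it on each node, then choose a vxio value at least as large as the largest result and not already in use.


    # awk '{ if ($2 > max) max = $2 } END { print max + 1 }' /etc/name_to_major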

  3. Reboot the node.

    1. Use the scswitch(1M) command to evacuate any resource groups or device groups from the node.


      # scswitch -S -h node
      
      -S

      Evacuates all resource groups and device groups.

      -h node

      Specifies the name of the node from which to evacuate resource or device groups.

    2. Use the shutdown(1M) command to reboot the node.


      # shutdown -g 0 -y -i 6
      
  4. (Optional) If the system reports disk group errors and the cluster does not start, you might need to perform the following steps.

    1. Become superuser on the node.

    2. Use the vxedit(1M) command to change the failing field to off for affected subdisks.

      Refer to the vxedit(1M) man page for more information.
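
      For example, a command of the following form turns the failing field off for a record named disk01 in the disk group diskgroup; both names are placeholders, so substitute the names reported for your configuration.


      # vxedit -g diskgroup set failing=off disk01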

    3. Make sure all volumes are enabled and active.

Where to Go From Here

To set up the root disk group (rootdg), go to "Setting Up a rootdg Disk Group Overview".

Setting Up a rootdg Disk Group Overview

Each cluster node requires the creation of a rootdg disk group. VxVM uses this disk group to store configuration information. The rootdg disk group is local to each node and is not shared with other cluster nodes.

Sun Cluster software supports two methods for configuring the rootdg disk group: encapsulate the node's root disk, as described in "How to Encapsulate the Root Disk", or create the rootdg disk group on local non-root disks, as described in "How to Create a Non-Root rootdg Disk Group".

Refer to your VxVM installation documentation for more information.

How to Encapsulate the Root Disk

Use this procedure to create a rootdg disk group by encapsulating the root disk.

  1. Have available the VERITAS Volume Manager (VxVM) license keys.

  2. Become superuser on a node in the cluster.

  3. Ensure that the root disk has at least two slices with 0 cylinders and one or more free cylinders at the end or beginning of the disk.

    If necessary, use the format(1M) command to assign 0 cylinders to each VxVM slice. If slice 7 was reserved for volume manager use, formatting slice 7 also frees the extra space needed at the end of the disk.
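
    To review the current slice layout before you make changes, you can print the disk's volume table of contents (VTOC). The device name c0t0d0s2 is only an example; substitute the name of the node's root disk.


    # prtvtoc /dev/rdsk/c0t0d0s2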

  4. Start the vxinstall(1M) utility.


    # vxinstall
    

    When prompted, make the following choices or entries.

    • Choose Custom Installation.

    • Encapsulate the root disk.

    • Choose a name for the root disk that is unique throughout the cluster. A simple way to name the root disk is to add an extra letter to the default name.

      For example, if the default name given is rootdisk, name the root disk rootdiska on one node, rootdiskb on the next node, and so forth.

    • Do not add any other disks to the rootdg disk group.

    • For any other controllers, choose 4 (Leave these disks alone).

    • Do not accept shutdown and reboot.

    Refer to the VxVM installation documentation for details.


    Note -

    Because Dynamic Multipathing (DMP) is disabled, an error message similar to the following might be generated. You can safely ignore it.



    vxvm:vxdmpadm: ERROR: vxdmp module is not loaded on the system. Command invalid.
  5. Edit the /etc/vfstab file device names for the /global/.devices/node@nodeid file system.


    Note -

    You need to make this modification in order for VxVM to recognize that the /global/.devices/node@nodeid file system is on the root disk.


    Replace the existing device names with the names used in the /globaldevices entry, which is commented out. For example, consider the following /etc/vfstab file entries for /globaldevices and /global/.devices/node@2.


    #device            device             mount         FS   fsck  mount   mount
    #to mount          to fsck            point         type pass  at boot options
    ...
    #/dev/dsk/c1t3d0s3 /dev/rdsk/c1t3d0s3 /globaldevices ufs 2     yes     -
    ...
    /dev/did/dsk/d4s3  /dev/did/rdsk/d4s3 /global/.devices/node@2 ufs 2 no global

    You would change the /global/.devices/node@2 entry to the following.


    #device            device             mount         FS   fsck  mount   mount
    #to mount          to fsck            point         type pass  at boot options
    ...
    #/dev/dsk/c1t3d0s3 /dev/rdsk/c1t3d0s3 /globaldevices ufs 2     yes     -
    ...
    /dev/dsk/c1t3d0s3  /dev/rdsk/c1t3d0s3 /global/.devices/node@2 ufs 2 no global
  6. Repeat Step 2 through Step 5 on each node of the cluster.

  7. From one node, use the scshutdown(1M) command to shut down the cluster.


    # scshutdown
    
  8. Reboot each node in non-cluster mode.

    1. Run the following command on each node to reboot in non-cluster mode.


      ok boot -x
      

      Note -

      Do not reboot the node in cluster mode.


    2. If a node displays a message similar to the following, press Control-D to continue the boot.

      Ignore the instruction to run fsck manually. Instead, press Control-D to continue with the boot and complete the remaining root disk encapsulation procedures.


      WARNING - Unable to repair the /global/.devices/node@1 filesystem. 
      Run fsck manually (fsck -F ufs /dev/vx/rdsk/rootdisk3vola). Exit 
      the shell when done to continue the boot process.
       
      Type control-d to proceed with normal startup,
      (or give root password for system maintenance): 

      The /global/.devices/node@nodeid file system still requires additional changes before the cluster can globally mount it on each node. Because of this requirement, all but one node will fail to mount the /global/.devices/node@nodeid file system during this reboot, resulting in a warning message.

    VxVM encapsulates the root disk and updates the /etc/vfstab entries.

  9. Unmount the /global/.devices/node@nodeid file system that successfully mounted in Step 8.


    # umount /global/.devices/node@nodeid
    

    Unmounting this file system enables you to reminor the disk group during Step 10 without needing to reboot the node twice to initialize the change. This file system is automatically remounted when you reboot during Step 14.

  10. Re-minor the rootdg disk group on each node of the cluster.

    Specify a rootdg minor number that is unique throughout the cluster and smaller than 1000 to prevent minor number conflicts with shared disk groups. An effective re-minoring scheme is to assign 100 on the first node, 200 on the second, and so forth.


    # vxdg reminor rootdg n
    

    n

    Specifies the rootdg minor number
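
    For example, following the numbering scheme suggested above, you might type the following on the first node of the cluster.


    # vxdg reminor rootdg 100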

    After executing this command, warning messages similar to the following might be displayed. You can safely ignore them.


    vxvm:vxdg: WARNING: Volume swapvol: Device is open, will renumber on reboot

    The new minor number is applied to the root disk volumes. The swap volume is renumbered after you reboot.


    # ls -l /dev/vx/dsk/rootdg
    total 0
    brw------- 1 root       root    55,100 Apr  4 10:48 rootdiska3vol
    brw------- 1 root       root    55,101 Apr  4 10:48 rootdiska7vol
    brw------- 1 root       root    55,  0 Mar 30 16:37 rootvol
    brw------- 1 root       root    55,  7 Mar 30 16:37 swapvol
  11. On each node of the cluster, if the /usr file system is not collocated with the root (/) file system on the root disk, manually update the device nodes for the /usr volume.

    1. Remove existing /usr device nodes.


      # rm /dev/vx/dsk/usr
      # rm /dev/vx/dsk/rootdg/usr
      # rm /dev/vx/rdsk/usr
      # rm /dev/vx/rdsk/rootdg/usr
      
    2. Determine the new minor number assigned to the /usr file system.


      # vxprint -l -v usrvol
      Disk group: rootdg Volume:   usrvol
      ...
      device:   minor=102 bdev=55/102 cdev=55/102 path=/dev/vx/dsk/rootdg/usrvol
    3. Create new /usr device nodes by using the new minor number.


      # mknod /dev/vx/dsk/usr b major_number new-minor-number
      # mknod /dev/vx/dsk/rootdg/usr b major_number new-minor-number
      # mknod /dev/vx/rdsk/usr c major_number new-minor-number
      # mknod /dev/vx/rdsk/rootdg/usr c major_number new-minor-number
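
      For example, with the sample major and minor numbers shown in the vxprint output above (55 and 102), the commands would look like the following.


      # mknod /dev/vx/dsk/usr b 55 102
      # mknod /dev/vx/dsk/rootdg/usr b 55 102
      # mknod /dev/vx/rdsk/usr c 55 102
      # mknod /dev/vx/rdsk/rootdg/usr c 55 102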
      
  12. On each node of the cluster, if the /var file system is not collocated with the root (/) file system on the root disk, manually update the device nodes for the /var volume.

    1. Remove existing /var device nodes.


      # rm /dev/vx/dsk/var
      # rm /dev/vx/dsk/rootdg/var
      # rm /dev/vx/rdsk/var
      # rm /dev/vx/rdsk/rootdg/var
      
    2. Determine the new minor number assigned to the /var file system.


      # vxprint -l -v varvol
      Disk group: rootdg Volume:   varvol
      ...
      device:   minor=103 bdev=55/103 cdev=55/103 path=/dev/vx/dsk/rootdg/varvol
    3. Create new /var device nodes by using the new minor number.


      # mknod /dev/vx/dsk/var b major_number new-minor-number
      # mknod /dev/vx/dsk/rootdg/var b major_number new-minor-number
      # mknod /dev/vx/rdsk/var c major_number new-minor-number
      # mknod /dev/vx/rdsk/rootdg/var c major_number new-minor-number
      
  13. From one node, shut down the cluster.


    # scshutdown
    
  14. Reboot each node into cluster mode.


    ok boot
    
  15. (Optional) Mirror the root disk on each node of the cluster.

    Refer to your VxVM documentation for instructions on mirroring root.

  16. If you mirrored the root disk, on each node of the cluster enable the localonly property of the raw disk device group associated with the disk used to mirror that node's root disk.

    For each node, configure a different raw disk device group, which will be used exclusively by that node to mirror the root disk. You must enable the localonly property to prevent unintentional fencing of a node from its boot device if the boot device is connected to multiple nodes.


    # scconf -c -D name=rawdisk_groupname,localonly=true
    
    -D name=rawdisk_groupname

    Specifies the cluster-unique name of the raw disk device group

    Use the scdidadm -L command to display the full device ID (DID) pseudo-driver name of the raw disk device group. In the following example, the raw disk device group name dsk/d1 is extracted from the third column of output, which is the full DID pseudo-driver name. The scconf command then configures the dsk/d1 raw disk device group to be used exclusively by the node phys-schost-3 to mirror its root disk.


    # scdidadm -L
    ...
    1         phys-schost-3:/dev/rdsk/c0t0d0     /dev/did/rdsk/d1
    phys-schost-3# scconf -c -D name=dsk/d1,localonly=true
    

    For more information about the localonly property, refer to the scconf_dg_rawdisk(1M) man page.

Where to Go From Here

To create shared disk groups, go to "How to Create and Register a Shared Disk Group".

How to Create a Non-Root rootdg Disk Group

Use this procedure to create a rootdg disk group by encapsulating or initializing local non-root disks.

  1. Have available the VERITAS Volume Manager (VxVM) license keys.

  2. Become superuser on the node.

  3. (Optional) If the disks will be encapsulated, ensure that each disk has at least two slices with 0 cylinders.

    If necessary, use the format(1M) command to assign 0 cylinders to each VxVM slice.

  4. Start the vxinstall(1M) utility.


    # vxinstall
    

    When prompted, make the following choices or entries.

    • Choose Custom Installation.

    • Do not encapsulate the root disk.

    • Choose any disks you want added to the rootdg disk group.

    • Do not accept automatic reboot.

  5. Evacuate any resource groups or device groups from the node.


    # scswitch -S -h node
    
    -S

    Evacuates all resource groups and device groups.

    -h node

    Specifies the name of the node from which to evacuate resource or device groups.

  6. Reboot the node.


    # shutdown -g 0 -y -i 6
    

Where to Go From Here

To create shared disk groups, go to "How to Create and Register a Shared Disk Group".

How to Create and Register a Shared Disk Group

Use this procedure to create your VxVM disk groups and volumes.

Run this procedure from a node that is physically connected to the disks that make up the disk group being added.


Note -

After the disk group has been registered with the cluster as a disk device group, you should never import or deport VxVM disk groups by using VxVM commands. The Sun Cluster software can handle all cases where disk groups need to be imported or deported. Refer to Sun Cluster 3.0 System Administration Guide for procedures on managing Sun Cluster disk device groups.


  1. Have available the following information.

    • Mappings of your storage disk drives. Refer to the Sun Cluster 3.0 Hardware Guide chapter on performing an initial installation for your storage device.

    • The following completed configuration planning worksheets from Sun Cluster 3.0 Release Notes.

      • "Local File System Layout Worksheet"

      • "Disk Device Group Configurations Worksheet"

      • "Volume Manager Configurations Worksheet"

      See Chapter 1, Planning the Sun Cluster Configuration for planning guidelines.

  2. Become superuser on the node that will have ownership of the disk group.

  3. Create the VxVM disk group and volume.

    Use your preferred method to create the disk group and volume.


    Note -

    You can use Dirty Region Logging (DRL) to decrease volume recovery time in the event of a node failure. However, using DRL might decrease I/O throughput.


    See the VERITAS Volume Manager documentation for the procedures to complete this step.
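
    For example, one possible command-line sequence is to add a disk to a new disk group with the vxdiskadd(1M) utility, which prompts for the disk group name, and then create a volume with the vxassist(1M) command. The names c1t1d0, dg1, and vol1 are only examples.


    # vxdiskadd c1t1d0
    # vxassist -g dg1 make vol1 2g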

  4. Register the disk group as a Sun Cluster disk device group.

    1. Start the scsetup(1M) utility.


      # scsetup
      
    2. To work with disk device groups, type 3 (Device groups).

    3. To register a disk device group, type 1 (Register a VxVM disk group).

      Follow the instructions and type the VxVM disk device group to be registered as a Sun Cluster disk device group.

      If you encounter the following error while attempting to register the disk device group, use the procedure "How to Assign a New Minor Number to a Disk Device Group". This procedure enables you to assign a new minor number that does not conflict with a minor number used by existing disk device groups.


      scconf: Failed to add device group - in use

    4. When finished, type q (Quit) to leave the scsetup utility.

  5. Verify that the disk device group has been registered.

    Look for the information for the new disk device group in the output of the following command.


    # scconf -pv | egrep disk-device-group
    

Note -

If you change any configuration information for a VxVM disk group or volume, re-register the Sun Cluster disk device group. Re-registering the disk device group ensures that the global namespace is in the correct state. Refer to Sun Cluster 3.0 System Administration Guide for procedures for re-registering a disk device group.


Where to Go From Here

To verify your VxVM disk groups and volumes, go to "How to Verify the Disk Groups".

How to Assign a New Minor Number to a Disk Device Group

If registering a disk device group fails because of a minor number conflict with another disk group, the new disk group must be assigned a new, unused minor number. After assigning the new minor number, you then re-register the disk group as a Sun Cluster disk device group.

  1. Become superuser on a node of the cluster.

  2. Determine the minor numbers in use.


    # ls -l /global/.devices/node@1/dev/vx/dsk/*
    
  3. Choose any other multiple of 1000 that is not in use to be the base minor number for the new disk group.

  4. Assign the new base minor number to the disk group.


    # vxdg reminor diskgroup base_minor_number
    
  5. Return to Step 4 of "How to Create and Register a Shared Disk Group" to register the disk group as a Sun Cluster disk device group.

Example--How to Assign a New Minor Number to a Disk Device Group

This example shows the minor numbers 16000-16002 and 4000-4001 being used. The vxdg reminor command is used to re-minor the new disk device group to use the base minor number 5000.


# ls -l /global/.devices/node@1/dev/vx/dsk/*
/global/.devices/node@1/dev/vx/dsk/dg1
brw-------   1 root     root      56,16000 Oct  7 11:32 dg1v1
brw-------   1 root     root      56,16001 Oct  7 11:32 dg1v2
brw-------   1 root     root      56,16002 Oct  7 11:32 dg1v3
 
/global/.devices/node@1/dev/vx/dsk/dg2
brw-------   1 root     root      56,4000 Oct  7 11:32 dg2v1
brw-------   1 root     root      56,4001 Oct  7 11:32 dg2v2
# vxdg reminor dg3 5000

Where to Go From Here

You must register the disk group as a Sun Cluster disk device group. Go to Step 4 of "How to Create and Register a Shared Disk Group".

How to Verify the Disk Groups

Perform this procedure on each node of the cluster.

  1. Verify that only the local disks are included in the root disk group (rootdg), and that shared disk groups are imported only on the current primary node.


    # vxdisk list
    
  2. Verify that all volumes have been started.

    In the vxprint output, started volumes are listed with a kernel state (KSTATE) of ENABLED and a state of ACTIVE.


    # vxprint
    
  3. Verify that all shared disk groups have been registered as Sun Cluster disk device groups and are online.


    # scstat -D
    

Where to Go From Here

To configure cluster file systems, go to "How to Add Cluster File Systems".