Sun Cluster 2.2 Software Installation Guide

Appendix C Configuring VERITAS Volume Manager

Configure your local and multihost disks for VERITAS Volume Manager (VxVM) using the guidelines in this appendix along with the information in Chapter 2, Planning the Configuration. Refer to your VERITAS documentation for additional details.

This appendix includes the following sections:

  - "Volume Manager Checklist"
  - "Configuring VxVM for Sun Cluster"
  - "Configuring VxFS File Systems on the Multihost Disks"
  - "Administering the Pseudo-Device Major Number"
  - "Configuring the Shared CCD Volume"

Volume Manager Checklist

Verify that the items listed below are in place before configuring the volume manager:

After configuring the volume manager, verify that:

Configuring VxVM for Sun Cluster

Use this procedure to configure your disk groups, volumes, and file systems for the logical hosts.


Note -

This procedure is only applicable for high availability (HA) configurations. If you are using Oracle Parallel Server and the cluster feature of VxVM, refer to your VERITAS documentation for configuration information.


How to Configure VxVM for Sun Cluster
  1. Format the disks to be administered by the volume manager.

    Use the fmthard(1M) command to create a VTOC on each disk with a single Slice 2 defined for the entire disk.
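
    For example, you might run a command like the following (a sketch only; the device name c2t0d0 and the sector count shown are hypothetical and must match your disk's geometry). The -d argument defines slice 2 with tag 5 (backup), flags 00, starting at sector 0, and covering every sector on the disk:


    phys-hahost1# fmthard -d 2:5:00:0:4154160 /dev/rdsk/c2t0d0s2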

  2. For each cluster node, create a root disk group (rootdg).

    See your VERITAS documentation for guidelines and details about creating a rootdg.

  3. Initialize each disk for use by the volume manager.

    You can use the vxdiskadd(1M) or vxinstall(1M) commands to initialize each disk. See your VERITAS documentation for details.
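
    For example, to initialize a single disk with vxdiskadd(1M), you might run a command like the following (the device name c2t0d0 is hypothetical):


    phys-hahost1# vxdiskadd c2t0d0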

  4. (Optional) Assign hot spares.

    For each disk group, use the vxedit(1M) command to assign one disk as a hot spare for each disk controller.
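
    For example, the following sketch designates one disk as a hot spare (the disk group name dg1 and the disk media name disk01 are hypothetical); repeat it for one disk on each controller in the disk group:


    phys-hahost1# vxedit -g dg1 set spare=on disk01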

  5. Reboot all nodes on which you installed VxVM.

  6. For each disk group, create a volume to be used for the HA administrative file system on the multihost disks.

    The HA administrative file system is used by Sun Cluster to store data service-specific state or configuration information.

    Use the vxassist(1M) command to create a 10-Mbyte volume mirrored across two controllers for the HA administrative file system. Name this volume diskgroup-stat.
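
    For example, the following sketch creates the administrative volume (the disk group name dg1 is hypothetical); the mirror=ctlr attribute asks vxassist(1M) to place the mirrors on different controllers:


    phys-hahost1# vxassist -g dg1 make dg1-stat 10m layout=mirror mirror=ctlr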

  7. For each disk group, create the other volumes to be used by HA data services.

    Use the vxassist(1M) command to create these volumes.
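
    For example, the following sketch creates a 500-Mbyte mirrored volume for a data service (the disk group name dg1, the volume name vol_1, and the size are hypothetical):


    phys-hahost1# vxassist -g dg1 make vol_1 500m layout=mirror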

  8. Start the volumes.

    Use the vxvol(1M) command to start the volumes.
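
    For example, to start all volumes in a disk group (the disk group name dg1 is hypothetical):


    phys-hahost1# vxvol -g dg1 startall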

  9. Create file systems on the volumes.

    Refer to "Configuring VxFS File Systems on the Multihost Disks", for details on creating the necessary file systems.

Configuring VxFS File Systems on the Multihost Disks

This section contains procedures to configure multihost VxFS file systems. To configure file systems to be shared by NFS, refer to Chapter 11, Installing and Configuring Sun Cluster HA for NFS.

How to Configure VxFS File Systems on the Multihost Disks
  1. Use the mkfs(1M) command to create file systems on the volumes.

    Before you can run the mkfs(1M) command on the volumes, you might need to take ownership of the disk group containing them. Do this by importing the disk group onto the active node using the vxdg(1M) command.


    phys-hahost1# vxdg import diskgroup
    

    a. Create the HA administrative file systems on the volumes.

      Run the mkfs(1M) command on each volume in the configuration.


      phys-hahost1# mkfs -F vxfs /dev/vx/rdsk/diskgroup/diskgroup-stat
      

    b. Create file systems on the remaining volumes.

      These volumes will be mounted by the logical hosts.


      phys-hahost1# mkfs -F vxfs /dev/vx/rdsk/diskgroup/volume
      

  2. Create a directory mount point for the HA administrative file system.

    Always use the logical host name as the mount point. This naming is required for the DBMS fault monitors to start up correctly.


    phys-hahost1# mkdir /logicalhost
    

  3. Mount the HA administrative file system.


    phys-hahost1# mount /dev/vx/dsk/diskgroup/diskgroup-stat /logicalhost
    

  4. Create mount points for the data service file systems created in Step 1b.


    phys-hahost1# mkdir /logicalhost/volume
    

  5. Edit the /etc/opt/SUNWcluster/conf/hanfs/vfstab.logicalhost file to update the administrative and multihost VxFS file system information.

    Make sure that entries for each disk group appear in the vfstab.logicalhost files on each node that is a potential master of the disk group. Make sure the vfstab.logicalhost files contain the same information. Use the cconsole(1) facility to make simultaneous edits to vfstab.logicalhost files on all nodes in the cluster.
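
    For example, you might start cconsole(1) from the administrative workstation with a command like the following (the cluster name sc-cluster is hypothetical), then type the edits once into the common input window so that every node receives the same changes:


    # cconsole sc-cluster &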

    Here is a sample vfstab.logicalhost file showing the administrative file system and two other VxFS file systems. In this example, dg1 is the disk group name and hahost1 is the logical host name.


    /dev/vx/dsk/dg1/dg1-stat  /dev/vx/rdsk/dg1/dg1-stat  /hahost1        vxfs  -  yes  -
    /dev/vx/dsk/dg1/vol_1     /dev/vx/rdsk/dg1/vol_1     /hahost1/vol_1  vxfs  -  yes  -
    /dev/vx/dsk/dg1/vol_2     /dev/vx/rdsk/dg1/vol_2     /hahost1/vol_2  vxfs  -  yes  -

  6. Unmount the HA administrative file systems that you mounted in Step 3.


    phys-hahost1# umount /logicalhost
    

  7. Export the disk groups.

    If you took ownership of the disk groups on the active node by using the vxdg(1M) command before creating the file systems, release ownership of the disk groups once file system creation is complete.


    phys-hahost1# vxdg deport diskgroup
    

  8. Import the disk groups to their default masters.

    It is most convenient to create and populate disk groups from the active node that is the default master of the particular disk group.

    Each disk group should be imported onto the default master node using the -t option. The -t option is important, as it prevents the import from persisting across the next boot.


    phys-hahost1# vxdg -t import diskgroup
    

  9. (Optional) To make file systems NFS-sharable, refer to Chapter 11, Installing and Configuring Sun Cluster HA for NFS.

Administering the Pseudo-Device Major Number

To avoid "Stale File handle" errors on the client on NFS failovers, the vxio driver must have identical pseudo-device major numbers on all cluster nodes. This number can be found in the /etc/name_to_major file after you complete the installation. Use the following procedures to verify and change the pseudo-device major numbers.

How to Verify the Pseudo-Device Major Number (VxVM)
  1. Verify the pseudo-device major number on all nodes.

    For example, enter the following:


    # grep vxio /etc/name_to_major
    vxio 45

  2. If the pseudo-device number is not the same on all nodes, stop all activity on the system and edit the /etc/name_to_major file to make the number identical on all cluster nodes.

    Be sure that the new number is not already used in the /etc/name_to_major file on any node. A quick way to choose one is to find the highest number assigned in the /etc/name_to_major file on each node, take the largest of these values, add one, and assign the result to the vxio driver on every node.
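
    For example, you might find the highest major number currently assigned on a node with a command like the following (the output shown is only illustrative):


    # sort -k2,2n /etc/name_to_major | tail -1
    mm_drv 122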

  3. Reboot the system immediately after the number is changed.

  4. (Optional) If the system reports disk group errors and the cluster will not start, you might need to perform these steps.

    1. Use the vxedit(1M) command to change the "failing" field to "off" for the affected subdisks, as shown in the sketch after this list. Refer to the vxedit(1M) man page for more information.

    2. Make sure all volumes are enabled and active.
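
    The following sketch assumes a disk group named dg1, an affected disk media record named disk01, and a volume named vol_1 (all hypothetical); it clears the failing flag, lists the volumes so you can confirm they are ENABLED and ACTIVE, and starts a stopped volume if necessary:


    phys-hahost1# vxedit -g dg1 set failing=off disk01
    phys-hahost1# vxprint -g dg1 -v
    phys-hahost1# vxvol -g dg1 start vol_1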

How to Change the Pseudo-Device Major Number (VxVM)
  1. Unencapsulate the root disk using the VxVM upgrade_start script.

    Find the script in the /Tools/scripts directory on your VxVM media. Run the script from only one node. In this example, CDROM_path is the path to the scripts on the VxVM media.


    phys-hahost1# CDROM_path/upgrade_start
    

  2. Reboot the node.

  3. Edit the /etc/name_to_major file and remove the appropriate entry, for example, /dev/vx/{dsk,rdsk,dmp,rdmp}.

  4. Reboot the node.

  5. Run the following command:


    phys-hahost1# vxconfigd -k -r reset
    

  6. Re-encapsulate the root disk using the VxVM upgrade_finish script.

    Find the path to the script on your VxVM media. Run the script from only one node.


    phys-hahost1# CDROM_path/upgrade_finish
    

  7. Reboot the node.

Configuring the Shared CCD Volume

Use the confccdssa(1M) command to create a disk group and volume in which to store the CCD database. A shared CCD is supported only on two-node clusters running VERITAS Volume Manager; it is not supported on clusters using Solstice DiskSuite.


Note -

The root disk group (rootdg) must be initialized before you run the confccdssa(1M) command.


How to Configure the Shared CCD Volume
  1. Make sure you have configured a volume for the CCD.

    Run the following command on both nodes. See the scconf(1M) man page for more details.


    # scconf clustername -S ccdvol
    

  2. Run the confccdssa(1M) command on only one node, and use it to select disks for the CCD.

    Select two disks from the shared disk expansion unit on which the shared CCD volume will be constructed:


    # /opt/SUNWcluster/bin/confccdssa clustername
    On a 2-node configured cluster you may select two disks that are
    shared between the 2 nodes to store the CCD database in case of a
    single node failure.
    
    Please, select the disks you want to use from the following list:
    
    Select devices from list.
    Type the number corresponding to the desired selection.
    For example: 1<CR>
    
    1) SSA:00000078C9BF
    2) SSA:00000080295E
    3) DISK:c3t32d0s2:9725B71845
    4) DISK:c3t33d0s2:9725B70870
    
    Device 1: 3
    Disk c3t32d0s2 with serial id 9725B71845 has been selected
    as device 1.
    
    Select devices from list.
    Type the number corresponding to the desired selection.
    For example: 1<CR>
    
    1) SSA:00000078C9BF
    2) SSA:00000080295E
    3) DISK:c3t33d0s2:9725B70870
    4) DISK:c3t34d0s2:9725B71240
    
    Device 2: 4
    Disk c3t34d0s2 with serial id 9725B71240 has been selected
    as device 2.
    
    newfs: construct a new file system /dev/vx/rdsk/sc_dg/ccdvol: (y/n)? y
    ...

The two disks you select can no longer be included in any other disk group. Once the disks are selected, confccdssa(1M) creates the volume and lays out a file system on it. See the confccdssa(1M) man page for more details.