Sun Cluster 2.2 Software Installation Guide

Appendix C Configuring Sun StorEdge Volume Manager and Cluster Volume Manager

Configure your local and multihost disks for Sun StorEdge Volume Manager (SSVM) and Cluster Volume Manager (CVM) using the guidelines in this chapter along with the information in Chapter 2, Planning the Configuration. Refer to your SSVM or CVM documentation for additional details.

This appendix includes the following procedures:

  "C.2.1 How to Configure SSVM for Sun Cluster"

  "C.3.1 How to Configure VxFS File Systems on the Multihost Disks"

  "C.4.1 How to Verify the Pseudo-Device Major Number (SSVM)"

  "C.4.2 How to Change the Pseudo-Device Major Number (SSVM)"

  "C.5.1 How to Configure the Shared CCD Volume"

C.1 Volume Manager Checklist

Verify that the items listed below are in place before configuring the volume manager:

After configuring the volume manager, verify that:

C.2 Configuring SSVM for Sun Cluster

Use this procedure to configure your disk groups, volumes, and file systems for the logical hosts.


Note -

This procedure is only applicable for high availability (HA) configurations. If you are using Oracle Parallel Server and Cluster Volume Manager, refer to the Sun Cluster 2.2 Cluster Volume Manager Guide for configuration information.


C.2.1 How to Configure SSVM for Sun Cluster

  1. Format the disks to be administered by the volume manager.

    Use the fmthard(1M) command to create a VTOC on each disk with a single Slice 2 defined for the entire disk.
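
    For example, assuming that disk c1t0d0 already carries the desired VTOC (a single slice 2 spanning the entire disk), one way to copy that label to another disk is shown below. Both device names are placeholders for your configuration.

    # prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c2t0d0s2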

  2. Initialize each disk for use by the volume manager.

    Use the vxdiskadd(1M) command to initialize each disk.
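
    For example (the controller, target, and disk numbers are placeholders for your configuration):

    # vxdiskadd c1t2d0

    The vxdiskadd command runs interactively and prompts you for the remaining details, such as the disk group and disk media name.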

  3. Add each initialized disk to a disk group.

    Use the vxdg(1M) command to add disks to a disk group. You must assign at least one disk to the rootdg disk group on each node. When you configure SSVM, you can create the rootdg either by encapsulating the boot disk or by creating a simple rootdg that uses a few cylinders of the boot disk. To encapsulate the boot disk, refer to your SSVM documentation. To configure the rootdg using part of the boot disk, perform the following steps.

    1. Create a 10-Mbyte partition on the boot disk.

    2. Add the SSVM packages by using the pkgadd(1M) command.

    3. Execute the following commands to create the root disk group.

      In this example, c0t0d0s7 is the target partition.

      # vxconfigd -m disable
      # vxdctl init
      # vxdg init rootdg
      # vxdctl add disk c0t0d0s7 type=simple
      vxvm:vxdctl: WARNING: Device c0t0d0s7: Not currently in the configuration
      # vxdisk -f init c0t0d0s7 type=simple
      # vxdg -g rootdg adddisk c0t0d0s7
      # vxdctl enable
      # rm /etc/vx/reconfig.d/state.d/install-db
      

      Note -

      The error message

      vxvm:vxdctl: WARNING: Device c0t0d0s7: 
      Not currently in the configuration 

      can be ignored safely at this point.


  4. (Optional) Assign hot spares.

    For each disk group, use the vxedit(1M) command to assign one disk as a hot spare for each disk controller.
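
    For example, the following marks the disk disk01 in disk group dg1 as a hot spare; both names are placeholders for your configuration.

    # vxedit -g dg1 set spare=on disk01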

  5. Reboot all nodes on which you installed SSVM.

  6. For each disk group, create a volume to be used for the HA administrative file system on the multihost disks.

    The HA administrative file system is used by Sun Cluster for data service-specific state or configuration information.

    Use the vxassist(1M) command to create a 10-Mbyte volume mirrored across two controllers for the HA administrative file system. Name this volume diskgroup-stat.
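
    For example, the following sketch creates a mirrored 10-Mbyte volume named dg1-stat in disk group dg1, confining allocation to the disks disk01 and disk02, which are assumed to be on different controllers. All of these names are placeholders for your configuration.

    # vxassist -g dg1 make dg1-stat 10m layout=mirror disk01 disk02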

  7. For each disk group, create the other volumes to be used by HA data services.

    Use the vxassist(1M) command to create these volumes.
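
    For example, the following creates a 500-Mbyte mirrored data volume named vol_1 in disk group dg1; the volume name and size are placeholders for your data service requirements.

    # vxassist -g dg1 make vol_1 500m layout=mirror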

  8. Start the volumes.

    Use the vxvol(1M) command to start the volumes.
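
    For example, the following starts all volumes in disk group dg1 (a placeholder name):

    # vxvol -g dg1 startall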

  9. Create file systems on the volumes.

    Refer to "C.3 Configuring VxFS File Systems on the Multihost Disks", for details on creating the necessary file systems.

C.3 Configuring VxFS File Systems on the Multihost Disks

This section contains procedures to configure multihost VxFS file systems. To configure file systems to be shared by NFS, refer to Chapter 11, Setting Up and Administering Sun Cluster HA for NFS.

C.3.1 How to Configure VxFS File Systems on the Multihost Disks

  1. Use the mkfs(1M) command to create file systems on the volumes.

    Before you can run the mkfs(1M) command on a volume, you might need to take ownership of the disk group that contains the volume. Do this by importing the disk group on the active node using the vxdg(1M) command.

    phys-hahost1# vxdg import diskgroup
    
    1. Create the HA administrative file systems on the volumes.

      Run the mkfs(1M) command on the administrative volume in each disk group.

      phys-hahost1# mkfs /dev/vx/rdsk/diskgroup/diskgroup-stat
      
    2. Create file systems for all volumes.

      These volumes will be mounted by the logical hosts.

      phys-hahost1# mkfs -F vxfs /dev/vx/rdsk/diskgroup/volume
      
  2. Create a directory mount point for the HA administrative file system.

    phys-hahost1# mkdir /logicalhost
    
  3. Mount the HA administrative file system.

    phys-hahost1# mount /dev/vx/dsk/diskgroup/diskgroup-stat /logicalhost
    
  4. Create mount points for the data service file systems created in Step 1b.

    phys-hahost1# mkdir /logicalhost/volume
    
  5. Create the /etc/opt/SUNWcluster/conf/hanfs directory.
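
    For example, run the following on each node that will hold a vfstab.logicalhost file:

    phys-hahost1# mkdir -p /etc/opt/SUNWcluster/conf/hanfs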

  6. Create and edit the /etc/opt/SUNWcluster/conf/hanfs/vfstab.logicalhost file to update the administrative and multihost VxFS file system information.

    Make sure that entries for each disk group appear in the vfstab.logicalhost file on each node that is a potential master of the disk group, and that the vfstab.logicalhost files contain the same information on all nodes. Use the cconsole(1) facility to make simultaneous edits to the vfstab.logicalhost files on all nodes in the cluster.
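
    For example, you might start cconsole from the administrative workstation; clustername is a placeholder for the name of your cluster.

    # cconsole clustername &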

    Here is a sample /etc/opt/SUNWcluster/conf/hanfs/vfstab.logicalhost file showing the administrative file system and two other VxFS file systems. In this example, dg1 is the disk group name and hahost1 is the logical host name.

    /dev/vx/dsk/dg1/dg1-stat     /dev/vx/rdsk/dg1/dg1-stat     /hahost1 vxfs - yes -
    /dev/vx/dsk/dg1/vol_1        /dev/vx/rdsk/dg1/vol_1        /hahost1/vol_1 vxfs - yes -
    /dev/vx/dsk/dg1/vol_2        /dev/vx/rdsk/dg1/vol_2        /hahost1/vol_2 vxfs - yes -
  7. Unmount the HA administrative file systems that you mounted in Step 3.

    phys-hahost1# umount /logicalhost
    
  8. Export the disk groups.

    If you took ownership of the disk groups on the active node by using the vxdg(1M) command before creating the file systems, release ownership of the disk groups once file system creation is complete.

    phys-hahost1# vxdg deport diskgroup
    
  9. Import the disk groups to their default masters.

    It is most convenient to create and populate disk groups from the active node that is the default master of the particular disk group.

    Each disk group should be imported onto the default master node using the -t option. The -t option is important, as it prevents the import from persisting across the next boot.

    phys-hahost1# vxdg -t import diskgroup
    
  10. (Optional) To make file systems NFS-sharable, refer to Chapter 11, Setting Up and Administering Sun Cluster HA for NFS.

C.4 Administering the Pseudo-Device Major Number

To avoid "Stale File handle" errors on the client on NFS failovers, the vxio driver must have identical pseudo-device major numbers on all cluster nodes. This number can be found in the /etc/name_to_major file after you complete the installation. Use the following procedures to verify and change the pseudo-device major numbers.

C.4.1 How to Verify the Pseudo-Device Major Number (SSVM)

  1. Verify the pseudo-device major number on all nodes.

    For example, enter the following:

    # grep vxio /etc/name_to_major
    vxio 45
  2. If the pseudo-device major number is not the same on all nodes, stop all activity on the system and edit the /etc/name_to_major file to make the number identical on all cluster nodes.

    Be sure that the number you choose is not already used in the /etc/name_to_major file on any node. A quick way to do this is to find the highest major number assigned in the /etc/name_to_major file on each node, take the largest of these values, add one, and assign the result to the vxio driver on every node.
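
    For example, the following is one way to print the highest major number currently assigned on a node. Run it on each node, take the largest value reported, add one, and use the result for the vxio entry on every node.

    # sort -n -k 2,2 /etc/name_to_major | tail -1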

  3. Reboot the system immediately after the number is changed.

  4. (Optional) If the system reports disk group errors and the cluster will not start, you might need to perform these steps.

    1. Use the vxedit(1M) command to change the "failing" field to "off" for affected subdisks. Refer to the vxedit(1M) man page for more information.
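
      For example, assuming that dg1 is the affected disk group and disk01 is the name of the affected record, you might run:

      # vxedit -g dg1 set failing=off disk01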

    2. Make sure all volumes are enabled and active.
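
      For example, you can list the volume states with vxprint and start any stopped volumes with vxvol; dg1 is a placeholder disk group name.

      # vxprint -g dg1 -v
      # vxvol -g dg1 startall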

C.4.2 How to Change the Pseudo-Device Major Number (SSVM)

  1. Unencapsulate the root disk using the SSVM upgrade_start script.

    Find the script in the /Tools/scripts directory on your SSVM media. Run the script from only one node. In this example, CDROM_path is the path to the SSVM media.

    phys-hahost1# CDROM_path/Tools/scripts/upgrade_start
    
  2. Reboot the node.

  3. Edit the /etc/name_to_major file and remove the appropriate entry, for example, /dev/vx/{dsk,rdsk,dmp,rdmp}.

  4. Reboot the node.

  5. Run the following command:

    phys-hahost1# vxconfigd -k -r reset
    
  6. Re-encapsulate the root disk using the SSVM upgrade_finish script.

    Find the script in the /Tools/scripts directory on your SSVM media. Run the script from only one node.

    phys-hahost1# CDROM_path/Tools/scripts/upgrade_finish
    
  7. Reboot the node.

C.5 Configuring the Shared CCD Volume

Use the confccdssa(1M) command to create the disk group and volume used to store the CCD database. A shared CCD is supported only on two-node clusters that use Sun StorEdge Volume Manager or Cluster Volume Manager as the volume manager; it is not supported on clusters that use Solstice DiskSuite.


Note -

The root disk group (rootdg) must be initialized before you run the confccdssa(1M) command.


C.5.1 How to Configure the Shared CCD Volume

  1. Make sure you have configured a volume for the CCD.

    Run the following command on both nodes. See the scconf(1M) man page for more details.

    # scconf clustername -S ccdvol
    
  2. Run the confccdssa(1M) command on only one node, and use it to select disks for the CCD.

    Select two disks from the shared disk expansion unit on which the shared CCD volume will be constructed:

    # /opt/SUNWcluster/bin/confccdssa clustername
    
     On a 2-node configured cluster you may select two disks 
    that are shared between the 2 nodes to store the CCD 
    database in case of a single node failure.
    
     Please, select the disks you want to use from the following list:
    
     Select devices from list.
     Type the number corresponding to the desired selection.
     For example: 1<CR>
    
     1) SSA:00000078C9BF
     2) SSA:00000080295E
     3) DISK:c3t32d0s2:9725B71845
     4) DISK:c3t33d0s2:9725B70870
     Device 1: 3
    
     Disk c3t32d0s2 with serial id 9725B71845 has been selected
     as device 1.
    
     Select devices from list.
     Type the number corresponding to the desired selection.
     For example: 1<CR>
    
     1) SSA:00000078C9BF
     2) SSA:00000080295E
     3) DISK:c3t33d0s2:9725B70870
     4) DISK:c3t34d0s2:9725B71240
     Device 2: 4
    
     Disk c3t34d0s2 with serial id 9725B71240 has been selected
     as device 2.
    
     newfs: construct a new file system /dev/vx/rdsk/sc_dg/ccdvol:
     (y/n)? y
    ...

    Once you have selected the disks, confccdssa creates the volume and lays out a file system on it. The two selected disks can no longer be included in any other disk group. See the confccdssa(1M) man page for more details.