Sun Cluster 2.2 System Administration Guide

Adding a SPARCstorage Array Disk

Depending on the disk enclosure, adding SPARCstorage Array (SSA) multihost disks might involve taking offline all volume manager objects in the affected disk tray or disk enclosure. Additionally, the disk tray or disk enclosure might contain disks from more than one disk group, requiring that a single node own all of the affected disk groups.

How to Add a SPARCstorage Array Disk (Solstice DiskSuite)

These are the detailed steps to add a new multihost disk to a Solstice DiskSuite configuration.

  1. Switch ownership of the affected logical hosts to a single node in the cluster.

    Switch over any logical hosts with disks in the tray that will contain the new disk, so that one node owns all of the affected disk groups.


    phys-hahost1# haswitch phys-hahost1 hahost1 hahost2
    

  2. Determine the controller number of the tray to which the disk will be added.

    SPARCstorage Arrays are assigned World Wide Names (WWN). The WWN printed on the front of the SPARCstorage Array also appears as part of the /devices entry, to which the /dev entry containing the controller number is symbolically linked. For example:


    phys-hahost1# ls -l /dev/rdsk | grep -i WWN | tail -1
    

    If the WWN on the front of the SPARCstorage Array is 36cc, the following output is displayed, and the controller number is c2:


    phys-hahost1# ls -l /dev/rdsk | grep -i 36cc | tail -1
    lrwxrwxrwx  1 root   root       94 Jun 25 22:39 c2t5d2s7 -> ../../devices/io-unit@f,e1200000/sbi@0,0/SUNW,soc@3,0/SUNW,pln@a0000800,201836cc/ssd@5,2:h,raw
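
    The controller number can also be pulled out of the device name mechanically. This is a minimal sketch, assuming the link name shown above; only the leading cN portion is the controller:

```shell
# Sketch: extract the controller number from a /dev/rdsk entry name.
# The link name below is copied from the sample listing above.
link=c2t5d2s7
ctrl=$(expr "$link" : '\(c[0-9]*\)')   # keep only the leading cN part
echo "$ctrl"
```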

  3. Use the luxadm(1M) command with the display option to view the empty slots.


    phys-hahost1# luxadm display c2
    
                         SPARCstorage Array Configuration
    ...
                              DEVICE STATUS
          TRAY 1                 TRAY 2                 TRAY 3
    slot
    1     Drive: 0,0             Drive: 2,0             Drive: 4,0
    2     Drive: 0,1             Drive: 2,1             Drive: 4,1
    3     NO SELECT              NO SELECT              NO SELECT
    4     NO SELECT              NO SELECT              NO SELECT
    5     NO SELECT              NO SELECT              NO SELECT
    6     Drive: 1,0             Drive: 3,0             Drive: 5,0
    7     Drive: 1,1             NO SELECT              NO SELECT
    8     NO SELECT              NO SELECT              NO SELECT
    9     NO SELECT              NO SELECT              NO SELECT
    10    NO SELECT              NO SELECT              NO SELECT
    ...

    The empty slots are shown with a NO SELECT status. The output shown here is from a SPARCstorage Array 110; your display will be slightly different if you are using a different series SPARCstorage Array.

    Determine the tray to which you will add the new disk. If you can add the disk without affecting other drives, such as in the SPARCstorage Array 214 RSM, skip to Step 11.

    In the remainder of the procedure, Tray 2 is used as an example. The slot selected for the new disk is Tray 2 Slot 7. The new disk will be known as c2t3d1.
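
    For an SSA 110 laid out as in the display above, the cNtTdD name can be derived from the tray and slot. This is a hedged sketch of that mapping, inferred from the luxadm output rather than an official formula: each tray holds two drive groups (targets) of five slots each.

```shell
# Sketch of the tray/slot -> cNtTdD mapping implied by the luxadm
# display above (SSA 110 only; inferred, not an official formula).
tray=2 slot=7 ctrl=c2
if [ "$slot" -le 5 ]; then
  target=$(( (tray - 1) * 2 ))      # upper drive group in the tray
  disk=$(( slot - 1 ))
else
  target=$(( (tray - 1) * 2 + 1 ))  # lower drive group in the tray
  disk=$(( slot - 6 ))
fi
newdisk="${ctrl}t${target}d${disk}"
echo "$newdisk"
```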

  4. Locate all hot spares affected by the installation.

    To determine the status and location of all hot spares, run the metahs(1M) command with the -i option on each of the logical hosts.


    phys-hahost1# metahs -s hahost1 -i 
    ...
    phys-hahost1# metahs -s hahost2 -i
    ...


    Note -

    Save a list of the hot spares. The list is used later in this maintenance procedure. Be sure to note the hot spare devices and their hot spare pools.


  5. Use the metahs(1M) command with the -d option to delete all affected hot spares.

    Refer to the man page for details on the metahs(1M) command.


    phys-hahost1# metahs -s hahost1 -d hot-spare-pool components
    phys-hahost1# metahs -s hahost2 -d hot-spare-pool components
    

  6. Locate all metadevice state database replicas that are on affected disks.

    Run the metadb(1M) command on each of the logical hosts to locate all metadevice state databases. Direct the output into temporary files.


    phys-hahost1# metadb -s hahost1 > /usr/tmp/mddb1
    phys-hahost1# metadb -s hahost2 > /usr/tmp/mddb2
    

    The output of metadb(1M) shows the location of metadevice state database replicas in this disk enclosure. Save this information for the step in which you restore the replicas.

  7. Delete the metadevice state database replicas that are on affected disks.

    Keep a record of the number and location of the replicas that you delete. The replicas must be restored in a later step.


    phys-hahost1# metadb -s hahost1 -d replicas
    phys-hahost1# metadb -s hahost2 -d replicas
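
    The saved output from Step 6 can be searched for the slices on the affected controller and target. This is a sketch using hypothetical sample output (the column layout is assumed; the last field is taken to be the slice):

```shell
# Sketch with hypothetical sample metadb output saved in Step 6
# (written to /tmp here; the last field is assumed to be the slice).
cat > /tmp/mddb1 <<'EOF'
        flags           first blk       block count
     a m  p  luo        16              1034            /dev/dsk/c2t3d0s4
     a    p  luo        16              1034            /dev/dsk/c1t0d0s4
EOF
slices=$(awk '$NF ~ /c2t3/ {print $NF}' /tmp/mddb1)
echo "$slices"   # slices on the affected target, to re-add in Step 15
```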
    

  8. Run the metastat(1M) command to determine all the metadevice components on affected disks.

    Direct the output from metastat(1M) to a temporary file so that you can use the information later when deleting and re-adding the metadevices.


    phys-hahost1# metastat -s hahost1 > /usr/tmp/replicalog1
    phys-hahost1# metastat -s hahost2 > /usr/tmp/replicalog2
    

  9. Take offline all submirrors containing affected disks.

    Use the temporary files to create a script to take offline all affected submirrors in the disk expansion unit. If only a few submirrors exist, run the metaoffline(1M) command to take each offline. The following is a sample script.


    #!/bin/sh
    # metaoffline -s <diskset> <mirror> <submirror>
    
    metaoffline -s hahost1 d15 d35
    metaoffline -s hahost2 d15 d35
    ...
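
    When many submirrors are affected, the script can be generated from the metastat output saved in Step 8. This is a sketch against hypothetical sample output: the "dN: Submirror of dM" line format is assumed, and filtering to only the affected disks is omitted for brevity.

```shell
# Sketch: build metaoffline commands from saved metastat output.
# Sample data is hypothetical; real output would be filtered down to
# the submirrors that actually contain affected disks.
cat > /tmp/replicalog1 <<'EOF'
d15: Mirror
    Submirror 0: d35
d35: Submirror of d15
        c2t3d0s0          0     No    Okay
EOF
cmds=$(awk -v ds=hahost1 '/: Submirror of/ {
  sub(/:/, "", $1)                       # "d35:" -> "d35"
  print "metaoffline -s " ds " " $4 " " $1
}' /tmp/replicalog1)
echo "$cmds"
```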

  10. Spin down the affected disks.

    Spin down the SPARCstorage Array disks in the tray using the luxadm(1M) command.


    phys-hahost1# luxadm stop -t 2 c2
    

  11. Add the new disk.

    Use the instructions in your multihost disk expansion unit service manual to perform the hardware procedure of adding the disk. After the addition:

    • If your disk enclosure is a SPARCstorage Array 214 RSM, skip to Step 16. (This type of disk can be added without affecting other drives.)

    • For all other SPARCstorage Array types, proceed with Step 12.

  12. Make sure all disks in the tray spin up.

    The disks in the SPARCstorage Array tray should spin up automatically, but if they fail to spin up within two minutes, force the action using the following command:


    phys-hahost1# luxadm start -t 2 c2
    

  13. Bring the submirrors back online.

    Modify the script that you created in Step 9 to bring the submirrors back online.


    #!/bin/sh
    # metaonline -s <diskset> <mirror> <submirror>
    
    metaonline -s hahost1 d15 d35
    metaonline -s hahost2 d15 d35
    ...
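
    Rather than editing the Step 9 script by hand, the online version can be derived from it. A sketch, assuming the offline commands were saved to a file (filenames are hypothetical):

```shell
# Sketch: derive the metaonline script from the Step 9 offline script.
# Filenames are hypothetical.
cat > /tmp/offline.sh <<'EOF'
metaoffline -s hahost1 d15 d35
metaoffline -s hahost2 d15 d35
EOF
sed 's/metaoffline/metaonline/' /tmp/offline.sh > /tmp/online.sh
cat /tmp/online.sh
```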

  14. Restore the hot spares that were deleted in Step 5.


    phys-hahost1# metahs -s hahost1 -a hot-spare-pool components
    phys-hahost1# metahs -s hahost2 -a hot-spare-pool components
    

  15. Restore the original count of metadevice state database replicas to the devices in the tray.

    The replicas were removed in Step 7.


    phys-hahost1# metadb -s hahost1 -a replicas
    phys-hahost1# metadb -s hahost2 -a replicas
    

  16. Run the drvconfig(1M) and disks(1M) commands to create the new entries in /devices, /dev/dsk, and /dev/rdsk for all new disks.


    phys-hahost1# drvconfig 
    phys-hahost1# disks 
    

  17. Switch ownership of the logical host that will include the new disk to the other node connected to the SPARCstorage Array.

    This assumes a topology in which each disk is connected to two nodes.


    phys-hahost1# haswitch phys-hahost2 hahost2
    

  18. Run the drvconfig(1M) and disks(1M) commands on the cluster node that now owns the diskset to which this disk will be added.


    phys-hahost2# drvconfig 
    phys-hahost2# disks 
    

  19. Run the scdidadm(1M) command to initialize the new disk for use by the DID pseudo driver.

    You must run scdidadm(1M) on Node 0 in the cluster. Refer to the Sun Cluster 2.2 Software Installation Guide for details on the DID pseudo driver.


    phys-hahost2# scdidadm -r
    

  20. Add the disk to a diskset.

    The command syntax is as follows, where diskset is the name of the diskset that will contain the new disk, and drive is the DID name of the disk in the form dN (for new installations of Sun Cluster) or cNtYdZ (for installations that upgraded from HA 1.3):


    # metaset -s diskset -a drive
    


    Caution -

    The metaset(1M) command might repartition this disk automatically. See the Solstice DiskSuite documentation for more information.


  21. Use the scadmin(1M) command to reserve and enable failfast on the specified disk that has just been added to the diskset.


    phys-hahost2# scadmin reserve cNtXdYsZ
    

  22. Perform the usual administration actions on the new disk.

    You can now perform the usual administration steps that bring a new drive into service. These include partitioning the disk, adding it to the configuration as a hot spare, or configuring it as a metadevice. See the Solstice DiskSuite documentation for more information on these tasks.

  23. If necessary, switch logical hosts back to their default masters.

How to Add a SPARCstorage Array Disk (VxVM)

These are the detailed steps to add a new multihost disk to a VxVM configuration.

  1. Switch ownership of the affected logical hosts to a single node in the cluster.

    Switch over any logical hosts with disks in the tray that will contain the new disk.


    phys-hahost1# haswitch phys-hahost1 hahost1 hahost2
    


    Note -

    In a mirrored configuration, you may not need to switch logical hosts as long as the node is not shut down.


  2. Determine the controller number of the tray to which the disk will be added.

    SPARCstorage Arrays are assigned World Wide Names (WWN). The WWN printed on the front of the SPARCstorage Array also appears as part of the /devices entry, to which the /dev entry containing the controller number is symbolically linked. For example:


    phys-hahost1# ls -l /dev/rdsk | grep -i WWN | tail -1
    

    If the WWN on the front of the SPARCstorage Array is 36cc, the following output is displayed, and the controller number is c2:


    phys-hahost1# ls -l /dev/rdsk | grep -i 36cc | tail -1
    lrwxrwxrwx  1 root   root       94 Jun 25 22:39 c2t5d2s7 -> ../../devices/io-unit@f,e1200000/sbi@0,0/SUNW,soc@3,0/SUNW,pln@a0000800,201836cc/ssd@5,2:h,raw
    phys-hahost1#

  3. Use the luxadm(1M) command with the display option to view the empty slots.

    If you can add the disk without affecting other drives, such as in the SPARCstorage Array 214 RSM, skip to Step 6.


    phys-hahost1# luxadm display c2
    
                         SPARCstorage Array Configuration
    ...
                              DEVICE STATUS
          TRAY 1                 TRAY 2                 TRAY 3
    slot
    1     Drive: 0,0             Drive: 2,0             Drive: 4,0
    2     Drive: 0,1             Drive: 2,1             Drive: 4,1
    3     NO SELECT              NO SELECT              NO SELECT
    4     NO SELECT              NO SELECT              NO SELECT
    5     NO SELECT              NO SELECT              NO SELECT
    6     Drive: 1,0             Drive: 3,0             Drive: 5,0
    7     Drive: 1,1             NO SELECT              NO SELECT
    8     NO SELECT              NO SELECT              NO SELECT
    9     NO SELECT              NO SELECT              NO SELECT
    10    NO SELECT              NO SELECT              NO SELECT
    ...

    The empty slots are shown with a NO SELECT status. The output shown here is from a SPARCstorage Array 110; your display will be slightly different if you are using a different series SPARCstorage Array.

    Determine the tray to which you will add the new disk.

    In the remainder of the procedure, Tray 2 is used as an example. The slot selected for the new disk is Tray 2 Slot 7. The new disk will be known as c2t3d1.

  4. Identify all the volumes and corresponding plexes on the disks in the tray that will contain the new disk.

    1. From the physical device address cNtNdN, obtain the controller number and the target number.

      In this example, the controller number is 2 and the target is 3.

    2. Identify devices from a vxdisk list output.

      Here is an example of how vxdisk can be used to obtain the information.


      # vxdisk -g diskgroup -q list | nawk '/^c2/ {print $3}'
      

      Record the disk media name for the disks from the output of the command.

    3. Identify all plexes on the above devices by using the appropriate version (csh, ksh, or Bourne shell) of the following command.


      PLLIST=`vxprint -ptq -g diskgroup -e '(aslist.sd_dm_name in ("c2t3d0")) && (pl_kstate=ENABLED)' | nawk '{print $2}'`
      

      For csh, the syntax is set PLLIST .... For ksh, the syntax is export PLLIST= .... The Bourne shell requires the command export PLLIST after the variable is set.
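
      The nawk stage of the pipeline above simply collects field 2 (the plex name) of each matching vxprint record. A sketch against hypothetical sample records, with awk standing in for Solaris nawk:

```shell
# Sketch with hypothetical vxprint -ptq plex records; field 2 is
# assumed to be the plex name, as in the pipeline above.
cat > /tmp/vxprint.out <<'EOF'
pl vol01-01 vol01 ENABLED ACTIVE 2048 CONCAT - RW
pl vol02-01 vol02 ENABLED ACTIVE 2048 CONCAT - RW
EOF
PLLIST=$(awk '{print $2}' /tmp/vxprint.out)
flat=$(echo $PLLIST)   # unquoted: newlines collapse to spaces
echo "$flat"
```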

  5. After you have set the variable, stop I/O to the volumes whose components (subdisks) are on the tray.

    Make sure all volumes associated with that tray are detached (mirrored or RAID5 configurations) or stopped (simple plexes). Issue the following command to detach a mirrored plex.


    # vxplex -g diskgroup det ${PLLIST}
    

    An alternate command for detaching each plex in a tray is:


    # vxplex -g diskgroup -v volume det plex
    

    To stop I/O to simple plexes, unmount any file systems or stop database access.


    Note -

    Mirrored volumes will still be active because the other half of the mirror is still available.


  6. Add the new disk.

    Use the instructions in your multihost disk expansion unit service manual to perform the hardware procedure of adding the disk.

  7. Make sure all disks in the tray spin up.

    The disks in the SPARCstorage Array tray should spin up automatically, but if they fail to spin up within two minutes, force the action with the following command:


    phys-hahost1# luxadm start -t 2 c2
    

  8. Run the drvconfig(1M) and disks(1M) commands to create the new entries in /devices, /dev/dsk, and /dev/rdsk for all new disks.


    phys-hahost1# drvconfig
    phys-hahost1# disks
    

  9. Force the VxVM vxconfigd driver to scan for new disks.


    phys-hahost1# vxdctl enable
    

  10. Bring the new disk under VM control by using the vxdiskadd command.

  11. Perform the usual administration actions on the new disk.

    You can now perform the usual administration steps that bring a new drive into service. These include partitioning the disk, adding it to the configuration as a hot spare, or configuring it as a plex.

    This completes the procedure of adding a multihost disk to an existing SPARCstorage Array.