Sun Cluster 2.2 System Administration Guide

11.4.2 How to Take a SPARCstorage Array Tray Out of Service (SSVM or CVM)

Before removing a SPARCstorage Array tray, you must halt all I/O and spin down all drives in the tray. The drives automatically spin up if I/O requests are made, so it is necessary to stop all I/O before the drives are spun down.

These are the high-level steps to take a SPARCstorage Array tray out of service in an SSVM configuration:

  - Switch ownership of the affected logical hosts to another node
  - Identify the volumes and corresponding plexes on the disks in the tray
  - Stop I/O to the affected volumes
  - Flush the NVRAM data, if NVRAM is enabled
  - Spin down the tray and remove it

If the entire SPARCstorage Array is being serviced, you must perform these steps on each tray.

These are the detailed steps to take a SPARCstorage Array tray out of service in an SSVM configuration.

  1. Switch ownership of the affected logical hosts to other nodes by using the haswitch(1M) command.

    phys-hahost1# haswitch phys-hahost1 hahost1 hahost2
    

    The SPARCstorage Array tray to be removed might contain disks belonging to more than one logical host. If this is the case, switch ownership of every logical host with disks in this tray to another node in the cluster. The luxadm(1M) command is used later to spin down the disks. In this example, the haswitch(1M) command switched both logical hosts to phys-hahost1, enabling phys-hahost1 to perform the administrative functions.

  2. Identify all the volumes and corresponding plexes on the disks in the tray that is being taken out of service.

    1. From the physical device address cNtNdN, obtain the controller number and the target number.

      For example, if the device address is c3t2d0, the controller number is 3 and the target is 2.
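
      If you want to extract these values in a script, the following one-liner is a minimal sketch; the address c3t2d0 is the example above, and splitting on the letters c, t, and d is an illustrative technique, not part of the documented procedure.

      # echo c3t2d0 | nawk -F'[ctd]' '{print "controller=" $2, "target=" $3}'
      controller=3 target=2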

    2. Identify the SSVM or CVM devices on the affected tray from the output of vxdisk list.

      Each tray corresponds to a pair of targets. If the target is 0 or 1, identify all devices with physical addresses beginning with cNt0 and cNt1; if the target is 2 or 3, all devices beginning with cNt2 and cNt3; if the target is 4 or 5, all devices beginning with cNt4 and cNt5. Here is an example of how vxdisk(1M) can be used to obtain the information.

      # vxdisk -g diskgroup -q list | egrep 'c3t2|c3t3' | nawk '{print $3}'
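
      If you are scripting this step, the target-pair selection can be computed rather than typed. The following Bourne shell sketch assumes the controller and target numbers from the example above; the variable names C, T, and PAIR are illustrative only.

      C=3; T=2
      case $T in
      0|1) PAIR="c${C}t0|c${C}t1" ;;
      2|3) PAIR="c${C}t2|c${C}t3" ;;
      4|5) PAIR="c${C}t4|c${C}t5" ;;
      esac
      vxdisk -g diskgroup -q list | egrep "$PAIR" | nawk '{print $3}'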
      
    3. Identify all plexes on the above devices by using the appropriate version (csh, ksh, or Bourne shell) of the following command.

      PLLIST=`vxprint -ptq -g diskgroup \
          -e '(aslist.sd_dm_name in ("c3t2d0","c3t3d0","c3t3d1")) && (pl_kstate=ENABLED)' \
          | nawk '{print $2}'`
      

      For csh, the syntax is set PLLIST .... For ksh, the syntax is export PLLIST= .... The Bourne shell requires the command export PLLIST after the variable is set.
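
      As a sketch of the three variants (the vxprint arguments are abbreviated to "..." here; use the full expression shown above):

      set PLLIST=`vxprint ... | nawk '{print $2}'`      # csh
      export PLLIST=`vxprint ... | nawk '{print $2}'`   # ksh
      PLLIST=`vxprint ... | nawk '{print $2}'`          # Bourne shell,
      export PLLIST                                     # then export it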

  3. After you have set the variable, stop I/O to the volumes whose components (subdisks) are on the tray.

    Make sure all volumes associated with that tray are detached (mirrored or RAID5 configurations) or stopped (simple plexes). Issue the following command to detach the mirrored plexes identified in PLLIST.

    # vxplex det ${PLLIST}
    

    An alternate command for detaching each plex in a tray is:

    # vxplex -g diskgroup -v volume det plex
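
    The same detach can also be scripted as a loop over the variable set in Step 2 (a sketch; it assumes the PLLIST value and disk group from the examples above):

    for plex in ${PLLIST}
    do
        vxplex -g diskgroup det $plex
    done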
    

    To stop I/O to simple plexes, unmount any file systems or stop database access.
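
    For example, to unmount a file system (the mount point below is hypothetical; substitute the file systems actually built on the affected volumes):

    # umount /hahost1/fs1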


    Note -

    Mirrored volumes will still be active because the other half of the mirror is still available.


  4. If NVRAM is enabled, flush the NVRAM data on the appropriate controller, tray, or disk(s). Otherwise, skip to Step 5.

    # luxadm sync_cache pathname
    

    A confirmation appears, indicating that the NVRAM data has been flushed. See "11.7.3 Flushing and Purging NVRAM" for details on flushing NVRAM data.
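
    For example, to flush the NVRAM data for the controller used throughout this example (a sketch; the pathname can also name a single tray or disk, depending on what is being serviced):

    # luxadm sync_cache c3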

  5. To remove the tray, use the luxadm stop command to spin it down.

    # luxadm stop c3
    

    When the tray lock light is out, remove the tray and perform the required service.
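
    If only a single tray of the array needs to be spun down, luxadm stop also accepts a tray number with the -t option. This is a sketch; the tray number shown is illustrative, so verify which tray holds targets 2 and 3 in your configuration (see luxadm(1M)).

    # luxadm stop -t 2 c3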