Depending on the disk enclosure, adding SPARCstorage Array (SSA) multihost disks might involve taking offline all volume manager objects in the affected disk tray or disk enclosure. Additionally, the disk tray or disk enclosure might contain disks from more than one disk group, in which case a single node must own all of the affected disk groups.
These are the high-level steps to add a multihost disk in a Solstice DiskSuite configuration:
Switching logical hosts to one cluster node
Identifying the controller for this new disk, and locating an empty slot in the tray or enclosure
For Model 100 series SPARCstorage Arrays, preparing the disk enclosure for removal of a disk tray
For Model 200 series SPARCstorage Arrays with wide differential SCSI disk trays, powering down the controller and all attached disks
Deleting all hot spares from the affected drives
Deleting all metadevice state databases from the affected drives
Taking offline all metadevices containing affected drives
Spinning down all affected drives
Adding the new disk
Returning the affected drives to service
Spinning up all drives
Bringing back online all affected metadevices
Adding back all deleted hot spares
Restoring all deleted metadevice state database replicas
Performing the administrative actions to prepare the disk for use by Sun Cluster
Creating the /devices special files and /dev/dsk and /dev/rdsk links
Running the scdidadm -r command
Adding the disk to the diskset
Formatting and partitioning the disk, if necessary
Performing the volume manager-related administrative tasks
These are the detailed steps to add a new multihost disk to a Solstice DiskSuite configuration.
Switch ownership of the logical host that will include the new disk, and of any other logical host with disks in the affected tray, to one node in the cluster.
In this example, both logical hosts are switched to phys-hahost1.
phys-hahost1# haswitch phys-hahost1 hahost1 hahost2
Determine the controller number of the tray to which the disk will be added.
SPARCstorage Arrays are assigned World Wide Names (WWNs). The WWN shown on the front of the SPARCstorage Array also appears as part of the /devices path, to which the /dev entry containing the controller number is symbolically linked. For example:
phys-hahost1# ls -l /dev/rdsk | grep -i WWN | tail -1
If the WWN on the front of the SPARCstorage Array is 36cc, the following output is displayed, and the controller number is c2:
phys-hahost1# ls -l /dev/rdsk | grep -i 36cc | tail -1
lrwxrwxrwx  1 root  root  94 Jun 25 22:39 c2t5d2s7 -> ../../devices/io-unit@f,e1200000/sbi@0,0/SUNW,soc@3,0/SUNW,pln@a0000800,201836cc/ssd@5,2:h,raw
Use the luxadm(1M) command with the display option to view the empty slots.
phys-hahost1# luxadm display c2

          SPARCstorage Array Configuration
...
                    DEVICE STATUS
          TRAY 1         TRAY 2         TRAY 3
slot
1         Drive: 0,0     Drive: 2,0     Drive: 4,0
2         Drive: 0,1     Drive: 2,1     Drive: 4,1
3         NO SELECT      NO SELECT      NO SELECT
4         NO SELECT      NO SELECT      NO SELECT
5         NO SELECT      NO SELECT      NO SELECT
6         Drive: 1,0     Drive: 3,0     Drive: 5,0
7         Drive: 1,1     NO SELECT      NO SELECT
8         NO SELECT      NO SELECT      NO SELECT
9         NO SELECT      NO SELECT      NO SELECT
10        NO SELECT      NO SELECT      NO SELECT
...
The empty slots are shown with a NO SELECT status. The output shown here is from a SPARCstorage Array 110; your display will be slightly different if you are using a different series SPARCstorage Array.
Determine the tray to which you will add the new disk. If you can add the disk without affecting other drives, such as in the SPARCstorage Array 214 RSM, skip ahead to the step in which you add the new disk.
In the remainder of the procedure, Tray 2 is used as an example. The slot selected for the new disk is Tray 2 Slot 7. The new disk will be known as c2t3d1.
Locate all hot spares affected by the installation.
To determine the status and location of all hot spares, run the metahs(1M) command with the -i option on each of the logical hosts.
phys-hahost1# metahs -s hahost1 -i
...
phys-hahost1# metahs -s hahost2 -i
...
Save a list of the hot spares. The list is used later in this maintenance procedure. Be sure to note the hot spare devices and their hot spare pools.
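To keep the list with the other records from this procedure, you can save the metahs(1M) output to temporary files; the file names here are only an example:

phys-hahost1# metahs -s hahost1 -i > /usr/tmp/hotspares1
phys-hahost1# metahs -s hahost2 -i > /usr/tmp/hotspares2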
Use the metahs(1M) command with the -d option to delete all affected hot spares.
Refer to the man page for details on the metahs(1M) command.
phys-hahost1# metahs -s hahost1 -d hot-spare-pool components
phys-hahost1# metahs -s hahost2 -d hot-spare-pool components
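For example, to delete a hot spare on slice c2t3d0s0 from a pool named hsp000 (both names are illustrative; substitute the devices and pools you recorded in the previous step):

phys-hahost1# metahs -s hahost1 -d hsp000 c2t3d0s0   # example pool and slice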
Locate all metadevice state database replicas that are on affected disks.
Run the metadb(1M) command on each of the logical hosts to locate all metadevice state databases. Direct the output into temporary files.
phys-hahost1# metadb -s hahost1 > /usr/tmp/mddb1
phys-hahost1# metadb -s hahost2 > /usr/tmp/mddb2
The output of metadb(1M) shows the location of metadevice state database replicas in this disk enclosure. Save this information for the step in which you restore the replicas.
Delete the metadevice state database replicas that are on affected disks.
Keep a record of the number and location of the replicas that you delete. The replicas must be restored in a later step.
phys-hahost1# metadb -s hahost1 -d replicas
phys-hahost1# metadb -s hahost2 -d replicas
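For example, if the saved metadb(1M) output shows a replica on slice c2t3d0s7 (an illustrative slice), you would delete it with:

phys-hahost1# metadb -s hahost1 -d c2t3d0s7   # example slice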
Run the metastat(1M) command to determine all the metadevice components on affected disks.
Direct the output from metastat(1M) to a temporary file so that you can use the information later when deleting and re-adding the metadevices.
phys-hahost1# metastat -s hahost1 > /usr/tmp/replicalog1
phys-hahost1# metastat -s hahost2 > /usr/tmp/replicalog2
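A simple way to pick out the affected components is to search the saved output for the controller in question; this sketch assumes the affected drives all appear with the c2 prefix:

phys-hahost1# grep c2t /usr/tmp/replicalog1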
Take offline all submirrors containing affected disks.
Use the temporary files to create a script to take offline all affected submirrors in the disk expansion unit. If only a few submirrors exist, run the metaoffline(1M) command to take each offline. The following is a sample script.
#!/bin/sh
# metaoffline -s <diskset> <mirror> <submirror>
metaoffline -s hahost1 d15 d35
metaoffline -s hahost2 d15 d35
...
Spin down the SPARCstorage Array disks in the tray using the luxadm(1M) command.
phys-hahost1# luxadm stop -t 2 c2
Add the new disk.
Use the instructions in your multihost disk expansion unit service manual to perform the hardware procedure of adding the disk. After the addition:
Make sure all disks in the tray spin up.
The disks in the SPARCstorage Array tray should spin up automatically, but if the tray fails to spin up within two minutes, force the action using the following command:
phys-hahost1# luxadm start -t 2 c2
Bring the submirrors back online.
Modify the script that you created to take the submirrors offline so that it brings them back online.
#!/bin/sh
# metaonline -s <diskset> <mirror> <submirror>
metaonline -s hahost1 d15 d35
metaonline -s hahost2 d15 d35
...
Restore the hot spares that you deleted earlier in this procedure.
phys-hahost1# metahs -s hahost1 -a hot-spare-pool components
phys-hahost1# metahs -s hahost2 -a hot-spare-pool components
Restore the original count of metadevice state database replicas to the devices in the tray.
These are the replicas that you deleted earlier; use the saved metadb(1M) output to restore the original count.
phys-hahost1# metadb -s hahost1 -a replicas
phys-hahost1# metadb -s hahost2 -a replicas
Run the drvconfig(1M) and disks(1M) commands to create the new entries in /devices, /dev/dsk, and /dev/rdsk for all new disks.
phys-hahost1# drvconfig
phys-hahost1# disks
Switch ownership of the logical host that will include the new disk to the other node connected to the SPARCstorage Array.
This assumes a topology in which each disk is connected to two nodes.
phys-hahost1# haswitch phys-hahost2 hahost2
Run the drvconfig(1M) and disks(1M) commands on the cluster node that now owns the diskset to which this disk will be added.
phys-hahost2# drvconfig
phys-hahost2# disks
Run the scdidadm(1M) command to initialize the new disk for use by the DID pseudo driver.
You must run scdidadm(1M) on Node 0 in the cluster. Refer to the Sun Cluster 2.2 Software Installation Guide for details on the DID pseudo driver.
phys-hahost2# scdidadm -r
Add the disk to a diskset.
The command syntax is as follows, where diskset is the name of the diskset to which the disk is being added, and drive is the DID name of the disk in the form dN (for new installations of Sun Cluster) or cNtYdZ (for installations upgraded from HA 1.3):
# metaset -s diskset -a drive
The metaset(1M) command might repartition this disk automatically. See the Solstice DiskSuite documentation for more information.
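For example, for a new Sun Cluster installation in which scdidadm(1M) assigned the new disk the DID name d20 (an illustrative name; use the name reported on your cluster), the command would be:

phys-hahost2# metaset -s hahost2 -a d20   # d20 is an example DID name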
Use the scadmin(1M) command to reserve and enable failfast on the specified disk that has just been added to the diskset.
phys-hahost2# scadmin reserve cNtXdYsZ
Perform the usual administration actions on the new disk.
You can now perform the usual administration steps that bring a new drive into service. These include partitioning the disk, adding it to the configuration as a hot spare, or configuring it as a metadevice. See the Solstice DiskSuite documentation for more information on these tasks.
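For example, to add a slice of the new disk to an existing hot spare pool (the pool name hsp000 is illustrative):

phys-hahost2# metahs -s hahost2 -a hsp000 c2t3d1s0   # example pool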
If necessary, switch logical hosts back to their default masters.
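For example, to return hahost1 to phys-hahost1 (assuming, for illustration, that phys-hahost1 is its default master):

phys-hahost2# haswitch phys-hahost1 hahost1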
These are the high-level steps to add a multihost disk to a VxVM configuration:
Switching logical hosts to one cluster node
Identifying the controller for this new disk and locating an empty slot in the tray or enclosure
For Model 100 series SPARCstorage Arrays, preparing the disk enclosure for removal of a disk tray
For Model 200 series SPARCstorage Arrays with wide differential SCSI disk trays, powering down the controller and all attached disks
Identifying VxVM objects on the affected tray
Stopping I/O to volumes with subdisks on the affected tray
Adding the new disk
Returning the affected drives to service
Spinning up all drives
Bringing back online all affected VxVM objects
Performing the administrative actions to prepare the disk for use by Sun Cluster
Creating the /devices special files and /dev/dsk and /dev/rdsk links
Scanning for the new disk
Adding the disk to VM control
Formatting and partitioning the disk, if necessary
Performing the volume manager-related administrative tasks
These are the detailed steps to add a new multihost disk to a VxVM configuration.
Switch ownership of the logical host that will include the new disk, and of any other logical host with disks in the affected tray, to one node in the cluster.
In this example, both logical hosts are switched to phys-hahost1.
phys-hahost1# haswitch phys-hahost1 hahost1 hahost2
In a mirrored configuration, you might not need to switch logical hosts, as long as the node is not shut down while you perform the procedure.
Determine the controller number of the tray to which the disk will be added.
SPARCstorage Arrays are assigned World Wide Names (WWNs). The WWN shown on the front of the SPARCstorage Array also appears as part of the /devices path, to which the /dev entry containing the controller number is symbolically linked. For example:
phys-hahost1# ls -l /dev/rdsk | grep -i WWN | tail -1
If the WWN on the front of the SPARCstorage Array is 36cc, the following output is displayed, and the controller number is c2:
phys-hahost1# ls -l /dev/rdsk | grep -i 36cc | tail -1
lrwxrwxrwx  1 root  root  94 Jun 25 22:39 c2t5d2s7 -> ../../devices/io-unit@f,e1200000/sbi@0,0/SUNW,soc@3,0/SUNW,pln@a0000800,201836cc/ssd@5,2:h,raw
phys-hahost1#
Use the luxadm(1M) command with the display option to view the empty slots.
If you can add the disk without affecting other drives, skip ahead to the step in which you add the new disk.
phys-hahost1# luxadm display c2

          SPARCstorage Array Configuration
...
                    DEVICE STATUS
          TRAY 1         TRAY 2         TRAY 3
slot
1         Drive: 0,0     Drive: 2,0     Drive: 4,0
2         Drive: 0,1     Drive: 2,1     Drive: 4,1
3         NO SELECT      NO SELECT      NO SELECT
4         NO SELECT      NO SELECT      NO SELECT
5         NO SELECT      NO SELECT      NO SELECT
6         Drive: 1,0     Drive: 3,0     Drive: 5,0
7         Drive: 1,1     NO SELECT      NO SELECT
8         NO SELECT      NO SELECT      NO SELECT
9         NO SELECT      NO SELECT      NO SELECT
10        NO SELECT      NO SELECT      NO SELECT
...
The empty slots are shown with a NO SELECT status. The output shown here is from a SPARCstorage Array 110; your display will be slightly different if you are using a different series SPARCstorage Array.
Determine the tray to which you will add the new disk.
In the remainder of the procedure, Tray 2 is used as an example. The slot selected for the new disk is Tray 2 Slot 7. The new disk will be known as c2t3d1.
Identify all the volumes and corresponding plexes on the disks in the tray that will contain the new disk.
From the physical device address cNtNdN, obtain the controller number and the target number.
In this example, the controller number is 2 and the target is 3.
Identify devices from a vxdisk list output.
Here is an example of how vxdisk can be used to obtain the information.
# vxdisk -g diskgroup -q list | nawk '/^c2/ {print $3}'
Record the volume media name for the disks from the output of the command.
Identify all plexes on the above devices by using the appropriate version (csh, ksh, or Bourne shell) of the following command.
PLLIST=`vxprint -ptq -g diskgroup -e '(aslist.sd_dm_name in ("c2t3d0")) && (pl_kstate=ENABLED)' | nawk '{print $2}'` |
For csh, the syntax is set PLLIST .... For ksh, the syntax is export PLLIST= .... The Bourne shell requires the command export PLLIST after the variable is set.
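As a sketch, with the vxprint arguments from the previous command abbreviated to '...' for readability, the three variants look like this:

# Bourne shell
PLLIST=`vxprint -ptq -g diskgroup -e '...' | nawk '{print $2}'`
export PLLIST

# ksh
export PLLIST=`vxprint -ptq -g diskgroup -e '...' | nawk '{print $2}'`

# csh
set PLLIST = `vxprint -ptq -g diskgroup -e '...' | nawk '{print $2}'`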
After you have set the variable, stop I/O to the volumes whose components (subdisks) are on the tray.
Make sure all volumes associated with that tray are detached (mirrored or RAID5 configurations) or stopped (simple plexes). Issue the following command to detach a mirrored plex.
# vxplex -g diskgroup det ${PLLIST}
An alternate command for detaching each plex in a tray is:
# vxplex -g diskgroup -v volume det plex
To stop I/O to simple plexes, unmount any file systems or stop database access.
Mirrored volumes will still be active because the other half of the mirror is still available.
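For example, for a simple volume carrying a mounted file system, you might unmount the file system and then stop the volume; the mount point and volume name here are illustrative:

phys-hahost1# umount /hahost1/1          # example mount point
phys-hahost1# vxvol -g diskgroup stop volume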
Add the new disk.
Use the instructions in your multihost disk expansion unit service manual to perform the hardware procedure of adding the disk.
Make sure all disks in the tray spin up.
The disks in the SPARCstorage Array tray should spin up automatically, but if the tray fails to spin up within two minutes, force the action with the following command:
phys-hahost1# luxadm start -t 2 c2
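The high-level steps call for bringing the affected VxVM objects back online once the drives are spinning. As a sketch, each plex that was detached earlier with vxplex det can be reattached with a command of the following form, repeated for each volume and plex pair; VxVM then resynchronizes the reattached plexes:

phys-hahost1# vxplex -g diskgroup att volume plex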
Run the drvconfig(1M) and disks(1M) commands to create the new entries in /devices, /dev/dsk, and /dev/rdsk for all new disks.
phys-hahost1# drvconfig
phys-hahost1# disks
Force the VxVM configuration daemon, vxconfigd, to scan for the new disks.
phys-hahost1# vxdctl enable
Bring the new disk under VM control by using the vxdiskadd command.
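For example, using the disk added in this procedure:

phys-hahost1# vxdiskadd c2t3d1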
Perform the usual administration actions on the new disk.
You can now perform the usual administration steps that bring a new drive into service. These include partitioning the disk, adding it to the configuration as a hot spare, or configuring it as a plex.
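For example, to designate the new disk as a hot-relocation spare in its disk group (the disk media name disk01 is illustrative):

phys-hahost1# vxedit -g diskgroup set spare=on disk01   # example media name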
This completes the procedure of adding a multihost disk to an existing SPARCstorage Array.