This section describes the procedures for maintaining a StorEdge A5x00 array. Table 6-1 lists these procedures.
Table 6-1 Task Map: Maintaining a Sun StorEdge A5x00 Array
Task | For Instructions, Go To
---|---
Add a disk drive | "How to Add a Disk Drive to a StorEdge A5x00 Array in a Running Cluster"
Replace a disk drive | "How to Replace a Disk Drive in a StorEdge A5x00 Array in a Running Cluster"
Remove a disk drive | "How to Remove a Disk Drive From a StorEdge A5x00 Array in a Running Cluster"
Add a StorEdge A5x00 array | "How to Add the First StorEdge A5x00 Array to a Running Cluster" or "How to Add a StorEdge A5x00 Array to a Running Cluster That Has Existing StorEdge A5x00 Arrays"
Replace a StorEdge A5x00 array | "How to Replace a StorEdge A5x00 Array in a Running Cluster"
Remove a StorEdge A5x00 array | "How to Remove a StorEdge A5x00 Array From a Running Cluster"
How to Add a Disk Drive to a StorEdge A5x00 Array in a Running Cluster

Use this procedure to add a disk drive to a running cluster. Perform the steps in this procedure in conjunction with the procedures in the Sun Cluster 3.0 12/01 System Administration Guide and your server hardware manual.
For conceptual information on quorums, quorum devices, global devices, and device IDs, see the Sun Cluster 3.0 12/01 Concepts document.
On one node that is connected to the StorEdge A5x00 array, use the luxadm insert_device (1M) command to install the new disk.
Physically install the new disk drive and press Return when prompted. Using the luxadm insert_device command, you can insert multiple disk drives at the same time.
# luxadm insert_device enclosure,slot
On all other nodes that are attached to the StorEdge A5x00 array, run the devfsadm(1M) command to probe all devices and to write the new disk drive to the /dev/rdsk directory.
Depending on the number of devices connected to the node, the devfsadm command can require at least five minutes to complete.
# devfsadm
Ensure that entries for the disk drive have been added to the /dev/rdsk directory.
# ls -l /dev/rdsk
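As a quick sanity check, the following sketch counts the slice entries that appear for the new drive; a fully enumerated Solaris drive exposes slices s0 through s7. The device name and listing are illustrative stand-ins; on the node, you would generate the listing with ls /dev/rdsk.

```shell
# Verify that all eight slice entries (s0-s7) exist for the new drive.
# "dev" and "listing" are illustrative; on a real node, produce the
# listing with: ls /dev/rdsk
dev=c1t32d0
listing='c1t32d0s0 c1t32d0s1 c1t32d0s2 c1t32d0s3 c1t32d0s4 c1t32d0s5 c1t32d0s6 c1t32d0s7'
n=$(printf '%s\n' $listing | grep -c "^${dev}s")
echo "slice entries for $dev: $n"
```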
If necessary, partition the disk drive.
You can either use the format(1M) command or copy the partitioning from another disk drive in the StorEdge A5x00 array.
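The copy approach is commonly done with the standard Solaris prtvtoc and fmthard utilities. The sketch below builds the command for review rather than executing it; the source and destination device names are hypothetical.

```shell
# Sketch: copy the VTOC (partition table) from an existing drive (src) to
# the new drive (dst). Device names are hypothetical; the pipeline is
# echoed for review here rather than run.
src=c1t32d0
dst=c1t33d0
cmd="prtvtoc /dev/rdsk/${src}s2 | fmthard -s - /dev/rdsk/${dst}s2"
echo "$cmd"
```

Review the echoed command, then run it on a node that is connected to the array.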
From any node in the cluster, update the global device namespace.
If a volume management daemon such as vold is running on your node, and you have a CD-ROM drive connected to the node, a device busy error might be returned even if no disk is in the drive. This error is an expected behavior.
# scgdevs
Verify that a device ID (DID) has been assigned to the disk drive.
# scdidadm -l
The DID that was assigned to the new disk drive might not be in sequential order in the StorEdge A5x00 array.
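The DID-to-device mapping can be pulled out of scdidadm -l output with a short script. The sample line below mirrors the output format shown in the examples later in this section; the instance number and device names are illustrative.

```shell
# Extract the Solaris logical device name for a given DID instance from
# `scdidadm -l` output. The sample line mirrors the format used in the
# examples in this section.
scdidadm_out='4        phys-schost-2:/dev/rdsk/c1t32d0 /dev/did/rdsk/d4'
instance=4
logical=$(printf '%s\n' "$scdidadm_out" |
  awk -v i="$instance" '$1 == i { sub(".*/", "", $2); print $2 }')
echo "DID d$instance -> $logical"
```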
Perform necessary volume management administration actions on the new disk drive.
For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.
To configure a disk drive as a quorum device, see the Sun Cluster 3.0 12/01 System Administration Guide for the procedure on adding a quorum device.
How to Replace a Disk Drive in a StorEdge A5x00 Array in a Running Cluster

Use this procedure to replace a StorEdge A5x00 array disk drive. "Example--Replacing a StorEdge A5x00 Disk Drive" shows you how to apply this procedure. Perform the steps in this procedure in conjunction with the procedures in the Sun Cluster 3.0 12/01 System Administration Guide and your server hardware manual. Use the procedures in your server hardware manual to identify a failed disk drive.
For conceptual information on quorums, quorum devices, global devices, and device IDs, see the Sun Cluster 3.0 12/01 Concepts document.
Identify the disk drive that needs replacement.
If the disk error message reports the drive problem by device ID (DID), use the scdidadm -l command to determine the Solaris logical device name. If the disk error message reports the drive problem by the Solaris physical device name, use your Solaris documentation to map the Solaris physical device name to the Solaris logical device name. Use this Solaris logical device name and DID throughout this procedure.
# scdidadm -l deviceID
Determine if the disk drive you are replacing is a quorum device.
# scstat -q
If the disk drive you are replacing is a quorum device, put the quorum device into maintenance state before you go to Step 3. For the procedure on putting a quorum device into maintenance state, see the Sun Cluster 3.0 12/01 System Administration Guide.
If the disk drive you are replacing is not a quorum device, go to Step 3.
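The decision above can be scripted by searching the quorum status for the drive's DID device. The status text below is an illustrative stand-in for scstat -q output; the exact format may differ on your cluster.

```shell
# Decide whether the drive to be replaced serves as a quorum device by
# searching the quorum status for its DID device. "scstat_out" is an
# illustrative stand-in for real `scstat -q` output.
scstat_out='Quorum votes by device:
Device votes:      /dev/did/rdsk/d4s2   1   1'
did=d4
if printf '%s\n' "$scstat_out" | grep -q "/dev/did/rdsk/${did}s"; then
  quorum=yes
  echo "$did is a quorum device: put it into maintenance state first"
else
  quorum=no
  echo "$did is not a quorum device: proceed to the next step"
fi
```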
If possible, back up the metadevice or volume.
For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.
Perform volume management administration to remove the disk drive from the configuration.
For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.
Identify the failed disk drive's physical DID.
Use this physical DID in Step 14 to verify that the failed disk drive has been replaced with a new disk drive. The DID and the World Wide Name (WWN) for the disk drive should be the same.
# scdidadm -o diskid -l cNtXdY
If you are using Solstice DiskSuite as your volume manager, save the disk partitioning for use when partitioning the new disk drive.
If you are using VERITAS Volume Manager, go to Step 7.
# prtvtoc /dev/rdsk/cNtXdYsZ > filename
On any node that is connected to the StorEdge A5x00 array, run the luxadm remove_device command.
# luxadm remove_device -F /dev/rdsk/cNtXdYsZ
Replace the failed disk drive.
For the procedure on replacing a disk drive, see the Sun StorEdge A5000 Installation and Service Manual.
On any node that is connected to the StorEdge A5x00 array, run the luxadm insert_device command.
# luxadm insert_device boxname,rslotnumber
# luxadm insert_device boxname,fslotnumber
If you are inserting a front disk drive, use the fslotnumber parameter. If you are inserting a rear disk drive, use the rslotnumber parameter.
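The front/rear naming convention above can be captured in a small helper that builds the enclosure,slot argument. The box name and slot numbers in the usage lines are illustrative.

```shell
# Build the enclosure,slot argument for luxadm insert_device from the box
# name, drive position (front or rear), and slot number, following the
# f/r prefix convention described above. Example values are illustrative.
slot_arg() {
  # usage: slot_arg boxname front|rear slotnumber
  case $2 in
    front) printf '%s,f%s\n' "$1" "$3" ;;
    rear)  printf '%s,r%s\n' "$1" "$3" ;;
  esac
}
slot_arg venus1 front 0   # venus1,f0
slot_arg venus1 rear 3    # venus1,r3
```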
On all other nodes that are attached to the StorEdge A5x00 array, run the devfsadm(1M) command to probe all devices and to write the new disk drive to the /dev/rdsk directory.
Depending on the number of devices that are connected to the node, the devfsadm command can require at least five minutes to complete.
# devfsadm
If you are using Solstice DiskSuite as your volume manager, on one node that is connected to the StorEdge A5x00 array, partition the new disk drive by using the partitioning you saved in Step 6.
If you are using VERITAS Volume Manager, go to Step 12.
# fmthard -s filename /dev/rdsk/cNtXdYsZ
One at a time, shut down and reboot the nodes that are connected to the StorEdge A5x00 array.
# scswitch -S -h nodename
# shutdown -y -g0 -i6
For more information on shutdown procedures, see the Sun Cluster 3.0 12/01 System Administration Guide.
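The one-node-at-a-time requirement can be sketched as a rolling loop. Node names are illustrative, and the commands are echoed for review rather than executed; in practice you must wait for each node to rejoin the cluster before moving to the next.

```shell
# Rolling-reboot sketch: evacuate resource groups from each connected node,
# then reboot it, one node at a time. Node names are illustrative; commands
# are echoed for review, not executed.
for node in node1 node2; do
  echo "on $node: scswitch -S -h $node && shutdown -y -g0 -i6"
  # In practice: wait here for $node to rejoin the cluster before continuing.
done
```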
On any of the nodes that are connected to the StorEdge A5x00 array, update the DID database.
# scdidadm -R deviceID
On any node, confirm that the failed disk drive has been replaced by comparing the following physical DID to the physical DID in Step 5.
If the following physical DID is different from the physical DID in Step 5, you successfully replaced the failed disk drive with a new disk drive.
# scdidadm -o diskid -l cNtXdY
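The comparison can be made explicit with a simple check. The two DID values below are taken from the example session in this section; a changed value indicates a successful swap.

```shell
# Confirm the replacement by comparing the WWN-based physical DID recorded
# before the swap (Step 5) with the one reported afterward. Values are from
# the example session in this section.
old_did=2000002037000edf
new_did=20000020370bf955
if [ "$old_did" != "$new_did" ]; then
  echo "replacement succeeded: physical DID changed"
else
  echo "WARNING: physical DID unchanged; the drive may not have been replaced"
fi
```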
On all nodes that are connected to the StorEdge A5x00 array, upload the new information to the DID driver.
If a volume management daemon such as vold is running on your node, and you have a CD-ROM drive that is connected to the node, a device busy error might be returned even if no disk is in the drive. This error is an expected behavior.
# scdidadm -ui
Perform volume management administration to add the disk drive back to its diskset or disk group.
For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.
If you want this new disk drive to be a quorum device, add the quorum device.
For the procedure on adding a quorum device, see the Sun Cluster 3.0 12/01 System Administration Guide.
Example--Replacing a StorEdge A5x00 Disk Drive

The following example shows how to apply the procedure for replacing a StorEdge A5x00 array disk drive.

# scstat -q
# scdidadm -l d4
4        phys-schost-2:/dev/rdsk/c1t32d0 /dev/did/rdsk/d4
# scdidadm -o diskid -l c1t32d0
2000002037000edf
# prtvtoc /dev/rdsk/c1t32d0s2 > /usr/tmp/c1t32d0.vtoc
# luxadm remove_device -F /dev/rdsk/c1t32d0s2

WARNING!!! Please ensure that no filesystems are mounted on these device(s).
All data on these devices should have been backed up.

The list of devices that will be removed is:
  1: Box Name "venus1" front slot 0

Please enter 'q' to Quit or <Return> to Continue: <Return>

stopping:  Drive in "venus1" front slot 0....Done
offlining: Drive in "venus1" front slot 0....Done

Hit <Return> after removing the device(s). <Return>

Drive in Box Name "venus1" front slot 0
Logical Nodes being removed under /dev/dsk/ and /dev/rdsk:
  c1t32d0s0
  c1t32d0s1
  c1t32d0s2
  c1t32d0s3
  c1t32d0s4
  c1t32d0s5
  c1t32d0s6
  c1t32d0s7

# devfsadm
# fmthard -s /usr/tmp/c1t32d0.vtoc /dev/rdsk/c1t32d0s2
# scswitch -S -h node1
# shutdown -y -g0 -i6
# scdidadm -R d4
# scdidadm -o diskid -l c1t32d0
20000020370bf955
# scdidadm -ui
How to Remove a Disk Drive From a StorEdge A5x00 Array in a Running Cluster

Use this procedure to remove a disk drive from a StorEdge A5x00 array. "Example--Removing a StorEdge A5x00 Disk Drive" shows you how to apply this procedure. Perform the steps in this procedure in conjunction with the procedures in the Sun Cluster 3.0 12/01 System Administration Guide and your server hardware manual.
For conceptual information on quorum, quorum devices, global devices, and device IDs, see the Sun Cluster 3.0 12/01 Concepts document.
Determine if the disk drive you are removing is a quorum device.
# scstat -q
If the disk drive you are removing is a quorum device, put the quorum device into maintenance state before you go to Step 2. For the procedure on putting a quorum device into maintenance state, see the Sun Cluster 3.0 12/01 System Administration Guide.
If the disk drive you are removing is not a quorum device, go to Step 2.
If possible, back up the metadevice or volume.
For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.
Perform volume management administration to remove the disk drive from the configuration.
For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.
Identify the disk drive that needs to be removed.
If the disk error message reports the drive problem by DID, use the scdidadm -l command to determine the Solaris device name.
# scdidadm -l deviceID
On any node that is connected to the StorEdge A5x00 array, run the luxadm remove_device command.
Physically remove the disk drive, then press Return when prompted.
# luxadm remove_device -F /dev/rdsk/cNtXdYsZ
On all connected nodes, remove references to the disk drive.
# devfsadm -C
# scdidadm -C
Example--Removing a StorEdge A5x00 Disk Drive

The following example shows how to apply the procedure for removing a StorEdge A5x00 array disk drive.

# scdidadm -l d4
4        phys-schost-2:/dev/rdsk/c1t32d0 /dev/did/rdsk/d4
# luxadm remove_device -F /dev/rdsk/c1t32d0s2

WARNING!!! Please ensure that no filesystems are mounted on these device(s).
All data on these devices should have been backed up.

The list of devices that will be removed is:
  1: Box Name "venus1" front slot 0

Please enter 'q' to Quit or <Return> to Continue: <Return>

stopping:  Drive in "venus1" front slot 0....Done
offlining: Drive in "venus1" front slot 0....Done

Hit <Return> after removing the device(s). <Return>

Drive in Box Name "venus1" front slot 0
Logical Nodes being removed under /dev/dsk/ and /dev/rdsk:
  c1t32d0s0
  c1t32d0s1
  c1t32d0s2
  c1t32d0s3
  c1t32d0s4
  c1t32d0s5
  c1t32d0s6
  c1t32d0s7

# devfsadm -C
# scdidadm -C
How to Add the First StorEdge A5x00 Array to a Running Cluster

Use this procedure to install a StorEdge A5x00 array in a running cluster that does not yet have a StorEdge A5x00 array installed.
If you are installing a StorEdge A5x00 array in a running cluster that already has StorEdge A5x00 arrays installed and configured, use the procedure in "How to Add a StorEdge A5x00 Array to a Running Cluster That Has Existing StorEdge A5x00 Arrays".
Perform the steps in this procedure in conjunction with the procedures in the Sun Cluster 3.0 12/01 System Administration Guide and your server hardware manual.
Determine if the StorEdge A5x00 array packages need to be installed on the nodes to which you are connecting the StorEdge A5x00 array. This product requires the following packages.
# pkginfo | egrep Wlux
system  SUNWluxd   Sun Enterprise Network Array sf Device Driver
system  SUNWluxdx  Sun Enterprise Network Array sf Device Driver (64-bit)
system  SUNWluxl   Sun Enterprise Network Array socal Device Driver
system  SUNWluxlx  Sun Enterprise Network Array socal Device Driver (64-bit)
system  SUNWluxop  Sun Enterprise Network Array firmware and utilities
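The check can be automated by comparing an installed-package list against the required set above. The "installed" list below is an illustrative stand-in for real pkginfo output; on a node you might collect it with pkginfo piped through awk.

```shell
# Check an installed-package list against the required StorEdge A5x00
# packages listed above. "installed" is an illustrative stand-in for the
# node's real package list (e.g. from: pkginfo | awk '/Wlux/ { print $2 }').
required='SUNWluxd SUNWluxdx SUNWluxl SUNWluxlx SUNWluxop'
installed='SUNWluxd SUNWluxdx SUNWluxl SUNWluxop'
missing=''
for pkg in $required; do
  case " $installed " in
    *" $pkg "*) ;;                       # package present
    *) missing="$missing $pkg" ;;        # package absent
  esac
done
echo "missing packages:$missing"
```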
On each node, install any necessary packages for the Solaris operating environment.
The StorEdge A5x00 array packages are located in the Product directory of the CD-ROM. Use the pkgadd command to add any necessary packages.
# pkgadd -d path_to_Solaris/Product Pkg1 Pkg2 Pkg3 ... PkgN

path_to_Solaris: Path to the Solaris operating environment
Pkg1 Pkg2 Pkg3 ... PkgN: The packages to be added
Shut down and power off any node that is connected to the StorEdge A5x00 array.
# scswitch -S -h nodename
# shutdown -y -g0 -i0
For more information on shutdown procedures, see the Sun Cluster 3.0 12/01 System Administration Guide.
Install host adapters in the node that is to be connected to the StorEdge A5x00 array.
For the procedure on installing host adapters, see the documentation that shipped with your network adapters and nodes.
Cable, configure, and power on the StorEdge A5x00 array.
For more information, see the Sun StorEdge A5000 Installation and Service Manual and the Sun StorEdge A5000 Configuration Guide.
Figure 6-2 shows a sample StorEdge A5x00 array configuration.
Cabling procedures are different if you are adding StorEdge A5200 arrays in a SAN by using two Sun StorEdge Network FC Switch-8 or Switch-16 switches and Sun SAN Version 3.0 release software. (StorEdge A5000 and A5100 arrays are not supported by the Sun SAN 3.0 release at this time.) See "StorEdge A5200 Array SAN Considerations" for more information.
Power on and boot the node.
ok boot -r
For the procedures on powering on and booting a node, see the Sun Cluster 3.0 12/01 System Administration Guide.
Determine if any patches need to be installed on the nodes that are to be connected to the StorEdge A5x00 array.
For a list of patches specific to Sun Cluster, see the Sun Cluster 3.0 12/01 Release Notes.
Obtain and install any necessary patches on the nodes that are to be connected to the StorEdge A5x00 array.
For procedures on applying patches, see the Sun Cluster 3.0 12/01 System Administration Guide.
Read any README files that accompany the patches before you begin this installation. Some patches must be installed in a specific order.
If required by the patch README instructions, shut down and reboot the node.
# scswitch -S -h nodename
# shutdown -y -g0 -i6
For more information on shutdown procedures, see the Sun Cluster 3.0 12/01 System Administration Guide.
Perform Step 3 through Step 9 for each node that is attached to the StorEdge A5x00 array.
Perform volume management administration to add the disk drives in the StorEdge A5x00 array to the volume management configuration.
For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.
How to Add a StorEdge A5x00 Array to a Running Cluster That Has Existing StorEdge A5x00 Arrays

Use this procedure to install a StorEdge A5x00 array in a running cluster that already has StorEdge A5x00 arrays installed and configured.
If you are installing the first StorEdge A5x00 array in a running cluster that does not yet have a StorEdge A5x00 array installed, use the procedure in "How to Add the First StorEdge A5x00 Array to a Running Cluster".
Perform the steps in this procedure in conjunction with the procedures in the Sun Cluster 3.0 12/01 System Administration Guide and your server hardware manual.
Configure the new StorEdge A5x00 array.
Each array in the loop must have a unique box ID. If necessary, use the front-panel module (FPM) to change the box ID for the new StorEdge A5x00 array you are adding. For more information about StorEdge A5x00 loops and general configuration, see the Sun StorEdge A5000 Configuration Guide and the Sun StorEdge A5000 Installation and Service Manual.
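The uniqueness requirement can be checked mechanically once the box IDs are collected. The ID list below is illustrative; on a real cluster you would gather the IDs with luxadm probe or from each array's front-panel module.

```shell
# Check that every array on the loop has a unique box ID. The ID list is
# illustrative; collect real IDs with luxadm probe or from each array's FPM.
box_ids='0 1 1 2'
dups=$(printf '%s\n' $box_ids | sort | uniq -d)
if [ -n "$dups" ]; then
  echo "duplicate box ID(s): $dups"
else
  echo "all box IDs unique"
fi
```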
On both nodes, use the luxadm insert_device command to add the new array to the cluster and to add paths to its disk drives.
# luxadm insert_device
Please hit <RETURN> when you have finished adding Fibre Channel Enclosure(s)/Device(s):
Do not press Return until after you have completed Step 3.
Cable the new StorEdge A5x00 array to a spare port in the existing hub, switch, or host adapter in your cluster.
For more information, see the Sun StorEdge A5000 Installation and Service Manual and the Sun StorEdge A5000 Configuration Guide.
Cabling procedures are different if you are adding StorEdge A5200 arrays in a SAN by using two Sun StorEdge Network FC Switch-8 or Switch-16 switches and Sun SAN Version 3.0 release software. (StorEdge A5000 and A5100 arrays are not supported by the Sun SAN 3.0 release at this time.) See "StorEdge A5200 Array SAN Considerations" for more information.
After you have finished cabling the new array, press Return to complete the luxadm insert_device operation (sample output shown below).
Waiting for Loop Initialization to complete...
New Logical Nodes under /dev/dsk and /dev/rdsk :
  c4t98d0s0
  c4t98d0s1
  c4t98d0s2
  c4t98d0s3
  c4t98d0s4
  c4t98d0s5
  c4t98d0s6
  ...
New Logical Nodes under /dev/es:
  ses12
  ses13
On both nodes, use the luxadm probe command to verify that the new StorEdge A5x00 array is recognized by both cluster nodes.
# luxadm probe
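The verification amounts to comparing the enclosure list each node reports. The lists below are illustrative stand-ins for the box names parsed from each node's luxadm probe output.

```shell
# Verify that both nodes see the same set of enclosures by comparing the
# box names each node reports. The lists are illustrative stand-ins for
# names parsed from each node's `luxadm probe` output.
node1_boxes='venus1 venus2'
node2_boxes='venus1 venus2'
if [ "$node1_boxes" = "$node2_boxes" ]; then
  echo "both nodes recognize the same enclosures"
else
  echo "WARNING: enclosure lists differ between nodes"
fi
```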
On one node, use the scgdevs command to update the DID database.
# scgdevs
How to Replace a StorEdge A5x00 Array in a Running Cluster

Use this procedure to replace a failed StorEdge A5x00 array in a running cluster. "Example--Replacing a StorEdge A5x00 Array" shows you how to apply this procedure. This procedure assumes that you are retaining the disk drives.
If you are replacing your disk drives, see "How to Replace a Disk Drive in a StorEdge A5x00 Array in a Running Cluster".
If possible, back up the metadevices or volumes that reside in the StorEdge A5x00 array.
For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.
Perform volume management administration to remove the StorEdge A5x00 array from the configuration.
For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.
On all nodes that are connected to the StorEdge A5x00 array, run the luxadm remove_device command.
# luxadm remove_device -F boxname
Disconnect the fiber optic cables from the StorEdge A5x00 array.
Power off and disconnect the StorEdge A5x00 array from the AC power source.
For more information, see the Sun StorEdge A5000 Installation and Service Manual and the Sun StorEdge A5000 Configuration Guide.
Connect the fiber optic cables to the new StorEdge A5x00 array.
Connect the new StorEdge A5x00 array to an AC power source.
One at a time, move the disk drives from the old StorEdge A5x00 disk array to the same slots in the new StorEdge A5x00 disk array.
Power on the StorEdge A5x00 array.
Use the luxadm insert_device command to find the new StorEdge A5x00 array.
Repeat this step for each node that is connected to the StorEdge A5x00 array.
# luxadm insert_device
On all nodes that are connected to the new StorEdge A5x00 array, upload the new information to the DID driver.
If a volume management daemon such as vold is running on your node, and you have a CD-ROM drive that is connected to the node, a device busy error might be returned even if no disk is in the drive. This error is an expected behavior.
# scgdevs
Perform volume management administration to add the new StorEdge A5x00 array to the configuration.
For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.
Example--Replacing a StorEdge A5x00 Array

The following example shows how to apply the procedure for replacing a StorEdge A5x00 array.

# luxadm remove_device -F venus1

WARNING!!! Please ensure that no filesystems are mounted on these device(s).
All data on these devices should have been backed up.

The list of devices that will be removed is:
  1: Box name:    venus1
     Node WWN:    123456789abcdeff
     Device Type: SENA (SES device)
     SES Paths:
      /devices/nodes@1/sbus@1f,0/SUNW,socal@1,0/sf@0,0/ses@w123456789abcdf03,0:0
      /devices/nodes@1/sbus@1f,0/SUNW,socal@1,0/sf@1,0/ses@w123456789abcdf00,0:0

Please verify the above list of devices and then enter 'c' or <CR>
to Continue or 'q' to Quit. [Default: c]: <Return>

Hit <Return> after removing the device(s). <Return>

# luxadm insert_device
Please hit <RETURN> when you have finished adding Fibre Channel Enclosure(s)/Device(s): <Return>
# scgdevs
How to Remove a StorEdge A5x00 Array From a Running Cluster

Use this procedure to remove a StorEdge A5x00 array from a cluster. "Example--Removing a StorEdge A5x00 Array" shows you how to apply this procedure. Use the procedures in your server hardware manual to identify the StorEdge A5x00 array.
Perform volume management administration to remove the StorEdge A5x00 array from the configuration.
For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.
On all nodes that are connected to the StorEdge A5x00 array, run the luxadm remove_device command.
# luxadm remove_device -F boxname
Remove the StorEdge A5x00 array and the fiber optic cables that are connected to the StorEdge A5x00 array.
For more information, see the Sun StorEdge A5000 Installation and Service Manual.
If you are using your StorEdge A5200 arrays in a SAN-configured cluster, you must keep two FC switches configured in parallel to maintain cluster availability. See "StorEdge A5200 Array SAN Considerations" for more information.
On all nodes, remove references to the StorEdge A5x00 array.
# devfsadm -C
# scdidadm -C
If necessary, remove any unused host adapters from the nodes.
For the procedure on removing host adapters, see the documentation that shipped with your nodes.
Example--Removing a StorEdge A5x00 Array

The following example shows how to apply the procedure for removing a StorEdge A5x00 array.

# luxadm remove_device -F venus1

WARNING!!! Please ensure that no filesystems are mounted on these device(s).
All data on these devices should have been backed up.

The list of devices that will be removed is:
  1: Box name:    venus1
     Node WWN:    123456789abcdeff
     Device Type: SENA (SES device)
     SES Paths:
      /devices/nodes@1/sbus@1f,0/SUNW,socal@1,0/sf@0,0/ses@w123456789abcdf03,0:0
      /devices/nodes@1/sbus@1f,0/SUNW,socal@1,0/sf@1,0/ses@w123456789abcdf00,0:0

Please verify the above list of devices and then enter 'c' or <CR>
to Continue or 'q' to Quit. [Default: c]: <Return>

Hit <Return> after removing the device(s). <Return>

# devfsadm -C
# scdidadm -C