The maintenance procedures in FRUs That Do Not Require Sun Cluster Maintenance Procedures are performed the same as in a noncluster environment. Table 1–2 lists the procedures that require cluster-specific steps.
Table 1–2 Task Map: Maintaining a Storage Array
Task
- Remove a storage array
- Replace a storage array
- Add a disk drive
- Remove a disk drive
- Replace a disk drive
Each storage device has a different set of FRUs that do not require cluster-specific procedures.
The following is a list of administrative tasks that require no cluster-specific procedures. See the Sun StorEdge A5000 Installation and Service Manual for the following procedures.
This procedure relies on the following prerequisites and assumptions.
Your cluster is operational.
You want to retain the existing disk drives in the storage array.
If you want to replace your disk drives, see How to Replace a Disk Drive.
Example 1–1 shows you how to apply this procedure.
If possible, back up the metadevices or volumes that reside in the storage array.
For more information, see your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation.
Perform volume management administration to remove the storage array from the configuration.
For more information, see your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation.
On all nodes that are connected to the storage array, run the luxadm remove_device command.
# luxadm remove_device -F boxname
See Example 1–1 for an example of this command and its use.
Disconnect the fiber-optic cables from the storage array.
Power off and disconnect the storage array from the AC power source.
For more information, see your storage documentation. For a list of storage documentation, see Related Documentation.
Connect the fiber optic cables to the new storage array.
Connect the new storage array to an AC power source.
One disk drive at a time, remove the disk drives from the old storage array. Insert the disk drives into the same slots in the new storage array.
Power on the storage array.
Use the luxadm insert_device command to find the new storage array.
Repeat this step for each node that is connected to the storage array.
# luxadm insert_device
See Example 1–1 for an example of this command and its use.
On all nodes that are connected to the new storage array, upload the new information to the DID driver.
If a volume management daemon such as vold is running on your node, and you have a CD-ROM drive that is connected to the node, a device busy error might be returned even if no disk is in the drive. This error is an expected behavior.
# scgdevs
Perform volume management administration to add the new storage array to the configuration.
For more information, see your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation.
The following example shows how to replace a Sun StorEdge A5x00 storage array. The storage array to be replaced is venus1.
# luxadm remove_device -F venus1

WARNING!!! Please ensure that no filesystems are mounted on these device(s).
All data on these devices should have been backed up.

The list of devices that will be removed is:

  1: Box name:    venus1
     Node WWN:    123456789abcdeff
     Device Type: SENA (SES device)
     SES Paths:
      /devices/nodes@1/sbus@1f,0/SUNW,socal@1,0/sf@0,0/ses@w123456789abcdf03,0:0
      /devices/nodes@1/sbus@1f,0/SUNW,socal@1,0/sf@1,0/ses@w123456789abcdf00,0:0

Please verify the above list of devices and then enter 'c' or <CR>
to Continue or 'q' to Quit. [Default: c]: <Return>

Hit <Return> after removing the device(s). <Return>

# luxadm insert_device
Please hit <RETURN> when you have finished adding
Fibre Channel Enclosure(s)/Device(s): <Return>

# scgdevs
Use this procedure to remove a storage array from a cluster. Example 1–2 shows you how to apply this procedure. Use the procedures in your server hardware manual to identify the storage array.
Perform volume management administration to remove the storage array from the configuration.
For more information, see your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation.
On all nodes that are connected to the storage array, run the luxadm remove_device command.
# luxadm remove_device -F boxname
Remove the storage array and the fiber-optic cables that are connected to the storage array.
For more information, see your storage documentation. For a list of storage documentation, see Related Documentation.
If you are using your storage arrays in a SAN-configured cluster, you must keep two FC switches configured in parallel. This configuration maintains cluster availability. See SAN Solutions in a Sun Cluster Environment in Sun Cluster 3.0-3.1 Hardware Administration Manual for Solaris OS for more information.
On all nodes, remove references to the storage array.
# devfsadm -C
# scdidadm -C
If necessary, remove any unused host adapters from the nodes.
For the procedure about how to remove host adapters, see the documentation that shipped with your nodes.
The following example shows how to remove a Sun StorEdge A5x00 storage array. The storage array to be removed is venus1.
# luxadm remove_device -F venus1

WARNING!!! Please ensure that no file systems are mounted on these device(s).
All data on these devices should have been backed up.

The list of devices that will be removed is:

  1: Storage Array: venus1
     Node WWN:      123456789abcdeff
     Device Type:   SENA (SES device)
     SES Paths:
      /devices/nodes@1/sbus@1f,0/SUNW,socal@1,0/sf@0,0/ses@w123456789abcdf03,0:0
      /devices/nodes@1/sbus@1f,0/SUNW,socal@1,0/sf@1,0/ses@w123456789abcdf00,0:0

Please verify the above list of devices and then enter 'c' or <CR>
to Continue or 'q' to Quit. [Default: c]: <Return>

Hit <Return> after removing the device(s). <Return>

# devfsadm -C
# scdidadm -C
For conceptual information about quorums, quorum devices, global devices, and device IDs, see your Sun Cluster concepts documentation. For a list of Sun Cluster documentation, see Related Documentation.
This procedure assumes that your cluster is operational.
On one node that is connected to the storage array, install the new disk drive.
Install the new disk drive. Press the Return key when prompted. You can insert multiple disk drives at the same time.
# luxadm insert_device enclosure,slot
On all other nodes that are attached to the storage array, probe all devices and write the new disk drive to the /dev/rdsk directory.
The amount of time that the devfsadm command requires to complete its processing depends on the number of devices that are connected to the node. Expect at least five minutes.
# devfsadm -C
Ensure that entries for the disk drive have been added to the /dev/rdsk directory.
# ls -l /dev/rdsk
If necessary, partition the disk drive.
You can use the format(1M) command, or you can copy the partitioning from another disk drive in the storage array.
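As a sketch of the copy option, the volume table of contents (VTOC) of an existing drive in the array can be read with prtvtoc and written to the new drive with fmthard. The device paths below are hypothetical placeholders; substitute the slice 2 (whole-disk) paths of your own drives.

```shell
# Hypothetical device paths - substitute your own drives.
SRC=/dev/rdsk/c1t32d0s2   # existing, already-partitioned drive
DST=/dev/rdsk/c1t33d0s2   # newly installed drive

# Read the source drive's VTOC and stamp it onto the new drive.
prtvtoc "$SRC" | fmthard -s - "$DST"
```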
From any node in the cluster, update the global device namespace.
If a volume management daemon such as vold is running on your node, and you have a CD-ROM drive connected to the node, a device busy error might be returned even if no disk is in the drive. This error is an expected behavior.
# scgdevs
Verify that a device ID (DID) has been assigned to the disk drive.
# scdidadm -l
The DID that was assigned to the new disk drive might not be in sequential order in the storage array.
Perform necessary volume management administration actions on the new disk drive.
For more information, see your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation.
For conceptual information about quorum, quorum devices, global devices, and device IDs, see your Sun Cluster concepts documentation. For a list of Sun Cluster documentation, see Related Documentation.
Example 1–3 shows you how to apply this procedure.
This procedure assumes that your cluster is operational.
Is the disk drive that you want to remove a quorum device?
# scstat -q
If no, proceed to Step 2.
If yes, choose and configure another device to be the new quorum device. Then remove the old quorum device.
For procedures about how to add and remove quorum devices, see Sun Cluster system administration documentation.
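On Sun Cluster 3.x systems, that swap can be sketched with the scconf command. The DID instance names d20 (replacement device) and d4 (old device) below are hypothetical examples; confirm the exact syntax against your Sun Cluster system administration documentation before running it.

```shell
# Sketch only - the DID instance names are hypothetical examples.
# Add the replacement quorum device first, then remove the old one,
# so the cluster never drops below its required quorum votes.
scconf -a -q globaldev=d20    # configure the new quorum device
scconf -r -q globaldev=d4     # unconfigure the old quorum device
```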
If possible, back up the metadevice or volume.
For more information, see your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation.
Perform volume management administration to remove the disk drive from the configuration.
For more information, see your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation.
Identify the disk drive that needs to be removed.
If the disk error message reports the drive problem by DID, determine the Solaris device name.
# scdidadm -l deviceID
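The Solaris device name is the last path component of the second field in the scdidadm -l output. As an illustration, the following sketch extracts it from a sample output line taken from the example later in this section; on a live cluster you would pipe the live command output in instead.

```shell
# Sample line of `scdidadm -l d4` output, from the example in this section.
sample='4        phys-schost-2:/dev/rdsk/c1t32d0 /dev/did/rdsk/d4'

# The second whitespace-delimited field is host:/dev/rdsk/cNtXdY;
# the Solaris device name is its last /-separated component.
echo "$sample" | awk '{n = split($2, parts, "/"); print parts[n]}'
```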
On any node that is connected to the storage array, run the luxadm remove_device command.
Remove the disk drive. Press the Return key when prompted.
# luxadm remove_device -F /dev/rdsk/cNtXdYsZ
On all connected nodes, remove references to the disk drive.
# devfsadm -C
# scdidadm -C
The following example shows how to remove a disk drive from a Sun StorEdge A5x00 storage array. The disk drive to be removed is d4.
# scdidadm -l d4
4        phys-schost-2:/dev/rdsk/c1t32d0 /dev/did/rdsk/d4
# luxadm remove_device -F /dev/rdsk/c1t32d0s2

WARNING!!! Please ensure that no file systems are mounted on these device(s).
All data on these devices should have been backed up.

The list of devices that will be removed is:

  1: Box Name "venus1" front slot 0

Please enter 'q' to Quit or <Return> to Continue: <Return>

stopping:  Drive in "venus1" front slot 0....Done
offlining: Drive in "venus1" front slot 0....Done

Hit <Return> after removing the device(s). <Return>

Drive in Box Name "venus1" front slot 0
Logical Nodes being removed under /dev/dsk/ and /dev/rdsk:
        c1t32d0s0
        c1t32d0s1
        c1t32d0s2
        c1t32d0s3
        c1t32d0s4
        c1t32d0s5
        c1t32d0s6
        c1t32d0s7
# devfsadm -C
# scdidadm -C
For conceptual information about quorum, quorum devices, global devices, and device IDs, see your Sun Cluster concepts documentation.
This procedure assumes that your cluster is operational.
Identify the disk drive that needs replacement.
If the disk error message reports the drive problem by device ID (DID), determine the Solaris logical device name. If the disk error message reports the drive problem by the Solaris physical device name, use your Solaris documentation to map the Solaris physical device name to the Solaris logical device name. Use this Solaris logical device name and DID throughout this procedure.
# scdidadm -l deviceID
Is the disk drive you are replacing a quorum device?
# scstat -q
If no, proceed to Step 3.
If yes, add a new quorum device on a different storage array. Remove the old quorum device.
For procedures about how to add and remove quorum devices, see Sun Cluster system administration documentation.
If possible, back up the metadevice or volume.
For more information, see your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation.
Identify the failed disk drive's physical DID.
Use this physical DID in Step 11 to verify that the failed disk drive has been replaced with a new disk drive. The DID and the world wide name (WWN) for the disk drive are the same.
# scdidadm -o diskid -l cNtXdY
Which volume manager are you using?
If VERITAS Volume Manager, proceed to Step 6.
If Solstice DiskSuite/Solaris Volume Manager, save the disk partitioning information to partition the new disk drive.
# prtvtoc /dev/rdsk/cNtXdYs2 > filename
You can also use the format utility to save the disk's partition information.
On any node that is connected to the storage array, remove the disk drive when prompted.
# luxadm remove_device -F /dev/rdsk/cNtXdYs2
After running the command, warning messages might display. These messages can be ignored.
On any node that is connected to the storage array, run the luxadm insert_device command. Add the new disk drive when prompted.
# luxadm insert_device boxname,fslotnumber

or

# luxadm insert_device boxname,rslotnumber
If you are inserting a front disk drive, use the fslotnumber parameter. If you are inserting a rear disk drive, use the rslotnumber parameter.
On all other nodes that are attached to the storage array, probe all devices and write the new disk drive to the /dev/rdsk directory.
The amount of time that the devfsadm command requires to complete depends on the number of devices that are connected to the node. Expect at least five minutes.
# devfsadm -C
Which volume manager are you using?
If VERITAS Volume Manager, proceed to Step 10.
If Solstice DiskSuite/Solaris Volume Manager, on one node that is connected to the storage array, partition the new disk drive. Use the partitioning information you saved in Step 5.
# fmthard -s filename /dev/rdsk/cNtXdYs2
You can also use the format utility to partition the new disk drive.
From all nodes that are connected to the storage array, update the DID database and driver.
# scdidadm -R deviceID
After you run scdidadm -R on the first node, each subsequent node on which you run the command might display the warning device id for the device matches the database. You can safely ignore this warning.
On any node, confirm that the failed disk drive has been replaced. Compare the following physical DID to the physical DID in Step 4.
If the following physical DID is different from the physical DID in Step 4, you successfully replaced the failed disk drive with a new disk drive.
# scdidadm -o diskid -l cNtXdY
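The comparison in this step can be scripted. The sketch below assumes that you captured the scdidadm -o diskid output into shell variables before the swap (Step 4) and after it (this step); the WWN strings shown are hypothetical placeholders for that output.

```shell
# Hypothetical WWN strings standing in for `scdidadm -o diskid -l cNtXdY`
# output captured before and after the physical swap.
old_id='123456789abcdeff'   # failed drive, recorded in Step 4
new_id='20000020370c1234'   # drive now occupying the slot

# A differing device ID confirms that a new drive is in place.
if [ "$old_id" != "$new_id" ]; then
    echo 'replacement verified: device IDs differ'
else
    echo 'WARNING: device ID unchanged - drive may not have been replaced'
fi
```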
Perform volume management administration to add the disk drive back to its diskset or disk group.
For more information, see your Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager documentation.
If you want this new disk drive to be a quorum device, add the quorum device.
For the procedure about how to add a quorum device, see Sun Cluster system administration documentation.