Sun Cluster 3.0 U1 Hardware Guide

How to Replace a Disk Drive in a StorEdge A5x00 Disk Array in a Running Cluster

Use this procedure to replace a StorEdge A5x00 disk array disk drive. "Example--Replacing a StorEdge A5x00 Disk Drive" shows you how to apply this procedure. Perform the steps in this procedure in conjunction with the procedures in the Sun Cluster 3.0 U1 System Administration Guide and your server hardware manual. Use the procedures in your server hardware manual to identify a failed disk drive.

For conceptual information on quorums, quorum devices, global devices, and device IDs, see the Sun Cluster 3.0 U1 Concepts document.

  1. Identify the disk drive that needs replacement.

    If the disk error message reports the drive problem by device ID (DID), use the scdidadm -l command to determine the Solaris logical device name. If the disk error message reports the drive problem by the Solaris physical device name, use your Solaris documentation to map the Solaris physical device name to the Solaris logical device name. Use this Solaris logical device name and DID throughout this procedure.


    # scdidadm -l deviceID
    
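
    If only the Solaris physical device name is reported, one way to map it to the Solaris logical device name is to search the symbolic links under /dev/rdsk for that physical path. This is a minimal sketch; physical-device-name is a placeholder for the name in the error message.


    # ls -l /dev/rdsk | grep physical-device-name
    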
  2. Determine if the disk drive you are replacing is a quorum device.


    # scstat -q
    
    • If the disk drive you are replacing is a quorum device, put the quorum device into maintenance state before you go to Step 3. For the procedure on putting a quorum device into maintenance state, see the Sun Cluster 3.0 U1 System Administration Guide. A command sketch follows this list.

    • If the disk you are replacing is not a quorum device, go to Step 3.

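    The following is a minimal sketch of putting a quorum device into maintenance state with the scconf command, assuming the quorum device is DID device d4 (substitute your own device name). See the Sun Cluster 3.0 U1 System Administration Guide for the complete procedure.


    # scconf -c -q globaldev=d4,maintstate
    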
  3. If possible, back up the metadevice or volume.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

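    As a hedged example only, a Solstice DiskSuite metadevice that holds a UFS file system could be backed up with ufsdump; the metadevice name d10 and the tape device /dev/rmt/0 are placeholders.


    # ufsdump 0ucf /dev/rmt/0 /dev/md/rdsk/d10
    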
  4. Perform volume management administration to remove the disk drive from the configuration.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

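    As a hedged sketch, with Solstice DiskSuite you might first check which metadevices use the disk and then delete any state database replicas on it; with VERITAS Volume Manager, the interactive vxdiskadm utility provides a remove-a-disk-for-replacement option. The device name and slice shown are examples only; follow your volume manager documentation for the supported steps.


    # metastat | grep cNtXdY
    # metadb -d cNtXdYs7
    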
  5. Identify the failed disk drive's physical DID.

    Use this physical DID in Step 14 to verify that the failed disk drive has been replaced with a new disk drive. The DID and the World Wide Name (WWN) for the disk drive should be the same.


    # scdidadm -o diskid -l cNtXdY
    
  6. If you are using Solstice DiskSuite as your volume manager, save the disk partitioning for use when partitioning the new disk drive.

    If you are using VERITAS Volume Manager, go to Step 7.


    # prtvtoc /dev/rdsk/cNtXdYsZ > filename
    
  7. On any node that is connected to the StorEdge A5x00 disk array, run the luxadm remove command.


    # luxadm remove -F /dev/rdsk/cNtXdYsZ
    
  8. Replace the failed disk drive.

    For the procedure on replacing a disk drive, see the Sun StorEdge A5000 Installation and Service Manual.

  9. On any node that is connected to the StorEdge A5x00 disk array, run the luxadm insert command.


    # luxadm insert boxname,fslotnumber
    # luxadm insert boxname,rslotnumber
    

    If you are inserting a front disk drive, use the fslotnumber parameter. If you are inserting a rear disk drive, use the rslotnumber parameter.

  10. On all other nodes that are attached to the StorEdge A5x00 disk array, run the devfsadm(1M) command to probe all devices and to write the new disk drive to the /dev/rdsk directory.

    Depending on the number of devices that are connected to the node, the devfsadm command can require at least five minutes to complete.


    # devfsadm
    
  11. If you are using Solstice DiskSuite as your volume manager, on one node that is connected to the StorEdge A5x00 disk array, partition the new disk drive by using the partitioning you saved in Step 6.

    If you are using VERITAS Volume Manager, go to Step 12.


    # fmthard -s filename /dev/rdsk/cNtXdYsZ
    
  12. One at a time, shut down and reboot the nodes that are connected to the StorEdge A5x00 disk array.


    # scswitch -S -h nodename
    # shutdown -y -g0 -i6
    

    For more information on shutdown procedures, see the Sun Cluster 3.0 U1 System Administration Guide.

  13. On any of the nodes that are connected to the StorEdge A5x00 disk array, update the DID database.


    # scdidadm -R deviceID
    
  14. On any node, confirm that the failed disk drive has been replaced by comparing the physical DID reported by the following command to the physical DID that you recorded in Step 5.

    If the two physical DIDs are different, you successfully replaced the failed disk drive with a new disk drive.


    # scdidadm -o diskid -l cNtXdY
    
  15. On all nodes that are connected to the StorEdge A5x00 disk array, upload the new information to the DID driver.

    If a volume management daemon such as vold is running on your node, and a CD-ROM drive is connected to the node, a device busy error might be returned even if no disk is in the drive. This error is expected behavior.


    # scdidadm -ui
    
  16. Perform volume management administration to add the disk drive back to its diskset or disk group.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

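    As a hedged sketch, with Solstice DiskSuite the replaced slices of a submirror can be re-enabled with metareplace, where the mirror name d10 and the slice are examples only; with VERITAS Volume Manager, the interactive vxdiskadm utility provides a replace-a-failed-or-removed-disk option.


    # metareplace -e d10 cNtXdYsZ
    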
  17. If you want this new disk drive to be a quorum device, add the quorum device.

    For the procedure on adding a quorum device, see the Sun Cluster 3.0 U1 System Administration Guide.

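    The following is a minimal sketch of adding DID device d4 as a quorum device with the scconf command (d4 is an example name); see the Sun Cluster 3.0 U1 System Administration Guide, or the interactive scsetup utility, for the complete procedure.


    # scconf -a -q globaldev=d4
    
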
Example--Replacing a StorEdge A5x00 Disk Drive

The following example shows how to apply the procedure for replacing a StorEdge A5x00 disk array disk drive.


# scstat -q
# scdidadm -l d4
4        phys-schost-2:/dev/rdsk/c1t32d0 /dev/did/rdsk/d4
# scdidadm -o diskid -l c1t32d0
2000002037000edf
# prtvtoc /dev/rdsk/c1t32d0s2 > /usr/tmp/c1t32d0.vtoc 
# luxadm remove -F /dev/rdsk/c1t32d0s2
WARNING!!! Please ensure that no filesystems are mounted on these device(s). All data on these devices should have been backed up.

The list of devices that will be removed is:
  1: Box Name "venus1" front slot 0

Please enter 'q' to Quit or <Return> to Continue: <Return>

stopping:  Drive in "venus1" front slot 0....Done
offlining: Drive in "venus1" front slot 0....Done

Hit <Return> after removing the device(s). <Return>

Drive in Box Name "venus1" front slot 0
Logical Nodes being removed under /dev/dsk/ and /dev/rdsk:
        c1t32d0s0
        c1t32d0s1
        c1t32d0s2
        c1t32d0s3
        c1t32d0s4
        c1t32d0s5
        c1t32d0s6
        c1t32d0s7

# devfsadm
# fmthard -s /usr/tmp/c1t32d0.vtoc /dev/rdsk/c1t32d0s2
# scswitch -S -h node1
# shutdown -y -g0 -i6
# scdidadm -R d4
# scdidadm -o diskid -l c1t32d0
20000020370bf955
# scdidadm -ui