Sun Cluster 3.0 12/01 Hardware Guide

How to Replace a Disk Drive in a StorEdge MultiPack Enclosure in a Running Cluster

Use this procedure to replace a StorEdge MultiPack enclosure disk drive. "Example--Replacing a StorEdge MultiPack Disk Drive" shows how to apply this procedure. Perform the steps in this procedure in conjunction with the procedures in Sun Cluster 3.0 12/01 System Administration Guide and your server hardware manual. Use the procedures in your server hardware manual to identify a failed disk drive.

For conceptual information on quorum, quorum devices, global devices, and device IDs, see the Sun Cluster 3.0 12/01 Concepts document.


Caution -

SCSI-reservation failures have been observed when clustering StorEdge MultiPack enclosures that contain a particular model of Quantum disk drive: SUN4.2G VK4550J. Avoid using this model of Quantum disk drive in clustered StorEdge MultiPack enclosures. If you do use this model of disk drive, you must set the scsi-initiator-id of the "first node" to 6. If you are using a six-slot StorEdge MultiPack enclosure, you must also set the enclosure for the 9-through-14 SCSI target address range (for more information, see the Sun StorEdge MultiPack Storage Guide).
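
For example, you might set the scsi-initiator-id of the "first node" to 6 from the OpenBoot PROM, as in the following sketch. Note that setting the variable globally in this way affects every SCSI bus on the node; the Sun StorEdge MultiPack Storage Guide describes how to limit the setting to specific host adapters with an nvramrc script.


ok setenv scsi-initiator-id 6
scsi-initiator-id =   6
ok reset-all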


  1. Identify the disk drive that needs replacement.

    If the disk error message reports the drive problem by device ID (DID), use the scdidadm -l command to determine the Solaris logical device name. If the disk error message reports the drive problem by the Solaris physical device name, use your Solaris documentation to map the Solaris physical device name to the Solaris logical device name. Use this Solaris logical device name and DID throughout this procedure.


    # scdidadm -l deviceID
    
  2. Determine if the disk drive you want to replace is a quorum device.


    # scstat -q
    
    • If the disk drive you want to replace is a quorum device, put the quorum device into maintenance state before you go to Step 3; a sample command follows this list. For the procedure on putting a quorum device into maintenance state, see the Sun Cluster 3.0 12/01 System Administration Guide.

    • If the disk is not a quorum device, go to Step 3.
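
    For example, assuming the disk drive to be replaced corresponds to DID device d20 (as in the example at the end of this procedure), the following scconf(1M) command places the quorum device into maintenance state. This command is a sketch; see the Sun Cluster 3.0 12/01 System Administration Guide for the complete procedure.


    # scconf -c -q globaldev=d20,maintstate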

  3. If possible, back up the metadevice or volume.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.
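
    For example, if you use Solstice DiskSuite and the affected metadevice contains a UFS file system, the backup might look like the following sketch. The metadevice name d10 and the tape device /dev/rmt/0 are hypothetical; substitute the names from your configuration.


    # ufsdump 0ucf /dev/rmt/0 /dev/md/rdsk/d10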

  4. Perform volume management administration to remove the disk drive from the configuration.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.
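
    For example, with Solstice DiskSuite you might remove the drive from its diskset with the metaset(1M) command after you delete or detach any metadevice components on the drive; with VERITAS Volume Manager you might use the vxdiskadm menu option for removing a disk for replacement. In the following sketch, the diskset name schost-1 is hypothetical.


    # metaset -s schost-1 -d c3t2d0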

  5. Identify the failed disk drive's physical DID.

    Use this physical DID in Step 12 to verify that the failed disk drive has been replaced with a new disk drive.


    # scdidadm -o diskid -l cNtXdY
    
  6. If you are using Solstice DiskSuite as your volume manager, save the disk partitioning for use when you partition the new disk drive.

    If you are using VERITAS Volume Manager, skip this step and go to Step 7.


    # prtvtoc /dev/rdsk/cNtXdYsZ > filename
    

    Note -

    Do not save this file under /tmp because the contents of /tmp are lost when you reboot, and you reboot the nodes later in this procedure. Instead, save this file under /usr/tmp.


  7. Replace the failed disk drive.

    For more information, see the Sun StorEdge MultiPack Storage Guide.

  8. On one node that is attached to the StorEdge MultiPack enclosure, run the devfsadm(1M) command to probe all devices and to write the new disk drive to the /dev/rdsk directory.

    Depending on the number of devices that are connected to the node, the devfsadm command can take five minutes or more to complete.


    # devfsadm
    
  9. If you are using Solstice DiskSuite as your volume manager, from any node that is connected to the StorEdge MultiPack enclosure, partition the new disk drive by using the partitioning you saved in Step 6.

    If you are using VERITAS Volume Manager, skip this step and go to Step 10.


    # fmthard -s filename /dev/rdsk/cNtXdYsZ
    
  10. One at a time, shut down and reboot the nodes that are connected to the StorEdge MultiPack enclosure.


    # scswitch -S -h nodename
    # shutdown -y -g0 -i6
    

    For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.

  11. From any node that is connected to the disk drive, update the DID database.


    # scdidadm -R deviceID
    
  12. From any node, confirm that the failed disk drive has been replaced by comparing the new physical DID to the physical DID that was identified in Step 5.

    If the new physical DID is different from the physical DID in Step 5, you successfully replaced the failed disk drive with a new disk drive.


    # scdidadm -o diskid -l cNtXdY
    
  13. On all connected nodes, upload the new information to the DID driver.

    If a volume management daemon such as vold is running on your node, and you have a CD-ROM drive that is connected to the node, a device busy error might be returned even if no disk is in the drive. This error is expected behavior.


    # scdidadm -ui
    
  14. Perform volume management administration to add the disk drive back to its diskset or disk group.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.
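
    For example, with Solstice DiskSuite you might add the drive back to the hypothetical diskset schost-1 and then re-enable its submirror components with the metareplace(1M) command. The mirror name d10 and the slice are hypothetical.


    # metaset -s schost-1 -a c3t2d0
    # metareplace -s schost-1 -e d10 c3t2d0s0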

  15. If you want this new disk drive to be a quorum device, add the quorum device.

    For the procedure on adding a quorum device, see the Sun Cluster 3.0 12/01 System Administration Guide.
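
    For example, the following scconf(1M) command configures DID device d20 as a quorum device. This command is a sketch; see the Sun Cluster 3.0 12/01 System Administration Guide for the complete procedure.


    # scconf -a -q globaldev=d20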

Example--Replacing a StorEdge MultiPack Disk Drive

The following example shows how to apply the procedure for replacing a StorEdge MultiPack enclosure disk drive.


# scdidadm -l d20
20       phys-schost-2:/dev/rdsk/c3t2d0 /dev/did/rdsk/d20
# scdidadm -o diskid -l c3t2d0
5345414741544520393735314336343734310000
# prtvtoc /dev/rdsk/c3t2d0s2 > /usr/tmp/c3t2d0.vtoc 
...
# devfsadm
# fmthard -s /usr/tmp/c3t2d0.vtoc /dev/rdsk/c3t2d0s2
# scswitch -S -h node1
# shutdown -y -g0 -i6
...
# scdidadm -R d20
# scdidadm -o diskid -l c3t2d0
5345414741544520393735314336363037370000
# scdidadm -ui