Sun Cluster 3.0 U1 Hardware Guide

How to Add a Disk Drive to a StorEdge MultiPack Enclosure in a Running Cluster

Use this procedure to add a disk drive to a StorEdge MultiPack enclosure in a running cluster. Perform the steps in this procedure in conjunction with the procedures in the Sun Cluster 3.0 U1 System Administration Guide and your server hardware manual. "Example--Adding a StorEdge MultiPack Disk Drive" shows how to apply this procedure.

For conceptual information on quorum, quorum devices, global devices, and device IDs, see the Sun Cluster 3.0 U1 Concepts document.


Caution -

SCSI reservation failures have been observed when clustering StorEdge MultiPack enclosures that contain a particular model of Quantum disk drive: SUN4.2G VK4550J. Avoid using this model of Quantum disk drive for clustering with StorEdge MultiPack enclosures. If you do use this model of disk drive, you must set the scsi-initiator-id of the "first node" to 6. If you are using a six-slot StorEdge MultiPack enclosure, you must also set the enclosure for the 9-through-14 SCSI target address range. For more information, see the Sun StorEdge MultiPack Storage Guide.
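
As a sketch only: one way to set the global scsi-initiator-id on the affected node is from the OpenBoot PROM ok prompt, as shown below. This setting applies to every SCSI controller on that node; if only one controller should change, use the per-controller nvramrc procedure in the Sun StorEdge MultiPack Storage Guide instead.


ok setenv scsi-initiator-id 6
ok reset-all
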


  1. Locate an empty disk slot in the StorEdge MultiPack enclosure for the disk drive you want to add.

    Identify an empty slot either by observing the disk drive LEDs on the front of the StorEdge MultiPack enclosure or by removing the side cover of the unit. The target address IDs that correspond to the slots appear on the middle partition of the drive bay.

  2. Install the disk drive.

    For detailed instructions, see the documentation that shipped with your StorEdge MultiPack enclosure.

  3. On all nodes that are attached to the StorEdge MultiPack enclosure, configure the disk drive.


    # cfgadm -c configure cN
    # devfsadm
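
    If you are not sure which controller number to substitute for cN, you can first list the attachment points on the node. In the following sketch, c1 is only an example controller number taken from "Example--Adding a StorEdge MultiPack Disk Drive"; use the value that corresponds to your enclosure.


    # cfgadm -al
    # cfgadm -c configure c1     # c1 is an example controller number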
    
  4. On all nodes, ensure that entries for the disk drive have been added to the /dev/rdsk directory.


    # ls -l /dev/rdsk
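
    For example, if the new drive sits at target 13 on controller c2 (hypothetical values that match the sample output in "Example--Adding a StorEdge MultiPack Disk Drive"), you can narrow the listing to the new device nodes:


    # ls -l /dev/rdsk/c2t13d0s*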
    
  5. If necessary, use the format(1M) command or the fmthard(1M) command to partition the disk drive.
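
    For example, to copy the label of an existing drive of the same geometry onto the new drive, you can pipe the output of prtvtoc(1M) into fmthard(1M). The device names in this sketch are hypothetical; substitute the source and target drives for your configuration.


    # prtvtoc /dev/rdsk/c2t12d0s2 | fmthard -s - /dev/rdsk/c2t13d0s2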

  6. From any node, update the global device namespace.

    If a volume management daemon such as vold is running on your node and a CD-ROM drive is connected to the node, a device busy error might be returned even if no disk is in the drive. This error is expected behavior.


    # scgdevs
    
  7. On all nodes, verify that a device ID (DID) has been assigned to the disk drive.


    # scdidadm -l 
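
    To confirm that every node that is attached to the enclosure reports the same DID instance for the new drive, you can also list the device ID mappings for all nodes from any one node:


    # scdidadm -L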
    

    Note -

    As shown in "Example--Adding a StorEdge MultiPack Disk Drive", the DID 35 that is assigned to the new disk drive might not follow the sequential order of the other DIDs in the StorEdge MultiPack enclosure.


  8. Perform volume management administration to add the new disk drive to the configuration.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.
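
    As a hypothetical Solstice DiskSuite sketch only, a new DID device can be added to an existing diskset with the metaset(1M) command. The diskset name dg-schost-1 and the DID number d35 below are examples, not values prescribed by this procedure.


    # metaset -s dg-schost-1 -a /dev/did/rdsk/d35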

Example--Adding a StorEdge MultiPack Disk Drive

The following example shows how to apply the procedure for adding a StorEdge MultiPack enclosure disk drive.


# scdidadm -l
16       phys-circinus-3:/dev/rdsk/c2t0d0 /dev/did/rdsk/d16    
17       phys-circinus-3:/dev/rdsk/c2t1d0 /dev/did/rdsk/d17    
18       phys-circinus-3:/dev/rdsk/c2t2d0 /dev/did/rdsk/d18    
19       phys-circinus-3:/dev/rdsk/c2t3d0 /dev/did/rdsk/d19    
...
26       phys-circinus-3:/dev/rdsk/c2t12d0 /dev/did/rdsk/d26    
30       phys-circinus-3:/dev/rdsk/c1t2d0 /dev/did/rdsk/d30    
31       phys-circinus-3:/dev/rdsk/c1t3d0 /dev/did/rdsk/d31    
32       phys-circinus-3:/dev/rdsk/c1t10d0 /dev/did/rdsk/d32    
33       phys-circinus-3:/dev/rdsk/c0t0d0 /dev/did/rdsk/d33    
34       phys-circinus-3:/dev/rdsk/c0t6d0 /dev/did/rdsk/d34    
8190     phys-circinus-3:/dev/rmt/0     /dev/did/rmt/2   
# cfgadm -c configure c1
# devfsadm
# scgdevs
Configuring DID devices
Could not open /dev/rdsk/c0t6d0s2 to verify device id.
        Device busy
Configuring the /dev/global directory (global devices)
obtaining access to all attached disks
reservation program successfully exiting
# scdidadm -l
16       phys-circinus-3:/dev/rdsk/c2t0d0 /dev/did/rdsk/d16    
17       phys-circinus-3:/dev/rdsk/c2t1d0 /dev/did/rdsk/d17    
18       phys-circinus-3:/dev/rdsk/c2t2d0 /dev/did/rdsk/d18    
19       phys-circinus-3:/dev/rdsk/c2t3d0 /dev/did/rdsk/d19    
...
26       phys-circinus-3:/dev/rdsk/c2t12d0 /dev/did/rdsk/d26    
30       phys-circinus-3:/dev/rdsk/c1t2d0 /dev/did/rdsk/d30    
31       phys-circinus-3:/dev/rdsk/c1t3d0 /dev/did/rdsk/d31    
32       phys-circinus-3:/dev/rdsk/c1t10d0 /dev/did/rdsk/d32    
33       phys-circinus-3:/dev/rdsk/c0t0d0 /dev/did/rdsk/d33    
34       phys-circinus-3:/dev/rdsk/c0t6d0 /dev/did/rdsk/d34    
35       phys-circinus-3:/dev/rdsk/c2t13d0 /dev/did/rdsk/d35    
8190     phys-circinus-3:/dev/rmt/0     /dev/did/rmt/2       

Where to Go From Here

To configure a disk drive as a quorum device, see the Sun Cluster 3.0 U1 System Administration Guide for the procedure on adding a quorum device.
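
As a sketch of that procedure only, assuming that the new drive received device ID d35 as in "Example--Adding a StorEdge MultiPack Disk Drive", a quorum device is typically added from one node with the scconf(1M) command; see the Sun Cluster 3.0 U1 System Administration Guide for the authoritative steps and for quorum vote considerations.


# scconf -a -q globaldev=d35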