Sun Cluster 3.0 Hardware Guide

Chapter 6 Installing and Maintaining a Sun StorEdge A5x00 Disk Array

This chapter provides the procedures for installing and maintaining a Sun StorEdge(TM) A5x00 disk array.

This chapter contains the following procedures:

  "How to Install a StorEdge A5x00"
  "How to Add a StorEdge A5x00 Disk Drive"
  "How to Replace a StorEdge A5x00 Disk Drive"
  "How to Remove a StorEdge A5x00 Disk Drive"
  "How to Add a StorEdge A5x00 Disk Array to an Existing Cluster"
  "How to Replace a StorEdge A5x00 Disk Array"
  "How to Remove a StorEdge A5x00 Disk Array"

For conceptual information on multihost disks, see Sun Cluster 3.0 Concepts.

Installing a StorEdge A5x00

This section provides the procedure for an initial installation of a StorEdge A5x00 disk array. The following table lists the tasks involved in this installation.

Table 6-1 Task Map: Installing a StorEdge A5x00

  Task: Install the host adapters.
  For instructions, go to: the documentation that shipped with your nodes.

  Task: Cable and configure the disk array.
  For instructions, go to: Sun StorEdge A5000 Installation and Service Manual.

  Task: Check the hardware firmware levels, and install any required firmware updates.
  For instructions, go to: Sun Cluster 3.0 Release Notes.

How to Install a StorEdge A5x00

Use this procedure to install a StorEdge A5x00 disk array. Perform the steps in this procedure in conjunction with the procedures in Sun Cluster 3.0 Installation Guide and your server hardware manual.

  1. Install host adapters in the nodes that will be connected to the disk array.

    For the procedure on installing host adapters, see the documentation that shipped with your host adapters and nodes.


    Note -

    To ensure maximum redundancy, put each host adapter on a separate I/O board if possible.


  2. Cable, power on, and configure the disk array.

    Figure 6-1 shows a sample disk array configuration.

    For more information on cabling and configuring disk arrays, see Sun StorEdge A5000 Installation and Service Manual.

    Figure 6-1 Sample StorEdge A5x00 Disk Array Configuration


  3. Check the hardware firmware levels, and install any required firmware updates.

    For the location of patches and installation instructions, see Sun Cluster 3.0 Release Notes.

Where to Go From Here

To install software, follow the procedures in Sun Cluster 3.0 Installation Guide.

Maintaining a StorEdge A5x00 Disk Array

This section provides the procedures for maintaining a StorEdge A5x00 disk array. The following table lists these procedures.

Table 6-2 Task Map: Maintaining a StorEdge A5x00 Disk Array

  Task: Perform an initial installation.
  Use the scgdevs(1M) command to update the global device namespace without a reconfiguration reboot.
  For instructions, go to: "How to Install a StorEdge A5x00"

  Task: Add a disk drive.
  Use the scswitch(1M), shutdown(1M), and luxadm insert commands to add a disk drive to a disk array.
  For instructions, go to: "How to Add a StorEdge A5x00 Disk Drive"

  Task: Replace a disk drive.
  For instructions, go to: "How to Replace a StorEdge A5x00 Disk Drive"

  Task: Remove a disk drive.
  For instructions, go to: "How to Remove a StorEdge A5x00 Disk Drive"

  Task: Add a disk array to an existing cluster.
  Use the pkgadd(1M), shutdown(1M), scswitch(1M), and luxadm insert commands to add a disk array to an existing cluster.
  For instructions, go to: "How to Add a StorEdge A5x00 Disk Array to an Existing Cluster"

  Task: Replace a disk array in an existing cluster.
  Remove a disk array from the cluster configuration, and replace it with a new disk array.
  For instructions, go to: "How to Replace a StorEdge A5x00 Disk Array"

  Task: Remove a disk array from an existing cluster.
  Use the luxadm remove and devfsadm -C commands to remove a disk array without replacing it with another disk array.
  For instructions, go to: "How to Remove a StorEdge A5x00 Disk Array"

How to Add a StorEdge A5x00 Disk Drive

Use this procedure to add a disk drive to an existing cluster. Perform the steps in this procedure in conjunction with the procedures in Sun Cluster 3.0 System Administration Guide and your server hardware manual.

For conceptual information on quorum, quorum devices, global devices, and device IDs, see Sun Cluster 3.0 Concepts.

  1. On one node connected to the disk array, use the luxadm(1M) command to insert the new disk.

    Physically install the new disk drive, and press Return when prompted. Using the luxadm insert command, you can insert multiple disk drives at the same time.


    # luxadm insert enclosure,slot
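
    For example, a hypothetical invocation for a new drive in front slot 0 of an enclosure named venus1 (substitute your own enclosure name and slot number):

    # luxadm insert venus1,f0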
    

  2. On all other nodes attached to the disk array, run the devfsadm(1M) command to probe all devices and to write the new disk drive to the /dev/rdsk directory.

    Depending on the number of devices connected to the node, the devfsadm command can take at least five minutes to complete.


    # devfsadm
    
  3. Ensure that entries for the disk drive have been added to the /dev/rdsk directory.


    # ls -l /dev/rdsk
    

  4. If needed, partition the disk drive.

    You can use the format(1M) command, or you can copy the partitioning from another disk drive in the disk array.
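
    For example, a minimal sketch that copies the partition table from an existing drive in the array to the new drive, using the prtvtoc(1M) and fmthard(1M) commands shown elsewhere in this chapter (the device names cNtXdY and cNtXdZ and the file name are placeholders; substitute your own values):

    # prtvtoc /dev/rdsk/cNtXdYs2 > /usr/tmp/cNtXdY.vtoc
    # fmthard -s /usr/tmp/cNtXdY.vtoc /dev/rdsk/cNtXdZs2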

  5. From any node in the cluster, update the global device namespace.

    If a volume management daemon such as vold is running on your node, and you have a CD-ROM drive connected to the node, a device busy error might be returned even if no disk is in the drive. This error is expected behavior.


    # scgdevs
    
  6. Verify that a device ID (DID) has been assigned to the disk drive.


    # scdidadm -l 
    

    Note -

    The DID assigned to the new disk drive might not be in sequential order in the disk array.


  7. Perform the usual volume management administration actions on the new disk drive.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

  8. If you want this new disk drive to be a quorum device, add the quorum device.

    Refer to Sun Cluster 3.0 System Administration Guide for the procedure on adding a quorum device.
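
    As a sketch only, a quorum device is commonly added with the scconf(1M) command by naming its DID device; the exact syntax is an assumption to verify against Sun Cluster 3.0 System Administration Guide, and the instance d4 below is a placeholder:

    # scconf -a -q globaldev=d4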

How to Replace a StorEdge A5x00 Disk Drive

Use this procedure to replace a disk drive in a StorEdge A5x00 disk array. "Example--Replacing a StorEdge A5x00 Disk Drive" shows you how to apply this procedure. Perform the steps in this procedure in conjunction with the procedures in Sun Cluster 3.0 System Administration Guide and your server hardware manual. Use the procedures in your server hardware manual to identify a failed disk drive.

For conceptual information on quorum, quorum devices, global devices, and device IDs, see Sun Cluster 3.0 Concepts.

  1. Identify the disk drive that needs replacement.

    If the disk error message reports the drive problem by device ID (DID), use the scdidadm -l command to determine the Solaris logical device name. If the disk error message reports the drive problem by the Solaris physical device name, use your Solaris documentation to map the Solaris physical device name to the Solaris logical device name. Use this Solaris logical device name and DID throughout this procedure.


    # scdidadm -l deviceID
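
    For example, with a hypothetical DID instance d4 (the same values used in "Example--Replacing a StorEdge A5x00 Disk Drive"), the output maps the DID to its Solaris logical device name:

    # scdidadm -l d4
    4        phys-schost-2:/dev/rdsk/c1t32d0 /dev/did/rdsk/d4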
    
  2. Determine if the disk drive you want to replace is a quorum device.


    # scstat -q
    

    If the disk drive you want to replace is a quorum device, remove the quorum device before you proceed. Otherwise, proceed to Step 3.

    Refer to Sun Cluster 3.0 System Administration Guide for procedures on replacing a quorum device and putting a quorum device into maintenance state.

  3. If possible, back up the metadevice or volume.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

  4. Perform volume management administration to remove the disk drive from the configuration.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

  5. Identify the failed disk drive's physical DID.

    Use this physical DID in Step 14 to verify that the failed disk drive has been replaced with a new disk drive. The DID and the World Wide Name (WWN) for the disk drive should be the same.


    # scdidadm -o diskid -l cNtXdY
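
    For example, for the hypothetical device c1t32d0 used in "Example--Replacing a StorEdge A5x00 Disk Drive", the command prints the drive's WWN-based physical DID:

    # scdidadm -o diskid -l c1t32d0
    2000002037000edf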
    
  6. If you are using Solstice DiskSuite as your volume manager, save the disk partitioning for use when partitioning the new disk drive.

    If you are using VERITAS Volume Manager, proceed to Step 7.


    # prtvtoc /dev/rdsk/cNtXdYsZ > filename
    
  7. On any node connected to the disk array, run the luxadm remove command.


    # luxadm remove -F /dev/rdsk/cNtXdYsZ
    
  8. Replace the failed disk drive.

    For more information, see the documentation that shipped with your disk array.

  9. On any node connected to the disk array, run the luxadm insert command.


    # luxadm insert boxname,rslotnumber
    # luxadm insert boxname,fslotnumber
    

    If you want to insert a front disk drive, use the fslotnumber parameter. If you want to insert a rear disk drive, use the rslotnumber parameter.
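
    For example, a hypothetical invocation for a replacement drive in front slot 0 of an enclosure named venus1 (the box name and slot match the drive removed in "Example--Replacing a StorEdge A5x00 Disk Drive"; substitute your own values):

    # luxadm insert venus1,f0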

  10. On all other nodes attached to the disk array, run the devfsadm(1M) command to probe all devices and to write the new disk drive to the /dev/rdsk directory.

    Depending on the number of devices connected to the node, the devfsadm command can take at least five minutes to complete.


    # devfsadm
    
  11. If you are using Solstice DiskSuite as your volume manager, on one node connected to the disk array, partition the new disk drive, using the partitioning you saved in Step 6.

    If you are using VERITAS Volume Manager, proceed to Step 12.


    # fmthard -s filename /dev/rdsk/cNtXdYsZ
    
  12. One at a time, shut down and reboot the nodes connected to the disk array.


    # scswitch -S -h nodename
    # shutdown -y -g 0 -i 6
    

    For more information, see Sun Cluster 3.0 System Administration Guide.

  13. On any of the nodes connected to the disk array, update the DID database.


    # scdidadm -R deviceID
    
  14. On any node, confirm that the failed disk drive has been replaced by comparing the following physical DID to the physical DID in Step 5.

    If the following physical DID is different from the physical DID in Step 5, you successfully replaced the failed disk drive with a new disk drive.


    # scdidadm -o diskid -l cNtXdY
    
  15. On all nodes connected to the disk array, upload the new information to the DID driver.

    If a volume management daemon such as vold is running on your node, and you have a CD-ROM drive connected to the node, a device busy error might be returned even if no disk is in the drive. This error is expected behavior.


    # scdidadm -ui
    
  16. Perform volume management administration to add the disk drive back to its diskset or disk group.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

  17. If you want this new disk drive to be a quorum device, add the quorum device.

    Refer to Sun Cluster 3.0 System Administration Guide for the procedure on adding a quorum device.

Example--Replacing a StorEdge A5x00 Disk Drive

The following example shows how to apply the procedure for replacing a disk drive in a StorEdge A5x00 disk array.


# scstat -q
# scdidadm -l d4
4        phys-schost-2:/dev/rdsk/c1t32d0 /dev/did/rdsk/d4
# scdidadm -o diskid -l c1t32d0
2000002037000edf
# prtvtoc /dev/rdsk/c1t32d0s2 > /usr/tmp/c1t32d0.vtoc 
# luxadm remove -F /dev/rdsk/c1t32d0s2
WARNING!!! Please ensure that no filesystems are mounted on these device(s). All data on these devices should have been backed up.

The list of devices that will be removed is:  1: Box Name "venus1" front slot 0

Please enter 'q' to Quit or <Return> to Continue: <Return>

stopping:  Drive in "venus1" front slot 0....Done
offlining: Drive in "venus1" front slot 0....Done

Hit <Return> after removing the device(s). <Return>

Drive in Box Name "venus1" front slot 0
Logical Nodes being removed under /dev/dsk/ and /dev/rdsk:
        c1t32d0s0
        c1t32d0s1
        c1t32d0s2
        c1t32d0s3
        c1t32d0s4
        c1t32d0s5
        c1t32d0s6
        c1t32d0s7

# devfsadm
# fmthard -s /usr/tmp/c1t32d0.vtoc /dev/rdsk/c1t32d0s2
# scswitch -S -h node1
# shutdown -y -g 0 -i 6
# scdidadm -R d4
# scdidadm -o diskid -l c1t32d0
20000020370bf955
# scdidadm -ui

How to Remove a StorEdge A5x00 Disk Drive

Use this procedure to remove a disk drive from a disk array. "Example--Removing a StorEdge A5x00 Disk Drive" shows you how to apply this procedure. Perform the steps in this procedure in conjunction with the procedures in Sun Cluster 3.0 System Administration Guide and your server hardware manual.

For conceptual information on quorum, quorum devices, global devices, and device IDs, see Sun Cluster 3.0 Concepts.

  1. Determine if the disk drive you want to remove is a quorum device.


    # scstat -q
    

    If the disk drive you want to remove is a quorum device, remove the quorum device before you proceed. Otherwise, proceed to Step 2.

    Refer to Sun Cluster 3.0 System Administration Guide for procedures on removing a quorum device and putting a quorum device into maintenance state.

  2. If possible, back up the metadevice or volume.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

  3. Perform volume management administration to remove the disk drive from the configuration.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

  4. Identify the disk drive that needs to be removed.

    If the disk error message reports the drive problem by DID, use the scdidadm -l command to determine the Solaris device name.


    # scdidadm -l deviceID
    
  5. On any node connected to the disk array, run the luxadm remove command.

    Physically remove the disk drive, and press Return when prompted.


    # luxadm remove -F /dev/rdsk/cNtXdYsZ
    
  6. On all connected nodes, remove references to the disk drive.


    # devfsadm -C
    # scdidadm -C
    

Example--Removing a StorEdge A5x00 Disk Drive

The following example shows how to apply the procedure for removing a disk drive from a StorEdge A5x00 disk array.


# scdidadm -l d4
4        phys-schost-2:/dev/rdsk/c1t32d0 /dev/did/rdsk/d4
# luxadm remove -F /dev/rdsk/c1t32d0s2

WARNING!!! Please ensure that no filesystems are mounted on these device(s). All data on these devices should have been backed up.

The list of devices that will be removed is:  1: Box Name "venus1" front slot 0

Please enter 'q' to Quit or <Return> to Continue: <Return>

stopping:  Drive in "venus1" front slot 0....Done
offlining: Drive in "venus1" front slot 0....Done

Hit <Return> after removing the device(s). <Return>

Drive in Box Name "venus1" front slot 0
Logical Nodes being removed under /dev/dsk/ and /dev/rdsk:
        c1t32d0s0
        c1t32d0s1
        c1t32d0s2
        c1t32d0s3
        c1t32d0s4
        c1t32d0s5
        c1t32d0s6
        c1t32d0s7
# devfsadm -C
# scdidadm -C

How to Add a StorEdge A5x00 Disk Array to an Existing Cluster

Use this procedure to install a StorEdge A5x00 disk array in an existing cluster. Perform the steps in this procedure in conjunction with the procedures in Sun Cluster 3.0 System Administration Guide and your server hardware manual.

  1. Determine if the StorEdge A5x00 disk array packages need to be installed on the nodes to which you are connecting the disk array. The following packages are required.


    # pkginfo | egrep Wlux
    system   SUNWluxd    Sun Enterprise Network Array sf Device Driver
    system   SUNWluxdx   Sun Enterprise Network Array sf Device Driver (64-bit)
    system   SUNWluxl    Sun Enterprise Network Array socal Device Driver
    system   SUNWluxlx   Sun Enterprise Network Array socal Device Driver (64-bit)
    system   SUNWluxop   Sun Enterprise Network Array firmware and utilities

  2. On each node, install any needed packages for the Solaris operating environment.

    The disk array packages are located in the Product directory of the CD-ROM. Use the pkgadd command to add any necessary packages.


    # pkgadd -d path_to_Solaris/Product Pkg1 Pkg2 Pkg3 ... PkgN
    
    path_to_Solaris
      Path to the Solaris operating environment

    Pkg1 Pkg2
      The packages to be added
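
    For example, a hypothetical pkgadd invocation that installs all five packages listed in Step 1 from locally mounted Solaris media (the /cdrom/cdrom0/Solaris_8/Product path is an assumption; substitute the Product directory path for your Solaris release and media):

    # pkgadd -d /cdrom/cdrom0/Solaris_8/Product SUNWluxd SUNWluxdx SUNWluxl SUNWluxlx SUNWluxop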

  3. Shut down and power off any node connected to the disk array.


    # scswitch -S -h nodename
    # shutdown -y -g 0
    

    Refer to Sun Cluster 3.0 System Administration Guide for more information.

  4. Install host adapters in the node that will be connected to the disk array.

    For the procedure on installing host adapters, see the documentation that shipped with your host adapters and nodes.

  5. Cable, configure, and power on the disk array.

    For more information, see the documentation that shipped with your disk array.

    Figure 6-2 shows a sample disk array configuration.

    Figure 6-2 Sample StorEdge A5x00 Disk Array Configuration


  6. Power on and boot the node.


    ok boot -r
    

    For the procedures on powering on and booting a node, see Sun Cluster 3.0 System Administration Guide.

  7. Determine if any patches need to be installed on the node(s) that will be connected to the disk array.

    For a list of Sun Cluster-specific patches, see Sun Cluster 3.0 Release Notes.

  8. Obtain and install any needed patches on the nodes that will be connected to the disk array.

    For procedures on applying patches, see Sun Cluster 3.0 System Administration Guide.


    Caution -

    Read any README files that accompany the patches before you begin this installation. Some patches must be installed in a specific order.


  9. If required by the patch README instructions, shut down and reboot the node.


    # scswitch -S -h nodename
    # shutdown -y -g 0 -i 6
    
  10. Perform Step 3 through Step 9 for each node attached to the disk array.

  11. Perform volume management administration to add the disk drives in the array to the volume management configuration.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

How to Replace a StorEdge A5x00 Disk Array

Use this procedure to replace a failed StorEdge A5x00 disk array. "Example--Replacing a StorEdge A5x00 Disk Array" shows you how to apply this procedure. This procedure assumes that you want to retain the disk drives.

If you want to replace your disk drives, see "How to Replace a StorEdge A5x00 Disk Drive".

  1. If possible, back up the metadevices or volumes that reside in the disk array.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

  2. Perform volume management administration to remove the disk array from the configuration.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

  3. On all nodes connected to the disk array, run the luxadm remove command.


    # luxadm remove -F boxname
    
  4. Disconnect the fiber optic cables from the disk array.

  5. Power off, and disconnect the disk array from the AC power source.

    For more information, see the documentation that shipped with your disk array.

  6. Connect the fiber optic cables to the new disk array.

  7. Connect the new disk array to an AC power source.

    For more information, see the documentation that shipped with your disk array.

  8. One at a time, move the disk drives from the old disk array to the same slot in the new disk array.

  9. Power on the disk array.

  10. Use the luxadm insert command to find the new disk array.

    Repeat this step for each node connected to the disk array.


    # luxadm insert
    

  11. On all nodes connected to the new disk array, upload the new information to the DID driver.

    If a volume management daemon such as vold is running on your node, and you have a CD-ROM drive connected to the node, a device busy error might be returned even if no disk is in the drive. This error is expected behavior.


    # scgdevs
    
  12. Perform volume management administration to add the new disk array to the configuration.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

Example--Replacing a StorEdge A5x00 Disk Array

The following example shows how to apply the procedure for replacing a StorEdge A5x00 disk array.


# luxadm remove -F venus1

WARNING!!! Please ensure that no filesystems are mounted on these device(s).
 All data on these devices should have been backed up.

The list of devices that will be removed is:
  1: Box name:    venus1
     Node WWN:    123456789abcdeff
     Device Type: SENA (SES device)
     SES Paths:
      /devices/nodes@1/sbus@1f,0/SUNW,socal@1,0/sf@0,0/ses@w123456789abcdf03,0:0
      /devices/nodes@1/sbus@1f,0/SUNW,socal@1,0/sf@1,0/ses@w123456789abcdf00,0:0

Please verify the above list of devices and
then enter 'c' or <CR> to Continue or 'q' to Quit. [Default: c]: <Return>

Hit <Return> after removing the device(s). <Return>
# luxadm insert
Please hit <RETURN> when you have finished adding Fibre Channel 
Enclosure(s)/Device(s): <Return>
# scgdevs

How to Remove a StorEdge A5x00 Disk Array

Use this procedure to remove a StorEdge A5x00 disk array from a cluster. "Example--Removing a StorEdge A5x00 Disk Array" shows you how to apply this procedure. Use the procedures in your server hardware manual to identify the disk array.

  1. Perform volume management administration to remove the disk array from the configuration.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

  2. On all nodes connected to the disk array, run the luxadm remove command.


    # luxadm remove -F boxname
    
  3. Remove the disk array and the fiber optic cables connected to the disk array.

    Refer to Sun StorEdge A5000 Installation and Service Manual.

  4. On all nodes, remove references to the disk array.


    # devfsadm -C
    # scdidadm -C
    
  5. If needed, remove any host adapters from the nodes.

    For the procedure on removing host adapters, see the documentation that shipped with your nodes.

Example--Removing a StorEdge A5x00 Disk Array

The following example shows how to apply the procedure for removing a StorEdge A5x00 disk array.


# luxadm remove -F venus1
WARNING!!! Please ensure that no filesystems are mounted on these device(s).
 All data on these devices should have been backed up.

The list of devices that will be removed is:
  1: Box name:    venus1
     Node WWN:    123456789abcdeff
     Device Type: SENA (SES device)
     SES Paths:
      /devices/nodes@1/sbus@1f,0/SUNW,socal@1,0/sf@0,0/ses@w123456789abcdf03,0:0
      /devices/nodes@1/sbus@1f,0/SUNW,socal@1,0/sf@1,0/ses@w123456789abcdf00,0:0


Please verify the above list of devices and
then enter 'c' or <CR> to Continue or 'q' to Quit. [Default: c]: <Return>

Hit <Return> after removing the device(s). <Return>

# devfsadm -C
# scdidadm -C