Sun Cluster 3.0 U1 Hardware Guide

Chapter 6 Installing and Maintaining a Sun StorEdge A5x00 Disk Array

This chapter provides the procedures for installing and maintaining a Sun StorEdge™ A5x00 disk array.

This chapter contains the following procedures:

  • "How to Install a StorEdge A5x00 Disk Array"

  • "How to Add a Disk Drive to a StorEdge A5x00 Disk Array in a Running Cluster"

  • "How to Replace a Disk Drive in a StorEdge A5x00 Disk Array in a Running Cluster"

  • "How to Remove a Disk Drive From a StorEdge A5x00 Disk Array in a Running Cluster"

  • "How to Add the First StorEdge A5x00 Disk Array to a Running Cluster"

  • "How to Add a StorEdge A5x00 Disk Array to a Running Cluster That Has Existing StorEdge A5x00 Disk Arrays"

  • "How to Replace a StorEdge A5x00 Disk Array in a Running Cluster"

  • "How to Remove a StorEdge A5x00 Disk Array From a Running Cluster"

For conceptual information on multihost disks, see the Sun Cluster 3.0 U1 Concepts document.

Installing a StorEdge A5x00 Disk Array

This section describes the procedure for an initial installation of a StorEdge A5x00 disk array.

How to Install a StorEdge A5x00 Disk Array

Use this procedure to install a StorEdge A5x00 disk array. Perform the steps in this procedure in conjunction with the procedures in the Sun Cluster 3.0 U1 Installation Guide and your server hardware manual.

  1. Install host adapters in the nodes that are to be connected to the StorEdge A5x00 disk array.

    For the procedure on installing host adapters, see the documentation that shipped with your host adapters and nodes.


    Note -

    To ensure maximum redundancy, put each host adapter on a separate I/O board, if possible.


  2. Cable, power on, and configure the StorEdge A5x00 disk array.

    Figure 6-1 shows a sample StorEdge A5x00 disk array configuration.

    For more information on cabling and configuring StorEdge A5x00 disk arrays, see the Sun StorEdge A5000 Installation and Service Manual.

    Figure 6-1 Sample StorEdge A5x00 Disk Array Configuration


  3. Check the StorEdge A5x00 disk array controller firmware revision, and, if required, install the most recent firmware revision.

    For more information, see the Sun StorEdge A5000 Product Notes.
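
    To check the current firmware revision, you can use the luxadm display command, which reports enclosure status, including firmware revisions. The following command is a minimal sketch only; the enclosure name venus1 is hypothetical, so substitute your own box name. For the procedure to install new firmware, follow the Sun StorEdge A5000 Product Notes.


    # luxadm display venus1
    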

Where to Go From Here

To install software, follow the procedures in the Sun Cluster 3.0 U1 Installation Guide.

Maintaining a StorEdge A5x00 Disk Array

This section describes the procedures for maintaining a StorEdge A5x00 disk array. Table 6-1 lists these procedures.

Table 6-1 Task Map: Maintaining a Sun StorEdge A5x00 Disk Array

Add a disk drive: "How to Add a Disk Drive to a StorEdge A5x00 Disk Array in a Running Cluster"

Replace a disk drive: "How to Replace a Disk Drive in a StorEdge A5x00 Disk Array in a Running Cluster"

Remove a disk drive: "How to Remove a Disk Drive From a StorEdge A5x00 Disk Array in a Running Cluster"

Add a StorEdge A5x00 disk array: "How to Add the First StorEdge A5x00 Disk Array to a Running Cluster" or "How to Add a StorEdge A5x00 Disk Array to a Running Cluster That Has Existing StorEdge A5x00 Disk Arrays"

Replace a StorEdge A5x00 disk array: "How to Replace a StorEdge A5x00 Disk Array in a Running Cluster"

Remove a StorEdge A5x00 disk array: "How to Remove a StorEdge A5x00 Disk Array From a Running Cluster"

How to Add a Disk Drive to a StorEdge A5x00 Disk Array in a Running Cluster

Use this procedure to add a disk drive to a running cluster. Perform the steps in this procedure in conjunction with the procedures in the Sun Cluster 3.0 U1 System Administration Guide and your server hardware manual.

For conceptual information on quorum, quorum devices, global devices, and device IDs, see the Sun Cluster 3.0 U1 Concepts document.

  1. On one node that is connected to the StorEdge A5x00 disk array, use the luxadm(1M) command to install the new disk.

    Physically install the new disk drive, and press Return when prompted. Using the luxadm insert command, you can insert multiple disk drives at the same time.


    # luxadm insert enclosure,slot
    
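
    For example, to insert two front disk drives at the same time, you could list both slots on one command line. The box name venus1 and the slot numbers below are hypothetical.


    # luxadm insert venus1,f0 venus1,f1
    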

  2. On all other nodes that are attached to the StorEdge A5x00 disk array, run the devfsadm(1M) command to probe all devices and to write the new disk drive to the /dev/rdsk directory.

    Depending on the number of devices connected to the node, the devfsadm command can require at least five minutes to complete.


    # devfsadm
    
  3. Ensure that entries for the disk drive have been added to the /dev/rdsk directory.


    # ls -l /dev/rdsk
    

  4. If necessary, partition the disk drive.

    You can either use the format(1M) command or copy the partitioning from another disk drive in the StorEdge A5x00 disk array.
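
    For example, to copy the partitioning from an existing drive, you can pipe the output of prtvtoc into fmthard. The device names below are hypothetical; c1t33d0 stands for a drive that is already partitioned and c1t32d0 stands for the new drive.


    # prtvtoc /dev/rdsk/c1t33d0s2 | fmthard -s - /dev/rdsk/c1t32d0s2
    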

  5. From any node in the cluster, update the global device namespace.

    If a volume management daemon such as vold is running on your node, and you have a CD-ROM drive connected to the node, a device busy error might be returned even if no disk is in the drive. This error is expected behavior.


    # scgdevs
    
  6. Verify that a device ID (DID) has been assigned to the disk drive.


    # scdidadm -l 
    

    Note -

    The DID that was assigned to the new disk drive might not be in sequential order in the StorEdge A5x00 disk array.


  7. Perform necessary volume management administration actions on the new disk drive.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

Where to Go From Here

To configure a disk drive as a quorum device, see the Sun Cluster 3.0 U1 System Administration Guide for the procedure on adding a quorum device.
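
As an illustration only, the scconf command configures a disk as a quorum device by its DID name; the DID name d4 below is hypothetical. Follow the complete procedure in the Sun Cluster 3.0 U1 System Administration Guide.


# scconf -a -q globaldev=d4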

How to Replace a Disk Drive in a StorEdge A5x00 Disk Array in a Running Cluster

Use this procedure to replace a StorEdge A5x00 disk array disk drive. "Example--Replacing a StorEdge A5x00 Disk Drive" shows you how to apply this procedure. Perform the steps in this procedure in conjunction with the procedures in the Sun Cluster 3.0 U1 System Administration Guide and your server hardware manual. Use the procedures in your server hardware manual to identify a failed disk drive.

For conceptual information on quorum, quorum devices, global devices, and device IDs, see the Sun Cluster 3.0 U1 Concepts document.

  1. Identify the disk drive that needs replacement.

    If the disk error message reports the drive problem by device ID (DID), use the scdidadm -l command to determine the Solaris logical device name. If the disk error message reports the drive problem by the Solaris physical device name, use your Solaris documentation to map the Solaris physical device name to the Solaris logical device name. Use this Solaris logical device name and DID throughout this procedure.


    # scdidadm -l deviceID
    
  2. Determine if the disk drive you are replacing is a quorum device.


    # scstat -q
    
    • If the disk drive you are replacing is a quorum device, put the quorum device into maintenance state before you go to Step 3 (see the example after this list). For the procedure on putting a quorum device into maintenance state, see the Sun Cluster 3.0 U1 System Administration Guide.

    • If the disk you are replacing is not a quorum device, go to Step 3.
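
    The following command is a sketch only of how a quorum device is placed into maintenance state with scconf; the DID name d4 is hypothetical. Use the complete procedure in the Sun Cluster 3.0 U1 System Administration Guide.


    # scconf -c -q globaldev=d4,maintstate
    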

  3. If possible, back up the metadevice or volume.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

  4. Perform volume management administration to remove the disk drive from the configuration.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

  5. Identify the failed disk drive's physical DID.

    Use this physical DID in Step 14 to verify that the failed disk drive has been replaced with a new disk drive. The DID and the World Wide Name (WWN) for the disk drive should be the same.


    # scdidadm -o diskid -l cNtXdY
    
  6. If you are using Solstice DiskSuite as your volume manager, save the disk partitioning for use when partitioning the new disk drive.

    If you are using VERITAS Volume Manager, go to Step 7.


    # prtvtoc /dev/rdsk/cNtXdYsZ > filename
    
  7. On any node that is connected to the StorEdge A5x00 disk array, run the luxadm remove command.


    # luxadm remove -F /dev/rdsk/cNtXdYsZ
    
  8. Replace the failed disk drive.

    For the procedure on replacing a disk drive, see the Sun StorEdge A5000 Installation and Service Manual.

  9. On any node that is connected to the StorEdge A5x00 disk array, run the luxadm insert command.


    # luxadm insert boxname,rslotnumber
    # luxadm insert boxname,fslotnumber
    

    If you are inserting a front disk drive, use the fslotnumber parameter. If you are inserting a rear disk drive, use the rslotnumber parameter.
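
    For example, to insert a disk drive into front slot 0 or rear slot 3 of a hypothetical enclosure named venus1, you would run one of the following commands.


    # luxadm insert venus1,f0
    # luxadm insert venus1,r3
    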

  10. On all other nodes that are attached to the StorEdge A5x00 disk array, run the devfsadm(1M) command to probe all devices and to write the new disk drive to the /dev/rdsk directory.

    Depending on the number of devices that are connected to the node, the devfsadm command can require at least five minutes to complete.


    # devfsadm
    
  11. If you are using Solstice DiskSuite as your volume manager, on one node that is connected to the StorEdge A5x00 disk array, partition the new disk drive by using the partitioning you saved in Step 6.

    If you are using VERITAS Volume Manager, go to Step 12.


    # fmthard -s filename /dev/rdsk/cNtXdYsZ
    
  12. One at a time, shut down and reboot the nodes that are connected to the StorEdge A5x00 disk array.


    # scswitch -S -h nodename
    # shutdown -y -g0 -i6
    

    For more information on shutdown procedures, see the Sun Cluster 3.0 U1 System Administration Guide.

  13. On any of the nodes that are connected to the StorEdge A5x00 disk array, update the DID database.


    # scdidadm -R deviceID
    
  14. On any node, confirm that the failed disk drive has been replaced by comparing the following physical DID to the physical DID in Step 5.

    If the following physical DID is different from the physical DID in Step 5, you successfully replaced the failed disk drive with a new disk drive.


    # scdidadm -o diskid -l cNtXdY
    
  15. On all nodes that are connected to the StorEdge A5x00 disk array, upload the new information to the DID driver.

    If a volume management daemon such as vold is running on your node, and you have a CD-ROM drive that is connected to the node, a device busy error might be returned even if no disk is in the drive. This error is expected behavior.


    # scdidadm -ui
    
  16. Perform volume management administration to add the disk drive back to its diskset or disk group.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.
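
    As a Solstice DiskSuite sketch only: if the replaced drive held a submirror or RAID-5 component, you might re-enable that component with the metareplace command. The diskset name dg-schost-1, the metadevice d40, and the slice c1t32d0s2 below are hypothetical; consult your volume manager documentation for the exact steps.


    # metareplace -s dg-schost-1 -e d40 c1t32d0s2
    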

  17. If you want this new disk drive to be a quorum device, add the quorum device.

    For the procedure on adding a quorum device, see the Sun Cluster 3.0 U1 System Administration Guide.

Example--Replacing a StorEdge A5x00 Disk Drive

The following example shows how to apply the procedure for replacing a StorEdge A5x00 disk array disk drive.


# scstat -q
# scdidadm -l d4
4        phys-schost-2:/dev/rdsk/c1t32d0 /dev/did/rdsk/d4
# scdidadm -o diskid -l c1t32d0
2000002037000edf
# prtvtoc /dev/rdsk/c1t32d0s2 > /usr/tmp/c1t32d0.vtoc 
# luxadm remove -F /dev/rdsk/c1t32d0s2
WARNING!!! Please ensure that no filesystems are mounted on these device(s). All data on these devices should have been backed up.

The list of devices that will be removed is:  1: Box Name "venus1" front slot 0

Please enter 'q' to Quit or <Return> to Continue: <Return>

stopping:  Drive in "venus1" front slot 0....Done
offlining: Drive in "venus1" front slot 0....Done

Hit <Return> after removing the device(s). <Return>

Drive in Box Name "venus1" front slot 0
Logical Nodes being removed under /dev/dsk/ and /dev/rdsk:
        c1t32d0s0
        c1t32d0s1
        c1t32d0s2
        c1t32d0s3
        c1t32d0s4
        c1t32d0s5
        c1t32d0s6
        c1t32d0s7

# devfsadm
# fmthard -s /usr/tmp/c1t32d0.vtoc /dev/rdsk/c1t32d0s2
# scswitch -S -h node1
# shutdown -y -g0 -i6
# scdidadm -R d4
# scdidadm -o diskid -l c1t32d0
20000020370bf955
# scdidadm -ui

How to Remove a Disk Drive From a StorEdge A5x00 Disk Array in a Running Cluster

Use this procedure to remove a disk drive from a StorEdge A5x00 disk array. "Example--Removing a StorEdge A5x00 Disk Drive" shows you how to apply this procedure. Perform the steps in this procedure in conjunction with the procedures in the Sun Cluster 3.0 U1 System Administration Guide and your server hardware manual.

For conceptual information on quorum, quorum devices, global devices, and device IDs, see the Sun Cluster 3.0 U1 Concepts document.

  1. Determine if the disk drive you are removing is a quorum device.


    # scstat -q
    
    • If the disk drive you are removing is a quorum device, put the quorum device into maintenance state before you go to Step 2. For the procedure on putting a quorum device into maintenance state, see the Sun Cluster 3.0 U1 System Administration Guide.

    • If the disk you are removing is not a quorum device, go to Step 2.

  2. If possible, back up the metadevice or volume.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

  3. Perform volume management administration to remove the disk drive from the configuration.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

  4. Identify the disk drive that needs to be removed.

    If the disk error message reports the drive problem by DID, use the scdidadm -l command to determine the Solaris logical device name.


    # scdidadm -l deviceID
    
  5. On any node that is connected to the StorEdge A5x00 disk array, run the luxadm remove command.

    Physically remove the disk drive, then press Return when prompted.


    # luxadm remove -F /dev/rdsk/cNtXdYsZ
    
  6. On all connected nodes, remove references to the disk drive.


    # devfsadm -C
    # scdidadm -C
    

Example--Removing a StorEdge A5x00 Disk Drive

The following example shows how to apply the procedure for removing a StorEdge A5x00 disk array disk drive.


# scdidadm -l d4
4        phys-schost-2:/dev/rdsk/c1t32d0 /dev/did/rdsk/d4
# luxadm remove -F /dev/rdsk/c1t32d0s2

WARNING!!! Please ensure that no filesystems are mounted on these device(s). All data on these devices should have been backed up.

The list of devices that will be removed is:  1: Box Name "venus1" front slot 0

Please enter 'q' to Quit or <Return> to Continue: <Return>

stopping:  Drive in "venus1" front slot 0....Done
offlining: Drive in "venus1" front slot 0....Done

Hit <Return> after removing the device(s). <Return>

Drive in Box Name "venus1" front slot 0
Logical Nodes being removed under /dev/dsk/ and /dev/rdsk:
        c1t32d0s0
        c1t32d0s1
        c1t32d0s2
        c1t32d0s3
        c1t32d0s4
        c1t32d0s5
        c1t32d0s6
        c1t32d0s7
# devfsadm -C
# scdidadm -C

How to Add the First StorEdge A5x00 Disk Array to a Running Cluster

Use this procedure to install a StorEdge A5x00 disk array in a running cluster that does not yet have an existing StorEdge A5x00 installed.

If you are installing a StorEdge A5x00 disk array in a running cluster that already has StorEdge A5x00 disk arrays installed and configured with hubs, use the procedure in "How to Add a StorEdge A5x00 Disk Array to a Running Cluster That Has Existing StorEdge A5x00 Disk Arrays".

Perform the steps in this procedure in conjunction with the procedures in the Sun Cluster 3.0 U1 System Administration Guide and your server hardware manual.

  1. Determine if the StorEdge A5x00 disk array packages need to be installed on the nodes to which you are connecting the StorEdge A5x00 disk array. This product requires the following packages.


    # pkginfo | egrep Wlux
    system    SUNWluxd     Sun Enterprise Network Array sf Device Driver
    system    SUNWluxdx    Sun Enterprise Network Array sf Device Driver (64-bit)
    system    SUNWluxl     Sun Enterprise Network Array socal Device Driver
    system    SUNWluxlx    Sun Enterprise Network Array socal Device Driver (64-bit)
    system    SUNWluxop    Sun Enterprise Network Array firmware and utilities
  2. On each node, install any necessary packages for the Solaris operating environment.

    The StorEdge A5x00 disk array packages are located in the Product directory of the CD-ROM. Use the pkgadd command to add any necessary packages.


    # pkgadd -d path_to_Solaris/Product Pkg1 Pkg2 Pkg3 ... PkgN
    
    path_to_Solaris       Path to the Solaris operating environment

    Pkg1 Pkg2 ... PkgN    The packages to be added
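
    For example, if the Solaris CD-ROM were mounted so that its Product directory is available at /cdrom/cdrom0/Solaris_8/Product (this path is an assumption and varies with the Solaris release and media), you could add all five packages at once.


    # pkgadd -d /cdrom/cdrom0/Solaris_8/Product SUNWluxd SUNWluxdx SUNWluxl SUNWluxlx SUNWluxop
    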

  3. Shut down and power off any node that is connected to the StorEdge A5x00 disk array.


    # scswitch -S -h nodename
    # shutdown -y -g0 -i0
    

    For more information on shutdown procedures, see the Sun Cluster 3.0 U1 System Administration Guide.

  4. Install host adapters in the node that is to be connected to the StorEdge A5x00 disk array.

    For the procedure on installing host adapters, see the documentation that shipped with your host adapters and nodes.

  5. Cable, configure, and power on the StorEdge A5x00 disk array.

    For more information, see the Sun StorEdge A5000 Installation and Service Manual and the Sun StorEdge A5000 Configuration Guide.

    Figure 6-2 shows a sample StorEdge A5x00 disk array configuration.

    Figure 6-2 Sample StorEdge A5x00 Disk Array Configuration


  6. Power on and boot the node.


    ok boot -r
    

    For the procedures on powering on and booting a node, see the Sun Cluster 3.0 U1 System Administration Guide.

  7. Determine if any patches need to be installed on the nodes that are to be connected to the StorEdge A5x00 disk array.

    For a list of patches specific to Sun Cluster, see the Sun Cluster 3.0 U1 Release Notes.

  8. Obtain and install any necessary patches on the nodes that are to be connected to the StorEdge A5x00 disk array.

    For procedures on applying patches, see the Sun Cluster 3.0 U1 System Administration Guide.


    Note -

    Read any README files that accompany the patches before you begin this installation. Some patches must be installed in a specific order.
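
    As an illustration only, a single patch that has been downloaded and unpacked on the node is typically applied with the patchadd command; the directory and patch ID below are hypothetical.


    # patchadd /var/tmp/111111-01
    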


  9. If required by the patch README instructions, shut down and reboot the node.


    # scswitch -S -h nodename
    # shutdown -y -g0 -i6
    

    For more information on shutdown procedures, see the Sun Cluster 3.0 U1 System Administration Guide.

  10. Perform Step 3 through Step 9 for each node that is attached to the StorEdge A5x00 disk array.

  11. Perform volume management administration to add the disk drives in the StorEdge A5x00 disk array to the volume management configuration.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.
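
    As a Solstice DiskSuite sketch only, new drives are typically added to an existing diskset by their DID names. The diskset name dg-schost-1 and the DID names d9 and d10 below are hypothetical.


    # metaset -s dg-schost-1 -a /dev/did/rdsk/d9 /dev/did/rdsk/d10
    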

How to Add a StorEdge A5x00 Disk Array to a Running Cluster That Has Existing StorEdge A5x00 Disk Arrays

Use this procedure to install a StorEdge A5x00 disk array in a running cluster that already has StorEdge A5x00 disk arrays installed and configured with hubs.

If you are installing the first StorEdge A5x00 disk array to a running cluster that does not yet have a StorEdge A5x00 disk array installed, use the procedure in "How to Add the First StorEdge A5x00 Disk Array to a Running Cluster".

Perform the steps in this procedure in conjunction with the procedures in the Sun Cluster 3.0 U1 System Administration Guide and your server hardware manual.

  1. Configure the new StorEdge A5x00 disk array.


    Note -

    Each disk array in the loop must have a unique box ID. If necessary, use the front-panel module (FPM) to change the box ID for the new StorEdge A5x00 disk array you are adding. For more information about StorEdge A5x00 loops and general configuration, see the Sun StorEdge A5000 Configuration Guide and the Sun StorEdge A5000 Installation and Service Manual.
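
    To see which enclosures are already connected before you choose a box ID, you can run the luxadm probe command on a node that is attached to the existing loop. This is a suggestion only and is not part of the formal procedure.


    # luxadm probe
    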


  2. On both nodes, use the luxadm insert command to add the new disk array to the cluster and to add paths to its disk drives.


    # luxadm insert
    Please hit <RETURN> when you have finished adding
    Fibre Channel Enclosure(s)/Device(s):
    


    Note -

    Do not press Return until after you have completed Step 3.


  3. Cable the new StorEdge A5x00 disk array to a spare port in the existing hub or host adapter in your cluster.

    For cabling instructions and diagrams, see the Sun StorEdge A5000 Configuration Guide.

  4. After you have finished cabling the new disk array, press Return to complete the luxadm insert operation (sample output shown below).


    Waiting for Loop Initialization to complete...
    New Logical Nodes under /dev/dsk and /dev/rdsk :
    c4t98d0s0
    c4t98d0s1
    c4t98d0s2
    c4t98d0s3
    c4t98d0s4
    c4t98d0s5
    c4t98d0s6
    ...
    New Logical Nodes under /dev/es:
    ses12
    ses13
    

  5. On both nodes, use the luxadm probe command to verify that the new StorEdge A5x00 disk array is recognized by both cluster nodes.


    # luxadm probe
    

  6. On one node, use the scgdevs command to update the DID database.


    # scgdevs
    

How to Replace a StorEdge A5x00 Disk Array in a Running Cluster

Use this procedure to replace a failed StorEdge A5x00 disk array in a running cluster. "Example--Replacing a StorEdge A5x00 Disk Array" shows you how to apply this procedure. This procedure assumes that you are retaining the disk drives.

If you are replacing your disk drives, see "How to Replace a Disk Drive in a StorEdge A5x00 Disk Array in a Running Cluster".

  1. If possible, back up the metadevices or volumes that reside in the StorEdge A5x00 disk array.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

  2. Perform volume management administration to remove the StorEdge A5x00 disk array from the configuration.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

  3. On all nodes that are connected to the StorEdge A5x00 disk array, run the luxadm remove command.


    # luxadm remove -F boxname
    
  4. Disconnect the fiber optic cables from the StorEdge A5x00 disk array.

  5. Power off and disconnect the StorEdge A5x00 disk array from the AC power source.

    For more information, see the Sun StorEdge A5000 Installation and Service Manual and the Sun StorEdge A5000 Configuration Guide.

  6. Connect the fiber optic cables to the new StorEdge A5x00 disk array.

  7. Connect the new StorEdge A5x00 disk array to an AC power source.

  8. One at a time, move the disk drives from the old StorEdge A5x00 disk array to the same slots in the new StorEdge A5x00 disk array.

  9. Power on the StorEdge A5x00 disk array.

  10. Use the luxadm insert command to find the new StorEdge A5x00 disk array.

    Repeat this step for each node that is connected to the StorEdge A5x00 disk array.


    # luxadm insert
    

  11. On all nodes that are connected to the new StorEdge A5x00 disk array, upload the new information to the DID driver.

    If a volume management daemon such as vold is running on your node, and you have a CD-ROM drive that is connected to the node, a device busy error might be returned even if no disk is in the drive. This error is expected behavior.


    # scgdevs
    
  12. Perform volume management administration to add the new StorEdge A5x00 disk array to the configuration.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

Example--Replacing a StorEdge A5x00 Disk Array

The following example shows how to apply the procedure for replacing a StorEdge A5x00 disk array.


# luxadm remove -F venus1

WARNING!!! Please ensure that no filesystems are mounted on these device(s).
 All data on these devices should have been backed up.

The list of devices that will be removed is:
  1: Box name:    venus1
     Node WWN:    123456789abcdeff
     Device Type: SENA (SES device)
     SES Paths:
      /devices/nodes@1/sbus@1f,0/SUNW,socal@1,0/sf@0,0/ses@w123456789abcdf03,0:0
      /devices/nodes@1/sbus@1f,0/SUNW,socal@1,0/sf@1,0/ses@w123456789abcdf00,0:0

Please verify the above list of devices and
then enter 'c' or <CR> to Continue or 'q' to Quit. [Default: c]: <Return>

Hit <Return> after removing the device(s). <Return>
# luxadm insert
Please hit <RETURN> when you have finished adding Fibre Channel 
Enclosure(s)/Device(s): <Return>
# scgdevs

How to Remove a StorEdge A5x00 Disk Array From a Running Cluster

Use this procedure to remove a StorEdge A5x00 disk array from a cluster. "Example--Removing a StorEdge A5x00 Disk Array" shows you how to apply this procedure. Use the procedures in your server hardware manual to identify the StorEdge A5x00 disk array.

  1. Perform volume management administration to remove the StorEdge A5x00 disk array from the configuration.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

  2. On all nodes that are connected to the StorEdge A5x00 disk array, run the luxadm remove command.


    # luxadm remove -F boxname
    
  3. Remove the StorEdge A5x00 disk array and the fiber optic cables that are connected to the StorEdge A5x00 disk array.

    For more information, see the Sun StorEdge A5000 Installation and Service Manual.

  4. On all nodes, remove references to the StorEdge A5x00 disk array.


    # devfsadm -C
    # scdidadm -C
    
  5. If necessary, remove any unused host adapters from the nodes.

    For the procedure on removing host adapters, see the documentation that shipped with your nodes.

Example--Removing a StorEdge A5x00 Disk Array

The following example shows how to apply the procedure for removing a StorEdge A5x00 disk array.


# luxadm remove -F venus1
WARNING!!! Please ensure that no filesystems are mounted on these device(s).
 All data on these devices should have been backed up.

The list of devices that will be removed is:
  1: Box name:    venus1
     Node WWN:    123456789abcdeff
     Device Type: SENA (SES device)
     SES Paths:
      /devices/nodes@1/sbus@1f,0/SUNW,socal@1,0/sf@0,0/ses@w123456789abcdf03,0:0
      /devices/nodes@1/sbus@1f,0/SUNW,socal@1,0/sf@1,0/ses@w123456789abcdf00,0:0


Please verify the above list of devices and
then enter 'c' or <CR> to Continue or 'q' to Quit. [Default: c]: <Return>

Hit <Return> after removing the device(s). <Return>

# devfsadm -C
# scdidadm -C