Sun Cluster 3.0 5/02 Supplement

Maintaining StorEdge 3900 and 6900 Series Systems

This section contains the procedures for maintaining StorEdge 3900 and 6900 Series systems. Table E-1 lists these procedures. This section does not include procedures for adding or removing disk drives because the StorEdge T3+ arrays in your StorEdge 3900 or 6900 Series system operate only when fully configured with disk drives.


Caution -

If you remove any field replaceable unit (FRU) from the StorEdge T3+ arrays for an extended period of time, thermal complications might result. To prevent these complications, the StorEdge T3+ array is designed so that an orderly shutdown occurs when you remove a component for longer than 30 minutes. Therefore, a replacement part must be immediately available before you start an FRU replacement procedure. You must replace an FRU within 30 minutes or the StorEdge T3+ array, and all attached StorEdge T3+ arrays, will shut down and power off.


Table E-1 Task Map: Maintaining a StorEdge 3900 or 6900 Series System

Task: Add a StorEdge 3900 or 6900 Series system.
For instructions, go to: "How to Add StorEdge 3900 or 6900 Series Systems to a Running Cluster"

Task: Remove a StorEdge 3900 or 6900 Series system.
For instructions, go to: "How to Remove StorEdge 3900 or 6900 Series Systems From a Running Cluster"

Task: Replace a virtualization engine (StorEdge 6900 Series only).
For instructions, go to: "How to Replace a Virtualization Engine in a Running Cluster (StorEdge 6900 Series Only)"

Task: Replace a node-to-switch fiber optic cable.
For instructions, go to: "How to Replace a Node-to-Switch Component in a Running Cluster"

Task: Replace a gigabit interface converter (GBIC) on a node's host adapter.
For instructions, go to: "How to Replace a Node-to-Switch Component in a Running Cluster"

Task: Replace a GBIC on an FC switch, connecting to a node.
For instructions, go to: "How to Replace a Node-to-Switch Component in a Running Cluster"

Task: Remove a StorEdge T3+ array partner group from the system.
For instructions, go to: "How to Remove StorEdge T3/T3+ Arrays From a Running Cluster"

Task: Upgrade StorEdge T3+ array firmware.
For instructions, go to: "How to Upgrade StorEdge T3+ Array Firmware"

Task: Replace a disk drive in an array.
For instructions, go to: "How to Replace a Failed Disk Drive in a Running Cluster"

Task: Replace a host adapter in a node.
For instructions, go to: "How to Replace a Node's Host Adapter in a Running Cluster"

Task: Replace a StorEdge network FC switch-8 or switch-16.
For instructions, go to: Follow the same procedure used in a non-cluster environment; see the Sun StorEdge 3900 and 6900 Series Reference Manual.

Task: Replace an array-to-switch fiber optic cable.
For instructions, go to: Follow the same procedure used in a non-cluster environment; see the Sun StorEdge 3900 and 6900 Series Reference Manual.

Task: Replace a GBIC on an FC switch, connecting to an array.
For instructions, go to: Follow the same procedure used in a non-cluster environment; see the Sun StorEdge 3900 and 6900 Series Reference Manual.

Task: Replace a StorEdge 3900 or 6900 Series storage service processor.
For instructions, go to: Follow the same procedure used in a non-cluster environment; see the Sun StorEdge 3900 and 6900 Series Reference Manual.

Task: Replace a StorEdge 3900 or 6900 Series Ethernet hub.
For instructions, go to: Follow the same procedure used in a non-cluster environment; see the Sun StorEdge 3900 and 6900 Series Reference Manual.

Task: Replace a StorEdge T3+ Power and Cooling Unit (PCU).
For instructions, go to: Follow the same procedure used in a non-cluster environment; see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

Task: Replace a StorEdge T3+ unit interconnect card (UIC).
For instructions, go to: Follow the same procedure used in a non-cluster environment; see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

Task: Replace a StorEdge T3+ array power cable.
For instructions, go to: Follow the same procedure used in a non-cluster environment; see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

Task: Replace a StorEdge T3+ Ethernet cable.
For instructions, go to: Follow the same procedure used in a non-cluster environment; see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

How to Add StorEdge 3900 or 6900 Series Systems to a Running Cluster


Note -

Use this procedure to add new StorEdge 3900 and 6900 Series systems to a running cluster. To install systems in a new Sun Cluster that is not yet running, use the procedure in "How to Install StorEdge 3900 and 6900 Series Systems".


This procedure defines "Node A" as the node you begin working with, and "Node B" as the second attached node.

  1. Unpack, place, and level the system cabinet.

    For instructions, see the Sun StorEdge 3900 and 6900 Series Cabinet Installation and Service Manual.

  2. Install cables in the following order.

    1. Install the system power cord.

    2. Install the system grounding strap.

    3. Install the cables from the FC switches to the cluster nodes (see Figure E-1 for an example).

    4. Install the Ethernet cable to the LAN.

    For instructions on cabling, see the Sun StorEdge 3900 and 6900 Series Cabinet Installation and Service Manual.

  3. Power on the new system.


    Note -

    The StorEdge T3+ arrays in your system might take several minutes to boot.


    For instructions, see the Sun StorEdge 3900 and 6900 Series Cabinet Installation and Service Manual.

  4. Set the host name, IP address, date, and timezone for the system's Storage Service Processor.

    For detailed instructions, see the initial field installation instructions in the Sun StorEdge 3900 and 6900 Series Reference Manual.

  5. Remove the preconfigured, default hard zoning from the new system's FC switches.


    Note -

    For StorEdge 3900 Series only: To configure the StorEdge 3900 Series system for use with Sun Cluster host-based mirroring, the default hard zones must be removed from the system's FC switches. See the SANbox-8/16 Switch Management User's Manual for instructions on using the installed SANsurfer interface for removing the preconfigured hard zones from all Sun StorEdge Network FC Switch-8 and Switch-16 switches.


  6. Determine the resource groups and device groups running on all nodes.

    Record this information because you will use it in Step 48 of this procedure to return resource groups and device groups to these nodes.


    # scstat
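
    If you prefer to record only the relevant portions of the output, the scstat options shown below list resource group status and device group status separately; this is an optional convenience, not a required step.


    # scstat -g
    # scstat -D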
    

  7. Move all resource groups and device groups off Node A.


    # scswitch -S -h nodename
    

  8. Do you need to install host adapters in Node A?

    • If not, go to Step 14.

    • If you do need to install host adapters in Node A, continue with Step 9.

  9. Is the host adapter you are installing the first host adapter on Node A?

    • If not, go to Step 11.

    • If it is the first host adapter, use the pkginfo command as shown below to determine whether the required support packages for the host adapter are already installed on this node. The following packages are required.


      # pkginfo | egrep Wlux
      system 				SUNWluxd 					Sun Enterprise Network Array sf Device Driver
      system 				SUNWluxdx					Sun Enterprise Network Array sf Device Driver (64-bit)
      system 				SUNWluxl 					Sun Enterprise Network Array socal Device Driver
      system 				SUNWluxlx 					Sun Enterprise Network Array socal Device Driver (64-bit)
      system 				SUNWluxop 					Sun Enterprise Network Array firmware and utilities
      system 				SUNWluxox 					Sun Enterprise Network Array libraries (64-bit)

  10. Are the required support packages already installed?

    • If they are already installed, go to Step 11.

    • If not, install the required support packages that are missing.

    The support packages are located in the Product directory of the Solaris CD-ROM. Use the pkgadd command to add any missing packages.


    # pkgadd -d path_to_Solaris/Product Pkg1 Pkg2 Pkg3 ... PkgN
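
    For example, if the Solaris media is mounted at /cdrom/cdrom0 (the mount point and the Solaris_* directory name depend on your Solaris release and how the media is mounted), the command to add all six packages might look like the following.


    # pkgadd -d /cdrom/cdrom0/Solaris_8/Product SUNWluxd SUNWluxdx SUNWluxl \
    SUNWluxlx SUNWluxop SUNWluxox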
    

  11. Shut down and power off Node A.


    # shutdown -y -g0 -i0
    

    For the procedure on shutting down and powering off a node, see the Sun Cluster 3.0 12/01 System Administration Guide.

  12. Install the host adapters in Node A.

    For the procedure on installing host adapters, see the documentation that shipped with your host adapters and nodes.

  13. Power on and boot Node A into non-cluster mode.


    {0} ok boot -x
    

    For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.

  14. If necessary, upgrade the host adapter firmware on Node A.

    See the Sun Cluster 3.0 5/02 Release Notes for information about accessing Sun's EarlyNotifier web pages, which list information about any required patches or firmware levels that are available for download. For the procedure on applying any host adapter firmware patch, see the firmware patch README file.
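
    A host adapter firmware patch is usually delivered as a standard Solaris patch that you install with the patchadd command before performing any additional steps described in its README; the patch ID and staging directory below are placeholders only.


    # patchadd /var/tmp/111111-01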

  15. If necessary, install GBICs in the FC switches.

    For the procedure on installing a GBIC to an FC switch, see the SANbox 8/16 Segmented Loop Switch User's Manual.

  16. Connect fiber optic cables between Node A and the FC switches in your StorEdge 3900 or 6900 Series system (see Figure E-1 for an example).

  17. If necessary, install the required Solaris patches for StorEdge T3+ array support on Node A.

    For a list of required Solaris patches for StorEdge T3+ array support, see the Sun StorEdge T3 Disk Tray Release Notes.

  18. Install any required patches or software for Sun StorEdge Traffic Manager software support to Node A from the Sun Download Center Web site, http://www.sun.com/storage/san/.

    For instructions on installing the software, see the information on the web site.

  19. Activate the Sun StorEdge Traffic Manager software functionality in the software you installed in Step 18.

    To activate the Sun StorEdge Traffic Manager software functionality, manually edit the /kernel/drv/scsi_vhci.conf file that is installed to change the mpxio-disable parameter to no:


    mpxio-disable="no"
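
    After you save the file, you can optionally confirm the setting before proceeding; the change takes effect when the node is rebooted in the following steps.


    # grep mpxio-disable /kernel/drv/scsi_vhci.conf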
    

  20. Shut down Node A.


    # shutdown -y -g0 -i0
    

  21. Perform a reconfiguration boot on Node A to create the new Solaris device files and links.


    {0} ok boot -r
    

  22. On all nodes, update the /devices and /dev entries.


    # devfsadm -C 
    # devfsadm
    

  23. On all nodes, update the paths to the DID instances.


    # scdidadm -C 
    # scdidadm -r
    

  24. Are you adding a StorEdge 3900 Series or StorEdge 6900 Series system?

    • If you are adding a StorEdge 3900 Series system, go to Step 26.

    • If you are adding a StorEdge 6900 Series system: On Node A, use the cfgadm command as shown below to view the virtualization engine (VE) controller status and to enable the VE controllers.


      # cfgadm -al
      # cfgadm -c configure <c::controller id>
      

    See the cfgadm(1M) man page for more information about the command and its options.

  25. (Optional) Configure VLUNs on the VEs in the new StorEdge 6900 Series system.

    For instructions on configuring VLUNs in a cluster, see "How to Configure VLUNs on the Virtualization Engines in Your StorEdge 6900 Series System".

  26. (Optional) On Node A, verify that the device IDs (DIDs) were assigned to the new array.


    # scdidadm -l
    

  27. Do you need to install host adapters in Node B?

    • If not, go to Step 34.

    • If you do need to install host adapters in Node B, continue with Step 28.

  28. Is the host adapter you are installing the first host adapter on Node B?

    • If not, go to Step 30.

    • If it is the first host adapter, determine whether the required support packages for the host adapter are already installed on this node. The following packages are required.


      # pkginfo | egrep Wlux
      system 				SUNWluxd 					Sun Enterprise Network Array sf Device Driver
      system 				SUNWluxdx					Sun Enterprise Network Array sf Device Driver (64-bit)
      system 				SUNWluxl 					Sun Enterprise Network Array socal Device Driver
      system 				SUNWluxlx 					Sun Enterprise Network Array socal Device Driver (64-bit)
      system 				SUNWluxop 					Sun Enterprise Network Array firmware and utilities
      system 				SUNWluxox 					Sun Enterprise Network Array libraries (64-bit)

  29. Are the required support packages already installed?

    • If they are already installed, go to Step 30.

    • If not, install the missing support packages.

    The support packages are located in the Product directory of the Solaris CD-ROM. Use the pkgadd command to add any missing packages.


    # pkgadd -d path_to_Solaris/Product Pkg1 Pkg2 Pkg3 ... PkgN
    

  30. Move all resource groups and device groups off Node B.


    # scswitch -S -h nodename
    

  31. Shut down and power off Node B.


    # shutdown -y -g0 -i0
    

    For the procedure on shutting down and powering off a node, see the Sun Cluster 3.0 12/01 System Administration Guide.

  32. Install the host adapters in Node B.

    For the procedure on installing host adapters, see the documentation that shipped with your host adapters and nodes.

  33. Power on and boot Node B into non-cluster mode.


    {0} ok boot -x
    

    For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.

  34. If necessary, upgrade the host adapter firmware on Node B.

    See the Sun Cluster 3.0 5/02 Release Notes for information about accessing Sun's EarlyNotifier web pages, which list information about any required patches or firmware levels that are available for download. For the procedure on applying any host adapter firmware patch, see the firmware patch README file.

  35. If necessary, install GBICs in the FC switches.

    For the procedure on installing a GBIC to an FC switch, see the SANbox 8/16 Segmented Loop Switch User's Manual.

  36. Connect fiber optic cables between the FC switches in your StorEdge 3900 or 6900 Series system and Node B (see Figure E-1 for an example).

  37. If necessary, install the required Solaris patches for StorEdge T3+ array support on Node B.

    For a list of required Solaris patches for StorEdge T3+ array support, see the Sun StorEdge T3 Disk Tray Release Notes.

  38. Install any required patches or software for Sun StorEdge Traffic Manager software support to Node B from the Sun Download Center Web site, http://www.sun.com/storage/san/.

    For instructions on installing the software, see the information on the web site.

  39. Activate the Sun StorEdge Traffic Manager software functionality in the software you installed in Step 38.

    To activate the Sun StorEdge Traffic Manager software functionality, manually edit the /kernel/drv/scsi_vhci.conf file that is installed to change the mpxio-disable parameter to no:


    mpxio-disable="no"
    

  40. Shut down Node B.


    # shutdown -y -g0 -i0
    

  41. Perform a reconfiguration boot to create the new Solaris device files and links on Node B.


    {0} ok boot -r
    

  42. On all nodes, update the /devices and /dev entries.


    # devfsadm -C 
    # devfsadm
    

  43. On all nodes, update the paths to the DID instances.


    # scdidadm -C 
    # scdidadm -r
    

  44. Are you adding a StorEdge 3900 Series or StorEdge 6900 Series system?

    • If you are adding a StorEdge 3900 Series system, go to Step 46.

    • If you are adding a StorEdge 6900 Series system: On Node B, use the cfgadm command as shown below to view the virtualization engine (VE) controller status and to enable the VE controllers.


      # cfgadm -al
      # cfgadm -c configure <c::controller id>
      

    See the cfgadm(1M) man page for more information about the command and its options.

  45. (Optional) Configure VLUNs on the VEs in the new StorEdge 6900 Series system.

    For instructions on configuring VLUNs in a cluster, see "How to Configure VLUNs on the Virtualization Engines in Your StorEdge 6900 Series System".

  46. (Optional) On Node B, verify that the DIDs are assigned to the new arrays:


    # scdidadm -l
    

  47. On one node attached to the new arrays, reset the SCSI reservation state:


    # scdidadm -R n
    

    Where n is the DID instance of an array LUN you are adding to the cluster.


    Note -

    Repeat this command on the same node for each array LUN you are adding to the cluster.
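
    If you added several LUNs, you can run the command in a shell loop; the DID instance numbers below are examples only, so substitute the instance numbers that scdidadm -l reports for your new LUNs.


    # for i in 30 31 32 33 ; do scdidadm -R $i ; done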


  48. Return the resource groups and device groups you identified in Step 6 to all nodes.


    # scswitch -z -g resource-group -h nodename
    # scswitch -z -D device-group-name -h nodename
    

    For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.
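
    For example, if Step 6 recorded a resource group named oracle-rg and a device group named dg-schost-1 that belong on a node named phys-schost-1 (hypothetical names), the commands would be similar to the following.


    # scswitch -z -g oracle-rg -h phys-schost-1
    # scswitch -z -D dg-schost-1 -h phys-schost-1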

  49. Perform volume management administration to incorporate the new logical volumes into the cluster.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

How to Remove StorEdge 3900 or 6900 Series Systems From a Running Cluster

Use this procedure to permanently remove a StorEdge 3900 or 6900 Series system and its associated submirrors from a running cluster.

This procedure defines "Node A" as the cluster node you begin working with, and "Node B" as the other node.


Caution -

During this procedure, you lose access to the data that resides on each StorEdge T3+ array partner-group in the StorEdge 3900 or 6900 Series system you are removing.


  1. If necessary, back up all database tables, data services, and volumes associated with each StorEdge T3+ partner-group in the StorEdge 3900 or 6900 Series system you are removing.

  2. If necessary, run the appropriate Solstice DiskSuite or VERITAS Volume Manager commands to detach the submirrors from each StorEdge T3+ partner-group in the StorEdge 3900 or 6900 Series system that you are removing to stop all I/O activity to the partner-groups.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.
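
    As an illustration only, if the submirrors were Solstice DiskSuite metadevices in a shared diskset, the detach might look like the following; the diskset and metadevice names are hypothetical, and VERITAS Volume Manager users would instead dissociate the corresponding plexes (for example, with vxplex dis).


    # metadetach -s setname d10 d11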

  3. Run the appropriate Solstice DiskSuite or VERITAS Volume Manager commands to remove references to each LUN or VLUN in the StorEdge 3900 or 6900 Series system that you are removing.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

  4. Determine the resource groups and device groups running on all nodes.

    Record this information because you will use it in Step 17 of this procedure to return resource groups and device groups to these nodes.


    # scstat
    

  5. Move all resource groups and device groups off Node A.


    # scswitch -S -h nodename
    

  6. Shut down Node A.


    # shutdown -y -g0 -i0
    

    For the procedure on shutting down a node, see the Sun Cluster 3.0 12/01 System Administration Guide.

  7. Disconnect the cables that connected Node A to the FC switches in your StorEdge 3900 or 6900 Series system.

  8. Without allowing the node to boot, power on Node A.

    For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.

  9. Boot Node A into cluster mode.


    {0} ok boot
    

  10. Move all resource groups and device groups off Node B.


    # scswitch -S -h nodename
    

  11. Shut down Node B.


    # shutdown -y -g0 -i0
    

  12. Disconnect the cables that connected Node B to the FC switches in your StorEdge 3900 or 6900 Series system.

  13. Without allowing the node to boot, power on Node B.

    For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.

  14. Boot Node B into cluster mode.


    {0} ok boot
    

    For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.

  15. On all nodes, update the /devices and /dev entries.


    # devfsadm -C 
    # devfsadm
    

  16. On all nodes, update the paths to the DID instances.


    # scdidadm -C 
    # scdidadm -r
    

  17. Return the resource groups and device groups you identified in Step 4 to all nodes.


    # scswitch -z -g resource-group -h nodename
    # scswitch -z -D device-group-name -h nodename
    

How to Replace a Virtualization Engine in a Running Cluster (StorEdge 6900 Series Only)

Use this procedure to replace a virtualization engine (VE) in a StorEdge 6900 Series system in a running cluster.

  1. Replace the VE hardware.

    Follow the instructions in the Sun StorEdge 3900 and 6900 Series Reference Manual.

  2. On any cluster node, use the cfgadm command as shown below to view the virtualization engine (VE) controller status and to enable the VE controllers.


    # cfgadm -al
    # cfgadm -c configure <c::controller id>
    

    See the cfgadm(1M) man page for more information about the command and its options.

How to Replace a Node-to-Switch Component in a Running Cluster

Use this procedure to replace the following node-to-switch components in a running cluster:

  • A node-to-switch fiber optic cable

  • A GBIC on a node's host adapter

  • A GBIC on an FC switch, connecting to a node

  1. On the node connected to the component you are replacing, determine the resource groups and device groups running on the node.

    Record this information because you will use it in Step 4 of this procedure to return resource groups and device groups to these nodes.


    # scstat
    

  2. Move all resource groups and device groups to another node.


    # scswitch -S -h nodename
    

  3. Replace the node-to-switch component.

    • For the procedure on replacing a fiber optic cable between a node and an FC switch, see the Sun StorEdge network FC switch-8 and switch-16 Installation and Configuration Guide.

    • For the procedure on replacing a GBIC on an FC switch, see the SANbox 8/16 Segmented Loop Switch User's Manual.

  4. Return the resource groups and device groups you identified in Step 1 to the node that is connected to the component you replaced.


    # scswitch -z -g resource-group -h nodename
    # scswitch -z -D device-group-name -h nodename
    

How to Remove StorEdge T3/T3+ Arrays From a Running Cluster

Use this procedure to permanently remove StorEdge T3/T3+ array partner groups and their submirrors from a StorEdge 3900 or 6900 Series system in a running cluster.

This procedure defines "Node A" as the cluster node you begin working with, and "Node B" as the other node.


Caution -

During this procedure, you lose access to the data that resides on each StorEdge T3+ array partner-group you are removing.


  1. If necessary, back up all database tables, data services, and volumes associated with each partner-group you are removing.

  2. If necessary, run the appropriate Solstice DiskSuite or VERITAS Volume Manager commands to detach the submirrors from each array or partner-group that you are removing to stop all I/O activity to the array or partner-group.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

  3. Run the appropriate Solstice DiskSuite or VERITAS Volume Manager commands to remove references to each LUN that belongs to the array or partner-group that you are removing.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.
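
    As an illustration only, for a Solstice DiskSuite configuration this cleanup might include clearing the metadevices that were built on the arrays' LUNs and then deleting the corresponding DID drives from the diskset; the diskset, metadevice, and drive names below are hypothetical.


    # metaclear -s setname d10
    # metaset -s setname -d /dev/did/rdsk/d4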

  4. Determine the resource groups and device groups running on all nodes.

    Record this information because you will use it in Step 22 of this procedure to return resource groups and device groups to these nodes.


    # scstat
    

  5. Move all resource groups and device groups off Node A.


    # scswitch -S -h nodename
    

  6. Shut down Node A.


    # shutdown -y -g0 -i0
    

    For the procedure on shutting down a node, see the Sun Cluster 3.0 12/01 System Administration Guide.

  7. Disconnect the fiber optic cables that connect both arrays to the FC switches, then disconnect the Ethernet cable(s).

  8. Is any array you are removing the last array connected to an FC switch on Node A?

    • If not, go to Step 12.

    • If it is the last array, disconnect the fiber optic cable between Node A and the FC switch that was connected to this array.

    For the procedure on removing a fiber optic cable, see the Sun StorEdge T3 and T3+ Array Configuration Guide.

  9. Do you want to remove the host adapters from Node A?

    • If not, go to Step 12.

    • If yes, power off Node A.

  10. Remove the host adapters from Node A.

    For the procedure on removing host adapters, see the documentation that shipped with your host adapter and nodes.

  11. Without allowing the node to boot, power on Node A.

    For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.

  12. Boot Node A into cluster mode.


    {0} ok boot
    

  13. Move all resource groups and device groups off Node B.


    # scswitch -S -h nodename
    

  14. Shut down Node B.


    # shutdown -y -g0 -i0
    

  15. Is any array you are removing the last array connected to an FC switch on Node B?

    • If not, go to Step 19.

    • If it is the last array, disconnect the fiber optic cable connecting this FC switch to Node B.

    For the procedure on removing a fiber optic cable, see the Sun StorEdge T3 and T3+ Array Configuration Guide.

  16. Do you want to remove the host adapters from Node B?

    • If not, go to Step 19.

    • If yes, power off Node B.

  17. Remove the host adapters from Node B.

    For the procedure on removing host adapters, see the documentation that shipped with your nodes.

  18. Without allowing the node to boot, power on Node B.

    For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.

  19. Boot Node B into cluster mode.


    {0} ok boot
    

    For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.

  20. On all nodes, update the /devices and /dev entries.


    # devfsadm -C 
    # devfsadm
    

  21. On all nodes, update the paths to the DID instances.


    # scdidadm -C 
    # scdidadm -r
    

  22. Return the resource groups and device groups you identified in Step 4 to all nodes.


    # scswitch -z -g resource-group -h nodename
    # scswitch -z -D device-group-name -h nodename
    

How to Upgrade StorEdge T3+ Array Firmware

Use one of the following procedures to upgrade the firmware on the StorEdge T3+ arrays in your StorEdge 3900 or 6900 Series system, depending on whether your array partner-group has been configured to support submirrors of a cluster node's volumes. StorEdge T3+ array firmware includes controller firmware, unit interconnect card (UIC) firmware, EPROM firmware, and disk drive firmware.


Note -

For all firmware, always read any README files that accompany the firmware for the latest information and special notes.


Upgrading Firmware on Arrays That Support Submirrored Data


Caution -

Perform this procedure on one array at a time. This procedure requires that you reset the arrays you are upgrading. If you reset more than one array at a time, your cluster will lose access to data.


  1. On the node that currently owns the disk group or disk set to which the submirror belongs, detach the submirrors of the array on which you are upgrading firmware. (This procedure refers to this node as Node A and the remaining node as Node B.)

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

  2. Disconnect both array-to-switch fiber optic cables from the two arrays of the partner-group.

  3. Apply the controller, disk drive, and UIC firmware patches.

    For the list of required StorEdge T3+ array patches, see the Sun StorEdge T3 Disk Tray Release Notes. For the procedure on applying firmware patches, see the firmware patch README file. For the procedure on verifying the firmware level, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

  4. Reset the arrays.

    For the procedure on resetting an array, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

  5. Use the StorEdge T3+ disable command to disable the array controller that is attached to Node B so that all logical volumes come under the control of the remaining controller.


    t3:/:<#> disable uencidctr
    

    See the Sun StorEdge T3 and T3+ Array Administrator's Guide for more information about the disable command.

  6. Reconnect both array-to-switch fiber optic cables to the two arrays of the partner-group.

  7. On one node connected to the partner-group, use the format command to verify that the array controllers are rediscovered by the node.


    # format
    

  8. Use the StorEdge T3+ enable command to enable the array controller that you disabled in Step 5.


    t3:/:<#> enable uencidctr
    

  9. Reattach the submirrors that you detached in Step 1 to resynchronize them.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.
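
    As an illustration only, reattaching a Solstice DiskSuite submirror that was detached in Step 1 might look like the following; the names are hypothetical, and VERITAS Volume Manager users would instead reattach the plex (for example, with vxplex att).


    # metattach -s setname d10 d11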

Upgrading Firmware on Arrays That Do Not Support Submirrored Data

In a partner-pair configuration, it is possible to have non-mirrored data; however, this requires that you shut down the cluster when upgrading firmware, as described in this procedure.

  1. Shut down the entire cluster.


    # scshutdown -y -g0
    

    For the full procedure on shutting down a cluster, see the Sun Cluster 3.0 12/01 System Administration Guide.

  2. Apply the controller, disk drive, and UIC firmware patches.

    For the list of required StorEdge T3+ array patches, see the Sun StorEdge T3 Disk Tray Release Notes. For the procedure on applying firmware patches, see the firmware patch README file. For the procedure on verifying the firmware level, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

  3. Reset the arrays.

    For the procedure on resetting an array, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

  4. Boot all nodes back into the cluster.


    ok boot 
    

    For the full procedure on booting nodes into the cluster, see the Sun Cluster 3.0 12/01 System Administration Guide.

  5. On one node connected to the partner-group, use the format command to verify that the array controllers are rediscovered by the node.


    # format
    

How to Replace a Failed Disk Drive in a Running Cluster

Use this procedure to replace one failed disk drive in a StorEdge T3+ array in your StorEdge 3900 or 6900 Series system in a running cluster.


Caution -

If you remove any field replaceable unit (FRU) for an extended period of time, thermal complications might result. To prevent this complication, the StorEdge T3+ array is designed so that an orderly shutdown occurs when you remove a component for longer than 30 minutes. Therefore, a replacement part must be immediately available before starting an FRU replacement procedure. You must replace an FRU within 30 minutes or the StorEdge T3+ array, and all attached StorEdge T3+ arrays, will shut down and power off.


  1. Did the failed disk drive impact the array LUN's availability?

    • If not, go to Step 2.

    • If it did impact LUN availability, remove the LUN from volume management control.

      For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

  2. Replace the disk drive in the array.

    For the procedure on replacing a disk drive, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

  3. Did you remove a LUN from volume management control in Step 1?

    • If not, you are finished with this procedure.

    • If you did remove a LUN from volume management control, return the LUN to volume management control now.

      For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

How to Replace a Node's Host Adapter in a Running Cluster

Use this procedure to replace a failed host adapter in a running cluster. As defined in this procedure, "Node A" is the node with the failed host adapter you are replacing and "Node B" is the other node.

  1. Determine the resource groups and device groups running on all nodes.

    Record this information because you will use it in Step 8 of this procedure to return resource groups and device groups to these nodes.


    # scstat
    

  2. Move all resource groups and device groups off Node A.


    # scswitch -S -h nodename
    

  3. Shut down Node A.


    # shutdown -y -g0 -i0
    

  4. Power off Node A.

    For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.

  5. Replace the failed host adapter.

    For the procedure on removing and adding host adapters, see the documentation that shipped with your nodes.

  6. Power on Node A.

    For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.

  7. Boot Node A into cluster mode.


    {0} ok boot
    

    For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.

  8. Return the resource groups and device groups you identified in Step 1 to all nodes.


    # scswitch -z -g resource-group -h nodename
    # scswitch -z -D device-group-name -h nodename
    

    For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.