Sun Cluster 3.0 U1 Release Notes Supplement

Appendix B Installing and Maintaining a Sun StorEdge T3 and T3+ Disk Tray Partner-Group Configuration

This chapter contains the procedures for installing, configuring, and maintaining Sun StorEdge(TM) T3 and Sun StorEdge T3+ disk trays in a partner-group (interconnected) configuration. Differences between the StorEdge T3 and StorEdge T3+ procedures are noted where appropriate.

This chapter contains the following procedures:

  • "How to Install StorEdge T3/T3+ Disk Tray Partner Groups"

  • "How to Create a Logical Volume"

  • "How to Remove a Logical Volume"

  • "How to Upgrade StorEdge T3/T3+ Disk Tray Firmware in a Running Cluster"

  • "How to Add StorEdge T3/T3+ Disk Tray Partner Groups to a Running Cluster"

  • "How to Remove StorEdge T3/T3+ Disk Trays From a Running Cluster"

  • "How to Replace a Failed Disk Drive in a Running Cluster"

  • "How to Replace a Node-to-Switch Component in a Running Cluster"

  • "How to Replace a Controller Card, FC Switch, or Disk Tray-to-Switch Component in a Running Cluster"

  • "How to Replace a Disk Tray Chassis in a Running Cluster"

  • "How to Replace a Node's Host Adapter in a Running Cluster"

  • "How to Migrate From a Single-Controller Configuration to a Partner-Group Configuration"

For conceptual information on multihost disks, see the Sun Cluster 3.0 U1 Concepts document.

Installing Sun StorEdge T3/T3+ Disk Trays


Note -

This section contains the procedure for an initial installation of StorEdge T3 or StorEdge T3+ disk tray partner groups in a new Sun Cluster that is not running. If you are adding partner groups to an existing cluster, use the procedure in "How to Add StorEdge T3/T3+ Disk Tray Partner Groups to a Running Cluster".


How to Install StorEdge T3/T3+ Disk Tray Partner Groups

Perform the steps in this procedure in conjunction with the procedures in the Sun Cluster 3.0 U1 Installation Guide and your server hardware manual.

  1. Install the host adapters in the nodes that will be connected to the disk trays.

    For the procedure on installing host adapters, see the documentation that shipped with your host adapters and nodes.

  2. Install the Fibre Channel (FC) switches.


    Note -

    You must use FC switches when installing disk trays in a partner-group configuration.


    For the procedure on installing a Sun StorEdge network FC switch-8 or switch-16, see the Sun StorEdge network FC switch-8 and switch-16 Installation and Configuration Guide.

  3. (Skip this step if you are installing StorEdge T3+ disk trays.) Install the media interface adapters (MIAs) in the StorEdge T3 disk trays you are installing, as shown in Figure B-1.

    For the procedure on installing a media interface adapter (MIA), see the Sun StorEdge T3 and T3+ Array Configuration Guide.

  4. If necessary, install GBICs in the FC switches, as shown in Figure B-1.

    For the procedure on installing a GBIC to an FC switch, see the Sun StorEdge network FC switch-8 and switch-16 Installation and Configuration Guide.

  5. Set up a Reverse Address Resolution Protocol (RARP) server on the network you want the new disk trays to reside on.

    This RARP server enables you to assign an IP address to the new disk trays using the disk tray's unique MAC address. For the procedure on setting up a RARP server, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.
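
    The following is a minimal sketch of this step, assuming a Solaris host on the same subnet acts as the RARP server; the MAC address, host name, and IP address shown are placeholders for your site's values. Add the disk tray's MAC address to /etc/ethers and its host name to /etc/hosts, then start the RARP daemon:


    # echo "0:20:f2:xx:xx:xx t3-mcu" >> /etc/ethers
    # echo "192.168.1.10 t3-mcu" >> /etc/hosts
    # /usr/sbin/in.rarpd -a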

  6. Cable the disk trays (see Figure B-1):

    1. Connect the disk trays to the FC switches using fiber optic cables.

    2. Connect the Ethernet cables from each disk tray to the LAN.

    3. Connect interconnect cables between the two disk trays of each partner group.

    4. Connect power cords to each disk tray.

    For the procedure on installing fiber optic, Ethernet, and interconnect cables, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

    Figure B-1 StorEdge T3/T3+ Disk Tray Partner-Group (Interconnected) Controller Configuration


  7. Power on the disk trays and verify that all components are powered on and functional.

    For the procedure on powering on the disk trays and verifying the hardware configuration, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

  8. Administer the disk trays' network settings:

    Telnet to the master controller unit and administer the disk trays. For the procedure on administering the disk tray network addresses and settings, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

    The master controller unit is the disk tray that has the interconnect cables attached to the right-hand connectors of its interconnect cards (when viewed from the rear of the disk trays). For example, Figure B-1 shows the master controller unit of the partner-group as the lower disk tray. Note in this diagram that the interconnect cables are connected to the right-hand connectors of both interconnect cards on the master controller unit.
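
    As a hedged sketch of this step, assuming placeholder host names and addresses, you might telnet to the master controller unit from an administrative host and use the disk tray's set command to assign the network settings; see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual for the authoritative command set:


    # telnet t3-mcu
    t3:/:<#> set hostname t3-mcu
    t3:/:<#> set ip 192.168.1.10
    t3:/:<#> set netmask 255.255.255.0
    t3:/:<#> set gateway 192.168.1.1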

  9. Install any required disk tray controller firmware:

    For partner-group configurations, telnet to the master controller unit and install the required controller firmware.

    See the Sun Cluster 3.0 U1 Release Notes for information about accessing Sun's EarlyNotifier web pages, which list information about any required patches or firmware levels that are available for download. For the procedure on applying any host adapter firmware patch, see the firmware patch README file.

  10. At the master disk tray's prompt, use the port list command to ensure that each disk tray has a unique target address:


    t3:/:<#> port list
    

    If the disk trays do not have unique target addresses, use the port set command to set the addresses. For the procedure on verifying and assigning a target address to a disk tray, see the Sun StorEdge T3 and T3+ Array Configuration Guide. For more information about the port command see the Sun StorEdge T3 and T3+ Array Administrator's Guide.
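
    As a hedged example, the port set command takes the port name and the new target ID; the port names shown (u1p1, u2p1) are placeholders, and you should confirm the exact names and syntax for your configuration in the Sun StorEdge T3 and T3+ Array Administrator's Guide:


    t3:/:<#> port set u1p1 targetid 1
    t3:/:<#> port set u2p1 targetid 2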

  11. At the master disk tray's prompt, use the sys list command to verify that the cache and mirror settings for each disk tray are set to auto:


    t3:/:<#> sys list
    

    If the two settings are not already set to auto, set them using the following commands:


    t3:/:<#> sys cache auto
    t3:/:<#> sys mirror auto
    

    For more information about the sys command see the Sun StorEdge T3 and T3+ Array Administrator's Guide.

  12. At the master disk tray's prompt, use the sys list command to verify that the mp_support parameter for each disk tray is set to mpxio:


    t3:/:<#> sys list
    

    If mp_support is not already set to mpxio, set it using the following command:


    t3:/:<#> sys mp_support mpxio
    

    For more information about the sys command see the Sun StorEdge T3 and T3+ Array Administrator's Guide.

  13. At the master disk tray's prompt, use the sys stat command to verify that both disk tray controllers are online, as shown in the following example output.


    t3:/:<#> sys stat
    Unit   State      Role    Partner
    -----  ---------  ------  -------
     1    ONLINE     Master    2
     2    ONLINE     AlterM    1

    For more information about the sys command and how to correct the situation if both controllers are not online, see the Sun StorEdge T3 and T3+ Array Administrator's Guide.

  14. (Optional) Configure the disk trays with the desired logical volumes.

    For the procedure on creating and initializing a logical volume, see the Sun StorEdge T3 and T3+ Array Administrator's Guide. For the procedure on mounting a logical volume, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

  15. Reset the disk trays.

    For the procedure on rebooting or resetting a disk tray, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.
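
    As a minimal example, from a telnet session to the master controller unit you can reset the partner group with the disk tray's reset command; the command might prompt for confirmation, and the telnet session is dropped while the disk trays reboot:


    t3:/:<#> reset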

  16. Install the Solaris operating environment on the cluster nodes, then apply the required Solaris patches for Sun Cluster software and StorEdge T3/T3+ disk tray support.

    For the procedure on installing the Solaris operating environment, see the Sun Cluster 3.0 U1 Installation Guide.

    See the Sun Cluster 3.0 U1 Release Notes for information about accessing Sun's EarlyNotifier web pages, which list information about any required patches or firmware levels that are available for download. For the procedure on applying any host adapter firmware patch, see the firmware patch README file.

  17. On the cluster nodes, install any required patches or software for Sun StorEdge Traffic Manager software support from the Sun Download Center Web site, http://www.sun.com/storage/san/

    For instructions on installing the software, see the information on the web site.

  18. Activate the Sun StorEdge Traffic Manager software functionality in the software you installed to the cluster nodes in Step 17.

    To activate the Sun StorEdge Traffic Manager software functionality, manually edit the /kernel/drv/scsi_vhci.conf file that is installed to change the mpxio-disable parameter to no:


    mpxio-disable="no"
    

  19. Perform a reconfiguration boot on all nodes to create the new Solaris device files and links.


    {0} ok boot -r
    
  20. On all nodes, update the /devices and /dev entries:


    # devfsadm -C 
    

Where to Go From Here

To continue with Sun Cluster software installation tasks, see the Sun Cluster 3.0 U1 Installation Guide.

Configuring StorEdge T3/T3+ Disk Trays in a Running Cluster

This section contains the procedures for configuring a StorEdge T3 or StorEdge T3+ disk tray in a running cluster. Table B-1 lists these procedures.

Table B-1 Task Map: Configuring a StorEdge T3/T3+ Disk Tray

  • Create a logical volume: see "How to Create a Logical Volume"

  • Remove a logical volume: see "How to Remove a Logical Volume"

How to Create a Logical Volume

Use this procedure to create a StorEdge T3/T3+ disk tray logical volume. This procedure assumes all cluster nodes are booted and attached to the disk tray that will host the logical volume you are creating.

  1. Telnet to the disk tray that is the master controller unit of your partner-group.

    The master controller unit is the disk tray that has the interconnect cables attached to the right-hand connectors of its interconnect cards (when viewed from the rear of the disk trays). For example, Figure B-1 shows the master controller unit of the partner-group as the lower disk tray. Note in this diagram that the interconnect cables are connected to the right-hand connectors of both interconnect cards on the master controller unit.

  2. Create the logical volume.

    Creating a logical volume involves adding, initializing, and mounting the logical volume.

    For the procedure on creating and initializing a logical volume, see the Sun StorEdge T3 and T3+ Array Administrator's Guide. For the procedure on mounting a logical volume, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.
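
    The following is a minimal sketch of this step for one RAID 5 volume; the volume name (v0), data drives (u1d1-8), and standby drive (u1d9) are placeholders, so adapt them to your configuration as described in the referenced manuals:


    t3:/:<#> vol add v0 data u1d1-8 raid 5 standby u1d9
    t3:/:<#> vol init v0 data
    t3:/:<#> vol mount v0


    Note that initializing a volume can take a considerable amount of time to complete.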

  3. On all cluster nodes, update the /devices and /dev entries:


    # devfsadm
    
  4. On one node connected to the partner-group, use the format command to verify that the new logical volume is visible to the system.


    # format
    

    See the format command man page for more information about using the command.

  5. Are you running VERITAS Volume Manager?

    • If not, go to Step 6

    • If you are running VERITAS Volume Manager, update its list of devices on all cluster nodes attached to the logical volume you created in Step 2.

    See your VERITAS Volume Manager documentation for information about using the vxdctl enable command to update new devices (volumes) in your VERITAS Volume Manager list of devices.
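
    For example, on each cluster node attached to the new logical volume:


    # vxdctl enable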

  6. If needed, partition the logical volume.

  7. From any node in the cluster, update the global device namespace by using the scgdevs command.


    # scgdevs
    

    Note -

    If a volume management daemon such as vold is running on your node, and you have a CD-ROM drive connected to the node, a device busy error might be returned even if no disk is in the drive. This error is expected behavior.



Note -

Do not configure StorEdge T3/T3+ logical volumes as quorum devices in partner-group configurations. The use of StorEdge T3/T3+ logical volumes as quorum devices in partner-group configurations is not supported.


Where to Go From Here

To create a new resource or reconfigure a running resource to use the new logical volume, see the Sun Cluster 3.0 U1 Data Services Installation and Configuration Guide.

How to Remove a Logical Volume

Use this procedure to remove a StorEdge T3/T3+ disk tray logical volume. This procedure assumes all cluster nodes are booted and attached to the disk tray that hosts the logical volume you are removing.

This procedure defines "Node A" as the node you begin working with, and "Node B" as the other node.


Caution -

This procedure removes all data from the logical volume you are removing.


  1. If necessary, migrate all data and volumes off the logical volume you are removing.

  2. Are you running VERITAS Volume Manager?

    • If not, go to Step 3.

    • If you are running VERITAS Volume Manager, update its list of devices on all cluster nodes attached to the logical volume you are removing.

    See your VERITAS Volume Manager documentation for information about using the vxdisk rm command to remove devices (volumes) in your VERITAS Volume Manager device list.

  3. Run the appropriate Solstice DiskSuite(TM) or VERITAS Volume Manager commands to remove the reference to the logical unit number (LUN) from any diskset or disk group.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.
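
    The following are hedged examples only, with placeholder diskset, disk group, device, and disk names; consult your volume manager documentation for the exact commands for your configuration. For Solstice DiskSuite, remove the DID device from its diskset; for VERITAS Volume Manager, remove the disk from its disk group:


    # metaset -s setname -d /dev/did/rdsk/dN
    # vxdg -g diskgroup rmdisk diskname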

  4. Telnet to the disk tray that is the master controller unit of your partner-group.

    The master controller unit is the disk tray that has the interconnect cables attached to the right-hand connectors of its interconnect cards (when viewed from the rear of the disk trays). For example, Figure B-1 shows the master controller unit of the partner-group as the lower disk tray. Note in this diagram that the interconnect cables are connected to the right-hand connectors of both interconnect cards on the master controller unit.

  5. Remove the logical volume.

    For the procedure on removing a logical volume, see the Sun StorEdge T3 and T3+ Array Administrator's Guide.

  6. Use the scstat command to identify the resource groups and device groups running on all nodes.

    Record this information because you will use it in Step 15 of this procedure to return resource groups and device groups to these nodes.


    # scstat
    
  7. Move all resource groups and device groups off of Node A:


    # scswitch -S -h nodename
    
  8. Shut down and reboot Node A by using the shutdown command with the -i6 option.

    The -i6 option with the shutdown command causes the node to shut down and then reboot.


    # shutdown -y -g0 -i6
    

    For more information, see the Sun Cluster 3.0 U1 System Administration Guide.

  9. On Node A, remove the obsolete device IDs (DIDs):


    # devfsadm -C
    # scdidadm -C
    
  10. On Node A, use the format command to verify that the logical volume you removed is no longer visible to the system.


    # format
    

    See the format command man page for more information about using the command.

  11. Move all resource groups and device groups off Node B:


    # scswitch -S -h nodename
    
  12. Shut down and reboot Node B by using the shutdown command with the -i6 option.

    The -i6 option with the shutdown command causes the node to shut down and then reboot.


    # shutdown -y -g0 -i6
    

    For more information, see the Sun Cluster 3.0 U1 System Administration Guide.

  13. On Node B, remove the obsolete DIDs:


    # devfsadm -C
    # scdidadm -C
    
  14. On Node B, use the format command to verify that the logical volume you removed is no longer visible to the system.


    # format
    

    See the format command man page for more information about using the command.

  15. Return the resource groups and device groups you identified in Step 6 to Node A and Node B:


    # scswitch -z -g resource-group -h nodename
    # scswitch -z -D device-group-name -h nodename
    

    For more information, see the Sun Cluster 3.0 U1 System Administration Guide.

Where to Go From Here

To create a logical volume, see "How to Create a Logical Volume".

Maintaining StorEdge T3/T3+ Disk Trays

This section contains the procedures for maintaining StorEdge T3 and StorEdge T3+ disk trays. Table B-2 lists these procedures. This section does not include a procedure for adding a disk drive or a procedure for removing a disk drive because a StorEdge T3/T3+ disk tray operates only when fully configured.


Caution -

If you remove any field replaceable unit (FRU) for an extended period of time, thermal complications might result. To prevent this complication, the StorEdge T3/T3+ disk tray is designed so that an orderly shutdown occurs when you remove a component for longer than 30 minutes. Therefore, a replacement part must be immediately available before you start an FRU replacement procedure. You must replace an FRU within 30 minutes or the StorEdge T3/T3+ disk tray, and all attached StorEdge T3/T3+ disk trays, will shut down and power off.


Table B-2 Task Map: Maintaining a StorEdge T3/T3+ Disk Tray

  • Upgrade StorEdge T3/T3+ disk tray firmware: see "How to Upgrade StorEdge T3/T3+ Disk Tray Firmware in a Running Cluster"

  • Add a StorEdge T3/T3+ disk tray: see "How to Add StorEdge T3/T3+ Disk Tray Partner Groups to a Running Cluster"

  • Remove a StorEdge T3/T3+ disk tray: see "How to Remove StorEdge T3/T3+ Disk Trays From a Running Cluster"

  • Replace a disk drive in a disk tray: see "How to Replace a Failed Disk Drive in a Running Cluster"

  • Replace a node-to-switch fiber optic cable: see "How to Replace a Node-to-Switch Component in a Running Cluster"

  • Replace a gigabit interface converter (GBIC) on a node's host adapter: see "How to Replace a Node-to-Switch Component in a Running Cluster"

  • Replace a GBIC on an FC switch, connecting to a node: see "How to Replace a Node-to-Switch Component in a Running Cluster"

  • Replace a disk tray-to-switch fiber optic cable: see "How to Replace a Controller Card, FC Switch, or Disk Tray-to-Switch Component in a Running Cluster"

  • Replace a GBIC on an FC switch, connecting to a disk tray: see "How to Replace a Controller Card, FC Switch, or Disk Tray-to-Switch Component in a Running Cluster"

  • Replace a StorEdge network FC switch-8 or switch-16: see "How to Replace a Controller Card, FC Switch, or Disk Tray-to-Switch Component in a Running Cluster"

  • Replace a StorEdge network FC switch-8 or switch-16 power cord: see "How to Replace a Controller Card, FC Switch, or Disk Tray-to-Switch Component in a Running Cluster"

  • Replace a media interface adapter (MIA) on a StorEdge T3 disk tray (not applicable for StorEdge T3+ disk trays): see "How to Replace a Controller Card, FC Switch, or Disk Tray-to-Switch Component in a Running Cluster"

  • Replace interconnect cables: see "How to Replace a Controller Card, FC Switch, or Disk Tray-to-Switch Component in a Running Cluster"

  • Replace a StorEdge T3/T3+ disk tray controller: see "How to Replace a Controller Card, FC Switch, or Disk Tray-to-Switch Component in a Running Cluster"

  • Replace a StorEdge T3/T3+ disk tray chassis: see "How to Replace a Disk Tray Chassis in a Running Cluster"

  • Replace a host adapter in a node: see "How to Replace a Node's Host Adapter in a Running Cluster"

  • Migrate from a single-controller configuration to a partner-group configuration: see "How to Migrate From a Single-Controller Configuration to a Partner-Group Configuration"

  • Upgrade a StorEdge T3 disk tray controller to a StorEdge T3+ disk tray controller: see the Sun StorEdge T3 Array Controller Upgrade Manual

  • Replace a Power and Cooling Unit (PCU): follow the same procedure used in a non-cluster environment; see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual

  • Replace a unit interconnect card (UIC): follow the same procedure used in a non-cluster environment; see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual

  • Replace a StorEdge T3/T3+ disk tray power cable: follow the same procedure used in a non-cluster environment; see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual

  • Replace an Ethernet cable: follow the same procedure used in a non-cluster environment; see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual

How to Upgrade StorEdge T3/T3+ Disk Tray Firmware in a Running Cluster

Use this procedure to upgrade StorEdge T3/T3+ disk tray firmware in a running cluster. StorEdge T3/T3+ disk tray firmware includes controller firmware, unit interconnect card (UIC) firmware, EPROM firmware, and disk drive firmware.


Caution -

This procedure requires that you reset the StorEdge T3/T3+ array whose firmware you are upgrading. If the partner group contains data that is mirrored between separate arrays, upgrade only one array submirror at a time, as described in this procedure. If you upgrade corresponding submirrors simultaneously, you will lose access to the data. If your partner-group configuration does not have data mirrored between separate arrays, you must shut down the cluster when upgrading the firmware, as described in this procedure. If you upgrade the firmware in an array that does not contain mirrored data, you will lose access to the data during the upgrade.



Note -

For all firmware, always read any README files that accompany the firmware for the latest information and special notes.


  1. Determine whether the partner-group is a submirror of a node's volume-manager volume.

    • If it is a submirror, go to Step 2.

    • If it is not a submirror, shut down the entire cluster, then go to Step 4:


      # scshutdown -y -g0
      

      For the full procedure on shutting down a cluster, see the Sun Cluster 3.0 U1 System Administration Guide.

  2. On one node attached to the disk tray you are upgrading, detach that disk tray's submirrors.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.
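
    The following are hedged examples with placeholder metadevice, disk group, and plex names; see your volume manager documentation for the exact syntax. For Solstice DiskSuite, detach the submirror from its mirror; for VERITAS Volume Manager, dissociate the plex from its volume:


    # metadetach mirror submirror
    # vxplex -g diskgroup dis plexname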

  3. Disconnect both disk tray-to-switch fiber optic cables from the two disk trays of the partner-group.

  4. Apply the controller, disk drive, and UIC firmware patches.

    For the list of required StorEdge T3/T3+ disk tray patches, see the Sun StorEdge T3 and T3+ Array Release Notes. For the procedure on applying firmware patches, see the firmware patch README file. For the procedure on verifying the firmware level, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

  5. Reset the disk trays.

    For the procedure on resetting a disk tray, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

  6. Use the StorEdge T3/T3+ disable command to disable one of the disk tray controllers so that all logical volumes come under the control of the remaining controller.


    t3:/:<#> disable uencidctr
    

    See the Sun StorEdge T3 and T3+ Array Administrator's Guide for more information about the disable command.

  7. Perform this step only if you disconnected cables in Step 3: Reconnect both disk tray-to-switch fiber optic cables to the two disk trays of the partner-group.

  8. On one node connected to the partner-group, use the format command to verify that the disk tray controllers are rediscovered by the node.


    # format
    

  9. Use the StorEdge T3/T3+ enable command to enable the disk tray controller that you disabled in Step 6.


    t3:/:<#> enable uencidctr
    

  10. Perform this step only if you were directed to detach the submirrors in Step 2: Reattach the disk tray's submirrors to resynchronize them.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.
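
    The following are hedged examples, using the same placeholder names as the example in Step 2; reattaching starts a resynchronization that can take some time to complete. For Solstice DiskSuite, reattach the submirror to its mirror; for VERITAS Volume Manager, reattach the plex to its volume:


    # metattach mirror submirror
    # vxplex -g diskgroup att volume plexname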

  11. Did you shut down the cluster in Step 1?

    • If not, you are done with this procedure.

    • If you did shut down the cluster, boot all nodes back into the cluster.


      ok boot 
      

      For the full procedure on booting nodes into the cluster, see the Sun Cluster 3.0 U1 System Administration Guide.

How to Add StorEdge T3/T3+ Disk Tray Partner Groups to a Running Cluster


Note -

Use this procedure to add new StorEdge T3/T3+ disk tray partner groups to a running cluster. To install partner groups to a new Sun Cluster that is not running, use the procedure in "How to Install StorEdge T3/T3+ Disk Tray Partner Groups".


This procedure defines "Node A" as the node you begin working with, and "Node B" as the second node.

  1. Set up a Reverse Address Resolution Protocol (RARP) server on the network you want the new disk trays to reside on, then assign an IP address to the new disk trays.


    Note -

    Assign an IP address to the master controller unit only. The master controller unit is the disk tray that has the interconnect cables attached to the right-hand connectors of its interconnect cards (see Figure B-2).


    This RARP server lets you assign an IP address to the new disk trays using the disk tray's unique MAC address. For the procedure on setting up a RARP server, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

  2. Install the Ethernet cable between the disk trays and the local area network (LAN) (see Figure B-2).

  3. If not already installed, install interconnect cables between the two disk trays of each partner group (see Figure B-2).

    For the procedure on installing interconnect cables, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

    Figure B-2 Adding Sun StorEdge T3/T3+ Disk Trays, Partner-Group Configuration


  4. Power on the disk trays.


    Note -

    The disk trays might take several minutes to boot.


    For the procedure on powering on disk trays, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

  5. Administer the disk trays' network addresses and settings.

    Telnet to the StorEdge T3/T3+ master controller unit and administer the disk trays.

    For the procedure on administering disk tray network address and settings, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

  6. Install any required disk tray controller firmware upgrades:

    For partner-group configurations, telnet to the StorEdge T3/T3+ master controller unit and if necessary, install the required disk tray controller firmware.

    For the required disk tray controller firmware revision number, see the Sun StorEdge T3 and T3+ Array Release Notes.

  7. At the master disk tray's prompt, use the port list command to ensure that each disk tray has a unique target address:


    t3:/:<#> port list
    

    If the disk trays do not have unique target addresses, use the port set command to set the addresses. For the procedure on verifying and assigning a target address to a disk tray, see the Sun StorEdge T3 and T3+ Array Configuration Guide. For more information about the port command see the Sun StorEdge T3 and T3+ Array Administrator's Guide.

  8. At the master disk tray's prompt, use the sys list command to verify that the cache and mirror settings for each disk tray are set to auto:


    t3:/:<#> sys list
    

    If the two settings are not already set to auto, set them using the following commands at each disk tray's prompt:


    t3:/:<#> sys cache auto
    t3:/:<#> sys mirror auto
    

    For more information about the sys command see the Sun StorEdge T3 and T3+ Array Administrator's Guide.

  9. Use the StorEdge T3/T3+ sys list command to verify that the mp_support parameter for each disk tray is set to mpxio:


    t3:/:<#> sys list
    

    If mp_support is not already set to mpxio, set it using the following command at each disk tray's prompt:


    t3:/:<#> sys mp_support mpxio
    

    For more information about the sys command see the Sun StorEdge T3 and T3+ Array Administrator's Guide.

  10. Configure the new disk trays with the desired logical volumes.

    For the procedure on creating and initializing a logical volume, see the Sun StorEdge T3 and T3+ Array Administrator's Guide. For the procedure on mounting a logical volume, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

  11. Reset the disk trays.

    For the procedure on rebooting or resetting a disk tray, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

  12. (Skip this step if you are adding StorEdge T3+ disk trays.) Install the media interface adapter (MIA) in the StorEdge T3 disk trays you are adding, as shown in Figure B-2.

    For the procedure on installing an MIA, see the Sun StorEdge T3 and T3+ Array Configuration Guide.

  13. If necessary, install GBICs in the FC switches, as shown in Figure B-2.


    Note -

    There are no FC switch port-assignment restrictions. You can connect your disk trays and nodes to any FC switch port.


    For the procedure on installing a GBIC to an FC switch, see the Sun StorEdge network FC switch-8 and switch-16 Installation and Configuration Guide.

  14. Install a fiber optic cable between the FC switch and the new disk tray as shown in Figure B-2.

    For the procedure on installing a fiber optic cable, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

  15. Determine the resource groups and device groups running on all nodes.

    Record this information because you will use it in Step 54 of this procedure to return resource groups and device groups to these nodes.


    # scstat
    

  16. Move all resource groups and device groups off Node A.


    # scswitch -S -h nodename
    

  17. Do you need to install host adapters in Node A?

    • If not, go to Step 24.

    • If you do need to install host adapters in Node A, continue with Step 18.

  18. Is the host adapter you are installing the first host adapter on Node A?

    • If not, go to Step 20.

    • If it is the first host adapter, use the pkginfo command as shown below to determine whether the required support packages are already installed on this node. The following packages are required:


      # pkginfo | egrep Wlux
      system   SUNWluxd    Sun Enterprise Network Array sf Device Driver
      system   SUNWluxdx   Sun Enterprise Network Array sf Device Driver (64-bit)
      system   SUNWluxl    Sun Enterprise Network Array socal Device Driver
      system   SUNWluxlx   Sun Enterprise Network Array socal Device Driver (64-bit)
      system   SUNWluxop   Sun Enterprise Network Array firmware and utilities
      system   SUNWluxox   Sun Enterprise Network Array libraries (64-bit)

  19. Are the required support packages already installed?

    • If they are already installed, go to Step 20.

    • If not, install the required support packages that are missing.

    The support packages are located in the Product directory of the Solaris CD-ROM. Use the pkgadd command to add any missing packages.


    # pkgadd -d path_to_Solaris/Product Pkg1 Pkg2 Pkg3 ... PkgN
    

  20. Shut down and power off Node A.


    # shutdown -y -g0 -i0
    

    For the procedure on shutting down and powering off a node, see the Sun Cluster 3.0 U1 System Administration Guide.

  21. Install the host adapters in Node A.

    For the procedure on installing host adapters, see the documentation that shipped with your host adapters and nodes.

  22. Power on and boot Node A into non-cluster mode.


    {0} ok boot -x
    

    For more information, see the Sun Cluster 3.0 U1 System Administration Guide.

  23. If necessary, upgrade the host adapter firmware on Node A.

    See the Sun Cluster 3.0 U1 Release Notes for information about accessing Sun's EarlyNotifier web pages, which list information about any required patches or firmware levels that are available for download. For the procedure on applying any host adapter firmware patch, see the firmware patch README file.

  24. If necessary, install GBICs to the FC switches, as shown in Figure B-3.

    For the procedure on installing a GBIC to an FC switch, see the Sun StorEdge network FC switch-8 and switch-16 Installation and Configuration Guide.

  25. Connect fiber optic cables between Node A and the FC switches, as shown in Figure B-3.


    Note -

    There are no FC switch port-assignment restrictions. You can connect your StorEdge T3/T3+ disk tray and node to any FC switch port.


    For the procedure on installing a fiber optic cable, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

    Figure B-3 Adding Sun StorEdge T3/T3+ Disk Trays, Partner-Group Configuration


  26. If necessary, install the required Solaris patches for StorEdge T3/T3+ disk tray support on Node A.

    See the Sun Cluster 3.0 U1 Release Notes for information about accessing Sun's EarlyNotifier web pages, which list information about any required patches or firmware levels that are available for download. For the procedure on applying any host adapter firmware patch, see the firmware patch README file.

  27. On Node A, install any required patches or software for Sun StorEdge Traffic Manager software support from the Sun Download Center Web site, http://www.sun.com/storage/san/

    For instructions on installing the software, see the information on the web site.

  28. Activate the Sun StorEdge Traffic Manager software functionality in the software you installed in Step 27.

    To activate the Sun StorEdge Traffic Manager software functionality, manually edit the /kernel/drv/scsi_vhci.conf file that is installed to change the mpxio-disable parameter to no:


    mpxio-disable="no"
    

  29. Shut down Node A.


    # shutdown -y -g0 -i0
    

  30. Perform a reconfiguration boot on Node A to create the new Solaris device files and links.


    {0} ok boot -r
    

  31. On Node A, update the /devices and /dev entries:


    # devfsadm -C 
    

  32. On Node A, update the paths to the DID instances:


    # scdidadm -C
    

  33. Label the new disk tray logical volume.

    For the procedure on labeling a logical volume, see the Sun StorEdge T3 and T3+ Array Administrator's Guide.

  34. (Optional) On Node A, verify that the device IDs (DIDs) are assigned to the new disk tray.


    # scdidadm -l
    

  35. Do you need to install host adapters in Node B?

    • If not, go to Step 43.

    • If you do need to install host adapters in Node B, continue with Step 36.

  36. Is the host adapter you are installing the first host adapter on Node B?

    • If not, go to Step 38.

    • If it is the first host adapter, determine whether the required support packages are already installed on this node. The following packages are required.


    # pkginfo | egrep Wlux
    system   SUNWluxd    Sun Enterprise Network Array sf Device Driver
    system   SUNWluxdx   Sun Enterprise Network Array sf Device Driver (64-bit)
    system   SUNWluxl    Sun Enterprise Network Array socal Device Driver
    system   SUNWluxlx   Sun Enterprise Network Array socal Device Driver (64-bit)
    system   SUNWluxop   Sun Enterprise Network Array firmware and utilities
    system   SUNWluxox   Sun Enterprise Network Array libraries (64-bit)

  37. Are the required support packages already installed?

    • If they are already installed, go to Step 38.

    • If not, install the missing support packages.

    The support packages are located in the Product directory of the Solaris CD-ROM. Use the pkgadd command to add any missing packages.


    # pkgadd -d path_to_Solaris/Product Pkg1 Pkg2 Pkg3 ... PkgN
    

  38. Move all resource groups and device groups off Node B.


    # scswitch -S -h nodename
    

  39. Shut down and power off Node B.


    # shutdown -y -g0 -i0
    

    For the procedure on shutting down and powering off a node, see the Sun Cluster 3.0 U1 System Administration Guide.

  40. Install the host adapters in Node B.

    For the procedure on installing host adapters, see the documentation that shipped with your host adapters and nodes.

  41. Power on and boot Node B into non-cluster mode.


    {0} ok boot -x
    

    For more information, see the Sun Cluster 3.0 U1 System Administration Guide.

  42. If necessary, upgrade the host adapter firmware on Node B.

    See the Sun Cluster 3.0 U1 Release Notes for information about accessing Sun's EarlyNotifier web pages, which list information about any required patches or firmware levels that are available for download. For the procedure on applying any host adapter firmware patch, see the firmware patch README file.

  43. If necessary, install GBICs to the FC switches, as shown in Figure B-4.

    For the procedure on installing GBICs to an FC switch, see the Sun StorEdge network FC switch-8 and switch-16 Installation and Configuration Guide

  44. Connect fiber optic cables between the FC switches and Node B as shown in Figure B-4.

    For the procedure on installing fiber optic cables, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

    Figure B-4 Adding a Sun StorEdge T3/T3+ Disk Tray, Partner-Pair Configuration


  45. If necessary, install the required Solaris patches for StorEdge T3/T3+ disk tray support on Node B.

    For a list of required Solaris patches for StorEdge T3/T3+ disk tray support, see the Sun StorEdge T3 and T3+ Array Release Notes.

  46. On Node B, install any required patches or software for Sun StorEdge Traffic Manager software support from the Sun Download Center Web site, http://www.sun.com/storage/san/

    For instructions on installing the software, see the information on the web site.

  47. Activate the Sun StorEdge Traffic Manager software functionality in the software you installed in Step 46.

    To activate the Sun StorEdge Traffic Manager software functionality, manually edit the /kernel/drv/scsi_vhci.conf file that is installed to change the mpxio-disable parameter to no:


    mpxio-disable="no"
    

  48. Shut down Node B.


    # shutdown -y -g0 -i0
    

  49. Perform a reconfiguration boot to create the new Solaris device files and links on Node B.


    {0} ok boot -r
    

  50. On Node B, update the /devices and /dev entries:


    # devfsadm -C 
    

  51. On Node B, update the paths to the DID instances:


    # scdidadm -C
    

  52. (Optional) On Node B, verify that the DIDs are assigned to the new disk trays:


    # scdidadm -l
    

  53. On one node attached to the new disk trays, reset the SCSI reservation state:


    # scdidadm -R n
    

    Where n is the DID instance of a disk tray LUN you are adding to the cluster.


    Note -

    Repeat this command on the same node for each disk tray LUN you are adding to the cluster.


  54. Return the resource groups and device groups you identified in Step 15 to all nodes.


    # scswitch -z -g resource-group -h nodename
    # scswitch -z -D device-group-name -h nodename
    

    For more information, see the Sun Cluster 3.0 U1 System Administration Guide.

  55. Perform volume management administration to incorporate the new logical volumes into the cluster.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.
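
    As a hedged example for Solstice DiskSuite, with placeholder diskset and DID device names, you might add the new LUNs to a diskset so that they become part of a cluster device group; for VERITAS Volume Manager, use the equivalent disk group administration commands:


    # metaset -s setname -a /dev/did/rdsk/dN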

How to Remove StorEdge T3/T3+ Disk Trays From a Running Cluster

Use this procedure to permanently remove StorEdge T3/T3+ disk tray partner groups and their submirrors from a running cluster.

This procedure defines "Node A" as the cluster node you begin working with, and "Node B" as the other node.


Caution -

During this procedure, you lose access to the data that resides on each disk tray partner-group you are removing.


  1. If necessary, back up all database tables, data services, and volumes associated with each partner-group you are removing.

  2. If necessary, run the appropriate Solstice DiskSuite or VERITAS Volume Manager commands to detach the submirrors from each disk tray or partner-group that you are removing to stop all I/O activity to the disk tray or partner-group.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

  3. Run the appropriate Solstice DiskSuite or VERITAS Volume Manager commands to remove references to each LUN that belongs to the disk tray or partner-group that you are removing.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

  4. Determine the resource groups and device groups running on all nodes.

    Record this information because you will use it in Step 21 of this procedure to return resource groups and device groups to these nodes.


    # scstat
    

  5. Move all resource groups and device groups off Node A.


    # scswitch -S -h nodename
    

  6. Shut down Node A.


    # shutdown -y -g0 -i0
    

    For the procedure on shutting down a node, see the Sun Cluster 3.0 U1 System Administration Guide.

  7. Disconnect the fiber optic cables that connect both disk trays to the FC switches, and then disconnect the Ethernet cable(s).

  8. Is any disk tray you are removing the last disk tray connected to an FC switch on Node A?

    • If not, go to Step 12.

    • If it is the last disk tray, disconnect the fiber optic cable between Node A and the FC switch that was connected to this disk tray.

    For the procedure on removing a fiber optic cable, see the Sun StorEdge T3 and T3+ Array Configuration Guide.

  9. Do you want to remove the host adapters from Node A?

    • If not, go to Step 12.

    • If yes, power off Node A.

  10. Remove the host adapters from Node A.

    For the procedure on removing host adapters, see the documentation that shipped with your host adapter and nodes.

  11. Without allowing the node to boot, power on Node A.

    For more information, see the Sun Cluster 3.0 U1 System Administration Guide.

  12. Boot Node A into cluster mode.


    {0} ok boot
    

  13. Move all resource groups and device groups off Node B.


    # scswitch -S -h nodename
    

  14. Shut down Node B.


    # shutdown -y -g0 -i0
    

  15. Is any disk tray you are removing the last disk tray connected to an FC switch on Node B?

    • If not, go to Step 16.

    • If it is the last disk tray, disconnect the fiber optic cable connecting this FC switch to Node B.

    For the procedure on removing a fiber optic cable, see the Sun StorEdge T3 and T3+ Array Configuration Guide.

  16. Do you want to remove the host adapters from Node B?

    • If not, go to Step 19.

    • If yes, power off Node B.

  17. Remove the host adapters from Node B.

    For the procedure on removing host adapters, see the documentation that shipped with your nodes.

  18. Without allowing the node to boot, power on Node B.

    For more information, see the Sun Cluster 3.0 U1 System Administration Guide.

  19. Boot Node B into cluster mode.


    {0} ok boot
    

    For more information, see the Sun Cluster 3.0 U1 System Administration Guide.

  20. On all cluster nodes, update the /devices and /dev entries.


    # devfsadm -C
    # scdidadm -C
    

  21. Return the resource groups and device groups you identified in Step 4 to all nodes.


    # scswitch -z -g resource-group -h nodename
    # scswitch -z -D device-group-name -h nodename
    

How to Replace a Failed Disk Drive in a Running Cluster

Use this procedure to replace one failed disk drive in a StorEdge T3/T3+ disk tray in a running cluster.


Caution -

If you remove any field replaceable unit (FRU) for an extended period of time, thermal complications might result. To prevent this complication, the StorEdge T3/T3+ disk tray is designed so that an orderly shutdown occurs when you remove a component for longer than 30 minutes. Therefore, a replacement part must be immediately available before starting an FRU replacement procedure. You must replace an FRU within 30 minutes or the StorEdge T3/T3+ disk tray, and all attached StorEdge T3/T3+ disk trays, will shut down and power off.


  1. Replace the disk drive in the disk tray.

    For the procedure on replacing a disk drive, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

  2. Perform volume management administration to configure logical volumes for the new disk tray into the cluster.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

How to Replace a Node-to-Switch Component in a Running Cluster

Use this procedure to replace the following node-to-switch components in a running cluster:

  • A fiber optic cable between a node and an FC switch

  • A GBIC on a node's host adapter

  • A GBIC on an FC switch, connecting to a node

  1. On the node connected to the component you are replacing, determine the resource groups and device groups running on the node.

    Record this information because you will use it in Step 4 of this procedure to return resource groups and device groups to these nodes.


    # scstat
    

  2. Move all resource groups and device groups to another node.


    # scswitch -S -h nodename
    

  3. Replace the node-to-switch component.

    • For the procedure on replacing a fiber optic cable between a node and an FC switch, see the Sun StorEdge network FC switch-8 and switch-16 Installation and Configuration Guide.

    • For the procedure on replacing a GBIC on an FC switch, see the Sun StorEdge network FC switch-8 and switch-16 Installation and Configuration Guide.

  4. Return the resource groups and device groups you identified in Step 1 to the node that is connected to the component you replaced.


    # scswitch -z -g resource-group -h nodename
    # scswitch -z -D device-group-name -h nodename
    

How to Replace a Controller Card, FC Switch, or Disk Tray-to-Switch Component in a Running Cluster

Use this procedure to replace a failed disk tray controller card, an FC switch, or the following disk tray-to-switch components in a running cluster:

  • A fiber optic cable between a disk tray and an FC switch

  • A GBIC on an FC switch, connecting to a disk tray

  • A StorEdge network FC switch-8 or switch-16 power cord

  • A media interface adapter (MIA) on a StorEdge T3 disk tray (not applicable for StorEdge T3+ disk trays)

  • Interconnect cables between the two disk trays of a partner group

  1. Telnet to the disk tray that is connected to the controller card, FC switch, or component that you are replacing.

  2. Use the T3/T3+ sys stat command to view the controller status for the two disk trays of the partner group.

    In the following example, both controllers are ONLINE.


    t3:/:<#> sys stat
    Unit   State      Role    Partner
    -----  ---------  ------  -------
     1    ONLINE     Master    2
     2    ONLINE     AlterM    1

    See the Sun StorEdge T3 and T3+ Array Administrator's Guide for more information about the sys stat command.

  3. Is the controller card, FC switch, or component that you are replacing attached to a disk tray controller that is ONLINE or DISABLED, as determined in Step 2?

    • If the controller is already DISABLED, go to Step 5.

    • If the controller is ONLINE, use the T3/T3+ disable command to disable it. Using the example from Step 2, if you want to disable Unit 1, enter the following:


      t3:/:<#> disable u1
      

    See the Sun StorEdge T3 and T3+ Array Administrator's Guide for more information about the disable command.

  4. Use the T3/T3+ sys stat command to verify that the controller's state has been changed to DISABLED.


    t3:/:<#> sys stat
    Unit   State      Role    Partner
    -----  ---------  ------  -------
     1    DISABLED   Slave     
     2    ONLINE     Master    

  5. Replace the component using the following references:

    • For the procedure on replacing a disk tray controller card, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

    • For the procedure on replacing a fiber optic cable between a disk tray and an FC switch, see the Sun StorEdge network FC switch-8 and switch-16 Installation and Configuration Guide.

    • For the procedure on replacing a GBIC on an FC switch, see the Sun StorEdge network FC switch-8 and switch-16 Installation and Configuration Guide.

    • For the procedure on replacing a StorEdge network FC switch-8 or switch-16, or an FC switch power cord, see the Sun StorEdge network FC switch-8 and switch-16 Installation and Configuration Guide.

    • For the procedure on replacing an MIA, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

    • For the procedure on replacing interconnect cables, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

  6. If necessary, telnet to the disk tray of the partner group that is still online.

  7. Use the T3/T3+ enable command to reenable the disk tray controller that you disabled in Step 3.


    t3:/:<#> enable u1
    

    See the Sun StorEdge T3 and T3+ Array Administrator's Guide for more information about the enable command.

  8. Use the T3/T3+ sys stat command to verify that the controller's state has been changed to ONLINE.


    t3:/:<#> sys stat
    Unit   State      Role    Partner
    -----  ---------  ------  -------
     1    ONLINE     AlterM    2
     2    ONLINE     Master    1

How to Replace a Disk Tray Chassis in a Running Cluster

Use this procedure to replace a StorEdge T3/T3+ disk tray chassis in a running cluster. This procedure assumes that you want to retain all FRUs other than the chassis and the backplane. To replace the chassis, you must replace both the chassis and the backplane because these components are manufactured as one part.


Note -

Only trained, qualified Sun service providers should use this procedure to replace a StorEdge T3/T3+ disk tray chassis. This procedure requires the Sun StorEdge T3 and T3+ Array Field Service Manual, which is available to trained Sun service providers only.


  1. Detach the submirrors on the disk tray that is connected to the chassis you are replacing to stop all I/O activity to this disk tray.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

  2. Are the disk trays in your partner-pair configuration made redundant by host-based mirroring?

    • If yes, go to Step 3.

    • If not, shut down the cluster.


      # scshutdown -y -g0
      

  3. Replace the chassis/backplane.

    For the procedure on replacing a StorEdge T3/T3+ chassis, see the Sun StorEdge T3 and T3+ Array Field Service Manual. (This manual is available to trained Sun service providers only.)

  4. Did you shut down the cluster in Step 2?

    • If not, go to Step 5.

    • If you did shut down the cluster, boot it back into cluster mode.


      {0} ok boot
      

  5. Reattach the submirrors you detached in Step 1 to resynchronize them.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

  6. On all nodes attached to the disk tray, use the following command so that the nodes recognize the disk tray's new world wide names (WWNs).


    # drvconfig; disks; devlinks
    

  7. On one node, use the format command to verify that the new logical volume is visible to the system.


    # format
    

    See the format command man page for more information about using the command.

How to Replace a Node's Host Adapter in a Running Cluster

Use this procedure to replace a failed host adapter in a running cluster. As defined in this procedure, "Node A" is the node with the failed host adapter you are replacing and "Node B" is the other node.

  1. Determine the resource groups and device groups running on all nodes.

    Record this information because you will use it in Step 8 of this procedure to return resource groups and device groups to these nodes.


    # scstat
    

  2. Move all resource groups and device groups off Node A.


    # scswitch -S -h nodename
    

  3. Shut down Node A.


    # shutdown -y -g0 -i0
    

  4. Power off Node A.

    For more information, see the Sun Cluster 3.0 U1 System Administration Guide.

  5. Replace the failed host adapter.

    For the procedure on removing and adding host adapters, see the documentation that shipped with your nodes.

  6. Power on Node A.

    For more information, see the Sun Cluster 3.0 U1 System Administration Guide.

  7. Boot Node A into cluster mode.


    {0} ok boot
    

    For more information, see the Sun Cluster 3.0 U1 System Administration Guide.

  8. Return the resource groups and device groups you identified in Step 1 to all nodes.


    # scswitch -z -g resource-group -h nodename
    # scswitch -z -D device-group-name -h nodename
    

    For more information, see the Sun Cluster 3.0 U1 System Administration Guide.

How to Migrate From a Single-Controller Configuration to a Partner-Group Configuration

Use this procedure to migrate your StorEdge T3/T3+ disk trays from a single-controller (non-interconnected) configuration to a partner-group (interconnected) configuration.


Note -

Only trained, qualified Sun service providers should use this procedure. This procedure requires the Sun StorEdge T3 and T3+ Array Field Service Manual, which is available to trained Sun service providers only.


  1. Remove the non-interconnected disk trays that will be in your partner-group from the cluster configuration.

    Follow the procedure in "How to Remove StorEdge T3/T3+ Disk Trays From a Running Cluster".


    Note -

    Back up all data on the disk trays before removing them from the cluster configuration.



    Note -

    This procedure assumes that the two disk trays that will be in the partner-group configuration are correctly isolated from each other on separate FC switches. You must use FC switches when installing disk trays in a partner-group configuration. Do not disconnect the cables from the FC switches or nodes.


  2. Connect the single disk trays to form a partner-group.

    Follow the procedure in the Sun StorEdge T3 and T3+ Array Field Service Manual.

  3. Add the new partner-group to the cluster configuration:

    1. At each disk tray's prompt, use the port list command to ensure that each disk tray has a unique target address:


      t3:/:<#> port list
      

      If the disk trays do not have unique target addresses, use the port set command to set the addresses. For the procedure on verifying and assigning a target address to a disk tray, see the Sun StorEdge T3 and T3+ Array Configuration Guide. For more information about the port command see the Sun StorEdge T3 and T3+ Array Administrator's Guide.

    2. At each disk tray's prompt, use the sys list command to verify that the cache and mirror settings for each disk tray are set to auto:


      t3:/:<#> sys list
      

      If the two settings are not already set to auto, set them using the following commands at each disk tray's prompt:


      t3:/:<#> sys cache auto
      t3:/:<#> sys mirror auto
      

    3. Use the StorEdge T3/T3+ sys list command to verify that the mp_support parameter for each disk tray is set to mpxio:


      t3:/:<#> sys list
      

      If mp_support is not already set to mpxio, set it using the following command at each disk tray's prompt:


      t3:/:<#> sys mp_support mpxio
      

    4. If necessary, upgrade the host adapter firmware on Node A.

      See the Sun Cluster 3.0 U1 Release Notes for information about accessing Sun's EarlyNotifier web pages, which list information about any required patches or firmware levels that are available for download. For the procedure on applying any host adapter firmware patch, see the firmware patch README file.

    5. If necessary, install the required Solaris patches for StorEdge T3/T3+ disk tray support on Node A.

      See the Sun Cluster 3.0 U1 Release Notes for information about accessing Sun's EarlyNotifier web pages, which list information about any required patches or firmware levels that are available for download.

    6. Install any required patches or software for Sun StorEdge Traffic Manager software support from the Sun Download Center Web site, http://www.sun.com/storage/san/

      For instructions on installing the software, see the information on the web site.

    7. Activate the Sun StorEdge Traffic Manager software functionality in the software you installed in Step 6.

      To activate the Sun StorEdge Traffic Manager software functionality, manually edit the /kernel/drv/scsi_vhci.conf file that is installed to change the mpxio-disable parameter to no:


      mpxio-disable="no"
      

    8. Shut down Node A.


      # shutdown -y -g0 -i0
      
    9. Perform a reconfiguration boot on Node A to create the new Solaris device files and links.


      {0} ok boot -r
      
    10. On Node A, update the /devices and /dev entries:


      # devfsadm -C 
      

    11. On Node A, update the paths to the DID instances:


      # scdidadm -C
      
    12. Configure the new disk trays with the desired logical volumes.

      For the procedure on creating and initializing a logical volume, see the Sun StorEdge T3 and T3+ Array Administrator's Guide. For the procedure on mounting a logical volume, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

    13. Label the new disk tray logical volume.

      For the procedure on labeling a logical volume, see the Sun StorEdge T3 and T3+ Array Administrator's Guide.

    14. If necessary, upgrade the host adapter firmware on Node B.

      See the Sun Cluster 3.0 U1 Release Notes for information about accessing Sun's EarlyNotifier web pages, which list information about any required patches or firmware levels that are available for download. For the procedure on applying any host adapter firmware patch, see the firmware patch README file.

    15. If necessary, install the required Solaris patches for StorEdge T3/T3+ disk tray support on Node B.

      For a list of required Solaris patches for StorEdge T3/T3+ disk tray support, see the Sun StorEdge T3 and T3+ Array Release Notes.

    16. Install any required patches or software for Sun StorEdge Traffic Manager software support from the Sun Download Center Web site, http://www.sun.com/storage/san/

      For instructions on installing the software, see the information on the web site.

    17. Activate the Sun StorEdge Traffic Manager software functionality in the software you installed in Step 16.

      To activate the Sun StorEdge Traffic Manager software functionality, manually edit the /kernel/drv/scsi_vhci.conf file that is installed to change the mpxio-disable parameter to no:


      mpxio-disable="no"
      

    18. Shut down Node B.


      # shutdown -y -g0 -i0
      
    19. Perform a reconfiguration boot to create the new Solaris device files and links on Node B.


      {0} ok boot -r
      
    20. On Node B, update the /devices and /dev entries:


      # devfsadm -C 
      

    21. On Node B, update the paths to the DID instances:


      # scdidadm -C
      
    22. (Optional) On Node B, verify that the DIDs are assigned to the new disk trays:


      # scdidadm -l
      

    23. On one node attached to the new disk trays, reset the SCSI reservation state:


      # scdidadm -R n
      

      Where n is the DID instance of a disk tray LUN you are adding to the cluster.


      Note -

      Repeat this command on the same node for each disk tray LUN you are adding to the cluster.


    24. Perform volume management administration to incorporate the new logical volumes into the cluster.

      For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.