Sun Cluster 3.0 12/01 Hardware Guide

Chapter 9 Installing and Maintaining a Sun StorEdge T3 and T3+ Array Partner-Group Configuration

This chapter contains the procedures for installing, configuring, and maintaining Sun StorEdge T3 and Sun StorEdge T3+ arrays in a partner-group (interconnected) configuration. Differences between the StorEdge T3 and StorEdge T3+ procedures are noted where appropriate.

This chapter contains the following procedures:

  • "How to Install StorEdge T3/T3+ Array Partner Groups"
  • "How to Create a Logical Volume"
  • "How to Remove a Logical Volume"
  • "How to Upgrade StorEdge T3/T3+ Array Firmware"
  • "How to Add StorEdge T3/T3+ Array Partner Groups to a Running Cluster"
  • "How to Remove StorEdge T3/T3+ Arrays From a Running Cluster"
  • "How to Replace a Failed Disk Drive in a Running Cluster"
  • "How to Replace a Node-to-Switch Component in a Running Cluster"
  • "How to Replace a FC Switch or Array-to-Switch Component in a Running Cluster"
  • "How to Replace an Array Chassis in a Running Cluster"
  • "How to Replace a Node's Host Adapter in a Running Cluster"
  • "How to Migrate From a Single-Controller Configuration to a Partner-Group Configuration"

For conceptual information on multihost disks, see the Sun Cluster 3.0 12/01 Concepts document.

For information about using StorEdge T3 or StorEdge T3+ arrays as storage devices in a storage area network (SAN), see "StorEdge T3 and T3+ Array (Partner-Group) SAN Considerations".

Installing StorEdge T3/T3+ Arrays


Note -

This section contains the procedure for an initial installation of StorEdge T3 or StorEdge T3+ array partner groups in a new Sun Cluster that is not running. If you are adding partner groups to an existing cluster, use the procedure in "How to Add StorEdge T3/T3+ Array Partner Groups to a Running Cluster".


How to Install StorEdge T3/T3+ Array Partner Groups

Perform the steps in this procedure in conjunction with the procedures in the Sun Cluster 3.0 12/01 Software Installation Guide and your server hardware manual.

  1. Install the host adapters in the nodes that will be connected to the arrays.

    For the procedure on installing host adapters, see the documentation that shipped with your host adapters and nodes.

  2. Install the Fibre Channel (FC) switches.

    For the procedure on installing a Sun StorEdge network FC switch-8 or switch-16, see the Sun StorEdge Network FC Switch-8 and Switch-16 Installation and Configuration Guide, Sun SAN 3.0.


    Note -

    You must use FC switches when installing arrays in a partner-group configuration. If you are using your StorEdge T3/T3+ arrays to create a storage area network (SAN) by using two Sun StorEdge Network FC Switch-8 or Switch-16 switches and Sun SAN Version 3.0 release software, see "StorEdge T3 and T3+ Array (Partner-Group) SAN Considerations" for more information.


  3. (Skip this step if you are installing StorEdge T3+ arrays) Install the media interface adapters (MIAs) in the StorEdge T3 arrays you are installing as shown in Figure 9-1.

    For the procedure on installing a media interface adapter (MIA), see the Sun StorEdge T3 and T3+ Array Configuration Guide.

  4. If necessary, install GBICs in the FC switches, as shown in Figure 9-1.

    For instructions on installing a GBIC to an FC switch, see the SANbox 8/16 Segmented Loop Switch User's Manual.

  5. Set up a Reverse Address Resolution Protocol (RARP) server on the network you want the new arrays to reside on.

    This RARP server enables you to assign an IP address to the new arrays using the array's unique MAC address. For the procedure on setting up a RARP server, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.
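
    As a minimal sketch only, a Solaris host can act as a RARP server by mapping the array's MAC address to a host name and IP address and then starting the RARP daemon; the MAC address, host name, and IP address shown here are placeholders for your own values:

    # echo "0:20:f2:0:xx:xx   t3-1" >> /etc/ethers
    # echo "192.168.100.50    t3-1" >> /etc/hosts
    # /usr/sbin/in.rarpd -a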

  6. Cable the arrays (see Figure 9-1):

    1. Connect the arrays to the FC switches using fiber optic cables.

    2. Connect the Ethernet cables from each array to the LAN.

    3. Connect interconnect cables between the two arrays of each partner group.

    4. Connect power cords to each array.

    For the procedure on installing fiber optic, Ethernet, and interconnect cables, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

    Figure 9-1 StorEdge T3/T3+ Array Partner-Group (Interconnected) Controller Configuration

    Graphic

  7. Power on the arrays and verify that all components are powered on and functional.

    For the procedure on powering on the arrays and verifying the hardware configuration, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

  8. Administer the arrays' network settings:

    Telnet to the master controller unit and administer the arrays. For the procedure on administering the array network addresses and settings, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

    The master controller unit is the array that has the interconnect cables attached to the right-hand connectors of its interconnect cards (when viewed from the rear of the arrays). For example, Figure 9-1 shows the master controller unit of the partner-group as the lower array. Note in this diagram that the interconnect cables are connected to the right-hand connectors of both interconnect cards on the master controller unit.
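
    The following is a hedged sketch of the kind of session you might run on the master controller unit; the addresses shown are placeholders, and the authoritative list of set parameters is in the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual:

    t3:/:<#> set ip 192.168.100.50
    t3:/:<#> set netmask 255.255.255.0
    t3:/:<#> set gateway 192.168.100.1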

  9. Install any required array controller firmware:

    For partner-group configurations, telnet to the master controller unit and install the required controller firmware.

    See the Sun Cluster 3.0 12/01 Release Notes for information about accessing Sun's EarlyNotifier web pages, which list information about any required patches or firmware levels that are available for download. For the procedure on applying any host adapter firmware patch, see the firmware patch README file.

  10. At the master array's prompt, use the port list command to ensure that each array has a unique target address:


    t3:/:<#> port list
    

    If the arrays do not have unique target addresses, use the port set command to set the addresses. For the procedure on verifying and assigning a target address to an array, see the Sun StorEdge T3 and T3+ Array Configuration Guide. For more information about the port command, see the Sun StorEdge T3 and T3+ Array Administrator's Guide.
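
    For example, the following hedged sketch assigns target ID 1 to port u1p1 and then rechecks the addresses; the port name and target ID are placeholders, and you should confirm the exact syntax in the Sun StorEdge T3 and T3+ Array Administrator's Guide:

    t3:/:<#> port set u1p1 targetid 1
    t3:/:<#> port list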

  11. At the master array's prompt, use the sys list command to verify that the cache and mirror settings for each array are set to auto:


    t3:/:<#> sys list
    

    If the two settings are not already set to auto, set them using the following commands:


    t3:/:<#> sys cache auto
    t3:/:<#> sys mirror auto
    

    For more information about the sys command, see the Sun StorEdge T3 and T3+ Array Administrator's Guide.

  12. At the master array's prompt, use the sys list command to verify that the mp_support parameter for each array is set to mpxio:


    t3:/:<#> sys list
    

    If mp_support is not already set to mpxio, set it using the following command:


    t3:/:<#> sys mp_support mpxio
    

    For more information about the sys command, see the Sun StorEdge T3 and T3+ Array Administrator's Guide.

  13. At the master array's prompt, use the sys stat command to verify that both array controllers are online, as shown in the following example output.


    t3:/:<#> sys stat
    Unit   State      Role    Partner
    -----  ---------  ------  -------
     1    ONLINE     Master    2
     2    ONLINE     AlterM    1

    For more information about the sys command and how to correct the situation if both controllers are not online, see the Sun StorEdge T3 and T3+ Array Administrator's Guide.

  14. (Optional) Configure the arrays with the desired logical volumes.

    For the procedure on creating and initializing a logical volume, see the Sun StorEdge T3 and T3+ Array Administrator's Guide. For the procedure on mounting a logical volume, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

  15. Reset the arrays.

    For the procedure on rebooting or resetting an array, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

  16. On the cluster nodes, install the Solaris operating environment and apply the required Solaris patches for Sun Cluster software and StorEdge T3/T3+ array support.

    For the procedure on installing the Solaris operating environment, see the Sun Cluster 3.0 12/01 Software Installation Guide.

    See the Sun Cluster 3.0 12/01 Release Notes for information about accessing Sun's EarlyNotifier web pages, which list information about any required patches or firmware levels that are available for download. For the procedure on applying any host adapter firmware patch, see the firmware patch README file.
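
    As a hedged example only, Solaris patches are typically applied with the patchadd command; the patch ID and location shown here are placeholders for the patches listed on the EarlyNotifier pages:

    # patchadd /var/tmp/111111-01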

  17. On the cluster nodes, install any required patches or software for Sun StorEdge Traffic Manager software support from the Sun Download Center Web site, http://www.sun.com/storage/san/

    For instructions on installing the software, see the information on the web site.

  18. Activate the Sun StorEdge Traffic Manager software functionality in the software you installed to the cluster nodes in Step 17.

    To activate the Sun StorEdge Traffic Manager software functionality, manually edit the /kernel/drv/scsi_vhci.conf file to set the mpxio-disable parameter to no:


    mpxio-disable="no"
    

  19. Perform a reconfiguration boot on all nodes to create the new Solaris device files and links.


    {0} ok boot -r
    
  20. On all nodes, update the /devices and /dev entries:


    # devfsadm -C 
    

  21. On all nodes, use the luxadm display command to confirm that all arrays you installed are now visible.


    # luxadm display 
    

Where to Go From Here

To continue with Sun Cluster software installation tasks, see the Sun Cluster 3.0 12/01 Software Installation Guide.

Configuring StorEdge T3/T3+ Arrays in a Running Cluster

This section contains the procedures for configuring a StorEdge T3 or StorEdge T3+ array in a running cluster. Table 9-1 lists these procedures.

Table 9-1 Task Map: Configuring a StorEdge T3/T3+ Array 

Task 

For Instructions, Go To... 

Create a logical volume 

"How to Create a Logical Volume"

Remove a logical volume 

"How to Remove a Logical Volume"

How to Create a Logical Volume

Use this procedure to create a StorEdge T3/T3+ array logical volume. This procedure assumes all cluster nodes are booted and attached to the array that will host the logical volume you are creating.

  1. Telnet to the array that is the master controller unit of your partner-group.

    The master controller unit is the array that has the interconnect cables attached to the right-hand connectors of its interconnect cards (when viewed from the rear of the arrays). For example, Figure 9-1 shows the master controller unit of the partner-group as the lower array. Note in this diagram that the interconnect cables are connected to the right-hand connectors of both interconnect cards on the master controller unit.

  2. Create the logical volume.

    Creating a logical volume involves adding, initializing, and mounting the logical volume.

    For the procedure on creating and initializing a logical volume, see the Sun StorEdge T3 and T3+ Array Administrator's Guide. For the procedure on mounting a logical volume, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.
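
    The following is a hedged sketch of adding, initializing, and mounting a RAID 5 volume on one unit of the partner-group; the volume name and drive range are placeholders, and you should confirm the exact vol syntax in the Sun StorEdge T3 and T3+ Array Administrator's Guide:

    t3:/:<#> vol add v0 data u1d1-8 raid 5 standby u1d9
    t3:/:<#> vol init v0 data
    t3:/:<#> vol mount v0
    t3:/:<#> vol list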

  3. On all cluster nodes, update the /devices and /dev entries:


    # devfsadm
    
  4. On one node connected to the partner-group, use the format command to verify that the new logical volume is visible to the system.


    # format
    

    See the format command man page for more information about using the command.

  5. Are you running VERITAS Volume Manager?

    • If not, go to Step 6

    • If you are running VERITAS Volume Manager, update its list of devices on all cluster nodes attached to the logical volume you created in Step 2.

    See your VERITAS Volume Manager documentation for information about using the vxdctl enable command to update new devices (volumes) in your VERITAS Volume Manager list of devices.
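
    A minimal example, run on each such node:

    # vxdctl enable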

  6. If needed, partition the logical volume.

  7. From any node in the cluster, update the global device namespace.


    # scgdevs
    

    Note -

    If a volume management daemon such as vold is running on your node, and you have a CD-ROM drive connected to the node, a device busy error might be returned even if no disk is in the drive. This error is expected behavior.



Note -

Do not configure StorEdge T3/T3+ logical volumes as quorum devices in partner-group configurations. The use of StorEdge T3/T3+ logical volumes as quorum devices in partner-group configurations is not supported.


Where to Go From Here

To create a new resource or reconfigure a running resource to use the new logical volume, see the Sun Cluster 3.0 12/01 Data Services Installation and Configuration Guide.

How to Remove a Logical Volume

Use this procedure to remove a StorEdge T3/T3+ array logical volume. This procedure assumes all cluster nodes are booted and attached to the array that hosts the logical volume you are removing.

This procedure defines "Node A" as the node you begin working with, and "Node B" as the other node.


Caution -

This procedure removes all data from the logical volume you are removing.


  1. If necessary, migrate all data and volumes off the logical volume you are removing.

  2. Are you running VERITAS Volume Manager?

    • If not, go to Step 3.

    • If you are running VERITAS Volume Manager, update its list of devices on all cluster nodes attached to the logical volume you are removing.

    See your VERITAS Volume Manager documentation for information about using the vxdisk rm command to remove devices (volumes) in your VERITAS Volume Manager device list.
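
    As a hedged example, the device is removed from the VERITAS Volume Manager device list by its disk access name, which is a placeholder here:

    # vxdisk rm c2t1d0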

  3. Run the appropriate Solstice DiskSuite or VERITAS Volume Manager commands to remove the reference to the logical unit number (LUN) from any diskset or disk group.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.
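
    The following hedged sketches show the general form of these commands; the first removes a drive from a Solstice DiskSuite diskset and the second removes a disk from a VERITAS Volume Manager disk group, and the set, group, and device names are placeholders:

    # metaset -s setname -d /dev/did/rdsk/d4
    # vxdg -g dgname rmdisk diskname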

  4. Telnet to the array that is the master controller unit of your partner-group.

    The master controller unit is the array that has the interconnect cables attached to the right-hand connectors of its interconnect cards (when viewed from the rear of the arrays). For example, Figure 9-1 shows the master controller unit of the partner-group as the lower array. Note in this diagram that the interconnect cables are connected to the right-hand connectors of both interconnect cards on the master controller unit.

  5. Remove the logical volume.

    For the procedure on removing a logical volume, see the Sun StorEdge T3 and T3+ Array Administrator's Guide.

  6. Use the scstat command to identify the resource groups and device groups running on all nodes.

    Record this information because you will use it in Step 15 of this procedure to return resource groups and device groups to these nodes.


    # scstat
    
  7. Move all resource groups and device groups off of Node A:


    # scswitch -S -h nodename
    
  8. Shut down Node A:


    # shutdown -y -g0 -i0
    
  9. Boot Node A into cluster mode:


    {0} ok boot
    

    For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.

  10. On Node A, remove the obsolete device IDs (DIDs):


    # devfsadm -C
    # scdidadm -C
    
  11. Move all resource groups and device groups off Node B:


    # scswitch -S -h nodename
    
  12. Shut down Node B:


    # shutdown -y -g0 -i0
    
  13. Boot Node B into cluster mode:


    {0} ok boot
    
  14. On Node B, remove the obsolete DIDs:


    # devfsadm -C
    # scdidadm -C
    
  15. Return the resource groups and device groups you identified in Step 6 to Node A and Node B:


    # scswitch -z -g resource-group -h nodename
    # scswitch -z -D device-group-name -h nodename
    

    For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.

Where to Go From Here

To create a logical volume, see "How to Create a Logical Volume".

Maintaining StorEdge T3/T3+ Arrays

This section contains the procedures for maintaining StorEdge T3 and StorEdge T3+ arrays. Table 9-2 lists these procedures. This section does not include a procedure for adding a disk drive or a procedure for removing a disk drive because a StorEdge T3/T3+ array operates only when fully configured.


Caution -

If you remove any field replaceable unit (FRU) for an extended period of time, thermal complications might result. To prevent this complication, the StorEdge T3/T3+ array is designed so that an orderly shutdown occurs when you remove a component for longer than 30 minutes. Therefore, a replacement part must be immediately available before you start an FRU replacement procedure. You must replace an FRU within 30 minutes or the StorEdge T3/T3+ array, and all attached StorEdge T3/T3+ arrays, will shut down and power off.


Table 9-2 Task Map: Maintaining a StorEdge T3/T3+ Array 

Task 

For Instructions, Go To... 

Upgrade StorEdge T3/T3+ array firmware. 

"How to Upgrade StorEdge T3/T3+ Array Firmware"

Add a StorEdge T3/T3+ array. 

"How to Add StorEdge T3/T3+ Array Partner Groups to a Running Cluster"

Remove a StorEdge T3/T3+ array. 

"How to Remove StorEdge T3/T3+ Arrays From a Running Cluster"

Replace a disk drive in an array. 

"How to Replace a Failed Disk Drive in a Running Cluster"

Replace a node-to-switch fiber optic cable. 

"How to Replace a Node-to-Switch Component in a Running Cluster"

Replace a gigabit interface converter (GBIC) on a node's host adapter. 

"How to Replace a Node-to-Switch Component in a Running Cluster"

Replace a GBIC on an FC switch, connecting to a node. 

"How to Replace a Node-to-Switch Component in a Running Cluster"

Replace an array-to-switch fiber optic cable. 

"How to Replace a FC Switch or Array-to-Switch Component in a Running Cluster"

Replace a GBIC on an FC switch, connecting to an array. 

"How to Replace a FC Switch or Array-to-Switch Component in a Running Cluster"

Replace a StorEdge network FC switch-8 or switch-16. 

"How to Replace a FC Switch or Array-to-Switch Component in a Running Cluster"

Replace a StorEdge network FC switch-8 or switch-16 power cord. 

"How to Replace a FC Switch or Array-to-Switch Component in a Running Cluster"

Replace a media interface adapter (MIA) on a StorEdge T3 array (not applicable for StorEdge T3+ arrays). 


"How to Replace a FC Switch or Array-to-Switch Component in a Running Cluster"

Replace interconnect cables. 

"How to Replace a FC Switch or Array-to-Switch Component in a Running Cluster"

Replace a StorEdge T3/T3+ array controller. 


"How to Replace a FC Switch or Array-to-Switch Component in a Running Cluster"

Replace a StorEdge T3/T3+ array chassis. 


"How to Replace an Array Chassis in a Running Cluster"

Replace a host adapter in a node. 

"How to Replace a Node's Host Adapter in a Running Cluster"

Migrate from a single-controller configuration to a partner-group configuration. 

"How to Migrate From a Single-Controller Configuration to a Partner-Group Configuration"

Upgrade a StorEdge T3 array controller to a StorEdge T3+ array controller. 

Sun StorEdge T3 Array Controller Upgrade Manual

Replace a Power and Cooling Unit (PCU). 


Follow the same procedure used in a non-cluster environment. 

Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual

Replace a unit interconnect card (UIC). 


Follow the same procedure used in a non-cluster environment. 

Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual

Replace a StorEdge T3/T3+ array power cable. 


Follow the same procedure used in a non-cluster environment. 

Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual

Replace an Ethernet cable. 


Follow the same procedure used in a non-cluster environment. 

Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual

How to Upgrade StorEdge T3/T3+ Array Firmware

Use one of the following procedures to upgrade StorEdge T3/T3+ array firmware, depending on whether your partner-group has been configured to support submirrors of a cluster node's volumes. StorEdge T3/T3+ array firmware includes controller firmware, unit interconnect card (UIC) firmware, EPROM firmware, and disk drive firmware.


Note -

For all firmware, always read any README files that accompany the firmware for the latest information and special notes.


Upgrading Firmware on Arrays That Support Submirrored Data


Caution -

Perform this procedure on one array at a time. This procedure requires that you reset the arrays you are upgrading. If you reset more than one array at a time, your cluster will lose access to data.


  1. On the node that currently owns the disk group or disk set to which the submirror belongs, detach the submirrors of the array on which you are upgrading firmware. (This procedure refers to this node as Node A and the remaining node as Node B.)

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.
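
    As a hedged illustration only, the detach commands take forms similar to the following; the first is for a Solstice DiskSuite mirror and submirror, the second for a VERITAS Volume Manager plex, and all names are placeholders:

    # metadetach d10 d11
    # vxplex -g dgname det plexname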

  2. Disconnect both array-to-switch fiber optic cables from the two arrays of the partner-group.

  3. Apply the controller, disk drive, and UIC firmware patches.

    For the list of required StorEdge T3/T3+ array patches, see the Sun StorEdge T3 Disk Tray Release Notes. For the procedure on applying firmware patches, see the firmware patch README file. For the procedure on verifying the firmware level, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

  4. Reset the arrays.

    For the procedure on resetting an array, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

  5. Use the StorEdge T3/T3+ disable command to disable the array controller that is attached to Node B so that all logical volumes come under the control of the remaining controller.


    t3:/:<#> disable uencidctr
    

    See the Sun StorEdge T3 and T3+ Array Administrator's Guide for more information about the disable command.

  6. Reconnect both array-to-switch fiber optic cables to the two arrays of the partner-group.

  7. On one node connected to the partner-group, use the format command to verify that the array controllers are rediscovered by the node.


    # format
    

  8. Use the StorEdge T3/T3+ enable command to enable the array controller that you disabled in Step 5.


    t3:/:<#> enable uencidctr
    

  9. Reattach the submirrors that you detached in Step 1 to resynchronize them.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.
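
    Continuing the hedged illustration from Step 1, the reattach commands take forms similar to the following; all names are placeholders:

    # metattach d10 d11
    # vxplex -g dgname att volname plexname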

Upgrading Firmware on Arrays That Do Not Support Submirrored Data

In a partner-pair configuration, it is possible to have non-mirrored data; however, this requires that you shut down the cluster when upgrading firmware, as described in this procedure.

  1. Shut down the entire cluster.


    # scshutdown -y -g0
    

    For the full procedure on shutting down a cluster, see the Sun Cluster 3.0 12/01 System Administration Guide.

  2. Apply the controller, disk drive, and UIC firmware patches.

    For the list of required StorEdge T3/T3+ array patches, see the Sun StorEdge T3 Disk Tray Release Notes. For the procedure on applying firmware patches, see the firmware patch README file. For the procedure on verifying the firmware level, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

  3. Reset the arrays.

    For the procedure on resetting an array, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

  4. Boot all nodes back into the cluster.


    ok boot 
    

    For the full procedure on booting nodes into the cluster, see the Sun Cluster 3.0 12/01 System Administration Guide.

  5. On one node connected to the partner-group, use the format command to verify that the array controllers are rediscovered by the node.


    # format
    

How to Add StorEdge T3/T3+ Array Partner Groups to a Running Cluster


Note -

Use this procedure to add new StorEdge T3/T3+ array partner groups to a running cluster. To install partner groups to a new Sun Cluster that is not running, use the procedure in "How to Install StorEdge T3/T3+ Array Partner Groups".


This procedure defines "Node A" as the node you begin working with, and "Node B" as the second node.

  1. Set up a Reverse Address Resolution Protocol (RARP) server on the network you want the new arrays to reside on, then assign an IP address to the new arrays.


    Note -

    Assign an IP address to the master controller unit only. The master controller unit is the array that has the interconnect cables attached to the right-hand connectors of its interconnect cards (see Figure 9-2).


    This RARP server lets you assign an IP address to the new arrays using the array's unique MAC address. For the procedure on setting up a RARP server, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

  2. Install the Ethernet cable between the arrays and the local area network (LAN) (see Figure 9-2).

  3. If not already installed, install interconnect cables between the two arrays of each partner group (see Figure 9-2).

    For the procedure on installing interconnect cables, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

    Figure 9-2 Adding Sun StorEdge T3/T3+ Arrays, Partner-Group Configuration

    Graphic

  4. Power on the arrays.


    Note -

    The arrays might take several minutes to boot.


    For the procedure on powering on arrays, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

  5. Administer the arrays' network addresses and settings.

    Telnet to the StorEdge T3/T3+ master controller unit and administer the arrays.

    For the procedure on administering array network address and settings, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

  6. Install any required array controller firmware upgrades:

    For partner-group configurations, telnet to the StorEdge T3/T3+ master controller unit and, if necessary, install the required array controller firmware.

    For the required array controller firmware revision number, see the Sun StorEdge T3 Disk Tray Release Notes.

  7. At the master array's prompt, use the port list command to ensure that each array has a unique target address:


    t3:/:<#> port list
    

    If the arrays do not have unique target addresses, use the port set command to set the addresses. For the procedure on verifying and assigning a target address to an array, see the Sun StorEdge T3 and T3+ Array Configuration Guide. For more information about the port command, see the Sun StorEdge T3 and T3+ Array Administrator's Guide.

  8. At the master array's prompt, use the sys list command to verify that the cache and mirror settings for each array are set to auto:


    t3:/:<#> sys list
    

    If the two settings are not already set to auto, set them using the following commands at each array's prompt:


    t3:/:<#> sys cache auto
    t3:/:<#> sys mirror auto
    

    For more information about the sys command, see the Sun StorEdge T3 and T3+ Array Administrator's Guide.

  9. Use the StorEdge T3/T3+ sys list command to verify that the mp_support parameter for each array is set to mpxio:


    t3:/:<#> sys list
    

    If mp_support is not already set to mpxio, set it using the following command at each array's prompt:


    t3:/:<#> sys mp_support mpxio
    

    For more information about the sys command, see the Sun StorEdge T3 and T3+ Array Administrator's Guide.

  10. Configure the new arrays with the desired logical volumes.

    For the procedure on creating and initializing a logical volume, see the Sun StorEdge T3 and T3+ Array Administrator's Guide. For the procedure on mounting a logical volume, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

  11. Reset the arrays.

    For the procedure on rebooting or resetting an array, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

  12. (Skip this step if you are adding StorEdge T3+ arrays.) Install the media interface adapter (MIA) in the StorEdge T3 arrays you are adding, as shown in Figure 9-2.

    For the procedure on installing an MIA, see the Sun StorEdge T3 and T3+ Array Configuration Guide.

  13. If necessary, install GBICs in the FC switches, as shown in Figure 9-2.

    For the procedure on installing a GBIC to an FC switch, see the SANbox 8/16 Segmented Loop Switch User's Manual.

  14. Install a fiber optic cable between each FC switch and both new arrays of the partner-group, as shown in Figure 9-2.

    For the procedure on installing a fiber optic cable, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.


    Note -

    If you are using your StorEdge T3/T3+ arrays to create a SAN by using two Sun StorEdge Network FC Switch-8 or Switch-16 switches and Sun SAN Version 3.0 release software, see "StorEdge T3 and T3+ Array (Partner-Group) SAN Considerations" for more information.


  15. Determine the resource groups and device groups running on all nodes.

    Record this information because you will use it in Step 54 of this procedure to return resource groups and device groups to these nodes.


    # scstat
    

  16. Move all resource groups and device groups off Node A.


    # scswitch -S -h nodename
    

  17. Do you need to install host adapters in Node A?

    • If not, go to Step 24.

    • If you do need to install host adapters in Node A, continue with Step 18.

  18. Is the host adapter you are installing the first host adapter on Node A?

    • If not, go to Step 20.

    • If it is the first host adapter, use the pkginfo command as shown below to determine whether the required support packages for the host adapter are already installed on this node. The following packages are required:


      # pkginfo | egrep Wlux
      system  SUNWluxd   Sun Enterprise Network Array sf Device Driver
      system  SUNWluxdx  Sun Enterprise Network Array sf Device Driver (64-bit)
      system  SUNWluxl   Sun Enterprise Network Array socal Device Driver
      system  SUNWluxlx  Sun Enterprise Network Array socal Device Driver (64-bit)
      system  SUNWluxop  Sun Enterprise Network Array firmware and utilities
      system  SUNWluxox  Sun Enterprise Network Array libraries (64-bit)

  19. Are the required support packages already installed?

    • If they are already installed, go to Step 20.

    • If not, install the required support packages that are missing.

    The support packages are located in the Product directory of the Solaris CD-ROM. Use the pkgadd command to add any missing packages.


    # pkgadd -d path_to_Solaris/Product Pkg1 Pkg2 Pkg3 ... PkgN
    

  20. Shut down and power off Node A.


    # shutdown -y -g0 -i0
    

    For the procedure on shutting down and powering off a node, see the Sun Cluster 3.0 12/01 System Administration Guide.

  21. Install the host adapters in Node A.

    For the procedure on installing host adapters, see the documentation that shipped with your host adapters and nodes.

  22. Power on and boot Node A into non-cluster mode.


    {0} ok boot -x
    

    For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.

  23. If necessary, upgrade the host adapter firmware on Node A.

    See the Sun Cluster 3.0 12/01 Release Notes for information about accessing Sun's EarlyNotifier web pages, which list information about any required patches or firmware levels that are available for download. For the procedure on applying any host adapter firmware patch, see the firmware patch README file.

  24. If necessary, install GBICs to the FC switches, as shown in Figure 9-3.

    For the procedure on installing a GBIC to an FC switch, see the SANbox 8/16 Segmented Loop Switch User's Manual.

  25. Connect fiber optic cables between Node A and the FC switches, as shown in Figure 9-3.

    For the procedure on installing a fiber optic cable, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.


    Note -

    If you are using your StorEdge T3/T3+ arrays to create a SAN by using two Sun StorEdge Network FC Switch-8 or Switch-16 switches and Sun SAN Version 3.0 release software, see "StorEdge T3 and T3+ Array (Partner-Group) SAN Considerations" for more information.


    Figure 9-3 Adding Sun StorEdge T3/T3+ Arrays, Partner-Group Configuration

    Graphic

  26. If necessary, install the required Solaris patches for StorEdge T3/T3+ array support on Node A.

    See the Sun Cluster 3.0 12/01 Release Notes for information about accessing Sun's EarlyNotifier web pages, which list information about any required patches or firmware levels that are available for download. For the procedure on applying any host adapter firmware patch, see the firmware patch README file.

  27. Install any required patches or software for Sun StorEdge Traffic Manager software support on Node A from the Sun Download Center Web site, http://www.sun.com/storage/san/

    For instructions on installing the software, see the information on the web site.

  28. Activate the Sun StorEdge Traffic Manager software functionality in the software you installed in Step 27.

    To activate the Sun StorEdge Traffic Manager software functionality, manually edit the /kernel/drv/scsi_vhci.conf file to set the mpxio-disable parameter to no:


    mpxio-disable="no"
    

  29. Shut down Node A.


    # shutdown -y -g0 -i0
    

  30. Perform a reconfiguration boot on Node A to create the new Solaris device files and links.


    {0} ok boot -r
    

  31. On Node A, update the /devices and /dev entries:


    # devfsadm -C 
    

  32. On Node A, update the paths to the DID instances:


    # scdidadm -C
    

  33. Label the new array logical volume.

    For the procedure on labeling a logical volume, see the Sun StorEdge T3 and T3+ Array Administrator's Guide.
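
    As a hedged sketch, the label is typically written from a node that is connected to the array by using the format utility: select the new array LUN from the disk menu, then run the label subcommand.

    # format
    (select the new array LUN from the menu)
    format> label
    format> quit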

  34. (Optional) On Node A, verify that the device IDs (DIDs) are assigned to the new array.


    # scdidadm -l
    

  35. Do you need to install host adapters in Node B?

    • If not, go to Step 43.

    • If you do need to install host adapters in Node B, continue with Step 36.

  36. Is the host adapter you are installing the first host adapter on Node B?

    • If not, go to Step 38.

    • If it is the first host adapter, determine whether the required support packages for the host adapter are already installed on this node. The following packages are required.


    # pkginfo | egrep Wlux
    system  SUNWluxd   Sun Enterprise Network Array sf Device Driver
    system  SUNWluxdx  Sun Enterprise Network Array sf Device Driver (64-bit)
    system  SUNWluxl   Sun Enterprise Network Array socal Device Driver
    system  SUNWluxlx  Sun Enterprise Network Array socal Device Driver (64-bit)
    system  SUNWluxop  Sun Enterprise Network Array firmware and utilities
    system  SUNWluxox  Sun Enterprise Network Array libraries (64-bit)

  37. Are the required support packages already installed?

    • If they are already installed, go to Step 38.

    • If not, install the missing support packages.

    The support packages are located in the Product directory of the Solaris CD-ROM. Use the pkgadd command to add any missing packages.


    # pkgadd -d path_to_Solaris/Product Pkg1 Pkg2 Pkg3 ... PkgN
    

  38. Move all resource groups and device groups off Node B.


    # scswitch -S -h nodename
    

  39. Shut down and power off Node B.


    # shutdown -y -g0 -i0
    

    For the procedure on shutting down and powering off a node, see the Sun Cluster 3.0 12/01 System Administration Guide.

  40. Install the host adapters in Node B.

    For the procedure on installing host adapters, see the documentation that shipped with your host adapters and nodes.

  41. Power on and boot Node B into non-cluster mode.


    {0} ok boot -x
    

    For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.

  42. If necessary, upgrade the host adapter firmware on Node B.

    See the Sun Cluster 3.0 12/01 Release Notes for information about accessing Sun's EarlyNotifier web pages, which list information about any required patches or firmware levels that are available for download. For the procedure on applying any host adapter firmware patch, see the firmware patch README file.

  43. If necessary, install GBICs to the FC switches, as shown in Figure 9-4.

    For the procedure on installing GBICs to an FC switch, see the SANbox 8/16 Segmented Loop Switch User's Manual.

  44. Connect fiber optic cables between the FC switches and Node B as shown in Figure 9-4.

    For the procedure on installing fiber optic cables, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.


    Note -

    If you are using your StorEdge T3/T3+ arrays to create a SAN by using two Sun StorEdge Network FC Switch-8 or Switch-16 switches and Sun SAN Version 3.0 release software, see "StorEdge T3 and T3+ Array (Partner-Group) SAN Considerations" for more information.


    Figure 9-4 Adding a Sun StorEdge T3/T3+ Array, Partner-Pair Configuration

    Graphic

  45. If necessary, install the required Solaris patches for StorEdge T3/T3+ array support on Node B.

    For a list of required Solaris patches for StorEdge T3/T3+ array support, see the Sun StorEdge T3 Disk Tray Release Notes.

  46. If you are installing a partner-group configuration, install any required patches or software for Sun StorEdge Traffic Manager software support on Node B from the Sun Download Center Web site, http://www.sun.com/storage/san/

    For instructions on installing the software, see the information on the web site.

  47. Activate the Sun StorEdge Traffic Manager software functionality in the software you installed in Step 46.

    To activate the Sun StorEdge Traffic Manager software functionality, manually edit the /kernel/drv/scsi_vhci.conf file to set the mpxio-disable parameter to no:


    mpxio-disable="no"
    

  48. Shut down Node B.


    # shutdown -y -g0 -i0
    

  49. Perform a reconfiguration boot to create the new Solaris device files and links on Node B.


    {0} ok boot -r
    

  50. On Node B, update the /devices and /dev entries:


    # devfsadm -C 
    

  51. On Node B, update the paths to the DID instances:


    # scdidadm -C
    

  52. (Optional) On Node B, verify that the DIDs are assigned to the new arrays:


    # scdidadm -l
    

  53. On one node attached to the new arrays, reset the SCSI reservation state:


    # scdidadm -R n
    

    Where n is the DID instance of an array LUN you are adding to the cluster.


    Note -

    Repeat this command on the same node for each array LUN you are adding to the cluster.


  54. Return the resource groups and device groups you identified in Step 15 to all nodes.


    # scswitch -z -g resource-group -h nodename
    # scswitch -z -D device-group-name -h nodename
    

    For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.

  55. Perform volume management administration to incorporate the new logical volumes into the cluster.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

How to Remove StorEdge T3/T3+ Arrays From a Running Cluster

Use this procedure to permanently remove StorEdge T3/T3+ array partner groups and their submirrors from a running cluster.

This procedure defines "Node A" as the cluster node you begin working with, and "Node B" as the other node.


Caution -

During this procedure, you lose access to the data that resides on each array partner-group you are removing.


  1. If necessary, back up all database tables, data services, and volumes associated with each partner-group you are removing.

  2. If necessary, run the appropriate Solstice DiskSuite or VERITAS Volume Manager commands to detach the submirrors from each array or partner-group that you are removing to stop all I/O activity to the array or partner-group.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

  3. Run the appropriate Solstice DiskSuite or VERITAS Volume Manager commands to remove references to each LUN that belongs to the array or partner-group that you are removing.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

  4. Determine the resource groups and device groups running on all nodes.

    Record this information because you will use it in Step 21 of this procedure to return resource groups and device groups to these nodes.


    # scstat
    

  5. Move all resource groups and device groups off Node A.


    # scswitch -S -h nodename
    

  6. Shut down Node A.


    # shutdown -y -g0 -i0
    

    For the procedure on shutting down a node, see the Sun Cluster 3.0 12/01 System Administration Guide.

  7. Disconnect the fiber optic cables that connect both arrays to the FC switches, then disconnect the Ethernet cable(s).

  8. Is any array you are removing the last array connected to an FC switch on Node A?

    • If not, go to Step 12.

    • If it is the last array, disconnect the fiber optic cable between Node A and the FC switch that was connected to this array.

    For the procedure on removing a fiber optic cable, see the Sun StorEdge T3 and T3+ Array Configuration Guide.


    Note -

    If you are using your StorEdge T3/T3+ arrays in a SAN-configured cluster, you must keep two FC switches configured in parallel to maintain cluster availability. See "StorEdge T3 and T3+ Array (Partner-Group) SAN Considerations" for more information.


  9. Do you want to remove the host adapters from Node A?

    • If not, go to Step 12.

    • If yes, power off Node A.

  10. Remove the host adapters from Node A.

    For the procedure on removing host adapters, see the documentation that shipped with your host adapter and nodes.

  11. Without allowing the node to boot, power on Node A.

    For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.

  12. Boot Node A into cluster mode.


    {0} ok boot
    

  13. Move all resource groups and device groups off Node B.


    # scswitch -S -h nodename
    

  14. Shut down Node B.


    # shutdown -y -g0 -i0
    

  15. Is any array you are removing the last array connected to an FC switch on Node B?

    • If not, go to Step 16.

    • If it is the last array, disconnect the fiber optic cable connecting this FC switch to Node B.

    For the procedure on removing a fiber optic cable, see the Sun StorEdge T3 and T3+ Array Configuration Guide.


    Note -

    If you are using your StorEdge T3/T3+ arrays in a SAN-configured cluster, you must keep two FC switches configured in parallel to maintain cluster availability. See "StorEdge T3 and T3+ Array (Partner-Group) SAN Considerations" for more information.


  16. Do you want to remove the host adapters from Node B?

    • If not, go to Step 19.

    • If yes, power off Node B.

  17. Remove the host adapters from Node B.

    For the procedure on removing host adapters, see the documentation that shipped with your nodes.

  18. Without allowing the node to boot, power on Node B.

    For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.

  19. Boot Node B into cluster mode.


    {0} ok boot
    

    For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.

  20. On all cluster nodes, update the /devices and /dev entries.


    # devfsadm -C
    # scdidadm -C
    

  21. Return the resource groups and device groups you identified in Step 4 to all nodes.


    # scswitch -z -g resource-group -h nodename
    # scswitch -z -D device-group-name -h nodename
    

How to Replace a Failed Disk Drive in a Running Cluster

Use this procedure to replace one failed disk drive in a StorEdge T3/T3+ array in a running cluster.


Caution -

If you remove any field replaceable unit (FRU) for an extended period of time, thermal complications might result. To prevent this complication, the StorEdge T3/T3+ array is designed so that an orderly shutdown occurs when you remove a component for longer than 30 minutes. Therefore, a replacement part must be immediately available before starting an FRU replacement procedure. You must replace an FRU within 30 minutes or the StorEdge T3/T3+ array, and all attached StorEdge T3/T3+ arrays, will shut down and power off.


  1. Did the failed disk drive impact the array logical volume's availability?

    • If not, go to Step 2.

    • If it did impact logical volume availability, remove the logical volume from volume management control.

      For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

  2. Replace the disk drive in the array.

    For the procedure on replacing a disk drive, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

  3. Did you remove a LUN from volume management control in Step 1?

    • If not, you are finished with this procedure.

    • If you did remove a LUN from volume management control, return the LUN to volume management control now.

      For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

How to Replace a Node-to-Switch Component in a Running Cluster

Use this procedure to replace the following node-to-switch components in a running cluster:

  • Node-to-switch fiber optic cable
  • GBIC on a node's host adapter
  • GBIC on an FC switch, connecting to a node

  1. On the node connected to the component you are replacing, determine the resource groups and device groups running on the node.

    Record this information because you will use it in Step 4 of this procedure to return resource groups and device groups to these nodes.


    # scstat
    

  2. Move all resource groups and device groups to another node.


    # scswitch -S -h nodename
    

  3. Replace the node-to-switch component.

    • For the procedure on replacing a fiber optic cable between a node and an FC switch, see the Sun StorEdge Network FC Switch-8 and Switch-16 Installation and Configuration Guide, Sun SAN 3.0.

    • For the procedure on replacing a GBIC on an FC switch, see the SANbox 8/16 Segmented Loop Switch User's Manual.

  4. Return the resource groups and device groups you identified in Step 1 to the node that is connected to the component you replaced.


    # scswitch -z -g resource-group -h nodename
    # scswitch -z -D device-group-name -h nodename
    

How to Replace a FC Switch or Array-to-Switch Component in a Running Cluster

Use this procedure to replace an FC switch, or the following array-to-switch components, in a running cluster:

  • Array-to-switch fiber optic cable
  • GBIC on an FC switch, connecting to an array
  • StorEdge network FC switch-8 or switch-16 power cord
  • Media interface adapter (MIA) on a StorEdge T3 array (not applicable for StorEdge T3+ arrays)
  • Interconnect cables
  • StorEdge T3/T3+ array controller

  1. Telnet to the array that is connected to the FC switch or component that you are replacing.

  2. Use the T3/T3+ sys stat command to view the controller status for the two arrays of the partner group.

    In the following example, both controllers are ONLINE.


    t3:/:<#> sys stat
    Unit   State      Role    Partner
    -----  ---------  ------  -------
     1    ONLINE     Master    2
     2    ONLINE     AlterM    1

    See the Sun StorEdge T3 and T3+ Array Administrator's Guide for more information about the sys stat command.

  3. Is the FC switch or component that you are replacing attached to an array controller that is ONLINE or DISABLED, as determined in Step 2?

    • If the controller is already DISABLED, go to Step 5.

    • If the controller is ONLINE, use the T3/T3+ disable command to disable it. Using the example from Step 2, if you want to disable Unit 1, enter the following:


      t3:/:<#> disable u1
      

    See the Sun StorEdge T3 and T3+ Array Administrator's Guide for more information about the disable command.

  4. Use the T3/T3+ sys stat command to verify that the controller's state has been changed to DISABLED.


    t3:/:<#> sys stat
    Unit   State      Role    Partner
    -----  ---------  ------  -------
     1    DISABLED   Slave     
     2    ONLINE     Master    

  5. Replace the component using the following references:

    • For the procedure on replacing a fiber optic cable between an array and an FC switch, see the Sun StorEdge Network FC Switch-8 and Switch-16 Installation and Configuration Guide, Sun SAN 3.0.

    • For the procedure on replacing a GBIC on an FC switch, see the SANbox 8/16 Segmented Loop Switch User's Manual.

    • For the procedure on replacing a StorEdge network FC switch-8 or switch-16, see the Sun StorEdge Network FC Switch-8 and Switch-16 Installation and Configuration Guide, Sun SAN 3.0.


      Note -

      If you are replacing an FC switch and you intend to save the switch IP configuration for restoration to the replacement switch, do not connect the cables to the replacement switch until after you recall the Fabric configuration to the replacement switch. For more information about saving and recalling switch configurations see the Sun StorEdge Network FC Switch-8 and Switch-16 Installation and Configuration Guide, Sun SAN 3.0.



      Note -

      Before you replace an FC switch, be sure that the probe_timeout parameter of your data service software is set to more than 90 seconds. Increasing the value of the probe_timeout parameter to more than 90 seconds avoids unnecessary resource group restarts when one of the FC switches is powered off. (See the example after this list.)


    • For the procedure on replacing an MIA, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

    • For the procedure on replacing interconnect cables, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.
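
    In connection with the probe_timeout note above, the following hedged sketch shows the general form for changing a resource extension property with the scrgadm command; the resource name is a placeholder, and not every data service exposes a Probe_timeout extension property:

    # scrgadm -c -j resource-name -x Probe_timeout=120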

  6. If necessary, telnet to the array of the partner group that is still online.

  7. Use the T3/T3+ enable command to enable the array that you disabled in Step 3.


    t3:/:<#> enable u1
    

    See the Sun StorEdge T3 and T3+ Array Administrator's Guide for more information about the enable command.

  8. Use the T3/T3+ sys stat command to verify that the controller's state has been changed to ONLINE.


    t3:/:<#> sys stat
    Unit   State      Role    Partner
    -----  ---------  ------  -------
     1    ONLINE     AlterM    2
     2    ONLINE     Master    1

How to Replace an Array Chassis in a Running Cluster

Use this procedure to replace a StorEdge T3/T3+ array chassis in a running cluster. This procedure assumes that you want to retain all FRUs other than the chassis and the backplane. To replace the chassis, you must replace both the chassis and the backplane because these components are manufactured as one part.


Note -

Only trained, qualified Sun service providers should use this procedure to replace a StorEdge T3/T3+ array chassis. This procedure requires the Sun StorEdge T3 and T3+ Array Field Service Manual, which is available to trained Sun service providers only.


  1. Detach the submirrors on the array that is connected to the chassis you are replacing to stop all I/O activity to this array.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

  2. Are the arrays in your partner-pair configuration made redundant by host-based mirroring?

    • If yes, go to Step 3.

    • If not, shut down the cluster.


      # scshutdown -y -g0
      

  3. Replace the chassis/backplane.

    For the procedure on replacing a StorEdge T3/T3+ chassis, see the Sun StorEdge T3 and T3+ Array Field Service Manual. (This manual is available to trained Sun service providers only.)

  4. Did you shut down the cluster in Step 2?

    • If not, go to Step 5.

    • If you did shut down the cluster, boot it back into cluster mode.


      {0} ok boot
      

  5. Reattach the submirrors you detached in Step 1 to resynchronize them.


    Caution -

    The world wide numbers (WWNs) will change as a result of this procedure and you must reconfigure your volume manager software to recognize the new WWNs.


    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

How to Replace a Node's Host Adapter in a Running Cluster

Use this procedure to replace a failed host adapter in a running cluster. As defined in this procedure, "Node A" is the node with the failed host adapter you are replacing and "Node B" is the other node.

  1. Determine the resource groups and device groups running on all nodes.

    Record this information because you will use it in Step 8 of this procedure to return resource groups and device groups to these nodes.


    # scstat
    

  2. Move all resource groups and device groups off Node A.


    # scswitch -S -h nodename
    

  3. Shut down Node A.


    # shutdown -y -g0 -i0
    

  4. Power off Node A.

    For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.

  5. Replace the failed host adapter.

    For the procedure on removing and adding host adapters, see the documentation that shipped with your nodes.

  6. Power on Node A.

    For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.

  7. Boot Node A into cluster mode.


    {0} ok boot
    

    For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.

  8. Return the resource groups and device groups you identified in Step 1 to all nodes.


    # scswitch -z -g resource-group -h nodename
    # scswitch -z -D device-group-name -h nodename
    

    For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.

How to Migrate From a Single-Controller Configuration to a Partner-Group Configuration

Use this procedure to migrate your StorEdge T3/T3+ arrays from a single-controller (non-interconnected) configuration to a partner-group (interconnected) configuration.


Note -

Only trained, qualified Sun service providers should use this procedure. This procedure requires the Sun StorEdge T3 and T3+ Array Field Service Manual, which is available to trained Sun service providers only.


  1. Remove the non-interconnected arrays that will be in your partner-group from the cluster configuration.

    Follow the procedure in "How to Remove StorEdge T3/T3+ Arrays From a Running Cluster".


    Note -

    Back up all data on the arrays before removing them from the cluster configuration.



    Note -

    This procedure assumes that the two arrays that will be in the partner-group configuration are correctly isolated from each other on separate FC switches. Do not disconnect the cables from the FC switches or nodes.


  2. Connect and configure the single arrays to form a partner-group.

    Follow the procedure in the Sun StorEdge T3 and T3+ Array Field Service Manual.

  3. Add the new partner-group to the cluster configuration:

    1. At each array's prompt, use the port list command to ensure that each array has a unique target address:


      t3:/:<#> port list
      

      If the arrays do not have unique target addresses, use the port set command to set the addresses. For the procedure on verifying and assigning a target address to an array, see the Sun StorEdge T3 and T3+ Array Configuration Guide. For more information about the port command, see the Sun StorEdge T3 and T3+ Array Administrator's Guide.

    2. At each array's prompt, use the sys list command to verify that the cache and mirror settings for each array are set to auto:


      t3:/:<#> sys list
      

      If the two settings are not already set to auto, set them using the following commands at each array's prompt:


      t3:/:<#> sys cache auto
      t3:/:<#> sys mirror auto
      

    3. Use the StorEdge T3/T3+ sys list command to verify that the mp_support parameter for each array is set to mpxio:


      t3:/:<#> sys list
      

      If mp_support is not already set to mpxio, set it using the following command at each array's prompt:


      t3:/:<#> sys mp_support mpxio
      

    4. If necessary, upgrade the host adapter firmware on Node A.

      See the Sun Cluster 3.0 12/01 Release Notes for information about accessing Sun's EarlyNotifier web pages, which list information about any required patches or firmware levels that are available for download. For the procedure on applying any host adapter firmware patch, see the firmware patch README file.

    5. If necessary, install the required Solaris patches for StorEdge T3/T3+ array support on Node A.

      See the Sun Cluster 3.0 12/01 Release Notes for information about accessing Sun's EarlyNotifier web pages, which list information about any required patches or firmware levels that are available for download.

    6. Install any required patches or software for Sun StorEdge Traffic Manager software support from the Sun Download Center Web site, http://www.sun.com/storage/san/

      For instructions on installing the software, see the information on the web site.

    7. Activate the Sun StorEdge Traffic Manager software functionality in the software you installed in Step 6.

      To activate the Sun StorEdge Traffic Manager software functionality, manually edit the /kernel/drv/scsi_vhci.conf file to set the mpxio-disable parameter to no:


      mpxio-disable="no"
      

    8. Shut down Node A.


      # shutdown -y -g0 -i0
      
    9. Perform a reconfiguration boot on Node A to create the new Solaris device files and links.


      {0} ok boot -r
      
    10. On Node A, update the /devices and /dev entries:


      # devfsadm -C 
      

    11. On Node A, update the paths to the DID instances:


      # scdidadm -C
      
    12. Label the new array logical volume.

      For the procedure on labeling a logical volume, see the Sun StorEdge T3 and T3+ Array Administrator's Guide.

    13. If necessary, upgrade the host adapter firmware on Node B.

      See the Sun Cluster 3.0 12/01 Release Notes for information about accessing Sun's EarlyNotifier web pages, which list information about any required patches or firmware levels that are available for download. For the procedure on applying any host adapter firmware patch, see the firmware patch README file.

    14. If necessary, install the required Solaris patches for StorEdge T3/T3+ array support on Node B.

      For a list of required Solaris patches for StorEdge T3/T3+ array support, see the Sun StorEdge T3 Disk Tray Release Notes.

    15. Install any required patches or software for Sun StorEdge Traffic Manager software support from the Sun Download Center Web site, http://www.sun.com/storage/san/

      For instructions on installing the software, see the information on the web site.

    16. Activate the Sun StorEdge Traffic Manager software functionality in the software you installed in Step 15.

      To activate the Sun StorEdge Traffic Manager software functionality, manually edit the /kernel/drv/scsi_vhci.conf file to set the mpxio-disable parameter to no:


      mpxio-disable="no"
      

    17. Shut down Node B.


      # shutdown -y -g0 -i0
      
    18. Perform a reconfiguration boot to create the new Solaris device files and links on Node B.


      {0} ok boot -r
      
    19. On Node B, update the /devices and /dev entries:


      # devfsadm -C 
      

    20. On Node B, update the paths to the DID instances:


      # scdidadm -C
      
    21. (Optional) On Node B, verify that the DIDs are assigned to the new arrays:


      # scdidadm -l
      

    22. On one node attached to the new arrays, reset the SCSI reservation state:


      # scdidadm -R n
      

      Where n is the DID instance of an array LUN you are adding to the cluster.


      Note -

      Repeat this command on the same node for each array LUN you are adding to the cluster.


    23. Perform volume management administration to incorporate the new logical volumes into the cluster.

      For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

StorEdge T3 and T3+ Array (Partner-Group) SAN Considerations

This section contains information for using StorEdge T3/T3+ arrays in a partner-group configuration as the storage devices in a SAN that is in a Sun Cluster environment.

Full, detailed hardware and software installation and configuration instructions for creating and maintaining a SAN are described in the Sun StorEdge Network FC Switch-8 and Switch-16 Installation and Configuration Guide, Sun SAN 3.0 that is shipped with your switch hardware. Use the cluster-specific procedures in this chapter for installing and maintaining StorEdge T3/T3+ arrays in your cluster; refer to the Sun StorEdge Network FC Switch-8 and Switch-16 Installation and Configuration Guide, Sun SAN 3.0 for switch and SAN instructions and information on such topics as switch ports and zoning, and required software and firmware.

Hardware components of a SAN include Fibre Channel switches, Fibre Channel host adapters, and storage devices and enclosures. The software components include drivers bundled with the operating system, firmware for the switches, management tools for the switches and storage devices, volume managers, if needed, and other administration tools.

StorEdge T3/T3+ Array (Partner-Group) Supported SAN Features

Table 9-3 lists the SAN features that are supported with the StorEdge T3/T3+ array in a partner-group configuration. See the Sun StorEdge Network FC Switch-8 and Switch-16 Installation and Configuration Guide, Sun SAN 3.0 for details about these features.

Table 9-3 StorEdge T3/T3+ Array (Partner-Group) Supported SAN Features

Feature 

Supported 

Cascading 

Yes 

Zone type 

SL zone, nameserver zone* 


*When using nameserver zones, the host must be connected to the F-port on the switch; the StorEdge T3/T3+ array must be connected to the TL port of the switch. 

Maximum number of arrays per SL zone 

Maximum initiators per LUN 

Maximum initiators per zone 

4* 


*Each node has one path to each of the arrays in the partner-group. 

Sample StorEdge T3/T3+ Array (Partner-Group) SAN Configuration

Figure 9-5 shows a sample SAN hardware configuration when using two hosts and four StorEdge T3/T3+ partner-groups. See the Sun StorEdge Network FC Switch-8 and Switch-16 Installation and Configuration Guide, Sun SAN 3.0 for details.

Figure 9-5 Sample StorEdge T3/T3+ Array (Partner-Group) SAN Configuration

Graphic

StorEdge T3/T3+ Array (Partner-Group) SAN Clustering Considerations

If you are replacing an FC switch and you intend to save the switch IP configuration for restoration to the replacement switch, do not connect the cables to the replacement switch until after you recall the Fabric configuration to the replacement switch. For more information about saving and recalling switch configurations see the Sun StorEdge Network FC Switch-8 and Switch-16 Installation and Configuration Guide, Sun SAN 3.0.