Sun Cluster 3.0 12/01 Hardware Guide

How to Install StorEdge T3/T3+ Array Partner Groups

Perform the steps in this procedure in conjunction with the procedures in the Sun Cluster 3.0 12/01 Software Installation Guide and your server hardware manual.

  1. Install the host adapters in the nodes that will be connected to the arrays.

    For the procedure on installing host adapters, see the documentation that shipped with your host adapters and nodes.

  2. Install the Fibre Channel (FC) switches.

    For the procedure on installing a Sun StorEdge network FC switch-8 or switch-16, see the Sun StorEdge Network FC Switch-8 and Switch-16 Installation and Configuration Guide, Sun SAN 3.0.


    Note -

    You must use FC switches when installing arrays in a partner-group configuration. If you are using your StorEdge T3/T3+ arrays to create a storage area network (SAN) by using two Sun StorEdge Network FC Switch-8 or Switch-16 switches and Sun SAN Version 3.0 release software, see "StorEdge T3 and T3+ Array (Partner-Group) SAN Considerations" for more information.


  3. (Skip this step if you are installing StorEdge T3+ arrays.) Install the media interface adapters (MIAs) in the StorEdge T3 arrays you are installing, as shown in Figure 9-1.

    For the procedure on installing a media interface adapter (MIA), see the Sun StorEdge T3 and T3+ Array Configuration Guide.

  4. If necessary, install GBICs in the FC switches, as shown in Figure 9-1.

    For instructions on installing a GBIC to an FC switch, see the SANbox 8/16 Segmented Loop Switch User's Manual.

  5. Set up a Reverse Address Resolution Protocol (RARP) server on the network on which you want the new arrays to reside.

    This RARP server enables you to assign an IP address to each new array by using that array's unique MAC address. For the procedure on setting up a RARP server, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.
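
    On a Solaris host, a RARP server is typically set up by mapping each array's MAC address to a host name in /etc/ethers and /etc/hosts and running the in.rarpd daemon. The following is a minimal sketch only; the MAC address, host name, and IP address are placeholder values:

```
# Add one entry per array to /etc/ethers (placeholder MAC address and name):
8:0:20:7d:93:7e    t3-array-1

# Add a matching entry to /etc/hosts (placeholder IP address):
192.168.1.10       t3-array-1

# Start the RARP daemon if it is not already running:
/usr/sbin/in.rarpd -a
```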

  6. Cable the arrays (see Figure 9-1):

    1. Connect the arrays to the FC switches using fiber optic cables.

    2. Connect the Ethernet cables from each array to the LAN.

    3. Connect interconnect cables between the two arrays of each partner group.

    4. Connect power cords to each array.

    For the procedure on installing fiber optic, Ethernet, and interconnect cables, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

    Figure 9-1 StorEdge T3/T3+ Array Partner-Group (Interconnected) Controller Configuration


  7. Power on the arrays and verify that all components are powered on and functional.

    For the procedure on powering on the arrays and verifying the hardware configuration, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

  8. Administer the arrays' network settings:

    Telnet to the master controller unit and administer the arrays. For the procedure on administering the array network addresses and settings, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

    The master controller unit is the array that has the interconnect cables attached to the right-hand connectors of its interconnect cards (when viewed from the rear of the arrays). For example, Figure 9-1 shows the master controller unit of the partner-group as the lower array. Note in this diagram that the interconnect cables are connected to the right-hand connectors of both interconnect cards on the master controller unit.
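
    The address-setup session on the master controller unit generally follows this pattern; the values shown are placeholders, and the exact set parameters should be confirmed against the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual:

```
# telnet t3-array-1
t3:/:<1> set ip 192.168.1.10
t3:/:<2> set netmask 255.255.255.0
t3:/:<3> set gateway 192.168.1.1
```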

  9. Install any required array controller firmware:

    For partner-group configurations, telnet to the master controller unit and install the required controller firmware.

    See the Sun Cluster 3.0 12/01 Release Notes for information about accessing Sun's EarlyNotifier web pages, which list information about any required patches or firmware levels that are available for download. For the procedure on applying any host adapter firmware patch, see the firmware patch README file.
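
    Before applying a controller firmware patch, you can record the firmware level currently installed by running the ver command at the master controller unit's prompt (the output format varies by firmware release):

```
t3:/:<#> ver
```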

  10. At the master array's prompt, use the port list command to ensure that each array has a unique target address:


    t3:/:<#> port list
    

    If the arrays do not have unique target addresses, use the port set command to set the addresses. For the procedure on verifying and assigning a target address to an array, see the Sun StorEdge T3 and T3+ Array Configuration Guide. For more information about the port command, see the Sun StorEdge T3 and T3+ Array Administrator's Guide.
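
    As an illustrative example, assigning a new target address to one port with port set follows this general pattern; the port name and target ID below are placeholders, and the exact syntax is documented in the Sun StorEdge T3 and T3+ Array Configuration Guide:

```
t3:/:<#> port set u1p1 targetid 2
```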

  11. At the master array's prompt, use the sys list command to verify that the cache and mirror settings for each array are set to auto:


    t3:/:<#> sys list
    

    If the two settings are not already set to auto, set them using the following commands:


    t3:/:<#> sys cache auto
    t3:/:<#> sys mirror auto
    

    For more information about the sys command, see the Sun StorEdge T3 and T3+ Array Administrator's Guide.
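
    The cache and mirror settings appear as two lines of the sys list output. The following excerpt is illustrative only; field names and formatting vary by firmware release:

```
t3:/:<#> sys list
blocksize       : 64k
cache           : auto
mirror          : auto
```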

  12. At the master array's prompt, use the sys list command to verify that the mp_support parameter for each array is set to mpxio:


    t3:/:<#> sys list
    

    If mp_support is not already set to mpxio, set it using the following command:


    t3:/:<#> sys mp_support mpxio
    

    For more information about the sys command, see the Sun StorEdge T3 and T3+ Array Administrator's Guide.

  13. At the master array's prompt, use the sys stat command to verify that both array controllers are online, as shown in the following example output.


    t3:/:<#> sys stat
    Unit   State      Role    Partner
    -----  ---------  ------  -------
     1    ONLINE     Master    2
     2    ONLINE     AlterM    1

    For more information about the sys command, and for how to correct the situation if one or both controllers are not online, see the Sun StorEdge T3 and T3+ Array Administrator's Guide.

  14. (Optional) Configure the arrays with the desired logical volumes.

    For the procedure on creating and initializing a logical volume, see the Sun StorEdge T3 and T3+ Array Administrator's Guide. For the procedure on mounting a logical volume, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.
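
    As a sketch of the general pattern only, creating, initializing, and mounting a RAID 5 volume from the master controller unit's prompt looks like the following; the volume name, drive range, and RAID level are example values, and the supported layouts are documented in the referenced manuals:

```
t3:/:<#> vol add v0 data u1d1-8 raid 5 standby u1d9
t3:/:<#> vol init v0 data
t3:/:<#> vol mount v0
```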

  15. Reset the arrays.

    For the procedure on rebooting or resetting an array, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

  16. Install the Solaris operating environment on the cluster nodes, and apply the required Solaris patches for Sun Cluster software and StorEdge T3/T3+ array support.

    For the procedure on installing the Solaris operating environment, see the Sun Cluster 3.0 12/01 Software Installation Guide.

    See the Sun Cluster 3.0 12/01 Release Notes for information about accessing Sun's EarlyNotifier web pages, which list information about any required patches or firmware levels that are available for download. For the procedure on applying any host adapter firmware patch, see the firmware patch README file.

  17. Install on the cluster nodes any required patches or software for Sun StorEdge Traffic Manager software support from the Sun Download Center Web site, http://www.sun.com/storage/san/

    For instructions on installing the software, see the information on the web site.

  18. Activate the Sun StorEdge Traffic Manager software functionality in the software you installed on the cluster nodes in Step 17.

    To activate the Sun StorEdge Traffic Manager software functionality, manually edit the /kernel/drv/scsi_vhci.conf file on each node and change the mpxio-disable parameter to no:


    mpxio-disable="no"
    

  19. Perform a reconfiguration boot on all nodes to create the new Solaris device files and links.


    {0} ok boot -r
    
  20. On all nodes, update the /devices and /dev entries:


    # devfsadm -C 
    

  21. On all nodes, use the luxadm display command to confirm that all arrays you installed are now visible.


    # luxadm display 
    

Where to Go From Here

To continue with Sun Cluster software installation tasks, see the Sun Cluster 3.0 12/01 Software Installation Guide.