Sun Cluster 3.0 U1 Hardware Guide

How to Add a StorEdge T3 Disk Tray

Use this procedure to add a new StorEdge T3 disk tray to a running cluster.

This procedure defines Node A as the node you begin working with, and Node B as the remaining node.

  1. Set up a Reverse Address Resolution Protocol (RARP) server on the network where the new StorEdge T3 disk tray will reside, and then assign an IP address to the new StorEdge T3 disk tray.

    This RARP server enables you to assign an IP address to the new StorEdge T3 disk tray by using the StorEdge T3 disk tray's unique MAC address.

    For the procedure on setting up a RARP server, see the Sun StorEdge T3 Installation, Operation, and Service Manual.
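
    For example, on a Solaris host that acts as the RARP server, entries similar to the following add the disk tray's MAC address to the /etc/ethers file, assign its IP address in the /etc/hosts file, and start the RARP daemon. The MAC address, IP address, and host name t3-tray1 shown here are placeholders for your site's values.


    # echo "00:20:f2:00:xx:xx   t3-tray1" >> /etc/ethers
    # echo "192.168.1.10   t3-tray1" >> /etc/hosts
    # /usr/sbin/in.rarpd -a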

  2. Install the media interface adapter (MIA) in the StorEdge T3 disk tray you want to add as shown in Figure 8-2.

    For the procedure on installing a media interface adapter (MIA), see the Sun StorEdge T3 Configuration Guide.

  3. If necessary, install a gigabit interface converter (GBIC) in the Sun StorEdge FC-100 hub as shown in Figure 8-2.

    This GBIC enables you to connect the Sun StorEdge FC-100 hub to the StorEdge T3 disk tray you want to add.


    Note -

    No restrictions are placed on the hub port assignments. You can connect your StorEdge T3 disk tray and node to any hub port.


    For the procedure on installing an FC-100 hub GBIC, see the FC-100 Hub Installation and Service Manual.

  4. Install the 10Base-T Ethernet cable between the StorEdge T3 disk tray and the Local Area Network (LAN), as shown in Figure 8-2.

  5. Power on the StorEdge T3 disk tray.


    Note -

    The StorEdge T3 disk tray might require a few minutes to boot.


    For the procedure on powering on a StorEdge T3 disk tray, see the Sun StorEdge T3 Installation, Operation, and Service Manual.

  6. Telnet to the StorEdge T3 disk tray you are adding, and, if necessary, install the required StorEdge T3 disk tray controller firmware.

    Revision 1.16a firmware is required for the StorEdge T3 disk tray controller. For the procedure on upgrading firmware, see the firmware patch README.
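
    For example, you can check the controller firmware release that is currently installed by connecting to the disk tray and running the ver command. The host name t3-tray1 and the command prompt shown here are placeholders for your configuration.


    # telnet t3-tray1
    t3-tray1:/:<1> ver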

  7. Does this new StorEdge T3 disk tray have a unique target address?

    • If yes, proceed to Step 8.

    • If no, change the target address for this new StorEdge T3 disk tray.

    For the procedure on verifying and assigning a target address, see the Sun StorEdge T3 Configuration Guide.
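
    For example, the port list command on the disk tray displays the target ID that is assigned to each port, which you can compare with the target IDs of the existing disk trays. The host name and prompt shown here are placeholders for your configuration.


    t3-tray1:/:<1> port list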

  8. Install a fiber-optic cable between the Sun StorEdge FC-100 hub and the StorEdge T3 disk tray as shown in Figure 8-2.


    Note -

    No restrictions are placed on the hub port assignments. You can connect your StorEdge T3 disk tray and node to any hub port.


    For the procedure on installing a fiber-optic cable, see the Sun StorEdge T3 Configuration Guide.

    Figure 8-2 Adding a StorEdge T3 Disk Tray in a Single-Controller Configuration



    Note -

    Although Figure 8-2 shows a single-controller configuration, two disk trays are shown to illustrate how two non-interconnected disk trays are typically cabled in a cluster to allow data sharing and host-based mirroring.


  9. Configure the new StorEdge T3 disk tray.

    For the procedure on creating a logical volume, see the Sun StorEdge T3 Disk Tray Administrator's Guide.
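
    For example, creating and mounting a RAID 5 logical volume from the disk tray's command line might resemble the following sequence. The volume name v0, the drive range, and the RAID level are examples only; choose values that are appropriate for your configuration, and see the Sun StorEdge T3 Disk Tray Administrator's Guide for the supported procedure.


    t3-tray1:/:<1> vol add v0 data u1d1-8 raid 5 standby u1d9
    t3-tray1:/:<2> vol init v0 data
    t3-tray1:/:<3> vol mount v0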

  10. Determine the resource groups and device groups that are running on Node A and Node B.

    Record this information because you will use it in Step 42 of this procedure to return resource groups and device groups to these nodes.


    # scstat
    
  11. Move all resource groups and device groups off Node A.


    # scswitch -S -h nodename
    
  12. Do you need to install a host adapter in Node A?

    • If no, skip to Step 20.

    • If yes, proceed to Step 13.

  13. Is the host adapter you are installing the first FC-100/S host adapter on Node A?

    • If no, skip to Step 15.

    • If yes, determine whether the Fibre Channel support packages are already installed on Node A. This product requires the following packages.


    # pkginfo | egrep Wlux
    system    SUNWluxd     Sun Enterprise Network Array sf Device Driver
    system    SUNWluxdx    Sun Enterprise Network Array sf Device Driver (64-bit)
    system    SUNWluxl     Sun Enterprise Network Array socal Device Driver
    system    SUNWluxlx    Sun Enterprise Network Array socal Device Driver (64-bit)
    system    SUNWluxop    Sun Enterprise Network Array firmware and utilities
  14. Are the Fibre Channel support packages installed?

    • If yes, proceed to Step 15.

    • If no, install them.

    The Fibre Channel support packages are located in the Product directory of the Solaris CD-ROM. Use the pkgadd command to add any necessary packages.


    # pkgadd -d path_to_Solaris/Product Pkg1 Pkg2 Pkg3 ... PkgN
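
    For example, if the Solaris software is mounted at /cdrom/cdrom0 (the mount point and the Solaris_8 directory name are placeholders for your installation), the command might resemble the following:


    # pkgadd -d /cdrom/cdrom0/Solaris_8/Product SUNWluxd SUNWluxdx SUNWluxl SUNWluxlx SUNWluxop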
    
  15. Stop the Sun Cluster software on Node A and shut down Node A.


    # shutdown -y -g0 -i0
    

    For the procedure on shutting down a node, see the Sun Cluster 3.0 U1 System Administration Guide.

  16. Power off Node A.

  17. Install the host adapter in Node A.

    For the procedure on installing a host adapter, see the documentation that shipped with your host adapter and node.

  18. If necessary, power on and boot Node A.


    {0} ok boot -x
    

    For more information, see the Sun Cluster 3.0 U1 System Administration Guide.

  19. If necessary, upgrade the host adapter firmware on Node A.

    For the required host adapter firmware patch, see the Sun StorEdge T3 Disk Tray Release Notes. For the procedure on applying the host adapter firmware patch, see the firmware patch README.

  20. If necessary, install gigabit interface converters (GBICs), as shown in Figure 8-3.

    For the procedure on installing an FC-100 hub GBIC, see the FC-100 Hub Installation and Service Manual.

  21. If necessary, connect a fiber-optic cable between the Sun StorEdge FC-100 hub and Node A as shown in Figure 8-3.


    Note -

    No restrictions are placed on hub port assignments. You can connect your StorEdge T3 disk tray and node to any hub port.


    For the procedure on installing an FC-100/S host adapter GBIC, see your host adapter documentation. For the procedure on installing a fiber-optic cable, see the Sun StorEdge T3 Configuration Guide.

    Figure 8-3 Adding a StorEdge T3 Disk Tray in a Single-Controller Configuration


  22. If necessary, install the required Solaris patches for StorEdge T3 disk tray support on Node A.

    For a list of required Solaris patches for StorEdge T3 disk tray support, see the Sun StorEdge T3 Disk Tray Release Notes.
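
    For example, after you obtain the required patches, you might apply each patch with the patchadd command. The patch directory shown here is a placeholder; use the patch IDs that are listed in the Sun StorEdge T3 Disk Tray Release Notes and follow each patch README.


    # patchadd /var/tmp/patch-id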

  23. Shut down Node A.


    # shutdown -y -g0 -i0
    
  24. Perform a reconfiguration boot to create the new Solaris device files and links on Node A.


    {0} ok boot -r
    
  25. Label the new logical volume.

    For the procedure on labeling a logical volume, see the Sun StorEdge T3 Disk Tray Administrator's Guide.
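
    For example, you can typically label the new LUN from Node A by running the Solaris format utility, selecting the disk that corresponds to the new StorEdge T3 logical volume, and using the label command. This example assumes that the volume is labeled from the host; see the Sun StorEdge T3 Disk Tray Administrator's Guide for the supported procedure.


    # format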

  26. (Optional) On Node A, verify that the device IDs (DIDs) are assigned to the new StorEdge T3 disk tray.


    # scdidadm -l
    

  27. Do you need to install a host adapter in Node B?

    • If no, skip to Step 36.

    • If yes, proceed to Step 28.

  28. Is the host adapter you want to install the first FC-100/S host adapter on Node B?

    • If no, skip to Step 30.

    • If yes, determine whether the Fibre Channel support packages are already installed on Node B. This product requires the following packages.


    # pkginfo | egrep Wlux
    system    SUNWluxd     Sun Enterprise Network Array sf Device Driver
    system    SUNWluxdx    Sun Enterprise Network Array sf Device Driver (64-bit)
    system    SUNWluxl     Sun Enterprise Network Array socal Device Driver
    system    SUNWluxlx    Sun Enterprise Network Array socal Device Driver (64-bit)
    system    SUNWluxop    Sun Enterprise Network Array firmware and utilities
  29. Are the Fibre Channel support packages installed?

    • If yes, proceed to Step 30.

    • If no, install them.

    The Fibre Channel support packages are located in the Product directory of the Solaris CD-ROM. Use the pkgadd command to add any necessary packages.


    # pkgadd -d path_to_Solaris/Product Pkg1 Pkg2 Pkg3 ... PkgN
    
  30. Move all resource groups and device groups off Node B.


    # scswitch -S -h nodename
    
  31. Stop the Sun Cluster software on Node B, and shut down the node.


    # shutdown -y -g0 -i0
    

    For the procedure on shutting down a node, see the Sun Cluster 3.0 U1 System Administration Guide.

  32. Power off Node B.

    For more information, see the Sun Cluster 3.0 U1 System Administration Guide.

  33. Install the host adapter in Node B.

    For the procedure on installing a host adapter, see the documentation that shipped with your host adapter and node.

  34. If necessary, power on and boot Node B.


    {0} ok boot -x
    

    For more information, see the Sun Cluster 3.0 U1 System Administration Guide.

  35. If necessary, upgrade the host adapter firmware on Node B.

    For the required host adapter firmware patch, see the Sun StorEdge T3 Disk Tray Release Notes. For the procedure on applying the host adapter firmware patch, see the firmware patch README.

  36. If necessary, install gigabit interface converters (GBICs), as shown in Figure 8-4.

    For the procedure on installing an FC-100 hub GBIC, see the FC-100 Hub Installation and Service Manual.

  37. If necessary, connect a fiber-optic cable between the Sun StorEdge FC-100 hub and Node B as shown in Figure 8-4.

    For the procedure on installing an FC-100/S host adapter GBIC, see your host adapter documentation. For the procedure on installing a fiber-optic cable, see the Sun StorEdge T3 Configuration Guide.

    Figure 8-4 Adding a StorEdge T3 Disk Tray in a Single-Controller Configuration


  38. If necessary, install the required Solaris patches for StorEdge T3 disk tray support on Node B.

    For a list of required Solaris patches for StorEdge T3 disk tray support, see the Sun StorEdge T3 Disk Tray Release Notes.

  39. Shut down Node B.


    # shutdown -y -g0 -i0
    
  40. Perform a reconfiguration boot to create the new Solaris device files and links on Node B.


    {0} ok boot -r
    
  41. (Optional) On Node B, verify that the device IDs (DIDs) are assigned to the new StorEdge T3 disk tray.


    # scdidadm -l
    

  42. Return the resource groups and device groups you identified in Step 10 to Node A and Node B.


    # scswitch -z -g resource-group -h nodename
    # scswitch -z -D device-group-name -h nodename
    

    For more information, see the Sun Cluster 3.0 U1 System Administration Guide.

  43. Perform volume management administration to incorporate the new logical volumes into the cluster.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.
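
    For example, with VERITAS Volume Manager you might rescan for the new disks, or with Solstice DiskSuite you might add the new DID device to a diskset. The diskset name and the DID device number shown here are placeholders; see your volume manager documentation for the supported procedures.


    # vxdctl enable
    # metaset -s setname -a /dev/did/rdsk/dN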