Sun Cluster 3.0 U1 Release Notes Supplement

How to Add StorEdge T3/T3+ Disk Tray Partner Groups to a Running Cluster


Note -

Use this procedure to add new StorEdge T3/T3+ disk tray partner groups to a running cluster. To install partner groups in a new Sun Cluster that is not yet running, use the procedure in "How to Install StorEdge T3/T3+ Disk Tray Partner Groups".


This procedure defines "Node A" as the node you begin working with, and "Node B" as the second node.

  1. Set up a Reverse Address Resolution Protocol (RARP) server on the network where you want the new disk trays to reside, and then assign an IP address to the new disk trays.


    Note -

    Assign an IP address to the master controller unit only. The master controller unit is the disk tray that has the interconnect cables attached to the right-hand connectors of its interconnect cards (see Figure B-2).


    This RARP server lets you assign an IP address to the new disk trays using the disk tray's unique MAC address. For the procedure on setting up a RARP server, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.
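
    For example, on a Solaris host on the same subnet, you might add entries such as the following before starting the RARP daemon (the MAC address, host name, and IP address shown are placeholders for your site's values):


    # echo "0:20:f2:12:34:56 t3-mstr" >> /etc/ethers
    # echo "192.168.1.10 t3-mstr" >> /etc/hosts
    # /usr/sbin/in.rarpd -a
    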

  2. Install the Ethernet cable between the disk trays and the local area network (LAN) (see Figure B-2).

  3. If not already installed, install interconnect cables between the two disk trays of each partner group (see Figure B-2).

    For the procedure on installing interconnect cables, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

    Figure B-2 Adding Sun StorEdge T3/T3+ Disk Trays, Partner-Group Configuration


  4. Power on the disk trays.


    Note -

    The disk trays might take several minutes to boot.


    For the procedure on powering on disk trays, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

  5. Administer the disk trays' network addresses and settings.

    Telnet to the StorEdge T3/T3+ master controller unit and administer the disk trays.

    For the procedure on administering disk tray network addresses and settings, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.
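
    For example, after you telnet to the master controller unit, commands such as the following set the network parameters (the addresses shown are placeholders for your site's values):


    t3:/:<#> set ip 192.168.1.10
    t3:/:<#> set netmask 255.255.255.0
    t3:/:<#> set gateway 192.168.1.1
    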

  6. Install any required disk tray controller firmware upgrades.

    For partner-group configurations, telnet to the StorEdge T3/T3+ master controller unit and, if necessary, install the required disk tray controller firmware.

    For the required disk tray controller firmware revision number, see the Sun StorEdge T3 and T3+ Array Release Notes.
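
    You can display the current controller firmware level at the master controller unit's prompt with the ver command:


    t3:/:<#> ver
    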

  7. At the master disk tray's prompt, use the port list command to ensure that each disk tray has a unique target address:


    t3:/:<#> port list
    

    If the disk trays do not have unique target addresses, use the port set command to set the addresses. For the procedure on verifying and assigning a target address to a disk tray, see the Sun StorEdge T3 and T3+ Array Configuration Guide. For more information about the port command, see the Sun StorEdge T3 and T3+ Array Administrator's Guide.
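
    For example, to assign target address 1 to the first port of unit 1 (the port name and target ID shown are placeholders; confirm the exact syntax in the guides cited above):


    t3:/:<#> port set u1p1 targetid 1
    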

  8. At the master disk tray's prompt, use the sys list command to verify that the cache and mirror settings for each disk tray are set to auto:


    t3:/:<#> sys list
    

    If the two settings are not already set to auto, set them using the following commands at each disk tray's prompt:


    t3:/:<#> sys cache auto
    t3:/:<#> sys mirror auto
    

    For more information about the sys command, see the Sun StorEdge T3 and T3+ Array Administrator's Guide.

  9. At the master disk tray's prompt, use the sys list command to verify that the mp_support parameter for each disk tray is set to mpxio:


    t3:/:<#> sys list
    

    If mp_support is not already set to mpxio, set it using the following command at each disk tray's prompt:


    t3:/:<#> sys mp_support mpxio
    

    For more information about the sys command, see the Sun StorEdge T3 and T3+ Array Administrator's Guide.

  10. Configure the new disk trays with the desired logical volumes.

    For the procedure on creating and initializing a logical volume, see the Sun StorEdge T3 and T3+ Array Administrator's Guide. For the procedure on mounting a logical volume, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.
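
    As a sketch, creating, initializing, and mounting a RAID 5 volume might look like the following (the volume name and drive range shown are placeholders; confirm the exact syntax in the guides cited above):


    t3:/:<#> vol add v0 data u1d1-8 raid 5 standby u1d9
    t3:/:<#> vol init v0 data
    t3:/:<#> vol mount v0
    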

  11. Reset the disk trays.

    For the procedure on rebooting or resetting a disk tray, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.
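
    For example, the disk trays can be reset from the master controller unit's prompt:


    t3:/:<#> reset
    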

  12. (Skip this step if you are adding StorEdge T3+ disk trays.) Install the media interface adapter (MIA) in the StorEdge T3 disk trays you are adding, as shown in Figure B-2.

    For the procedure on installing an MIA, see the Sun StorEdge T3 and T3+ Array Configuration Guide.

  13. If necessary, install GBICs in the FC switches, as shown in Figure B-2.


    Note -

    There are no FC switch port-assignment restrictions. You can connect your disk trays and nodes to any FC switch port.


    For the procedure on installing a GBIC to an FC switch, see the Sun StorEdge network FC switch-8 and switch-16 Installation and Configuration Guide.

  14. Install a fiber optic cable between the FC switch and the new disk tray as shown in Figure B-2.

    For the procedure on installing a fiber optic cable, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

  15. Determine the resource groups and device groups running on all nodes.

    Record this information because you will use it in Step 54 of this procedure to return resource groups and device groups to these nodes.


    # scstat
    
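
    To narrow the output, scstat also accepts options that report only the resource groups or only the device groups:


    # scstat -g
    # scstat -D
    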

  16. Move all resource groups and device groups off Node A.


    # scswitch -S -h nodename
    

  17. Do you need to install host adapters in Node A?

    • If not, go to Step 24.

    • If you do need to install host adapters in Node A, continue with Step 18.

  18. Is the host adapter you are installing the first host adapter on Node A?

    • If not, go to Step 20.

    • If it is the first host adapter, use the pkginfo command as shown below to determine whether the required support packages are already installed on this node. The following packages are required:


      # pkginfo | egrep Wlux
      system    SUNWluxd     Sun Enterprise Network Array sf Device Driver
      system    SUNWluxdx    Sun Enterprise Network Array sf Device Driver (64-bit)
      system    SUNWluxl     Sun Enterprise Network Array socal Device Driver
      system    SUNWluxlx    Sun Enterprise Network Array socal Device Driver (64-bit)
      system    SUNWluxop    Sun Enterprise Network Array firmware and utilities
      system    SUNWluxox    Sun Enterprise Network Array libraries (64-bit)

  19. Are the required support packages already installed?

    • If they are already installed, go to Step 20.

    • If not, install the required support packages that are missing.

    The support packages are located in the Product directory of the Solaris CD-ROM. Use the pkgadd command to add any missing packages.


    # pkgadd -d path_to_Solaris/Product Pkg1 Pkg2 Pkg3 ... PkgN
    
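
    For example, if the Solaris CD-ROM is mounted at /cdrom/cdrom0 (the mount point and the Solaris_8 directory name are placeholders for your environment):


    # pkgadd -d /cdrom/cdrom0/Solaris_8/Product SUNWluxd SUNWluxdx \
    SUNWluxl SUNWluxlx SUNWluxop SUNWluxox
    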

  20. Shut down and power off Node A.


    # shutdown -y -g0 -i0
    

    For the procedure on shutting down and powering off a node, see the Sun Cluster 3.0 U1 System Administration Guide.

  21. Install the host adapters in Node A.

    For the procedure on installing host adapters, see the documentation that shipped with your host adapters and nodes.

  22. Power on and boot Node A into non-cluster mode.


    {0} ok boot -x
    

    For more information, see the Sun Cluster 3.0 U1 System Administration Guide.

  23. If necessary, upgrade the host adapter firmware on Node A.

    See the Sun Cluster 3.0 U1 Release Notes for information about accessing Sun's EarlyNotifier web pages, which list information about any required patches or firmware levels that are available for download. For the procedure on applying any host adapter firmware patch, see the firmware patch README file.

  24. If necessary, install GBICs in the FC switches, as shown in Figure B-3.

    For the procedure on installing a GBIC to an FC switch, see the Sun StorEdge network FC switch-8 and switch-16 Installation and Configuration Guide.

  25. Connect fiber optic cables between Node A and the FC switches, as shown in Figure B-3.


    Note -

    There are no FC switch port-assignment restrictions. You can connect your StorEdge T3/T3+ disk tray and node to any FC switch port.


    For the procedure on installing a fiber optic cable, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

    Figure B-3 Adding Sun StorEdge T3/T3+ Disk Trays, Partner-Group Configuration


  26. If necessary, install the required Solaris patches for StorEdge T3/T3+ disk tray support on Node A.

    For a list of required Solaris patches for StorEdge T3/T3+ disk tray support, see the Sun StorEdge T3 and T3+ Array Release Notes. For the procedure on applying any Solaris patch, see the patch README file.
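
    Patches are typically applied with the patchadd command; for example (the patch ID shown is a placeholder):


    # patchadd /var/tmp/111111-01
    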

  27. Install any required patches or software for Sun StorEdge Traffic Manager software support on Node A from the Sun Download Center Web site, http://www.sun.com/storage/san/.

    For instructions on installing the software, see the information on the web site.

  28. Activate the Sun StorEdge Traffic Manager software functionality in the software you installed in Step 27.

    To activate the Sun StorEdge Traffic Manager software functionality, manually edit the /kernel/drv/scsi_vhci.conf file to set the mpxio-disable parameter to no:


    mpxio-disable="no"
    
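
    You can verify the change with grep:


    # grep mpxio-disable /kernel/drv/scsi_vhci.conf
    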

  29. Shut down Node A.


    # shutdown -y -g0 -i0
    

  30. Perform a reconfiguration boot on Node A to create the new Solaris device files and links.


    {0} ok boot -r
    

  31. On Node A, update the /devices and /dev entries:


    # devfsadm -C 
    

  32. On Node A, update the paths to the DID instances:


    # scdidadm -C
    

  33. Label the new disk tray logical volume.

    For the procedure on labeling a logical volume, see the Sun StorEdge T3 and T3+ Array Administrator's Guide.

  34. (Optional) On Node A, verify that the device IDs (DIDs) are assigned to the new disk tray.


    # scdidadm -l
    

  35. Do you need to install host adapters in Node B?

    • If not, go to Step 43.

    • If you do need to install host adapters in Node B, continue with Step 36.

  36. Is the host adapter you are installing the first host adapter on Node B?

    • If not, go to Step 38.

    • If it is the first host adapter, use the pkginfo command as shown below to determine whether the required support packages are already installed on this node. The following packages are required:


    # pkginfo | egrep Wlux
    system    SUNWluxd     Sun Enterprise Network Array sf Device Driver
    system    SUNWluxdx    Sun Enterprise Network Array sf Device Driver (64-bit)
    system    SUNWluxl     Sun Enterprise Network Array socal Device Driver
    system    SUNWluxlx    Sun Enterprise Network Array socal Device Driver (64-bit)
    system    SUNWluxop    Sun Enterprise Network Array firmware and utilities
    system    SUNWluxox    Sun Enterprise Network Array libraries (64-bit)

  37. Are the required support packages already installed?

    • If they are already installed, go to Step 38.

    • If not, install the missing support packages.

    The support packages are located in the Product directory of the Solaris CD-ROM. Use the pkgadd command to add any missing packages.


    # pkgadd -d path_to_Solaris/Product Pkg1 Pkg2 Pkg3 ... PkgN
    

  38. Move all resource groups and device groups off Node B.


    # scswitch -S -h nodename
    

  39. Shut down and power off Node B.


    # shutdown -y -g0 -i0
    

    For the procedure on shutting down and powering off a node, see the Sun Cluster 3.0 U1 System Administration Guide.

  40. Install the host adapters in Node B.

    For the procedure on installing host adapters, see the documentation that shipped with your host adapters and nodes.

  41. Power on and boot Node B into non-cluster mode.


    {0} ok boot -x
    

    For more information, see the Sun Cluster 3.0 U1 System Administration Guide.

  42. If necessary, upgrade the host adapter firmware on Node B.

    See the Sun Cluster 3.0 U1 Release Notes for information about accessing Sun's EarlyNotifier web pages, which list information about any required patches or firmware levels that are available for download. For the procedure on applying any host adapter firmware patch, see the firmware patch README file.

  43. If necessary, install GBICs in the FC switches, as shown in Figure B-4.

    For the procedure on installing a GBIC to an FC switch, see the Sun StorEdge network FC switch-8 and switch-16 Installation and Configuration Guide.

  44. Connect fiber optic cables between the FC switches and Node B as shown in Figure B-4.

    For the procedure on installing fiber optic cables, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

    Figure B-4 Adding Sun StorEdge T3/T3+ Disk Trays, Partner-Group Configuration


  45. If necessary, install the required Solaris patches for StorEdge T3/T3+ disk tray support on Node B.

    For a list of required Solaris patches for StorEdge T3/T3+ disk tray support, see the Sun StorEdge T3 and T3+ Array Release Notes.

  46. Install any required patches or software for Sun StorEdge Traffic Manager software support on Node B from the Sun Download Center Web site, http://www.sun.com/storage/san/.

    For instructions on installing the software, see the information on the web site.

  47. Activate the Sun StorEdge Traffic Manager software functionality in the software you installed in Step 46.

    To activate the Sun StorEdge Traffic Manager software functionality, manually edit the /kernel/drv/scsi_vhci.conf file to set the mpxio-disable parameter to no:


    mpxio-disable="no"
    

  48. Shut down Node B.


    # shutdown -y -g0 -i0
    

  49. Perform a reconfiguration boot to create the new Solaris device files and links on Node B.


    {0} ok boot -r
    

  50. On Node B, update the /devices and /dev entries:


    # devfsadm -C 
    

  51. On Node B, update the paths to the DID instances:


    # scdidadm -C
    

  52. (Optional) On Node B, verify that the DIDs are assigned to the new disk trays:


    # scdidadm -l
    

  53. On one node attached to the new disk trays, reset the SCSI reservation state:


    # scdidadm -R n
    

    Where n is the DID instance of a disk tray LUN you are adding to the cluster.


    Note -

    Repeat this command on the same node for each disk tray LUN you are adding to the cluster.

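    For example, if the new LUNs were assigned DID instances 10 and 11 (the instance numbers shown are placeholders; use the values reported by scdidadm -l):


    # scdidadm -R 10
    # scdidadm -R 11
    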

  54. Return the resource groups and device groups you identified in Step 15 to all nodes.


    # scswitch -z -g resource-group -h nodename
    # scswitch -z -D device-group-name -h nodename
    

    For more information, see the Sun Cluster 3.0 U1 System Administration Guide.
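
    For example, to return a resource group named nfs-rg and a device group named dg-schost-1 to a node named phys-schost-1 (all names shown are placeholders):


    # scswitch -z -g nfs-rg -h phys-schost-1
    # scswitch -z -D dg-schost-1 -h phys-schost-1
    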

  55. Perform volume management administration to incorporate the new logical volumes into the cluster.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.
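
    As a sketch, with Solstice DiskSuite you might place the new LUNs into a disk set by using their DID device names (the disk set name and DID instance numbers shown are placeholders):


    # metaset -s dg-schost-1 -a /dev/did/rdsk/d10 /dev/did/rdsk/d11
    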