Sun Cluster 3.0 12/01 Release Notes Supplement

How to Add StorEdge 3900 or 6900 Series Systems to a Running Cluster


Note -

Use this procedure to add new StorEdge 3900 and 6900 Series systems to a running cluster. To install systems in a new Sun Cluster that is not running, use the procedure in "How to Install StorEdge 3900 and 6900 Series Systems".


This procedure defines "Node A" as the node you begin working with, and "Node B" as the second attached node.

  1. Unpack, place, and level the system cabinet.

    For instructions, see the Sun StorEdge 3900 and 6900 Series Cabinet Installation and Service Manual.

  2. Install cables in the following order.

    1. Install the system power cord.

    2. Install the system grounding strap.

    3. Install the cables from the FC switches to the cluster nodes (see Figure C-1 for an example).

    4. Install the Ethernet cable to the LAN.

    For instructions on cabling, see the Sun StorEdge 3900 and 6900 Series Cabinet Installation and Service Manual.

  3. Power on the new system.


    Note -

    The StorEdge T3+ arrays in your system might take several minutes to boot.


    For instructions, see the Sun StorEdge 3900 and 6900 Series Cabinet Installation and Service Manual.

  4. Set the host name, IP address, date, and timezone for the system's Storage Service Processor.

    For detailed instructions, see the initial field installation instructions in the Sun StorEdge 3900 and 6900 Series Reference Manual.

  5. Remove the preconfigured, default hard zoning from the new system's FC switches.


    Note -

    For StorEdge 3900 Series only: To configure the StorEdge 3900 Series system for use with Sun Cluster host-based mirroring, the default hard zones must be removed from the system's FC switches. See the SANbox-8/16 Switch Management User's Manual for instructions on using the installed SANsurfer interface for removing the preconfigured hard zones from all Sun StorEdge Network FC Switch-8 and Switch-16 switches.


  6. Determine the resource groups and device groups running on all nodes.

    Record this information because you will use it in Step 48 of this procedure to return resource groups and device groups to these nodes.


    # scstat
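
    To limit the output to the information this step needs, you can also run scstat with the -g (resource groups) and -D (device groups) options, for example:

    # scstat -g
    # scstat -D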
    

  7. Move all resource groups and device groups off Node A.


    # scswitch -S -h nodename
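
    For example, if Node A is a hypothetical node named phys-schost-1, the command would be:

    # scswitch -S -h phys-schost-1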
    

  8. Do you need to install host adapters in Node A?

    • If not, go to Step 14.

    • If you do need to install host adapters in Node A, continue with Step 9.

  9. Is the host adapter you are installing the first host adapter on Node A?

    • If not, go to Step 11.

    • If it is the first host adapter, use the pkginfo command as shown below to determine whether the required support packages for the host adapter are already installed on this node. The following packages are required.


      # pkginfo | egrep Wlux
      system   SUNWluxd    Sun Enterprise Network Array sf Device Driver
      system   SUNWluxdx   Sun Enterprise Network Array sf Device Driver (64-bit)
      system   SUNWluxl    Sun Enterprise Network Array socal Device Driver
      system   SUNWluxlx   Sun Enterprise Network Array socal Device Driver (64-bit)
      system   SUNWluxop   Sun Enterprise Network Array firmware and utilities
      system   SUNWluxox   Sun Enterprise Network Array libraries (64-bit)

  10. Are the required support packages already installed?

    • If they are already installed, go to Step 11.

    • If not, install the required support packages that are missing.

    The support packages are located in the Product directory of the Solaris CD-ROM. Use the pkgadd command to add any missing packages.


    # pkgadd -d path_to_Solaris/Product Pkg1 Pkg2 Pkg3 ... PkgN
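
    For example, assuming the Solaris CD-ROM is mounted at /cdrom/cdrom0 and the Product directory is under a Solaris_8 directory (your mount point and directory name might differ), a command similar to the following installs all six packages:

    # pkgadd -d /cdrom/cdrom0/Solaris_8/Product SUNWluxd SUNWluxdx SUNWluxl \
    SUNWluxlx SUNWluxop SUNWluxox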
    

  11. Shut down and power off Node A.


    # shutdown -y -g0 -i0
    

    For the procedure on shutting down and powering off a node, see the Sun Cluster 3.0 12/01 System Administration Guide.

  12. Install the host adapters in Node A.

    For the procedure on installing host adapters, see the documentation that shipped with your host adapters and nodes.

  13. Power on and boot Node A into non-cluster mode.


    {0} ok boot -x
    

    For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.

  14. If necessary, upgrade the host adapter firmware on Node A.

    See the Sun Cluster 3.0 12/01 Release Notes for information about accessing Sun's EarlyNotifier web pages, which list information about any required patches or firmware levels that are available for download. For the procedure on applying any host adapter firmware patch, see the firmware patch README file.

  15. If necessary, install GBICs in the FC switches.

    For the procedure on installing a GBIC to an FC switch, see the SANbox 8/16 Segmented Loop Switch User's Manual.

  16. Connect fiber optic cables between Node A and the FC switches in your StorEdge 3900 or 6900 Series system (see Figure C-1 for an example).

  17. If necessary, install the required Solaris patches for StorEdge T3+ array support on Node A.

    For a list of required Solaris patches for StorEdge T3+ array support, see the Sun StorEdge T3 Disk Tray Release Notes.

  18. Install any required patches or software for Sun StorEdge Traffic Manager software support on Node A from the Sun Download Center Web site, http://www.sun.com/storage/san/

    For instructions on installing the software, see the information on the web site.

  19. Activate the Sun StorEdge Traffic Manager software functionality in the software you installed in Step 18.

    To activate the Sun StorEdge Traffic Manager software functionality, manually edit the installed /kernel/drv/scsi_vhci.conf file and set the mpxio-disable parameter to no:


    mpxio-disable="no"
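
    After you edit the file, you can confirm the new setting, for example:

    # grep mpxio-disable /kernel/drv/scsi_vhci.conf

    The output should include the line with the parameter set to no.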
    

  20. Shut down Node A.


    # shutdown -y -g0 -i0
    

  21. Perform a reconfiguration boot on Node A to create the new Solaris device files and links.


    {0} ok boot -r
    

  22. On all nodes, update the /devices and /dev entries:


    # devfsadm -C 
    # devfsadm
    

  23. On all nodes, update the paths to the DID instances:


    # scdidadm -C 
    # scdidadm -r
    

  24. Are you adding a StorEdge 3900 Series or StorEdge 6900 Series system?

    • If you are adding a StorEdge 3900 Series system, go to Step 26.

    • If you are adding a StorEdge 6900 Series system: On Node A, use the cfgadm command as shown below to view the virtualization engine (VE) controller status and to enable the VE controllers.


      # cfgadm -al
      # cfgadm -c configure <c::controller id>
      

    See the cfgadm(1M) man page for more information about the command and its options.
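
    For example, if cfgadm -al lists a VE controller at the hypothetical attachment point c4::50020f2300003ee5 (your controller ID will differ), you would enable it as follows:

    # cfgadm -c configure c4::50020f2300003ee5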

  25. (Optional) Configure VLUNs on the VEs in the new StorEdge 6900 Series system.

    For instructions on configuring VLUNs in a cluster, see "How to Configure VLUNs on the Virtualization Engines in Your StorEdge 6900 Series System".

  26. (Optional) On Node A, verify that the device IDs (DIDs) were assigned to the new array.


    # scdidadm -l
    

  27. Do you need to install host adapters in Node B?

    • If not, go to Step 34.

    • If you do need to install host adapters in Node B, continue with Step 28.

  28. Is the host adapter you are installing the first host adapter on Node B?

    • If not, go to Step 30.

    • If it is the first host adapter, determine whether the required support packages for the host adapter are already installed on this node. The following packages are required.


    # pkginfo | egrep Wlux
    system   SUNWluxd    Sun Enterprise Network Array sf Device Driver
    system   SUNWluxdx   Sun Enterprise Network Array sf Device Driver (64-bit)
    system   SUNWluxl    Sun Enterprise Network Array socal Device Driver
    system   SUNWluxlx   Sun Enterprise Network Array socal Device Driver (64-bit)
    system   SUNWluxop   Sun Enterprise Network Array firmware and utilities
    system   SUNWluxox   Sun Enterprise Network Array libraries (64-bit)

  29. Are the required support packages already installed?

    • If they are already installed, go to Step 30.

    • If not, install the missing support packages.

    The support packages are located in the Product directory of the Solaris CD-ROM. Use the pkgadd command to add any missing packages.


    # pkgadd -d path_to_Solaris/Product Pkg1 Pkg2 Pkg3 ... PkgN
    

  30. Move all resource groups and device groups off Node B.


    # scswitch -S -h nodename
    

  31. Shut down and power off Node B.


    # shutdown -y -g0 -i0
    

    For the procedure on shutting down and powering off a node, see the Sun Cluster 3.0 12/01 System Administration Guide.

  32. Install the host adapters in Node B.

    For the procedure on installing host adapters, see the documentation that shipped with your host adapters and nodes.

  33. Power on and boot Node B into non-cluster mode.


    {0} ok boot -x
    

    For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.

  34. If necessary, upgrade the host adapter firmware on Node B.

    See the Sun Cluster 3.0 12/01 Release Notes for information about accessing Sun's EarlyNotifier web pages, which list information about any required patches or firmware levels that are available for download. For the procedure on applying any host adapter firmware patch, see the firmware patch README file.

  35. If necessary, install GBICs in the FC switches.

    For the procedure on installing a GBIC to an FC switch, see the SANbox 8/16 Segmented Loop Switch User's Manual.

  36. Connect fiber optic cables between the FC switches in your StorEdge 3900 or 6900 Series system and Node B (see Figure C-1 for an example).

  37. If necessary, install the required Solaris patches for StorEdge T3+ array support on Node B.

    For a list of required Solaris patches for StorEdge T3+ array support, see the Sun StorEdge T3 Disk Tray Release Notes.

  38. Install any required patches or software for Sun StorEdge Traffic Manager software support on Node B from the Sun Download Center Web site, http://www.sun.com/storage/san/

    For instructions on installing the software, see the information on the web site.

  39. Activate the Sun StorEdge Traffic Manager software functionality in the software you installed in Step 38.

    To activate the Sun StorEdge Traffic Manager software functionality, manually edit the installed /kernel/drv/scsi_vhci.conf file and set the mpxio-disable parameter to no:


    mpxio-disable="no"
    

  40. Shut down Node B.


    # shutdown -y -g0 -i0
    

  41. Perform a reconfiguration boot to create the new Solaris device files and links on Node B.


    {0} ok boot -r
    

  42. On all nodes, update the /devices and /dev entries:


    # devfsadm -C 
    # devfsadm
    

  43. On all nodes, update the paths to the DID instances:


    # scdidadm -C 
    # scdidadm -r
    

  44. Are you adding a StorEdge 3900 Series or StorEdge 6900 Series system?

    • If you are adding a StorEdge 3900 Series system, go to Step 46.

    • If you are adding a StorEdge 6900 Series system: On Node B, use the cfgadm command as shown below to view the virtualization engine (VE) controller status and to enable the VE controllers.


      # cfgadm -al
      # cfgadm -c configure <c::controller id>
      

    See the cfgadm(1M) man page for more information about the command and its options.

  45. (Optional) Configure VLUNs on the VEs in the new StorEdge 6900 Series system.

    For instructions on configuring VLUNs in a cluster, see "How to Configure VLUNs on the Virtualization Engines in Your StorEdge 6900 Series System".

  46. (Optional) On Node B, verify that the DIDs are assigned to the new arrays:


    # scdidadm -l
    

  47. On one node attached to the new arrays, reset the SCSI reservation state:


    # scdidadm -R n
    

    Where n is the DID instance of an array LUN you are adding to the cluster.


    Note -

    Repeat this command on the same node for each array LUN you are adding to the cluster.
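
    For example, if the new LUNs were assigned the hypothetical DID instances 6, 7, and 8 (use the instance numbers that scdidadm -l reports for your configuration), you would run:

    # scdidadm -R 6
    # scdidadm -R 7
    # scdidadm -R 8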


  48. Return the resource groups and device groups you identified in Step 6 to all nodes.


    # scswitch -z -g resource-group -h nodename
    # scswitch -z -D device-group-name -h nodename
    

    For more information, see the Sun Cluster 3.0 12/01 System Administration Guide.
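
    For example, with a hypothetical resource group nfs-rg, a hypothetical device group dg-schost-1, and a node named phys-schost-1, the commands would be:

    # scswitch -z -g nfs-rg -h phys-schost-1
    # scswitch -z -D dg-schost-1 -h phys-schost-1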

  49. Perform volume management administration to incorporate the new logical volumes into the cluster.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.
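
    As one illustration only, if you use Solstice DiskSuite, commands similar to the following create a hypothetical disk set named nfs-ds, add the cluster nodes as hosts, and then add new DID devices to the set (the set name, node names, and DID devices shown here are examples; see your volume manager documentation for the complete procedure):

    # metaset -s nfs-ds -a -h phys-schost-1 phys-schost-2
    # metaset -s nfs-ds -a /dev/did/rdsk/d6 /dev/did/rdsk/d7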