Sun Cluster 3.0-3.1 With Sun StorEdge T3 or T3+ Array Manual for Solaris OS

Procedure: How to Add a Storage Array to an Existing Cluster, Using a Partner-Group Configuration

This procedure contains instructions for adding new storage array partner groups to a running cluster. Other array-installation situations are covered in separate procedures.

This procedure defines Node A as the node that you begin working with. Node B is the second node.

Steps
  1. Set up a Reverse Address Resolution Protocol (RARP) server on the network on which you want the new storage arrays to reside. Afterward, assign an IP address to the new storage arrays.


    Note –

    Assign an IP address to the master controller unit only. The master controller unit is the storage array that has the interconnect cables attached to the second port of each interconnect card, as shown in Figure 1–6.


    This RARP server enables you to assign an IP address to the new storage arrays. Assign an IP address by using the storage array's unique MAC address. For the procedure about how to set up a RARP server, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.
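
    For example, on a Solaris host on the same subnet, a RARP setup can be sketched as follows: add the master controller unit's MAC address to /etc/ethers and its IP address to /etc/hosts, then start the RARP daemon. The MAC address, IP address, and host name shown are placeholders for your site's values.

    # echo "0:20:f2:0:xx:xx t3-master" >> /etc/ethers
    # echo "192.168.100.10 t3-master" >> /etc/hosts
    # /usr/sbin/in.rarpd -a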

  2. Install the Ethernet cable between the storage arrays and the local area network (LAN), as shown in Figure 1–6.

  3. If not already installed, install interconnect cables between the two storage arrays of each partner group, as shown in Figure 1–6.

    For the procedure about how to install interconnect cables, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

    Figure 1–6 Adding a Partner-Group Configuration: Part I

    Illustration: The preceding context describes the graphic.

  4. Power on the storage arrays.


    Note –

    The storage arrays might require several minutes to boot.


    For the procedure about how to power on storage arrays, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

  5. Administer the storage arrays' network addresses and settings.

    Use the telnet command to access the master controller unit and administer the storage arrays.

    For the procedure about how to administer the network address and the settings of a storage array, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.
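
    For example, a session to set the network parameters might resemble the following sketch. The host name and addresses are placeholders, and the exact command set depends on your controller firmware revision.

    # telnet t3-master
    t3-master:/: set ip 192.168.100.10
    t3-master:/: set netmask 255.255.255.0
    t3-master:/: set gateway 192.168.100.1
    t3-master:/: set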

  6. Install any required storage array controller firmware upgrades.

    For partner-group configurations, use the telnet command to connect to the master controller unit. If necessary, install the required controller firmware for the storage array.

    For the required revision number of the storage array controller firmware, see the Sun StorEdge T3 Disk Tray Release Notes.
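
    From the same telnet session, you can display the installed controller firmware level before and after the upgrade, for example:

    t3-master:/: ver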

  7. Ensure that each storage array has a unique target address.

    For the procedure about how to verify and assign a target address to a storage array, see the Sun StorEdge T3 and T3+ Array Configuration Guide.
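
    On the storage array itself, the port list command displays the current target IDs, and the port set command changes them. In this sketch the port name and target ID are placeholders:

    t3-master:/: port list
    t3-master:/: port set u1p1 targetid 2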

  8. Ensure that the cache and mirror settings for each storage array are set to auto.

    For more information, see the Sun StorEdge T3 and T3+ Array Administrator's Guide.
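
    From the telnet session, a sketch of setting and then verifying these values:

    t3-master:/: sys cache auto
    t3-master:/: sys mirror auto
    t3-master:/: sys list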

  9. Ensure that the mp_support parameter for each storage array is set to mpxio.

    For more information, see the Sun StorEdge T3 and T3+ Array Administrator's Guide.
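
    Similarly, from the telnet session:

    t3-master:/: sys mp_support mpxio
    t3-master:/: sys list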

  10. Configure the new storage arrays with the desired logical volumes.

    For the procedure about how to create and initialize a logical volume, see the Sun StorEdge T3 and T3+ Array Administrator's Guide. For the procedure about how to mount a logical volume, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.
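
    As an illustrative sketch only, the following session creates, initializes, and mounts a single RAID 5 volume across drives 1 through 9 of unit 1. The volume name and layout are examples, not a recommendation, and volume initialization can take a significant amount of time.

    t3-master:/: vol add v0 data u1d1-9 raid 5
    t3-master:/: vol init v0 data
    t3-master:/: vol mount v0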

  11. Reset the storage arrays.

    For the procedure about how to reboot or reset a storage array, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.
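
    From the telnet session, for example (the -y option, if your firmware revision supports it, answers the confirmation prompt; the reset disconnects the session while the array reboots):

    t3-master:/: reset -y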

  12. Are you adding Sun StorEdge T3 arrays?

    • If no, proceed to Step 13.

    • If yes, install the media interface adapter (MIA) in the Sun StorEdge T3 arrays that you are adding, as shown in Figure 1–6.

      For the procedure about how to install an MIA, see the Sun StorEdge T3 and T3+ Array Configuration Guide.

  13. If necessary, install GBICs or SFPs in the FC switches, as shown in Figure 1–6.

    For the procedure about how to install a GBIC or an SFP to an FC switch, see the documentation that shipped with your FC switch hardware.

  14. Install a fiber-optic cable between each FC switch and both new storage arrays of the partner group, as shown in Figure 1–6.

    For the procedure about how to install a fiber-optic cable, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.


    Note –

    If you are using two FC switches and Sun SAN software to create a storage area network (SAN), see SAN Considerations for more information.


  15. Determine the resource groups and device groups that are running on all nodes.

    Record this information; you use it in Step 48 of this procedure to return the resource groups and device groups to these nodes.


    # scstat
    
  16. Move all resource groups and device groups off Node A.


    # scswitch -S -h from-node

    Where from-node is the name of Node A.

  17. Do you need to install host adapters in Node A?

    • If no, skip to Step 24.

    • If yes, proceed to Step 18.

  18. Is the host adapter that you are installing the first host adapter on Node A?

    • If no, skip to Step 20.

    • If yes, determine whether the required support packages are already installed on this node. The following packages are required.


      # pkginfo | egrep Wlux
      system  SUNWluxd   Sun Enterprise Network Array sf Device Driver
      system  SUNWluxdx  Sun Enterprise Network Array sf Device Driver (64-bit)
      system  SUNWluxl   Sun Enterprise Network Array socal Device Driver
      system  SUNWluxlx  Sun Enterprise Network Array socal Device Driver (64-bit)
      system  SUNWluxop  Sun Enterprise Network Array firmware and utilities
      system  SUNWluxox  Sun Enterprise Network Array libraries (64-bit)
  19. Are the required support packages already installed?

    • If yes, proceed to Step 20.

    • If no, install the required support packages that are missing.

    The support packages are located in the Product directory of the Solaris CD-ROM. Add any missing packages.
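
    For example, assuming the CD-ROM is mounted at /cdrom/cdrom0 and using a Solaris 8 directory name as a placeholder for your Solaris release:

    # cd /cdrom/cdrom0/Solaris_8/Product
    # pkgadd -d . SUNWluxd SUNWluxdx SUNWluxl SUNWluxlx SUNWluxop SUNWluxox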

  20. Shut down and power off Node A.

    For the procedure about how to shut down and power off a node, see your Sun Cluster system administration documentation.
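
    With the resource groups and device groups already moved off the node in Step 16, this typically amounts to a command such as:

    # shutdown -y -g0 -i0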

  21. Install the host adapters in Node A.

    For the procedure about how to install host adapters, see the documentation that shipped with your host adapters and nodes.

  22. Power on and boot Node A into noncluster mode.

    For more information about how to boot nodes, see your Sun Cluster system administration documentation.
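
    Noncluster mode is typically entered with the -x flag at the OpenBoot PROM prompt, for example:

    ok boot -x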

  23. If necessary, upgrade the host adapter firmware on Node A.

    PatchPro is a patch-management tool that eases the selection and download of patches required for installation or maintenance of Sun Cluster software. PatchPro provides an Interactive Mode tool especially for Sun Cluster. The Interactive Tool makes the installation of patches easier. PatchPro's Expert Mode tool helps you to maintain your configuration with the latest set of patches. Expert Mode is especially useful for obtaining all of the latest patches, not just the high availability and security patches.

    To access the PatchPro tool for Sun Cluster software, go to http://www.sun.com/PatchPro/, click Sun Cluster, then choose either Interactive Mode or Expert Mode. Follow the instructions in the PatchPro tool to describe your cluster configuration and download the patches.

    For third-party firmware patches, see the SunSolve Online site at http://sunsolve.ebay.sun.com.

  24. Connect fiber-optic cables between Node A and the FC switches, as shown in Figure 1–7.

    For the procedure about how to install a fiber-optic cable, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.


    Note –

    If you are using two FC switches and Sun SAN software to create a storage area network (SAN), see SAN Considerations for more information.


    Figure 1–7 Adding a Partner-Group Configuration: Part II

    Illustration: The preceding context describes the graphic.

  25. If necessary, install the required Solaris patches for storage array support on Node A.

    For a list of required Solaris patches for storage array support, see the Sun StorEdge T3 Disk Tray Release Notes. You can use the PatchPro tool, as described in Step 23, to locate and download the patches.

  26. Shut down Node A.

    For the procedure about how to shut down and power off a node, see your Sun Cluster system administration documentation.

  27. Perform a reconfiguration boot on Node A to create the new Solaris device files and links.


    ok boot -r
    
  28. On Node A, update the /devices and /dev entries.


    # devfsadm -C 
    
  29. On Node A, update the paths to the DID instances.


    # scdidadm -C
    
  30. Label the new storage array logical volume.

    For the procedure about how to label a logical volume, see the Sun StorEdge T3 and T3+ Array Administrator's Guide.

  31. (Optional) On Node A, verify that the device IDs (DIDs) are assigned to the new LUNs.


    # scdidadm -l
    
  32. Do you need to install host adapters in Node B?

    • If no, skip to Step 39.

    • If yes, proceed to Step 33.

  33. Is the host adapter that you are installing the first host adapter on Node B?

    • If no, skip to Step 35.

    • If yes, determine whether the required support packages for host adapters are already installed on this node. The following packages are required.


      # pkginfo | egrep Wlux
      system  SUNWluxd   Sun Enterprise Network Array sf Device Driver
      system  SUNWluxdx  Sun Enterprise Network Array sf Device Driver (64-bit)
      system  SUNWluxl   Sun Enterprise Network Array socal Device Driver
      system  SUNWluxlx  Sun Enterprise Network Array socal Device Driver (64-bit)
      system  SUNWluxop  Sun Enterprise Network Array firmware and utilities
      system  SUNWluxox  Sun Enterprise Network Array libraries (64-bit)
  34. Are the required support packages already installed?

    • If yes, skip to Step 35.

    • If no, install the missing support packages.

    The support packages are located in the Product directory of the Solaris CD-ROM. Add any missing packages.

  35. Shut down and power off Node B.

    For the procedure about how to shut down and power off a node, see your Sun Cluster system administration documentation.

  36. Install the host adapters in Node B.

    For the procedure about how to install host adapters, see the documentation that shipped with your host adapters and nodes.

  37. Power on and boot Node B into noncluster mode.

    For more information on booting nodes, see your Sun Cluster system administration documentation.

  38. If necessary, upgrade the host adapter firmware on Node B.

    Use the PatchPro tool, as described in Step 23, to locate and download the required patches. For third-party firmware patches, see the SunSolve Online site at http://sunsolve.ebay.sun.com.

  39. If necessary, install GBICs or SFPs in the FC switches, as shown in Figure 1–8.

    For the procedure about how to install GBICs or SFPs to an FC switch, see the documentation that shipped with your FC switch hardware.

  40. Connect fiber-optic cables between the FC switches and Node B, as shown in Figure 1–8.

    For the procedure about how to install fiber-optic cables, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.


    Note –

    If you are using two FC switches and Sun SAN software to create a storage area network (SAN), see SAN Considerations for more information.


    Figure 1–8 Adding a Partner-Group Configuration: Part III

    Illustration: The preceding context describes the graphic.

  41. If necessary, install the required Solaris patches for storage array support on Node B.

    For a list of required Solaris patches for storage array support, see the Sun StorEdge T3 Disk Tray Release Notes.

  42. Shut down Node B.

    For the procedure about how to shut down and power off a node, see your Sun Cluster system administration documentation.

  43. Perform a reconfiguration boot to create the new Solaris device files and links on Node B.


    ok boot -r
    
  44. On Node B, update the /devices and /dev entries.


    # devfsadm -C 
    
  45. On Node B, update the paths to the DID instances.


    # scdidadm -C
    
  46. (Optional) On Node B, verify that the DIDs are assigned to the new LUNs.


    # scdidadm -l
    
  47. On one node that is attached to the new storage arrays, reset the SCSI reservation state.


    # scdidadm -R n
    

    Where n is the DID instance of a storage array LUN that you are adding to the cluster.


    Note –

    Repeat this command on the same node for each storage array LUN that you are adding to the cluster.
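
    If you added several LUNs, you can script the repetition in the shell. The DID instance numbers in this sketch are placeholders for the values that the scdidadm -l output reports:

    # for n in 15 16 17 18; do scdidadm -R $n; done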


  48. Return the resource groups and device groups that you identified in Step 15 to all nodes.


    # scswitch -z -g resource-group -h nodename
    # scswitch -z -D device-group-name -h nodename
    

    For more information, see your Sun Cluster system administration documentation.

  49. Perform volume management administration to incorporate the new logical volumes into the cluster.

    For more information, see your Solstice DiskSuite or VERITAS Volume Manager documentation.

Next Steps

The best way to enable multipathing for a cluster is to install the multipathing software and enable multipathing before installing the Sun Cluster software and establishing the cluster. For this procedure, see How to Install Sun Multipathing Software in Sun Cluster Software Installation Guide for Solaris OS. If you need to add multipathing software to an established cluster, see How to Install Sun Multipathing Software in Sun Cluster Software Installation Guide for Solaris OS and follow the troubleshooting steps to clean up the device IDs.