Sun Cluster 3.1 - 3.2 With Sun StorEdge T3 or T3+ Array Manual for Solaris OS

Procedure: How to Add a Storage Array to an Existing Cluster, Using a Partner-Group Configuration

This procedure contains instructions for adding new storage array partner groups to a running cluster. Instructions for other array-installation situations are provided in separate procedures.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

  1. Become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.

  2. Set up a Reverse Address Resolution Protocol (RARP) server on the network you want the new storage arrays to reside on. Afterward, assign an IP address to the new storage arrays.


    Note –

    Assign an IP address to the master controller unit only. The master controller unit is the storage array that has the interconnect cables attached to the second port of each interconnect card, as shown in Figure 1–6.


    This RARP server enables you to assign an IP address to the new storage arrays. Assign an IP address by using the storage array's unique MAC address. To set up a RARP server, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.
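
    For example, a minimal sketch of the RARP server setup on a Solaris host on the same subnet follows. The MAC address, host name, and IP address are placeholders; substitute the values for your site and storage array. The in.rarpd -a command starts the RARP daemon on all network interfaces.

    # echo "0:20:f2:xx:xx:xx t3-master" >> /etc/ethers
    # echo "192.168.1.10 t3-master" >> /etc/hosts
    # /usr/sbin/in.rarpd -a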

  3. Install the  Ethernet cable between the storage arrays and the local area network (LAN), as shown in Figure 1–6.

  4. If they are not already installed, install interconnect cables between the two storage arrays of each partner group, as shown in Figure 1–6.

    To install interconnect cables, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

    Figure 1–6 Adding a Partner-Group Configuration: Part I


  5. Power on the storage arrays.


    Note –

    The storage arrays might require several minutes to boot.


    To power on storage arrays, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

  6. Administer the storage arrays' network addresses and settings.

    Use the telnet command to access the master controller unit and to administer the storage arrays.

    To administer the network address and the settings of a storage array, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.
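
    For example, after you use the telnet command to connect to the master controller unit, the network parameters can typically be set with the array's set command. The prompt, host name, and addresses shown here are placeholders; see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual for the exact syntax that your firmware revision supports.

    # telnet t3-master
    t3-master:/:<1> set ip 192.168.1.10
    t3-master:/:<2> set netmask 255.255.255.0
    t3-master:/:<3> set gateway 192.168.1.1
    t3-master:/:<4> set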

  7. Install any required storage array controller firmware upgrades.

    For partner-group configurations, use the telnet command to connect to the master controller unit. If necessary, install the required controller firmware for the storage array.

    For the required revision number of the storage array controller firmware, see the Sun StorEdge T3 Disk Tray Release Notes.
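
    For example, the installed controller firmware release can typically be checked from the telnet session with the ver command, and the revision levels of the individual FRUs with the fru list command, before you decide whether an upgrade is required.

    t3-master:/:<1> ver
    t3-master:/:<2> fru list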

  8. Ensure that each storage array has a unique target address.

    To verify and assign a target address to a storage array, see the Sun StorEdge T3 and T3+ Array Configuration Guide.
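
    For example, the currently assigned target addresses can typically be displayed with the port list command, and a conflicting target address can be changed with the port set command. The port name and target ID shown here are placeholders; confirm the syntax in the Sun StorEdge T3 and T3+ Array Configuration Guide.

    t3-master:/:<1> port list
    t3-master:/:<2> port set u1p1 targetid 2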

  9. Ensure that the cache and mirror settings for each storage array are set to auto.

    For more information, see the Sun StorEdge T3 and T3+ Array Administrator's Guide.
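
    For example, a sketch of the commands that set both parameters to auto from the telnet session follows.

    t3-master:/:<1> sys cache auto
    t3-master:/:<2> sys mirror auto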

  10. Ensure that the mp_support parameter for each storage array is set to mpxio.

    For more information, see the Sun StorEdge T3 and T3+ Array Administrator's Guide.
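
    For example, a sketch of the command that sets this parameter, followed by sys list to verify the cache, mirror, and mp_support settings from the preceding steps:

    t3-master:/:<1> sys mp_support mpxio
    t3-master:/:<2> sys list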

  11. Configure the new storage arrays with the desired logical volumes.

    To create and initialize a logical volume, see the Sun StorEdge T3 and T3+ Array Administrator's Guide. To mount a logical volume, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.
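
    For example, a sketch of creating, initializing, and mounting a single RAID 5 volume with a hot-spare drive follows. The volume name, drive range, and RAID level are placeholders; choose a layout that suits your site and confirm the vol syntax in the manuals cited above.

    t3-master:/:<1> vol add v0 data u1d1-8 raid 5 standby u1d9
    t3-master:/:<2> vol init v0 data
    t3-master:/:<3> vol mount v0
    t3-master:/:<4> vol list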

  12. Reset the storage arrays.

    To reboot or reset a storage array, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.
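
    For example, the arrays can typically be reset from the telnet session with the reset command; the -y option, where your firmware supports it, suppresses the confirmation prompt. Expect the telnet session to drop and the arrays to take several minutes to come back online.

    t3-master:/:<1> reset -y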

  13. If you are adding Sun StorEdge T3 arrays, install the media interface adapter (MIA) in the Sun StorEdge T3 arrays that you are adding, as shown in Figure 1–6. Sun StorEdge T3+ arrays do not require an MIA.

    To install an MIA, see the Sun StorEdge T3 and T3+ Array Configuration Guide.

  14. If necessary, install GBICs or SFPs in the FC switches, as shown in Figure 1–6.

    To install a GBIC or an SFP to an FC switch, see the documentation that shipped with your FC switch hardware.

  15. Install a fiber-optic cable between each FC switch and both new storage arrays of the partner group, as shown in Figure 1–6.

    To install a fiber-optic cable, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.


    Note –

    If you are using two FC switches and Sun SAN software to create a storage area network (SAN), see SAN Considerations for more information.


  16. Determine the resource groups and device groups that are running on all nodes.

    Record this information because you use it in Step 30 of this procedure to return resource groups and device groups to these nodes.

    • If you are using Sun Cluster 3.2, use the following commands:


      # clresourcegroup status +
      # cldevicegroup status +
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scstat
      
  17. Move all resource groups and device groups off each node in the cluster.

    • If you are using Sun Cluster 3.2, on each node use the following command:


      # clnode evacuate from-node
      
    • If you are using Sun Cluster 3.1, on each node use the following command:


      # scswitch -S -h from-node
      
  18. If you need to install host adapters in the node, and if the host adapter you are installing is the first adapter on the node, determine whether the required support packages are already installed on this node.

    The following packages are required.


    # pkginfo | egrep Wlux
    system    SUNWluxd     Sun Enterprise Network Array sf Device Driver
    system    SUNWluxdx    Sun Enterprise Network Array sf Device Driver (64-bit)
    system    SUNWluxl     Sun Enterprise Network Array socal Device Driver
    system    SUNWluxlx    Sun Enterprise Network Array socal Device Driver (64-bit)
    system    SUNWluxop    Sun Enterprise Network Array firmware and utilities
    system    SUNWluxox    Sun Enterprise Network Array libraries (64-bit)

    If this is not the first host adapter on the node, skip to Step 20.

  19. If the required support packages are not present, install them.

    The support packages are located in the Product directory of the Solaris DVD. Add any missing packages.
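
    For example, the missing packages can be added with the pkgadd command. The mount point and release directory shown here are typical values only; adjust them for your Solaris release and DVD mount point.

    # pkgadd -d /cdrom/cdrom0/Solaris_10/Product SUNWluxd SUNWluxdx \
      SUNWluxl SUNWluxlx SUNWluxop SUNWluxox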

  20. If you need to install host adapters in the node, shut down and power off the node, and then install them in the node.

    To shut down and power off a node, see your Sun Cluster system administration documentation.

    To install host adapters, see the documentation that shipped with your host adapters and nodes.

  21. If you installed host adapters in the node, power on and boot the node into noncluster mode.

    To boot nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.
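
    For example, from the OpenBoot PROM prompt, the -x option boots a node in noncluster mode.

    ok boot -x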

  22. If you installed host adapters in the node, and if necessary, upgrade the host adapter firmware on the node.

  23. If necessary, install the required Solaris patches for storage array support on the node.

    If you use the Solaris 8, Solaris 9, or Solaris 10 Operating System, Sun Connection Update Manager keeps you informed of the latest versions of patches and features. Using notifications and intelligent needs-based updating, Sun Connection helps improve operational efficiency and ensures that you have the latest software patches for your Sun software.

    You can download the Sun Connection Update Manager product for free by going to http://www.sun.com/download/products.xml?id=4457d96d.

    Additional information for using the Sun patch management tools is provided in Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Solaris OS release that you have installed.

    If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Sun Cluster System Administration Guide for Solaris OS to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.

    For a list of patches that affect Sun Cluster, see the Sun Cluster Wiki Patch Klatch.

    For required firmware, see the Sun System Handbook.
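
    For example, a downloaded patch can be applied with the patchadd command. The patch ID and staging directory shown here are placeholders only.

    # patchadd /var/tmp/123456-01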

  24. If you installed host adapters in the node, reboot the node in cluster mode.
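
    For example, an orderly reboot returns the node to cluster mode, because an installed cluster node boots into cluster mode by default.

    # shutdown -g0 -y -i6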

  25. Connect fiber-optic cables between the node and the FC switches, as shown in Figure 1–7.

    To install a fiber-optic cable, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.


    Note –

    If you are using two FC switches and Sun SAN software to create a storage area network (SAN), see SAN Considerations for more information.


    Figure 1–7 Adding a Partner-Group Configuration: Part II


  26. On the current node, update the /devices and /dev entries.


    # devfsadm
    
  27. From any node in the cluster, update the global device namespace.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevice populate
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scgdevs
      
  28. Label the new storage array logical volume.

    To label a logical volume, see the Sun StorEdge T3 and T3+ Array Administrator's Guide.

  29. (Optional) On the current node, verify that the device IDs (DIDs) are assigned to the new LUNs.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevice list -n CurrentNode -v
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scdidadm -l
      
  30. (Optional) Restore the resource groups to the original node.

    Do the following for each resource group that you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # clresourcegroup switch -n nodename resourcegroup1[ resourcegroup2 …]
      
      nodename

      For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.

      resourcegroup1[ resourcegroup2 …]

      The resource group or groups that you are returning to the node or nodes.

    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -g resourcegroup -h nodename
      
  31. For each of the other nodes in the cluster, repeat Step 17 through Step 30.

  32. Perform volume management administration to incorporate the new logical volumes into the cluster.

    For more information, see your Solstice DiskSuite or Veritas Volume Manager documentation.
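
    For example, with Solstice DiskSuite/Solaris Volume Manager, a sketch of placing a new LUN into a shared disk set and building a metadevice on it follows. The disk set name, node names, DID device, and metadevice name are placeholders; adapt them to the output of the cldevice list or scdidadm command from Step 29.

    # metaset -s t3set -a -h node1 node2
    # metaset -s t3set -a /dev/did/rdsk/d10
    # metainit -s t3set d100 1 1 /dev/did/rdsk/d10s0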