Sun Cluster 3.1 - 3.2 With Sun StorEdge T3 or T3+ Array Manual for Solaris OS

Procedure: How to Add a Storage Array to an Existing Cluster, Using a Single-Controller Configuration

This procedure contains instructions for adding a new storage array to a running cluster in a single-controller configuration. Other array-installation situations are covered by separate procedures.

This procedure defines Node A as the node with which you begin working. Node B is another node in the cluster.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

  1. Become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify role-based access control (RBAC) authorization.

  2. Set up a Reverse Address Resolution Protocol (RARP) server on the network on which you want the new storage arrays to be located.

  3. Assign an IP address to the new storage arrays.

    This RARP server enables you to assign an IP address to the new storage array by using the storage array's unique MAC address.

    To set up a RARP server, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.
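
    For example, on a Solaris host that acts as the RARP server, the array's MAC address is typically mapped to a host name in /etc/ethers, the host name is mapped to an IP address in /etc/hosts, and the in.rarpd daemon answers the array's RARP request. The following minimal sketch uses a placeholder MAC address, host name, and IP address; substitute your own values, and ensure that /etc/nsswitch.conf lists files for the ethers and hosts databases:


    # echo "0:20:f2:0:3e:4d   t3-array-1" >> /etc/ethers
    # echo "192.168.1.50   t3-array-1" >> /etc/hosts
    # /usr/sbin/in.rarpd -a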

  4. If you are adding a StorEdge T3 array, install the media interface adapter (MIA) in the storage array that you are adding, as shown in Figure 1–3.

    If you are adding a StorEdge T3+ array, skip this step. The StorEdge T3+ array does not require an MIA.

    To install a media interface adapter (MIA), see the Sun StorEdge T3 and T3+ Array Configuration Guide.

  5. If necessary, install gigabit interface converters (GBICs) or Small Form-Factor Pluggables (SFPs) in the FC hub/switch, as shown in Figure 1–3.

    The GBICs or SFPs enable you to connect the FC hub/switch to the storage array that you are adding.

    To install an FC hub/switch GBIC or an SFP, see the documentation that shipped with your FC hub/switch hardware.


    Note –

    If you are using two FC switches and Sun SAN software to create a storage area network (SAN), see SAN Solutions in a Sun Cluster Environment in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS for more information.


  6. Install the Ethernet cable between the storage array and the Local Area Network (LAN), as shown in Figure 1–3.

  7. Power on the storage array.


    Note –

    The storage array might require a few minutes to boot.


    To power on a storage array, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

  8. Access the storage array that you are adding. If necessary, install the required controller firmware for the storage array.

    If you use the Solaris 8, Solaris 9, or Solaris 10 Operating System, Sun Connection Update Manager keeps you informed of the latest versions of patches and features. Using notifications and intelligent needs-based updating, Sun Connection helps improve operational efficiency and ensures that you have the latest software patches for your Sun software.

    You can download the Sun Connection Update Manager product for free by going to http://www.sun.com/download/products.xml?id=4457d96d.

    Additional information for using the Sun patch management tools is provided in Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Solaris OS release that you have installed.

    If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Sun Cluster System Administration Guide for Solaris OS to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.

    For a list of patches that affect Sun Cluster, see the Sun Cluster Wiki Patch Klatch.

    For required firmware, see the Sun System Handbook.

  9. If this new storage array does not yet have a unique target address, change the target address for this new storage array.

    If the target address for this array is already unique, skip this step.

    To verify and assign a target address, see the Sun StorEdge T3 and T3+ Array Configuration Guide.
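
    For example, you can check the current target address from the array's command-line interface, which you reach through a telnet session to the array. The following session is a hypothetical sketch; the prompt, port name, and target ID are placeholders, and the authoritative port command syntax is in the Sun StorEdge T3 and T3+ Array Configuration Guide:


    t3-array-1:/:<1> port list
    t3-array-1:/:<2> port set u1p1 targetid 2

    The first command displays the current port-to-target mapping. The second command assigns a new, unique target ID to the host port.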

  10. Install a fiber-optic cable between the FC hub/switch and the storage array, as shown in Figure 1–3.

    To install a fiber-optic cable, see the Sun StorEdge T3 and T3+ Array Configuration Guide.

    Figure 1–3 Adding a Single-Controller Configuration: Part I



    Note –

    Figure 1–3 shows how to cable two storage arrays to enable data sharing and host-based mirroring. This configuration avoids a single point of failure.


  11. Configure the new storage array.

    To create a logical volume, see the Sun StorEdge T3 and T3+ Array Administrator's Guide.
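
    As an illustration only, the following hypothetical session creates, initializes, and mounts a RAID 5 volume that spans eight data drives with one hot spare. The volume name and drive layout are placeholders; the authoritative vol command syntax is in the Sun StorEdge T3 and T3+ Array Administrator's Guide:


    t3-array-1:/:<1> vol add v0 data u1d1-8 raid 5 standby u1d9
    t3-array-1:/:<2> vol init v0 data
    t3-array-1:/:<3> vol mount v0
    t3-array-1:/:<4> vol list

    Volume initialization can take a significant amount of time. The vol list command confirms that the volume is mounted after initialization completes.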

  12. Determine the resource groups and device groups that are running on Node A and Node B.

    Record this information because you will use it in Step 40 and Step 41 of this procedure to return resource groups and device groups to these nodes.

    • If you are using Sun Cluster 3.2, use the following commands:


      # clresourcegroup status + 
      # cldevicegroup status +
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scstat
      
  13. Move all resource groups and device groups off Node A.

    • If you are using Sun Cluster 3.2, use the following command:


      # clnode evacuate from-node
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -S -h from-node
      
  14. If you need to install a host adapter in Node A, and if it is the first FC host adapter on Node A, determine whether the Fibre Channel support packages are already installed on this node.

    This product requires the following packages.


    # pkginfo | egrep Wlux
    system  SUNWluxd    Sun Enterprise Network Array sf Device Driver
    system  SUNWluxdx   Sun Enterprise Network Array sf Device Driver (64-bit)
    system  SUNWluxl    Sun Enterprise Network Array socal Device Driver
    system  SUNWluxlx   Sun Enterprise Network Array socal Device Driver (64-bit)
    system  SUNWluxop   Sun Enterprise Network Array firmware and utilities

    If this is not the first FC host adapter on Node A, skip to Step 16. If you do not need to install a host adapter in Node A, skip to Step 21.

  15. If the Fibre Channel support packages are not installed, install the required support packages that are missing.

    The storage array packages are located in the Product directory of the Solaris DVD. Add any necessary packages.
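
    For example, assuming that the Solaris 10 DVD is mounted at /cdrom/cdrom0, a pkgadd invocation similar to the following installs the missing packages. Adjust the Product directory path for your Solaris release:


    # cd /cdrom/cdrom0/Solaris_10/Product
    # pkgadd -d . SUNWluxd SUNWluxdx SUNWluxl SUNWluxlx SUNWluxop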

  16. Shut down Node A.

    To shut down and power off a node, see your Sun Cluster system administration documentation.
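
    For example, after the resource groups and device groups have been moved off the node, a command similar to the following shuts the node down to the OpenBoot PROM prompt:


    # shutdown -y -g0 -i0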

  17. Power off Node A.

  18. Install the host adapter in Node A.

    To install a host adapter, see the documentation that shipped with your host adapter and node.

  19. If necessary, power on and boot Node A into noncluster mode by adding -x to your boot instruction.

    To boot a node in noncluster mode, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.
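
    For example, on a SPARC based system, type the following at the OpenBoot PROM prompt:


    ok boot -x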

  20. If necessary, upgrade the host adapter firmware on Node A.

    If you use the Solaris 8, Solaris 9, or Solaris 10 Operating System, Sun Connection Update Manager keeps you informed of the latest versions of patches and features. Using notifications and intelligent needs-based updating, Sun Connection helps improve operational efficiency and ensures that you have the latest software patches for your Sun software.

    You can download the Sun Connection Update Manager product for free by going to http://www.sun.com/download/products.xml?id=4457d96d.

    Additional information for using the Sun patch management tools is provided in Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Solaris OS release that you have installed.

    If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Sun Cluster System Administration Guide for Solaris OS to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.

    For a list of patches that affect Sun Cluster, see the Sun Cluster Wiki Patch Klatch.

    For required firmware, see the Sun System Handbook.

  21. Connect a fiber-optic cable between the FC hub/switch and Node A, as shown in Figure 1–4.

    To install an FC host adapter GBIC or an SFP, see your host adapter documentation. To install a fiber-optic cable, see the Sun StorEdge T3 and T3+ Array Configuration Guide.

    Figure 1–4 Adding a Single-Controller Configuration: Part II


  22. If necessary, install the required Solaris patches for array support on Node A.

    For a list of required Solaris patches for storage array support, see the Sun StorEdge T3 and T3+ Array Release Notes.
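
    For example, a downloaded patch can be applied with the patchadd command. The patch ID shown is a placeholder:


    # patchadd /var/tmp/123456-78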

  23. Shut down Node A.

    To shut down and power off a node, see your Sun Cluster system administration documentation.

  24. To create the new Solaris device files and links on Node A, perform a reconfiguration boot.


    ok boot -r
    
  25. Label the new logical volume.

    To label a logical volume, see the Sun StorEdge T3 and T3+ Array Administrator's Guide.
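
    For example, the label is typically written from the interactive format(1M) utility. The disk number in the following sketch is hypothetical and depends on your configuration:


    # format
    Specify disk (enter its number): 2
    format> label
    Ready to label disk, continue? y
    format> quit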

  26. (Optional) On Node A, verify that the device IDs (DIDs) are assigned to the new LUNs.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevice list -n NodeA -v
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scdidadm -l
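
      Output similar to the following hypothetical example indicates that a DID has been assigned to the new LUN; the instance number and device names vary with your configuration:


      6    phys-schost-1:/dev/rdsk/c1t1d0    /dev/did/rdsk/d6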
      
  27. If you need to install a host adapter in Node B, and if the host adapter that you are installing is the first FC host adapter on Node B, determine whether the Fibre Channel support packages are already installed on this node.

    This product requires the following packages.


    # pkginfo | egrep Wlux
    system  SUNWluxd    Sun Enterprise Network Array sf Device Driver
    system  SUNWluxdx   Sun Enterprise Network Array sf Device Driver (64-bit)
    system  SUNWluxl    Sun Enterprise Network Array socal Device Driver
    system  SUNWluxlx   Sun Enterprise Network Array socal Device Driver (64-bit)
    system  SUNWluxop   Sun Enterprise Network Array firmware and utilities

    If this is not the first FC host adapter on Node B, skip to Step 29. If you do not need to install a host adapter, skip to Step 34.

  28. If the Fibre Channel support packages are not installed, install the required support packages that are missing.

    The storage array packages are located in the Product directory of the Solaris DVD. Add any necessary packages.

  29. Shut down Node B.

    To shut down and power off a node, see your Sun Cluster system administration documentation.

  30. Power off Node B.

    For more information, see your Sun Cluster system administration documentation.

  31. Install the host adapter in Node B.

    To install a host adapter, see the documentation that shipped with your host adapter and node.

  32. If necessary, power on and boot Node B into noncluster mode by adding -x to your boot instruction.

    To boot a node in noncluster mode, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.

  33. If necessary, upgrade the host adapter firmware on Node B.

    If you use the Solaris 8, Solaris 9, or Solaris 10 Operating System, Sun Connection Update Manager keeps you informed of the latest versions of patches and features. Using notifications and intelligent needs-based updating, Sun Connection helps improve operational efficiency and ensures that you have the latest software patches for your Sun software.

    You can download the Sun Connection Update Manager product for free by going to http://www.sun.com/download/products.xml?id=4457d96d.

    Additional information for using the Sun patch management tools is provided in Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Solaris OS release that you have installed.

    If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Sun Cluster System Administration Guide for Solaris OS to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.

    For a list of patches that affect Sun Cluster, see the Sun Cluster Wiki Patch Klatch.

    For required firmware, see the Sun System Handbook.

  34. If necessary, install a GBIC or an SFP, as shown in Figure 1–5.

    To install an FC hub/switch GBIC or an SFP, see the documentation that shipped with your FC hub/switch hardware.


    Note –

    If you are using two FC switches and Sun SAN software to create a storage area network (SAN), see SAN Solutions in a Sun Cluster Environment in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS for more information.


  35. If necessary, connect a fiber-optic cable between the FC hub/switch and Node B, as shown in Figure 1–5.

    To install an FC host adapter GBIC or an SFP, see your host adapter documentation. To install a fiber-optic cable, see the Sun StorEdge T3 and T3+ Array Configuration Guide.

    Figure 1–5 Adding a Single-Controller Configuration: Part III


  36. If necessary, install the required Solaris patches for storage array support on Node B.

    For a list of required Solaris patches for storage array support, see the Sun StorEdge T3 and T3+ Array Release Notes.

  37. Shut down Node B.

    To shut down and power off a node, see your Sun Cluster system administration documentation.

  38. Perform a reconfiguration boot to create the new Solaris device files and links on Node B.


    ok boot -r
    
  39. (Optional) On Node B, verify that the device IDs (DIDs) are assigned to the new LUNs.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevice list -n NodeB -v
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scdidadm -l
      
  40. (Optional) Restore the device groups to the original node.

    Do the following for each device group that you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevicegroup switch -n nodename devicegroup1[ devicegroup2 …]
      
      -n nodename

      The node to which you are restoring device groups.

      devicegroup1[ devicegroup2 …]

      The device group or groups that you are restoring to the node.

    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -D devicegroup -h nodename
      
  41. (Optional) Restore the resource groups to the original node.

    Do the following for each resource group that you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # clresourcegroup switch -n nodename resourcegroup1[ resourcegroup2 …]
      
      nodename

      For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.

      resourcegroup1[ resourcegroup2 …]

      The resource group or groups that you are returning to the node or nodes.

    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -g resourcegroup -h nodename
      
  42. Perform volume management administration to incorporate the new logical volumes into the cluster.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
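
    As a sketch only, with Solaris Volume Manager you might place a new DID device in a shared disk set. The disk set name, node names, and DID device shown are placeholders:


    # metaset -s new-diskset -a -h phys-schost-1 phys-schost-2
    # metaset -s new-diskset -a /dev/did/rdsk/d6

    The first command creates the disk set and adds both nodes as hosts. The second command adds the new LUN to the disk set so that the LUN can be used in Solaris Volume Manager volumes.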