Sun Cluster 3.1 - 3.2 With Sun StorEdge 6120 Array Manual for Solaris OS

How to Add a Single-Controller Configuration to an Existing Cluster

Use this procedure to add a single-controller configuration to a running cluster. For other array-installation situations, see the related procedures in this manual.

This procedure defines Node N as the node with which you begin working.

Procedure: How to Perform Initial Configuration Tasks on the Storage Array

  1. Power on the storage array.


    Note –

    The storage array might require a few minutes to boot.


    For the procedure about how to power on a storage array, see the Sun StorEdge 6120 Array Installation Guide.

  2. Administer the storage array's network settings. These settings include the following:

    • IP address

    • gateway (if necessary)

    • netmask (if necessary)

    • hostname

    For the procedure about how to set up an IP address, gateway, netmask, and hostname on a storage array, see the Sun StorEdge 6020 and 6120 Array System Manual.
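
    As a sketch only: on 6120-family arrays these settings are typically administered from the array's serial console or a telnet session, using T3/6120-style set commands. The exact command names, and whether a reset is required afterward, are documented in the Sun StorEdge 6020 and 6120 Array System Manual; the session below is illustrative, and the addresses and hostname are placeholders.

```
6120:/:<1> set ip 192.0.2.10         # placeholder IP address
6120:/:<2> set netmask 255.255.255.0 # if necessary
6120:/:<3> set gateway 192.0.2.1     # if necessary
6120:/:<4> set hostname array-01     # placeholder hostname
6120:/:<5> set                       # display the settings to verify
```

    Step 6 of this procedure resets the array so that these settings take effect.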

  3. Ensure that the cache and mirror settings for each storage array are set to auto.

    For more information, see the Sun StorEdge 6020 and 6120 Array System Manual.

  4. Ensure that the mp_support parameter for each storage array is set to none.

    For more information, see the Sun StorEdge 6020 and 6120 Array System Manual.
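
    For Steps 3 and 4 together, a hedged sketch of the array-side session, assuming the T3/6120-style sys command described in the system manual (verify the exact syntax there before use):

```
6120:/:<1> sys list                  # review the current system settings
6120:/:<2> sys cache auto            # set the cache mode to auto
6120:/:<3> sys mirror auto           # set cache mirroring to auto
6120:/:<4> sys mp_support none       # single-controller: no multipathing support
```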

  5. Install any required controller firmware for the storage arrays you are adding.

    For the required controller firmware for the storage array, see the Sun StorEdge 6120 Array Release Notes.

    If you use the Solaris 8, Solaris 9, or Solaris 10 Operating System, Sun Connection Update Manager keeps you informed of the latest versions of patches and features. Using notifications and intelligent needs-based updating, Sun Connection helps improve operational efficiency and ensures that you have the latest software patches for your Sun software.

    You can download the Sun Connection Update Manager product for free by going to http://www.sun.com/download/products.xml?id=4457d96d.

    Additional information for using the Sun patch management tools is provided in Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Solaris OS release that you have installed.

    If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Sun Cluster System Administration Guide for Solaris OS to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.

    For a list of patches that affect Sun Cluster, see the Sun Cluster Wiki Patch Klatch.

    For required firmware, see the Sun System Handbook.

  6. Reset the storage array to update the network settings and system settings that you changed.

    For the procedure about how to reboot or reset a storage array, see the Sun StorEdge 6020 and 6120 Array System Manual.

  7. Confirm that all storage arrays that you upgraded are visible to all nodes.


    # luxadm probe 
    

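As a quick sanity check, you can count the logical paths that luxadm probe reports on each node and compare the counts across nodes. The sketch below parses saved probe output; the sample text and WWNs are illustrative, since the exact format depends on your devices.

```shell
# Hypothetical saved output of `luxadm probe` (on a real node:
#   luxadm probe > /tmp/probe.out). The content below is illustrative.
cat > /tmp/probe.out <<'EOF'
Found Fibre Channel device(s):
  Node WWN:50020f2300003a4b  Device Type:Disk device
    Logical Path:/dev/rdsk/c4t50020F2300003A4Bd0s2
  Node WWN:50020f2300004c5d  Device Type:Disk device
    Logical Path:/dev/rdsk/c4t50020F2300004C5Dd0s2
EOF

# Count the logical paths; run the same check on every node and compare.
count=`grep -c 'Logical Path:' /tmp/probe.out`
echo "visible logical paths: $count"
```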
Procedure: How to Connect the Storage Array to FC Switches

  1. Install the GBICs or SFPs in the storage array that you plan to add.

    For the procedure about how to install GBICs or SFPs, see the Sun StorEdge 6120 Array Installation Guide.

  2. If necessary, install GBICs or SFPs in the FC switches.

    For the procedure about how to install a GBIC or an SFP, see the documentation that shipped with your FC switch hardware.

  3. Install the Ethernet cable between the storage array and the Local Area Network (LAN).

  4. If necessary, daisy-chain or interconnect the storage arrays.

    For the procedure about how to install interconnect cables, see the Sun StorEdge 6120 Array Installation Guide.

  5. Install a fiber-optic cable between the FC switch and the storage array.

    For the procedure about how to install a fiber-optic cable, see the Sun StorEdge 6120 Array Installation Guide.

Procedure: How to Connect the Node to the FC Switches or the Storage Array

Before You Begin

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify role-based access control (RBAC) authorization.

  1. Determine the resource groups and device groups that are running on all nodes.

    Record this information because you use it in Step 19 and Step 20 of this procedure to return resource groups and device groups to these nodes.

    • If you are using Sun Cluster 3.2, use the following commands:


      # clresourcegroup status +
      # cldevicegroup status +
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scstat
      
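To make this information easy to restore in Step 19 and Step 20, you can save the status output to a file and extract the group-to-node mapping. The sketch below parses a sample of Sun Cluster 3.2 cldevicegroup status output; the sample text, group names, and the file path are placeholders, and the real output format may differ slightly.

```shell
# Save the status output before evacuating nodes (on a real cluster:
#   cldevicegroup status + > /var/tmp/dg-status.out).
# A hypothetical sample is used here for illustration.
cat > /var/tmp/dg-status.out <<'EOF'
=== Cluster Device Groups ===

--- Device Group Status ---

Device Group Name     Primary     Secondary     Status
-----------------     -------     ---------     ------
dg-schost-1           phys-1      phys-2        Online
dg-schost-2           phys-2      phys-1        Online
EOF

# Extract "group primary" pairs for use when switching the groups back.
awk '/^dg-/ { print $1, $2 }' /var/tmp/dg-status.out
```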
  2. Move all resource groups and device groups off Node N.

    • If you are using Sun Cluster 3.2, use the following command:


      # clnode evacuate nodename
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -S -h nodename
      
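After the evacuation, you can confirm that Node N no longer hosts any groups by checking the status output again. The sketch below greps a saved sample; the node names, group names, and output format are illustrative.

```shell
# Hypothetical resource-group status saved after `clnode evacuate phys-1`
# (or `scswitch -S -h phys-1` on Sun Cluster 3.1). Illustrative content.
cat > /tmp/rg-status.out <<'EOF'
Group Name     Node Name     Suspended     Status
----------     ---------     ---------     ------
rg-oracle      phys-2        No            Online
rg-nfs         phys-2        No            Online
EOF

# Nothing should still be online on the evacuated node.
if grep 'Online' /tmp/rg-status.out | grep -q 'phys-1'; then
    echo "groups still online on phys-1"
else
    echo "phys-1 evacuated"
fi
```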
  3. If you need to install a host adapter in Node N, proceed to Step 4.

    If you do not need to install host adapters, skip to Step 10.

  4. If the host adapter that you are installing is the first FC host adapter on Node N, determine whether the required drivers for the host adapter are already installed on this node.

    For the required packages, see the documentation that shipped with your host adapters. If the host adapter that you are installing is not the first FC host adapter on Node N, skip to Step 6.

  5. If the Fibre Channel support packages are not installed, install them.

    The storage array packages are located in the Product directory of the Solaris CD-ROM. Add any necessary packages.

  6. Shut down and power off Node N.

    For the procedure about how to shut down and power off a node, see your Sun Cluster system administration documentation.

  7. Install the host adapter in Node N.

    For the procedure about how to install a host adapter, see the documentation that shipped with your host adapter and node.

  8. Power on and boot Node N into noncluster mode by adding -x to your boot instruction.

    For the procedure about how to boot a node in noncluster mode, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.

  9. If necessary, upgrade the host adapter firmware on Node N.

    The patch-management guidance is the same as in Step 5 of How to Perform Initial Configuration Tasks on the Storage Array. In summary:

    • Sun Connection Update Manager, a free download at http://www.sun.com/download/products.xml?id=4457d96d, keeps you informed of the latest patches for the Solaris 8, Solaris 9, and Solaris 10 Operating Systems.

    • The Sun patch management tools are documented in Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of that manual for the Solaris OS release that you have installed.

    • If you must apply a patch when a node is in noncluster mode, apply patches in a rolling fashion, one node at a time, unless a patch's instructions require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Sun Cluster System Administration Guide for Solaris OS, and consider applying all patches at the same time to the node that you place in noncluster mode.

    • For a list of patches that affect Sun Cluster, see the Sun Cluster Wiki Patch Klatch. For required firmware, see the Sun System Handbook.

  10. If necessary, install a GBIC or an SFP in the FC switch or the storage array.

    For the procedure about how to install a GBIC or an SFP in an FC switch, see the documentation that shipped with your FC switch hardware. For the procedure about how to install a GBIC or an SFP in the storage array, see the Sun StorEdge 6120 Array Installation Guide.

  11. Connect a fiber-optic cable between the FC switch and Node N.

    For the procedure about how to install a fiber-optic cable, see the Sun StorEdge 6120 Array Installation Guide.

  12. If necessary, install the required Solaris patches for storage array support on Node N.

    For a list of required Solaris patches for storage array support, see the Sun StorEdge 6120 Array Release Notes.

  13. On the node, update the /devices and /dev entries.


    # devfsadm -C 
    
  14. Boot the node into cluster mode.

    For the procedure about how to boot a node, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.

  15. On the node, update the paths to the DID instances.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevice populate 
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scgdevs
      
  16. If necessary, label the new logical volume.

    For the procedure about how to label a logical volume, see the Sun StorEdge 6020 and 6120 Array System Manual.

  17. (Optional) On Node N, verify that the device IDs (DIDs) are assigned to the new LUNs.

    • If you are using Sun Cluster 3.2, use the following commands:


      # cldevice clear
      # cldevice list -v 
      
    • If you are using Sun Cluster 3.1, use the following commands:


      # scdidadm -C
      # scdidadm -l
      
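To spot-check the result, you can pull the DID path and the underlying device out of the listing and confirm that the new LUNs appear. The sketch below parses a sample of scdidadm -l output (Sun Cluster 3.1 format); the instance numbers, node name, and device names are placeholders.

```shell
# Hypothetical `scdidadm -l` output for the node (on a real node:
#   scdidadm -l > /tmp/did.out). Columns: instance, device, DID path.
cat > /tmp/did.out <<'EOF'
2        phys-1:/dev/rdsk/c1t1d0        /dev/did/rdsk/d2
3        phys-1:/dev/rdsk/c4t50020F2300003A4Bd0        /dev/did/rdsk/d3
EOF

# Print "DID-path -> device" pairs so new LUNs are easy to spot.
awk '{ print $3, "->", $2 }' /tmp/did.out
```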
  18. Repeat Step 2 through Step 17 for each remaining node that you plan to connect to the storage array.

  19. (Optional) Restore the device groups to the original node.

    Perform the following step for each device group you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevicegroup switch -n nodename devicegroup1[ devicegroup2 …]
      
      -n nodename

      The node to which you are restoring device groups.

      devicegroup1[ devicegroup2 …]

      The device group or groups that you are restoring to the node.

    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -D devicegroup -h nodename
      
  20. (Optional) Restore the resource groups to the original node.

    Perform the following step for each resource group you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # clresourcegroup switch -n nodename resourcegroup1[ resourcegroup2 …]
      
      nodename

      For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.

      resourcegroup1[ resourcegroup2 …]

      The resource group or groups that you are returning to the node or nodes.

    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -g resourcegroup -h nodename
      
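Steps 19 and 20 can be scripted as a loop over the groups that you recorded in Step 1. The sketch below echoes the Sun Cluster 3.2 commands instead of running them, so it is a dry run (remove the echo on a real cluster); the group names, node name, and file path are placeholders, and the dg- naming convention used to tell device groups from resource groups is an assumption.

```shell
# "group node" pairs recorded in Step 1 (hypothetical values).
cat > /tmp/groups.txt <<'EOF'
dg-schost-1 phys-1
rg-oracle phys-1
EOF

# Dry run: print the switch command that would restore each group.
# This assumes device groups are named dg-*; adjust to your naming scheme.
while read group node; do
    case "$group" in
        dg-*) echo cldevicegroup switch -n "$node" "$group" ;;
        *)    echo clresourcegroup switch -n "$node" "$group" ;;
    esac
done < /tmp/groups.txt
```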
  21. Perform volume management administration to incorporate the new logical volumes into the cluster.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.