Sun Cluster 3.1 - 3.2 With Sun StorEdge 6320 System Manual for Solaris OS

Adding Storage Systems to an Existing Cluster

Use this procedure to add a new storage system to a running cluster. To install storage systems in a new Sun Cluster configuration that is not yet running, use the procedure in How to Install Storage Systems in a New Cluster.

This procedure defines Node N as the node to be connected to the storage system you are adding and the node with which you begin working.

How to Perform Initial Configuration Tasks on the Storage Array

  1. (StorEdge 6320SL storage system ONLY) Install the Fibre Channel (FC) switch for the storage system if you do not have a switch installed.


    Note –

    In a StorEdge 6320SL storage system, the customer provides the switch.


    For the procedure about how to install an FC switch, see the documentation that shipped with your FC switch hardware.

  2. Configure the service processor.

    For more information, see the Sun StorEdge 6320 System Installation Guide.

  3. Create a volume.

    For the procedure about how to create a volume, see the Sun StorEdge 6320 System Reference and Service Manual.

  4. (Optional) Specify initiator groups for the volume.

    For the procedure about how to specify initiator groups, see the Sun StorEdge 6320 System Reference and Service Manual.

  5. Unpack, place, and level the storage system.

    For instructions, see the Sun StorEdge 6320 System Installation Guide.

  6. Install the system power cord and the system grounding strap.

    For instructions, see the Sun StorEdge 6320 System Installation Guide.

  7. (StorEdge 6320SL storage system ONLY) Connect the storage arrays to the FC switches by using fiber-optic cables.


    Caution –

    Do not connect the switch's Ethernet port to the storage system's private LAN.


    For the procedure about how to cable the storage system, see the Sun StorEdge 6320 System Installation Guide.

  8. Power on the new storage system.


    Note –

    The storage arrays in your system might require several minutes to boot.


    For the procedure about how to power on the storage system, see the Sun StorEdge 6320 System Installation Guide.

  9. If necessary, reconfigure the storage system's FC switches to ensure that all nodes can access each storage array.

    The following configurations might prevent some nodes from accessing each storage array in the cluster:

    • Zone configuration

    • Multiple clusters that use the same switch

    • Unconfigured ports or misconfigured ports

How to Connect the Node to the FC Switches

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

To perform this procedure, become superuser or assume a role that provides solaris.cluster.modify role-based access control (RBAC) authorization.

  1. Determine the resource groups and device groups that are running on all nodes.

    Record this information because you will use it in Step 20 and Step 21 of this procedure to return resource groups and device groups to these nodes.

    • If you are using Sun Cluster 3.2, use the following commands:


      # clresourcegroup status + 
      # cldevicegroup status +  
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scstat
      
  2. Move all resource groups and device groups off Node N.

    • If you are using Sun Cluster 3.2, use the following command:


      # clnode evacuate nodename
      
    • If you are using Sun Cluster 3.1, use the following command:


       # scswitch -S -h nodename
      
  3. If you do not need to install one or more host adapters in Node N, skip to Step 10.

    To install host adapters, proceed to Step 4.

  4. If the host adapter that you are installing is the first host adapter on Node N, determine whether the required drivers for the host adapter are already installed on this node.

    For the required packages, see the documentation that shipped with your host adapters.

    If this is not the first host adapter, skip to Step 6.

  5. If the required support packages are not already installed, install them.

    The support packages are located in the Product directory of the Solaris CD-ROM.
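
    For example, if the node uses a Sun-branded QLogic host adapter, the driver is delivered in the SUNWqlc package. The package name and media path below are examples only; check your host adapter documentation for the packages that apply to your adapter and Solaris release.


      # pkginfo SUNWqlc
      # pkgadd -d /cdrom/cdrom0/Solaris_10/Product SUNWqlc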

  6. Shut down and power off Node N.

    For the procedure about how to shut down and power off a node, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.
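
    For example, after the resource groups and device groups have been moved off the node, one common way to bring a SPARC based node down to the OpenBoot PROM before powering it off is the following:


      # shutdown -y -g0 -i0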

  7. Install one or more host adapters in Node N.

    For the procedure about how to install host adapters, see the documentation that shipped with your host adapters and nodes.

  8. Power on and boot Node N into noncluster mode by adding -x to your boot instruction.

    For more information about how to boot nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.
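
    For example, on a SPARC based node, add -x at the OpenBoot PROM ok prompt. On an x86 based node, add -x to the kernel boot arguments instead.


      ok boot -x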

  9. If necessary, upgrade the host adapter firmware on Node N.

    If you use the Solaris 8, Solaris 9, or Solaris 10 Operating System, Sun Connection Update Manager keeps you informed of the latest versions of patches and features. Using notifications and intelligent needs-based updating, Sun Connection helps improve operational efficiency and ensures that you have the latest software patches for your Sun software.

    You can download the Sun Connection Update Manager product for free by going to http://www.sun.com/download/products.xml?id=4457d96d.

    Additional information for using the Sun patch management tools is provided in Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Solaris OS release that you have installed.

    If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Sun Cluster System Administration Guide for Solaris OS to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.

    For a list of patches that affect Sun Cluster, see the Sun Cluster Wiki Patch Klatch.

    For required firmware, see the Sun System Handbook.

  10. If necessary, install a GBIC or an SFP in the FC switch or the storage array.

    For the procedure about how to install a GBIC or an SFP, see the documentation that shipped with your FC switch hardware or the Sun StorEdge 6320 System Installation Guide.

  11. Connect a fiber-optic cable between the FC switch and Node N.

    For the procedure about how to install a fiber-optic cable, see the Sun StorEdge 6320 System Installation Guide.

  12. Install the required Solaris patches for storage array support on Node N.

    If you use the Solaris 8, Solaris 9, or Solaris 10 Operating System, Sun Connection Update Manager keeps you informed of the latest versions of patches and features. Using notifications and intelligent needs-based updating, Sun Connection helps improve operational efficiency and ensures that you have the latest software patches for your Sun software.

    You can download the Sun Connection Update Manager product for free by going to http://www.sun.com/download/products.xml?id=4457d96d.

    Additional information for using the Sun patch management tools is provided in Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Solaris OS release that you have installed.

    If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Sun Cluster System Administration Guide for Solaris OS to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.

    For a list of patches that affect Sun Cluster, see the Sun Cluster Wiki Patch Klatch.

    For required firmware, see the Sun System Handbook.
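
    For example, after you download a required patch to the node, you can apply it with the patchadd command while the node is in noncluster mode. The patch ID shown here is a placeholder only.


      # patchadd /var/tmp/123456-07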

  13. To create the new Solaris device files and links, perform a reconfiguration boot on Node N by adding -r to your boot instruction.

    For more information about how to boot nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.
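
    For example, on a SPARC based node, perform the reconfiguration boot from the OpenBoot PROM ok prompt.


      ok boot -r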

  14. Configure any unconfigured STMS paths.

    1. Determine whether any devices are in an unconfigured state.


      # cfgadm -al
      
    2. If any devices are in an unconfigured state, configure the STMS paths.


      # cfgadm -c configure controllerinstance
      

      For the procedure about how to configure STMS paths, see the Sun StorEdge Traffic Manager Installation and Configuration Guide.


    Note –

    You need to reboot if the cfgadm command does not configure the unconfigured devices that are associated with the volume you are creating. See the Sun StorEdge Traffic Manager Installation and Configuration Guide for more information.


  15. Update the Solaris device files and links.


    # devfsadm
    

    Note –

    You can wait for the devfsadm daemon to automatically update the Solaris device files and links, or you can run the devfsadm command to immediately update the Solaris device files and links.


  16. On Node N, update the paths to the DID instances.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevice populate
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scgdevs
      
  17. If necessary, label the new storage array logical volume.

    For the procedure about how to label a logical volume, see the Sun StorEdge 6320 System Reference and Service Manual.
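
    For example, if you label the volume with the format utility, select the new logical volume from the disk list and then write the label. The following is only a sketch; see the Sun StorEdge 6320 System Reference and Service Manual for the complete procedure.


      # format
      (select the new logical volume from the list of available disks)
      format> label
      format> quit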

  18. (Optional) On Node N, verify that the device IDs (DIDs) are assigned to the new storage array.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevice list -n NodeN -v
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scdidadm -l
      
  19. Repeat Step 2 through Step 18 for each remaining node that you plan to connect to the storage array.

  20. (Optional) Restore the device groups to the original node.

    Perform the following step for each device group you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevicegroup switch -n nodename devicegroup1[ devicegroup2 ...]
      
      -n nodename

      The node to which you are restoring device groups.

      devicegroup1[ devicegroup2 …]

      The device group or groups that you are restoring to the node.

    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -D devicegroup -h nodename
      
  21. (Optional) Restore the resource groups to the original node.

    Perform the following step for each resource group you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # clresourcegroup switch -n nodename resourcegroup1[ resourcegroup2 …]
      
      nodename

      For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.

      resourcegroup1[ resourcegroup2 …]

      The resource group or groups that you are returning to the node or nodes.

    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -g resourcegroup -h nodename
      
  22. Perform volume management administration to incorporate the new volumes into the cluster.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
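
    For example, if you use Solaris Volume Manager, you might place the new volume's DID device in a shared disk set. The disk set name, node names, and DID instance below are placeholders; substitute the values from your configuration and from the cldevice or scdidadm output.


      # metaset -s diskset1 -a -h node1 node2
      # metaset -s diskset1 -a /dev/did/rdsk/d10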

Next Steps

The best way to enable multipathing for a cluster is to install the multipathing software and enable multipathing before installing the Sun Cluster software and establishing the cluster. For this procedure, see How to Install Sun Multipathing Software in Sun Cluster Software Installation Guide for Solaris OS. If you need to add multipathing software to an established cluster, see How to Install Sun Multipathing Software in Sun Cluster Software Installation Guide for Solaris OS and follow the troubleshooting steps to clean up the device IDs.