Sun Cluster 3.1 - 3.2 With Sun StorEdge or StorageTek 9900 Series Storage Device Manual for Solaris OS

Chapter 2 Installing Multipathing Software in a Sun StorEdge or StorageTek 9900 Series Storage Array

This chapter contains procedures on how to install and add multipathing software. Multipathing software enables you to define and control redundant physical paths to I/O devices such as storage arrays and networks. If the active path to a device becomes unavailable, the multipathing software can automatically switch to an alternate path to maintain availability. This capability is known as automatic failover. To maximize multipathing capabilities, your servers must be configured with redundant hardware, that is, two or more host bus adapters on each node that are connected to the same dual-ported storage array.

This chapter contains the following procedures.

  • SPARC: How to Add Sun StorEdge 9900 Dynamic Link Manager Software

  • How to Add Solaris I/O Multipathing Software

Adding Multipathing Software

This section contains procedures on how to add multipathing software in a running cluster. Choose one of the following multipathing solutions.

SPARC: How to Add Sun StorEdge 9900 Dynamic Link Manager Software

Use this procedure to add Sun StorEdge 9900 Dynamic Link Manager (Sun SDLM) software in a running cluster.

Do not use this procedure to convert from host-based mirroring to a multipathing solution. For the procedure on how to convert from host-based mirroring to a multipathing solution, contact your Sun service provider.

Perform this procedure on one node at a time. This procedure defines Node N as the node on which you are installing the multipathing software.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify role-based access control (RBAC) authorization.

Before You Begin

This procedure assumes that you installed and configured your storage array.

  1. Determine the resource groups and device groups that are running on Node N.

    Record this information because you will use it in Step 10 and Step 11 of this procedure to return resource groups and device groups to these nodes.

    • If you are using Sun Cluster 3.2, use the following commands:


      # clresourcegroup status -n NodeN 
      # cldevicegroup status -n NodeN
      
      -n NodeN

      The node for which you are determining resource groups and device groups.

    • If you are using Sun Cluster 3.1, use the following command:


      # scstat
      

    For more information, see the Sun Cluster system administration documentation.
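
    For illustration, the Sun Cluster 3.2 status commands produce output resembling the following; the node and group names here are hypothetical. The cldevicegroup status command produces a similar table that lists each device group with its primary and secondary nodes.


      # clresourcegroup status -n phys-schost-1
      === Cluster Resource Groups ===

      Group Name    Node Name       Suspended   Status
      ----------    ---------       ---------   ------
      rg-schost-1   phys-schost-1   No          Online
      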

  2. Move all resource groups and device groups off Node N.

    • If you are using Sun Cluster 3.2, use the following command:


      # clnode evacuate NodeN
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -S -h from-node
      

    For more information, see the Sun Cluster system administration documentation.

  3. If you need to install additional physical paths between Node N and the storage, shut down and power off Node N.


    Note –

    If you use multipathing software, each node requires two paths to the same set of LUNs.

    If you do not need additional physical paths, skip to Step 7.


    For the full procedure on how to shut down and power off a node, see the Sun Cluster system administration documentation.

  4. Install the host adapters and the cables between Node N and the storage.

    For the procedure on how to install host adapters, see the documentation that shipped with your host adapters and nodes.

  5. Power on and boot Node N into noncluster mode by adding -x to your boot instruction.

    For the procedure about how to boot a node in noncluster mode, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.
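
    On a SPARC based system, for example, you can boot into noncluster mode from the OpenBoot PROM prompt:


      ok boot -x
      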

  6. If necessary, upgrade the host adapter firmware on Node N.

    If you use the Solaris 8, Solaris 9, or Solaris 10 Operating System, Sun Connection Update Manager keeps you informed of the latest versions of patches and features. Using notifications and intelligent needs-based updating, Sun Connection helps improve operational efficiency and ensures that you have the latest software patches for your Sun software.

    You can download the Sun Connection Update Manager product for free by going to http://www.sun.com/download/products.xml?id=4457d96d.

    Additional information for using the Sun patch management tools is provided in Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Solaris OS release that you have installed.

    If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Sun Cluster System Administration Guide for Solaris OS to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.

    For a list of patches that affect Sun Cluster, see the Sun Cluster Wiki Patch Klatch.

    For required firmware, see the Sun System Handbook.
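
    For example, after downloading a patch to the node, you can apply it with the patchadd(1M) command; the patch location below is a placeholder:


      # patchadd /var/tmp/patch-id
      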

  7. Install and configure Sun SDLM software on Node N, and apply required patches for Sun SDLM software support on Node N.

    For instructions on how to install and configure the Sun SDLM software, see the documentation that shipped with your storage array.

  8. If you did not perform a reconfiguration reboot when you configured Sun SDLM software, perform a reconfiguration reboot to create the new Solaris device files and links on Node N.


    # boot -r
    
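    Alternatively, if Node N is already running, the following standard Solaris command performs a reconfiguration reboot:


      # reboot -- -r
      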
  9. On Node N, update the paths to the device ID instances.

    • If you are using Sun Cluster 3.2, use the following command:


       # cldevice populate
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scgdevs
      
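    To verify the result, you can list the updated device ID mappings with the following commands (cldevice for Sun Cluster 3.2, scdidadm for Sun Cluster 3.1):


      # cldevice list -v
      # scdidadm -L
      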
  10. (Optional) Restore the device groups to the original node.

    Perform the following step for each device group you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevicegroup switch -n nodename devicegroup1[ devicegroup2 ...]
      
      -n nodename

      The node to which you are restoring device groups.

      devicegroup1[ devicegroup2 …]

      The device group or groups that you are restoring to the node.

    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -D devicegroup -h nodename
      
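    For example, with a hypothetical node phys-schost-1 and device group dg-schost-1, the Sun Cluster 3.2 and Sun Cluster 3.1 forms of this step are:


      # cldevicegroup switch -n phys-schost-1 dg-schost-1
      # scswitch -z -D dg-schost-1 -h phys-schost-1
      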
  11. (Optional) Restore the resource groups to the original node.

    Perform the following step for each resource group you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # clresourcegroup switch -n nodename resourcegroup1[ resourcegroup2 …]
      
      nodename

      For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.

      resourcegroup1[ resourcegroup2 …]

      The resource group or groups that you are returning to the node or nodes.

    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -g resourcegroup -h nodename
      
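    For example, with a hypothetical node phys-schost-1 and resource group rg-schost-1, the Sun Cluster 3.2 and Sun Cluster 3.1 forms of this step are:


      # clresourcegroup switch -n phys-schost-1 rg-schost-1
      # scswitch -z -g rg-schost-1 -h phys-schost-1
      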
  12. On all remaining nodes, one node at a time, repeat Step 2 through Step 11.

How to Add Solaris I/O Multipathing Software

The best way to enable multipathing for a cluster is to install the multipathing software (or enable multipathing in Solaris 10) before installing the Sun Cluster software and establishing the cluster. For this procedure, see How to Install Sun Multipathing Software in Sun Cluster Software Installation Guide for Solaris OS. If you need to add multipathing software to an established cluster, use this procedure and be careful to perform the steps that clean up the device IDs.

Do not use this procedure to convert from host-based mirroring to a multipathing solution. For the procedure on how to convert from host-based mirroring to a multipathing solution, contact your Sun service provider.

Perform this procedure on one node at a time. This procedure defines Node N as the node on which you are installing the multipathing software.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

Before You Begin

This procedure assumes that you have already installed and configured your storage array. If you use Solaris 8 or Solaris 9, you must install the Sun StorEdge Traffic Manager software to enable multipathing. (On Solaris 10, the multipathing software is part of the OS; you need only enable it.) For instructions, see How to Install Sun Multipathing Software in Sun Cluster Software Installation Guide for Solaris OS.

To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.

  1. Determine the resource groups and device groups that are running on Node N.

    Record this information because you will use it in Step 14 and Step 15 of this procedure to return resource groups and device groups to these nodes.

    • If you are using Sun Cluster 3.2, use the following commands:


      # clresourcegroup status -n NodeN 
      # cldevicegroup status -n NodeN
      
      -n NodeN

      The node for which you are determining resource groups and device groups.

    • If you are using Sun Cluster 3.1, use the following command:


      # scstat
      

    For more information, see the Sun Cluster system administration documentation.

  2. Move all resource groups and device groups off Node N.

    • If you are using Sun Cluster 3.2, use the following command:


      # clnode evacuate NodeN
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -S -h from-node
      

    For more information, see the Sun Cluster system administration documentation.

  3. If you need to install additional physical paths between Node N and the storage, shut down and power off Node N.


    Note –

    If you use multipathing software, each node requires two paths to the same set of LUNs. For the full procedure on how to shut down and power off a node, see the Sun Cluster system administration documentation.

    If you do not need additional physical paths, skip to Step 7.

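    For example, after evacuating Node N in Step 2, the following standard Solaris command brings the node down so that it can be powered off:


      # shutdown -y -g0 -i0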

  4. Install the host adapters and the cables between Node N and the storage.


    Note –

    If you use multipathing software, each node requires two paths to the same set of LUNs.


    For the procedure on how to install host adapters, see the documentation that shipped with your host adapters and nodes.

  5. Power on and boot Node N into noncluster mode by adding -x to your boot instruction.

    For the procedure about how to boot a node in noncluster mode, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.

  6. If necessary, upgrade the host adapter firmware on Node N.

    If you use the Solaris 8, Solaris 9, or Solaris 10 Operating System, Sun Connection Update Manager keeps you informed of the latest versions of patches and features. Using notifications and intelligent needs-based updating, Sun Connection helps improve operational efficiency and ensures that you have the latest software patches for your Sun software.

    You can download the Sun Connection Update Manager product for free by going to http://www.sun.com/download/products.xml?id=4457d96d.

    Additional information for using the Sun patch management tools is provided in Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Solaris OS release that you have installed.

    If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Sun Cluster System Administration Guide for Solaris OS to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.

    For a list of patches that affect Sun Cluster, see the Sun Cluster Wiki Patch Klatch.

    For required firmware, see the Sun System Handbook.

  7. Install any required patches or software for Solaris I/O multipathing software support on Node N.

    For instructions on installing the software, see the Sun StorEdge Traffic Manager Installation and Configuration Guide at http://www.sun.com/products-n-solutions/hardware/docs/.

  8. If you are using the Solaris 8 or Solaris 9 OS, edit the /kernel/drv/scsi_vhci.conf file to activate the Solaris I/O multipathing software.

    Set the mpxio-disable parameter to no.


    mpxio-disable="no"
    
    For more information, see the STMS software documentation.

  9. If you are using the Solaris 10 OS, enable multipathing by issuing the following command:


      # /usr/sbin/stmsboot -e

  10. If you are using the Solaris 8 or Solaris 9 OS, boot Node N into cluster mode.

    For more information on booting nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.

  11. On each node connected to the storage array, use the appropriate multipathing software commands to verify that the same set of LUNs is visible to the expected controllers.

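    For example, on the Solaris 10 OS you can list each logical unit and its paths with the mpathadm(1M) command; on Solaris 8 or 9, see the Sun StorEdge Traffic Manager documentation for equivalent commands:


      # mpathadm list lu
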
  12. If you are using the Solaris 8 or Solaris 9 OS, configure all controllers on Node N that are affected by the new physical path.


    # cfgadm -c configure cN
    
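    To identify which controllers to configure, you can first list the attachment points; c2 below is a hypothetical controller number:


      # cfgadm -al
      # cfgadm -c configure c2
      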
  13. On Node N, update the paths to the device ID instances.

    • If you are using Sun Cluster 3.2, use the following commands:


      # cldevice clear
      # cldevice refresh
      
    • If you are using Sun Cluster 3.1, use the following commands:


      # scdidadm -C
      # scdidadm -r
      
      scdidadm -C and -r

      Update DID mappings with new device names while preserving DID instance numbers for disks that are connected to multiple cluster nodes. DID instance numbers of the local disks might not be preserved. For this reason, the DID disk names for local disks might change.

  14. (Optional) Restore the device groups to the original node.

    Perform the following step for each device group you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevicegroup switch -n nodename devicegroup1[ devicegroup2 ...]
      
      -n nodename

      The node to which you are restoring device groups.

      devicegroup1[ devicegroup2 …]

      The device group or groups that you are restoring to the node.

    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -D devicegroup -h nodename
      
  15. (Optional) Restore the resource groups to the original node.

    Perform the following step for each resource group you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # clresourcegroup switch -n nodename resourcegroup1[ resourcegroup2 …]
      
      nodename

      For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.

      resourcegroup1[ resourcegroup2 …]

      The resource group or groups that you are returning to the node or nodes.

    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -g resourcegroup -h nodename
      
  16. On each node connected to the storage array, run the format(1M) command to verify that you now see half the number of disks that you saw in Step 11. With multipathing enabled, the two physical paths to each LUN are represented by a single device.


    # format
    

    See the format(1M) man page for more information about using this command.
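
    For illustration, with Solaris I/O multipathing enabled each LUN appears once in the format output, under a scsi_vhci device path; the entry below uses a hypothetical device name:


      1. c6t60060E80000000000000000000000011d0 <HITACHI-OPEN-V ...>
         /scsi_vhci/ssd@g60060e80000000000000000000000011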

  17. On all remaining nodes, one node at a time, repeat Step 2 through Step 16.