Sun Cluster 3.1 - 3.2 With Sun StorEdge or StorageTek 9900 Series Storage Device Manual for Solaris OS

Maintaining Storage Arrays

This section contains the procedures for maintaining a storage array in a running cluster. Table 3–1 lists these procedures.

Table 3–1 Task Map: Maintaining a Storage Array

Task                                                         Information

Remove a storage array.                                      How to Remove a Storage Array

Add a node to the storage array.                             Sun Cluster system administration documentation

Remove a node from the storage array.                        Sun Cluster system administration documentation

Replace a node's host adapter.                               How to Replace a Host Adapter

Replace an FC switch or storage array-to-switch component.   How to Replace an FC Switch or Storage Array-to-Switch Component

Replace a node-to-switch component.                          Replacing a Node-to-Switch Component

How to Remove a Storage Array

Use this procedure to permanently remove a storage array. This procedure also gives you the option of removing the host adapters from the nodes that are attached to the storage array that you are removing.

This procedure defines Node A as the node with which you begin working. Node B is another node in the cluster.

If you need to remove a storage array from more than two nodes, repeat Step 15 through Step 23 for each additional node that connects to the storage array.


Caution –

During this procedure, you lose access to the data that resides on the storage array that you are removing.


Before You Begin

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify role-based access control (RBAC) authorization.

  1. If necessary, back up all data and migrate all resource groups and disk device groups to another node.

  2. If the storage array that you plan to remove contains a quorum device, choose and configure another device to be the new quorum device. Then remove the old quorum device.

    To determine whether a device on this storage array is configured as a quorum device, use one of the following commands.

    • If you are using Sun Cluster 3.2, use the following command:


      # clquorum show
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scstat -q
      

    To add or remove a quorum device in your configuration, see your Sun Cluster system administration documentation.
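
    For illustration, a minimal sketch of this step, assuming Sun Cluster 3.2 and hypothetical DID devices d3 (the old quorum device on this storage array) and d4 (its replacement):


      # clquorum add d4
      # clquorum remove d3
      

    On Sun Cluster 3.1, the equivalent commands would be scconf -a -q globaldev=d4 to add the new quorum device and scconf -r -q globaldev=d3 to remove the old one.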

  3. If necessary, detach the submirrors from the storage array that you are removing in order to stop all I/O activity to the storage array.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
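
    For example, with Solaris Volume Manager, a sketch that assumes a hypothetical mirror d10 with submirror d11 built on the storage array that you are removing:


      # metadetach d10 d11
      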

  4. Run the appropriate Solaris Volume Manager or Veritas Volume Manager commands to remove the references to the logical volumes from any diskset or disk group.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
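
    A minimal Solaris Volume Manager sketch, assuming a hypothetical diskset oraset, a volume d11 that uses the array, and DID drive /dev/did/rdsk/d4 on the array (with Veritas Volume Manager, the analogous command is vxdg -g diskgroup rmdisk diskname):


      # metaclear -s oraset d11
      # metaset -s oraset -d /dev/did/rdsk/d4
      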

  5. Are your nodes enabled with the Solaris dynamic reconfiguration (DR) feature?

    • If yes, disconnect the fiber-optic cables and, if desired, remove the host adapters from both nodes. Then perform Step 23 on each node that was connected to the storage array.

      For the procedure about how to remove a host adapter, see the documentation that shipped with your host adapter or updated information on the manufacturer's web site.

    • If no, proceed to Step 6.

  6. Determine the resource groups and device groups that are running on Node A and Node B.

    Record this information; you use it in Step 21 and Step 22 of this procedure to return resource groups and device groups to these nodes.

    • If you are using Sun Cluster 3.2, use the following commands:


      # clresourcegroup status -n NodeA[ NodeB ...] 
      # cldevicegroup status -n NodeA[ NodeB ...]
      
      -n NodeA[ NodeB …]

      The node or nodes for which you are determining resource groups and device groups.

    • If you are using Sun Cluster 3.1, use the following command:


      # scstat
      

    For more information, see your Sun Cluster system administration documentation.
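
    For example, on Sun Cluster 3.2 with a hypothetical node named phys-schost-1 (repeat for Node B and any other nodes):


      # clresourcegroup status -n phys-schost-1
      # cldevicegroup status -n phys-schost-1
      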

  7. If you want to remove any multipathing software, remove the multipathing software packages.

  8. Shut down Node A.

    For the procedure about how to shut down a node, see your Sun Cluster system administration documentation.
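
    For example, the following is a typical invocation that shuts a Solaris node down to the OpenBoot PROM prompt; confirm the options for your configuration in the Sun Cluster system administration documentation:


      # shutdown -y -g0 -i0
      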

  9. Disconnect the fiber-optic cable between the storage array and Node A.

  10. If you do not want to remove host adapters from Node A, skip to Step 13.

  11. If you want to remove the host adapter from Node A, power off Node A.

  12. Remove the host adapter from Node A.

    For the procedure about how to remove host adapters, see the documentation that shipped with your host adapter or updated information on the manufacturer's web site.

  13. Power on Node A and allow it to boot into cluster mode.

    For more information, see the documentation that shipped with your server. For more information about how to boot nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.

  14. On Node A, update the device namespace.


    # devfsadm -C
    
  15. Shut down Node B.

    For the procedure about how to shut down a node, see your Sun Cluster system administration documentation.

  16. Disconnect the fiber-optic cable between the storage array and Node B.

  17. If you do not want to remove host adapters from Node B, skip to Step 20.

  18. If you want to remove the host adapter from Node B, power off Node B.

  19. Remove the host adapter from Node B.

    For the procedure about how to remove host adapters, see the documentation that shipped with your server and host adapter.

  20. Power on Node B and allow it to boot into cluster mode.

    For more information, see the documentation that shipped with your server. For more information about how to boot nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.

  21. (Optional) Restore the device groups to the original node.

    Perform the following step for each device group you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevicegroup switch -n nodename devicegroup1[ devicegroup2 ...]
      
      -n nodename

      The node to which you are restoring device groups.

      devicegroup1[ devicegroup2 …]

      The device group or groups that you are restoring to the node.

    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -D devicegroup -h nodename
      
  22. (Optional) Restore the resource groups to the original node.

    Perform the following step for each resource group you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # clresourcegroup switch -n nodename resourcegroup1[ resourcegroup2 …]
      
      nodename

      For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.

      resourcegroup1[ resourcegroup2 …]

      The resource group or groups that you are returning to the node or nodes.

    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -g resourcegroup -h nodename
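
    Taken together, Step 21 and Step 22 might look like the following on Sun Cluster 3.2, with a hypothetical node phys-schost-1, device group dg-schost-1, and resource group rg-schost-1:


      # cldevicegroup switch -n phys-schost-1 dg-schost-1
      # clresourcegroup switch -n phys-schost-1 rg-schost-1
      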
      
  23. On Node B, update the device namespace.


    # devfsadm -C
    
  24. Repeat Step 15 through Step 23 for each additional node that connects to the storage array.

  25. From one node, remove device ID references to the storage array that was removed.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevice clear
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scdidadm -C
      

How to Replace a Host Adapter

Use this procedure to replace a failed host adapter in a running cluster. This procedure defines Node A as the node with the failed host adapter that you are replacing.

Before You Begin

This procedure relies on the following prerequisites and assumptions.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

  1. Become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.

  2. Determine the resource groups and device groups that are running on Node A.

    Record this information; you use it in Step 10 and Step 11 of this procedure to return resource groups and device groups to Node A.

    • If you are using Sun Cluster 3.2, use the following commands:


      # clresourcegroup status -n NodeA 
      # cldevicegroup status -n NodeA
      
      -n NodeA

      The node for which you are determining resource groups and device groups.

    • If you are using Sun Cluster 3.1, use the following command:


      # scstat
      
  3. Move all resource groups and device groups off Node A.

    • If you are using Sun Cluster 3.2, use the following command:


      # clnode evacuate nodename
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -S -h nodename
      
  4. Shut down Node A.

    For the full procedure about how to shut down and power off a node, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.

  5. Power off Node A.

  6. Replace the failed host adapter.

    To remove and add host adapters, see the documentation that shipped with your nodes.

  7. If you need to upgrade the node's host adapter firmware, boot Node A into noncluster mode by adding -x to your boot instruction. Proceed to Step 8.

    If you do not need to upgrade firmware, skip to Step 9.
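
    For example, a minimal sketch on a SPARC node at the OpenBoot PROM prompt (on an x86 node, add -x to the kernel boot command instead):


      ok boot -x
      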

  8. Upgrade the host adapter firmware on Node A.

    If you use the Solaris 8, Solaris 9, or Solaris 10 Operating System, Sun Connection Update Manager keeps you informed of the latest versions of patches and features. Using notifications and intelligent needs-based updating, Sun Connection helps improve operational efficiency and ensures that you have the latest software patches for your Sun software.

    You can download the Sun Connection Update Manager product for free by going to http://www.sun.com/download/products.xml?id=4457d96d.

    Additional information for using the Sun patch management tools is provided in Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Solaris OS release that you have installed.

    If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Sun Cluster System Administration Guide for Solaris OS to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.

    For a list of patches that affect Sun Cluster, see the Sun Cluster Wiki Patch Klatch.

    For required firmware, see the Sun System Handbook.

  9. Boot Node A into cluster mode.

    For more information about how to boot nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.

  10. (Optional) Restore the device groups to the original node.

    Do the following for each device group that you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevicegroup switch -n nodename devicegroup1[ devicegroup2 ...]
      
      -n nodename

      The node to which you are restoring device groups.

      devicegroup1[ devicegroup2 …]

      The device group or groups that you are restoring to the node.

    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -D devicegroup -h nodename
      
  11. (Optional) Restore the resource groups to the original node.

    Do the following for each resource group that you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # clresourcegroup switch -n nodename resourcegroup1[ resourcegroup2 …]
      
      nodename

      For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.

      resourcegroup1[ resourcegroup2 …]

      The resource group or groups that you are returning to the node or nodes.

    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -g resourcegroup -h nodename
      

How to Replace an FC Switch or Storage Array-to-Switch Component

Use this procedure to replace an FC switch, or any of the storage array-to-switch components listed in Step 1, in a running cluster.

  1. Replace the component by using the following references.

    • For the procedure about how to replace a fiber-optic cable between a storage array and an FC switch, see the documentation that shipped with your switch hardware.

    • For the procedure about how to replace a GBIC on an FC switch, see the documentation that shipped with your switch hardware.

    • For the procedure about how to replace an SFP on the storage array, contact your service provider.

    • For the procedure about how to replace an FC switch, see the documentation that shipped with your switch hardware.


      Note –

      If you are replacing an FC switch and you intend to save the switch configuration for restoration to the replacement switch, do not connect the cables to the replacement switch until after you recall the fabric configuration to the replacement switch. For more information about how to save and recall switch configurations, see the documentation that shipped with your switch hardware.

      Before you replace an FC switch, be sure that the probe_timeout parameter of your data service software is set to more than 90 seconds. Increasing the value of the probe_timeout parameter to more than 90 seconds avoids unnecessary resource group restarts when one of the FC switches is powered off.
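
      How you set this parameter depends on your data service; as a hedged sketch, on Sun Cluster 3.2 with a hypothetical resource named oracle-rs whose Probe_timeout property is tunable:


        # clresource set -p Probe_timeout=120 oracle-rs
        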


Replacing a Node-to-Switch Component

Use this procedure to replace a node-to-switch component that has failed or that you suspect might be contributing to a problem.


Note –

Node-to-switch components that are covered by this procedure include the following components:

  • Node-to-switch fiber-optic cables

  • GBICs or SFPs on an FC switch, connecting an FC switch to a node

To replace a host adapter, see How to Replace a Host Adapter.


This procedure defines Node A as the node that is connected to the node-to-switch component that you are replacing. This procedure assumes that, except for the component you are replacing, your cluster is operational.

Ensure that you are following the appropriate instructions:

  • If your cluster uses multipathing, see How to Replace a Node-to-Switch Component in a Cluster That Uses Multipathing.

  • If your cluster does not use multipathing, see How to Replace a Node-to-Switch Component in a Cluster Without Multipathing.

How to Replace a Node-to-Switch Component in a Cluster That Uses Multipathing

  1. If your configuration is active-passive, and if the active path is the path that needs a component replaced, make that path passive (see the sketch after this procedure).

  2. Replace the component.

    Refer to your hardware documentation for any component-specific instructions.

  3. (Optional) If your configuration is active-passive and you changed your configuration in Step 1, switch your original data path back to active.
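
How you make a path passive in Step 1, and active again in Step 3, depends on your multipathing software. Two hedged illustrations, with hypothetical controller and port names: with Veritas Volume Manager DMP you might disable all paths through one controller, and with Solaris I/O multipathing (MPxIO) you might disable an individual path:


  # vxdmpadm disable ctlr=c3
  # mpathadm disable path -i 2100000000000001 -t 2100000000000002 -l /dev/rdsk/c4t60020F2000003B4Cd0s2
  

The corresponding vxdmpadm enable and mpathadm enable path commands restore the path in Step 3.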

How to Replace a Node-to-Switch Component in a Cluster Without Multipathing

Before You Begin

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

  1. Become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.

  2. If the physical data path has failed, do the following:

    1. Replace the component.

    2. Fix the volume manager error that was caused by the failed data path (see the sketch after this step).

    3. (Optional) Return resource groups and device groups to this node.

    You have completed this procedure.
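
    For example, with Solaris Volume Manager, a hedged sketch of fixing the error from the failed data path, assuming a hypothetical mirror d10 whose component c2t0d0s0 reported errors:


      # metareplace -e d10 c2t0d0s0
      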

  3. If the physical data path has not failed, determine the resource groups and device groups that are running on Node A.

    • If you are using Sun Cluster 3.2, use the following commands:


      # clresourcegroup status -n NodeA
      # cldevicegroup status -n NodeA
      
      -n NodeA

      The node for which you are determining resource groups and device groups.

    • If you are using Sun Cluster 3.1, use the following command:


      # scstat
      
  4. Move all resource groups and device groups to another node.

    • If you are using Sun Cluster 3.2, use the following command:


      # clnode evacuate nodename
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -S -h nodename
      
  5. Replace the node-to-switch component.

    Refer to your hardware documentation for any component-specific instructions.

  6. (Optional) Restore the device groups to the original node.

    Do the following for each device group that you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevicegroup switch -n nodename devicegroup1[ devicegroup2 ...]
      
      -n nodename

      The node to which you are restoring device groups.

      devicegroup1[ devicegroup2 …]

      The device group or groups that you are restoring to the node.

    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -D devicegroup -h nodename
      
  7. (Optional) Restore the resource groups to the original node.

    Do the following for each resource group that you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # clresourcegroup switch -n nodename resourcegroup1[ resourcegroup2 …]
      
      nodename

      For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.

      resourcegroup1[ resourcegroup2 …]

      The resource group or groups that you are returning to the node or nodes.

    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -g resourcegroup -h nodename