Sun Cluster 3.1 - 3.2 With Sun StorEdge 3900 Series or Sun StorEdge 6900 Series System Manual

Chapter 3 Maintaining a Sun StorEdge 3900 or 6900 Series System

This chapter contains procedures for maintaining Sun StorEdge 3900 and 6900 series systems in a Sun Cluster environment.

For information about storage system architecture, features, and configuration utilities, see your storage documentation listed in Related Documentation.

Maintaining Storage Systems

This section contains procedures for maintaining storage systems in a running cluster. Table 3–1 lists these procedures. This section does not include procedures for adding or removing disk drives, because the storage arrays in your storage system operate only when fully configured with disk drives.


Caution –

If you remove any field replaceable unit (FRU) from the storage arrays for an extended period of time, thermal complications might result. To prevent these complications, the storage array is designed so that an orderly shutdown occurs. This shutdown occurs when you remove a component for longer than 30 minutes. Therefore, a replacement part must be immediately available before you start an FRU replacement procedure. You must replace an FRU within 30 minutes. If you do not, the storage array, and all attached storage arrays, shut down and power off.

This caution does not apply to the StorEdge 6920 system.



Note –

When you upgrade firmware on a storage device or on an enclosure, redefine the stripe size of a LUN, or perform other LUN operations, a device ID might change unexpectedly. When you perform a check of the device ID configuration by running the cldevice check or scdidadm -c command, the following error message appears on your console if the device ID changed unexpectedly.


device id for nodename:/dev/rdsk/cXtYdZsN does not match physical 
device's id for ddecimalnumber, device may have been replaced.

To fix device IDs that report this error, run the cldevice repair or scdidadm -R command for each affected device.
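For example, on a node that runs Sun Cluster 3.2, the check and repair sequence might look like the following. The device name c1t3d0 is only a placeholder; substitute the device that reported the error. On Sun Cluster 3.1, the equivalent commands are scdidadm -c and scdidadm -R.


  # cldevice check
  # cldevice repair /dev/rdsk/c1t3d0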


Table 3–1 Task Map: Maintaining a Storage System

Task: Remove a storage system.
Information: How to Remove a Storage System

Task: Replace a virtualization engine. This procedure applies to a Sun StorEdge 6910 or a Sun StorEdge 6960 storage system only.
Information: SPARC: How to Replace a Virtualization Engine (Sun StorEdge 6910 or Sun StorEdge 6960 Storage System Only)

Task: Replace a node-to-switch fiber-optic cable.
Information: Replacing a Node-to-Switch Component

Task: Replace a gigabit interface converter (GBIC) or Small Form-Factor Pluggable (SFP) on a node's host adapter.
Information: Replacing a Node-to-Switch Component

Task: Replace a GBIC or an SFP on an FC switch, connecting to a node.
Information: Replacing a Node-to-Switch Component

Task: Upgrade storage array firmware.
Information: How to Upgrade Storage Array Firmware When Using Mirroring or How to Upgrade Storage Array Firmware When Not Using Mirroring

Task: Replace a disk drive.
Information: How to Replace a Disk Drive

Task: Replace a node's host adapter.
Information: How to Replace a Host Adapter

Task: Add a node to the storage array.
Information: Sun Cluster system administration documentation

Task: Remove a node from the storage array.
Information: Sun Cluster system administration documentation

FRUs That Do Not Require Sun Cluster Maintenance Procedures

This section contains lists of administrative tasks that require no cluster-specific procedures.

SPARC: FRUs for the Sun StorEdge 3900 Series, StorEdge 6910, and StorEdge 6960 Systems

SPARC: See the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual for the following procedures.

SPARC: For the following procedures, see the Sun StorEdge 3900 and 6900 Series 20 Reference and Service Manual. For a URL to this storage documentation, see Related Documentation.

FRUs for Sun StorEdge 6920 Storage Systems

How to Remove a Storage System

Use this procedure to permanently remove a storage system from a running cluster.

This procedure defines Node N as the node that is connected to the storage system you are removing and the node with which you begin working.


Caution –

During this procedure, you lose access to the data that resides on the storage system that you are removing.


Before You Begin

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

To perform this procedure, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.
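As an illustration only, the following shows one way to assume such a role and confirm the authorization. The role name clusteradm is hypothetical; use whatever role your administrator created with the solaris.cluster.modify authorization.


  $ su - clusteradm
  $ auths | grep solaris.cluster.modify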

  1. If necessary, back up all database tables, data services, and volumes that are associated with each partner group that is affected.

  2. Remove references to the volumes that reside on the storage system that you are removing.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation. (A brief sketch of typical commands follows this procedure.)

  3. Disconnect the cables that connected Node N to the FC switches in your storage system.

  4. On all nodes, remove the obsolete Solaris links and device IDs.

    • If you are using Sun Cluster 3.2, use the following commands:


      # devfsadm -C
      # cldevice clear
      
    • If you are using Sun Cluster 3.1, use the following commands:


      # devfsadm -C
      # scdidadm -C
      
  5. Repeat Step 3 and Step 4 for each node that is connected to the storage system.
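The following is a minimal sketch of the kind of cleanup that Step 2 might involve. The disk set name oradg, disk group name oradg, and the volume names are hypothetical; the exact commands depend on how the volumes were configured.

Solaris Volume Manager, deleting a volume and its submirrors from a disk set:


  # metaclear -s oradg -r d10

Veritas Volume Manager, removing a volume from a disk group:


  # vxedit -g oradg -rf rm vol01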

SPARC: How to Replace a Virtualization Engine (Sun StorEdge 6910 or Sun StorEdge 6960 Storage System Only)

Use this procedure to replace a virtualization engine in a storage system in a running cluster.

  1. Replace the virtualization engine hardware.

    For instructions, see the Sun StorEdge 3900 and 6900 Series 20 Reference and Service Manual. For a URL to this storage documentation, see Related Documentation.

  2. On any node, view the virtualization engine controller status and enable the virtualization engine controllers.


    # cfgadm -al
    # cfgadm -c configure c::controller id
    

Replacing a Node-to-Switch Component

Use this procedure to replace a node-to-switch component that has failed or that you suspect might be contributing to a problem.


Note –

Node-to-switch components that are covered by this procedure include the following components:

• Node-to-switch fiber-optic cables

• GBICs or SFPs on a node's host adapter

• GBICs or SFPs on an FC switch, connecting to a node

To replace a host adapter, see How to Replace a Host Adapter.


This procedure defines Node A as the node that is connected to the node-to-switch component that you are replacing. This procedure assumes that, except for the component you are replacing, your cluster is operational.

Ensure that you are following the appropriate instructions:

• If your cluster uses multipathing, see How to Replace a Node-to-Switch Component in a Cluster That Uses Multipathing.

• If your cluster does not use multipathing, see How to Replace a Node-to-Switch Component in a Cluster Without Multipathing.

How to Replace a Node-to-Switch Component in a Cluster That Uses Multipathing

  1. If your configuration is active-passive, and if the active path is the path that needs a component replaced, make that path passive. (A hedged sketch of one way to do this follows this procedure.)

  2. Replace the component.

    Refer to your hardware documentation for any component-specific instructions.

  3. (Optional) If your configuration is active-passive and you changed your configuration in Step 1, switch your original data path back to active.
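As a hedged sketch for Step 1 only: on the Solaris 10 OS with Sun StorEdge Traffic Manager (MPxIO) enabled, one way to take the path that runs through the failed component out of service is the mpathadm command. The initiator-port, target-port, and logical-unit values are placeholders; read the real values from the output of mpathadm show lu for the affected device. After you replace the component, re-enable the path with mpathadm enable path and the same options.


  # mpathadm list lu
  # mpathadm show lu logical-unit
  # mpathadm disable path -i initiator-port -t target-port -l logical-unit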

How to Replace a Node-to-Switch Component in a Cluster Without Multipathing

Before You Begin

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

  1. Become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.

  2. If the physical data path has failed, do the following:

    1. Replace the component.

    2. Fix the volume manager error that was caused by the failed data path.

    3. (Optional) If necessary, return resource groups and device groups to this node.

    You have completed this procedure.

  3. If the physical data path has not failed, determine the resource groups and device groups that are running on Node A.

    • If you are using Sun Cluster 3.2, use the following commands:


      # clresourcegroup status -n NodeA
      # cldevicegroup status -n NodeA
      
      -n NodeA

      The node for which you are determining resource groups and device groups.

    • If you are using Sun Cluster 3.1, use the following command:


      # scstat
      
  4. Move all resource groups and device groups to another node.

    • If you are using Sun Cluster 3.2, use the following command:


      # clnode evacuate nodename
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -S -h nodename
      
  5. Replace the node-to-switch component.

    Refer to your hardware documentation for any component-specific instructions.

  6. (Optional) Restore the device groups to the original node.

    Do the following for each device group that you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevicegroup switch -n nodename devicegroup1[ devicegroup2 ...]
      
      -n nodename

      The node to which you are restoring device groups.

      devicegroup1[ devicegroup2 …]

      The device group or groups that you are restoring to the node.

    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -D devicegroup -h nodename
      
  7. (Optional) Restore the resource groups to the original node.

    Do the following for each resource group that you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # clresourcegroup switch -n nodename resourcegroup1[ resourcegroup2 …]
      
      nodename

      For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.

      resourcegroup1[ resourcegroup2 …]

      The resource group or groups that you are returning to the node or nodes.

    • If you are using Sun Cluster 3.1, use the following command:


       # scswitch -z -g resourcegroup -h nodename
      

How to Upgrade Storage Array Firmware When Using Mirroring

Use this procedure to upgrade out-of-date controller firmware, disk drive firmware, or unit interconnect card (UIC) firmware. This procedure assumes that your cluster is operational. This procedure defines Node A as the node on which you are upgrading firmware. Node B is another node in the cluster.


Caution –

Perform this procedure on one storage array at a time. This procedure requires that you reset the storage arrays that you are upgrading. If you reset more than one storage array at a time, your cluster loses access to data.


  1. On the node that currently owns the disk group or disk set to which the mirror belongs, detach the logical volume of the storage array on which you are upgrading firmware. (A sketch of typical commands for this step and Step 6 follows this procedure.)

    For more information, see your Solstice DiskSuite/Solaris Volume Manager or Veritas Volume Manager documentation.

  2. Apply the controller, disk drive, and UIC firmware patches.

    • For the list of required storage array patches and to verify the firmware level, see the Sun StorEdge 3900 and 6900 Series Reference Manual.

    • To apply firmware patches, see the firmware patch README file.

    For a URL to this storage documentation, see Related Documentation.

  3. Disable the storage array controller that is attached to Node B. Disable the controller so that all logical volumes are managed by the remaining controller.

    For more information, see the Sun StorEdge T3 and T3+ Array Administrator's Guide.

  4. On one node that is connected to the partner group, verify that the storage array controllers are visible to the node.


    # format
    
  5. Enable the storage array controller that you disabled in Step 3.

  6. Reattach the mirrors that you detached in Step 1 to resynchronize the mirrors.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
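The following is a minimal sketch of Step 1 and Step 6 with Solaris Volume Manager. The disk set oradg, mirror d10, and submirror d12 are hypothetical, and d12 is assumed to be the submirror that resides on the storage array that you are upgrading.


  # metadetach -s oradg d10 d12
  (apply the firmware patches; Step 2 through Step 5)
  # metattach -s oradg d10 d12

With Veritas Volume Manager, the equivalent operations are vxplex det before the upgrade and vxplex att afterward, using your own disk group, volume, and plex names.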

How to Upgrade Storage Array Firmware When Not Using Mirroring

In a partner-pair configuration, you can have nonmirrored data. However, this configuration requires you to shut down the cluster when upgrading firmware.

  1. Shut down the entire cluster.

    For the procedure about how to shut down a cluster, see your Sun Cluster system administration documentation. (A brief example follows this procedure.)

  2. Apply the controller, disk drive, and UIC firmware patches.

    • For the list of required storage array patches, see the Sun StorEdge 3900 and 6900 Series Reference Manual.

    • For the procedure about how to apply firmware patches, see the firmware patch README file.

    • For the procedure about how to verify the firmware level, see the Sun StorEdge 3900 and 6900 Series Reference Manual.

    • For a URL to this storage documentation, see Related Documentation.

  3. If you have not already done so, reset the storage arrays.

    • For the procedure about how to reset a storage array, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

    • For a URL to this storage documentation, see Related Documentation.

  4. Boot all nodes back into the cluster.

    For more information on booting nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.

  5. On one node that is connected to the partner group, verify that the storage array controllers are visible to the node.


    # format
    

How to Replace a Disk Drive

Use this procedure to replace a failed disk drive in a storage array in a running cluster.


Note –

Sun storage documentation uses the following terms:

This manual uses logical volume to refer to all such logical constructs.


Before You Begin

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

  1. Become superuser or assume a role that provides solaris.cluster.read RBAC authorization.

  2. Determine whether the failed disk drive affects the storage array logical volume's availability. If it does, use volume manager commands to detach the submirror or plex.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.

  3. If the logical volume (in Step 2) is configured as a quorum device, choose another volume to configure as the quorum device. Then remove the old quorum device. (A sketch of these quorum commands follows this procedure.)

    To determine whether the LUN is configured as a quorum device, use one of the following commands.

    • If you are using Sun Cluster 3.2, use the following command:


      # clquorum show
      
    • If you are using Sun Cluster 3.1, use the following command:


       # scstat -q
      

    For procedures about how to add and remove quorum devices, see your Sun Cluster system administration documentation.

  4. Replace the failed disk drive.

    For instructions, refer to the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

  5. (Optional) If the new disk drive is part of a logical volume that you want to be a quorum device, add the quorum device.

    To add a quorum device, see your Sun Cluster system administration documentation.

  6. If you detached a submirror or plex in Step 2, use volume manager commands to reattach the submirror or plex.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
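The following is a sketch of the quorum change described in Step 3. The DID device names are hypothetical: d20 is the replacement quorum device and d4 is the old quorum device.

• If you are using Sun Cluster 3.2, use the following commands:


  # clquorum add d20
  # clquorum remove d4

• If you are using Sun Cluster 3.1, use the following commands:


  # scconf -a -q globaldev=d20
  # scconf -r -q globaldev=d4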

How to Replace a Host Adapter

Use this procedure to replace a failed host adapter in a running cluster. This procedure defines Node A as the node with the failed host adapter that you are replacing.

Before You Begin

This procedure relies on the following prerequisites and assumptions.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

  1. Become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.

  2. Determine the resource groups and device groups that are running on Node A.

    Record this information because you use it in Step 10 and Step 11 of this procedure to return resource groups and device groups to Node A.

    • If you are using Sun Cluster 3.2, use the following commands:


      # clresourcegroup status -n NodeA 
      # cldevicegroup status -n NodeA
      
      -n NodeA

      The node for which you are determining resource groups and device groups.

    • If you are using Sun Cluster 3.1, use the following command:


      # scstat
      
  3. Move all resource groups and device groups off Node A.

    • If you are using Sun Cluster 3.2, use the following command:


      # clnode evacuate nodename
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -S -h nodename
      
  4. Shut down Node A.

    For the full procedure about how to shut down and power off a node, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.

  5. Power off Node A.

  6. Replace the failed host adapter.

    To remove and add host adapters, see the documentation that shipped with your nodes.

  7. If you need to upgrade the node's host adapter firmware, boot Node A into noncluster mode by adding -x to your boot instruction. Proceed to Step 8. (An example appears at the end of this procedure.)

    If you do not need to upgrade firmware, skip to Step 9.

  8. Upgrade the host adapter firmware on Node A.

    If you use the Solaris 8, Solaris 9, or Solaris 10 Operating System, Sun Connection Update Manager keeps you informed of the latest versions of patches and features. Using notifications and intelligent needs-based updating, Sun Connection helps improve operational efficiency and ensures that you have the latest software patches for your Sun software.

    You can download the Sun Connection Update Manager product for free by going to http://www.sun.com/download/products.xml?id=4457d96d.

    Additional information for using the Sun patch management tools is provided in Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Solaris OS release that you have installed.

    If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Sun Cluster System Administration Guide for Solaris OS to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.

    For a list of patches that affect Sun Cluster, see the Sun Cluster Wiki Patch Klatch.

    For required firmware, see the Sun System Handbook.

  9. Boot Node A into cluster mode.

    For more information about how to boot nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.

  10. (Optional) Restore the device groups to the original node.

    Do the following for each device group that you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevicegroup switch -n nodename devicegroup1[ devicegroup2 ...]
      
      -n nodename

      The node to which you are restoring device groups.

      devicegroup1[ devicegroup2 …]

      The device group or groups that you are restoring to the node.

    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -D devicegroup -h nodename
      
  11. (Optional) Restore the resource groups to the original node.

    Do the following for each resource group that you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # clresourcegroup switch -n nodename resourcegroup1[ resourcegroup2 …]
      
      nodename

      For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.

      resourcegroup1[ resourcegroup2 …]

      The resource group or groups that you are returning to the node or nodes.

    • If you are using Sun Cluster 3.1, use the following command:


       # scswitch -z -g resourcegroup -h nodename
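
As an illustration of Step 7 through Step 9 on a SPARC based node: the patch ID 112345-01 and the /var/tmp staging directory are hypothetical, and the firmware patch README remains the authoritative source for installation instructions.


  ok boot -x
  (Node A boots into noncluster mode; apply the firmware patch)
  # patchadd /var/tmp/112345-01
  (shut the node down, then boot back into cluster mode)
  # shutdown -g0 -y -i0
  ok boot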