Sun Cluster 3.1 - 3.2 With Sun StorEdge T3 or T3+ Array Manual for Solaris OS

Chapter 2 Maintaining and Upgrading a Sun StorEdge T3 or T3+ Array

This chapter contains the procedures about how to maintain Sun StorEdge T3 and Sun StorEdge T3+ arrays in a single-controller (noninterconnected) or partner-group (interconnected) configuration in a Sun Cluster environment.

This chapter contains the following topics.

For information about how to use a storage array in a storage area network (SAN), see SAN Solutions in a Sun Cluster Environment in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

Maintaining Sun StorEdge T3 or T3+ Storage Array Components

This section contains the procedures about how to maintain a storage array. The following table lists these procedures. This section does not include a procedure about how to add a disk drive and a procedure about how to remove a disk drive, because a storage array only operates when fully configured.


Caution –

If you remove any field-replaceable unit (FRU) for an extended period of time, thermal complications might result. To prevent this complication, the storage array is designed so that an orderly shutdown occurs when you remove a component for longer than 30 minutes. Therefore, a replacement part must be immediately available before you start an FRU replacement procedure. You must replace an FRU within 30 minutes. If you do not replace the FRU within this time, the storage array, and all attached storage arrays, shut down and power off.



Note –

When you upgrade firmware on a storage device or on an enclosure, redefine the stripe size of a LUN, or perform other LUN operations, a device ID might change unexpectedly. When you perform a check of the device ID configuration by running the cldevice check or scdidadm -c command, the following error message appears on your console if the device ID changed unexpectedly.


device id for nodename:/dev/rdsk/cXtYdZsN does not match physical 
device's id for ddecimalnumber, device may have been replaced.

To fix device IDs that report this error, run the cldevice repair or scdidadm -R command for each affected device.
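
For example, if the DID instance that reports the error is d20 (a hypothetical name; substitute the device or path from your error message), the repair might look like the following:


# cldevice repair d20                  # Sun Cluster 3.2; d20 is a placeholder DID device
# scdidadm -R /dev/rdsk/c1t3d0         # Sun Cluster 3.1; use the physical path from the error message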


Table 2–1 Task Map: Maintaining a Storage Array

Replace a disk drive. See How to Replace a Disk Drive.

Replace a host-to-hub/switch fiber-optic cable. See Replacing a Node-to-Switch Component.

Replace an FC host adapter GBIC or an SFP. See Replacing a Node-to-Switch Component.

Replace an FC hub/switch GBIC or an SFP that connects an FC hub/switch to a host. See Replacing a Node-to-Switch Component.

Replace a hub/switch-to-storage array fiber-optic cable. See How to Replace a Hub/Switch or Hub/Switch-to-Array Component in a Single-Controller Configuration.

Replace an FC hub/switch GBIC or an SFP that connects the FC hub/switch to a storage array. See How to Replace a Hub/Switch or Hub/Switch-to-Array Component in a Single-Controller Configuration.

Replace an FC hub/switch. See How to Replace a Hub/Switch or Hub/Switch-to-Array Component in a Single-Controller Configuration.

Replace an FC switch (this procedure applies to SAN-configured clusters only). See How to Replace a Hub/Switch or Hub/Switch-to-Array Component in a Single-Controller Configuration.

Replace an FC hub/switch power cord. See How to Replace a Hub/Switch or Hub/Switch-to-Array Component in a Single-Controller Configuration.

Replace a media interface adapter (MIA) on a storage array (this procedure does not apply to Sun StorEdge T3+ arrays). See How to Replace a Hub/Switch or Hub/Switch-to-Array Component in a Single-Controller Configuration.

Replace a storage array controller. See How to Replace a Storage Array Controller.

Replace a chassis. See How to Replace a Chassis.

Replace a host adapter. See How to Replace a Host Adapter.

Remove a storage array. See How to Remove a Storage Array in a Single-Controller Configuration.

Remove a partner group. See How to Remove a Partner Group.

Add a node to the storage device. See the Sun Cluster system administration documentation.

Remove a node from the storage device. See the Sun Cluster system administration documentation.

Sun StorEdge T3 and T3+ Array FRUs

The following is a list of administrative tasks that require no cluster-specific procedures. See the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual for the following procedures.

Upgrading a Sun StorEdge T3 array controller to a Sun StorEdge T3+ array controller requires no cluster-specific procedures. See the Sun StorEdge T3 Array Controller Upgrade Manual for the following procedures.

How to Replace a Disk Drive

Use this procedure to replace a failed disk drive in a storage array in a running cluster.


Note –

Sun storage documentation uses the following terms:

This manual uses logical volume to refer to all such logical constructs.


Before You Begin

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

  1. Become superuser or assume a role that provides solaris.cluster.read RBAC authorization.

  2. If the failed disk drive affects the storage array logical volume's availability, use volume manager commands to detach the submirror or plex.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
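
    The exact commands depend on your volume manager. The following is a minimal sketch that assumes a Solaris Volume Manager mirror d10 with submirror d11, or a VxVM volume vol01 with plex vol01-02 in disk group dg1; all of these names are hypothetical.


      # metadetach d10 d11              # Solaris Volume Manager: detach submirror d11 from mirror d10
      # vxplex -g dg1 det vol01-02      # VxVM: detach plex vol01-02 in disk group dg1

    The corresponding reattach commands in Step 6 would be metattach d10 d11 or vxplex -g dg1 att vol01 vol01-02.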

  3. If the logical volume (in Step 2) is configured as a quorum device, choose another volume to configure as the quorum device. Then remove the old quorum device.

    To determine whether the LUN is configured as a quorum device, use one of the following commands.

    • If you are using Sun Cluster 3.2, use the following command:


      # clquorum show
      
    • If you are using Sun Cluster 3.1, use the following command:


       # scstat -q
      

    For procedures about how to add and remove quorum devices, see your Sun Cluster system administration documentation.
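
    For example, assuming the old quorum device is DID device d10 and you choose d20 as its replacement (both names are hypothetical), the sequence might look like the following:


      # clquorum add d20                # Sun Cluster 3.2: configure the new quorum device
      # clquorum remove d10             # Sun Cluster 3.2: remove the old quorum device
      # scconf -a -q globaldev=d20      # Sun Cluster 3.1: add the new quorum device
      # scconf -r -q globaldev=d10      # Sun Cluster 3.1: remove the old quorum device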

  4. Replace the failed disk drive.

    For instructions, refer to the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

  5. (Optional) If the new disk drive is part of a logical volume that you want to be a quorum device, add the quorum device.

    To add a quorum device, see your Sun Cluster system administration documentation.

  6. If you detached a submirror or plex in Step 2, use volume manager commands to reattach the submirror or plex.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.

Replacing a Node-to-Switch Component

Use this procedure to replace a node-to-switch component that has failed or that you suspect might be contributing to a problem.


Note –

Node-to-switch components that are covered by this procedure include the following components:

To replace a host adapter, see How to Replace a Host Adapter.


This procedure defines Node A as the node that is connected to the node-to-switch component that you are replacing. This procedure assumes that, except for the component you are replacing, your cluster is operational.

Ensure that you are following the appropriate instructions:

How to Replace a Node-to-Switch Component in a Cluster That Uses Multipathing

  1. If your configuration is active-passive, and if the active path is the path that needs a component replaced, make that path passive.

  2. Replace the component.

    Refer to your hardware documentation for any component-specific instructions.

  3. (Optional) If your configuration is active-passive and you changed your configuration in Step 1, switch your original data path back to active.

How to Replace a Node-to-Switch Component in a Cluster Without Multipathing

Before You Begin

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

  1. Become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.

  2. If the physical data path has failed, do the following:

    1. Replace the component.

    2. Fix the volume manager error that was caused by the failed data path.

    3. (Optional) If necessary, return resource groups and device groups to this node.

    You have completed this procedure.

  3. If the physical data path has not failed, determine the resource groups and device groups that are running on Node A.

    • If you are using Sun Cluster 3.2, use the following commands:


      # clresourcegroup status -n NodeA
      # cldevicegroup status -n NodeA
      
      -n NodeA

      The node for which you are determining resource groups and device groups.

    • If you are using Sun Cluster 3.1, use the following command:


      # scstat
      
  4. Move all resource groups and device groups to another node.

    • If you are using Sun Cluster 3.2, use the following command:


      # clnode evacuate nodename
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -S -h nodename
      
  5. Replace the node-to-switch component.

    Refer to your hardware documentation for any component-specific instructions.

  6. (Optional) Restore the device groups to the original node.

    Do the following for each device group that you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevicegroup switch -n nodename devicegroup1[ devicegroup2 ...]
      
      -n nodename

      The node to which you are restoring device groups.

      devicegroup1[ devicegroup2 …]

      The device group or groups that you are restoring to the node.

    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -D devicegroup -h nodename
      
  7. (Optional) Restore the resource groups to the original node.

    Do the following for each resource group that you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # clresourcegroup switch -n nodename resourcegroup1[ resourcegroup2 …]
      
      nodename

      For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.

      resourcegroup1[ resourcegroup2 …]

      The resource group or groups that you are returning to the node or nodes.

    • If you are using Sun Cluster 3.1, use the following command:


       # scswitch -z -g resourcegroup -h nodename
      

How to Replace a Hub/Switch or Hub/Switch-to-Array Component in a Single-Controller Configuration

Use this procedure to replace a hub/switch or the following hub/switch-to-array components for an array in a single-controller configuration. To replace these components for an array in a partner-group configuration, see How to Replace an FC Switch or Storage Array-to-Switch Component in a Partner-Group Configuration. You can use a storage array in a single-controller configuration with FC switches when you create a SAN.

  1. Detach the submirrors on the storage array. This array is connected to the hub/switch-to-array fiber-optic cable that you are replacing. Detach the submirrors to stop all I/O activity to this storage array.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.

  2. Replace the hub/switch or hub/switch-to-array component.

    • For the procedure about how to replace a fiber-optic cable, see the Sun StorEdge T3 and T3+ Array Configuration Guide.

    • For the procedure about how to replace an FC hub/switch GBIC or an SFP, an FC hub/switch, or an FC hub/switch power cord, see the documentation that shipped with your FC hub/switch hardware.

    • For the procedure about how to replace an MIA, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

    • If you are replacing FC switches in a SAN, follow the hardware installation and SAN configuration instructions in the documentation that shipped with your switch hardware.


      Note –

      If you are replacing an FC switch and you intend to save the switch configuration for restoration to the replacement switch, do not connect the cables to the replacement switch until after you recall the Fabric configuration to the replacement switch. For more information about how to save and recall switch configurations, see the documentation that shipped with your switch hardware.



      Note –

      Before you replace an FC switch, be sure that the probe_timeout parameter of your data service software is set to more than 90 seconds. If you increase the value of the probe_timeout parameter to more than 90 seconds, you avoid unnecessary resource group restarts. Resource group restarts occur when one of the FC switches is powered off.


  3. Reattach the submirrors to resynchronize the submirrors.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.

How to Replace an FC Switch or Storage Array-to-Switch Component in a Partner-Group Configuration

Use this procedure to replace components for a storage array in a partner-group configuration in a running cluster. To replace components for arrays in a single-controller configuration, see How to Replace a Hub/Switch or Hub/Switch-to-Array Component in a Single-Controller Configuration. Use this procedure to replace the following storage array-to-switch components.

  1. Access the storage array. This storage array is connected to the FC switch or component that you are replacing.

  2. View the controller status for the two storage arrays in the partner group.

    For more information, see the Sun StorEdge T3 and T3+ Array Administrator's Guide.
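
    For example, you might open a telnet session to the partner group and check the controller states; the sys stat array command (as documented in the array administrator's guide) reports the state (ONLINE or DISABLED) and role of each controller unit. The array name t3-pg1 and the prompt shown here are illustrative only.


      # telnet t3-pg1
      t3-pg1:/:<1> sys stat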

  3. If you are replacing an FC switch or component that is attached to a storage array controller that is ONLINE, disable the controller.

    If the controller is already DISABLED, skip to Step 5.

    For more information, see the Sun StorEdge T3 and T3+ Array Administrator's Guide.
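
    As a sketch, assuming that the controller to disable is unit 1 (confirm the unit number from the sys stat output), the command on the array would be similar to the following:


      t3-pg1:/:<2> disable u1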

  4. Verify that the controller's state has been changed to DISABLED.

  5. Replace the component by using the following references.

    • For the procedure about how to replace a fiber-optic cable between a storage array and an FC switch, see the documentation that shipped with your switch hardware.

    • For the procedure about how to replace a GBIC or an SFP on an FC switch, see the documentation that shipped with your FC switch hardware.

    • For the procedure about how to replace an FC switch, see the documentation that shipped with your switch hardware.


      Note –

      If you are replacing an FC switch and you intend to save the switch IP configuration for restoration to the replacement switch, wait to connect the cables to the replacement switch. Connect the cables to the replacement switch after you recall the Fabric configuration to the replacement switch. For more information about how to save and recall switch configurations, see the documentation that shipped with your switch hardware.



      Note –

      Before you replace an FC switch, be sure that the probe_timeout parameter of your data service software is set to more than 90 seconds. If you increase the value of the probe_timeout parameter to more than 90 seconds, you avoid unnecessary resource group restarts. Resource group restarts occur when one of the FC switches is powered off.


    • For the procedure about how to replace an MIA, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

    • For the procedure about how to replace interconnect cables, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

  6. If necessary, access the storage array. This storage array is the storage array in the partner group that is still online.

  7. Enable the storage array controller that you disabled in Step 3.

    For more information, see the Sun StorEdge T3 and T3+ Array Administrator's Guide.

  8. Verify that the controller's state has been changed to ONLINE.
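
    A minimal sketch of re-enabling and verifying the controller, again assuming unit 1 and the hypothetical array name t3-pg1:


      t3-pg1:/:<3> enable u1
      t3-pg1:/:<4> sys stat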

How to Replace a Storage Array Controller

  1. Detach the submirrors on the storage array. This array is connected to the controller that you are replacing. Detach the submirrors to stop all I/O activity to this storage array.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.

  2. Replace the controller.

    For the procedure about how to replace a storage array controller, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.

  3. Reattach the submirrors to resynchronize the submirrors.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.

How to Replace a Chassis

Use this procedure to replace a storage array chassis. This procedure assumes that you are retaining all FRUs other than the chassis and the midplane. To replace the chassis, you must replace both the chassis and the midplane because these components are manufactured as one part.


Caution –

You must be a Sun service provider to perform this procedure. If you need to replace a chassis, contact your Sun service provider.


  1. Detach the submirrors on the storage array. This array is connected to the chassis that you are replacing. Detach the submirrors to stop all I/O activity to this storage array.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.

  2. Replace the chassis and the midplane.

    For the procedure about how to replace a storage array chassis, see the Sun StorEdge T3 and T3+ Array Field Service Manual.

  3. Reattach the submirrors to resynchronize the submirrors.


Caution –

    The world wide names (WWNs) change as a result of this procedure. You must reconfigure your volume manager software to recognize the new WWNs.


    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.

How to Replace a Host Adapter

Use this procedure to replace a failed host adapter in a running cluster. This procedure defines Node A as the node with the failed host adapter that you are replacing.

Before You Begin

This procedure relies on the following prerequisites and assumptions.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

  1. Become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.

  2. Determine the resource groups and device groups that are running on Node A.

    Record this information because you use this information in Step 10 and Step 11 of this procedure to return resource groups and device groups to Node A.

    • If you are using Sun Cluster 3.2, use the following commands:


      # clresourcegroup status -n NodeA 
      # cldevicegroup status -n NodeA
      
      -n NodeA

      The node for which you are determining resource groups and device groups.

    • If you are using Sun Cluster 3.1, use the following command:


      # scstat
      
  3. Move all resource groups and device groups off Node A.

    • If you are using Sun Cluster 3.2, use the following command:


      # clnode evacuate nodename
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -S -h nodename
      
  4. Shut down Node A.

    For the full procedure about how to shut down and power off a node, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.

  5. Power off Node A.

  6. Replace the failed host adapter.

    To remove and add host adapters, see the documentation that shipped with your nodes.

  7. If you need to upgrade the node's host adapter firmware, boot Node A into noncluster mode by adding -x to your boot instruction. Proceed to Step 8.

    If you do not need to upgrade firmware, skip to Step 9.
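
    For example, on a SPARC based node you might boot into noncluster mode from the OpenBoot PROM prompt as follows. This is a sketch; the boot device and any additional options depend on your configuration.


      ok boot -x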

  8. Upgrade the host adapter firmware on Node A.

    If you use the Solaris 8, Solaris 9, or Solaris 10 Operating System, Sun Connection Update Manager keeps you informed of the latest versions of patches and features. Using notifications and intelligent needs-based updating, Sun Connection helps improve operational efficiency and ensures that you have the latest software patches for your Sun software.

    You can download the Sun Connection Update Manager product for free by going to http://www.sun.com/download/products.xml?id=4457d96d.

    Additional information for using the Sun patch management tools is provided in Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Solaris OS release that you have installed.

    If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Sun Cluster System Administration Guide for Solaris OS to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.

    For a list of patches that affect Sun Cluster, see the Sun Cluster Wiki Patch Klatch.

    For required firmware, see the Sun System Handbook.

  9. Boot Node A into cluster mode.

    For more information about how to boot nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.

  10. (Optional) Restore the device groups to the original node.

    Do the following for each device group that you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevicegroup switch -n nodename devicegroup1[ devicegroup2 ...]
      
      -n nodename

      The node to which you are restoring device groups.

      devicegroup1[ devicegroup2 …]

      The device group or groups that you are restoring to the node.

    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -D devicegroup -h nodename
      
  11. (Optional) Restore the resource groups to the original node.

    Do the following for each resource group that you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # clresourcegroup switch -n nodename resourcegroup1[ resourcegroup2 …]
      
      nodename

      For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.

      resourcegroup1[ resourcegroup2 …]

      The resource group or groups that you are returning to the node or nodes.

    • If you are using Sun Cluster 3.1, use the following command:


       # scswitch -z -g resourcegroup -h nodename
      

How to Remove a Storage Array in a Single-Controller Configuration

Use this procedure to remove a storage array and its submirrors from a running cluster. This procedure provides the flexibility to remove the host adapters from the nodes for the storage array that you are removing. To remove a partner group from the cluster, see How to Remove a Partner Group.


Note –

When you upgrade firmware on a storage device or on an enclosure, redefine the stripe size of a LUN, or perform other LUN operations, a device ID might change unexpectedly. When you perform a check of the device ID configuration by running the cldevice check or scdidadm -c command, the following error message appears on your console if the device ID changed unexpectedly.


device id for nodename:/dev/rdsk/cXtYdZsN does not match physical 
device's id for ddecimalnumber, device may have been replaced.

To fix device IDs that report this error, run the cldevice repair or scdidadm -R command for each affected device.


This procedure defines Node A as the node with which you begin working. Node B is another node in the cluster.


Caution –

During this procedure, you lose access to the data that resides on the storage array that you are removing.


This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

Before You Begin

To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify role-based access control (RBAC) authorization.

  1. Back up all database tables, data services, and volumes that are associated with the storage array. This storage array is the storage array you are removing.

  2. Detach the submirrors from the storage array that you are removing. Detach the submirrors to stop all I/O activity to the storage array.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.

  3. Remove the references to the LUN(s) from any diskset or disk group.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
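
    As an illustration only, assuming a Solaris Volume Manager diskset named dg-schost-1 that contains DID device d4, or a VxVM disk group dg1 that contains a disk named disk01 (all names hypothetical), the removal might look like the following:


      # metaset -s dg-schost-1 -d /dev/did/rdsk/d4     # Solaris Volume Manager: remove the drive from the diskset
      # vxdg -g dg1 rmdisk disk01                      # VxVM: remove the disk from the disk group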

  4. Determine the resource groups and device groups that are running on Node B.

    Record this information because you use this information in Step 18 and Step 19 of this procedure to return resource groups and device groups to these nodes.

    • If you are using Sun Cluster 3.2, use the following commands:


      # clresourcegroup status + 
      # cldevicegroup status +
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scstat
      
  5. Shut down Node A.

    For the procedure about how to shut down and power off a node, see your Sun Cluster system administration documentation.

  6. If the storage array that you are removing is the last storage array that is connected to Node A, disconnect the fiber-optic cable between Node A and the FC hub/switch that is connected to this storage array. Afterward, disconnect the fiber-optic cable between the FC hub/switch and this storage array.

    If this is not the last storage array, skip this step.

    For the procedure about how to remove a fiber-optic cable, see the Sun StorEdge T3 and T3+ Array Configuration Guide.


    Note –

    If you use your storage array in a SAN-configured cluster, you must keep two FC switches configured in parallel. This configuration maintains cluster availability. See SAN Solutions in a Sun Cluster Environment in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS for more information.


  7. If you want to remove the host adapter from Node A, power off the node.

    If you do not want to remove host adapters, skip to Step 10.

  8. Remove the host adapter from Node A.

    For the procedure about how to remove host adapters, see the documentation that shipped with your nodes.

  9. Without enabling the node to boot, power on Node A.

    For more information, see your Sun Cluster system administration documentation.

  10. Boot Node A into cluster mode.

    For more information about how to boot nodes, see your Sun Cluster system administration documentation.

  11. Shut down Node B.

    For the procedure about how to shut down and power off a node, see your Sun Cluster system administration documentation.

  12. If the storage array that you are removing is the last storage array that is connected to the FC hub/switch, disconnect the fiber-optic cable that connects this FC hub/switch and Node B.

    For the procedure about how to remove a fiber-optic cable, see the Sun StorEdge T3 and T3+ Array Configuration Guide.


    Note –

    If you use your storage array in a SAN-configured cluster, you must keep two FC switches configured in parallel. This configuration maintains cluster availability. See SAN Solutions in a Sun Cluster Environment in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS for more information.


  13. If you want to remove the host adapter from Node B, power off the node.

    If you do not want to remove host adapters, skip to Step 16.

  14. Remove the host adapter from Node B.

    For the procedure about how to remove host adapters, see the documentation that shipped with your nodes.

  15. Without enabling the node to boot, power on Node B.

    For more information, see your Sun Cluster system administration documentation.

  16. Boot Node B into cluster mode.

    For more information about how to boot nodes, see your Sun Cluster system administration documentation.

  17. On all nodes, update the /devices and /dev entries.

    • If you are using Sun Cluster 3.2, use the following commands:


      # devfsadm -C
      # cldevice clear
      
    • If you are using Sun Cluster 3.1, use the following commands:


      # devfsadm -C
      # scdidadm -C
      
  18. (Optional) Restore the device groups to the original node.

    Perform the following step for each device group you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevicegroup switch -n nodename devicegroup1[ devicegroup2 …]
      
      -n nodename

      The node to which you are restoring device groups.

      devicegroup1[ devicegroup2 …]

      The device group or groups that you are restoring to the node.

    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -D devicegroup -h nodename
      
  19. (Optional) Restore the resource groups to the original node.

    Perform the following step for each resource group you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # clresourcegroup switch -n nodename resourcegroup1[ resourcegroup2 …]
      
      nodename

      For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.

      resourcegroup1[ resourcegroup2 …]

      The resource group or groups that you are returning to the node or nodes.

    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -g resourcegroup -h nodename
      

How to Remove a Partner Group

Use this procedure to permanently remove storage array partner groups and their submirrors from a running cluster. To remove a storage array in a single-controller configuration, see How to Remove a Storage Array in a Single-Controller Configuration.


Note –

When you upgrade firmware on a storage device or on an enclosure, redefine the stripe size of a LUN, or perform other LUN operations, a device ID might change unexpectedly. When you perform a check of the device ID configuration by running the cldevice check or scdidadm -c command, the following error message appears on your console if the device ID changed unexpectedly.


device id for nodename:/dev/rdsk/cXtYdZsN does not match physical 
device's id for ddecimalnumber, device may have been replaced.

To fix device IDs that report this error, run the cldevice repair or scdidadm -R command for each affected device.


This procedure defines Node A as the cluster node that you begin working with. Node B is another node in the cluster.


Caution –

During this procedure, you lose access to the data that resides on each storage array partner group that you are removing.


This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

Before You Begin

To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.

  1. If necessary, back up all database tables, data services, and volumes associated with each partner group that you are removing.

  2. If necessary, detach the submirrors from each storage array or partner group that you are removing. Detach the submirrors to stop all I/O activity to the storage array or partner group.

    For more information, see your Solstice DiskSuite or Veritas Volume Manager documentation.

  3. Remove references to each LUN. Each LUN belongs to the storage array or partner group that you are removing.

    For more information, see your Solstice DiskSuite or Veritas Volume Manager documentation.

  4. Determine the resource groups and device groups that are running on all nodes.

    Record this information because you use this information in Step 19 and Step 20 of this procedure to return resource groups and device groups to these nodes.

    • If you are using Sun Cluster 3.2, use the following commands:


      # clresourcegroup status + 
      # cldevicegroup status +
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scstat
      
  5. Shut down Node A.

    For the procedure about how to shut down and power off a node, see your Sun Cluster system administration documentation.

  6. Disconnect the fiber-optic cables that connect both storage arrays to the FC switches, and then disconnect the Ethernet cable(s).

  7. If any storage array that you are removing is the last storage array connected to an FC switch that is on Node A, disconnect the fiber-optic cable between Node A and the FC switch that was connected to this storage array.

    If no array that you are removing is the last array connected to the node, skip to Step 11.

    For the procedure about how to remove a fiber-optic cable, see the Sun StorEdge T3 and T3+ Array Configuration Guide.


    Note –

    If you are using your storage arrays in a SAN-configured cluster, you must keep two FC switches configured in parallel. This configuration maintains cluster availability. See SAN Considerations for more information.


  8. If you want to remove the host adapters from Node A, power off the node.

    If you do not want to remove host adapters, skip to Step 11.

  9. Remove the host adapters from Node A.

    For the procedure about how to remove host adapters, see the documentation that shipped with your host adapter and nodes.

  10. Without enabling the node to boot, power on Node A.

    For more information, see your Sun Cluster system administration documentation.

  11. Boot Node A into cluster mode.

    For more information on booting nodes, see your Sun Cluster system administration documentation.

  12. Shut down Node B.

    For the procedure about how to shut down and power off a node, see your Sun Cluster system administration documentation.

  13. If any storage array that you are removing is the last storage array connected to an FC switch that is on Node B, disconnect the fiber-optic cable that connects this FC switch to Node B.

    If no array that you are removing is the last array connected to the node, skip to Step 17.

    For the procedure about how to remove a fiber-optic cable, see the Sun StorEdge T3 and T3+ Array Configuration Guide.


    Note –

    If you are using your storage arrays in a SAN-configured cluster, you must keep two FC switches configured in parallel. This configuration maintains cluster availability. See SAN Considerations for more information.


  14. If you want to remove the host adapters from Node B, power off the node.

    If you do not want to remove host adapters, skip to Step 17.

  15. Remove the host adapters from Node B.

    For the procedure about how to remove host adapters, see the documentation that shipped with your nodes.

  16. Without enabling the node to boot, power on Node B.

    For more information, see your Sun Cluster system administration documentation.

  17. Boot Node B into cluster mode.

    For more information about how to boot nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.

  18. On all nodes, update the /devices and /dev entries.

    • If you are using Sun Cluster 3.2, use the following commands:


      # devfsadm -C
      # cldevice clear
      
    • If you are using Sun Cluster 3.1, use the following commands:


      # devfsadm -C
      # scdidadm -C
      
  19. (Optional) Restore the device groups to the original node.

    Perform the following step for each device group you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevicegroup switch -n nodename devicegroup1[ devicegroup2 …]
      
      -n nodename

      The node to which you are restoring device groups.

      devicegroup1[ devicegroup2 …]

      The device group or groups that you are restoring to the node.

    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -D devicegroup -h nodename
      
  20. (Optional) Restore the resource groups to the original node.

    Perform the following step for each resource group you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # clresourcegroup switch -n nodename resourcegroup1[ resourcegroup2 …]
      
      nodename

      For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.

      resourcegroup1[ resourcegroup2 …]

      The resource group or groups that you are returning to the node or nodes.

    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -g resourcegroup -h nodename
      

Upgrading Sun StorEdge T3 or T3+ Storage Arrays

This section contains the procedures about how to upgrade storage arrays. The following table lists these procedures.


Note –

When you upgrade firmware on a storage device or on an enclosure, redefine the stripe size of a LUN, or perform other LUN operations, a device ID might change unexpectedly. When you perform a check of the device ID configuration by running the cldevice check or scdidadm -c command, the following error message appears on your console if the device ID changed unexpectedly.


device id for nodename:/dev/rdsk/cXtYdZsN does not match physical 
device's id for ddecimalnumber, device may have been replaced.

To fix device IDs that report this error, run the cldevice repair or scdidadm -R command for each affected device.


Table 2–2 Task Map: Upgrading a Storage Array

Upgrade storage array firmware. See How to Upgrade Storage Array Firmware (No Submirrors).

Upgrade a StorEdge T3 array controller to a StorEdge T3+ array controller. See How to Upgrade a StorEdge T3 Controller to a StorEdge T3+ Controller.

Migrate to a partner group. See How to Migrate From a Single-Controller Configuration to a Partner-Group Configuration.

How to Upgrade Storage Array Firmware (No Submirrors)

Use this procedure to upgrade storage array firmware in a running cluster, when your arrays are not configured to support submirrors. To upgrade firmware when you are using submirrors, see How to Upgrade Storage Array Firmware When Using Mirroring. Firmware includes controller firmware, unit interconnect card (UIC) firmware, and disk drive firmware.


Caution –

Perform this procedure on one storage array at a time. This procedure requires that you reset the storage arrays that you are upgrading. If you reset more than one storage array at a time, your cluster loses access to data.



Note –

For all firmware installations, always read any README files that accompany the firmware patch for the latest information and special notes.


  1. Stop all I/O activity to the storage array that you are upgrading.

  2. Apply the controller, disk drive, and UIC firmware patches.

    If you use the Solaris 8, Solaris 9, or Solaris 10 Operating System, Sun Connection Update Manager keeps you informed of the latest versions of patches and features. Using notifications and intelligent needs-based updating, Sun Connection helps improve operational efficiency and ensures that you have the latest software patches for your Sun software.

    You can download the Sun Connection Update Manager product for free by going to http://www.sun.com/download/products.xml?id=4457d96d.

    Additional information for using the Sun patch management tools is provided in Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Solaris OS release that you have installed.

    If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Sun Cluster System Administration Guide for Solaris OS to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.

    For a list of patches that affect Sun Cluster, see the Sun Cluster Wiki Patch Klatch.

    For required firmware, see the Sun System Handbook.

  3. Reset the storage array, if you have not already done so.

    For the procedure about how to reboot a storage array, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.
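
    For example, from a telnet session to the array (the array name shown is hypothetical), the reset is a single command that reboots the array, so make sure that you reset only one storage array at a time:


      t3-array:/:<1> reset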

  4. Restart the I/O activity to the storage array that you upgraded.

How to Upgrade Storage Array Firmware When Using Mirroring

Use this procedure to upgrade out-of-date controller firmware, disk drive firmware, or unit interconnect card (UIC) firmware. This procedure assumes that your cluster is operational. This procedure defines Node A as the node on which you are upgrading firmware. Node B is another node in the cluster.


Caution –

Perform this procedure on one storage array at a time. This procedure requires that you reset the storage arrays that you are upgrading. If you reset more than one storage array at a time, your cluster loses access to data.


  1. On the node that currently owns the disk group or diskset to which the mirror belongs, detach the storage array logical volume. This storage array is the storage array on which you are upgrading firmware.

    For more information, see your Solstice DiskSuite/Solaris Volume Manager or Veritas Volume Manager documentation.

  2. Apply the controller, disk drive, and UIC firmware patches.

    For the list of required storage array patches, see the Sun StorEdge T3 Disk Tray Release Notes. To apply firmware patches, see the firmware patch README file. To verify the firmware level, see the Sun StorEdge T3 Disk Tray Release Notes.

  3. Disable the storage array controller that is attached to Node B. Disable the controller so that all logical volumes are managed by the remaining controller.

    For more information, see the Sun StorEdge T3 and T3+ Array Administrator's Guide.

  4. On one node that is connected to the partner group, verify that the storage array controllers are visible to the node.


    # format
    
  5. Enable the storage array controller that you disabled in Step 3.

  6. Reattach the mirrors that you detached in Step 1 to resynchronize the mirrors.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.

How to Upgrade a StorEdge T3 Controller to a StorEdge T3+ Controller

Use the following procedure to upgrade a StorEdge T3 storage array controller to a StorEdge T3+ storage array controller in a running cluster.


Caution –

Perform this procedure on one storage array at a time. This procedure requires you to reset the storage arrays that you are upgrading. If you reset more than one storage array at a time, your cluster loses access to data.


  1. On one node that is attached to the StorEdge T3 storage array in which you are upgrading the controller, detach that storage array's submirrors.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.

  2. Upgrade the StorEdge T3 storage array controller to a StorEdge T3+ storage array controller.

    For instructions, see the Sun StorEdge T3 Array Controller Upgrade Manual .

  3. Reattach the submirrors to resynchronize the submirrors.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.

How to Migrate From a Single-Controller Configuration to a Partner-Group Configuration

Use this procedure to migrate your storage arrays from a single-controller (noninterconnected) configuration to a partner-group (interconnected) configuration. This procedure assumes that the two storage arrays in the partner-group configuration are correctly isolated from each other on separate FC switches. Do not disconnect the cables from the FC switches or the nodes.


Caution –

You must be a Sun service provider to perform this procedure. If you need to migrate from a single-controller configuration to a partner-group configuration, contact your Sun service provider.


This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

Before You Begin

To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.

  1. Back up all data on the storage arrays before you remove the storage arrays from the Sun Cluster configuration.

  2. Remove from the cluster configuration the noninterconnected storage arrays that are to form your partner group.

    Follow the procedure in How to Remove a Storage Array in a Single-Controller Configuration.

  3. Connect and configure the single storage arrays to form a partner group.

    Follow the procedure in the Sun StorEdge T3 and T3+ Array Field Service Manual.

  4. Ensure that each storage array has a unique target address.

    For the procedure about how to verify and assign a target address to a storage array, see the Sun StorEdge T3 and T3+ Array Configuration Guide.

  5. Ensure that the cache and mirror settings for each storage array are set to auto.

  6. Ensure that the mp_support parameter for each storage array is set to mpxio.
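
    A minimal sketch of checking and setting these parameters from a telnet session to each array follows. The array name is hypothetical; sys list, sys cache, sys mirror, and sys mp_support are the array commands that these steps rely on, as described in the array documentation.


      t3-pg1:/:<1> sys list                (verify the current cache, mirror, and mp_support values)
      t3-pg1:/:<2> sys cache auto
      t3-pg1:/:<3> sys mirror auto
      t3-pg1:/:<4> sys mp_support mpxio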

  7. If necessary, upgrade the host adapter firmware on Node A.

    If you use the Solaris 8, Solaris 9, or Solaris 10 Operating System, Sun Connection Update Manager keeps you informed of the latest versions of patches and features. Using notifications and intelligent needs-based updating, Sun Connection helps improve operational efficiency and ensures that you have the latest software patches for your Sun software.

    You can download the Sun Connection Update Manager product for free by going to http://www.sun.com/download/products.xml?id=4457d96d.

    Additional information for using the Sun patch management tools is provided in Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Solaris OS release that you have installed.

    If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Sun Cluster System Administration Guide for Solaris OS to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.

    For a list of patches that affect Sun Cluster, see the Sun Cluster Wiki Patch Klatch.

    For required firmware, see the Sun System Handbook.

  8. If necessary, install the required Solaris patches for storage array support on Node A.

    If you use the Solaris 8, Solaris 9, or Solaris 10 Operating System, Sun Connection Update Manager keeps you informed of the latest versions of patches and features. Using notifications and intelligent needs-based updating, Sun Connection helps improve operational efficiency and ensures that you have the latest software patches for your Sun software.

    You can download the Sun Connection Update Manager product for free by going to http://www.sun.com/download/products.xml?id=4457d96d.

    Additional information for using the Sun patch management tools is provided in Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Solaris OS release that you have installed.

    If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Sun Cluster System Administration Guide for Solaris OS to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.

    For a list of patches that affect Sun Cluster, see the Sun Cluster Wiki Patch Klatch.

    For required firmware, see the Sun System Handbook.

  9. Shut down Node A.

    For the procedure about how to shut down and power off a node, see your Sun Cluster system administration documentation.

  10. Perform a reconfiguration boot on Node A to create the new Solaris device files and links.


    # boot -r
    
  11. On Node A, update the /devices and /dev entries.


    # devfsadm -C 
    
  12. On Node A, update the paths to the DID instances.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevice clear
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scdidadm -C
      
  13. Label the new storage array logical volume.

    For the procedure about how to label a logical volume, see the Sun StorEdge T3 and T3+ Array Administrator's Guide.
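
    Labeling is done from a cluster node with the Solaris format utility. The following is a rough sketch; the menu choices shown are illustrative.


      # format
      (select the new storage array logical volume from the disk list)
      format> type
      (choose "0. Auto configure")
      format> label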

  14. If necessary, upgrade the host adapter firmware on Node B.

    If you use the Solaris 8, Solaris 9, or Solaris 10 Operating System, Sun Connection Update Manager keeps you informed of the latest versions of patches and features. Using notifications and intelligent needs-based updating, Sun Connection helps improve operational efficiency and ensures that you have the latest software patches for your Sun software.

    You can download the Sun Connection Update Manager product for free by going to http://www.sun.com/download/products.xml?id=4457d96d.

    Additional information for using the Sun patch management tools is provided in Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Solaris OS release that you have installed.

    If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Sun Cluster System Administration Guide for Solaris OS to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.

    For a list of patches that affect Sun Cluster, see the Sun Cluster Wiki Patch Klatch.

    For required firmware, see the Sun System Handbook.

  15. If necessary, install the required Solaris patches for storage array support on Node B.

    For a list of required Solaris patches for storage array support, see the Sun StorEdge T3 Disk Tray Release Notes.

  16. Shut down Node B.

    For the procedure about how to shut down and power off a node, see your Sun Cluster system administration documentation.

  17. Perform a reconfiguration boot to create the new Solaris device files and links on Node B.


    # boot -r
    
  18. On Node B, update the /devices and /dev entries.


    # devfsadm -C 
    
  19. On Node B, update the paths to the DID instances.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevice clear
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scdidadm -C
      
  20. (Optional) On Node B, verify that the DIDs are assigned to the new LUNs.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevice list -n NodeB -v
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scdidadm -l
      
  21. On one node that is attached to the new storage arrays, reset the SCSI reservation state.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevice repair
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scdidadm -R
      

    Note –

    Repeat this command on the same node for each storage array LUN that you are adding to the cluster.


  22. Perform volume management administration to incorporate the new logical volumes into the cluster.

    For more information, see your Solstice DiskSuite or Veritas Volume Manager documentation.

Next Steps

The best way to enable multipathing for a cluster is to install the multipathing software and enable multipathing before installing the Sun Cluster software and establishing the cluster. For this procedure, see How to Install Sun Multipathing Software in Sun Cluster Software Installation Guide for Solaris OS. If you need to add multipathing software to an established cluster, see How to Install Sun Multipathing Software in Sun Cluster Software Installation Guide for Solaris OS and follow the troubleshooting steps to clean up the device IDs.

SAN Considerations

This section contains information about how to use a storage array in a SAN. This information is specific to a SAN in a Sun Cluster environment. Use the cluster-specific procedures in this chapter to install and maintain a storage array in your cluster.

For instructions about how to create and maintain a SAN, see the documentation that shipped with your switch hardware. For information on switch ports, zoning, and required software and firmware, also see the documentation that shipped with your switch hardware.

SAN hardware includes the following components.

SAN software includes the following components.

SAN Clustering Considerations

If you are replacing an FC switch and you intend to save the switch IP configuration for restoration to the replacement switch, wait to connect the cables to the replacement switch. Connect the cables to the replacement switch after you recall the Fabric configuration to the replacement switch. For more information about how to save and recall switch configurations, see the documentation that shipped with your switch hardware.