Oracle Solaris Cluster 3.3 With Sun StorEdge 6130 Array Manual


Maintaining Storage Arrays

This section contains procedures for maintaining a storage system in a running cluster. Table 3-1 lists these procedures.

Table 3-1 Task Map: Maintaining a Storage System

Task                                       Information
Remove a storage array.                    How to Remove a Storage Array
Upgrade storage array firmware.            How to Upgrade Storage Array Firmware
Replace a node-to-switch component.        Replacing a Node-to-Switch Component
Replace a node's host adapter.             How to Replace a Host Adapter
Replace a disk drive.                      How to Replace a Disk Drive
Add a node to the storage array.           Oracle Solaris Cluster system administration documentation
Remove a node from the storage array.      Oracle Solaris Cluster system administration documentation

How to Upgrade Storage Array Firmware

Use this procedure to upgrade storage array firmware in a running cluster. Storage array firmware includes controller firmware, unit interconnect card (UIC) firmware, EPROM firmware, and disk drive firmware.


Note - When you upgrade firmware on a storage device or on an enclosure, redefine the stripe size of a LUN, or perform other LUN operations, a device ID might change unexpectedly. When you perform a check of the device ID configuration by running the cldevice check command, the following error message appears on your console if the device ID changed unexpectedly.

device id for nodename:/dev/rdsk/cXtYdZsN does not match physical 
device's id for ddecimalnumber, device may have been replaced.

To fix device IDs that report this error, run the cldevice repair command for each affected device.
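For example, you might verify the device ID configuration and then repair an affected device as follows. The DID instance d3 is a hypothetical name; use cldevice list -v to map the /dev/rdsk path that appears in the error message to its DID instance.

  # cldevice check
  # cldevice list -v
  # cldevice repair d3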


  1. Stop all I/O to the storage arrays you are upgrading (one possible way to quiesce I/O is sketched after the last step of this procedure).
  2. Apply the controller, disk drive, and loop-card firmware patches by using the arrays' GUI tools.

    For specific instructions, see your storage array's documentation.

  3. Confirm that all storage arrays that you upgraded are visible to all nodes.
    # luxadm probe
  4. Restart all I/O to the storage arrays.

    You stopped I/O to these storage arrays in Step 1.
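One possible way to quiesce and resume I/O for Steps 1 and 4, assuming the affected LUNs belong to a device group named dg-schost-1 (a hypothetical name) and that no applications are using the volumes directly, is to take the device group offline before the upgrade and bring it back online afterward:

  # cldevicegroup offline dg-schost-1
  (upgrade the firmware)
  # cldevicegroup online dg-schost-1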

How to Remove a Storage Array

Use this procedure to permanently remove a storage array from a running cluster.

This procedure defines Node N as the node that is connected to the storage array you are removing and the node with which you begin working.


Caution - During this procedure, you lose access to the data that resides on the storage array that you are removing.


This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.

Before You Begin

To perform this procedure, become superuser or assume a role that provides solaris.cluster.modify role-based access control (RBAC) authorization.

  1. If necessary, back up all database tables, data services, and volumes that are associated with each partner group that is affected.
  2. Remove references to the volumes that reside on the storage array that you are removing (a Solaris Volume Manager sketch follows this procedure).

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.

  3. Disconnect the cables that connected Node N to the FC switches in your storage array.
  4. On all nodes, remove the obsolete Oracle Solaris links and device IDs.
    # devfsadm -C 
    # cldevice clear
  5. Repeat Step 3 through Step 4 for each node that is connected to the storage array.
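As a sketch for Step 2, assume that the storage array hosts a Solaris Volume Manager submirror d12 that is attached to mirror d10 (both hypothetical names). You might remove the references to the array's volumes as follows; for Veritas Volume Manager, use the equivalent commands from that product's documentation.

  # metadetach d10 d12
  # metaclear d12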

Replacing a Node-to-Switch Component

Use this procedure to replace a node-to-switch component that has failed or that you suspect might be contributing to a problem.


Note - Node-to-switch components that are covered by this procedure include the fiber-optic cables and FC switches that connect a node to the storage array.

To replace a host adapter, see How to Replace a Host Adapter.


This procedure defines Node A as the node that is connected to the node-to-switch component that you are replacing. This procedure assumes that, except for the component you are replacing, your cluster is operational.

Ensure that you are following the appropriate instructions:

How to Replace a Node-to-Switch Component in a Cluster That Uses Multipathing

How to Replace a Node-to-Switch Component in a Cluster Without Multipathing

How to Replace a Node-to-Switch Component in a Cluster That Uses Multipathing

  1. If your configuration is active-passive, and if the active path is the path whose component you are replacing, make that path passive (a sketch for checking path states follows this procedure).
  2. Replace the component.

    Refer to your hardware documentation for any component-specific instructions.

  3. (Optional) If your configuration is active-passive and you changed your configuration in Step 1, switch your original data path back to active.
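As a hedged sketch for Step 1, on systems that use Solaris I/O multipathing you can inspect path states with the mpathadm command before and after the replacement. How you make a path passive depends on your multipathing software and the array's management tools.

  # mpathadm list lu
  # mpathadm show lu logical-unit

  logical-unit

    The logical-unit name that mpathadm list lu reports for the affected volume.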

How to Replace a Node-to-Switch Component in a Cluster Without Multipathing

Before You Begin

This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.

  1. Become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.
  2. If the physical data path has failed, do the following:
    1. Replace the component.
    2. Fix the volume manager error that was caused by the failed data path.
    3. (Optional) If necessary, return resource groups and device groups to this node.

    You have completed this procedure.

  3. If the physical data path has not failed, determine the resource groups and device groups that are running on Node A.
    # clresourcegroup status -n NodeA
    # cldevicegroup status -n NodeA
    -n NodeA

    The node for which you are determining resource groups and device groups.

  4. Move all resource groups and device groups to another node (a worked sequence with hypothetical names follows this procedure).
    # clnode evacuate nodename
  5. Replace the node-to-switch component.

    Refer to your hardware documentation for any component-specific instructions.

  6. (Optional) Restore the device groups to the original node.

    Do the following for each device group that you want to return to the original node.

    # cldevicegroup switch -n nodename devicegroup1[ devicegroup2 ...]
    -n nodename

    The node to which you are restoring device groups.

    devicegroup1[ devicegroup2 …]

    The device group or groups that you are restoring to the node.

  7. (Optional) Restore the resource groups to the original node.

    Do the following for each resource group that you want to return to the original node.

    # clresourcegroup switch -n nodename resourcegroup1[ resourcegroup2 …]
    nodename

    For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.

    resourcegroup1[ resourcegroup2 …]

    The resource group or groups that you are returning to the node or nodes.
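The following sketch ties Steps 3 through 7 together by using hypothetical names: node phys-schost-1, device group dg-schost-1, and resource group rg-schost-1. Substitute the names from your own configuration.

  # clresourcegroup status -n phys-schost-1
  # cldevicegroup status -n phys-schost-1
  # clnode evacuate phys-schost-1
  (replace the node-to-switch component)
  # cldevicegroup switch -n phys-schost-1 dg-schost-1
  # clresourcegroup switch -n phys-schost-1 rg-schost-1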

How to Replace a Host Adapter

Use this procedure to replace a failed host adapter in a running cluster. This procedure defines Node A as the node with the failed host adapter that you are replacing.

Before You Begin

This procedure assumes that, except for the failed host adapter, your cluster is operational.

This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.

  1. Become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.
  2. Determine the resource groups and device groups that are running on Node A.

    Record this information because you will use it in Step 10 and Step 11 of this procedure to return resource groups and device groups to Node A.

    # clresourcegroup status -n NodeA 
    # cldevicegroup status -n NodeA
    -n NodeA

    The node for which you are determining resource groups and device groups.

  3. Move all resource groups and device groups off Node A.
    # clnode evacuate nodename
  4. Shut down Node A.

    For the full procedure about how to shut down and power off a node, see Chapter 3, Shutting Down and Booting a Cluster, in Oracle Solaris Cluster System Administration Guide.

  5. Power off Node A.
  6. Replace the failed host adapter.

    To remove and add host adapters, see the documentation that shipped with your nodes.

  7. If you need to upgrade the node's host adapter firmware, boot Node A into noncluster mode by adding -x to your boot instruction, and then proceed to Step 8 (a boot sketch follows this procedure).

    If you do not need to upgrade firmware, skip to Step 9.

  8. Upgrade the host adapter firmware on Node A.

    The Oracle Enterprise Manager Ops Center 2.5 software helps you patch and monitor your data center assets. Oracle Enterprise Manager Ops Center 2.5 helps improve operational efficiency and ensures that you have the latest software patches for your software. Contact your Oracle representative to purchase Oracle Enterprise Manager Ops Center 2.5.

    Additional information for using the Oracle patch management tools is provided in Oracle Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Oracle Solaris OS release that you have installed.

    If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Oracle Solaris Cluster System Administration Guide to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.

    For required firmware, see the Sun System Handbook.

  9. Boot Node A into cluster mode.

    For more information about how to boot nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Oracle Solaris Cluster System Administration Guide.

  10. (Optional) Restore the device groups to the original node.

    Do the following for each device group that you want to return to the original node.

    # cldevicegroup switch -n nodename devicegroup1[ devicegroup2 ...]
    -n nodename

    The node to which you are restoring device groups.

    devicegroup1[ devicegroup2 …]

    The device group or groups that you are restoring to the node.

  11. (Optional) Restore the resource groups to the original node.

    Do the following for each resource group that you want to return to the original node.

    # clresourcegroup switch -n nodename resourcegroup1[ resourcegroup2 …]
    nodename

    For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.

    resourcegroup1[ resourcegroup2 …]

    The resource group or groups that you are returning to the node or nodes.
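As a sketch for Steps 4 through 9 on a SPARC based system, the following commands shut down Node A to the OpenBoot PROM prompt, boot it into noncluster mode for the firmware upgrade, and then reboot it into cluster mode. On an x86 based system, add -x to the kernel boot line in the GRUB menu instead of using boot -x.

  # shutdown -g0 -y -i0
  (power off Node A and replace the host adapter)
  ok boot -x
  (upgrade the host adapter firmware)
  # shutdown -g0 -y -i6

  The final reboot returns Node A to cluster mode because booting into the cluster is the default behavior.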

How to Replace a Disk Drive

Use this procedure to replace a failed disk drive in a storage array in a running cluster.


Note - Oracle storage documentation uses several different terms, such as volume and LUN, for the same logical construct.

This manual uses logical volume to refer to all such logical constructs.


Before You Begin

This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.

  1. Become superuser or assume a role that provides solaris.cluster.read RBAC authorization.
  2. Determine whether the failed disk drive affects the availability of the storage array's logical volume. If it does, use volume manager commands to detach the submirror or plex (a sketch follows this procedure).

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.

  3. If the logical volume from Step 2 is configured as a quorum device, choose another volume to configure as the quorum device. Then remove the old quorum device.

    To determine whether the LUN is configured as a quorum device, use the following command.

    # clquorum show

    For procedures about how to add and remove quorum devices, see your Oracle Solaris Cluster system administration documentation.

  4. Replace the failed disk drive.
  5. (Optional) If the new disk drive is part of a logical volume that you want to be a quorum device, add the quorum device.

    To add a quorum device, see your Oracle Solaris Cluster system administration documentation.

  6. If you detached a submirror or plex in Step 2, use volume manager commands to reattach the submirror or plex.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
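As a sketch for Steps 2 through 6, assume that the failed drive holds Solaris Volume Manager submirror d21, which is attached to mirror d20, and that DID device d4 is the old quorum device and d5 is the replacement. All of these names are hypothetical.

  # metadetach d20 d21
  # clquorum add d5
  # clquorum remove d4
  (replace the disk drive)
  # metattach d20 d21

  Adding the new quorum device before removing the old one helps the cluster maintain its quorum vote count during the replacement.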