This section contains the procedures for maintaining a storage system in a running cluster. Table 3–1 lists these procedures.
Table 3–1 Task Map: Maintaining a Storage System

Task | Information
---|---
Remove a storage array. | How to Remove a Storage Array
Upgrade storage array firmware. | How to Upgrade Storage Array Firmware
Replace a node-to-switch component. | How to Replace a Node-to-Switch Component in a Cluster Without Multipathing
Replace a node's host adapter. | How to Replace a Host Adapter
Replace a disk drive. | How to Replace a Disk Drive
Add a node to the storage array. | Sun Cluster system administration documentation
Remove a node from the storage array. | Sun Cluster system administration documentation
How to Upgrade Storage Array Firmware

Use this procedure to upgrade storage array firmware in a running cluster. Storage array firmware includes controller firmware, unit interconnect card (UIC) firmware, EPROM firmware, and disk drive firmware.
When you upgrade firmware on a storage device or on an enclosure, redefine the stripe size of a LUN, or perform other LUN operations, a device ID might change unexpectedly. When you perform a check of the device ID configuration by running the cldevice check or scdidadm -c command, the following error message appears on your console if the device ID changed unexpectedly.
device id for nodename:/dev/rdsk/cXtYdZsN does not match physical device's id for ddecimalnumber, device may have been replaced.
To fix device IDs that report this error, run the cldevice repair or scdidadm -R command for each affected device.
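For example, with the placeholder device path /dev/rdsk/c1t3d0 (substitute the device reported in the error message), the repair looks like this. If you are using Sun Cluster 3.2:

# cldevice repair /dev/rdsk/c1t3d0

If you are using Sun Cluster 3.1:

# scdidadm -R /dev/rdsk/c1t3d0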
1. Stop all I/O to the storage arrays you are upgrading.
2. Apply the controller, disk drive, and loop-card firmware patches by using the arrays' GUI tools.
For specific instructions, see your storage array's documentation.
3. Confirm that all storage arrays that you upgraded are visible to all nodes.
# luxadm probe
4. Restart all I/O to the storage arrays.
You stopped I/O to these storage arrays in Step 1.
How to Remove a Storage Array

Use this procedure to permanently remove a storage array from a running cluster.
This procedure defines Node N as the node that is connected to the storage array you are removing and the node with which you begin working.
During this procedure, you lose access to the data that resides on the storage array that you are removing.
This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.
To perform this procedure, become superuser or assume a role that provides solaris.cluster.modify role-based access control (RBAC) authorization.
1. If necessary, back up all database tables, data services, and volumes.
2. Remove references to the volumes that reside on the storage array that you are removing.
For more information, see your Sun Cluster, Solaris Volume Manager, or Veritas Volume Manager documentation.
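For illustration only, removing a Solaris Volume Manager volume and a Veritas Volume Manager volume might look like the following; the diskset demo-set, volume d10, disk group dg1, and volume vol01 are all placeholder names:

# metaclear -s demo-set d10
# vxedit -g dg1 -rf rm vol01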
3. Disconnect the cables that connected Node N to the FC switches in your storage array.
4. On all nodes, remove the obsolete Solaris links and device IDs.
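A typical cleanup sequence is sketched below; run it on each node and adjust for your configuration. If you are using Sun Cluster 3.2:

# devfsadm -C
# cldevice clear

If you are using Sun Cluster 3.1:

# devfsadm -C
# scdidadm -C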
5. Repeat Step 3 through Step 4 for each node that is connected to the storage array.
Replacing a Node-to-Switch Component

Use this procedure to replace a node-to-switch component that has failed or that you suspect might be contributing to a problem.
Node-to-switch components that are covered by this procedure include the following components:
Node-to-switch fiber-optic cables
Gigabit interface converters (GBICs) or small form-factor pluggables (SFPs) on an FC switch
FC switches
To replace a host adapter, see How to Replace a Host Adapter.
This procedure defines Node A as the node that is connected to the node-to-switch component that you are replacing. This procedure assumes that, except for the component you are replacing, your cluster is operational.
Ensure that you are following the appropriate instructions:
If your cluster uses multipathing, see How to Replace a Node-to-Switch Component in a Cluster That Uses Multipathing.
If your cluster does not use multipathing, see How to Replace a Node-to-Switch Component in a Cluster Without Multipathing.
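If you are not sure whether your cluster uses multipathing, one rough check on Solaris FC configurations is whether Sun StorEdge Traffic Manager (MPxIO) is enabled. The property location varies by Solaris release, so treat this as a sketch:

# grep mpxio-disable /kernel/drv/fp.conf

A value of mpxio-disable="no" indicates that multipathing is enabled on the FC ports.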
1. If your configuration is active-passive, and if the active path is the path that needs a component replaced, make that path passive.
2. Replace the component.
Refer to your hardware documentation for any component-specific instructions.
3. (Optional) If your configuration is active-passive and you changed your configuration in Step 1, switch your original data path back to active.
How to Replace a Node-to-Switch Component in a Cluster Without Multipathing

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.
1. Become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.
2. If the physical data path has failed, do the following:
a. Replace the component.
b. Fix the volume manager error that was caused by the failed data path, as shown in the sketch after this step.
c. (Optional) If necessary, return resource groups and device groups to this node.
You have completed this procedure.
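As a hedged illustration of sub-step b, with Solaris Volume Manager you might re-enable an errored component of a mirror once the path is repaired; the mirror d10 and component c2t3d0s0 are placeholders:

# metareplace -e d10 c2t3d0s0

For Veritas Volume Manager, see your volume manager documentation for the equivalent recovery commands.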
3. If the physical data path has not failed, determine the resource groups and device groups that are running on Node A.
4. Move all resource groups and device groups to another node.
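One way to move everything off the node at once is sketched below; nodeA is a placeholder for Node A's cluster node name. If you are using Sun Cluster 3.2:

# clnode evacuate nodeA

If you are using Sun Cluster 3.1:

# scswitch -S -h nodeA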
5. Replace the node-to-switch component.
Refer to your hardware documentation for any component-specific instructions.
6. (Optional) Restore the device groups to the original node.
Do the following for each device group that you want to return to the original node.
If you are using Sun Cluster 3.2, use the following command:
# cldevicegroup switch -n nodename devicegroup1[ devicegroup2 ...]
nodename
The node to which you are restoring device groups.
devicegroup1[ devicegroup2 ...]
The device group or groups that you are restoring to the node.
If you are using Sun Cluster 3.1, use the following command:
# scswitch -z -D devicegroup -h nodename
7. (Optional) Restore the resource groups to the original node.
Do the following for each resource group that you want to return to the original node.
If you are using Sun Cluster 3.2, use the following command:
# clresourcegroup switch -n nodename resourcegroup1[ resourcegroup2 ...]
nodename
For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.
resourcegroup1[ resourcegroup2 ...]
The resource group or groups that you are returning to the node or nodes.
If you are using Sun Cluster 3.1, use the following command:
# scswitch -z -g resourcegroup -h nodename
How to Replace a Host Adapter

Use this procedure to replace a failed host adapter in a running cluster. This procedure defines Node A as the node with the failed host adapter that you are replacing.
This procedure relies on the following prerequisites and assumptions.
Except for the failed host adapter, your cluster is operational and all nodes are powered on.
Your nodes are not configured with dynamic reconfiguration functionality.
If your nodes are configured for dynamic reconfiguration and you are using two entirely separate hardware paths to your shared data, see the Sun Cluster Hardware Administration Manual for Solaris OS and skip steps that instruct you to shut down the cluster.
You cannot replace a single, dual-port HBA that has quorum configured on that storage path by using DR. Follow all steps in the procedure. For the details on the risks and limitations of this configuration, see Configuring Cluster Nodes With a Single, Dual-Port HBA in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.
Exceptions to this restriction include three-node or larger cluster configurations where no storage device has a quorum device configured.
This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.
1. Become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.
2. Determine the resource groups and device groups that are running on Node A.
Record this information because you use it in Step 10 and Step 11 of this procedure to return resource groups and device groups to Node A.
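The status commands below are usually sufficient to capture this information. If you are using Sun Cluster 3.2:

# clresourcegroup status
# cldevicegroup status

If you are using Sun Cluster 3.1:

# scstat -g
# scstat -D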
3. Move all resource groups and device groups off Node A.
4. Shut down Node A.
For the full procedure about how to shut down and power off a node, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.
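A minimal sketch of the shutdown itself, run on Node A after its groups have been moved in Step 3:

# shutdown -g0 -y -i0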
5. Power off Node A.
6. Replace the failed host adapter.
To remove and add host adapters, see the documentation that shipped with your nodes.
7. If you need to upgrade the node's host adapter firmware, boot Node A into noncluster mode by adding -x to your boot instruction. Proceed to Step 8.
If you do not need to upgrade firmware, skip to Step 9.
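On SPARC based systems, booting into noncluster mode from the OpenBoot PROM typically looks like the following; on x86 based systems, add -x to the kernel boot command instead.

ok boot -x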
8. Upgrade the host adapter firmware on Node A.
If you use the Solaris 8, Solaris 9, or Solaris 10 Operating System, Sun Connection Update Manager keeps you informed of the latest versions of patches and features. Using notifications and intelligent needs-based updating, Sun Connection helps improve operational efficiency and ensures that you have the latest software patches for your Sun software.
You can download the Sun Connection Update Manager product for free by going to http://www.sun.com/download/products.xml?id=4457d96d.
Additional information for using the Sun patch management tools is provided in Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Solaris OS release that you have installed.
If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Sun Cluster System Administration Guide for Solaris OS to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.
For a list of patches that affect Sun Cluster, see the Sun Cluster Wiki Patch Klatch.
For required firmware, see the Sun System Handbook.
9. Boot Node A into cluster mode.
For more information about how to boot nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.
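On SPARC based systems, a normal boot from the OpenBoot PROM returns the node to cluster mode:

ok boot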
10. (Optional) Restore the device groups to the original node.
Do the following for each device group that you want to return to the original node.
If you are using Sun Cluster 3.2, use the following command:
# cldevicegroup switch -n nodename devicegroup1[ devicegroup2 ...]
nodename
The node to which you are restoring device groups.
devicegroup1[ devicegroup2 ...]
The device group or groups that you are restoring to the node.
If you are using Sun Cluster 3.1, use the following command:
# scswitch -z -D devicegroup -h nodename
11. (Optional) Restore the resource groups to the original node.
Do the following for each resource group that you want to return to the original node.
If you are using Sun Cluster 3.2, use the following command:
# clresourcegroup switch -n nodename resourcegroup1[ resourcegroup2 ...]
nodename
For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.
resourcegroup1[ resourcegroup2 ...]
The resource group or groups that you are returning to the node or nodes.
If you are using Sun Cluster 3.1, use the following command:
# scswitch -z -g resourcegroup -h nodename
How to Replace a Disk Drive

Use this procedure to replace a failed disk drive in a storage array in a running cluster.
Sun storage documentation uses the following terms:
Logical volume
Logical device
Logical unit number (LUN)
This manual uses logical volume to refer to all such logical constructs.
This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.
Become superuser or assume a role that provides solaris.cluster.read RBAC authorization.
1. Determine whether the failed disk drive affects the availability of the storage array's logical volume. If it does, use volume manager commands to detach the submirror or plex.
For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
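For illustration, a detach might look like the following; the mirror d10, submirror d11, disk group dg1, volume vol01, and plex vol01-02 are placeholder names. With Solaris Volume Manager:

# metadetach d10 d11

With Veritas Volume Manager:

# vxplex -g dg1 det vol01-02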
2. If the logical volume from Step 1 is configured as a quorum device, choose another volume to configure as the quorum device. Then remove the old quorum device.
To determine whether the LUN is configured as a quorum device, use one of the following commands.
If you are using Sun Cluster 3.2, use the following command:
# clquorum show
If you are using Sun Cluster 3.1, use the following command:
# scstat -q
For procedures about how to add and remove quorum devices, see your Sun Cluster system administration documentation.
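A hedged sketch of the swap, with placeholder DID devices d12 (new quorum device) and d4 (old quorum device). If you are using Sun Cluster 3.2:

# clquorum add d12
# clquorum remove d4

If you are using Sun Cluster 3.1:

# scconf -a -q globaldev=d12
# scconf -r -q globaldev=d4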
3. Replace the failed disk drive.
4. (Optional) If the new disk drive is part of a logical volume that you want to be a quorum device, add the quorum device.
To add a quorum device, see your Sun Cluster system administration documentation.
5. If you detached a submirror or plex in Step 1, use volume manager commands to reattach the submirror or plex.
For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
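Continuing the earlier placeholder names, the reattach might look like this. With Solaris Volume Manager:

# metattach d10 d11

With Veritas Volume Manager:

# vxplex -g dg1 att vol01 vol01-02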