Oracle Solaris Cluster 3.3 With Sun StorEdge 9900 Series Storage Device Manual
3. Maintaining a Sun StorEdge or StorageTek 9900 Series Storage Array
This section contains the procedures for maintaining a storage system in a running cluster. Table 3-1 lists these procedures.
Table 3-1 Task Map: Maintaining a Storage Array
Remove a storage array.
Replace a failed host adapter.
Replace a node-to-switch component.
Replace an FC switch or a storage array-to-switch component.
Use this procedure to permanently remove a storage array. This procedure provides the flexibility to remove the host adapters from the nodes that are attached to the storage array that you are removing.
This procedure defines Node A as the node with which you begin working. Node B is another node in the cluster.
If you need to remove a storage array from more than two nodes, repeat Step 15 through Step 23 for each additional node that connects to the storage array.
Caution - During this procedure, you lose access to the data that resides on the storage array that you are removing.
Before You Begin
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.
To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify role-based access control (RBAC) authorization.
To determine whether a logical volume on this storage array is configured as a quorum device, use the following command.
# clquorum show
To add or remove a quorum device in your configuration, see your Oracle Solaris Cluster system administration documentation.
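If the output of clquorum show indicates that a LUN on this storage array is configured as a quorum device, remove that quorum device before you continue. The following sketch assumes a hypothetical quorum device named d4; substitute the device name that clquorum show reports on your cluster.
# clquorum show
# clquorum remove d4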
For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
For the procedure about how to remove a host adapter, see the documentation that shipped with your host adapter or updated information on the manufacturer's web site.
Record this information because you will use it in Step 21 and Step 22 of this procedure to return resource groups and device groups to these nodes.
Use the following commands:
# clresourcegroup status -n NodeA[ NodeB ...]
# cldevicegroup status -n NodeA[ NodeB ...]
-n NodeA[ NodeB ...]
The node or nodes for which you are determining resource groups and device groups.
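For example, if the nodes that are attached to the storage array have the hypothetical names phys-schost-1 and phys-schost-2, you might record the status for the first node as follows, and then repeat the commands for each additional attached node.
# clresourcegroup status -n phys-schost-1
# cldevicegroup status -n phys-schost-1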
For more information, see your Oracle Solaris Cluster system administration documentation.
For the procedure about how to shut down a node, see your Oracle Solaris Cluster system administration documentation.
For the procedure about how to remove host adapters, see the documentation that shipped with your host adapter or updated information on the manufacturer's web site.
For more information, see the documentation that shipped with your server. For more information about how to boot nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Oracle Solaris Cluster System Administration Guide.
# devfsadm -C
For the procedure about how to shut down a node, see your Oracle Solaris Cluster system administration documentation.
For the procedure about how to remove host adapters, see the documentation that shipped with your server and host adapter.
For more information, see the documentation that shipped with your server. For more information about how to boot nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Oracle Solaris Cluster System Administration Guide.
Perform the following step for each device group you want to return to the original node.
# cldevicegroup switch -n nodename devicegroup1[ devicegroup2 ...]
-n nodename
The node to which you are restoring device groups.
devicegroup1[ devicegroup2 ...]
The device group or groups that you are restoring to the node.
Perform the following step for each resource group you want to return to the original node.
# clresourcegroup switch -n nodename resourcegroup1[ resourcegroup2 …]
-n nodename
For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.
resourcegroup1[ resourcegroup2 …]
The resource group or groups that you are returning to the node or nodes.
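As an illustration only, assuming the hypothetical names phys-schost-1 (original node), dg-schost-1 (device group), and rg-schost-1 (resource group), the return steps might look like the following. Substitute your own node, device group, and resource group names.
# cldevicegroup switch -n phys-schost-1 dg-schost-1
# clresourcegroup switch -n phys-schost-1 rg-schost-1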
# devfsadm -C
# cldevice clear
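After these commands complete, you can optionally confirm that no DID instances for the removed storage array remain. This verification is a suggestion and is not a required step of the procedure.
# cldevice list -v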
Use this procedure to replace a failed host adapter in a running cluster. This procedure defines Node A as the node with the failed host adapter that you are replacing.
Before You Begin
This procedure relies on the following prerequisites and assumptions.
Except for the failed host adapter, your cluster is operational and all nodes are powered on.
Your nodes are not configured with dynamic reconfiguration functionality.
If your nodes are configured for dynamic reconfiguration and you are using two entirely separate hardware paths to your shared data, see the Oracle Solaris Cluster 3.3 Hardware Administration Manual and skip steps that instruct you to shut down the cluster.
You cannot replace a single, dual-port HBA that has quorum configured on that storage path by using DR. Follow all steps in the procedure. For the details on the risks and limitations of this configuration, see Configuring Cluster Nodes With a Single, Dual-Port HBA in Oracle Solaris Cluster 3.3 Hardware Administration Manual.
Exceptions to this restriction include three-node or larger cluster configurations where no storage device has a quorum device configured.
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.
Record this information because you will use it in Step 10 and Step 11 of this procedure to return resource groups and device groups to Node A.
# clresourcegroup status -n NodeA
# cldevicegroup status -n NodeA
-n NodeA
The node for which you are determining resource groups and device groups.
# clnode evacuate nodename
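For example, if the node with the failed host adapter has the hypothetical name phys-schost-1, you would move its resource groups and device groups to the other cluster nodes as follows.
# clnode evacuate phys-schost-1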
For the full procedure about how to shut down and power off a node, see Chapter 3, Shutting Down and Booting a Cluster, in Oracle Solaris Cluster System Administration Guide.
To remove and add host adapters, see the documentation that shipped with your nodes.
If you do not need to upgrade firmware, skip to Step 9.
The Oracle Enterprise Manager Ops Center 2.5 software helps you patch and monitor your data center assets. Oracle Enterprise Manager Ops Center 2.5 helps improve operational efficiency and ensures that you have the latest software patches for your software. Contact your Oracle representative to purchase Oracle Enterprise Manager Ops Center 2.5.
Additional information for using the Oracle patch management tools is provided in Oracle Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Oracle Solaris OS release that you have installed.
If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Oracle Solaris Cluster System Administration Guide to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.
For required firmware, see the Sun System Handbook.
For more information about how to boot nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Oracle Solaris Cluster System Administration Guide.
Do the following for each device group that you want to return to the original node.
# cldevicegroup switch -n nodename devicegroup1[ devicegroup2 ...]
-n nodename
The node to which you are restoring device groups.
devicegroup1[ devicegroup2 ...]
The device group or groups that you are restoring to the node.
Do the following for each resource group that you want to return to the original node.
# clresourcegroup switch -n nodename resourcegroup1[ resourcegroup2 …]
-n nodename
For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.
resourcegroup1[ resourcegroup2 …]
The resource group or groups that you are returning to the node or nodes.
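To confirm that the resource groups and device groups returned to Node A, you can rerun the status commands from the beginning of this procedure. This check is optional and assumes the hypothetical node name phys-schost-1.
# clresourcegroup status -n phys-schost-1
# cldevicegroup status -n phys-schost-1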
Use this procedure to replace an FC switch or the following storage array-to-switch components in a running cluster.
Fiber-optic cable that connects an FC switch to a storage array
GBIC on an FC switch, connecting to a storage array
FC switch
For the procedure about how to replace a fiber-optic cable between a storage array and an FC switch, see the documentation that shipped with your switch hardware.
For the procedure about how to replace a GBIC on an FC switch, see the documentation that shipped with your switch hardware.
For the procedure about how to replace an SFP on the storage array, contact your service provider.
For the procedure about how to replace an FC switch, see the documentation that shipped with your switch hardware.
Note - If you are replacing an FC switch and you intend to save the switch configuration for restoration to the replacement switch, do not connect the cables to the replacement switch until after you recall the Fabric configuration to the replacement switch. For more information about how to save and recall switch configurations, see the documentation that shipped with your switch hardware.
Before you replace an FC switch, be sure that the probe_timeout parameter of your data service software is set to more than 90 seconds. Increasing the value of the probe_timeout parameter to more than 90 seconds avoids unnecessary resource group restarts when one of the FC switches is powered off.
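As an illustration, assuming a hypothetical data service resource named oracle-rs, you could check the current value and raise it with commands such as the following. Verify the property name and an appropriate value against your data service documentation before you change it.
# clresource show -p Probe_timeout oracle-rs
# clresource set -p Probe_timeout=120 oracle-rs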
Use this procedure to replace a node-to-switch component that has failed or that you suspect might be contributing to a problem.
Note - Node-to-switch components that are covered by this procedure include the following components:
Node-to-switch fiber-optic cables
Gigabit interface converters (GBICs) or small form-factor pluggables (SFPs) on an FC switch
FC switches
To replace a host adapter, see How to Replace a Host Adapter.
This procedure defines Node A as the node that is connected to the node-to-switch component that you are replacing. This procedure assumes that, except for the component you are replacing, your cluster is operational.
Ensure that you are following the appropriate instructions:
If your cluster uses multipathing, see How to Replace a Node-to-Switch Component in a Cluster That Uses Multipathing.
If your cluster does not use multipathing, see How to Replace a Node-to-Switch Component in a Cluster Without Multipathing.
Refer to your hardware documentation for any component-specific instructions.
Before You Begin
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.
You have completed this procedure.
# clresourcegroup status -n NodeA
# cldevicegroup status -n NodeA
-n NodeA
The node for which you are determining resource groups and device groups.
# clnode evacuate nodename
Refer to your hardware documentation for any component-specific instructions.
Do the following for each device group that you want to return to the original node.
# cldevicegroup switch -n nodename devicegroup1[ devicegroup2 ...]
-n nodename
The node to which you are restoring device groups.
devicegroup1[ devicegroup2 ...]
The device group or groups that you are restoring to the node.
Do the following for each resource group that you want to return to the original node.
# clresourcegroup switch -n nodename resourcegroup1[ resourcegroup2 …]
-n nodename
For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.
resourcegroup1[ resourcegroup2 …]
The resource group or groups that you are returning to the node or nodes.