Oracle Solaris Cluster 3.3 With Sun StorEdge T3 or T3+ Array Manual SPARC Platform Edition
1. Installing and Configuring a Sun StorEdge T3 or T3+ Array
2. Maintaining and Upgrading a Sun StorEdge T3 or T3+ Array
Maintaining Sun StorEdge T3 or T3+ Storage Array Components
Sun StorEdge T3 and T3+ Array FRUs
Replacing a Node-to-Switch Component
How to Replace a Node-to-Switch Component in a Cluster That Uses Multipathing
How to Replace a Node-to-Switch Component in a Cluster Without Multipathing
How to Replace a Hub/Switch or Hub/Switch-to-Array Component in a Single-Controller Configuration
How to Replace an FC Switch or Storage Array-to-Switch Component in a Partner-Group Configuration
How to Replace a Storage Array Controller
How to Remove a Storage Array in a Single-Controller Configuration
Upgrading Sun StorEdge T3 or T3+ Storage Arrays
How to Upgrade Storage Array Firmware (No Submirrors)
How to Upgrade Storage Array Firmware When Using Mirroring
How to Upgrade a StorEdge T3 Controller to a StorEdge T3+ Controller
How to Migrate From a Single-Controller Configuration to a Partner-Group Configuration
This section contains the procedures for maintaining a storage array. The following table lists these procedures. This section does not include procedures for adding or removing a disk drive, because a storage array operates only when it is fully configured.
Caution - If you remove any field-replaceable unit (FRU) for an extended period of time, thermal complications might result. To prevent this complication, the storage array is designed so that an orderly shutdown occurs when you remove a component for longer than 30 minutes. Therefore, a replacement part must be immediately available before you start an FRU replacement procedure. You must replace an FRU within 30 minutes. If you do not replace the FRU within this time, the storage array, and all attached storage arrays, shut down and power off.
Note - When you upgrade firmware on a storage device or on an enclosure, redefine the stripe size of a LUN, or perform other LUN operations, a device ID might change unexpectedly. When you perform a check of the device ID configuration by running the cldevice check command, the following error message appears on your console if the device ID changed unexpectedly.
device id for nodename:/dev/rdsk/cXtYdZsN does not match physical device's id for ddecimalnumber, device may have been replaced.
To fix device IDs that report this error, run the cldevice repair command for each affected device.
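For example, if the console reports this error for a hypothetical DID device d5, you might rerun the check and then repair that device. The device name is a placeholder; substitute the instance that is reported in the error message on your console.
# cldevice check
# cldevice repair d5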
Table 2-1 Task Map: Maintaining a Storage Array
The following is a list of administrative tasks that require no cluster-specific procedures. See the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual for the following procedures.
Upgrading a Sun StorEdge T3 array controller to a Sun StorEdge T3+ array controller requires no cluster-specific procedures. See the Sun StorEdge T3 Array Controller Upgrade Manual for this procedure.
Use this procedure to replace a failed disk drive in a storage array in a running cluster.
Note - Oracle storage documentation uses the following terms:
Logical volume
Logical device
Logical unit number (LUN)
This manual uses logical volume to refer to all such logical constructs.
Before You Begin
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.
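For example, the clresourcegroup command can also be typed as clrg, and the cldevicegroup command as cldg. The following two commands are therefore equivalent:
# clresourcegroup status
# clrg status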
For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
To determine whether the LUN is configured as a quorum device, use one of the following commands.
# clquorum show
For procedures about how to add and remove quorum devices, see your Oracle Solaris Cluster system administration documentation.
For instructions, refer to the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.
To add a quorum device, see your Oracle Solaris Cluster system administration documentation.
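As an illustration, if clquorum show reveals that the LUN is configured as a quorum device named d4 (a hypothetical device name), you might remove the quorum device before maintenance:
# clquorum remove d4
After the maintenance is complete, add the quorum device back:
# clquorum add d4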
For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
Use this procedure to replace a node-to-switch component that has failed or that you suspect might be contributing to a problem.
Note - Node-to-switch components that are covered by this procedure include the following components:
Node-to-switch fiber-optic cables
Gigabit interface converters (GBICs) or small form-factor pluggables (SFPs) on an FC switch
FC switches
To replace a host adapter, see How to Replace a Host Adapter.
This procedure defines Node A as the node that is connected to the node-to-switch component that you are replacing. This procedure assumes that, except for the component you are replacing, your cluster is operational.
Ensure that you are following the appropriate instructions:
If your cluster uses multipathing, see How to Replace a Node-to-Switch Component in a Cluster That Uses Multipathing.
If your cluster does not use multipathing, see How to Replace a Node-to-Switch Component in a Cluster Without Multipathing.
Refer to your hardware documentation for any component-specific instructions.
Before You Begin
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.
You have completed this procedure.
# clresourcegroup status -n NodeA
# cldevicegroup status -n NodeA
The node for which you are determining resource groups and device groups.
# clnode evacuate nodename
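The following sequence sketches these steps for a hypothetical node named phys-schost-1; substitute your own node name:
# clresourcegroup status -n phys-schost-1
# cldevicegroup status -n phys-schost-1
# clnode evacuate phys-schost-1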
Refer to your hardware documentation for any component-specific instructions.
Do the following for each device group that you want to return to the original node.
# cldevicegroup switch -n nodename devicegroup1[ devicegroup2 ...]
The node to which you are restoring device groups.
The device group or groups that you are restoring to the node.
Do the following for each resource group that you want to return to the original node.
# clresourcegroup switch -n nodename resourcegroup1[ resourcegroup2 …]
For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.
The resource group or groups that you are returning to the node or nodes.
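For example, to return a hypothetical device group oracle-dg and a hypothetical resource group oracle-rg to the node phys-schost-1, you might run the following commands:
# cldevicegroup switch -n phys-schost-1 oracle-dg
# clresourcegroup switch -n phys-schost-1 oracle-rg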
Use this procedure to replace a hub/switch or the following hub/switch-to-array components for an array in a single-controller configuration. To replace these components for an array in a partner-group configuration, see How to Replace an FC Switch or Storage Array-to-Switch Component in a Partner-Group Configuration. You can use a storage array in a single-controller configuration with FC switches when you create a SAN.
Fiber-optic cable that connects a hub/switch to a storage array
FC hub/switch GBIC or an SFP that connects a hub/switch to a storage array
FC hub/switch
FC hub/switch power cord
Media interface adapter (MIA) on a StorEdge T3 array
This component does not apply to StorEdge T3+ arrays.
For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
For the procedure about how to replace a fiber-optic cable, see the Sun StorEdge T3 and T3+ Array Configuration Guide.
For the procedure about how to replace an FC hub/switch GBIC or an SFP, an FC hub/switch, or an FC hub/switch power cord, see the documentation that shipped with your FC hub/switch hardware.
For the procedure about how to replace an MIA, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.
If you are replacing FC switches in a SAN, follow the hardware installation and SAN configuration instructions in the documentation that shipped with your switch hardware.
Note - If you are replacing an FC switch and you intend to save the switch configuration for restoration to the replacement switch, do not connect the cables to the replacement switch until after you recall the Fabric configuration to the replacement switch. For more information about how to save and recall switch configurations, see the documentation that shipped with your switch hardware.
Note - Before you replace an FC switch, be sure that the probe_timeout parameter of your data service software is set to more than 90 seconds. If you increase the value of the probe_timeout parameter to more than 90 seconds, you avoid unnecessary resource group restarts. Resource group restarts occur when one of the FC switches is powered off.
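As a sketch, assuming a resource named ha-nfs-rs whose data service supports a Probe_timeout extension property (both the resource name and the exact property name depend on your data service; check your data service documentation), you might set a 120-second value as follows:
# clresource set -p Probe_timeout=120 ha-nfs-rs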
For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
Use this procedure to replace components for a storage array in a partner-group configuration in a running cluster. To replace components for arrays in a single-controller configuration, see How to Replace a Hub/Switch or Hub/Switch-to-Array Component in a Single-Controller Configuration. Use this procedure to replace the following storage array-to-switch components.
Fiber-optic cable that connects an FC switch to a storage array.
GBIC or an SFP on an FC switch, connecting to a storage array.
FC switch.
Media Interface Adapter (MIA) on a Sun StorEdge T3 storage array. This component does not apply to Sun StorEdge T3+ storage arrays.
Interconnect cables between two interconnected storage arrays of a partner group.
For more information, see the Sun StorEdge T3 and T3+ Array Administrator's Guide.
If the controller is already DISABLED, skip to Step 5.
For more information, see the Sun StorEdge T3 and T3+ Array Administrator's Guide.
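As a sketch only, checking and disabling a controller on the storage array's command line might resemble the following, where u1 stands for the controller unit that you are replacing. The sys stat and disable commands are described in the Sun StorEdge T3 and T3+ Array Administrator's Guide; verify the exact syntax there before you proceed.
t3:/:<1> sys stat
t3:/:<2> disable u1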
Note - If you are replacing an FC switch and you intend to save the switch IP configuration for restoration to the replacement switch, wait to connect the cables to the replacement switch. Connect the cables to the replacement switch after you recall the Fabric configuration to the replacement switch. For more information about how to save and recall switch configurations, see the documentation that shipped with your switch hardware.
Note - Before you replace an FC switch, be sure that the probe_timeout parameter of your data service software is set to more than 90 seconds. If you increase the value of the probe_timeout parameter to more than 90 seconds, you avoid unnecessary resource group restarts. Resource group restarts occur when one of the FC switches is powered off.
For more information, see the Sun StorEdge T3 and T3+ Array Administrator's Guide.
For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
For the procedure about how to replace a storage array controller, see the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.
For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
Use this procedure to replace a storage array chassis. This procedure assumes that you are retaining all FRUs other than the chassis and the midplane. To replace the chassis, you must replace both the chassis and the midplane because these components are manufactured as one part.
Caution - You must be an Oracle service provider to perform this procedure. If you need to replace a chassis, contact your Oracle service provider.
For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
For the procedure about how to replace a storage array chassis, see the Sun StorEdge T3 and T3+ Array Field Service Manual.
Caution - The world wide names (WWNs) change as a result of this procedure. You must reconfigure your volume manager software to recognize the new WWNs.
For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
Use this procedure to replace a failed host adapter in a running cluster. This procedure defines Node A as the node with the failed host adapter that you are replacing.
Before You Begin
This procedure relies on the following prerequisites and assumptions.
Except for the failed host adapter, your cluster is operational and all nodes are powered on.
Your nodes are not configured with dynamic reconfiguration functionality.
If your nodes are configured for dynamic reconfiguration and you are using two entirely separate hardware paths to your shared data, see the Oracle Solaris Cluster 3.3 Hardware Administration Manual and skip steps that instruct you to shut down the cluster.
You cannot replace a single, dual-port HBA that has quorum configured on that storage path by using DR. Follow all steps in the procedure. For the details on the risks and limitations of this configuration, see Configuring Cluster Nodes With a Single, Dual-Port HBA in Oracle Solaris Cluster 3.3 Hardware Administration Manual.
Exceptions to this restriction include three-node or larger cluster configurations where no storage device has a quorum device configured.
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.
Record this information because you use this information in Step 10 and Step 11 of this procedure to return resource groups and device groups to Node A.
# clresourcegroup status -n NodeA
# cldevicegroup status -n NodeA
The node for which you are determining resource groups and device groups.
# clnode evacuate nodename
For the full procedure about how to shut down and power off a node, see Chapter 3, Shutting Down and Booting a Cluster, in Oracle Solaris Cluster System Administration Guide.
To remove and add host adapters, see the documentation that shipped with your nodes.
If you do not need to upgrade firmware, skip to Step 9.
The Oracle Enterprise Manager Ops Center 2.5 software helps you patch and monitor your data center assets. Oracle Enterprise Manager Ops Center 2.5 helps improve operational efficiency and ensures that you have the latest software patches for your software. Contact your Oracle representative to purchase Oracle Enterprise Manager Ops Center 2.5.
Additional information for using the Oracle patch management tools is provided in Oracle Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Oracle Solaris OS release that you have installed.
If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Oracle Solaris Cluster System Administration Guide to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.
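As an illustration of a rolling patch on one node, assuming a node named phys-schost-1 and a patch staged in /var/tmp/123456-01 (both placeholders), the sequence might resemble the following. See How to Apply a Rebooting Patch (Node) for the authoritative procedure.
# clnode evacuate phys-schost-1
# shutdown -g0 -y -i0
ok boot -x
# patchadd /var/tmp/123456-01
# shutdown -g0 -y -i6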
For required firmware, see the Sun System Handbook.
For more information about how to boot nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Oracle Solaris Cluster System Administration Guide.
Do the following for each device group that you want to return to the original node.
# cldevicegroup switch -n nodename devicegroup1[ devicegroup2 ...]
The node to which you are restoring device groups.
The device group or groups that you are restoring to the node.
Do the following for each resource group that you want to return to the original node.
# clresourcegroup switch -n nodename resourcegroup1[ resourcegroup2 …]
For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.
The resource group or groups that you are returning to the node or nodes.
Use this procedure to remove a storage array and its submirrors from a running cluster. This procedure provides the flexibility to remove the host adapters from the nodes for the storage array that you are removing. To remove a partner group from the cluster, see How to Remove a Partner Group.
Note - When you upgrade firmware on a storage device or on an enclosure, redefine the stripe size of a LUN, or perform other LUN operations, a device ID might change unexpectedly. When you perform a check of the device ID configuration by running the cldevice check command, the following error message appears on your console if the device ID changed unexpectedly.
device id for nodename:/dev/rdsk/cXtYdZsN does not match physical device's id for ddecimalnumber, device may have been replaced.
To fix device IDs that report this error, run the cldevice repair command for each affected device.
This procedure defines Node A as the node with which you begin working. Node B is another node in the cluster.
Caution - During this procedure, you lose access to the data that resides on the storage array that you are removing.
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.
Before You Begin
To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify role-based access control (RBAC) authorization.
For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
Record this information because you use this information in Step 18 and Step 19 of this procedure to return resource groups and device groups to these nodes.
Use the following command:
# clresourcegroup status +
# cldevicegroup status +
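Because the + operand reports all resource groups and device groups, you might save the output to files so that you can consult it later in this procedure. The file names here are arbitrary:
# clresourcegroup status + > /var/tmp/rgstatus.before
# cldevicegroup status + > /var/tmp/dgstatus.before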
For the procedure about how to shut down and power off a node, see your Oracle Solaris Cluster system administration documentation.
If this is not the last storage array, skip this step.
For the procedure about how to remove a fiber-optic cable, see the Sun StorEdge T3 and T3+ Array Configuration Guide.
Note - If you are using your storage array in a SAN-configured cluster, you must keep two FC switches configured in parallel. This configuration maintains cluster availability. See SAN Solutions in an Oracle Solaris Cluster Environment in Oracle Solaris Cluster 3.3 Hardware Administration Manual for more information.
If you do not want to remove host adapters, skip to Step 10.
For the procedure about how to remove host adapters, see the documentation that shipped with your nodes.
For more information, see your Oracle Solaris Cluster system administration documentation.
For more information about how to boot nodes, see your Oracle Solaris Cluster system administration documentation.
For the procedure about how to shut down and power off a node, see your Oracle Solaris Cluster system administration documentation.
For the procedure about how to remove a fiber-optic cable, see the Sun StorEdge T3 and T3+ Array Configuration Guide.
Note - If you are using your storage array in a SAN-configured cluster, you must keep two FC switches configured in parallel. This configuration maintains cluster availability. See SAN Solutions in an Oracle Solaris Cluster Environment in Oracle Solaris Cluster 3.3 Hardware Administration Manual for more information.
If you do not want to remove host adapters, skip to Step 16.
For the procedure about how to remove host adapters, see the documentation that shipped with your nodes.
For more information, see your Oracle Solaris Cluster system administration documentation.
For more information about how to boot nodes, see your Oracle Solaris Cluster system administration documentation.
# devfsadm -C
# cldevice clear
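The -C option removes the /dev links for devices that are no longer present, and the cldevice clear command removes the DID entries for those devices. To confirm that no paths to the removed storage array remain, you might list the remaining DID devices:
# cldevice list -v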
Perform the following step for each device group you want to return to the original node.
# cldevicegroup switch -n nodename devicegroup1[ devicegroup2 …]
The node to which you are restoring device groups.
The device group or groups that you are restoring to the node.
Perform the following step for each resource group you want to return to the original node.
# clresourcegroup switch -n nodename resourcegroup1[ resourcegroup2 …]
For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.
The resource group or groups that you are returning to the node or nodes.
Use this procedure to permanently remove storage array partner groups and their submirrors from a running cluster. To remove a storage array in single-controller configuration, see How to Remove a Storage Array in a Single-Controller Configuration.
Note - When you upgrade firmware on a storage device or on an enclosure, redefine the stripe size of a LUN, or perform other LUN operations, a device ID might change unexpectedly. When you perform a check of the device ID configuration by running the cldevice check command, the following error message appears on your console if the device ID changed unexpectedly.
device id for nodename:/dev/rdsk/cXtYdZsN does not match physical device's id for ddecimalnumber, device may have been replaced.
To fix device IDs that report this error, run the cldevice repair command for each affected device.
This procedure defines Node A as the cluster node that you begin working with. Node B is another node in the cluster.
Caution - During this procedure, you lose access to the data that resides on each storage array partner group that you are removing.
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.
Before You Begin
To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.
For more information, see your Veritas Volume Manager documentation.
For more information, see your Veritas Volume Manager documentation.
Record this information because you use this information in Step 19 and Step 20 of this procedure to return resource groups and device groups to these nodes.
Use the following command:
# clresourcegroup status +
# cldevicegroup status +
For the procedure about how to shut down and power off a node, see your Oracle Solaris Cluster system administration documentation.
If none of the arrays that you are removing is the last array connected to the node, skip to Step 11.
For the procedure about how to remove a fiber-optic cable, see the Sun StorEdge T3 and T3+ Array Configuration Guide.
Note - If you are using your storage arrays in a SAN-configured cluster, you must keep two FC switches configured in parallel. This configuration maintains cluster availability. See SAN Considerations for more information.
If you do not want to remove host adapters, skip to Step 11.
For the procedure about how to remove host adapters, see the documentation that shipped with your host adapter and nodes.
For more information, see your Oracle Solaris Cluster system administration documentation.
For more information on booting nodes, see your Oracle Solaris Cluster system administration documentation.
For the procedure about how to shut down and power off a node, see your Oracle Solaris Cluster system administration documentation.
If the array that you are removing is the last array connected to the node, proceed to Step 14.
For the procedure about how to remove a fiber-optic cable, see the Sun StorEdge T3 and T3+ Array Configuration Guide.
Note - If you are using your storage arrays in a SAN-configured cluster, you must keep two FC switches configured in parallel. This configuration maintains cluster availability. See SAN Considerations for more information.
If you do not want to remove host adapters, skip to Step 17.
For the procedure about how to remove host adapters, see the documentation that shipped with your nodes.
For more information, see your Oracle Solaris Cluster system administration documentation.
For more information about how to boot nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Oracle Solaris Cluster System Administration Guide.
# devfsadm -C
# cldevice clear
Perform the following step for each device group you want to return to the original node.
# cldevicegroup switch -n nodename devicegroup1[ devicegroup2 …]
The node to which you are restoring device groups.
The device group or groups that you are restoring to the node.
Perform the following step for each resource group you want to return to the original node.
# clresourcegroup switch -n nodename resourcegroup1[ resourcegroup2 …]
For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.
The resource group or groups that you are returning to the node or nodes.