Oracle Solaris Cluster 3.3 With Sun StorEdge 3310 or 3320 SCSI RAID Array Manual
1. Installing and Configuring a Sun StorEdge 3310 or 3320 SCSI RAID Array
This section contains the procedures for maintaining a RAID storage array in an Oracle Solaris Cluster environment. Table 2-1 lists the cluster-specific maintenance tasks. Tasks that are not cluster-specific are referenced in a list following the table.
Note - When you upgrade firmware on a storage device or on an enclosure, redefine the stripe size of a LUN, or perform other LUN operations, a device ID might change unexpectedly. When you perform a check of the device ID configuration by running the cldevice check command, the following error message appears on your console if the device ID changed unexpectedly.
device id for nodename:/dev/rdsk/cXtYdZsN does not match physical device's id for ddecimalnumber, device may have been replaced.
To fix device IDs that report this error, run the cldevice repair command for each affected device.
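For example, if cldevice check reports a mismatch for the device that the cluster knows as DID instance d4 (an illustrative instance number, not one taken from this manual), a minimal sketch of the repair is as follows.
# cldevice check
# cldevice repair d4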
Table 2-1 Tasks: Maintaining a RAID Storage Array

Task                         Information
Remove a RAID storage array  How to Remove a RAID Storage Array
Replace a controller         How to Replace a Controller
Replace an I/O module        How to Replace an I/O Module
Replace a terminator module  How to Replace a Terminator Module
Replace a host adapter       How to Replace a Host Adapter
The following is a list of administrative tasks that require no cluster-specific procedures. See the Sun StorEdge 3000 Family FRU Installation Guide for instructions on replacing the following FRUs.
How to Remove a RAID Storage Array

Use this procedure to remove a RAID storage array from a running cluster.
Caution - This procedure removes all data that is on the RAID storage array that you remove.
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.
Before You Begin
This procedure assumes that your nodes are not configured with dynamic reconfiguration functionality. If your nodes are configured for dynamic reconfiguration, see your Oracle Solaris Cluster Hardware Administration Manual.
To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC (role-based access control) authorization.
To determine whether any of the disks is configured as a quorum device, use the following command.
# clquorum show +
For procedures about how to add and remove quorum devices, see your Oracle Solaris Cluster system administration documentation.
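If one of the disks in this storage array is currently configured as a quorum device, configure another shared disk as the quorum device and remove the old one before you remove the array. The following is only a sketch; the DID device names d10 and d4 are hypothetical, and the authoritative steps are in your system administration documentation.
# clquorum add d10
# clquorum remove d4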
For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
If a volume manager does manage the LUN, run the appropriate Solaris Volume Manager commands or Veritas Volume Manager commands to remove the LUN from any diskset or disk group. For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation. See the following paragraph for additional Veritas Volume Manager commands that are required.
Note - LUNs that were managed by Veritas Volume Manager must be completely removed from Veritas Volume Manager control before you can delete the LUNs from the Oracle Solaris Cluster environment. After you delete the LUN from any disk group, use the following commands on both nodes to remove the LUN from Veritas Volume Manager control.
# vxdisk offline cNtXdY
# vxdisk rm cNtXdY
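If the LUN is under Solaris Volume Manager control instead, a comparable sketch removes the corresponding DID drive from its diskset before you delete the LUN. The diskset name setA and the DID device d4 are hypothetical; if this is the last drive in the diskset, the metaset command also requires the -f (force) option.
# metaset -s setA -d /dev/did/rdsk/d4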
# cfgadm -al
# cfgadm -c unconfigure cN::dsk/cNtXdY
# devfsadm -C
# cldevice clear
Perform this step on both nodes to prevent extended boot time caused by unassigned LUN entries.
Note - Do not remove default t0d0 entries.
For the procedure about how to power off a storage array, see the Sun StorEdge 3000 Family Installation, Operation, and Service Manual, 3310 SCSI Array.
For the procedure about how to remove a storage array, see the Sun StorEdge 3000 Family Installation, Operation, and Service Manual, 3310 SCSI Array.
For the procedure about how to remove a host adapter, see your Oracle Solaris Cluster system administration documentation and the documentation that shipped with your host adapter and node.
# cldevice list -v
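To confirm that the stale DID entries for the removed LUN are gone, you can filter this listing for the LUN's former cNtXdY name, as in the following sketch; the device name c1t4d0 is a placeholder.
# cldevice list -v | grep c1t4d0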
How to Replace a Controller

If the RAID storage array is configured with dual controllers, see the Sun StorEdge 3000 Family FRU Installation Guide for controller replacement procedures. If the RAID storage array is configured with a single controller, perform the procedure below to ensure high availability.
For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
For the procedure about how to replace a controller, see the Sun StorEdge 3000 Family Installation, Operation, and Service Manual, 3310 SCSI Array.
For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
How to Replace an I/O Module

Use this procedure to replace a RAID storage array I/O module.
Note - If your configuration is running in RAID level 0, take appropriate action to prepare the volume manager for the impacted disk to be inaccessible.
For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
For the procedure about how to replace the I/O module, see the Sun StorEdge 3000 Family FRU Installation Guide.
For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
How to Replace a Terminator Module

Use this procedure to replace a RAID storage array terminator module.
Note - If your configuration is running in RAID level 0, take appropriate action to prepare the volume manager for the impacted disk to be inaccessible.
For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
For the procedure about how to replace the terminator module, see the Sun StorEdge 3000 Family FRU Installation Guide.
For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
How to Replace a Host Adapter

Use this procedure to replace a failed host adapter in a running cluster. This procedure defines Node A as the node with the failed host adapter that you are replacing.
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.
Before You Begin
This procedure relies on the following prerequisites and assumptions.
Except for the failed host adapter, your cluster is operational and all nodes are powered on.
Your nodes are not configured with dynamic reconfiguration functionality.
If your nodes are configured for dynamic reconfiguration and you are using two entirely separate hardware paths to your shared data, see the Oracle Solaris Cluster 3.3 Hardware Administration Manual and skip steps that instruct you to shut down the cluster.
If you are using a single, dual-port HBA to provide the connections to your shared data, you cannot use dynamic reconfiguration for this procedure. Follow all steps in the procedure. For the details on the risks and limitations of this configuration, see Configuring Cluster Nodes With a Single, Dual-Port HBA in Oracle Solaris Cluster 3.3 Hardware Administration Manual.
To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC (role-based access control) authorization.
Record this information because you use it in Step 12 and Step 13 of this procedure to return resource groups and device groups to Node A.
# clresourcegroup status -n nodename
# cldevicegroup status -n nodename
Record this information because you use it in Step 11 of this procedure to repair any affected metadevices.
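One way to capture the current state of the affected Solaris Volume Manager metadevices, so that you can compare and repair them after the replacement, is a sketch such as the following; the diskset name setA is hypothetical.
# metastat -s setA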
# clnode evacuate NodeA
For the full procedure about how to shut down and power off a node, see your Oracle Solaris Cluster system administration documentation.
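A common way to shut down the evacuated node, shown here only as a sketch, is the standard Oracle Solaris shutdown command run on Node A.
# shutdown -g0 -y -i0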
For the procedure about how to remove and add host adapters, see the documentation that shipped with your nodes.
For more information about how to boot nodes, see your Oracle Solaris Cluster system administration documentation.
The Oracle Enterprise Manager Ops Center 2.5 software helps you patch and monitor your data center assets. Oracle Enterprise Manager Ops Center 2.5 helps improve operational efficiency and ensures that you have the latest software patches for your software. Contact your Oracle representative to purchase Oracle Enterprise Manager Ops Center 2.5.
Additional information for using the Oracle patch management tools is provided in Oracle Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Oracle Solaris OS release that you have installed.
If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Oracle Solaris Cluster System Administration Guide to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.
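Purely as an illustration of that flow, and not a substitute for the referenced procedure, a SPARC node might be booted in noncluster mode and patched roughly as follows; the patch directory /var/tmp/123456-01 is a hypothetical example.
ok boot -x
# patchadd /var/tmp/123456-01
# shutdown -g0 -y -i6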
For required firmware, see the Sun System Handbook.
For more information about how to boot nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Oracle Solaris Cluster System Administration Guide.
For more information, see your volume manager software documentation.
Perform the following step for each device group you want to return to the original node.
# cldevicegroup switch -n NodeA devicegroup1[ devicegroup2 …]
-n NodeA
The node to which you are restoring device groups.
devicegroup1[ devicegroup2 …]
The device group or groups that you are restoring to the node.
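For example, to return a single device group to Node A, where phys-schost-1 and dg-schost-1 are placeholder names for the node and the device group:
# cldevicegroup switch -n phys-schost-1 dg-schost-1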
Perform the following step for each resource group you want to return to the original node.
# clresourcegroup switch -n NodeA resourcegroup1[ resourcegroup2 …]
-n NodeA
For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.
resourcegroup1[ resourcegroup2 …]
The resource group or groups that you are returning to the node or nodes.
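For example, to return one failover resource group to Node A, again with placeholder names for the node and the resource group:
# clresourcegroup switch -n phys-schost-1 rg-schost-1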