Oracle Solaris Cluster 3.3 With Sun StorEdge A3500FC System Manual SPARC Platform Edition
1. Installing and Maintaining a Sun StorEdge A3500FC System
How to Install a Storage System in a New Cluster
How to Add a Storage System to an Existing Cluster
How to Reset the LUN Configuration
How to Correct Mismatched Device ID Numbers
How to Remove a Storage System
How to Replace a Failed Controller or Restore an Offline Controller
How to Upgrade Controller Module Firmware in a Running Cluster
How to Add a Disk Drive in a Running Cluster
How to Replace a Failed Disk Drive in a Running Cluster
How to Remove a Disk Drive From a Running Cluster
This section contains the procedures for maintaining a storage system in an Oracle Solaris Cluster environment.
Some maintenance procedures in Table 1-3 are performed the same way as in a noncluster environment. This section references those procedures but does not reproduce them. Table 1-3 lists the procedures for maintaining a storage system.
Note - When you upgrade firmware on a storage device or on an enclosure, redefine the stripe size of a LUN, or perform other LUN operations, a device ID might change unexpectedly. When you perform a check of the device ID configuration by running the cldevice check command, the following error message appears on your console if the device ID changed unexpectedly.
device id for nodename:/dev/rdsk/cXtYdZsN does not match physical device's id for ddecimalnumber, device may have been replaced.
To fix device IDs that report this error, run the cldevice repair command for each affected device.
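For example, if the error message identifies /dev/rdsk/c1t3d0 (a hypothetical device name, used here only for illustration), you would run:
# cldevice repair /dev/rdsk/c1t3d0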
Table 1-3 Tasks: Maintaining a Storage System
With one exception, the following administrative tasks require no cluster-specific procedures: the exception is that you must first shut down the cluster. After you shut down the cluster, see the Sun StorEdge A3500/A3500FC Controller Module Guide, the Sun StorEdge A1000 and D1000 Installation, Operations, and Service Manual, and the Sun StorEdge Expansion Cabinet Installation and Service Manual for the following procedures. See the Oracle Solaris Cluster system administration documentation for procedures about how to shut down a cluster.
Replacing a power cord that connects to the cabinet power distribution unit (see the Sun StorEdge Expansion Cabinet Installation and Service Manual).
Replacing a power cord to a storage array (see the Sun StorEdge A1000 and D1000 Installation, Operations, and Service Manual).
The following is a list of administrative tasks that require no cluster-specific procedures. See the Sun StorEdge A3500/A3500FC Controller Module Guide, the Sun StorEdge RAID Manager User’s Guide, the Sun StorEdge RAID Manager Release Notes, the Sun StorEdge FC-100 Hub Installation and Service Manual, and the documentation that shipped with your FC hub or FC switch for the following procedures.
Replacing a SCSI cable from the controller module to the storage array.
Replacing a storage array-to-host or storage array-to-hub fiber-optic cable.
Replacing an FC hub (see the Sun StorEdge FC-100 Hub Installation and Service Manual).
Replacing an FC hub gigabit interface converter (GBIC) or Small Form-Factor Pluggable (SFP) transceiver that connects cables to the host or hub.
How to Remove a Storage System

Use this procedure to remove a storage system from a running cluster.
Caution - This procedure removes all data on the storage system that you remove.
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.
Before You Begin
To perform this procedure, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.
For instructions, see the Sun StorEdge RAID Manager User’s Guide and your operating system documentation.
For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
You must completely remove LUNs that were managed by Veritas Volume Manager from Veritas Volume Manager control before you can delete the LUNs.
# vxdisk offline cNtXdY
# vxdisk rm cNtXdY
For the procedure about how to delete a LUN, see the Sun StorEdge RAID Manager User’s Guide.
# rm /dev/rdsk/cNtXdY*
# rm /dev/dsk/cNtXdY*
# rm /dev/osa/dev/dsk/cNtXdY*
# rm /dev/osa/dev/rdsk/cNtXdY*
The RAID Manager software creates two paths to the LUN in the /dev/osa/dev/rdsk directory. Substitute the cNtXdY number from the other controller module in the storage array to determine the alternate path.
For example, with this configuration:
# lad
c0t5d0 1T93600714 LUNS: 0 1
c1t4d0 1T93500595 LUNS: 2
The alternate paths would be the following:
/dev/osa/dev/dsk/c1t4d1*
/dev/osa/dev/rdsk/c1t4d1*
# rm /dev/osa/dev/dsk/cNtXdY*
# rm /dev/osa/dev/rdsk/cNtXdY*
# cldevice clear
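To confirm that the stale device ID entries are gone, you can list the remaining devices and their paths. This verification is a suggestion and is not part of the original procedure:
# cldevice list -v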
Note - If you are using your StorEdge A3500FC storage array in a SAN-configured cluster, you must keep two FC switches configured in parallel. This configuration maintains cluster availability. See SAN Solutions in an Oracle Solaris Cluster Environment in Oracle Solaris Cluster 3.3 Hardware Administration Manual for more information.
If you are not removing the last controller module, skip to Step 12.
Note - If there are no other parallel SCSI devices connected to the nodes, you can delete the contents of the nvramrc script and, at the OpenBoot PROM, set use-nvramrc? to false.
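The following is a sketch of that OpenBoot PROM interaction, run from the ok prompt after you delete the nvramrc contents:
ok setenv use-nvramrc? false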
For the procedure about how to shut down and power off a node, see your Oracle Solaris Cluster system administration documentation.
For the procedure about how to remove a host adapter, see the documentation that shipped with your node hardware.
# clresourcegroup online +
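To confirm that the resource groups are back online, you can check their status. This check is a suggestion and is not part of the original procedure:
# clresourcegroup status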
For the procedure about how to remove software packages, see the documentation that shipped with your storage system.
How to Replace a Failed Controller or Restore an Offline Controller

Use this procedure to replace a failed controller or to restore an offline controller.
For conceptual information on SCSI reservations and failure fencing, see your Oracle Solaris Cluster concepts documentation.
Note - If you want to create a SAN by using two FC switches and Sun SAN software, see SAN Solutions in an Oracle Solaris Cluster Environment in Oracle Solaris Cluster 3.3 Hardware Administration Manual for more information.
Caution - You must set the System_LunReDistribution parameter in the /etc/raid/rmparams file to false so that no LUNs are assigned to the controller being brought online. After you verify in Step 5 that the controller has the correct SCSI reservation state, you can balance LUNs between both controllers.
For the procedure about how to modify the rmparams file, see the Sun StorEdge RAID Manager Installation and Support Guide.
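After you edit the file, you can confirm the setting with a quick check. This sketch assumes that the parameter follows the Name=Value convention used for other rmparams entries:
# grep System_LunReDistribution /etc/raid/rmparams
System_LunReDistribution=false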
# /etc/init.d/amdemon stop
# /etc/init.d/amdemon start
Note - Do not bring the controller online.
For the procedure about how to replace controllers, see the Sun StorEdge A3500/A3500FC Controller Module Guide and the Sun StorEdge RAID Manager Installation and Support Guide for additional considerations.
If your controller module is offline, but does not have a failed controller, proceed to Step 4.
Note - You must use the RAID Manager GUI's Recovery application to bring the controller online. Do not use the Redundant Disk Array Controller Utility (rdacutil) because this utility ignores the value of the System_LunReDistribution parameter in the /etc/raid/rmparams file.
For information on the Recovery application, see the Sun StorEdge RAID Manager User’s Guide. If you have problems with bringing the controller online, see the Sun StorEdge RAID Manager Installation and Support Guide.
Run the repair device command on LUN 0 of the controller you want to bring online.
# cldevice repair
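For example, if LUN 0 of the controller that you are bringing online corresponds to logical device /dev/rdsk/c1t4d0 (a hypothetical name), the command would be:
# cldevice repair /dev/rdsk/c1t4d0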
For more information on controller modes, see the Sun StorEdge RAID Manager Installation and Support Guide and the Sun StorEdge RAID Manager User’s Guide.
For the procedure about how to change the rmparams file, see the Sun StorEdge RAID Manager Installation and Support Guide.
# /etc/init.d/amdemon stop
# /etc/init.d/amdemon start
How to Upgrade Controller Module Firmware in a Running Cluster

Use this procedure to upgrade firmware in a controller module in a running cluster. Use either the online or the offline method to upgrade your NVSRAM firmware. The method that you choose depends on your firmware.
Upgrade the firmware by using the online method, as described in the Sun StorEdge RAID Manager User’s Guide. No special steps are required for a cluster environment.
For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
This step completes the firmware upgrade.
For the procedure about how to shut down a cluster, see your Oracle Solaris Cluster system administration documentation.
For the procedure about how to boot a node in noncluster mode, see your Oracle Solaris Cluster system administration documentation.
For more information about how to boot nodes, see your Oracle Solaris Cluster system administration documentation.
This step completes the firmware upgrade.
How to Add a Disk Drive in a Running Cluster

Use this procedure to add a disk drive to a storage array that is in a running cluster.
Caution - If the disk drive that you are adding was previously owned by another controller module, reformat the disk drive to wipe the old DacStore information clean before you add the disk drive to this storage array.
For the procedure about how to install a disk drive, see the Sun StorEdge D1000 Storage Guide.
For instructions about how to run Recovery Guru and Health Check, see the Sun StorEdge RAID Manager User’s Guide.
For the procedure about how to fail and revive drives, see the Sun StorEdge RAID Manager User’s Guide.
See Also
To create LUNs for the new drives, see How to Create a LUN.
How to Replace a Failed Disk Drive in a Running Cluster

Use this procedure to replace a failed disk drive in a running cluster.
For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
For the procedure about how to replace a disk drive, see the Sun StorEdge D1000 Storage Guide.
For the procedure about how to run Recovery Guru and Health Check, see the Sun StorEdge RAID Manager User’s Guide.
If reconstruction does not start automatically for any reason, then select Reconstruct from the Manual Recovery application. Do not select Revive. When reconstruction is complete, skip to Step 7.
For the procedure about how to fail and revive drives, see the Sun StorEdge RAID Manager User’s Guide.
For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
How to Remove a Disk Drive From a Running Cluster

Use this procedure to remove a disk drive from a running cluster.
For the procedure about how to remove a LUN, see How to Delete a LUN.
For the procedure about how to remove a disk drive, see the Sun StorEdge D1000 Storage Guide.
Caution - After you remove the disk drive, install a dummy drive to maintain proper cooling.
Caution - You must be an Oracle service provider to perform disk drive firmware updates. If you need to upgrade drive firmware, contact your Oracle service provider.
How to Replace a Failed Host Adapter

Use this procedure to replace a failed host adapter in a running cluster. This procedure defines Node A as the node with the failed host adapter that you are replacing.
Before You Begin
This procedure relies on the following prerequisites and assumptions.
Except for the failed host adapter, your cluster is operational and all nodes are powered on.
Your nodes are not configured with dynamic reconfiguration functionality.
If your nodes are configured for dynamic reconfiguration and you are using two entirely separate hardware paths to your shared data, see the Oracle Solaris Cluster 3.3 Hardware Administration Manual and skip steps that instruct you to shut down the cluster.
You cannot use dynamic reconfiguration (DR) to replace a single, dual-port HBA that has quorum configured on that storage path. Follow all steps in the procedure. For details on the risks and limitations of this configuration, see Configuring Cluster Nodes With a Single, Dual-Port HBA in Oracle Solaris Cluster 3.3 Hardware Administration Manual.
Exceptions to this restriction include three-node or larger cluster configurations where no storage device has a quorum device configured.
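To determine whether a quorum device is configured on the affected storage path, you can list the configured quorum devices. This check is a suggestion and is not part of the original procedure:
# clquorum list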
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.
Record this information because you will use it in Step 13 and Step 14 of this procedure to return resource groups and device groups to Node A.
# clresourcegroup status -n NodeA
# cldevicegroup status -n NodeA
The node for which you are determining resource groups and device groups.
# clnode evacuate nodename
For instructions, see the Sun StorEdge RAID Manager User's Guide.
For the full procedure about how to shut down and power off a node, see Chapter 3, Shutting Down and Booting a Cluster, in Oracle Solaris Cluster System Administration Guide.
To remove and add host adapters, see the documentation that shipped with your nodes.
If you do not need to upgrade firmware, skip to Step 10.
The Oracle Enterprise Manager Ops Center 2.5 software helps you patch and monitor your data center assets. Oracle Enterprise Manager Ops Center 2.5 helps improve operational efficiency and ensures that you have the latest software patches for your software. Contact your Oracle representative to purchase Oracle Enterprise Manager Ops Center 2.5.
Additional information for using the Oracle patch management tools is provided in Oracle Solaris Administration Guide: Basic Administration at http://docs.sun.com. Refer to the version of this manual for the Oracle Solaris OS release that you have installed.
If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Oracle Solaris Cluster System Administration Guide to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.
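On a SPARC based node, booting in noncluster mode is typically performed from the OpenBoot PROM, as in the following sketch. See the Oracle Solaris Cluster System Administration Guide for the authoritative steps:
ok boot -x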
For required firmware, see the Sun System Handbook.
For more information about how to boot nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Oracle Solaris Cluster System Administration Guide.
For instructions, see the Sun StorEdge RAID Manager User's Guide and your operating system documentation.
For instructions, see the Sun StorEdge RAID Manager User's Guide.
Do the following for each device group that you want to return to the original node.
# cldevicegroup switch -n nodename devicegroup1[ devicegroup2 ...]
The node to which you are restoring device groups.
The device group or groups that you are restoring to the node.
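For example, to return the hypothetical device groups dg-schost-1 and dg-schost-2 to the node phys-schost-1, you would run:
# cldevicegroup switch -n phys-schost-1 dg-schost-1 dg-schost-2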
Do the following for each resource group that you want to return to the original node.
# clresourcegroup switch -n nodename resourcegroup1[ resourcegroup2 …]
For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.
The resource group or groups that you are returning to the node or nodes.
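For example, to return the hypothetical failover resource group rg-schost-1 to the node phys-schost-1, you would run:
# clresourcegroup switch -n phys-schost-1 rg-schost-1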