Oracle Solaris Cluster 3.3 3/13 Hardware Administration Manual
Some servers support the mirroring of internal hard drives (internal hardware disk mirroring or integrated mirroring) to provide redundancy for node data. To use this feature in a cluster environment, follow the steps in this section.
The best way to set up hardware disk mirroring is to perform RAID configuration during cluster installation, before you configure multipathing. For instructions on performing this configuration, see the Oracle Solaris Cluster Software Installation Guide. If you need to change your mirroring configuration after you have established the cluster, you must perform some cluster-specific steps to clean up the device IDs, as described in the procedure that follows.
Note - Specific servers might have additional restrictions. See the documentation that shipped with your server hardware.
For specifics about how to configure your server's internal disk mirroring, refer to the documents that shipped with your server and the raidctl(1M) man page.
Before You Begin
This procedure assumes that you have already installed your hardware and software and have established the cluster. To configure an internal disk mirror during cluster installation, see the Oracle Solaris Cluster Software Installation Guide.
The Oracle Enterprise Manager Ops Center software helps you patch and monitor your data center assets. Oracle Enterprise Manager Ops Center helps improve operational efficiency and ensures that you have the latest software patches for your software. Contact your Oracle representative to purchase Oracle Enterprise Manager Ops Center.
Additional information about using the Oracle patch management tools is provided in Oracle Solaris Administration Guide: Basic Administration on the Oracle Technology Network. Refer to the version of that manual that matches the Oracle Solaris OS release installed on your system.
If you must apply a patch when a node is in noncluster mode, you can apply it in a rolling fashion, one node at a time, unless instructions for a patch require that you shut down the entire cluster. Follow the procedures in How to Apply a Rebooting Patch (Node) in Oracle Solaris Cluster System Administration Guide to prepare the node and to boot it in noncluster mode. For ease of installation, consider applying all patches at the same time. That is, apply all patches to the node that you place in noncluster mode.
For required firmware, see the Oracle Technology Network.
Caution - If there are state database replicas on the disk that you are mirroring, you must recreate them during this procedure.
Record this information because you use it later in this procedure to return resource groups and device groups to the node.
Use the following commands:
# clresourcegroup status -n nodename
# cldevicegroup status -n nodename
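For example, to capture the current group assignments so that you can refer to them later in this procedure, you might redirect the status output to files. The node name phys-schost-1 and the file paths here are hypothetical:
# clresourcegroup status -n phys-schost-1 > /var/tmp/rg-status.before
# cldevicegroup status -n phys-schost-1 > /var/tmp/dg-status.before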
# clnode evacuate fromnode
# raidctl -c c1t0d0 c1t1d0
Creates a mirror of the primary disk on the mirror disk. Specify the name of your primary disk as the first argument and the name of the mirror disk as the second argument.
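To confirm that the mirror volume was created, and to check its synchronization status, you can typically query the controller with raidctl. The disk name below is illustrative, and the exact syntax and output vary by raidctl version and controller; see the raidctl(1M) man page for your system:
# raidctl -l c1t0d0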
# reboot -- -S
Use the following command:
# cldevice repair /dev/rdsk/c1t0d0
Updates the cluster's record of the device IDs for the primary disk. Enter the name of your primary disk as the argument.
# cldevice list
The command lists only the primary disk, and not the mirror disk, as visible to the cluster.
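If you also want to see how the listed DID instances map to the underlying device paths on each node, you can use the verbose form of the command:
# cldevice list -v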
# reboot
# metadb -a /dev/rdsk/c1t0d0s4
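To verify that the state database replicas were re-created on the expected slice, run metadb with no options, which lists the existing replicas and their status flags:
# metadb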
Perform the following step for each device group you want to return to the original node.
# cldevicegroup switch -n nodename devicegroup1[ devicegroup2 ...]
The node to which you are restoring device groups.
The device group or groups that you are restoring to the node.
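For example, using a hypothetical node name phys-schost-1 and a hypothetical device group dg-schost-1, the command might look like the following:
# cldevicegroup switch -n phys-schost-1 dg-schost-1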
Perform the following step for each resource group you want to return to the original node.
# clresourcegroup switch -n nodename resourcegroup1[ resourcegroup2 …]
For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.
The resource group or groups that you are returning to the node or nodes.
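For example, using a hypothetical node name phys-schost-1 and a hypothetical resource group rg-schost-1, the command might look like the following:
# clresourcegroup switch -n phys-schost-1 rg-schost-1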
Record this information because you use it later in this procedure to return resource groups and device groups to the node.
Use the following commands:
# clresourcegroup status -n nodename
# cldevicegroup status -n nodename
# clnode evacuate fromnode
# raidctl -d c1t0d0
Deletes the mirror of the primary disk. Enter the name of your primary disk as the argument.
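To confirm that the RAID volume no longer exists, you can display the controller's RAID configuration again. The exact raidctl syntax and output vary by version and controller; see the raidctl(1M) man page for your system:
# raidctl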
# reboot -- -S
Use the following command:
# cldevice repair /dev/rdsk/c1t0d0 /dev/rdsk/c1t1d0
Updates the cluster's record of the device IDs. Enter the names of your disks separated by spaces.
# cldevice list
The command lists both disks as visible to the cluster.
# reboot
# metadb -c 3 -ag /dev/rdsk/c1t0d0s4
# cldevicegroup switch -n nodename devicegroup1[ devicegroup2 …]
The node to which you are restoring device groups.
The device group or groups that you are restoring to the node.
Perform the following step for each resource group you want to return to the original node.
# clresourcegroup switch -n nodename resourcegroup1[ resourcegroup2 …]
For failover resource groups, the node to which the groups are restored. For scalable resource groups, the node list to which the groups are restored.
The resource group or groups that you are restoring to the node or nodes.