Oracle Solaris Cluster 4.1 Hardware Administration Manual
This section provides the procedure for testing node redundancy and high availability of device groups. Perform the following procedure to confirm that the secondary node takes over the device group that is mastered by the primary node when the primary node fails.
Before You Begin
To perform this procedure, assume a role that provides solaris.cluster.modify RBAC authorization.
If the HAStoragePlus resource type is not already registered, register it. Then create the resource group and resource with the following commands, replacing /dev/md/red/dsk/d0 with your device path:

# clresourcegroup create testgroup
# clresourcetype register SUNW.HAStoragePlus
# clresource create -t HAStoragePlus -g testgroup \
-p GlobalDevicePaths=/dev/md/red/dsk/d0 \
-p AffinityOn=true testresource
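The creation commands above can be collected into a short dry-run script. It uses only the example names from this procedure (testgroup, testresource, /dev/md/red/dsk/d0); setting RUN=echo is an added convention, not part of the procedure, that prints each command so the sketch can be exercised off-cluster.

```shell
#!/bin/sh
# Dry-run sketch of the resource-group creation step.
# RUN=echo prints each command instead of executing it; clear RUN
# (RUN=) to execute the commands on a real cluster node.
RUN=echo
GROUP=testgroup                 # example group name from this procedure
DEVPATH=/dev/md/red/dsk/d0      # replace with your device path

$RUN clresourcegroup create "$GROUP"
$RUN clresourcetype register SUNW.HAStoragePlus
$RUN clresource create -t HAStoragePlus -g "$GROUP" \
    -p GlobalDevicePaths="$DEVPATH" \
    -p AffinityOn=true testresource
```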
Verify that the resource group is online on the primary node:

# clresourcegroup status testgroup
Power off the primary node. Cluster interconnect error messages appear on the consoles of the existing nodes.
On another node, use the following command to verify that the secondary node now owns the resource group:

# clresourcegroup status testgroup
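When this check is scripted, the node that currently masters the group can be pulled out of the status output with awk. The sample text and node names below are illustrative stand-ins, not the exact format printed by clresourcegroup status, which may vary by release.

```shell
#!/bin/sh
# Sketch: find which node a resource group is online on by filtering
# `clresourcegroup status` output. status_sample stands in for the real
# command output; the layout and node names are hypothetical.
status_sample='testgroup   phys-schost-1   No   Offline
testgroup   phys-schost-2   No   Online'

# Print the node name from the row whose last column is "Online".
owner=$(printf '%s\n' "$status_sample" | awk '$NF == "Online" { print $2 }')
echo "$owner"
```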
Wait for the failed node to boot. The system automatically starts the membership monitor software, and the node then rejoins the cluster.
Switch the resource group back to the primary node:

# clresourcegroup switch -n nodename testgroup

In this command, nodename is the name of the primary node.
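If the switch back is scripted, it helps to poll until the group reports Online on the intended node rather than checking once. The check_status stub below stands in for `clresourcegroup status testgroup` so the loop can be demonstrated off-cluster; the node name phys-schost-1 is hypothetical.

```shell
#!/bin/sh
# Sketch: poll until testgroup is online on the intended primary node.
# check_status is a stub; on a real cluster it would run
# `clresourcegroup status testgroup` instead of echoing sample text.
check_status() {
    echo "testgroup   phys-schost-1   No   Online"
}

NODE=phys-schost-1   # hypothetical primary node name
msg=""
i=0
while [ "$i" -lt 10 ]; do
    if check_status | grep -q "$NODE.*Online"; then
        msg="testgroup is online on $NODE"
        break
    fi
    sleep 5
    i=$((i + 1))
done
echo "$msg"
```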
Use the following command to confirm from the output that the primary node again owns the resource group:

# clresourcegroup status testgroup