This document provides the following information for Sun™ Cluster Geographic Edition 3.1 2006Q4 software.
This section describes the supported software for Sun Cluster Geographic Edition 3.1 2006Q4 software.
Table 1 Supported Products
Sun StorageTek Availability Suite 4 software is supported for data replication in the 3.1 2006Q4 release of Sun Cluster Geographic Edition. The Sun Cluster Geographic Edition Data Replication Guide for Sun StorEdge Availability Suite has not been updated to reflect that support. The guide will be updated in a forthcoming release.
All procedures in the guide apply equally to both Sun StorageTek Availability Suite 4 software and Sun StorEdge Availability Suite 3.2.1 software with the following exceptions:
Sun StorageTek Availability Suite 4 software commands are located in /usr/sbin/, not in /usr/opt/SUNWesm/sbin/.
Sun StorageTek Availability Suite 4 software log files are located in /var/adm/, not in /var/opt/SUNWesm/.
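The path difference above can be handled in administrative scripts with a simple probe. This is only an illustrative sketch; sndradm is the Availability Suite remote-mirror command, and the probe order (StorageTek 4 first, then StorEdge 3.2.1) is an assumption for the example:

```shell
# Sketch: select the Availability Suite command path by probing for the
# Sun StorageTek Availability Suite 4 location first, then falling back
# to the Sun StorEdge Availability Suite 3.2.1 location.
if [ -x /usr/sbin/sndradm ]; then
    SNDRADM=/usr/sbin/sndradm                  # StorageTek AS 4
else
    SNDRADM=/usr/opt/SUNWesm/sbin/sndradm      # StorEdge AS 3.2.1
fi
echo "Using remote-mirror command: $SNDRADM"
```

The same probe applies to the log locations (/var/adm/ versus /var/opt/SUNWesm/).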
Sun Cluster Geographic Edition 3.1 2006Q4 software currently supports the following SRDF configurations:
Static SRDF device group
Dynamic SRDF device group
Sun Cluster Geographic Edition 3.1 2006Q4 software currently supports the SRDF configurations in these modes:
Synchronous
Semi-Synchronous
Adaptive Copy Write Pending
Adaptive Copy Disk
Adaptive Copy Change Skew
The following known issues and bugs affect the operation of the Sun Cluster Geographic Edition 3.1 2006Q4 release.
Problem Summary: The Sun Cluster Geographic Edition infrastructure might remain offline after a cluster is rebooted. This is a timing issue, due to the lack of synchronization between the startup of Sun Cluster Geographic Edition software and the startup of the common agent container.
Workaround: Once the cluster is rebooted, start the Sun Cluster Geographic Edition software by issuing the command geoadm start.
Problem Summary: SunPlex Manager does not support RBAC.
Workaround: Invoke SunPlex Manager as superuser on the local cluster.
Problem Summary: To use the SunPlex Manager GUI, the root password must be the same on all nodes of both clusters in the Sun Cluster Geographic Edition deployment.
Workaround: If you use SunPlex Manager to configure your clusters, ensure that the root password is the same on every node of both clusters. If you prefer to not set the root password identically on all nodes, use the command-line interface to configure your clusters.
Problem Summary: If a partnership is created on a remote cluster by using a custom heartbeat, then a heartbeat by the same name must exist on the local cluster before it can join the partnership. You cannot create a heartbeat by using the GUI, so the appropriate heartbeat will not be available to choose on the Join Partnership page.
Workaround: Use the command-line interface (CLI) to create the custom heartbeat, and then use either the CLI or SunPlex Manager to join the partnership.
Problem Summary: The geopg validate command fails, saying that it could not retrieve the resource group information for an existing resource group.
Workaround: Update the resource group properties to cause an update of the resource group mbean.
Problem Summary: Configuration and state changes of entities on a page displayed in SunPlex Manager should cause the page to be refreshed automatically. Sometimes the refresh does not take place.
Workaround: Use the navigation tree to navigate to a different page, then return to the original page. The page will be refreshed on reload.
Problem Summary: When restarting the clusters, the heartbeat is in the Degraded state and the plug-in tcp_udp_plugin is in the No_Response state. On the partner cluster, the process tcp_udp_resp does not exist.
Workaround: Restart the tcp_udp_resp process by issuing pkill -9 tcp_udp_resp on the partner cluster.
Problem Summary: It is possible that certain unusual configuration errors might leave the cluster in a state where the Sun Cluster Geographic Edition framework can neither be started (geoadm start) nor cleanly stopped (geoadm stop).
Workaround: It is most likely that a Sun Cluster Geographic Edition infrastructure resource is in the STOP_FAILED state. To clear the STOP_FAILED state, take the following actions:
Use the scstat -g command to determine which resources and resource groups are affected.
Clear the STOP_FAILED flag for all resources and resource groups that are in the STOP_FAILED state by using the following command for each:
# scswitch -c -j resource -h nodename -f STOP_FAILED
Manually stop the application that failed to stop.
For example, if an Oracle listener (ora lsnr) resource failed to stop, stop the listener completely. Skip this step if the affected resources belong to the Sun Cluster Geographic Edition infrastructure only.
If necessary, stop the resource groups.
If a resource failed to stop during a resource-group stop, then the resource group remains in the STOP_FAILED state and you must stop it by using the following command:
# scswitch -F -g resourcegroup
If the resources failed to stop during a restart of the resource or while the resource was being disabled, ignore this step.
Retry the geoadm stop command.
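The recovery steps above can be sketched as a single sequence. This is a dry-run illustration only: RUN=echo prints each command instead of executing it (set RUN to empty on a real cluster node), and the resource name geo-failovercontrol, resource group geo-infrastructure, and node phys-node-1 are hypothetical placeholders:

```shell
# Dry-run sketch of the STOP_FAILED recovery sequence. RUN=echo prints
# each command instead of executing it. The resource, resource-group,
# and node names below are hypothetical placeholders.
RUN=echo
$RUN scstat -g                                                        # 1. find affected resources
$RUN scswitch -c -j geo-failovercontrol -h phys-node-1 -f STOP_FAILED # 2. clear the flag
$RUN scswitch -F -g geo-infrastructure                                # 4. stop the group if needed
$RUN geoadm stop                                                      # 5. retry the stop
```

Step 3, manually stopping any application that failed to stop, depends on the application and is omitted from the sketch.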
Problem Summary: The geopg get command sometimes reports a failure even though replication was successful. When replicating a protection group from the partner cluster, geopg get failed with the following message:
# geopg get -s partnershipname protectiongroupname
Operation failed for the following protection groups:
Permission denied: configuration is locked by cluster clustername. Retry the operation after a while.
Protection group protectiongroupname has been replicated from the partner cluster clustername, but validation failed.
In some cases, geoadm status also reports the synchronization status of the protection group as Error.
If you issue the geopg get command again, it is rejected because the protection group has already been replicated on the local cluster.
Workaround: Resynchronize the protection group with the partner cluster by using the following command:
# geopg update protectiongroupname
Then revalidate the protection group by using the following command:
# geopg validate protectiongroupname
Problem Summary: If the description property value includes unquoted spaces, a java.lang.Exception is thrown.
Workaround: Place quotes around description values that require spaces.
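The quoting requirement is ordinary shell word splitting, which this fragment illustrates. The Description property value is a made-up example; no Sun Cluster Geographic Edition command is invoked here:

```shell
# Illustration of why unquoted spaces fail: the shell splits the value
# into separate arguments, so the command sees a malformed property.
set -- Description=Paris cluster      # unquoted: splits into 2 arguments
echo "unquoted: $# arguments"
set -- Description="Paris cluster"    # quoted: stays 1 argument
echo "quoted: $# arguments"
```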
Problem Summary: If you use pkgrm to remove the Sun Cluster Geographic Edition software, the gchb_resd process might be left running. In this case, if you then reinstall, the process crashes.
Workaround: None needed. The gchb_resd process restarts automatically.
This section provides information about patches for Sun Cluster Geographic Edition 3.1 2006Q4 configurations.
You must be a registered SunSolve™ user to view and download the required patches for the Sun Cluster Geographic Edition product. If you do not have a SunSolve account, contact your Sun service representative or sales engineer, or register online at http://sunsolve.sun.com.
Product | Component | Platform | Minimum Patch Level
---|---|---|---
Solaris OS 8 | | | 110380-06, 110934-24
Solaris OS 10 | | SPARC | 118562-09, 118918-17
Solaris OS 10 | | x86 | 118563-10, 118919-17
Sun StorEdge Availability Suite 3.2.1 | Core | | 116466-09
Sun StorEdge Availability Suite 3.2.1 | Point-in-Time | | 116467-09
Sun StorEdge Availability Suite 3.2.1 | Remote Mirror | | 116468-13
Sun StorageTek Availability Suite 4 | SNDR | SPARC | 123246-01
Sun StorageTek Availability Suite 4 | SNDR | x86 | 123247-01
Common agent container | | | 118671-03, 120675-01
Check with a Sun service representative for the availability of the common agent container 120675-01 patch.
To use scalable resource groups in Sun Cluster Geographic Edition 3.1 2006Q4, you must also install one of the following patches:
Solaris OS 10: a minimum of 120500-08
Solaris OS 9: a minimum of 117949-23
Solaris OS 8: a minimum of 117950-23
Check with a Sun service representative for the availability of these patches.
The Sun Cluster Geographic Edition 3.1 2006Q4 user documentation set consists of the following collections:
Sun Cluster Geographic Edition Release Notes Collection
Sun Cluster Geographic Edition Software Collection
Sun Cluster Geographic Edition Reference Collection
For the latest documentation, go to the docs.sun.com℠ web site, which enables you to access Sun Cluster Geographic Edition documentation on the Web. You can browse the docs.sun.com archive or search for a specific book title or subject.
Part Number | Book Title
---|---
819-4243 |
819-8004 |
819-8003 |
819-4246 | Sun Cluster Geographic Edition Data Replication Guide for Sun StorEdge Availability Suite
819-4245 | Sun Cluster Geographic Edition Data Replication Guide for Hitachi TrueCopy
819-8006 | Sun Cluster Geographic Edition Data Replication Guide for EMC Symmetrix Remote Data Facility
This collection contains the Sun Cluster Geographic Edition Reference Manual, part number 819-4244.
This section discusses known errors or omissions for man pages, documentation, or online help and steps to correct these problems.
This section discusses errors and omissions from the Sun Cluster Geographic Edition Installation Guide.
Incorrect Information: In the section Installing the Software on Solaris OS 8 in Sun Cluster Geographic Edition Installation Guide, the following information is included:
If you are using EMC Symmetrix Remote Data Facility data replication:
SUNWscgrepsrdf: EMC Symmetrix Remote Data Facility data replication
SUNWscgrepsrdfu: EMC Symmetrix Remote Data Facility data replication
Correct Information: EMC Symmetrix Remote Data Facility software is not supported in combination with Solaris OS 8. Disregard the incorrect information.
This section discusses errors and omissions from the geopg(1M) man page.
Problem Summary: The description of the geopg start command in the geopg(1M) man page is unclear.
Workaround: The -e option defines the scope of the geopg start command. If you specify -e local, the geopg start command runs only on the cluster from which you issue the command. If you specify -e global, the geopg start command runs on both clusters in the partnership.
When the geopg start command is run on the primary cluster, either by running geopg start -e local on the primary cluster or by running geopg start -e global, the Sun Cluster Geographic Edition software brings resource groups online on the primary cluster only, by using the scswitch -Z -g command.
When the geopg start command is run on the secondary cluster of the partnership, either by running geopg start -e local on the secondary cluster or by running geopg start -e global, resource groups are not started on the secondary cluster. Instead, the resource groups are put in the unmanaged state by using the scswitch -u command.
The geopg start command activates the protection group on both the primary and the secondary clusters, which starts the Sun Cluster Geographic Edition management of the resource groups. However, the resource groups start, or are brought online, only on the primary cluster.
Depending on the role of the protection group, activating the protection group might not start the resource groups that the protection group contains.
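The two scopes described above can be summarized as follows. This is a dry-run illustration only: RUN=echo prints the commands instead of executing them, and the protection-group name app-pg is a hypothetical placeholder:

```shell
# Dry-run illustration of the -e option scope. RUN=echo prints each
# command instead of executing it; "app-pg" is a hypothetical
# protection-group name.
RUN=echo
$RUN geopg start -e local app-pg    # runs only on the cluster issuing the command
$RUN geopg start -e global app-pg   # runs on both clusters in the partnership
```

In both cases the protection group becomes active on the clusters in scope, but resource groups are brought online only where that cluster holds the primary role.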
Incorrect Information: The geopg(1M) man page indicates that tuning of the cluster_dgs property is allowable at any time.
Correct Information: You can tune the cluster_dgs property only when the protection group is offline on both partner clusters.