This chapter contains information about enabling your cluster to participate in a partnership. It also describes how to disable the Sun Cluster Geographic Edition software so that your cluster can no longer participate in partnerships.
This chapter contains the following sections:
Sun Cluster Geographic Edition Infrastructure Resource Groups
Checking the Status of the Sun Cluster Geographic Edition Infrastructure
When you enable the Sun Cluster Geographic Edition infrastructure, the following Sun Cluster resource groups are created:
geo-clusterstate – A scalable resource group that the Sun Cluster Geographic Edition software uses to distinguish between node failover and cluster reboot scenarios. This resource group does not contain any resources.
geo-infrastructure – A failover resource group that encapsulates the Sun Cluster Geographic Edition infrastructure. The resource group contains the following resources:
geo-clustername – The logical hostname for the Sun Cluster Geographic Edition software. The Sun Cluster Geographic Edition software uses the logical hostname of a cluster for inter-cluster management communication and heartbeat communication. An entry in the naming services must be the same as the name of the cluster, and the entry must be available in the namespace of each cluster.
geo-hbmonitor – Encapsulates the heartbeat processes for the Sun Cluster Geographic Edition software.
geo-failovercontrol – Encapsulates the Sun Cluster Geographic Edition software itself. This resource loads the Sun Cluster Geographic Edition module into the common agent container.
These resources are for internal purposes only, so you must not change them.
These internal resources are removed when you disable the Sun Cluster Geographic Edition infrastructure.
You can monitor the status of these resources by using the clresource status command. For more information about this command, see the clresource(1CL) man page.
When you enable the Sun Cluster Geographic Edition software, the cluster is ready to enter a partnership with another enabled cluster. You can use the CLI commands or the GUI to create a cluster partnership.
For more information about setting up and installing the Sun Cluster Geographic Edition software, see the Sun Cluster Geographic Edition Installation Guide.
This procedure enables the Sun Cluster Geographic Edition infrastructure on the local cluster only. Repeat this procedure on each cluster of your geographically separated cluster configuration.
Ensure that the following conditions are met:
The cluster is running the Solaris Operating System and the Sun Cluster software.
The Sun Cluster management-agent container for Sun Cluster Manager is running.
The Sun Cluster Geographic Edition software is installed.
The cluster has been configured for secure cluster communication by using security certificates. Nodes within the same cluster must share the same security certificates. This configuration is performed during Sun Cluster installation.
When you upgrade to Sun Cluster 3.2 software, the security certificates must be identical on all nodes of the cluster. Therefore, you must copy the security certificates manually from one node of the cluster to the other nodes of the cluster. For more information on copying the security files for the common agent container, see the procedures in How to Finish Upgrade to Sun Cluster 3.2 11/09 Software in Sun Cluster Upgrade Guide for Solaris OS.
Log in to a cluster node.
You must be assigned the Geo Operation RBAC rights profile to complete this procedure. For more information about RBAC, see Sun Cluster Geographic Edition Software and RBAC.
Ensure that the logical hostname, which is the same as the cluster name, is available and defined.
# cluster list
If the cluster name is not the name you want to use, change the cluster name.
If you must change the name of a cluster that is configured in a partnership, do not perform this step. Instead, follow instructions in Renaming a Cluster That Is in a Partnership.
Follow cluster naming guidelines as described in Planning Required IP Addresses and Hostnames in Sun Cluster Geographic Edition Installation Guide. Cluster names must follow the same requirements as for host names.
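Because cluster names must follow host-name rules, a quick syntax check can catch an invalid name before you rename the cluster. The sketch below is illustrative, not part of the product; cluster-newyork is an example name, and the regular expression encodes standard host-name syntax (letters, digits, and hyphens; no leading or trailing hyphen):

```shell
# Host-name-style syntax check for a proposed cluster name.
# Letters, digits, and hyphens only; may not begin or end with a hyphen.
NAME=cluster-newyork
if echo "$NAME" | grep -Eq '^[A-Za-z0-9]([A-Za-z0-9-]*[A-Za-z0-9])?$'; then
    echo "$NAME is a syntactically valid cluster name"
else
    echo "$NAME is not a valid cluster name" >&2
fi
```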
# cluster rename -c newclustername oldclustername
For more information, see the cluster(1CL) man page.
After you have enabled the Sun Cluster Geographic Edition infrastructure, you must not change the cluster name while the infrastructure is enabled.
Confirm that the naming service and the local hosts files contain a host entry that matches the cluster name.
The local host file, hosts, is located in the /etc/inet directory.
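One way to confirm the entry is with the getent command, which consults the naming services configured on the node. This is a sketch, not a required step, and cluster-paris is an example cluster name:

```shell
# Confirm that the cluster name resolves through the configured naming services.
# Replace cluster-paris with your own cluster name.
CLUSTERNAME=cluster-paris
if getent hosts "$CLUSTERNAME" >/dev/null; then
    echo "host entry found for $CLUSTERNAME"
else
    echo "no host entry for $CLUSTERNAME: add one to /etc/inet/hosts or the naming service" >&2
fi
```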
On a node of the cluster, start the Sun Cluster Geographic Edition infrastructure.
# geoadm start
The geoadm start command enables the Sun Cluster Geographic Edition infrastructure on the local cluster only. For more information, see the geoadm(1M) man page.
Verify that you have enabled the infrastructure and that the Sun Cluster Geographic Edition resource groups are online.
For a list of the Sun Cluster Geographic Edition resource groups, see Sun Cluster Geographic Edition Infrastructure Resource Groups.
# geoadm show
# clresourcegroup status
# clresource status
The output of the geoadm show command shows that the Sun Cluster Geographic Edition infrastructure is active from a particular node in the cluster.
The output of the clresourcegroup status and clresource status commands shows that the geo-clustername, geo-hbmonitor, and geo-failovercontrol resources and the geo-infrastructure resource group are online on one node of the cluster.
For more information, see the clresourcegroup(1CL) and clresource(1CL) man pages.
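The verification can also be scripted by inspecting the geoadm show output. The helper below is a hypothetical sketch; it assumes the "is active on" phrasing shown in the example output, and the sample string it parses is for illustration only:

```shell
# Report whether captured `geoadm show` output indicates an active
# Sun Cluster Geographic Edition infrastructure.
geo_active() {
    case "$1" in
        *"is active on"*) echo active ;;
        *)                echo inactive ;;
    esac
}

# On a live cluster node you would run:  geo_active "$(geoadm show)"
geo_active "Sun Cluster Geographic Edition is active on cluster-paris from node phys-paris-1"
```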
This example enables the Sun Cluster Geographic Edition software on the cluster-paris cluster.
Start the Sun Cluster Geographic Edition software on cluster-paris.
phys-paris-1# geoadm start
Ensure that the Sun Cluster Geographic Edition infrastructure was successfully enabled.
phys-paris-1# geoadm show
--- CLUSTER LEVEL INFORMATION ---
Sun Cluster Geographic Edition is active on cluster-paris from node phys-paris-1
Command execution successful
phys-paris-1#
Verify the status of the Sun Cluster Geographic Edition resource groups and resources.
phys-paris-1# clresourcegroup status

=== Cluster Resource Groups ===

Group Name           Node Name      Suspended   Status
----------           ---------      ---------   ------
geo-clusterstate     phys-paris-1   No          Online
                     phys-paris-2   No          Online

geo-infrastructure   phys-paris-1   No          Online
                     phys-paris-2   No          Offline

# clresource status

=== Cluster Resources ===

Resource Name         Node Name      State     Status Message
-------------         ---------      -----     --------------
geo-clustername       phys-paris-1   Online    Online - LogicalHostname online.
                      phys-paris-2   Offline   Offline

geo-hbmonitor         phys-paris-1   Online    Online - Daemon OK
                      phys-paris-2   Offline   Offline

geo-failovercontrol   phys-paris-1   Online    Online
                      phys-paris-2   Offline   Offline
For information about creating protection groups, see the Sun Cluster Geographic Edition Data Replication Guide that corresponds to the type of data replication software you are using.
You can disable the Sun Cluster Geographic Edition infrastructure by using the following procedure.
Ensure that all protection groups on the local cluster are offline.
Log in to a cluster node.
You must be assigned the Geo Management RBAC rights profile to complete this procedure. For more information about RBAC, see Sun Cluster Geographic Edition Software and RBAC.
Confirm that all of the protection groups are offline on the local cluster.
phys-paris-1# geoadm status
For more information about the geoadm status command and its output, see Monitoring the Runtime Status of the Sun Cluster Geographic Edition Software.
If you want to keep the application resource groups online while deactivating a protection group, follow the procedure described in the data replication guide for the data replication product that you are using.
Disable the Sun Cluster Geographic Edition software.
phys-paris-1# geoadm stop
This command removes the infrastructure resource groups that were created when you enabled the Sun Cluster Geographic Edition infrastructure.
For more information about this command, see the geoadm(1M) man page.
Disabling the Sun Cluster Geographic Edition software removes only the infrastructure resource groups. Resource groups that have been created to support data replication are not removed unless you remove the protection group that the resource groups are supporting by using the geopg delete command.
Verify that the software was disabled and that the Sun Cluster Geographic Edition resource groups are no longer displayed.
phys-paris-1# geoadm show
phys-paris-1# clresourcegroup status
For more information, see the clresourcegroup(1CL) man page.
This example disables the cluster-paris cluster.
Confirm that all protection groups are offline.
phys-paris-1# geoadm status
Cluster: cluster-paris

Partnership "paris-newyork-ps"                            : OK
  Partner clusters                                        : cluster-newyork
  Synchronization                                         : OK
  ICRM Connection                                         : OK

  Heartbeat "paris-to-newyork" monitoring "cluster-newyork" : OK
     Heartbeat plug-in "ping_plugin"                      : Inactive
     Heartbeat plug-in "tcp_udp_plugin"                   : OK

Protection group "tcpg"                                   : OK
  Partnership                                             : paris-newyork-ps
  Synchronization                                         : OK

  Cluster cluster-paris                                   : OK
    Role                                                  : Primary
    PG activation state                                   : Deactivated
    Configuration                                         : OK
    Data replication                                      : OK
    Resource groups                                       : OK

  Cluster cluster-newyork                                 : OK
    Role                                                  : Secondary
    PG activation state                                   : Deactivated
    Configuration                                         : OK
    Data replication                                      : OK
    Resource groups                                       : OK
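The activation-state lines in output like the above can also be checked programmatically before you disable the software. The helper below is a sketch that assumes the "PG activation state" label shown in the example output (whitespace around the colon may vary):

```shell
# Succeed only if no protection group in captured `geoadm status`
# output reports an Activated state.
all_pgs_deactivated() {
    if echo "$1" | grep -q ': *Activated'; then
        echo "some protection groups are still activated"
        return 1
    fi
    echo "all protection groups are deactivated"
}

# On a live cluster node you would run:  all_pgs_deactivated "$(geoadm status)"
all_pgs_deactivated 'PG activation state : Deactivated'
```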
Disable the Sun Cluster Geographic Edition infrastructure.
phys-paris-1# geoadm stop
... verifying pre conditions and performing pre remove operations ... done
... removing product infrastructure ... please wait ...
Confirm that the Sun Cluster Geographic Edition infrastructure was successfully disabled.
phys-paris-1# geoadm show
--- CLUSTER LEVEL INFORMATION ---
Sun Cluster Geographic Edition is not active on cluster-paris

--- LOCAL NODE INFORMATION ---
Node phys-paris-1 does not host active product module.

Command execution successful
phys-paris-1#
Verify that Sun Cluster Geographic Edition resource groups and resources have been removed.
phys-paris-1# clresourcegroup status
phys-paris-1#
Use the geoadm show command to determine whether the Sun Cluster Geographic Edition infrastructure is enabled on the local cluster and on which node the infrastructure is active. The Sun Cluster Geographic Edition infrastructure is considered active on the node on which the geo-infrastructure resource group has a state of Online.
This example displays information on the phys-paris-1 node of the cluster-paris cluster.
phys-paris-1# geoadm show
--- CLUSTER LEVEL INFORMATION ---
Sun Cluster Geographic Edition is active on:
     node phys-paris-2, cluster cluster-paris
Command execution successful
phys-paris-1#
The following events take place when you boot a cluster:
After the Sun Cluster infrastructure is enabled, the Sun Cluster Geographic Edition software starts automatically. Verify that the software started successfully by using the geoadm show command.
The heartbeat framework checks which partners it can reach.
Check the current status of the cluster by using the geoadm status command. For more information about this command and its output, see Monitoring the Runtime Status of the Sun Cluster Geographic Edition Software.
Observe the following guidelines and requirements to patch Sun Cluster Geographic Edition software:
You must run the same patch levels for Sun Cluster software and the common agent container software on all nodes of both clusters.
The patch level for each node on which you have installed Sun Cluster Geographic Edition software must meet the Sun Cluster patch-level requirements.
All nodes in one cluster must run the same version of Sun Cluster Geographic Edition software at the same patch level. However, primary and secondary clusters can run different versions of Sun Cluster Geographic Edition software, provided that each version is correctly patched and the versions differ by no more than one release. For example, one cluster can run fully patched Sun Cluster Geographic Edition 3.2 software while its partner runs fully patched Sun Cluster Geographic Edition 3.2 11/09 software, but both clusters should be brought to the same patch level as soon as possible. Likewise, if both partner clusters are running Sun Cluster Geographic Edition 3.2 11/09 software, both should be brought to the same patch level as soon as possible.
To ensure that the patches have been installed properly, install the patches on your secondary cluster before you install the patches on the primary cluster.
For additional information about Sun Cluster Geographic Edition patches, see the patch README file.
See Required Patches in Sun Cluster Geographic Edition 3.2 11/09 Release Notes for a list of required patches.
Ensure that the cluster is functioning properly.
To view the current status of the cluster, run the following command from any node:
% cluster status
See the cluster(1CL) man page for more information.
Search the /var/adm/messages log on the same node for unresolved error messages or warning messages.
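The log search can be scripted with a simple grep for the common severity keywords. The sketch below runs against a small sample file for illustration, since the real log lives at /var/adm/messages on a cluster node:

```shell
# Search a message log for error or warning entries.
scan_log() {
    grep -i -E 'error|warning' "$1" || echo "no error or warning messages in $1"
}

# On a cluster node you would run:  scan_log /var/adm/messages
printf 'Jan 1 10:00 node1 genunix: boot complete\nJan 1 10:05 node1 geod: WARNING: heartbeat lost\n' > /tmp/sample_messages
scan_log /tmp/sample_messages
```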
Become superuser on a node of the cluster.
Remove all application resource groups from protection groups.
This step ensures that resource groups are not stopped when you later stop the protection groups.
# geopg remove-resource-group resourcegroup protectiongroup
See the geopg(1M) man page for more information.
Perform the preceding steps on all clusters that have a partnership with this cluster.
Stop all protection groups that are active on the cluster.
# geopg stop -e local protectiongroup
See the geopg(1M) man page for more information.
Stop the Sun Cluster Geographic Edition infrastructure.
# geoadm stop
Shutting down the infrastructure ensures that a patch installation on one cluster does not affect the other cluster in the partnership.
See the geoadm(1M) man page for more information.
On each node, stop the common agent container.
# /usr/sbin/cacaoadm stop
You must use common agent container 2, which is located in the /usr/sbin directory. Use the /usr/sbin/cacaoadm -V command to check which version of common agent container you are using.
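The version check can be automated by extracting the major version number from the output of /usr/sbin/cacaoadm -V. This is a sketch; the dotted version-string format shown here is an assumption:

```shell
# Extract the major version from a `cacaoadm -V` style version string and
# verify that it is common agent container 2.
VERSION="2.2.0.1"      # on a node you would use:  VERSION=$(/usr/sbin/cacaoadm -V)
MAJOR=$(echo "$VERSION" | cut -d. -f1)
if [ "$MAJOR" = "2" ]; then
    echo "common agent container 2 detected"
else
    echo "unexpected common agent container version: $VERSION" >&2
fi
```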
Install the required patches for the Sun Cluster Geographic Edition software. Go to How to Install Patches on a Sun Cluster Geographic Edition System.
Perform this procedure on all nodes of the cluster.
To permit testing, patch the secondary cluster before you patch the primary cluster.
Perform the following tasks:
Ensure that the Solaris OS is installed to support Sun Cluster Geographic Edition software.
If Solaris software is already installed on the node, you must ensure that the Solaris installation meets the requirements for Sun Cluster Geographic Edition software and any other software that you intend to install on the cluster.
Ensure that Sun Cluster Geographic Edition software packages are installed on the node.
Ensure that you completed all steps in How to Prepare a Sun Cluster Geographic Edition System for Patches.
Ensure that all the nodes are online and part of the cluster.
To view the current status of the cluster, run the following command from any node:
% cluster status
See the cluster(1CL) man page for more information.
Search the /var/adm/messages log on the same node for unresolved error messages or warning messages.
Become superuser on each node.
On each node, install any necessary patches to support Sun Cluster Geographic Edition software by using the patchadd command.
If you are applying Sun Cluster patches, use the Sun Cluster methods on both clusters.
After you have installed all required patches on all nodes of the cluster, on each node start the common agent container.
# /usr/sbin/cacaoadm start
You must use common agent container 2, which is located in the /usr/sbin directory. Use the /usr/sbin/cacaoadm -V command to check which version of common agent container you are using.
On one node, enable Sun Cluster Geographic Edition software.
# geoadm start
Add all application resource groups that you removed while you were preparing the cluster for a patch installation back to the protection group.
# geopg add-resource-group resourcegroup protectiongroup
See the geopg(1M) man page for more information.
Start all the protection groups that you have added.
# geopg start -e local [-n] protectiongroup
See the geopg(1M) man page for more information.
After you patch the secondary cluster, perform a sanity test on the Sun Cluster Geographic Edition software, and then repeat this procedure on the primary cluster.