On a cluster where the Solaris 10 OS is running, you can configure a resource group to run on a global-cluster voting node or on a global-cluster non-voting node. The RGM manages each global-cluster non-voting node as a switchover target. If a global-cluster non-voting node is specified in the node list of a resource group, the RGM brings the resource group online on the specified node.
Figure 3–8 illustrates the failover of resource groups between nodes in a two-host cluster. In this example, identical nodes are configured to simplify the administration of the cluster.
You can also configure a scalable resource group (which uses network load balancing) to run on a global-cluster non-voting node.
In Sun Cluster commands, you specify a zone by appending the name of the zone to the name of the host, separating the two names with a colon. For example:
phys-schost-1:zoneA
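As an illustration of this naming convention, the following sketch shows how a zone-qualified node list might be supplied when creating a failover resource group with the clresourcegroup command. The host names (phys-schost-1, phys-schost-2), the zone name (zoneA), and the resource group name (app-rg) are placeholders, and the exact options accepted depend on your Sun Cluster release; consult the clresourcegroup(1CL) man page for the authoritative syntax.

```shell
# Hypothetical example: create a failover resource group whose node list
# names the non-global zone "zoneA" on each of two cluster hosts.
# The RGM can then bring the group online in zoneA on either host.
clresourcegroup create \
    -n phys-schost-1:zoneA,phys-schost-2:zoneA \
    app-rg
```

Switching the group between the zone-qualified nodes then uses the same host:zone notation, for example clresourcegroup switch -n phys-schost-2:zoneA app-rg.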
Use support for Solaris zones directly through the RGM if any of the following criteria is met:
Your application cannot tolerate the additional failover time that is required to boot a zone.
You require minimum downtime during maintenance.
You require dual-partition software upgrade.
You are configuring a data service that uses a shared address resource for network load balancing.
If you plan to use support for Solaris zones directly through the RGM for an application, ensure that the following requirements are met:
The application is supported to run in non-global zones.
The data service for the application is supported to run on a global-cluster non-voting node.
If you use support for Solaris zones directly through the RGM, ensure that resource groups that are related by an affinity are configured to run on the same Solaris host.
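To illustrate the affinity requirement above, the following sketch shows one way a strong positive affinity might be declared so that two related resource groups are brought online on the same Solaris host. The group names (db-rg, app-rg) are placeholders; the RG_affinities property and the ++ (strong positive) operator are part of the standard RGM property set, but verify the details against the rg_properties(5) man page for your release.

```shell
# Hypothetical example: require app-rg to run on the same Solaris host
# as db-rg by declaring a strong positive affinity (++).
clresourcegroup set -p RG_affinities=++db-rg app-rg
```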
For information about how to configure support for Solaris zones directly through the RGM, see the following documentation: