For information about the purpose and function of Solaris 10 zones in a cluster, see Support for Solaris Zones in Sun Cluster Concepts Guide for Solaris OS.
For guidelines about configuring a cluster of non-global zones, see Zone Clusters.
Consider the following points when you create a Solaris 10 non-global zone, simply referred to as a zone, on a global-cluster node.
Reusing a zone name on multiple nodes – To simplify cluster administration, you can use the same name for a zone on each node where resource groups are to be brought online in that zone.
Private IP addresses – Do not attempt to use more private IP addresses than are available in the cluster.
Mounts – Do not include global mounts in zone definitions. Include only loopback mounts.
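A loopback (LOFS) mount can be added to a zone configuration with the `zonecfg` command. The following sketch shows the general form; the zone name `myzone` and the directory paths are placeholders, not values from this guide:

```shell
# Add a loopback (lofs) mount to an existing zone configuration.
# "myzone" and the /export/data paths are example values.
zonecfg -z myzone <<EOF
add fs
set dir=/export/data
set special=/export/data
set type=lofs
end
commit
EOF
```

A global mount, by contrast, would be configured outside the zone definition, which is why only lofs entries belong in `zonecfg`.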
Failover services – In multiple-host clusters, while Sun Cluster software permits you to specify different zones on the same Solaris host in a failover resource group's node list, doing so is useful only during testing. If a single host contains all zones in the node list, the node becomes a single point of failure for the resource group. For highest availability, zones in a failover resource group's node list should be on different hosts.
In single-host clusters, no functional risk is incurred if you specify multiple zones in a failover resource group's node list.
Scalable services – Do not create non-global zones for use in the same scalable service on the same Solaris host. Each instance of the scalable service must run on a different host.
Cluster file systems – For cluster file systems that use UFS or VxFS, do not directly add a cluster file system to a non-global zone by using the zonecfg command. Instead, configure an HAStoragePlus resource, which manages the mounting of the cluster file system in the global zone and performs a loopback mount of the cluster file system in the non-global zone.
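As an illustration of the HAStoragePlus approach, an administrator might register the resource type and create a resource that manages the cluster file system mount. The resource-group name, resource name, and mount point below are examples only:

```shell
# Register the HAStoragePlus resource type (done once per cluster).
clresourcetype register SUNW.HAStoragePlus

# Create an HAStoragePlus resource in an existing resource group.
# "my-rg", "my-hasp-rs", and the mount point are example names.
clresource create -g my-rg -t SUNW.HAStoragePlus \
    -p FileSystemMountPoints=/global/data my-hasp-rs
```

When the resource group is brought online in a non-global zone, the resource handles the global-zone mount and the loopback mount into the zone.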
LOFS – Solaris Zones requires that the loopback file system (LOFS) be enabled. However, the Sun Cluster HA for NFS data service requires that LOFS be disabled, to avoid switchover problems or other failures. If you configure both non-global zones and Sun Cluster HA for NFS in your cluster, do one of the following to prevent possible problems in the data service:
Disable the automountd daemon.
Exclude from the automounter map all files that are part of the highly available local file system that is exported by Sun Cluster HA for NFS.
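The two options above can be sketched as follows; the service FMRI is the standard Solaris 10 autofs service, while the map entry shown is a hypothetical example:

```shell
# Option 1: disable the automounter service, which stops automountd.
svcadm disable svc:/system/filesystem/autofs:default

# Option 2: keep autofs running, but remove or comment out any
# automounter map entries that cover file systems exported by
# Sun Cluster HA for NFS. For example, in /etc/auto_master, an
# entry like the following (example path and map name) would be
# removed:
#   /export/ha-nfs   auto_ha_nfs
```

Either option prevents LOFS-related interference between the automounter and the highly available NFS file systems.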
Logical-hostname resource groups – In a resource group that contains a LogicalHostname resource, if the node list contains any non-global zone with the ip-type property set to exclusive, all zones in that node list must have this property set to exclusive. Note that a global zone always has the ip-type property set to shared, and therefore cannot coexist in a node list that contains zones of ip-type=exclusive. This restriction applies only to versions of the Solaris OS that use the Solaris zones ip-type property.
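A zone's ip-type setting can be set and checked with `zonecfg`; the zone name here is a placeholder:

```shell
# Set an example zone to exclusive IP type, then verify the setting.
zonecfg -z myzone "set ip-type=exclusive; commit"
zonecfg -z myzone info ip-type
```

Checking every zone in the node list this way confirms that the list does not mix ip-type=shared and ip-type=exclusive zones.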
IPMP groups – For all public-network adapters that are used for data-service traffic in the non-global zone, you must manually configure IPMP groups in all /etc/hostname.adapter files in the zone. This information is not inherited from the global zone. For guidelines and instructions to configure IPMP groups, follow the procedures in Part VI, IPMP, in System Administration Guide: IP Services.
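A minimal sketch of such a file follows, using an example adapter (bge0), host name, and IPMP group name; substitute the values that apply to your configuration:

```shell
# Create /etc/hostname.bge0 inside the non-global zone, placing the
# adapter in IPMP group "sc_ipmp0". The host name, adapter, and
# group name are example values.
cat > /etc/hostname.bge0 <<'EOF'
zone1-host netmask + broadcast + group sc_ipmp0 up
EOF
```

Because the file lives in the zone's own /etc directory, it must be created in each non-global zone that hosts data-service traffic.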
Private-hostname dependency – Exclusive-IP zones cannot depend on the private hostnames and private addresses of the cluster.
Shared-address resources – Shared-address resources cannot use exclusive-IP zones.