This section provides the following procedures to create a non-global zone on a global-cluster node.
Perform this procedure for each non-global zone that you create in the global cluster.
For complete information about installing a zone, refer to System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.
You can configure a Solaris 10 non-global zone, referred to simply as a zone, on a cluster node while the node is booted in either cluster mode or noncluster mode.
If you create a zone while the node is booted in noncluster mode, the cluster software discovers the zone when the node joins the cluster.
If you create or remove a zone while the node is in cluster mode, the cluster software dynamically changes its list of zones that can master resource groups.
Perform the following tasks:
Plan your non-global zone configuration. Observe the requirements and restrictions in Guidelines for Non-Global Zones in a Global Cluster.
Have available the following information:
The total number of non-global zones that you will create.
The public adapter and public IP address that each zone will use.
The zone path for each zone. This path must be a local file system, not a cluster file system or a highly available local file system.
One or more devices that should appear in each zone.
(Optional) The name that you will assign each zone.
If you will assign the zone a private IP address, ensure that the cluster IP address range can support the additional private IP addresses that you will configure. Use the cluster show-netprops command to display the current private-network configuration.
If the current IP address range is not sufficient to support the additional private IP addresses that you will configure, follow the procedures in How to Change the Private Network Configuration When Adding Nodes or Private Networks to reconfigure the private IP-address range.
For additional information, see Zone Components in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.
Become superuser on the global-cluster node where you are creating the non-voting node.
You must be working in the global zone.
Verify that multi-user services for the Service Management Facility (SMF) are online on the node. If services are not yet online for a node, wait until the state changes to online before you proceed to the next step.
phys-schost# svcs multi-user-server
STATE          STIME    FMRI
online         17:52:55 svc:/milestone/multi-user-server:default
Configure, install, and boot the new zone.
Follow the procedures for configuring, installing, and booting a zone in the Solaris documentation. A minimal command sequence is sketched below.
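The exact subcommands depend on the zone configuration that you planned. The following sequence is only a sketch; it assumes a shared-IP zone named my-zone with the zone path /zone-path, the names that appear in the verification output of the next step, and it omits site-specific resources such as network and device entries.
phys-schost# zonecfg -z my-zone
zonecfg:my-zone> create
zonecfg:my-zone> set zonepath=/zone-path
zonecfg:my-zone> set autoboot=true
zonecfg:my-zone> commit
zonecfg:my-zone> exit
phys-schost# zoneadm -z my-zone install
phys-schost# zoneadm -z my-zone ready
phys-schost# zoneadm -z my-zone boot
The zoneadm ready subcommand places the zone in the ready state that is verified in the next step; the boot subcommand then boots the zone.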
Verify that the zone is in the ready state.
phys-schost# zoneadm list -v
  ID NAME       STATUS     PATH
   0 global     running    /
   1 my-zone    ready      /zone-path
For a whole-root zone with the ip-type property set to exclusive: If the zone might host a logical-hostname resource, configure a file system resource that mounts the method directory from the global zone.
phys-schost# zonecfg -z sczone
zonecfg:sczone> add fs
zonecfg:sczone:fs> set dir=/usr/cluster/lib/rgm
zonecfg:sczone:fs> set special=/usr/cluster/lib/rgm
zonecfg:sczone:fs> set type=lofs
zonecfg:sczone:fs> end
zonecfg:sczone> exit
To assign the zone a private IP address and a private hostname, use the following command. The command chooses and assigns an available IP address from the cluster's private IP-address range. It also assigns the specified private hostname, or host alias, to the zone and maps it to the assigned private IP address.
phys-schost# clnode set -p zprivatehostname=hostalias node:zone
-p
    Specifies a property.
zprivatehostname=hostalias
    Specifies the zone private hostname, or host alias.
node
    The name of the node.
zone
    The name of the global-cluster non-voting node.
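For example, the following command assigns the illustrative host alias sczone-priv to the zone sczone on node phys-schost-1:
phys-schost-1# clnode set -p zprivatehostname=sczone-priv phys-schost-1:sczone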
Perform the initial internal zone configuration.
Follow the procedures in Performing the Initial Internal Zone Configuration in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones. Choose either of the following methods:
Log in to the zone.
Use an /etc/sysidcfg file. A sample sketch follows this list.
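If you use the sysidcfg method, place the file in the zone's root file system before you boot the zone for the first time. The following sketch assumes the zone path /zone-path and a zone hostname of my-zone; every value is illustrative and must be adapted to your site.
phys-schost# vi /zone-path/root/etc/sysidcfg
system_locale=C
terminal=vt100
network_interface=NONE {hostname=my-zone}
name_service=NONE
security_policy=NONE
timezone=US/Pacific
nfs4_domain=dynamic
root_password=encrypted-root-password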
In the zone, modify the /etc/nsswitch.conf file. These changes enable the zone to resolve searches for cluster-specific hostnames and IP addresses.
Log in to the zone.
phys-schost# zlogin zonename
Open the /etc/nsswitch.conf file for editing.
sczone# vi /etc/nsswitch.conf
Add the cluster switch to the beginning of the lookups for the hosts and netmasks entries, followed by the files switch.
The modified entries should appear similar to the following:
…
hosts:      cluster files nis [NOTFOUND=return]
…
netmasks:   cluster files nis [NOTFOUND=return]
…
For all other entries, ensure that the files switch is the first switch that is listed in the entry.
Exit the zone.
You must configure an IPMP group for each public-network adapter that is used for data-service traffic in the zone. This information is not inherited from the global zone. See Public Networks for more information about configuring IPMP groups in a cluster.
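For example, on a Solaris 10 system you can place a public adapter into an IPMP group from the global zone. The adapter name bge0 and the group name sc_ipmp0 in the following command are illustrative only:
phys-schost# ifconfig bge0 group sc_ipmp0
To make the group assignment persist across reboots, also add the group keyword to the adapter's /etc/hostname.bge0 file.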
Set up name-to-address mappings for all logical hostname resources that are used by the zone.
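For example, you might add an entry similar to the following to the /etc/inet/hosts file in the zone. The host name apache-lh and the address are illustrative only:
sczone# vi /etc/inet/hosts
192.168.100.10   apache-lh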
To install an application in a non-global zone, use the same procedure as for a stand-alone system. See your application's installation documentation for procedures to install the software in a non-global zone. Also see Adding and Removing Packages and Patches on a Solaris System With Zones Installed (Task Map) in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.
To install and configure a data service in a non-global zone, see the Sun Cluster manual for the individual data service.
Use this procedure to make a cluster file system available for use by a native brand non-global zone that is configured on a cluster node.
Use this procedure only with the native brand of non-global zones. You cannot perform this task with any other brand of non-global zone, such as the solaris8 brand or the cluster brand, which is used for zone clusters.
On one node of the global cluster, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.
Create a resource group with a node list of native brand non-global zones.
Use the following command to create a failover resource group:
phys-schost# clresourcegroup create -n node:zone[,…] resource-group
-n node:zone
    Specifies the names of the non-global zones in the resource-group node list.
resource-group
    The name of the resource group that you create.
Use the following command to create a scalable resource group:
phys-schost# clresourcegroup create -S -n node:zone[,…] resource-group
-S
    Specifies that the resource group is scalable.
Register the HAStoragePlus resource type.
phys-schost# clresourcetype register SUNW.HAStoragePlus
On each global-cluster node where a non-global zone in the node list resides, add the cluster file system entry to the /etc/vfstab file.
Entries in the /etc/vfstab file for a cluster file system must contain the global keyword in the mount options.
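For example, the entry might look like the following line, which is taken from the example at the end of this procedure; the device names and mount point are illustrative:
/dev/md/kappa-1/dsk/d0 /dev/md/kappa-1/rdsk/d0 /global/local-fs/apache ufs 5 yes logging,global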
Create the HAStoragePlus resource and define the file-system mount points.
phys-schost# clresource create -g resource-group -t SUNW.HAStoragePlus \
-p FileSystemMountPoints="mount-point-list" hasp-resource
-g resource-group
    Specifies the name of the resource group that the new resource is added to.
-p FileSystemMountPoints="mount-point-list"
    Specifies one or more file-system mount points for the resource.
hasp-resource
    The name of the HAStoragePlus resource that you create.
The resource is created in the enabled state.
Add a resource to resource-group and set a dependency for the resource on hasp-resource.
If you have more than one resource to add to the resource group, use a separate command for each resource.
phys-schost# clresource create -g resource-group -t resource-type \
-p Network_resources_used=hasp-resource resource
-t resource-type
    Specifies the resource type that you create the resource for.
-p Network_resources_used=hasp-resource
    Specifies that the resource has a dependency on the HAStoragePlus resource, hasp-resource.
resource
    The name of the resource that you create.
Bring online and in a managed state the resource group that contains the HAStoragePlus resource.
phys-schost# clresourcegroup online -M resource-group
-M
    Specifies that the resource group is managed.
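To verify the result, you can display the status of the resource group and of its resources:
phys-schost# clresourcegroup status resource-group
phys-schost# clresource status -g resource-group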
The following example creates a failover resource group, cfs-rg, to manage an HA-Apache data service. The resource-group node list contains two non-global zones, sczone1 on phys-schost-1 and sczone1 on phys-schost-2. The resource group contains an HAStoragePlus resource, hasp-rs, and a data-service resource, apache-rs. The file-system mount point is /global/local-fs/apache.
phys-schost-1# clresourcegroup create -n phys-schost-1:sczone1,phys-schost-2:sczone1 cfs-rg
phys-schost-1# clresourcetype register SUNW.HAStoragePlus

Add the cluster file system entry to the /etc/vfstab file on phys-schost-1.
phys-schost-1# vi /etc/vfstab
#device                  device                    mount                    FS    fsck  mount    mount
#to mount                to fsck                   point                    type  pass  at boot  options
#
/dev/md/kappa-1/dsk/d0   /dev/md/kappa-1/rdsk/d0   /global/local-fs/apache  ufs   5     yes      logging,global

Add the cluster file system entry to the /etc/vfstab file on phys-schost-2.
phys-schost-2# vi /etc/vfstab
…

phys-schost-1# clresource create -g cfs-rg -t SUNW.HAStoragePlus \
-p FileSystemMountPoints="/global/local-fs/apache" hasp-rs
phys-schost-1# clresource create -g cfs-rg -t SUNW.apache \
-p Network_resources_used=hasp-rs apache-rs
phys-schost-1# clresourcegroup online -M cfs-rg