Perform this procedure for each non-global zone that you create in the global cluster.
For complete information about installing a zone, refer to System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.
You can configure a Solaris 10 non-global zone, referred to simply as a zone, on a cluster node while the node is booted in either cluster mode or noncluster mode.
If you create a zone while the node is booted in noncluster mode, the cluster software discovers the zone when the node joins the cluster.
If you create or remove a zone while the node is in cluster mode, the cluster software dynamically changes its list of zones that can master resource groups.
Perform the following tasks:
Plan your non-global zone configuration. Observe the requirements and restrictions in Guidelines for Non-Global Zones in a Global Cluster.
Have available the following information:
The total number of non-global zones that you will create.
The public adapter and public IP address that each zone will use.
The zone path for each zone. This path must be a local file system, not a cluster file system or a highly available local file system.
One or more devices that should appear in each zone.
(Optional) The name that you will assign each zone.
If you will assign the zone a private IP address, ensure that the cluster IP address range can support the additional private IP addresses that you will configure. Use the cluster show-netprops command to display the current private-network configuration.
If the current IP address range is not sufficient to support the additional private IP addresses that you will configure, follow the procedures in How to Change the Private Network Configuration When Adding Nodes or Private Networks to reconfigure the private IP-address range.
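As a quick check before you add zones, you can display the current private-network settings from the global zone. The command below is the one named above; its output fields vary by release and are not shown here:

```shell
phys-schost# cluster show-netprops
```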
For additional information, see Zone Components in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.
Become superuser on the global-cluster node where you are creating the non-voting node.
You must be working in the global zone.
For the Solaris 10 OS, verify on each node that multiuser services for the Service Management Facility (SMF) are online.
If services are not yet online for a node, wait until the state becomes online before you proceed to the next step.
phys-schost# svcs multi-user-server node
STATE          STIME    FMRI
online         17:52:55 svc:/milestone/multi-user-server:default
Configure, install, and boot the new zone.
You must set the autoboot property to true to support resource-group functionality in the non-voting node on the global cluster.
Follow procedures in the Solaris documentation:
Perform procedures in Chapter 18, Planning and Configuring Non-Global Zones (Tasks), in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.
Perform procedures in Installing and Booting Zones in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.
Perform procedures in How to Boot a Zone in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.
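As a sketch of the configure, install, and boot steps, a minimal session might look like the following. The zone name my-zone and the zone path /zone-path are illustrative; refer to the Solaris documentation cited above for the complete procedures. Note that autoboot is set to true, as required for resource-group functionality:

```shell
phys-schost# zonecfg -z my-zone
zonecfg:my-zone> create
zonecfg:my-zone> set zonepath=/zone-path
zonecfg:my-zone> set autoboot=true
zonecfg:my-zone> exit
phys-schost# zoneadm -z my-zone install
phys-schost# zoneadm -z my-zone boot
```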
Verify that the zone is in the ready state.
phys-schost# zoneadm list -v
  ID NAME     STATUS       PATH
   0 global   running      /
   1 my-zone  ready        /zone-path
For a whole-root zone with the ip-type property set to exclusive: If the zone might host a logical-hostname resource, configure a file system resource that mounts the method directory from the global zone.
phys-schost# zonecfg -z sczone
zonecfg:sczone> add fs
zonecfg:sczone:fs> set dir=/usr/cluster/lib/rgm
zonecfg:sczone:fs> set special=/usr/cluster/lib/rgm
zonecfg:sczone:fs> set type=lofs
zonecfg:sczone:fs> end
zonecfg:sczone> exit
(Optional) For a shared-IP zone, assign a private IP address and a private hostname to the zone.
The following command chooses and assigns an available IP address from the cluster's private IP-address range. The command also assigns the specified private hostname, or host alias, to the zone and maps it to the assigned private IP address.
phys-schost# clnode set -p zprivatehostname=hostalias node:zone

-p
Specifies a property.

zprivatehostname=hostalias
Specifies the zone private hostname, or host alias.

node
The name of the node.

zone
The name of the global-cluster non-voting node.
Perform the initial internal zone configuration.
Follow the procedures in Performing the Initial Internal Zone Configuration in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones. Choose either of the following methods:
Log in to the zone.
Use an /etc/sysidcfg file.
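If you choose the /etc/sysidcfg method, place the file at zone-path/root/etc/sysidcfg before the zone's first boot so that the system-identification questions are answered noninteractively. The following sketch shows the general shape of such a file; every value here is illustrative, and root_password takes an encrypted password string, not cleartext:

```text
system_locale=C
terminal=xterm
network_interface=primary {
    hostname=my-zone
}
security_policy=NONE
name_service=NONE
timezone=US/Pacific
root_password=encrypted-password-string
```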
In the non-voting node, modify the nsswitch.conf file.
These changes enable the zone to resolve searches for cluster-specific hostnames and IP addresses.
Log in to the zone.
phys-schost# zlogin -c zonename
Open the /etc/nsswitch.conf file for editing.
sczone# vi /etc/nsswitch.conf
Add the cluster switch to the beginning of the lookups for the hosts and netmasks entries, followed by the files switch.
The modified entries should appear similar to the following:
…
hosts:      cluster files nis [NOTFOUND=return]
…
netmasks:   cluster files nis [NOTFOUND=return]
…
For all other entries, ensure that the files switch is the first switch that is listed in the entry.
Exit the zone.
If you created an exclusive-IP zone, configure IPMP groups in each /etc/hostname.interface file in the zone.
You must configure an IPMP group for each public-network adapter that is used for data-service traffic in the zone. This information is not inherited from the global zone. See Public Networks for more information about configuring IPMP groups in a cluster.
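For example, a single-line /etc/hostname.interface file in the zone that brings up an adapter and places it in an IPMP group might look like the following. The hostname zonehost1 and the group name sc_ipmp0 are illustrative; see the Public Networks reference cited above for the supported configurations:

```text
zonehost1 netmask + broadcast + group sc_ipmp0 up
```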
Set up name-to-address mappings for all logical hostname resources that are used by the zone.
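For example, if the zone resolves hostnames through its local files, you might add an entry for each logical hostname to the zone's /etc/inet/hosts file. The address and name below are illustrative:

```text
192.168.10.25   lhost-1    # logical-hostname resource used by the zone
```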
To install an application in a non-global zone, use the same procedure as for a stand-alone system. See your application's installation documentation for procedures to install the software in a non-global zone. Also see Adding and Removing Packages and Patches on a Solaris System With Zones Installed (Task Map) in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.
To install and configure a data service in a non-global zone, see the Sun Cluster manual for the individual data service.