Sun Cluster Software Installation Guide for Solaris OS

Configuring a Non-Global Zone on a Global-Cluster Node

This section provides procedures to create a non-global zone on a global-cluster node.

Procedure: How to Create a Non-Global Zone on a Global-Cluster Node

Perform this procedure for each non-global zone that you create in the global cluster.


Note –

For complete information about installing a zone, refer to System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.


You can configure a Solaris 10 non-global zone, referred to simply as a zone, on a cluster node while the node is booted in either cluster mode or noncluster mode.

Before You Begin

Perform the following tasks:

For additional information, see Zone Components in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.

  1. Become superuser on the global-cluster node where you are creating the non-voting node.

    You must be working in the global zone.

  2. For the Solaris 10 OS, verify on each node that multi-user services for the Service Management Facility (SMF) are online.

    If services are not yet online for a node, wait until the state becomes online before you proceed to the next step.


    phys-schost# svcs multi-user-server node
    STATE          STIME    FMRI
    online         17:52:55 svc:/milestone/multi-user-server:default
  3. Configure, install, and boot the new zone.


    Note –

    You must set the autoboot property to true to support resource-group functionality in the non-voting node on the global cluster.


    Follow procedures in the Solaris documentation:

    1. Perform procedures in Chapter 18, Planning and Configuring Non-Global Zones (Tasks), in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.

    2. Perform procedures in Installing and Booting Zones in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.

    3. Perform procedures in How to Boot a Zone in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.
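
    Combined, a minimal session might look like the following sketch. The zone name my-zone and the zonepath /zones/my-zone are examples only; substitute your own values. Note that the autoboot property is set to true, as required for resource-group functionality.


    phys-schost# zonecfg -z my-zone
    zonecfg:my-zone> create
    zonecfg:my-zone> set zonepath=/zones/my-zone
    zonecfg:my-zone> set autoboot=true
    zonecfg:my-zone> commit
    zonecfg:my-zone> exit
    phys-schost# zoneadm -z my-zone install
    phys-schost# zoneadm -z my-zone boot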

  4. Verify that the zone is in the ready state.


    phys-schost# zoneadm list -v
    ID  NAME     STATUS       PATH
     0  global   running      /
     1  my-zone  ready        /zone-path
    
  5. For a whole-root zone with the ip-type property set to exclusive: If the zone might host a logical-hostname resource, configure a file system resource that mounts the method directory from the global zone.


    phys-schost# zonecfg -z sczone
    zonecfg:sczone> add fs
    zonecfg:sczone:fs> set dir=/usr/cluster/lib/rgm
    zonecfg:sczone:fs> set special=/usr/cluster/lib/rgm
    zonecfg:sczone:fs> set type=lofs
    zonecfg:sczone:fs> end
    zonecfg:sczone> exit
    
  6. (Optional) For a shared-IP zone, assign a private IP address and a private hostname to the zone.

    The following command chooses and assigns an available IP address from the cluster's private IP-address range. The command also assigns the specified private hostname, or host alias, to the zone and maps it to the assigned private IP address.


    phys-schost# clnode set -p zprivatehostname=hostalias node:zone
    
    -p

    Specifies a property.

    zprivatehostname=hostalias

    Specifies the zone private hostname, or host alias.

    node

    The name of the node.

    zone

    The name of the global-cluster non-voting node.

  7. Perform the initial internal zone configuration.

    Follow the procedures in Performing the Initial Internal Zone Configuration in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones. Choose either of the following methods:

    • Log in to the zone.

    • Use an /etc/sysidcfg file.
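
    For example, a minimal /etc/sysidcfg file placed under the zone's root file system (zonepath/root/etc/sysidcfg) before the zone's first boot might resemble the following. All values shown, including the hostname and time zone, are placeholders; the root_password value must be an encrypted password string, not clear text.


    system_locale=C
    terminal=xterm
    network_interface=primary {hostname=my-zone}
    security_policy=NONE
    name_service=NONE
    timezone=US/Pacific
    root_password=encrypted-password-string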

  8. In the non-voting node, modify the nsswitch.conf file.

    These changes enable the zone to resolve searches for cluster-specific hostnames and IP addresses.

    1. Log in to the zone.


      phys-schost# zlogin -C zonename
      
    2. Open the /etc/nsswitch.conf file for editing.


      sczone# vi /etc/nsswitch.conf
      
    3. Add the cluster switch to the beginning of the lookups for the hosts and netmasks entries, followed by the files switch.

      The modified entries should appear similar to the following:


      …
      hosts:      cluster files nis [NOTFOUND=return]
      …
      netmasks:   cluster files nis [NOTFOUND=return]
      …
    4. For all other entries, ensure that the files switch is the first switch that is listed in the entry.

    5. Exit the zone.

  9. If you created an exclusive-IP zone, configure IPMP groups in each /etc/hostname.interface file in the zone.

    You must configure an IPMP group for each public-network adapter that is used for data-service traffic in the zone. This information is not inherited from the global zone. See Public Networks for more information about configuring IPMP groups in a cluster.
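
    For example, to place the adapter bge0 in a link-based IPMP group, the zone's /etc/hostname.bge0 file might contain a line similar to the following. The adapter name bge0, the group name sc_ipmp0, and the IP address are examples only; substitute the values for your configuration.


    192.168.10.21 netmask + broadcast + group sc_ipmp0 up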

  10. Set up name-to-address mappings for all logical hostname resources that are used by the zone.

    1. Add name-to-address mappings to the /etc/inet/hosts file on the zone.

      This information is not inherited from the global zone.

    2. If you use a name server, add the name-to-address mappings.
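
    For example, an entry in the zone's /etc/inet/hosts file for a hypothetical logical hostname lh-apache might look like the following, where both the address and the name are placeholders for your own values.


    192.168.10.50   lh-apache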

Next Steps

To install an application in a non-global zone, use the same procedure as for a stand-alone system. See your application's installation documentation for procedures to install the software in a non-global zone. Also see Adding and Removing Packages and Patches on a Solaris System With Zones Installed (Task Map) in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.

To install and configure a data service in a non-global zone, see the Sun Cluster manual for the individual data service.