
Oracle® Solaris Cluster 4.3 Software Installation Guide


Updated: June 2019
 
 

Creating a Zone Cluster

This section describes how to use the clsetup utility to create a zone cluster, and how to add a network address, file system, ZFS storage pool, and storage device to the new zone cluster.

If any node is in noncluster mode, changes that you make are propagated when the node returns to cluster mode. Therefore, you can create a zone cluster even if some global-cluster nodes are in noncluster mode. When those nodes return to cluster mode, the system performs zone-cluster creation tasks on those nodes.

You can alternatively use the clzonecluster utility to create and configure a cluster. See the clzonecluster(1CL) man page for more information.


Note -  You cannot change the zone cluster name after the zone cluster is created.

Also, once the zone cluster is configured, switching ip-type between exclusive and shared is not supported.


This section contains the following procedures:

  • How to Install and Configure Trusted Extensions

  • How to Create a Zone Cluster (clsetup)

  • How to Create a solaris10 Brand Zone Cluster (CLI)

  • How to Configure a Zone Cluster to Use Trusted Extensions

How to Install and Configure Trusted Extensions

This procedure prepares the global cluster to use the Trusted Extensions feature of Oracle Solaris with zone clusters. If you do not plan to enable Trusted Extensions, proceed to Creating a Zone Cluster.

Perform this procedure on each node in the global cluster.

Before You Begin

Perform the following tasks:

  1. Assume the root role on a node of the global cluster.
  2. Install and configure Trusted Extensions software.

    Follow procedures in Chapter 3, Adding the Trusted Extensions Feature to Oracle Solaris in Trusted Extensions Configuration and Administration.

  3. Disable the Trusted Extensions zoneshare and zoneunshare scripts.

    The Trusted Extensions zoneshare and zoneunshare scripts support the ability to export home directories on the system. An Oracle Solaris Cluster configuration does not support this feature.

    Disable this feature by replacing each script with a symbolic link to the /bin/true utility. Use the -f option to overwrite the existing scripts.

    phys-schost# ln -fs /bin/true /usr/lib/zones/zoneshare
    phys-schost# ln -fs /bin/true /usr/lib/zones/zoneunshare
  4. Configure all logical-hostname and shared-IP addresses that are to be used in the zone cluster.

    See How to Create a Default Trusted Extensions System in Trusted Extensions Configuration and Administration.

  5. (Optional) Enable remote login by the LDAP server to the global-cluster node.
    1. In the /etc/default/login file, comment out the CONSOLE entry.
    2. Enable remote login.
      phys-schost# svcadm enable rlogin
    3. Modify the /etc/pam.conf file.

      Modify the account management entries by appending a Tab character and typing allow_remote or allow_unlabeled, respectively, as shown below (Tab represents a literal Tab character).

      other   account requisite       pam_roles.so.1        Tab  allow_remote
      other   account required        pam_unix_account.so.1 Tab  allow_unlabeled
  6. Modify the admin_low template.
    1. Assign the admin_low template to each IP address that does not belong to a Trusted Extensions machine that is used by the global zone.
      # tncfg -t admin_low
      tncfg:admin_low> add host=ip-address1
      tncfg:admin_low> add host=ip-address2
      …
      tncfg:admin_low> exit
    2. Remove the wildcard address 0.0.0.0/32 from the tncfg template.
      # tncfg -t admin_low remove host=0.0.0.0
  7. Assign the cipso template to each IP address that does belong to a Trusted Extensions machine that is used by the global zone.
    # tncfg -t cipso
    tncfg:cipso> add host=ip-address1
    tncfg:cipso> add host=ip-address2
    …
    tncfg:cipso> exit
  8. Repeat Step 1 through Step 7 on each remaining node of the global cluster.

    When all steps are completed on all global-cluster nodes, perform the remaining steps of this procedure on each node of the global cluster.

  9. On each global-cluster node, add the IP address of the Trusted Extensions-enabled LDAP server to the /etc/inet/hosts file.

    The LDAP server is used by the global zone and by the nodes of the zone cluster.
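
    The following is a sketch of such an entry, using a hypothetical LDAP server address and hostname:

    phys-schost# echo "192.168.10.25   ldap-server" >> /etc/inet/hosts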

  10. (Optional) Make the global-cluster node an LDAP client.

    See Make the Global Zone an LDAP Client in Trusted Extensions in Trusted Extensions Configuration and Administration.

  11. Add Trusted Extensions users.

    See Creating Roles and Users in Trusted Extensions in Trusted Extensions Configuration and Administration.

Next Steps

Create the zone cluster. Go to Creating a Zone Cluster.

How to Create a Zone Cluster (clsetup)

Perform this procedure to create a zone cluster using the clsetup utility.

To modify the zone cluster after it is installed, see Performing Zone Cluster Administrative Tasks in Oracle Solaris Cluster 4.3 System Administration Guide and the clzonecluster(1CL) man page.


Note -  You cannot change the zone cluster name after the zone cluster is created.

Before You Begin

  • Create a global cluster. See Establishing the Global Cluster.

  • Read the guidelines and requirements for creating a zone cluster. See Zone Clusters.

  • If you plan to use a zone cluster configuration profile when creating a solaris or labeled brand zone cluster, ensure that the file is created and the file name has the .xml extension. See the Example section of the clzonecluster(1CL) man page for an example of the profile contents.

  • If the zone cluster will use Trusted Extensions, ensure that you have installed, configured, and enabled Trusted Extensions as described in How to Install and Configure Trusted Extensions.

  • If the cluster does not have sufficient subnets available to add a zone cluster, you must modify the private IP address range to provide the needed subnets. For more information, see How to Change the Private Network Address or Address Range of an Existing Cluster in Oracle Solaris Cluster 4.3 System Administration Guide. To display the current private-network settings, see the sketch after this list.

  • Have available the following information:

    • The unique name to assign to the zone cluster.


      Note -  If Trusted Extensions is enabled, the zone cluster name must be the same name as a Trusted Extensions security label that has the security levels that you want to assign to the zone cluster. Create a separate zone cluster for each Trusted Extensions security label that you want to use.
    • The zone path that the nodes of the zone cluster will use. For more information, see the description of the zonepath property in Configurable Resources and Properties in Oracle Solaris Zones Configuration Resources. By default, whole-root zones are created.

    • The name of each node in the global cluster on which to create a zone-cluster node.

    • The zone public hostname, or host alias, that you assign to each zone-cluster node.

    • If applicable, the public-network IP address that each zone-cluster node uses. Specifying an IP address and NIC for each zone cluster node is required if the zone cluster will be used in a Geographic Edition configuration. Otherwise, this requirement is optional. For more information about this Geographic Edition requirement, see Geographic Edition.

    • If applicable, the name of the public network management object that each zone-cluster node uses to connect to the public network. For a solaris10 branded exclusive-IP zone cluster, you can only use an IPMP group as the public network management object.


    Note -  If you do not configure an IP address for each zone cluster node, two things will occur:
    • That specific zone cluster will not be able to configure NAS devices for use in the zone cluster. The cluster uses the IP address of the zone cluster node when communicating with the NAS device, so not having an IP address prevents cluster support for fencing NAS devices.

    • The cluster software will activate any Logical Host IP address on any NIC.



Tip  -  While in the clsetup utility, you can press the < key to return to a previous screen.

You can also use Oracle Solaris Cluster Manager to create a zone cluster. For the browser interface log-in instructions, see How to Access Oracle Solaris Cluster Manager in Oracle Solaris Cluster 4.3 System Administration Guide.
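
You can display the cluster's current private-network settings, referenced in the private IP address range item above, with the cluster command. The following is a sketch; the values shown are illustrative:

phys-schost# cluster show-netprops

=== Private Network ===

private_netaddr:                                172.16.0.0
private_netmask:                                255.255.240.0
max_nodes:                                      64
max_privatenets:                                10
num_zoneclusters:                               12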

  1. Assume the root role on an active member node of a global cluster.

    You perform all steps of this procedure from a node of the global cluster.

  2. Ensure that the node of the global cluster is in cluster mode.
    phys-schost# clnode status
    === Cluster Nodes ===
    
    --- Node Status ---
    
    Node Name                                       Status
    ---------                                       ------
    phys-schost-2                                   Online
    phys-schost-1                                   Online
  3. Start the clsetup utility.
    phys-schost# clsetup

    The Main Menu is displayed.

  4. Choose the Zone Cluster menu item.
  5. Choose the Create a Zone Cluster menu item.
  6. Type the name of the zone cluster you want to add.

    A zone cluster name can contain ASCII letters (a-z and A-Z), numbers, a dash, or an underscore. The maximum length of the name is 20 characters.

  7. Choose the property you want to change.

    You can set the following properties:

    Property
    Description
    zonepath=zone-cluster-node-path
    Specifies the path to the zone cluster node. For example, /zones/sczone.
    brand=brand-type
    Specifies the solaris, solaris10, or labeled zones brand used in the zone cluster.

    Note -  To use Trusted Extensions, you must use the labeled brand. To create an exclusive-IP zone cluster, you can use either the solaris or the solaris10 brand.

    To create an exclusive-IP solaris10 brand zone cluster, set the properties in the interactive clzonecluster utility as follows:

    clzc:cz1> set brand=solaris10
    clzc:cz1> set ip-type=exclusive

    ip-type=value
    Specifies the type of network IP address used by the zone cluster. Valid ip-type values are shared and exclusive.
    The maximum number of exclusive-IP zone clusters is constrained by the cluster property num_xip_zoneclusters, which you can set during initial cluster installation. The default value is 3. For more information, see the cluster(1CL) man page.
    enable_priv_net=value
    When set to true, Oracle Solaris Cluster private network communication is enabled between the nodes of the zone cluster. The Oracle Solaris Cluster private hostnames and IP addresses for the zone cluster nodes are automatically generated by the system. Private network communication is disabled if the value is set to false. The default value is true.
    When the enable_priv_net property is set to true along with the following properties, private communication occurs in the following ways:
    • ip-type=shared – Communication between zone cluster nodes uses the private networks of the global cluster.

    • ip-type=exclusive (solaris brand only) – Communication between zone cluster nodes uses the specified privnet resources. The privnet resources are either Virtual Network Interfaces (VNICs) for the Ethernet type of private network adapters, or InfiniBand (IB) partitions for the IB type of private network adapters. The VNICs or IB partitions are automatically created by the wizard over each private network adapter of the global cluster, and used to configure a zone cluster.

    The VNICs or IB partitions that the wizard generates use the following naming conventions:
    For the Ethernet type: private-network-interface-name_zone-cluster-name_vnic0.
    For the IB type: private-network-interface-name_zone-cluster-name_ibp0.
    For example, the private network interfaces of the global cluster are net2 and net3, and the zone cluster name is zone1. If net2 and net3 are Ethernet type network interfaces, the two VNICs that are created for the zone cluster will have the names net2_zone1_vnic0 and net3_zone1_vnic0.
    If net2 and net3 are IB type network interfaces, the two IB partitions created for the zone cluster will have the names net2_zone1_ibp0 and net3_zone1_ibp0.
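
    After the zone cluster is created, you can list the automatically created VNICs from the global zone with the dladm utility. The following sketch assumes the net2/net3 Ethernet example above; the output values are illustrative:

    phys-schost# dladm show-vnic
    LINK                OVER    SPEED  MACADDRESS        MACADDRTYPE  VIDS
    net2_zone1_vnic0    net2    1000   2:8:20:0:0:1      random       0
    net3_zone1_vnic0    net3    1000   2:8:20:0:0:2      random       0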
  8. For a solaris10 brand zone cluster, enter a zone root password.

    A root account password is required for a solaris10 brand zone.

  9. (Optional) Choose the Zone System Resource Control property that you want to change.

    You can set the following properties:

    Property
    Description
    max-lwps=value
    Specifies the maximum number of lightweight processes (LWPs) simultaneously available to this zone cluster.
    max-shm-memory=value
    Specifies the maximum amount of shared memory in GBytes allowed for this zone cluster.
    max-shm-ids=value
    Specifies the maximum number of shared memory IDs allowed for this zone cluster.
    max-msg-ids=value
    Specifies the maximum number of message queue IDs allowed for this zone cluster.
    max-sem-ids=value
    Specifies the maximum number of semaphore IDs allowed for this zone cluster.
    cpu-shares=value
    Specifies the number of Fair Share Scheduler (FSS) shares to allocate to this zone cluster.
  10. (Optional) Choose the Zone CPU Resource Control property that you want to change.

    You can set the following properties:

    Property
    Description
    scope=scope-type
    Specifies whether the ncpus property used in a zone cluster is dedicated-cpu or capped-cpu.
    ncpus=value
    Specifies the limit for the scope type.
    • If the scope property is set to dedicated-cpu, the ncpus property sets a limit on the number of CPUs that should be assigned for this zone's exclusive use. The zone will create a pool and processor set when it boots. See the pooladm(1M) and poolcfg(1M) man pages for more information on resource pools.

    • If the scope property is set to capped-cpu, the ncpus property sets a limit on the amount of CPU time that can be used by a zone cluster. The unit translates to the percentage of a single CPU that can be used by all user threads in a zone, expressed as a fraction (for example, .75) or a mixed number (whole number and fraction, for example, 1.25). An ncpus value of 1 means 100% of a CPU. See the pooladm(1M) and poolcfg(1M) man pages for more information on resource pools.
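
    You can also set these values outside the clsetup utility through the interactive clzonecluster utility. The following is a minimal sketch that assumes a zone cluster named sczone and illustrative values:

    phys-schost# clzonecluster configure sczone
    clzc:sczone> add dedicated-cpu
    clzc:sczone:dedicated-cpu> set ncpus=2
    clzc:sczone:dedicated-cpu> end
    clzc:sczone> commit
    clzc:sczone> exit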

  11. (Optional) Choose the capped-memory property that you want to change.

    You can set the following properties:

    Property
    Description
    physical=value
    Specifies the GByte limit for physical memory.
    swap=value
    Specifies the GByte limit for swap memory.
    locked=value
    Specifies the GByte limit for locked memory.

    You can also use Oracle Solaris Cluster Manager to view the capped-memory configuration of a zone cluster, as well as the dedicated-cpu and capped-cpu configurations. For the browser interface log-in instructions, see How to Access Oracle Solaris Cluster Manager in Oracle Solaris Cluster 4.3 System Administration Guide.
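
    If you later need to adjust the memory limits, the following minimal clzonecluster sketch applies, assuming an existing capped-memory resource in a zone cluster named sczone and illustrative values:

    phys-schost# clzonecluster configure sczone
    clzc:sczone> select capped-memory
    clzc:sczone:capped-memory> set physical=4G
    clzc:sczone:capped-memory> set swap=6G
    clzc:sczone:capped-memory> end
    clzc:sczone> commit
    clzc:sczone> exit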

  12. Choose a physical host from the list of available physical hosts.

    You can select one or all of the available physical nodes (or hosts), and then configure one zone-cluster node at a time.

    You can set the following properties:

    Property
    Description
    hostname=hostname
    Specifies the zone-cluster node hostname. For example, zc-host-1.
    address=public-network-address
    Specifies the public network address for the zone-cluster node on a shared-IP type zone cluster. For example, 172.1.1.1.
    physical=physical-interface
    Specifies a network physical interface for the public network from the available network interfaces that are discovered on the physical nodes. For example, sc_ipmp0 or net0.
    defrouter=default-router
    Specifies the default router for the network address, if your zone is configured in a different subnet. Each zone or set of zones that uses a different defrouter setting must be on a different subnet, for example, 192.168.0.1. See the zonecfg(1M) man page for more information about the defrouter property.
  13. Specify the network addresses for the zone cluster.

    The network addresses can be used to configure a logical hostname or shared IP cluster resources in the zone cluster. The network address is in the zone cluster global scope.

  14. At the Review Configuration screen, press Return to continue and then type c to create the zone cluster.

    The results of your configuration change are displayed, similar to the following:

     >>> Result of the Creation for the Zone Cluster(sczone) <<<
    
    The zone cluster is being created with the following configuration
    
    /usr/cluster/bin/clzonecluster configure sczone
    create
    set brand=solaris
    set zonepath=/zones/sczone
    set ip-type=shared
    set enable_priv_net=true
    add capped-memory
    set physical=2G
    end
    add node
    set physical-host=phys-schost-1
    set hostname=zc-host-1
    add net
    set address=172.1.1.1
    set physical=net0
    end
    end
    add net
    set address=172.1.1.2
    end
    
    Zone cluster, sczone has been created and configured successfully.
    
    Continue to install the zone cluster(yes/no) ?
  15. Type yes to continue.

    The clsetup utility performs a standard configuration of a zone cluster and you cannot specify any options.

  16. When finished, exit the clsetup utility.
  17. Verify the zone cluster configuration.

    The verify subcommand checks for the availability of the specified resources. If the clzonecluster verify command succeeds, no output is displayed.

    phys-schost-1# clzonecluster verify zone-cluster-name
    phys-schost-1# clzonecluster status zone-cluster-name
    === Zone Clusters ===
    
    --- Zone Cluster Status ---
    
    Name      Node Name   Zone HostName   Status    Zone Status
    ----      ---------   -------------   ------    -----------
    zone      basenode1   zone-1          Offline   Configured
              basenode2   zone-2          Offline   Configured
  18. For Trusted Extensions, make the password files writable on each zone-cluster node.

    From the global zone, launch the txzonemgr BUI.

    phys-schost# txzonemgr

    Select the global zone, then select the item, Configure per-zone name service.

  19. If you typed No in Step 15, install the zone cluster.
    phys-schost-1# clzonecluster install options zone-cluster-name
    Waiting for zone install commands to complete on all the nodes
    of the zone cluster "zone-cluster-name"...
    • For a solaris or labeled brand zone cluster, the following options are valid.
      Option
      Description
      -c config-profile.xml
      Includes system configuration information. The -c config-profile.xml option provides a configuration profile for all non-global zones of the zone cluster. Using this option changes only the hostname of the zone, which is unique for each zone in the zone cluster. All profiles must have a .xml extension.
      The file contains a line-delimited list of the commands to be specified to the interactive clzonecluster utility. See the Example section of the clzonecluster(1CL) man page for an example of the profile contents.
      -M manifest.xml
      Specifies a custom Automated Installer manifest that you configure to install the necessary packages on all zone-cluster nodes. Use this option if the base global-cluster nodes for the zone-cluster are not all installed with the same Oracle Solaris Cluster packages but you do not want to change which packages are on the base nodes. If the clzonecluster install command is run without the –M option, zone-cluster installation fails on a base node if it is missing a package that is installed on the issuing base node.
    • For a solaris10 brand zone cluster, the following options are valid when using the clzonecluster install and the clzonecluster install-cluster commands.

      When using the clzonecluster install command, use either the –a option or the –d option to install the solaris10 image.

      When using the clzonecluster install-cluster command, you can use the –d, –s, and –p options in the same command, to install cluster core packages, Geographic Edition software, and agents that are supported in the zone cluster, as well as patches.

      Option
      Description
      -a absolute_path_to_archive
      Specifies the absolute path to a solaris10 system archive to be used as the source image. The archive has to be accessible from all the nodes where the zone cluster is configured.
      # clzonecluster install \
      [-n nodename[,…]] \
      -a absolute_path_to_archive \
      zone-cluster-name
      -d absolute_directory_path
      Specifies the full directory path to the root directory of an installed solaris10 non-global zone. The path should be accessible on all the physical nodes of the cluster where the zone cluster will be installed.
      # clzonecluster install \
      [-n nodename[,…]] \
      -d absolute_directory_path \
      zone-cluster-name
      -d dvd-image-directory
      -p patchdir=patchdir[,patchlistfile=patchlistfile]
      -s {all | software-component[,…]}

      Note -  Oracle Solaris Cluster patches 145333-15 for SPARC and 145334-15 for x86 are required only when you are installing the zone cluster with either the Oracle Solaris Cluster 3.3 software or the Oracle Solaris Cluster 3.3 5/11 software.

      You must install a minimum of Oracle Solaris Cluster 3.3 patch 145333-15 for SPARC or 145334-15 for x86 before you install the solaris10 brand zone cluster. Log in to My Oracle Support to retrieve the patch. Then from the global zone, use the –p option to install the patch.

      The –d option specifies the full path to a DVD image directory for an Oracle Solaris Cluster release that supports the solaris10 brand zones. The cluster software DVD directory must be accessible from the global zone of the node where you run the command.
      In the –p option, patchdir specifies the directory of Oracle Solaris Cluster patches, and patchlistfile is a file that contains the list of patches in the patchdir directory to install. The patchdir directory is required, and must be accessible from inside the solaris10 brand zone on all nodes of the zone cluster. For additional instructions on installing patches, log in to My Oracle Support (https://support.oracle.com) and search for ID 1278636.1, How to Find and Download any Revision of a Solaris Patch.
      The –s option specifies the cluster software components that include Geographic Edition and data services, in addition to the core packages.
      # clzonecluster install-cluster \
      -d dvd-image-directory \
      [-p patchdir=patchdir[,patchlistfile=filename]] \
      [-s all] \
      [-n phys-schost-1[,…]] \
      [-v] \
      zone-cluster-name

    For more information, see the clzonecluster(1CL) man page.

  20. If you did not use the -c config-profile.xml option when you installed the zone cluster in Step 19, perform sysid configuration.

    If you did use the -c config-profile.xml option in Step 19, you do not need to perform sysid configuration. Proceed to Step 21.


    Note -  In the following steps, the non-global zone zcnode and zone-cluster-name share the same name.
    • For an exclusive-IP labeled brand zone cluster, perform the following steps.

      Configure only one zone-cluster node at a time.

      1. Boot the non-global zone of one zone-cluster node.
        phys-schost# zoneadm -z zcnode boot
      2. Unconfigure the Oracle Solaris instance and reboot the zone.
        phys-schost# zlogin zcnode
        zcnode# sysconfig unconfigure
        zcnode# reboot

        The zlogin session terminates during the reboot.

      3. Issue the zlogin command and progress through the interactive screens.
        phys-schost# zlogin -C zcnode
      4. When finished, exit the zone console.

        For information about methods to exit from a non-global zone, see How to Exit a Non-Global Zone in Creating and Using Oracle Solaris Zones.

      5. From the global zone, halt the zone-cluster node.
        phys-schost# zoneadm -z zcnode halt
      6. Repeat the preceding steps for each remaining zone-cluster node.
    • For a shared-IP labeled brand zone cluster, perform the following steps on each zone-cluster node.
      1. From one global-cluster node, boot the zone cluster.
        phys-schost# clzonecluster boot zone-cluster-name
      2. Unconfigure the Oracle Solaris instance and reboot the zone.
        phys-schost# zlogin zcnode
        zcnode# sysconfig unconfigure
        zcnode# reboot

        The zlogin session terminates during the reboot.

      3. Issue the zlogin command and progress through the interactive screens.
        phys-schost# zlogin -C zcnode
      4. When finished, exit the zone console.

        For information about methods to exit from a non-global zone, see How to Exit a Non-Global Zone in Creating and Using Oracle Solaris Zones.

      5. Repeat the previous steps for each remaining zone-cluster node.
    • For a solaris or solaris10 brand zone cluster, perform the following steps on each zone-cluster node.
      1. From one global-cluster node, boot the zone cluster.
        phys-schost# clzonecluster boot zone-cluster-name
      2. Issue the zlogin command and progress through the interactive screens.
        phys-schost# zlogin -C zcnode
      3. When finished, exit the zone console.

        For information about methods to exit from a non-global zone, see How to Exit a Non-Global Zone in Creating and Using Oracle Solaris Zones.

      4. Repeat the previous two steps for each remaining zone-cluster node.
  21. Boot the zone cluster.

    Installation of the zone cluster might take several minutes.

    phys-schost# clzonecluster boot zone-cluster-name
  22. (Exclusive-IP zone clusters) Manually configure an IPMP group.

    The clsetup utility does not automatically configure IPMP groups for exclusive-IP zone clusters. You must create an IPMP group manually before you create a logical-hostname or shared-address resource, and add the underlying public network interface to the IPMP group. Since the underlying interface might have addresses associated with it, you must move the associated addresses to the IPMP group.

    In each of the nodes of the zone cluster, configure the IPMP group and add an underlying public network interface to it. Delete any address that is already associated with the underlying interface, as shown in the output of the ipadm show-addr command, and re-create it on the IPMP interface.

    zcnode# ipadm create-ipmp -i interface sc_ipmp0
    zcnode# ipadm show-addr interface
    zcnode# ipadm delete-addr interface/name
    zcnode# ipadm create-addr -T static -a IPaddress/prefix sc_ipmp0/name

    Note -  If the zone cluster's public networking interface is created over a global zone link aggregation or a global zone VNIC that is directly backed by a link aggregation, you do not need to create IPMP groups over it.
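
    For example, the following minimal sketch assumes that the underlying interface is net0 with an existing address object net0/v4 of 192.168.1.50/24; all names and values are illustrative:

    zcnode# ipadm create-ipmp -i net0 sc_ipmp0
    zcnode# ipadm show-addr net0
    zcnode# ipadm delete-addr net0/v4
    zcnode# ipadm create-addr -T static -a 192.168.1.50/24 sc_ipmp0/v4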

Next Steps

To configure Oracle Solaris Cluster 3.3 data services that you installed in a solaris10 brand zone cluster, follow procedures for zone clusters in the applicable data-service manual. See Oracle Solaris Cluster 3.3 Documentation (http://www.oracle.com/technetwork/documentation/solaris-cluster-33-192999.html).

To complete Trusted Extensions configuration, go to How to Configure a Zone Cluster to Use Trusted Extensions.

Otherwise, add file systems or storage devices to the zone cluster. See Adding File Systems to a Zone Cluster and Adding Storage Devices to a Zone Cluster.

How to Create a solaris10 Brand Zone Cluster (CLI)

The solaris10 brand zone cluster provides a virtualized Oracle Solaris 10 cluster environment in an Oracle Solaris 11 configuration. You can use the solaris10 brand zone cluster to run or migrate cluster applications that are deployed on the Oracle Solaris 10 operating system, without any modification to the application.

Before You Begin

Perform the following tasks:

  • Ensure that all requirements in Planning the Oracle Solaris Cluster Environment are met.

  • Select a zone image to migrate or install. The target systems that can be used to create the zone image for installing a zone cluster are the following:

    • Native brand zone on an Oracle Solaris 10 system.

    • Cluster brand zone on an Oracle Solaris Cluster node with the proper patch level, or an archive derived from a physical system installed with Oracle Solaris 10 software. For patch information, see the Oracle Solaris Cluster 4 Compatibility Guide.

    • solaris10 brand zone archive derived from an installed solaris10 brand zone.

    • An Oracle Solaris 10 physical system.

    • An Oracle Solaris 10 physical cluster node.

For more information about solaris10 brand zones, see Creating and Using Oracle Solaris 10 Zones.

  1. Assume the root role on an active member node of a global cluster.

    Perform all steps of this procedure from a node of the global cluster.

  2. Create an archive and store it in a shared location.
    # flarcreate -S -n s10-system -L cpio /net/mysharehost/share/s10-system.flar
    
    This archiver format is NOT VALID for flash installation of ZFS root pool.
    
    This format is useful for installing the system image into a zone.
    Reissue command without -L option to produce an archive for root pool install.
    Full Flash
    Checking integrity...
    Integrity OK.
    Running precreation scripts...
    Precreation scripts done.
    Creating the archive...
    6917057 blocks
    Archive creation complete.
    Running postcreation scripts...
    Postcreation scripts done.
    
    Running pre-exit scripts...
    Pre-exit scripts done.

    For more information about creating archives, see Chapter 2, Assessing an Oracle Solaris 10 System and Creating an Archive in Creating and Using Oracle Solaris 10 Zones.

  3. Configure the zone cluster.

    Create and configure the zone cluster on the global cluster, as shown in the following example.


    Note -  The main difference between the solaris and solaris10 brand zone cluster configurations is that you set the brand to solaris10 and add the sysid configuration.
    # clnode status

    === Cluster Nodes ===

    --- Node Status ---

    Node Name                     Status
    ---------                     ------
    phys-host-1                   Online
    phys-host-2                   Online

    # cat ./s10-zc.config
    create -b
    set zonepath=/zones/s10-zc
    set brand=solaris10
    set autoboot=true
    set bootargs="-m verbose"
    add attr
    set name=cluster
    set type=boolean
    set value=true
    end
    add node
    set physical-host=phys-host-1
    set hostname=zc-host-1
    add net
    set address=10.134.90.196/24
    set physical=sc_ipmp0
    end
    end
    add node
    set physical-host=phys-host-2
    set hostname=zc-host-2
    add net
    set address=10.134.90.197/24
    set physical=sc_ipmp0
    end
    end
    add sysid
    set root_password=N4l3cWQb/s9zY
    set name_service="DNS{domain_name=mydomain.com name_server=13.35.24.52,13.35.29.41,19.13.8.13 search=mydomain.com}"
    set nfs4_domain=dynamic
    set security_policy=NONE
    set system_locale=C
    set terminal=vt100
    set timezone=US/Pacific
    end

    In the above configuration, the encrypted root_password value corresponds to the password solaris.

    # clzonecluster configure -f ./s10-zc.config s10-zc
    # clzonecluster verify s10-zc
    # clzonecluster status s10-zc

    === Zone Clusters ===

    --- Zone Cluster Status ---

    Name         Brand         Node Name          Zone Host Name      Status       Zone Status
    ----         -----         ---------          --------------      ------       -----------
    s10-zc       solaris10     phys-host-1        zc-host-1           Offline      Configured
                               phys-host-2        zc-host-2           Offline      Configured
  4. Install the zone image for the zone cluster.

    Use the archive created in Step 2.

    # clzonecluster install -a /net/mysharehost/share/s10-system.flar s10-zc
  5. Install the cluster software.

    Perform this step only if the archive does not contain cluster software in the image.

    1. Boot the zone cluster into Offline/Running mode.
      # clzonecluster boot -o s10-zc
    2. Access the zone on all nodes of the zone cluster and verify that the system configuration is complete.
      # zlogin -C s10-zc

      If the configuration is not complete, finish any pending system configuration.

    3. From the global zone, check the zone cluster status.
      # clzonecluster status s10-zc

      === Zone Clusters ===

      --- Zone Cluster Status ---

      Name       Brand         Node Name          Zone Host Name     Status       Zone Status
      ----       -----         ---------          --------------     ------       -----------
      s10-zc     solaris10     phys-host-1        zc-host-1          Offline      Running
                               phys-host-2        zc-host-2          Offline      Running
      
    4. Install the zone cluster software.
      # clzonecluster install-cluster -d /net/mysharehost.com/osc-dir/ \
      -p patchdir=/net/mysharehost/osc-dir,patchlistfile=plist-sparc \
      -s all s10-zc

      -p patchdir

      Specifies the location of the patches to be installed along with the cluster software.

      patchlistfile

      Specifies the file that contains the list of patches to be installed inside the zone cluster along with the cluster software. In this example, the contents of the file plist-sparc are as follows:

      # cat /net/mysharehost/osc-dir/plist-sparc
      145333-15

      Note - Both the patchdir and patchlistfile locations must be accessible to all nodes of the cluster.

      -s

      Specifies the agent packages that should be installed along with the core cluster software. In this example, all is specified to install all the agent packages.
  6. Boot the zone cluster.
    1. Reboot the zone cluster to boot the zone into Online/Running mode.

      The status might take some time to change to Online/Running.

      # clzonecluster reboot s10-zc
    2. From the global zone, check the zone cluster status.

      The zone cluster status should now be Online/Running.

      # clzonecluster status s10-zc

      === Zone Clusters ===

      --- Zone Cluster Status ---

      Name         Brand         Node Name          Zone Host Name       Status
      ----         -----         ---------          --------------       ------
      s10-zc       solaris10     phys-host-1        zc-host-1            Online
                                 phys-host-2        zc-host-2            Online
  7. Log into the zone.
    # zlogin s10-zc
    [Connected to zone 's10-zc' pts/2]
    Last login: Mon Nov 5 21:20:31 on pts/2
  8. Verify the status of the zone.
    # /usr/cluster/bin/clnode status

    === Cluster Nodes ===

    --- Node Status ---

    Node Name                     Status
    ---------                     ------
    zc-host-1                     Online
    zc-host-2                     Online

Next Steps

The solaris10 brand zone cluster configuration is now complete. You can now install and bring up any Oracle Solaris 10 applications and make them highly available by creating the necessary resources and resource groups.
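
For example, the following minimal sketch makes a hypothetical application highly available behind a logical hostname; the resource group, resource, and hostname names are illustrative. See the clresourcegroup(1CL) and clreslogicalhostname(1CL) man pages for details.

zc-host-1# /usr/cluster/bin/clresourcegroup create app-rg
zc-host-1# /usr/cluster/bin/clreslogicalhostname create -g app-rg -h app-lh app-lh-rs
zc-host-1# /usr/cluster/bin/clresourcegroup online -eM app-rg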

How to Configure a Zone Cluster to Use Trusted Extensions

After you create a labeled brand zone cluster, perform the following steps to finish configuration to use Trusted Extensions.

  1. Complete IP-address mappings for the zone cluster.

    Perform this step on each node of the zone cluster.

    1. From a node of the global cluster, display the node's ID.
      phys-schost# cat /etc/cluster/nodeid
      N
    2. Log in to a zone-cluster node on the same global-cluster node.

      Ensure that the SMF services have been imported and all services are up before you log in.

    3. Determine the IP addresses used by this zone-cluster node for the private interconnect.

      The cluster software automatically assigns these IP addresses when the cluster software configures a zone cluster.

      In the ifconfig -a output, locate the clprivnet0 logical interface that belongs to the zone cluster. The value for inet is the IP address that was assigned to support the use of the cluster private interconnect by this zone cluster.

      zc1# ifconfig -a
      lo0:3: flags=20010008c9<UP,LOOPBACK,RUNNING,NOARP,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
      zone zc1
      inet 127.0.0.1 netmask ff000000
      net0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
      inet 10.11.166.105 netmask ffffff00 broadcast 10.11.166.255
      groupname sc_ipmp0
      ether 0:3:ba:19:fa:b7
      ce0: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 4
      inet 10.11.166.109 netmask ffffff00 broadcast 10.11.166.255
      groupname sc_ipmp0
      ether 0:14:4f:24:74:d8
      ce0:3: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
      zone zc1
      inet 10.11.166.160 netmask ffffff00 broadcast 10.11.166.255
      clprivnet0: flags=1009843<UP,BROADCAST,RUNNING,MULTICAST,MULTI_BCAST,PRIVATE,IPv4> mtu 1500 index 7
      inet 172.16.0.18 netmask fffffff8 broadcast 172.16.0.23
      ether 0:0:0:0:0:2
      clprivnet0:3: flags=1009843<UP,BROADCAST,RUNNING,MULTICAST,MULTI_BCAST,PRIVATE,IPv4> mtu 1500 index 7
      zone zc1
      inet 172.16.0.22 netmask fffffffc broadcast 172.16.0.23
    4. Add to the zone-cluster node's /etc/inet/hosts file the following addresses of the zone-cluster node.
      • The hostname for the private interconnect, which is clusternodeN-priv, where N is the global-cluster node ID

        172.16.0.22    clusternodeN-priv 
      • Each net resource that was specified to the clzonecluster command when you created the zone cluster

    5. Repeat on the remaining zone-cluster nodes.
  2. Authorize communication with zone-cluster components.

    Create new entries for the IP addresses used by zone-cluster components and assign each entry a CIPSO template. These IP addresses, which exist in the zone-cluster node's /etc/inet/hosts file, are as follows:

    • Each zone-cluster node private IP address

    • All cl_privnet IP addresses in the zone cluster

    • Each logical-hostname public IP address for the zone cluster

    • Each shared-address public IP address for the zone cluster

    phys-schost# tncfg -t cipso
    tncfg:cipso> add host=ipaddress1
    tncfg:cipso> add host=ipaddress2
    …
    tncfg:cipso> exit

    For more information about CIPSO templates, see How to Configure a Different Domain of Interpretation in Trusted Extensions Configuration and Administration.

  3. Set IP strict multihoming to weak.

    Perform the following commands on each node of the zone cluster.

    phys-schost# ipadm set-prop -p hostmodel=weak ipv4
    phys-schost# ipadm set-prop -p hostmodel=weak ipv6

    For more information about the hostmodel property, see hostmodel (IPv4 or IPv6) in Oracle Solaris 11.3 Tunable Parameters Reference Manual.
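
    To confirm the setting, you can display the property on each node; the output below is illustrative:

    phys-schost# ipadm show-prop -p hostmodel
    PROTO PROPERTY    PERM CURRENT  PERSISTENT DEFAULT  POSSIBLE
    ipv6  hostmodel   rw   weak     weak       weak     strong,src-priority,weak
    ipv4  hostmodel   rw   weak     weak       weak     strong,src-priority,weak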

Next Steps

To add file systems or storage devices to the zone cluster, see Adding File Systems to a Zone Cluster and Adding Storage Devices to a Zone Cluster.

See Also

If you want to update the software on a zone cluster, follow procedures in Chapter 11, Updating Your Software in Oracle Solaris Cluster 4.3 System Administration Guide. These procedures include special instructions for zone clusters, where needed.