
Oracle® Solaris Cluster Data Service for Oracle Solaris Zones Guide


Updated: September 2015
 
 

How to Install a solaris Branded Zone and Perform the Initial Internal Zone Configuration

Perform this task on each node that is to host the solaris branded non-global zone. For complete information about installing a solaris branded non-global zone, see Creating and Using Oracle Solaris Zones.

Before You Begin

For detailed information about configuring a solaris branded zone before installation of the zone, see Chapter 1, How to Plan and Configure Non-Global Zones, in Creating and Using Oracle Solaris Zones.


Note -  This procedure assumes that you are performing it on a two-node cluster. If you perform this procedure on a cluster that has more than two nodes, perform on all nodes any step that says to perform it on both nodes.
  1. Assume the root role on one node of the cluster.

    Alternatively, if your user account is assigned the System Administrator profile, issue commands as a non-root user through a profile shell, or prefix each command with the pfexec command.
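
    For example, with the System Administrator profile assigned, you might run a cluster status command through pfexec as follows. The resource-group name shown is the same placeholder that is used later in this procedure:

    phys-schost-1$ pfexec clresourcegroup status solaris-zone-resource-group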

  2. Bring the resource group online.
    phys-schost-1# clresourcegroup online -eM solaris-zone-resource-group
  3. For zones that are not set with the rootzpool zone property, create a ZFS file-system dataset on the ZFS storage pool that you created.

    You will use this file system as the zone root path for the zone that you create later in this procedure.

    phys-schost-1# zfs create pool/filesystem
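
    For example, with a hypothetical ZFS storage pool named HAzpool and a dataset named solariszone, the command might look like the following. Substitute the pool and dataset names from your own configuration:

    phys-schost-1# zfs create HAzpool/solariszone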
  4. Configure the zone on both nodes.

    You must define the osc-ha-zone attribute in the zone configuration, setting type to boolean and value to true.


    Note -  If the zone is not set with the rootzpool zone property, set the zone root path to the file system that you created on the ZFS storage pool.
    phys-schost# zonecfg -z zonename \
    'create ; add attr; set name=osc-ha-zone; set type=boolean; set value=true; end;
    set zonepath=/pool/filesystem/zonename ; set autoboot=false'
  5. Verify the zone configuration.
    phys-schost# zoneadm list -cv
    ID NAME          STATUS        PATH                         BRAND    IP
    0  global        running       /                            solaris  shared
    -  zonename      configured    /pool/filesystem/zonename    solaris  shared
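
    The zoneadm listing confirms the zone state but does not display zone attributes. As an optional check, you can also display the attr resource in the zone configuration to confirm that the osc-ha-zone attribute was recorded with type boolean and value true:

    phys-schost# zonecfg -z zonename info attr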
  6. Install the zone.
    1. (Only when the rootzpool or zpool zone property is not set) Determine on which node the resource group is online.
      phys-schost-1# clresourcegroup status solaris-zone-resource-group
      === Cluster Resource Groups ===
      
      Group Name                   Node Name      Suspended   Status
      ----------                   ---------      ---------   ------
      solaris-zone-resource-group  phys-schost-1  No          Online
      …

      Perform the rest of this step from the node that masters the resource group, or on all nodes for a multiple-master configuration.

    2. Install the zone on each node where the resource group is online.
      phys-schost-N# zoneadm -z zonename install
      
    3. Verify that the zone is installed.
      phys-schost-N# zoneadm list -cv
      ID NAME           STATUS       PATH                         BRAND    IP
      0  global         running      /                            solaris  shared
      -  zonename       installed    /pool/filesystem/zonename    solaris  shared
    4. Boot the zone that you created and verify that the zone is running.
      phys-schost-N# zoneadm -z zonename boot
      phys-schost-N# zoneadm list -cv
      ID NAME           STATUS       PATH                         BRAND    IP
      0  global         running      /                            solaris  shared
      -  zonename       running      /pool/filesystem/zonename    solaris  shared
    5. Open a new terminal window and log in to the zone console.

      Follow the interactive steps to finish the zone configuration. When the configuration is complete, disconnect from the zone console by typing the ~. escape sequence at the beginning of a line.

      phys-schost-N# zlogin -C zonename
    6. Halt the zone.

      The zone's status should return to installed.

      phys-schost-N# zoneadm -z zonename halt
    7. Detach the zone.

      The zone state changes from installed to configured.

      • If the zone is not set with the rootzpool or zpool zone property, forcibly detach the zone.
        phys-schost-N# zoneadm -z zonename detach -F
      • If the zone is set with the rootzpool or zpool zone property, detach the zone.
        phys-schost-N# zoneadm -z zonename detach
  7. For a failover configuration, verify that the resource group can switch over.

    For a multiple-master configuration, omit this step.

    1. Switch the resource group to the other node.

      The command is similar to the following, where phys-schost-1 is the node that currently masters the resource group and phys-schost-2 is the node to which you switch the resource group.

      phys-schost-1# clresourcegroup switch -n phys-schost-2 \
      solaris-zone-resource-group

      Note -  Perform the remaining steps in this procedure from the node to which you switch the resource group, phys-schost-2.
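
      Optionally, before you continue, confirm that phys-schost-2 now masters the resource group by rerunning the status command from Step 6:

      phys-schost-2# clresourcegroup status solaris-zone-resource-group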
    2. Attach the zone to the node to which you switched the resource group.
      phys-schost-2# zoneadm -z zonename attach
    3. Verify that the zone is installed on the node.

      Output is similar to the following:

      phys-schost-2# zoneadm list -cv
      ID NAME           STATUS       PATH                         BRAND    IP
      0  global         running      /                            solaris  shared
      -  zonename       installed    /pool/filesystem/zonename    solaris  shared
    4. Boot the zone.
      phys-schost-2# zoneadm -z zonename boot
    5. Open a new terminal window and log in to the zone.

      Perform this step to verify that the zone is functional.

      phys-schost-2# zlogin -C zonename
    6. Halt the zone.
      phys-schost-2# zoneadm -z zonename halt
    7. Detach the zone.

      The zone state changes from installed to configured.

      • If the zone is not set with the rootzpool or zpool zone property, forcibly detach the zone.
        phys-schost-2# zoneadm -z zonename detach -F
      • If the zone is set with the rootzpool or zpool zone property, detach the zone.
        phys-schost-2# zoneadm -z zonename detach