Oracle® Solaris Cluster Data Service for Oracle Solaris Zones Guide

Updated: September 2015

How to Install a solaris10 Branded Zone and Perform the Initial Internal Zone Configuration

Perform this task on each node that is to host the solaris10 branded non-global zone. For complete information about installing a solaris10 branded zone, see Creating and Using Oracle Solaris 10 Zones.

Before You Begin

For detailed information about configuring a solaris10 branded zone before installation of the zone, see Chapter 4, Configuring the solaris10 Branded Zone, in Creating and Using Oracle Solaris 10 Zones.


Note -  This procedure assumes you are performing it on a two-node cluster. If you perform this procedure on a cluster with more than two nodes, perform on all nodes any steps that say to perform them on both nodes.
  1. Assume the root role on one node of the cluster.

    Alternatively, if your user account is assigned the System Administrator profile, issue commands as non-root through a profile shell, or prefix the command with the pfexec command.
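
    For example, a user that is assigned the System Administrator profile might prefix an individual privileged command with pfexec; the command shown here is only illustrative:

    phys-schost-1$ pfexec zoneadm list -cv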

  2. Set up the system image.

    Follow procedures in Creating the Image for Directly Migrating Oracle Solaris 10 Systems Into Zones in Creating and Using Oracle Solaris 10 Zones.
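
    For example, on the source Oracle Solaris 10 system, a flash archive of the running system might be created with a command similar to the following; the archive name and location shown here are only illustrative, so substitute values that are appropriate for your environment:

    s10-system# flarcreate -S -n s10-system /net/nfs-server/export/s10-system.flar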

  3. For zones that are not set with the rootzpool zone property, create a ZFS file-system dataset on the ZFS storage pool that you created.

    You will use this file system as the zone root path for the zone that you create later in this procedure.

    phys-schost-1# zfs create pool/filesystem
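
    To confirm that the dataset was created and is mounted, you can optionally list it, for example:

    phys-schost-1# zfs list pool/filesystem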
  4. Configure the zone on both nodes.

    For zones that are not set with the rootzpool zone property, set the zone root path to the file system that you created on the ZFS storage pool.


    Note -  You must define the osc-ha-zone attribute in the zone configuration, setting type to boolean and value to true.
    phys-schost# zonecfg -z zonename \
    'create ; set brand=solaris10; set zonepath=/pool/filesystem/zonename;
    add attr; set name=osc-ha-zone; set type=boolean;
    set value=true; end; set autoboot=false'
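
    To confirm that the osc-ha-zone attribute is defined as required, you can optionally display the attribute settings in the zone configuration:

    phys-schost# zonecfg -z zonename info attr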
  5. Verify the zone configuration.
    phys-schost# zoneadm list -cv
    ID NAME          STATUS       PATH                        BRAND      IP
     0 global        running      /                           solaris    shared
     - zonename      configured   /pool/filesystem/zonename   solaris10  shared
  6. For a failover configuration only, install the zone.

    For a multiple-master configuration, omit this step.

    1. (Only when the rootzpool or zpool zone property is not set) Determine on which node the resource group is online.
      phys-schost-1# clresourcegroup status solaris-zone-resource-group
      === Cluster Resource Groups ===
      
      Group Name                        Node Name          Suspended        Status
      ----------                        ---------          ---------        ------
      solaris-zone-resource-group       phys-schost-1      No               Online
      …

      Perform the rest of this step from the node that masters the resource group, or on all nodes for a multiple-master configuration.

    2. Install the zone on each node where the resource group is online, where flarimage is the system image archive that you created in Step 2. The -a option installs from the specified archive, and the -u option sys-unconfigures the zone during the installation.
      phys-schost-N# zoneadm -z zonename install -a flarimage -u
    3. Verify that the zone is installed.
      phys-schost-N# zoneadm list -cv
      ID NAME          STATUS       PATH                        BRAND      IP
       0 global        running      /                           solaris    shared
       - zonename      installed    /pool/filesystem/zonename   solaris10  shared
    4. Boot the zone that you created and verify that the zone is running.
      phys-schost-N# zoneadm -z zonename boot
      phys-schost-N# zoneadm list -cv
      ID NAME          STATUS       PATH                        BRAND      IP
       0 global        running      /                           solaris    shared
       - zonename      running      /pool/filesystem/zonename   solaris10  shared
    5. Open a new terminal window and log in to the zone console.
      phys-schost-N# zlogin -C zonename

      Follow the interactive steps to finish the zone configuration.
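
      When the internal zone configuration is complete, you can disconnect from the zone console by typing the ~. escape sequence at the start of a line in the console window.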

    6. Halt the zone.
      phys-schost-1# zoneadm -z zonename halt

      The zone's status should return to installed.
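
      You can optionally confirm the state change by listing the zones again:

      phys-schost-1# zoneadm list -cv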

    7. Detach the zone.
      • For a zone that is not set with the rootzpool or zpool zone property, forcibly detach the zone.
        phys-schost-1# zoneadm -z zonename detach -F

        The zone state changes from installed to configured.

      • For a zone that is set with the rootzpool or zpool zone property, detach the zone.
        phys-schost-1# zoneadm -z zonename detach
  7. For a failover configuration, verify that the zone can switch over.

    For a multiple-master configuration, omit this step.

    1. Switch the resource group to the other node.

      Input is similar to the following, where phys-schost-1 is the node that currently masters the resource group and phys-schost-2 is the node to which you switch the resource group.

      phys-schost-1# clresourcegroup switch -n phys-schost-2 \
      solaris-zone-resource-group

      Note -  Perform the remaining steps in this procedure from the node to which you switch the resource group, phys-schost-2.
    2. Attach the zone to the node to which you switched the resource group.
      phys-schost-2# zoneadm -z zonename attach
    3. Verify that the zone is installed on the node.

      Output is similar to the following:

      phys-schost-2# zoneadm list -cv
      ID NAME          STATUS       PATH                        BRAND      IP
       0 global        running      /                           solaris    shared
       - zonename      installed    /pool/filesystem/zonename   solaris10  shared
    4. Boot the zone.
      phys-schost-2# zoneadm -z zonename boot
    5. Open a new terminal window and log in to the zone.

      Perform this step to verify that the zone is functional.

      phys-schost-2# zlogin -C zonename
    6. Halt the zone.
      phys-schost-2# zoneadm -z zonename halt
    7. Detach the zone.
      • For a zone that is not set with the rootzpool or zpool zone property, forcibly detach the zone.
        phys-schost-2# zoneadm -z zonename detach -F

        The zone state changes from installed to configured.

      • For a zone that is set with the rootzpool or zpool zone property, detach the zone.
        phys-schost-2# zoneadm -z zonename detach