Oracle® Solaris Cluster Data Service for Oracle Solaris Zones Guide

Updated: July 2014, E39657-01
 
 

How to Install a Zone and Perform the Initial Internal Zone Configuration

Perform this task on each node that is to host the zone.

Before You Begin

Consult Configuration Restrictions and then determine the following requirements for the deployment of the zone with Oracle Solaris Cluster:

  • The number of Solaris Zone instances that are to be deployed.

  • For non-global zones, the zpool containing the file system that is to be used by each Solaris Zone instance.

  • For an Oracle Solaris kernel zone, the boot storage that is specified as described in the suri(5) man page. If the storage URI points to a zvol, the corresponding zpool must be managed by a SUNW.HAStoragePlus resource. If the storage URI points to a logical unit or an iSCSI device, the SUNW.HAStoragePlus resource can be used to monitor the corresponding did device.

If the non-global zone that you are installing is to run in a failover configuration, configure the zone's zone path to specify a file system on a zpool. The zpool must be managed by the SUNW.HAStoragePlus resource that you created in How to Enable a Zone to Run in a Failover Configuration. In the case of kernel zones, this is applicable only if the boot device is pointing to a zvol or if the suspend device is pointing to a path.
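
For a non-global zone, for example (the pool name hapool and the zone name myzone used here are illustrative only), a failover zone path might appear as follows when you later verify the zone configuration:

  phys-schost# zonecfg -z myzone info zonepath
  zonepath: /hapool/zones/myzone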


Note -  This procedure assumes you are performing it on a two-node cluster. If you perform this procedure on a cluster with more than two nodes, perform on all nodes any steps that say to perform them on both nodes.
  1. Assume the root role on one node of the cluster.

    Alternatively, if your user account is assigned the System Administrator profile, issue commands as non-root through a profile shell, or prefix the command with the pfexec command.
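
    For example, to run a cluster status command that appears later in this procedure from a non-root shell that has the appropriate rights profile:

    phys-schost$ pfexec clresource status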

  2. If you will use a solaris10 brand zone, set up the system image.

    Follow procedures in Creating the Image for Directly Migrating Oracle Solaris 10 Systems Into Zones in Creating and Using Oracle Solaris 10 Zones.
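
    As an illustrative sketch only (the system name, archive name, and path are hypothetical; follow the referenced guide for the supported procedure), you might create the system image on the source Oracle Solaris 10 system with the flarcreate command and make it available to the cluster nodes:

    s10-system# flarcreate -S -n s10-system /net/archive-server/export/s10-system.flar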

  3. Create a resource for the zone's disk storage.
    • For non-global zones:

      This HAStoragePlus resource is for the zonepath. The file system must be a failover file system.

      phys-schost# clresource create \
      -g solaris-zone-resource-group \
      -t SUNW.HAStoragePlus \
      -p Zpools=solaris-zone-instance-zpool \
      solaris-zone-has-resource-name

      Note -  This step applies to kernel zones only if the boot device is pointing to a zvol or if the suspend device is pointing to a path.
    • (Optional) For kernel zones:
      • Identify the devices to be used as boot storage and suspend storage for the kernel zone.

        phys-schost-1# cldev list -v d2
        DID Device     Full Device Path
        d2             node-1:/dev/rdsk/c0t60080E5000184744000005B4513DF1A8d0
        d2             node-2:/dev/rdsk/c0t60080E5000184744000005B4513DF1A8d0
        
        phys-schost-1# suriadm lookup-uri /dev/did/dsk/d2
        dev:did/dsk/d2
        
        phys-schost-1# cldev list -v d3
        DID Device       Full Device Path
        d3               node-1:/dev/rdsk/c0t60080E5000184744000005B6513DF1B2d0
        d3               node-2:/dev/rdsk/c0t60080E5000184744000005B6513DF1B2d0
        
        phys-schost-1# suriadm lookup-uri /dev/did/dsk/d3
        dev:did/dsk/d3
        
        d2 (suri=dev:did/dsk/d2) will be used as the boot device for the kernel zone rpool.
        
        d3 (suri=dev:did/dsk/d3) will be used as the suspend device.
      • (Optional) If you require device monitoring for the storage devices configured to be used by the kernel zone, configure a SUNW.HAStoragePlus resource and specify the corresponding global device group for the did devices identified in the previous step within the GlobalDevicePaths property.

        1. Register the SUNW.HAStoragePlus resource type, if it is not yet registered on the cluster.

          phys-schost-1# clrt register SUNW.HAStoragePlus
        2. Register the SUNW.HAStoragePlus resource.

          phys-schost-1# clrs create -t SUNW.HAStoragePlus -g zone-rg \
          -p GlobalDevicePaths=dsk/d2,dsk/d3 ha-zones-hasp-rs
        3. Set the resource name for that SUNW.HAStoragePlus resource within the HAS_RS variable to ensure that the required resource dependency is set up for the sczbt component. For example:

          HAS_RS=ha-zones-hasp-rs
  4. Bring the resource group online.
    phys-schost-1# clresourcegroup online -eM resourcegroup
  5. For non-global zones, create a ZFS file-system dataset on the ZFS storage pool that you created.

    You will use this file system as the zone root path for the solaris or solaris10 brand zone that you create later in this procedure.

    phys-schost-1# zfs create pool/filesystem
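
    For example, for an illustrative pool named hapool and a file system named zones:

    phys-schost-1# zfs create hapool/zones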
  6. For a solaris brand zone, ensure that the universally unique ID (UUID) of each node's boot-environment (BE) root dataset is the same value.

    Note -  If you are using the Oracle Solaris 11.2 OS, this step is optional.
    • If you omit this step, at failover of a solaris brand zone, the last zone boot environment that was booted is first cloned and then activated on the node.

    • If you perform this step, at failover of a solaris brand zone, the last zone boot environment that was booted is activated on the node without first creating a clone.



    Note -  If other solaris brand zones that use the current active boot environment have already been created in the cluster and configured with this data service, omit this step when you configure those zones. Instead, proceed to Step 7.
    1. On the node where you initially created the zone, determine the UUID of the BE root dataset.

      Output is similar to the following.

      phys-schost-1# /opt/SUNWsczone/sczbt/util/ha-solaris-zone-boot-env-id get
      8fe53702-16c3-eb21-ed85-d19af92c6bbd

      In this example output, the UUID is 8fe53702-16c3-eb21-ed85-d19af92c6bbd.

    2. Set the same UUID on all nodes where the solaris brand zone is online.
      phys-schost-2# /opt/SUNWsczone/sczbt/util/ha-solaris-zone-boot-env-id \
      set uuid [-b bename]

      -b bename

      Specifies the boot environment in which to define the specified UUID.

      uuid

      The reference UUID that you obtained in Step a.

      For example:

      phys-schost-2# /opt/SUNWsczone/sczbt/util/ha-solaris-zone-boot-env-id \
      set 8fe53702-16c3-eb21-ed85-d19af92c6bbd
      Setting UUID 8fe53702-16c3-eb21-ed85-d19af92c6bbd for the global zone boot environment dataset
      rpool/ROOT/s11u1-osc41-SRU. Previous UUID was 4c827fc-01e8-4a8a-961f-cb6f5f15c139.
      
      Setting UUID 8fe53702-16c3-eb21-ed85-d19af92c6bbd for solaris branded zone pse-app boot environment
      dataset rpool/zones/pse-app/rpool/ROOT/solaris-2.
      
      Setting UUID 8fe53702-16c3-eb21-ed85-d19af92c6bbd for solaris branded zone pse-db boot environment
      dataset rpool/zones/pse-db/rpool/ROOT/solaris-2.
      
      Setting UUID 8fe53702-16c3-eb21-ed85-d19af92c6bbd for solaris branded zone pse-sched boot environment
      dataset rpool/zones/pse-sched/rpool/ROOT/solaris-2.
      
      Setting UUID 8fe53702-16c3-eb21-ed85-d19af92c6bbd for solaris branded zone pse-web boot environment
      dataset rpool/zones/pse-web/rpool/ROOT/solaris-2.

      Note -  If you use a multimaster configuration, you do not need to set the UUID as described in this step.
  7. Configure the solaris, solaris10, or solaris-kz brand zone.

    Set the zone root path to the file system that you created on the ZFS storage pool.


    Note -  If you are using either a solaris or a solaris10 brand zone, perform the following two substeps on both nodes. If you are using a kernel zone, perform the following two substeps only on the first node.
    1. Configure the zone.

      Note -  You must define the osc-ha-zone attribute in the zone configuration, setting type to boolean and value to true.
      • For a solaris brand zone, use the following command.
        phys-schost# zonecfg -z zonename \
        'create ; add attr; set name=osc-ha-zone; set type=boolean; set value=true; end;
        set zonepath=/pool/filesystem/zonename ; set autoboot=false'
      • For a solaris10 brand zone, use the following command.
        phys-schost# zonecfg -z zonename \
        'create ; set brand=solaris10; set zonepath=/pool/filesystem/zonename ;
        add attr; set name=osc-ha-zone; set type=boolean;
        set value=true; end; set autoboot=false'
      • For a solaris-kz brand zone, use the following command.

        In the following command, use the did devices identified in Step 3 of this procedure.

        phys-schost# zonecfg -z zonename \
        'create -b; set brand=solaris-kz; add capped-memory; set physical=2G; end; 
        add device; set storage=dev:did/dsk/d2; set bootpri=1; end; 
        add suspend; set storage=dev:did/dsk/d3; end;
        add anet; set lower-link=auto; end; set autoboot=false; 
        add attr; set name=osc-ha-zone; set type=boolean; set value=true; end;'
    2. Verify the zone configuration.
      phys-schost# zoneadm list -cv
      ID NAME          STATUS        PATH                           BRAND    IP
      0 global        running       /                              solaris  shared
      - zonename      configured   /pool/filesystem/zonename         brand    shared
  8. From the node that masters the HAStoragePlus resource, install the solaris, solaris10, or solaris-kz brand zone.

    Note -  For a multi-master configuration, you do not need an HAStoragePlus resource as described in Step a and you do not need to perform the switchover described in Step 9.
    1. Determine which node masters the HAStoragePlus resource.
      phys-schost# clresource status
      === Cluster Resources ===
      
      Resource Name             Node Name       Status        Message
      --------------            ----------      -------       -------
      hasp-resource            phys-schost-1    Online        Online
                               phys-schost-2    Offline       Offline

      Perform the remaining tasks in this step from the node that masters the HAStoragePlus resource.

    2. Install the zone on the node that masters the HAStoragePlus resource for the ZFS storage pool.
      • For a solaris brand zone, use the following command.
        phys-schost-1# zoneadm -z zonename install
        
      • For a solaris10 brand zone, use the following command.
        phys-schost-1# zoneadm -z zonename install -a flarimage -u
      • For a solaris-kz brand zone, use the following command.
        phys-schost-1# zoneadm -z zonename install
        
    3. Verify that the zone is installed.
      phys-schost-1# zoneadm list -cv
      ID NAME           STATUS       PATH                           BRAND    IP
      0 global         running      /                              solaris  shared
      - zonename       installed    /pool/filesystem/zonename        brand    shared
    4. Boot the zone that you created and verify that the zone is running.
      phys-schost-1# zoneadm -z zonename boot
      phys-schost-1# zoneadm list -cv
      ID NAME           STATUS       PATH                           BRAND    IP
      0 global         running      /                              solaris  shared
      - zonename       running      /pool/filesystem/zonename        brand    shared
    5. Open a new terminal window and log in to the zone console.

      Follow the interactive steps to finish the zone configuration.
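
      For example, you can use the zone console login command that is also shown later in this procedure:

      phys-schost-1# zlogin -C zonename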

    6. Halt the zone.

      This step applies to non-global zones, and to kernel zones that are being configured for cold migration.

      The zone's status should return to installed.

      phys-schost-1# zoneadm -z zonename halt
    7. Suspend the zone.

      This step applies only to kernel zones that are being configured for warm migration.

      phys-schost-1# zoneadm -z zonename suspend
    8. Forcibly detach the zone.
      phys-schost-1# zoneadm -z zonename detach -F

      The zone state changes from installed to configured.
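
      If you want to confirm the state change, you can run the zoneadm list command used earlier in this procedure; output similar to the following is expected:

      phys-schost-1# zoneadm list -cv
      ID NAME           STATUS       PATH                           BRAND    IP
      0 global         running      /                              solaris  shared
      - zonename       configured   /pool/filesystem/zonename        brand    shared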

    9. (For kernel zones) Export the kernel zone configuration on node 1, copy it to a secure location on node 2 and import the zone configuration on node 2.

      This is the only supported method of copying the kernel zone configuration to another node that ensures the copy contains the encryption key for the kernel zone host data that it maintains. For more information about kernel zones, see the solaris-kz(5) man page.

      For example:

      phys-schost-1# zonecfg -z zonename export -f /var/cluster/run/zonename.cfg
      phys-schost-1# scp /var/cluster/run/zonename.cfg root@node-2:/var/cluster/run/
      phys-schost-1# rm /var/cluster/run/zonename.cfg
      
      phys-schost-2# zonecfg -z zonename -f /var/cluster/run/zonename.cfg
      phys-schost-2# rm /var/cluster/run/zonename.cfg
  9. Switch the resource group to the other node and forcibly attach the zone.
    1. Switch over the resource group.

      Input is similar to the following, where phys-schost-1 is the node that currently masters the resource group and phys-schost-2 is the node to which you switch the resource group.

      phys-schost-1# clresourcegroup switch -n phys-schost-2 resourcegroup

      Perform the remaining tasks in this step from the node to which you switch the resource group.
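
      If you want to verify which node now masters the resource group before continuing, you can check its status, for example:

      phys-schost-2# clresourcegroup status resourcegroup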

    2. Attach the zone.
      • For a non-global zone, attach the zone to the node to which you switched the resource group.
        phys-schost-2# zoneadm -z zonename attach
      • For a kernel zone, attach the zone on node 2 using the -x force-takeover option.
        phys-schost-2# zoneadm -z zonename attach -x force-takeover
    3. Verify that the zone is installed on the node.

      Output is similar to the following:

      phys-schost-2# zoneadm list -cv
      ID NAME           STATUS       PATH                           BRAND    IP
      0 global         running      /                              solaris  shared
      - zonename       installed    /pool/filesystem/zonename        brand    shared
    4. Boot the zone.
      phys-schost-2# zoneadm -z zonename boot
    5. Open a new terminal window and log in to the zone.

      Perform this step to verify that the zone is functional.

      phys-schost-2# zlogin -C zonename
    6. Halt the zone.
      phys-schost-2# zoneadm -z zonename halt
    7. Forcibly detach the zone.
      phys-schost-2# zoneadm -z zonename detach -F

      The zone state changes from installed to configured.