
Oracle® Solaris Cluster Data Service for Oracle Solaris Zones Guide


Updated: September 2015

How to Install a solaris-kz Branded Zone and Perform the Initial Internal Zone Configuration

Perform this task on each node that is to host the solaris-kz branded zone, or kernel zone. For complete information about installing a zone, see Creating and Using Oracle Solaris Kernel Zones.

Before You Begin

Consult Planning the HA for Solaris Zones Installation and Configuration to determine the requirements for the deployment of the zone with Oracle Solaris Cluster.

For detailed information about configuring a solaris-kz branded (kernel) zone before installation of the zone, see Chapter 1, Planning and Configuring Oracle Solaris Kernel Zones, in Creating and Using Oracle Solaris Kernel Zones.


Note -  This procedure assumes that you are performing it on a two-node cluster. If you perform this procedure on a cluster with more than two nodes, perform any steps that are specified for both nodes on all nodes.
  1. Assume the root role on one node of the cluster.

    Alternatively, if your user account is assigned the System Administrator rights profile, issue commands as a non-root user through a profile shell, or prefix the command with the pfexec command.
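
    For example, a user who is assigned that rights profile could run a status command from this procedure in a profile shell instead of as root; the following prompt and command are only an illustration:

    phys-schost-1$ pfexec zoneadm list -cv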

  2. Configure the zone only on the first node.

    Observe the following requirements for the zonecfg command:

    • Define the osc-ha-zone attribute in the zone configuration, setting type to boolean and value to true.

    • Use the DID devices identified in Step a of this procedure. One way to list the DID device mappings is shown in the example after this list.

    • For warm migration, use the following command:
      phys-schost-1# zonecfg -z zonename \
      'create -b; set brand=solaris-kz; add capped-memory; set physical=2G; end; 
      add device; set storage=dev:did/dsk/d2; set bootpri=1; end; 
      add suspend; set storage=dev:did/dsk/d3; end;
      add anet; set lower-link=auto; end; set autoboot=false; 
      add attr; set name=osc-ha-zone; set type=boolean; set value=true; end;'
    • For cold or live migration, use the following command, which omits the add suspend line:
      phys-schost-1# zonecfg -z zonename \
      'create -b; set brand=solaris-kz; add capped-memory; set physical=2G; end; 
      add device; set storage=dev:did/dsk/d2; set bootpri=1; end;
      add anet; set lower-link=auto; end; set autoboot=false; 
      add attr; set name=osc-ha-zone; set type=boolean; set value=true; end;'
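
    If you need to confirm which DID devices correspond to the shared disks intended for the zone, you can list the DID device mappings from any cluster node. The device names d2 and d3 in the zonecfg examples above are only illustrations; substitute the DID devices for your configuration.

    phys-schost-1# cldevice list -v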
  3. Verify the zone configuration.
    phys-schost-1# zoneadm list -cv
    ID NAME          STATUS        PATH                           BRAND      IP
    0 global        running       /                              solaris     shared
    - zonename      configured    /pool/filesystem/zonename      solaris-kz  shared
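
    You can also display the zone configuration itself to confirm the osc-ha-zone attribute and the storage devices that you defined; for example:

    phys-schost-1# zonecfg -z zonename info attr
    phys-schost-1# zonecfg -z zonename info device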
  4. Install the zone.
    1. Determine on which node the resource group is online.
      phys-schost-1# clresourcegroup status solaris-zone-resource-group
      === Cluster Resource Groups ===
      
      Group Name                       Node Name          Suspended        Status
      ----------                       ---------          ---------        ------
      solaris-zone-resource-group       phys-schost-1      No               Online
      …

      Perform the rest of this step from the node that masters the resource group, or on all nodes for a multiple-master configuration.

    2. Install the zone on each node where the resource group is online.
      phys-schost-N# zoneadm -z zonename install
    3. Verify that the zone is installed.
      phys-schost-N# zoneadm list -cv
      ID NAME           STATUS       PATH                           BRAND       IP
      0 global          running      /                              solaris      shared
      - zonename        installed    /pool/filesystem/zonename      solaris-kz   shared
    4. Boot the zone that you created and verify that the zone is running.
      phys-schost-N# zoneadm -z zonename boot
      phys-schost-N# zoneadm list -cv
      ID NAME           STATUS       PATH                           BRAND       IP
      0 global          running      /                              solaris      shared
      - zonename        running      /pool/filesystem/zonename      solaris-kz   shared
    5. Open a new terminal window and log in to the zone console.
      phys-schost-N# zlogin -C zonename

      Follow the interactive steps to finish the zone configuration.
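
      When the interactive configuration is complete, you can disconnect from the zone console with the ~. escape sequence to return to the global zone shell.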

    6. Shut down the zone.
      phys-schost-1# zoneadm -z zonename shutdown
    7. Forcibly detach the zone.
      phys-schost-1# zoneadm -z zonename detach -F
    8. Export the zone configuration on the first node, copy it to a secure location on the second node, and import the zone configuration on the second node.

      This is the only supported method to copy the kernel zone configuration to another node while ensuring that it contains the encryption key for the kernel zone host data that it maintains. For more information about the kernel zone, see the solaris-kz(5) man page.

      phys-schost-1# zonecfg -z zonename export -f /var/cluster/run/zonename.cfg
      phys-schost-1# scp /var/cluster/run/zonename.cfg root@phys-schost-2:/var/cluster/run/
      phys-schost-1# rm /var/cluster/run/zonename.cfg
      
      phys-schost-2# zonecfg -z zonename -f /var/cluster/run/zonename.cfg
      phys-schost-2# rm /var/cluster/run/zonename.cfg
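
      To confirm that the configuration was imported, you can verify that the zone appears in the configured state on the second node; for example:

      phys-schost-2# zoneadm list -cv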
  5. Switch the resource group to the other node.

    The command is similar to the following, where phys-schost-1 is the node that currently masters the resource group and phys-schost-2 is the node to which you switch the resource group.

    phys-schost-1# clresourcegroup switch -n phys-schost-2 solaris-zone-resource-group
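
    To confirm that the resource group is now online on phys-schost-2, you can check its status again; for example:

    phys-schost-1# clresourcegroup status solaris-zone-resource-group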

    Note -  Perform the remaining steps in this procedure from the node to which you switch the resource group, phys-schost-2.
  6. Forcibly attach the zone to the second node.
    phys-schost-2# zoneadm -z zonename attach -x force-takeover
  7. Verify that the zone is installed on the node.

    Output is similar to the following:

    phys-schost-2# zoneadm list -cv
    ID NAME           STATUS       PATH                           BRAND       IP
    0 global          running      /                              solaris     shared
    - zonename        installed    /pool/filesystem/zonename      solaris-kz  shared
  8. Boot the zone.
    phys-schost-2# zoneadm -z zonename boot
  9. Open a new terminal window and log in to the zone.

    Perform this step to verify that the zone is functional.

    phys-schost-2# zlogin -C zonename
  10. (Live migration only) On both nodes, enable the RAD services and the kernel zone migration service.
    phys-schost-N# svcadm enable svc:/system/rad:local svc:/system/rad:remote \
    svc:/network/kz-migr:stream
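
    You can verify that the services are online on each node; for example:

    phys-schost-N# svcs svc:/system/rad:local svc:/system/rad:remote \
    svc:/network/kz-migr:stream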
  11. (Live migration only) Enable passwordless ssh for the root user between the cluster nodes.
    1. On both nodes, create the public and private ssh key for user root with an empty passphrase.
      phys-schost-N# ssh-keygen -N '' -f /root/.ssh/id_rsa -t rsa
    2. On each node, copy the public ssh key for user root to the other node.

      Put the public key of the remote node into the authorized_keys file on the local node.

      phys-schost-1# scp /root/.ssh/id_rsa.pub \
      phys-schost-2:/var/run/phys-schost-1-root-ssh-pubkey.txt
      
      phys-schost-2# scp /root/.ssh/id_rsa.pub \
      phys-schost-1:/var/run/phys-schost-2-root-ssh-pubkey.txt
      
      phys-schost-1# cat /var/run/phys-schost-2-root-ssh-pubkey.txt \
      >> /root/.ssh/authorized_keys
      phys-schost-1# rm /var/run/phys-schost-2-root-ssh-pubkey.txt
      
      phys-schost-2# cat /var/run/phys-schost-1-root-ssh-pubkey.txt \
      >> /root/.ssh/authorized_keys
      phys-schost-2# rm /var/run/phys-schost-1-root-ssh-pubkey.txt
    3. Verify that the passwordless ssh login works between each node.

      When prompted, accept the public key to continue the connection. Do this once for each node.

      phys-schost-1# ssh root@clusternode2-priv date
      …
      Are you sure you want to continue connecting (yes/no)? yes
      
      phys-schost-2# ssh root@clusternode1-priv date
      …
      Are you sure you want to continue connecting (yes/no)? yes
  12. (Live migration only) Perform a live migration from the second node to the first node.

    The migration is run over the cluster interconnect.

    phys-schost-2# zoneadm -z zonename migrate ssh://clusternode1-priv

    The zone should be running on the first node and its status on the second node should be detached.
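
    To confirm the result, you can list the zones on each node; the zone should be running on the first node and should no longer be running on the second node. For example:

    phys-schost-1# zoneadm list -cv
    phys-schost-2# zoneadm list -cv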

  13. Shut down the zone.
    • For cold or warm migration, shut down the zone on phys-schost-2.
      phys-schost-2# zoneadm -z zonename shutdown
    • For live migration, shut down the zone on phys-schost-1.
      phys-schost-1# zoneadm -z zonename shutdown
  14. Forcibly detach the zone.
    • For cold or warm migration, detach the zone from phys-schost-2.
      phys-schost-2# zoneadm -z zonename detach -F
    • For live migration, detach the zone from phys-schost-1.
      phys-schost-1# zoneadm -z zonename detach -F

    The zone state changes from installed to configured.
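
    You can confirm the state change on the node from which you detached the zone; for example:

    phys-schost-N# zoneadm list -cv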