Perform this task on each node that is to host the solaris-kz branded zone, or kernel zone. For complete information about installing a zone, see Creating and Using Oracle Solaris Kernel Zones.
Before You Begin
Consult Planning the HA for Solaris Zones Installation and Configuration and then determine the following requirements for the deployment of the zone with Oracle Solaris Cluster:
The number of Solaris Zone instances that are to be deployed.
Ensure that the zone is enabled to run in a failover or multiple-masters configuration. See How to Enable a Zone to Run in a Failover Configuration or How to Enable a Zone to Run in a Multiple-Masters Configuration.
For detailed information about configuring a solaris-kz branded (kernel) zone before installation of the zone, see Chapter 1, Planning and Configuring Oracle Solaris Kernel Zones, in Creating and Using Oracle Solaris Kernel Zones.
Alternatively, if your user account is assigned the System Administrator profile, issue commands as non-root through a profile shell, or prefix the command with the pfexec command.
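For example, instead of assuming the root role, a user who holds the System Administrator rights profile could run a status command from this procedure by prefixing it with pfexec. This is an illustrative sketch only; the listing command itself appears later in this procedure:
phys-schost-1$ pfexec zoneadm list -cv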
Observe the following requirements for the zonecfg command:
Define the osc-ha-zone attribute in the zone configuration, setting type to boolean and value to true.
Use the DID devices identified in Step a of this procedure.
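If you still need to identify which DID devices correspond to the shared storage, a device listing can help. The following is a hedged example; cldevice is the Oracle Solaris Cluster device administration command, and DID names such as the d2 and d3 used in the zonecfg examples below come from this kind of mapping:
phys-schost-1# cldevice list -v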
The first example configures a suspend device, which a kernel zone requires for warm migration; the second example omits the suspend device.
phys-schost-1# zonecfg -z zonename \
'create -b; set brand=solaris-kz; add capped-memory; set physical=2G; end; add device; set storage=dev:did/dsk/d2; set bootpri=1; end; add suspend; set storage=dev:did/dsk/d3; end; add anet; set lower-link=auto; end; set autoboot=false; add attr; set name=osc-ha-zone; set type=boolean; set value=true; end;'
phys-schost-1# zonecfg -z zonename \
'create -b; set brand=solaris-kz; add capped-memory; set physical=2G; end; add device; set storage=dev:did/dsk/d2; set bootpri=1; end; add anet; set lower-link=auto; end; set autoboot=false; add attr; set name=osc-ha-zone; set type=boolean; set value=true; end;'
phys-schost-1# zoneadm list -cv
  ID NAME       STATUS      PATH                        BRAND       IP
   0 global     running     /                           solaris     shared
   - zonename   configured  /pool/filesystem/zonename   solaris-kz  shared
phys-schost-1# clresourcegroup status solaris-zone-resource-group

=== Cluster Resource Groups ===

Group Name                    Node Name       Suspended   Status
----------                    ---------       ---------   ------
solaris-zone-resource-group   phys-schost-1   No          Online
…
Perform the rest of this step from the node that masters the resource group, or on all nodes for a multiple-masters configuration.
phys-schost-N# zoneadm -z zonename install
phys-schost-N# zoneadm list -cv
  ID NAME       STATUS      PATH                        BRAND       IP
   0 global     running     /                           solaris     shared
   - zonename   installed   /pool/filesystem/zonename   solaris-kz  shared
phys-schost-N# zoneadm -z zonename boot
phys-schost-N# zoneadm list -cv
  ID NAME       STATUS      PATH                        BRAND       IP
   0 global     running     /                           solaris     shared
   - zonename   running     /pool/filesystem/zonename   solaris-kz  shared
phys-schost-N# zlogin -C zonename
Follow the interactive steps to finish the zone configuration.
phys-schost-1# zoneadm -z zonename shutdown
phys-schost-1# zoneadm -z zonename detach -F
This is the only supported method of copying the kernel zone configuration to another node that ensures the copied configuration contains the encryption key for the kernel zone host data. For more information about kernel zones, see the solaris-kz(5) man page.
phys-schost-1# zonecfg -z zonename export -f /var/cluster/run/zonename.cfg
phys-schost-1# scp /var/cluster/run/zonename.cfg root@phys-schost-2:/var/cluster/run/
phys-schost-1# rm /var/cluster/run/zonename.cfg
phys-schost-2# zonecfg -z zonename -f /var/cluster/run/zonename.cfg
phys-schost-2# rm /var/cluster/run/zonename.cfg
Input is similar to the following, where phys-schost-1 is the node that currently masters the resource group and phys-schost-2 is the node to which you switch the resource group.
phys-schost-1# clresourcegroup switch -n phys-schost-2 solaris-zone-resource-group
phys-schost-2# zoneadm -z zonename attach -x force-takeover
Output is similar to the following:
phys-schost-2# zoneadm list -cv
  ID NAME       STATUS      PATH                        BRAND       IP
   0 global     running     /                           solaris     shared
   - zonename   installed   /pool/filesystem/zonename   solaris-kz  shared
phys-schost-2# zoneadm -z zonename boot
Perform this step to verify that the zone is functional.
phys-schost-2# zlogin -C zonename
phys-schost-N# svcadm enable svc:/system/rad:local svc:/system/rad:remote \
svc:/network/kz-migr:stream
phys-schost-N# ssh-keygen -N '' -f /root/.ssh/id_rsa -t rsa
Put the public key of the remote node into the authorized_keys file on the local node.
phys-schost-1# scp /root/.ssh/id_rsa.pub \
phys-schost-2:/var/run/phys-schost-1-root-ssh-pubkey.txt
phys-schost-2# scp /root/.ssh/id_rsa.pub \
phys-schost-1:/var/run/phys-schost-2-root-ssh-pubkey.txt
phys-schost-1# cat /var/run/phys-schost-2-root-ssh-pubkey.txt \
>> /root/.ssh/authorized_keys
phys-schost-1# rm /var/run/phys-schost-2-root-ssh-pubkey.txt
phys-schost-2# cat /var/run/phys-schost-1-root-ssh-pubkey.txt \
>> /root/.ssh/authorized_keys
phys-schost-2# rm /var/run/phys-schost-1-root-ssh-pubkey.txt
Accept the public key once for each node so that the connection can continue.
phys-schost-1# ssh root@clusternode2-priv date
…
Are you sure you want to continue connecting (yes/no)? yes

phys-schost-2# ssh root@clusternode1-priv date
…
Are you sure you want to continue connecting (yes/no)? yes
The migration is run over the cluster interconnect.
phys-schost-2# zoneadm -z zonename migrate ssh://clusternode1-priv
The zone should be running on the first node and its status on the second node should be detached.
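To confirm this, you can list the zone state on each node. This is a hedged sketch that reuses the same listing command shown earlier in this procedure:
phys-schost-1# zoneadm list -cv
phys-schost-2# zoneadm list -cv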
phys-schost-2# zoneadm -z zonename shutdown
phys-schost-1# zoneadm -z zonename shutdown
phys-schost-2# zoneadm -z zonename detach -F
phys-schost-1# zoneadm -z zonename detach -F
The zone state changes from installed to configured.