Before You Begin
Ensure you have edited the sczbt_config file or a copy of it to specify configuration parameters for the HA for Solaris Zones zone boot component. For more information, see Specifying Configuration Parameters for the Zone Boot Resource.
For a kernel zone, set Migrationtype to warm for warm migration or to live for live migration.
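For example, for warm migration of a kernel zone, the entry in the configuration file would look like the following line. This snippet is shown only as an illustration; substitute live to select live migration instead.

Migrationtype="warm"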
If you are not using device monitoring, use the following settings instead for the FAILOVER and HAS_RS variables:
FAILOVER="false" HAS_RS=
phys-schost# pkg install ha-cluster/data-service/ha-zones
phys-schost# cd /opt/SUNWsczone/sczbt/util
phys-schost# cp -p sczbt_config sczbt_config.zoneboot-resource
phys-schost# vi sczbt_config.zoneboot-resource

Add or modify the following entries in the file.

RS="zoneboot-resource"
RG="resourcegroup"
FAILOVER="true"
HAS_RS="hasp-resource"
Zonename="zonename"
Zonebrand="brand"
Zonebootopt=""
Milestone="multi-user-server"
Mounts=""
Migrationtype=cold

Save and exit the file.
The resource is configured with the parameters that you set in the zone-boot configuration file.
phys-schost# ./sczbt_register -f ./sczbt_config.zoneboot-resource
phys-schost# clresource enable zoneboot-resource
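If you want to confirm that the zone boot resource is enabled and online before testing a switchover, you can check its status. This is a general Oracle Solaris Cluster command rather than a step from this procedure; zoneboot-resource is the resource name used in this example.

phys-schost# clresource status zoneboot-resource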
phys-schost-2# clresourcegroup switch -n phys-schost-1 resourcegroup
Output is similar to the following:
phys-schost-1# clresourcegroup status

=== Cluster Resource Groups ===

Group Name       Node Name        Suspended   Status
----------       ---------        ---------   ------
resourcegroup    phys-schost-1    No          Online
                 phys-schost-2    No          Offline
phys-schost-1# zoneadm list -cv
  ID NAME       STATUS    PATH                        BRAND     IP
   0 global     running   /                           solaris   shared
   1 zonename   running   /pool/filesystem/zonename   brand     excl
This example creates the HAStoragePlus resource hasp-rs, which uses a mirrored ZFS storage pool hapool in the resource group zone-rg. The storage pool is mounted on the /hapool/solaris file system. The hasp-rs resource runs on the solaris branded non-global zone solariszone1, which is configured on both phys-schost-1 and phys-schost-2. The zone-boot resource solariszone1-rs is based on the ORCL.ha-zone_sczbt resource type.
Create a resource group.

phys-schost-1# clresourcegroup create zone-rg

Create a mirrored ZFS storage pool to be used for the HA zone root path.

phys-schost-1# zpool create -m /ha-zones hapool mirror /dev/rdsk/c4t6d0 \
/dev/rdsk/c5t6d0
phys-schost-1# zpool export hapool

Create an HAStoragePlus resource that uses the resource group and mirrored ZFS storage pool that you created.

phys-schost-1# clresourcetype register SUNW.HAStoragePlus
phys-schost-1# clresource create -t SUNW.HAStoragePlus \
-g zone-rg -p Zpools=hapool hasp-rs

Bring the resource group online.

phys-schost-1# clresourcegroup online -eM zone-rg

Create a ZFS file-system dataset on the ZFS storage pool that you created.

phys-schost-1# zfs create hapool/solaris

Configure the solaris branded non-global zone.

phys-schost-1# zonecfg -z solariszone1 'create -b ; \
set zonepath=/hapool/solaris/solariszone1 ; add attr; \
set name=osc-ha-zone; set type=boolean; \
set value=true; end; set autoboot=false'
phys-schost-1# zoneadm list -cv
  ID NAME           STATUS       PATH                           BRAND     IP
   0 global         running      /                              solaris   shared
   - solariszone1   configured   /hapool/solaris/solariszone1   solaris   excl

Repeat on phys-schost-2.

Identify the node that masters the HAStoragePlus resource, and from that node install solariszone1.

phys-schost-1# clresource status

=== Cluster Resources ===

Resource Name    Node Name        State     Status Message
-------------    ---------        -----     --------------
hasp-rs          phys-schost-1    Online    Online
                 phys-schost-2    Offline   Offline

phys-schost-1# zoneadm -z solariszone1 install
phys-schost-1# zoneadm list -cv
  ID NAME           STATUS       PATH                           BRAND     IP
   0 global         running      /                              solaris   shared
   - solariszone1   installed    /hapool/solaris/solariszone1   solaris   excl
phys-schost-1# zoneadm -z solariszone1 boot
phys-schost-1# zoneadm list -cv
  ID NAME           STATUS       PATH                           BRAND     IP
   0 global         running      /                              solaris   shared
   - solariszone1   running      /hapool/solaris/solariszone1   solaris   excl

Open a new terminal window and log in to solariszone1.

phys-schost-1# zoneadm -z solariszone1 halt

Forcibly detach the zone.

phys-schost-1# zoneadm -z solariszone1 detach -F

Switch zone-rg to phys-schost-2 and forcibly attach the zone.

phys-schost-1# clresourcegroup switch -n phys-schost-2 zone-rg
phys-schost-2# zoneadm -z solariszone1 attach
phys-schost-2# zoneadm list -cv
  ID NAME           STATUS       PATH                           BRAND     IP
   0 global         running      /                              solaris   shared
   - solariszone1   installed    /hapool/solaris/solariszone1   solaris   excl
phys-schost-2# zoneadm -z solariszone1 boot

Open a new terminal window and log in to solariszone1.

phys-schost-2# zlogin -C solariszone1
phys-schost-2# zoneadm -z solariszone1 halt

Forcibly detach the zone.

phys-schost-2# zoneadm -z solariszone1 detach -F

On both nodes, install and configure the HA for Zones agent.

phys-schost# pkg install ha-cluster/data-service/ha-zones
phys-schost# cd /opt/SUNWsczone/sczbt/util
phys-schost# cp -p sczbt_config sczbt_config.solariszone1-rs
phys-schost# vi sczbt_config.solariszone1-rs

On both nodes, add or modify entries in the sczbt_config.solariszone1-rs file.

RS="solariszone1-rs"
RG="zone-rg"
FAILOVER="true"
HAS_RS="hasp-rs"
Zonename="solariszone1"
Zonebrand="solaris"
Zonebootopt=""
Milestone="multi-user-server"
Mounts=""
Migrationtype=cold

Save and exit the file.

On both nodes, configure the solariszone1-rs resource and verify that it is enabled.

phys-schost# ./sczbt_register -f ./sczbt_config.solariszone1-rs
phys-schost# clresource enable solariszone1-rs

Verify that zone-rg can switch to another node and that solariszone1 successfully starts there after the switchover.
phys-schost-2# clresourcegroup switch -n phys-schost-1 zone-rg
phys-schost-1# clresourcegroup status

=== Cluster Resource Groups ===

Group Name    Node Name        Suspended   Status
----------    ---------        ---------   ------
zone-rg       phys-schost-1    No          Online
              phys-schost-2    No          Offline

phys-schost-1# zoneadm list -cv
  ID NAME           STATUS    PATH                           BRAND     IP
   0 global         running   /                              solaris   shared
   1 solariszone1   running   /hapool/solaris/solariszone1   solaris   excl

Example 7 Configuring the HA for Solaris Zones for a solaris-kz Branded Zone
This example shows how to configure a solaris-kz branded zone on a two-node cluster to perform warm migration.
Identify the devices to be used as boot storage and suspend storage for the kernel zone.
node-1# cldev list -v d2

DID Device    Full Device Path
----------    ----------------
d2            node-1:/dev/rdsk/c0t60080E5000184744000005B4513DF1A8d0
d2            node-2:/dev/rdsk/c0t60080E5000184744000005B4513DF1A8d0

node-1# suriadm lookup-uri /dev/did/dsk/d2
dev:did/dsk/d2

node-1# cldev list -v d3

DID Device    Full Device Path
----------    ----------------
d3            node-1:/dev/rdsk/c0t60080E5000184744000005B6513DF1B2d0
d3            node-2:/dev/rdsk/c0t60080E5000184744000005B6513DF1B2d0

node-1# suriadm lookup-uri /dev/did/dsk/d3
dev:did/dsk/d3

Device d2 (suri=dev:did/dsk/d2) will be used as the boot device for the kernel zone rpool, and device d3 (suri=dev:did/dsk/d3) will be used as the suspend device.
Configure the kernel zone, sol-kz-fz1, on node 1.
node-1# zonecfg -z sol-kz-fz1 \
'create -b; set brand=solaris-kz; add capped-memory; set physical=2G; end; \
add device; set storage=dev:did/dsk/d2; set bootpri=1; end; \
add suspend; set storage=dev:did/dsk/d3; end; \
add anet; set lower-link=auto; end; set autoboot=false; \
add attr; set name=osc-ha-zone; set type=boolean; set value=true; end;'
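Optionally verify that the zone is now in the configured state before installing it. The listing command below is a general zones administration command rather than output captured for this example; the sol-kz-fz1 entry should appear with the STATUS configured.

node-1# zoneadm list -cv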
Install the kernel zone, sol-kz-fz1, on node 1.
node-1# zoneadm -z sol-kz-fz1 install
Boot the kernel zone, sol-kz-fz1, on node 1.
node-1# zoneadm -z sol-kz-fz1 boot
Log in to the zone console from another shell to perform the initial zone setup.
node-1# zlogin -C sol-kz-fz1
Within the zone console, follow the instructions for the initial zone setup.
Shut down the kernel zone, sol-kz-fz1.
node-1# zoneadm -z sol-kz-fz1 shutdown
Detach the kernel zone, sol-kz-fz1, from node 1.
node-1# zoneadm -z sol-kz-fz1 detach -F
Export the kernel zone configuration on node 1, copy it to a secure location on node 2 and import the zone configuration on node 2.
This is the only supported method for copying the kernel zone configuration to another node that ensures the configuration contains the encryption key for the host data that the kernel zone maintains.
node-1# zonecfg -z sol-kz-fz1 export -f /var/cluster/run/sol-kz-fz1.cfg
node-1# scp /var/cluster/run/sol-kz-fz1.cfg root@node-2:/var/cluster/run/
node-1# rm /var/cluster/run/sol-kz-fz1.cfg
node-2# zonecfg -z sol-kz-fz1 -f /var/cluster/run/sol-kz-fz1.cfg
node-2# rm /var/cluster/run/sol-kz-fz1.cfg
Repeat this step if you determine that it is necessary to create a new host data encryption key by manually using the -x initialize-hostdata option of the zoneadm attach command. Normal operation and setup of kernel zones does not require re-creating the host data.
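In that exceptional case only, the attach command would take a form like the following sketch; it is not part of this example's normal flow and does not replace the -x force-takeover attach shown in the next step.

node-2# zoneadm -z sol-kz-fz1 attach -x initialize-hostdata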
Attach the kernel zone, sol-kz-fz1, on node 2 using the -x force-takeover option.
node-2# zoneadm -z sol-kz-fz1 attach -x force-takeover
Boot the kernel zone, sol-kz-fz1, on node 2.
node-2# zoneadm -z sol-kz-fz1 boot

In another shell:

node-2# zlogin -C sol-kz-fz1
Suspend the kernel zone, sol-kz-fz1, on node 2.
node-2# zoneadm -z sol-kz-fz1 suspend
Detach the kernel zone, sol-kz-fz1, on node 2.
node-2# zoneadm -z sol-kz-fz1 detach -F
Configure the failover resource group.
node-2# clrg create zone-rg
(Optional) If you require device monitoring for the storage devices used by the kernel zone, configure a SUNW.HAStoragePlus resource and, within the GlobalDevicePaths property, specify the corresponding global device groups for the DID devices identified in Step 1.
Register the SUNW.HAStoragePlus resource type, if it is not yet registered on the cluster.
node-2# clrt register SUNW.HAStoragePlus

Create the SUNW.HAStoragePlus resource.

node-2# clrs create -t SUNW.HAStoragePlus -g zone-rg \
-p GlobalDevicePaths=dsk/d2,dsk/d3 ha-zones-hasp-rs
Set the resource name for that SUNW.HAStoragePlus resource within the HAS_RS variable in Step 15 to ensure that the required resource dependency is set up for the sczbt component:
HAS_RS=ha-zones-hasp-rs
Create the configuration file for the sczbt component to manage the kernel zone, sol-kz-fz1.
node-2# vi /opt/SUNWsczone/sczbt/util/sczbt_config.sol-kz-fz1-rs

RS=sol-kz-fz1-rs
RG=zone-rg
HAS_RS=
Zonename="sol-kz-fz1"
Zonebrand="solaris-kz"
Zonebootopt=""
Milestone="svc:/milestone/multi-user-server"
Mounts=""
Migrationtype="warm"
Register the sczbt component resource.
node-2# /opt/SUNWsczone/sczbt/util/sczbt_register -f \ /opt/SUNWsczone/sczbt/util/sczbt_config.sol-kz-fz1-rs
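To confirm that the registration script created the resource in the intended resource group, you can list the resources in zone-rg. This is a generic cluster command rather than a step from the original example.

node-2# clresource list -g zone-rg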
Switch the resource group online and enable the sczbt resource.
node-2# clrg online -Me zone-rg
Within the zone console for the kernel zone, sol-kz-fz1, confirm that the zone resumes correctly.
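If the zone console is not already open, you can reach it with zlogin from the node where the resource group is online (node-2 at this point in the example), as shown earlier.

node-2# zlogin -C sol-kz-fz1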
Perform a switchover of the zone-rg resource group to node-1.
node-2# clrg switch -n node-1 zone-rg
Confirm that the kernel zone suspends on node-2 and resumes on node-1, thus performing a successful warm migration.
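One way to confirm the result is to check the resource group status and the zone state on node-1. These are general cluster and zones commands rather than steps recorded in this example; the zone should be listed as running on node-1 after the switchover completes.

node-1# clrg status zone-rg
node-1# zoneadm list -cv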
Next Steps