Perform this task on each node that is to host the zone.
Before You Begin
Consult Configuration Restrictions and then determine the following requirements for the deployment of the zone with Oracle Solaris Cluster:
The number of Solaris Zone instances that are to be deployed.
For non-global zones, the zpool containing the file system that is to be used by each Solaris Zone instance.
For an Oracle Solaris Kernel Zone, the boot storage is specified as described in the suri(5) man page. If the storage URI points to a zvol, the corresponding zpool must be managed by a SUNW.HAStoragePlus resource. If the storage URI points to a logical unit or an iSCSI device, the SUNW.HAStoragePlus resource can instead be used to monitor the corresponding DID device. A minimal sketch of the zvol case follows this list.
If the non-global zone that you are installing is to run in a failover configuration, configure the zone path to specify a file system on a zpool. The zpool must be managed by the SUNW.HAStoragePlus resource that you created in How to Enable a Zone to Run in a Failover Configuration. For kernel zones, this requirement applies only if the boot device points to a zvol or if the suspend device points to a path.
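The following sketch illustrates the zvol case. The zpool name kzpool, the zvol kzpool/kzboot, and the resource and resource group names are hypothetical, and the suriadm output is shown only to illustrate the dev: URI form; adjust all names to your configuration.
phys-schost# suriadm lookup-uri /dev/zvol/dsk/kzpool/kzboot
dev:zvol/dsk/kzpool/kzboot
phys-schost# clresource create -g zone-rg -t SUNW.HAStoragePlus \
-p Zpools=kzpool kzpool-hasp-rs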
For detailed information about configuring a zone before installation of the zone, see the following documentation:
Chapter 1, How to Plan and Configure Non-Global Zones, in Creating and Using Oracle Solaris Zones
Chapter 4, Configuring the solaris10 Branded Zone, in Creating and Using Oracle Solaris 10 Zones
Alternatively, if your user account is assigned the System Administrator profile, issue commands as non-root through a profile shell, or prefix the command with the pfexec command.
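For example, a user that is assigned the System Administrator profile might prefix a cluster command with pfexec as follows; the specific command shown is only illustrative.
phys-schost$ pfexec clresourcegroup status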
Follow procedures in Creating the Image for Directly Migrating Oracle Solaris 10 Systems Into Zones in Creating and Using Oracle Solaris 10 Zones.
This HAStoragePlus resource is for the zonepath. The file system must be a failover file system.
phys-schost# clresource create \
-g solaris-zone-resource-group \
-t SUNW.HAStoragePlus \
-p Zpools=solaris-zone-instance-zpool \
solaris-zone-has-resource-name
Identify the devices to be used as boot storage and suspend storage for the kernel zone.
phys-schost-1# cldev list -v d2
DID Device    Full Device Path
d2            node-1:/dev/rdsk/c0t60080E5000184744000005B4513DF1A8d0
d2            node-2:/dev/rdsk/c0t60080E5000184744000005B4513DF1A8d0
phys-schost-1# suriadm lookup-uri /dev/did/dsk/d2
dev:did/dsk/d2
phys-schost-1# cldev list -v d3
DID Device    Full Device Path
d3            node-1:/dev/rdsk/c0t60080E5000184744000005B6513DF1B2d0
d3            node-2:/dev/rdsk/c0t60080E5000184744000005B6513DF1B2d0
phys-schost-1# suriadm lookup-uri /dev/did/dsk/d3
dev:did/dsk/d3
In this example, d2 (suri=dev:did/dsk/d2) is used as the boot device for the kernel zone rpool, and d3 (suri=dev:did/dsk/d3) is used as the suspend device.
(Optional) If you require device monitoring for the storage devices that the kernel zone is configured to use, configure a SUNW.HAStoragePlus resource and specify, in its GlobalDevicePaths property, the global device groups that correspond to the DID devices identified in the previous step.
Register the SUNW.HAStoragePlus resource type, if it is not yet registered on the cluster.
phys-schost-1# clrt register SUNW.HAStoragePlus
Create the SUNW.HAStoragePlus resource.
phys-schost-1# clrs create -t SUNW.HAStoragePlus -g zone-rg \
-p GlobalDevicePaths=dsk/d2,dsk/d3 ha-zones-hasp-rs
Set the resource name of that SUNW.HAStoragePlus resource in the HAS_RS variable to ensure that the required resource dependency is set up for the sczbt component. For example:
HAS_RS=ha-zones-hasp-rs
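If you register the sczbt component through its parameter file (sczbt_config, typically located under /opt/SUNWsczone/sczbt/util/), the HAS_RS entry appears alongside the other component variables. The following excerpt is only a sketch; the RS and RG values are placeholders.
RS=solaris-zone-rs
RG=zone-rg
HAS_RS=ha-zones-hasp-rs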
phys-schost-1# clresourcegroup online -eM resourcegroup
You will use this file system as the zone root path for the solaris or solaris10 brand zone that you create later in this procedure.
phys-schost-1# zfs create pool/filesystem
If you omit this step, at failover of a solaris brand zone, the last zone boot environment that was booted is first cloned and then activated on the node.
If you perform this step, at failover of a solaris brand zone, the last zone boot environment that was booted is activated on the node without first creating a clone.
Output is similar to the following.
phys-schost-1# /opt/SUNWsczone/sczbt/util/ha-solaris-zone-boot-env-id get
8fe53702-16c3-eb21-ed85-d19af92c6bbd
In this example output, the UUID is 8fe53702-16c3-eb21-ed85-d19af92c6bbd.
phys-schost-2# /opt/SUNWsczone/sczbt/util/ha-solaris-zone-boot-env-id \
set uuid
Specifies the boot environment in which to define the specified UUID.
The reference UUID that you obtained in Step a.
For example:
phys-schost-2# /opt/SUNWsczone/sczbt/util/ha-solaris-zone-boot-env-id \
set 8fe53702-16c3-eb21-ed85-d19af92c6bbd
Setting UUID 8fe53702-16c3-eb21-ed85-d19af92c6bbd for the global zone boot environment dataset rpool/ROOT/s11u1-osc41-SRU.
Previous UUID was 4c827fc-01e8-4a8a-961f-cb6f5f15c139.
Setting UUID 8fe53702-16c3-eb21-ed85-d19af92c6bbd for solaris branded zone pse-app boot environment dataset rpool/zones/pse-app/rpool/ROOT/solaris-2.
Setting UUID 8fe53702-16c3-eb21-ed85-d19af92c6bbd for solaris branded zone pse-db boot environment dataset rpool/zones/pse-db/rpool/ROOT/solaris-2.
Setting UUID 8fe53702-16c3-eb21-ed85-d19af92c6bbd for solaris branded zone pse-sched boot environment dataset rpool/zones/pse-sched/rpool/ROOT/solaris-2.
Setting UUID 8fe53702-16c3-eb21-ed85-d19af92c6bbd for solaris branded zone pse-web boot environment dataset rpool/zones/pse-web/rpool/ROOT/solaris-2.
Set the zone root path to the file system that you created on the ZFS storage pool.
phys-schost# zonecfg -z zonename \
'create; add attr; set name=osc-ha-zone; set type=boolean; set value=true; end; set zonepath=/pool/filesystem/zonename; set autoboot=false'
phys-schost# zonecfg -z zonename \
'create; set brand=solaris10; set zonepath=/pool/filesystem/zonename; add attr; set name=osc-ha-zone; set type=boolean; set value=true; end; set autoboot=false'
In the following command, use the DID devices identified in Step 3 of this procedure.
phys-schost# zonecfg -z zonename \
'create -b; set brand=solaris-kz; add capped-memory; set physical=2G; end; add device; set storage=dev:did/dsk/d2; set bootpri=1; end; add suspend; set storage=dev:did/dsk/d3; end; add anet; set lower-link=auto; end; set autoboot=false; add attr; set name=osc-ha-zone; set type=boolean; set value=true; end;'
phys-schost# zoneadm list -cv
  ID NAME        STATUS       PATH                         BRAND     IP
   0 global      running      /                            solaris   shared
   - zonename    configured   /pool/filesystem/zonename    brand     shared
phys-schost# clresource status

=== Cluster Resources ===

Resource Name      Node Name        Status     Message
-------------      ---------        ------     -------
hasp-resource      phys-schost-1    Online     Online
                   phys-schost-2    Offline    Offline
Perform the remaining tasks in this step from the node that masters the HAStoragePlus resource.
phys-schost-1# zoneadm -z zonename install
phys-schost-1# zoneadm -z zonename install -a flarimage -u
phys-schost-1# zoneadm -z zonename install
phys-schost-1# zoneadm list -cv
  ID NAME        STATUS       PATH                         BRAND     IP
   0 global      running      /                            solaris   shared
   - zonename    installed    /pool/filesystem/zonename    brand     shared
phys-schost-1# zoneadm -z zonename boot
phys-schost-1# zoneadm list -cv
  ID NAME        STATUS       PATH                         BRAND     IP
   0 global      running      /                            solaris   shared
   - zonename    running      /pool/filesystem/zonename    brand     shared
Follow the interactive steps to finish the zone configuration.
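For example, one way to reach the interactive configuration is through the zone console, using the same zlogin invocation that is shown later in this procedure for verification:
phys-schost-1# zlogin -C zonename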
This step applies to non-global zones, and also to kernel zones that are being configured for cold migration.
The zone's status should return to installed.
phys-schost-1# zoneadm -z zonename halt
This step applies only to kernel zones that are being configured for warm migration.
phys-schost-1# zoneadm -z zonename suspend
phys-schost-1# zoneadm -z zonename detach -F
The zone state changes from installed to configured.
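To confirm the state change, you can list the zones again. The following output is a sketch that follows the same format as the earlier listings:
phys-schost-1# zoneadm list -cv
  ID NAME        STATUS       PATH                         BRAND     IP
   0 global      running      /                            solaris   shared
   - zonename    configured   /pool/filesystem/zonename    brand     shared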
This is the only supported method of copying the kernel zone configuration to another node while ensuring that it retains the encryption key for the kernel zone host data that it maintains. For more information about kernel zones, see the solaris-kz(5) man page.
For example:
phys-schost-1# zonecfg -z zonename export -f /var/cluster/run/zonename.cfg
phys-schost-1# scp /var/cluster/run/zonename.cfg root@node-2:/var/cluster/run/
phys-schost-1# rm /var/cluster/run/zonename.cfg
phys-schost-2# zonecfg -z zonename -f /var/cluster/run/zonename.cfg
phys-schost-2# rm /var/cluster/run/zonename.cfg
The command is similar to the following, where phys-schost-1 is the node that currently masters the resource group and phys-schost-2 is the node to which you switch the resource group.
phys-schost-1# clresourcegroup switch -n phys-schost-2 resourcegroup
Perform the remaining tasks in this step from the node to which you switch the resource group.
phys-schost-2# zoneadm -z zonename attach
phys-schost-2# zoneadm -z zonename attach -x force-takeover
Output is similar to the following:
phys-schost-2# zoneadm list -cv
  ID NAME        STATUS       PATH                         BRAND     IP
   0 global      running      /                            solaris   shared
   - zonename    installed    /pool/filesystem/zonename    brand     shared
phys-schost-2# zoneadm -z zonename boot
Perform this step to verify that the zone is functional.
phys-schost-2# zlogin -C zonename
phys-schost-2# zoneadm -z zonename halt
phys-schost-2# zoneadm -z zonename detach -F
The zone state changes from installed to configured.