Perform this task on each node that is to host the solaris branded non-global zone. For complete information about installing a solaris branded non-global zone, see Creating and Using Oracle Solaris Zones.
Before You Begin
Consult Planning the HA for Solaris Zones Installation and Configuration and then determine the following requirements for the deployment of the zone with Oracle Solaris Cluster:
The number of Solaris Zone instances that are to be deployed.
The zpool containing the file system that is to be used by each Solaris Zone instance.
Ensure that the zone is enabled to run in a failover or multiple-masters configuration. See How to Enable a Zone to Run in a Failover Configuration or How to Enable a Zone to Run in a Multiple-Masters Configuration.
If the zone will run in a failover configuration and it is not set with the rootzpool zone property, ensure that the zone's zone path specifies a file system on a zpool that is managed by the SUNW.HAStoragePlus resource that you created in How to Enable a Zone to Run in a Failover Configuration.
For detailed information about configuring a solaris branded zone before installation of the zone, see Chapter 1, How to Plan and Configure Non-Global Zones, in Creating and Using Oracle Solaris Zones.
On a cluster node, assume the root role. Alternatively, if your user account is assigned the System Administrator profile, issue commands as a non-root user through a profile shell, or prefix each command with the pfexec command.
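As an optional sanity check before you configure the zone, you can compare the zpool that the SUNW.HAStoragePlus resource manages with the pool that backs the intended zone path. The resource name solaris-zone-hasp-resource below is a placeholder; substitute the name that you used when you enabled the zone to run in a failover configuration.

```
(Placeholder resource, pool, and file-system names; substitute your own.)

phys-schost-1# clresource show -p Zpools solaris-zone-hasp-resource
phys-schost-1# zfs list -o name,mountpoint pool/filesystem
```

The Zpools property of the SUNW.HAStoragePlus resource should name the pool that contains the file system you intend to use as the zone path.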
phys-schost-1# clresourcegroup online -eM solaris-zone-resource-group
You will use this file system as the zone root path for the zone that you create later in this procedure.
phys-schost-1# zfs create pool/filesystem
You must define the osc-ha-zone attribute in the zone configuration, setting its type to boolean and its value to true.
phys-schost# zonecfg -z zonename \
'create ; add attr; set name=osc-ha-zone; set type=boolean; set value=true; end; \
set zonepath=/pool/filesystem/zonename; set autoboot=false'
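Equivalently, you can keep the same configuration in a command file and pass it to zonecfg with the -f option. The file contents below are a sketch that mirrors the inline command above; the file path is arbitrary.

```
create
add attr
set name=osc-ha-zone
set type=boolean
set value=true
end
set zonepath=/pool/filesystem/zonename
set autoboot=false
```

For example, if the file is saved as /var/tmp/zonename.cfg:

phys-schost# zonecfg -z zonename -f /var/tmp/zonename.cfg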
phys-schost# zoneadm list -cv
  ID NAME      STATUS      PATH                        BRAND    IP
   0 global    running     /                           solaris  shared
   - zonename  configured  /pool/filesystem/zonename   solaris  shared
phys-schost-1# clresourcegroup status solaris-zone-resource-group

=== Cluster Resource Groups ===

Group Name                    Node Name       Suspended   Status
----------                    ---------       ---------   ------
solaris-zone-resource-group   phys-schost-1   No          Online
…
Perform the rest of this step from the node that masters the resource group, or on all nodes for a multiple-master configuration.
phys-schost-N# zoneadm -z zonename install
phys-schost-N# zoneadm list -cv
  ID NAME      STATUS      PATH                        BRAND    IP
   0 global    running     /                           solaris  shared
   - zonename  installed   /pool/filesystem/zonename   solaris  shared
phys-schost-N# zoneadm -z zonename boot
phys-schost-N# zoneadm list -cv
  ID NAME      STATUS      PATH                        BRAND    IP
   0 global    running     /                           solaris  shared
   - zonename  running     /pool/filesystem/zonename   solaris  shared
Follow the interactive steps to finish the zone configuration.
phys-schost-N# zlogin -C zonename
The zone's status should return to installed.
phys-schost-N# zoneadm -z zonename halt
The zone state changes from installed to configured.
If the zone is configured with the rootzpool zone property, forcibly detach the zone:

phys-schost-N# zoneadm -z zonename detach -F

Otherwise, detach the zone:

phys-schost-N# zoneadm -z zonename detach
For a multiple-master configuration, omit this step.
Input is similar to the following, where phys-schost-1 is the node that currently masters the resource group and phys-schost-2 is the node to which you switch the resource group.
phys-schost-1# clresourcegroup switch -n phys-schost-2 \
solaris-zone-resource-group
phys-schost-2# zoneadm -z zonename attach
Output is similar to the following:
phys-schost-2# zoneadm list -cv
  ID NAME      STATUS      PATH                        BRAND    IP
   0 global    running     /                           solaris  shared
   - zonename  installed   /pool/filesystem/zonename   solaris  shared
phys-schost-2# zoneadm -z zonename boot
Perform this step to verify that the zone is functional.
phys-schost-2# zlogin -C zonename
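In addition to logging in to the zone console, a quick non-interactive check from the global zone can confirm that the zone's services came up cleanly. These standard Solaris commands are shown here as an optional extra, not as part of the documented procedure; svcs -x prints nothing when all services are healthy.

```
phys-schost-2# zlogin zonename uname -v
phys-schost-2# zlogin zonename svcs -x
```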
phys-schost-2# zoneadm -z zonename halt
The zone state changes from installed to configured.
If the zone is configured with the rootzpool zone property, forcibly detach the zone:

phys-schost-2# zoneadm -z zonename detach -F

Otherwise, detach the zone:

phys-schost-2# zoneadm -z zonename detach