Perform this task on each node that is to host the solaris10 branded non-global zone. For complete information about installing a solaris10 branded zone, see Creating and Using Oracle Solaris 10 Zones.
Before You Begin
Consult Planning the HA for Solaris Zones Installation and Configuration and then determine the following requirements for the deployment of the zone with Oracle Solaris Cluster:
The number of Oracle Solaris Zone instances that are to be deployed.
Ensure that the zone is enabled to run in a failover or multiple-masters configuration. See How to Enable a Zone to Run in a Failover Configuration or How to Enable a Zone to Run in a Multiple-Masters Configuration.
If the zone will run in a failover configuration and it is not set with the rootzpool zone property, ensure that the zone's zone path specifies a file system on a zpool that is managed by the SUNW.HAStoragePlus resource that you created in How to Enable a Zone to Run in a Failover Configuration.
The ha-zones wizard requires that the zone be online on only one node of the cluster and that it not exist on any other node. The wizard automates the creation of the zone on the other cluster nodes and will not work if the zone already exists on more than one node. If you will use the wizard to put the zone under cluster management, run only the steps required to bring the zone online on one node of the cluster. The wizard runs all other required steps.
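Before you run the wizard, you can confirm what state the zone is in on each node by checking the STATUS column of zoneadm list -cv output. The following is a minimal sketch of such a check; the zone name myzone, the helper function, and the embedded sample output are illustrative and not part of the product:

```shell
#!/bin/sh
# Hypothetical helper: print a zone's STATUS from `zoneadm list -cv` output.
# On a real cluster node you would run:  zoneadm list -cv | zone_status myzone
# Here a sample output block is used instead so the sketch is self-contained.
zone_status() {
  # $1 = zone name; reads zoneadm output on stdin, prints the STATUS column
  awk -v z="$1" '$2 == z { print $3 }'
}

sample_output='  ID NAME       STATUS      PATH                        BRAND      IP
   0 global     running     /                           solaris    shared
   - myzone     configured  /pool/fs/myzone             solaris10  shared'

printf '%s\n' "$sample_output" | zone_status myzone
```

Run against each node's actual zoneadm output, the zone should report a status on exactly one node before you start the wizard.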
For detailed information about configuring a solaris10 branded zone before installation of the zone, see Chapter 4, Configuring the solaris10 Branded Zone in Creating and Using Oracle Solaris 10 Zones.
Alternatively, if your user account is assigned the System Administrator profile, issue commands as a non-root user through a profile shell, or prefix each command with the pfexec command.
Follow procedures in Creating the Image for Directly Migrating Oracle Solaris 10 Systems Into Zones in Creating and Using Oracle Solaris 10 Zones.
You will use this file system as the zone root path for the zone that you create later in this procedure.
phys-schost-1# zfs create pool/filesystem
For zones that are not set with the rootzpool zone property, set the zone root path to the file system that you created on the ZFS storage pool.
phys-schost# zonecfg -z zonename \
'create; set brand=solaris10; set zonepath=/pool/filesystem/zonename; \
add attr; set name=osc-ha-zone; set type=boolean; set value=true; end; \
set autoboot=false'
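Equivalently, the same settings can be placed in a zonecfg command file and applied with the zonecfg -f option, which is easier to review and reuse across nodes. This sketch assumes a file named /var/tmp/zonename.cfg; the file name is illustrative:

```
# /var/tmp/zonename.cfg -- same settings as the inline zonecfg command above
create
set brand=solaris10
set zonepath=/pool/filesystem/zonename
add attr
set name=osc-ha-zone
set type=boolean
set value=true
end
set autoboot=false
```

phys-schost# zonecfg -z zonename -f /var/tmp/zonename.cfg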
phys-schost# zoneadm list -cv
  ID NAME       STATUS      PATH                        BRAND      IP
   0 global     running     /                           solaris    shared
   - zonename   configured  /pool/filesystem/zonename   solaris10  shared
For a multiple-masters configuration, omit this step.
phys-schost-1# clresourcegroup status solaris-zone-resource-group

=== Cluster Resource Groups ===

Group Name                   Node Name      Suspended   Status
----------                   ---------      ---------   ------
solaris-zone-resource-group  phys-schost-1  No          Online
…
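When scripting around this step, the node that currently masters the resource group can be read from the Status column of the output above. The following is a minimal sketch; the helper function and the embedded sample output are illustrative, and on a cluster node you would pipe the real clresourcegroup status output in instead:

```shell
#!/bin/sh
# Hypothetical helper: print the node whose Status is Online for a given
# resource group, from `clresourcegroup status` output read on stdin.
rg_master() {
  # $1 = resource group name; prints the Node Name column for the Online row
  awk -v g="$1" '$1 == g && $4 == "Online" { print $2 }'
}

sample='=== Cluster Resource Groups ===

Group Name                   Node Name      Suspended   Status
----------                   ---------      ---------   ------
solaris-zone-resource-group  phys-schost-1  No          Online'

printf '%s\n' "$sample" | rg_master solaris-zone-resource-group
```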
Perform the rest of this step from the node that masters the resource group or, for a multiple-masters configuration, on all nodes.
phys-schost-N# zoneadm -z zonename install -a flarimage -u
phys-schost-N# zoneadm list -cv
  ID NAME       STATUS      PATH                        BRAND      IP
   0 global     running     /                           solaris    shared
   - zonename   installed   /pool/filesystem/zonename   solaris10  shared
phys-schost-N# zoneadm -z zonename boot
phys-schost-N# zoneadm list -cv
  ID NAME       STATUS      PATH                        BRAND      IP
   0 global     running     /                           solaris    shared
   - zonename   running     /pool/filesystem/zonename   solaris10  shared
phys-schost-N# zlogin -C zonename
Follow the interactive steps to finish the zone configuration.
phys-schost-1# zoneadm -z zonename halt
The zone's status should return to installed.
If the zone is set with the rootzpool zone property, forcibly detach the zone:

phys-schost-1# zoneadm -z zonename detach -F

Otherwise, detach the zone without the -F option:

phys-schost-1# zoneadm -z zonename detach

The zone state changes from installed to configured.
For a multiple-masters configuration, omit this step.
The command is similar to the following, where phys-schost-1 is the node that currently masters the resource group and phys-schost-2 is the node to which you switch the resource group.
phys-schost-1# clresourcegroup switch -n phys-schost-2 \
solaris-zone-resource-group
phys-schost-2# zoneadm -z zonename attach
Output is similar to the following:
phys-schost-2# zoneadm list -cv
  ID NAME       STATUS      PATH                        BRAND      IP
   0 global     running     /                           solaris    shared
   - zonename   installed   /pool/filesystem/zonename   solaris10  shared
phys-schost-2# zoneadm -z zonename boot
Perform this step to verify that the zone is functional.
phys-schost-2# zlogin -C zonename
phys-schost-2# zoneadm -z zonename halt
If the zone is set with the rootzpool zone property, forcibly detach the zone:

phys-schost-2# zoneadm -z zonename detach -F

Otherwise, detach the zone without the -F option:

phys-schost-2# zoneadm -z zonename detach

The zone state changes from installed to configured.