Oracle® Solaris Cluster Data Service for Oracle Solaris Zones Guide

Updated: July 2014, E39657-01
 
 

How to Create and Enable Resources for the Zone Boot Component

Before You Begin

Ensure that you have edited the sczbt_config file, or a copy of it, to specify the configuration parameters for the HA for Solaris Zones zone boot component. For more information, see Specifying Configuration Parameters for the Zone Boot Resource.

  1. Assume the root role on one of the nodes in the cluster that will host the zone.
  2. On both nodes, configure the zone-boot (sczbt) resource.
    1. Install and configure the HA for Zones agent.
      phys-schost# pkg install ha-cluster/data-service/ha-zones
      phys-schost# cd /opt/SUNWsczone/sczbt/util
      phys-schost# cp -p sczbt_config sczbt_config.zoneboot-resource
      phys-schost# vi sczbt_config.zoneboot-resource
      
      Add or modify the following entries in the file.
      RS="zoneboot-resource"
      RG="resourcegroup"
      PARAMETERDIR=
      SC_NETWORK="false"
      SC_LH=""
      FAILOVER="true"
      HAS_RS="hasp-resource"
      Zonename="zonename"
      Zonebrand="brand"
      Zonebootopt=""
      Milestone="multi-user-server"
      LXrunlevel="3"
      SLrunlevel="3"
      Mounts=""
      Migrationtype="cold"
      Save and exit the file.
    2. Configure the zone-boot resource.

      The resource is configured with the parameters that you set in the zone-boot configuration file.

      phys-schost# ./sczbt_register -f ./sczbt_config.zoneboot-resource
    3. Enable the zone-boot resource.
      phys-schost# clresource enable zoneboot-resource
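
      To confirm that the resource is enabled and online, you can check its status, for example:
      phys-schost# clresource status zoneboot-resource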
  3. Verify that the resource group can switch to another node and that the zone successfully starts there after the switchover.
    1. Switch the resource group to another node.
      phys-schost-2# clresourcegroup switch -n phys-schost-1 resourcegroup
    2. Verify that the resource group is now online on the new node.

      Output is similar to the following:

      phys-schost-1# clresourcegroup status
      === Cluster Resource Groups ===
      
      Group Name                   Node Name          Suspended        Status
      ----------                   ---------          ---------        ------
      resourcegroup                 phys-schost-1      No               Online
                                    phys-schost-2      No               Offline
    3. Verify that the zone is running on the new node.
      phys-schost-1# zoneadm list -cv
      ID  NAME     STATUS       PATH                           BRAND      IP
      0  global   running      /                              solaris    shared
      1  zonename running      /pool/filesystem/zonename        brand      shared
Example 1-6  Configuring the HA for Zones Zone Boot Component for solaris Brand Zones

This example creates the HAStoragePlus resource hasp-rs, which uses a mirrored ZFS storage pool hapool in the resource group zone-rg. The storage pool is mounted on the /hapool/solaris file system. The hasp-rs resource runs on the solaris brand non-global zone solariszone1, which is configured on both phys-schost-1 and phys-schost-2. The zone-boot resource solariszone1-rs is based on the ORCL.ha-zone_sczbt resource type. This example assumes that you are running the Oracle Solaris 11.2 version.

Create a resource group.
phys-schost-1# clresourcegroup create zone-rg

Create a mirrored ZFS storage pool to be used for the HA zone root path.
phys-schost-1# zpool create -m /ha-zones hapool mirror /dev/rdsk/c4t6d0 \
/dev/rdsk/c5t6d0
phys-schost-1# zpool export hapool

Create an HAStoragePlus resource that uses the resource group and mirrored ZFS storage pool that you created.
phys-schost-1# clresourcetype register SUNW.HAStoragePlus
phys-schost-1# clresource create -t SUNW.HAStoragePlus \
-g zone-rg -p Zpools=hapool hasp-rs

Bring the resource group online.
phys-schost-1# clresourcegroup online -eM zone-rg

Create a ZFS file-system dataset on the ZFS storage pool that you created.
phys-schost-1# zfs create hapool/solaris

Ensure that the universally unique ID (UUID) of each node's boot-environment (BE) root dataset is the same value on both nodes.
phys-schost-1# /opt/SUNWsczone/sczbt/util/ha-solaris-zone-boot-env-id get
8fe53702-16c3-eb21-ed85-d19af92c6bbd

phys-schost-2# /opt/SUNWsczone/sczbt/util/ha-solaris-zone-boot-env-id set \
8fe53702-16c3-eb21-ed85-d19af92c6bbd
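
To confirm that both nodes now report the same UUID, you can rerun the get operation on the second node, for example:
phys-schost-2# /opt/SUNWsczone/sczbt/util/ha-solaris-zone-boot-env-id get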

Configure the solaris brand non-global zone.
phys-schost-1# zonecfg -z solariszone1 'create -b ; \
set zonepath=/hapool/solaris/solariszone1 ; add attr; set name=osc-ha-zone; set type=boolean; \
set value=true; end; set autoboot=false; set ip-type=shared'
phys-schost-1# zoneadm list -cv
ID NAME             STATUS       PATH                           BRAND    IP
0 global           running      /                              solaris  shared
- solariszone1     configured   /hapool/solaris/solariszone1   solaris  shared
Repeat on phys-schost-2.

Identify the node that masters the HAStoragePlus resource, and from that node install solariszone1.
phys-schost-1# clresource status
=== Cluster Resources ===

Resource Name             Node Name       Status        Message
--------------            ----------      -------        -------
hasp-rs                  phys-schost-1   Online        Online
                         phys-schost-2   Offline       Offline

phys-schost-1# zoneadm -z solariszone1 install
phys-schost-1# zoneadm list -cv
ID NAME             STATUS     PATH                           BRAND    IP
0 global           running    /                              solaris   shared
- solariszone1     installed  /hapool/solaris/solariszone1   solaris  shared
phys-schost-1# zoneadm -z solariszone1 boot
phys-schost-1# zoneadm list -cv
ID NAME             STATUS     PATH                           BRAND    IP
0 global           running    /                              solaris  shared
- solariszone1     running    /hapool/solaris/solariszone1   solaris  shared

Open a new terminal window and log in to solariszone1.
phys-schost-1# zlogin -C solariszone1
phys-schost-1# zoneadm -z solariszone1 halt

Forcibly detach the zone.
phys-schost-1# zoneadm -z solariszone1 detach -F

Switch zone-rg to phys-schost-2 and forcibly attach the zone.
phys-schost-1# clresourcegroup switch -n phys-schost-2 zone-rg
phys-schost-2# zoneadm -z solariszone1 attach -F
phys-schost-2# zoneadm list -cv
ID NAME             STATUS      PATH                           BRAND    IP
0 global           running     /                              solaris  shared
- solariszone1     installed   /hapool/solaris/solariszone1   solaris  shared
phys-schost-2# zoneadm -z solariszone1 boot

Open a new terminal window and log in to solariszone1.
phys-schost-2# zlogin -C solariszone1
phys-schost-2# zoneadm -z solariszone1 halt

Forcibly detach the zone.
phys-schost-2# zoneadm -z solariszone1 detach -F

On both nodes, install and configure the HA for Zones agent.
phys-schost# pkg install ha-cluster/data-service/ha-zones
phys-schost# cd /opt/SUNWsczone/sczbt/util
phys-schost# cp -p sczbt_config sczbt_config.solariszone1-rs
phys-schost# vi sczbt_config.solariszone1-rs

On both nodes, add or modify the following entries in the sczbt_config.solariszone1-rs file.
RS="solariszone1-rs"
RG="zone-rg"
PARAMETERDIR=
SC_NETWORK="false"
SC_LH=""
FAILOVER="true"
HAS_RS="hasp-rs"
Zonename="solariszone1"
Zonebrand="solaris"
Zonebootopt=""
Milestone="multi-user-server"
LXrunlevel="3"
SLrunlevel="3"
Mounts=""
Migrationtype="cold"
Save and exit the file.

On both nodes, configure the solariszone1-rs resource and verify that it is enabled.
phys-schost# ./sczbt_register -f ./sczbt_config.solariszone1-rs
phys-schost# clresource enable solariszone1-rs
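
To verify that the resource is enabled, you can check its status, for example:
phys-schost# clresource status solariszone1-rs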

Verify that zone-rg can switch to another node and that solariszone1 successfully starts there after the switchover.
phys-schost-2# clresourcegroup switch -n phys-schost-1 zone-rg
phys-schost-1# clresourcegroup status
=== Cluster Resource Groups ===

Group Name                   Node Name          Suspended        Status
----------                   ---------          ---------        ------
zone-rg                      phys-schost-1      No               Online
                             phys-schost-2      No               Offline

phys-schost-1# zoneadm list -cv
ID  NAME         STATUS      PATH                           BRAND      IP
0  global       running      /                              solaris    shared
1  solariszone1 running      /hapool/solaris/solariszone1   solaris    shared
Example 1-7  Configuring the HA for Solaris Zones for a solaris-kz Brand Zone

This example shows how to configure a solaris-kz brand zone on a two-node cluster to perform warm migration.

  1. Identify the devices to be used as boot storage and suspend storage for the kernel zone.

    node-1# cldev list -v d2
    DID Device     Full Device Path
    d2             node-1:/dev/rdsk/c0t60080E5000184744000005B4513DF1A8d0
    d2             node-2:/dev/rdsk/c0t60080E5000184744000005B4513DF1A8d0
    
    node-1# suriadm lookup-uri /dev/did/dsk/d2
    dev:did/dsk/d2
    
    node-1# cldev list -v d3
    DID Device       Full Device Path
    d3               node-1:/dev/rdsk/c0t60080E5000184744000005B6513DF1B2d0
    d3               node-2:/dev/rdsk/c0t60080E5000184744000005B6513DF1B2d0
    
    node-1# suriadm lookup-uri /dev/did/dsk/d3
    dev:did/dsk/d3
    
    Device d2 (suri=dev:did/dsk/d2) will be used as the boot device for the kernel zone's root pool (rpool).
    
    Device d3 (suri=dev:did/dsk/d3) will be used as the suspend device.
  2. Configure the kernel zone, sol-kz-fz1, on node 1.

    node-1# zonecfg -z sol-kz-fz1 \ 
    'create -b; set brand=solaris-kz; add capped-memory; set physical=2G; end; 
    add device; set storage=dev:did/dsk/d2; set bootpri=1; end; 
    add suspend; set storage=dev:did/dsk/d3; end; 
    add anet; set lower-link=auto; end; set autoboot=false; 
    add attr; set name=osc-ha-zone; set type=boolean; set value=true; end;'
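
    You can confirm that the zone is now in the configured state by listing the zones, for example:
    node-1# zoneadm list -cv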
  3. Install the kernel zone, sol-kz-fz1, on node 1.

    node-1# zoneadm -z sol-kz-fz1 install
  4. Boot the kernel zone, sol-kz-fz1, on node 1.

    node-1# zoneadm -z sol-kz-fz1 boot
    
  5. Log in to the zone console from another shell and perform the initial zone setup.

    node-1# zlogin -C sol-kz-fz1
    

    Within the zone console, follow the instructions for the initial zone setup.

  6. Shut down the kernel zone, sol-kz-fz1.

    node-1# zoneadm -z sol-kz-fz1 shutdown
    
  7. Detach the kernel zone, sol-kz-fz1, from node 1.

    node-1# zoneadm -z sol-kz-fz1 detach -F
  8. Export the kernel zone configuration on node 1, copy it to a secure location on node 2 and import the zone configuration on node 2.

    This is the only supported method to copy the kernel zone configuration to another node while ensuring that it contains the encryption key for the kernel zone host data that it maintains. For more information about kernel zones, see the solaris-kz(5) man page.

    node-1# zonecfg -z sol-kz-fz1 export -f /var/cluster/run/sol-kz-fz1.cfg
    node-1# scp /var/cluster/run/sol-kz-fz1.cfg root@node-2:/var/cluster/run/
    node-1# rm /var/cluster/run/sol-kz-fz1.cfg
    
    node-2# zonecfg -z sol-kz-fz1 -f /var/cluster/run/sol-kz-fz1.cfg
    node-2# rm /var/cluster/run/sol-kz-fz1.cfg

    Repeat this step only if you determine that it is necessary to create a new host data encryption key by manually using the -x initialize-hostdata option of the zoneadm attach command. Normal operation and setup of kernel zones does not require re-creating the host data.
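
    In that case, the attach on the target node would look similar to the following sketch, which only illustrates the -x initialize-hostdata option named above; the normal takeover in Step 9 uses -x force-takeover instead:
    node-2# zoneadm -z sol-kz-fz1 attach -x initialize-hostdata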

  9. Attach the kernel zone, sol-kz-fz1, on node 2 using the -x force-takeover option.

    node-2# zoneadm -z sol-kz-fz1 attach -x force-takeover
  10. Boot the kernel zone, sol-kz-fz1, on node 2.

    node-2# zoneadm -z sol-kz-fz1 boot
    On another shell, log in to the zone console:
    node-2# zlogin -C sol-kz-fz1
  11. Suspend the kernel zone, sol-kz-fz1, on node 2.

    node-2# zoneadm -z sol-kz-fz1 suspend
    
  12. Detach the kernel zone, sol-kz-fz1, on node 2.

    node-2# zoneadm -z sol-kz-fz1 detach -F
  13. Configure the failover resource group.

    node-2# clrg create zone-rg
  14. (Optional) If you require device monitoring for the storage devices that are configured for use by the kernel zone, configure a SUNW.HAStoragePlus resource and specify the global device groups that correspond to the DID devices identified in Step 1 in the GlobalDevicePaths property.

    1. Register the SUNW.HAStoragePlus resource type, if it is not yet registered on the cluster.

      node-2# clrt register SUNW.HAStoragePlus
    2. Register the SUNW.HAStoragePlus resource.

      node-2# clrs create -t SUNW.HAStoragePlus -g zone-rg \
      -p GlobalDevicePaths=dsk/d2,dsk/d3 ha-zones-hasp-rs
    3. In Step 15, set the HAS_RS variable to the name of that SUNW.HAStoragePlus resource to ensure that the required resource dependency is set up for the sczbt component:

      HAS_RS=ha-zones-hasp-rs
  15. Create the configuration file for the sczbt component to manage the kernel zone, sol-kz-fz1.

    node-2# vi /opt/SUNWsczone/sczbt/util/sczbt_config.sol-kz-fz1-rs
    RS=sol-kz-fz1-rs
    RG=zone-rg
    PARAMETERDIR=
    SC_NETWORK=false
    SC_LH=
    FAILOVER=false
    HAS_RS=
    Zonename="sol-kz-fz1"
    Zonebrand="solaris-kz"
    Zonebootopt=""
    Milestone="svc:/milestone/multi-user-server"
    LXrunlevel="3"
    SLrunlevel="3"
    Mounts=""
    Migrationtype="warm"
    
  16. Register the sczbt component resource.

    node-2# /opt/SUNWsczone/sczbt/util/sczbt_register -f \
    /opt/SUNWsczone/sczbt/util/sczbt_config.sol-kz-fz1-rs
  17. Bring the resource group online and enable the sczbt resource.

    node-2# clrg online -Me zone-rg

    Within the zone console for the kernel zone, sol-kz-fz1, confirm that the zone resumes correctly.
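
    You can also confirm the state from the global zone, for example:

    node-2# clrs status sol-kz-fz1-rs
    node-2# zoneadm list -cv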

  18. Switch the zone-rg resource group to node-1.

    node-2# clrg switch -n node-1 zone-rg

    Confirm that the kernel zone suspends on node-2 and resumes on node-1, thus performing a successful warm migration.
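
    For example, you can confirm from node-1 that the resource group is online and the kernel zone is running:

    node-1# clrg status zone-rg
    node-1# zoneadm list -cv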

Next Steps

Go to Verifying the HA for Solaris Zones Installation and Configuration.