Oracle Solaris Cluster Data Service for Oracle Solaris Zones Guide, Oracle Solaris Cluster 4.0

Document Information

Preface

1.  Installing and Configuring HA for Solaris Zones

HA for Solaris Zones Overview

Overview of Installing and Configuring HA for Solaris Zones

Planning the HA for Solaris Zones Installation and Configuration

Configuration Restrictions

Restrictions for Zone Network Addresses

Restrictions for an HA Zone

Restrictions for a Multiple-Masters Zone

Restrictions for the Zone Path of a Zone

Restrictions on Major Device Numbers in /etc/name_to_major

Configuration Requirements

Dependencies Between HA for Solaris Zones Components

Parameter File Directory for HA for Solaris Zones

Installing and Configuring Zones

How to Enable a Zone to Run in a Failover Configuration

How to Enable a Zone to Run in a Multiple-Masters Configuration

How to Install a Zone and Perform the Initial Internal Zone Configuration

Verifying the Installation and Configuration of a Zone

How to Verify the Installation and Configuration of a Zone

Installing the HA for Solaris Zones Package

How to Install the HA for Solaris Zones Package

Registering and Configuring HA for Solaris Zones

Specifying Configuration Parameters for the Zone Boot Resource

Writing Scripts for the Zone Script Resource

Specifying Configuration Parameters for the Zone Script Resource

Writing a Service Probe for the Zone SMF Resource

Specifying Configuration Parameters for the Zone SMF Resource

How to Create and Enable Resources for the Zone Boot Component

How to Create and Enable Resources for the Zone Script Component

How to Create and Enable Resources for the Zone SMF Component

Verifying the HA for Solaris Zones Installation and Configuration

How to Verify the HA for Solaris Zones Installation and Configuration

Upgrading Non-Global Zones Managed by HA for Oracle Solaris Zones

Tuning the HA for Solaris Zones Fault Monitors

Operation of the HA for Solaris Zones Parameter File

Operation of the Fault Monitor for the Zone Boot Component

Operation of the Fault Monitor for the Zone Script Component

Operation of the Fault Monitor for the Zone SMF Component

Tuning the HA for Solaris Zones Stop_timeout property

Choosing the Stop_timeout value for the Zone Boot Component

Choosing the Stop_timeout value for the Zone Script Component

Choosing the Stop_timeout value for the Zone SMF Component

Denying Cluster Services for a Non-Global Zone

Debugging HA for Solaris Zones

How to Activate Debugging for HA for Solaris Zones

A.  Files for Configuring HA for Solaris Zones Resources

Index

Installing and Configuring Zones

Installing and configuring Solaris Zones involves the following tasks:

  1. Enabling a zone to run in your chosen data service configuration, as explained in How to Enable a Zone to Run in a Failover Configuration and How to Enable a Zone to Run in a Multiple-Masters Configuration.

  2. Installing and configuring a zone, as explained in How to Install a Zone and Perform the Initial Internal Zone Configuration.

Perform this task for each zone that you are installing and configuring. This section explains only the special requirements for installing Solaris Zones for use with HA for Solaris Zones. For complete information about installing and configuring Solaris Zones, see Oracle Solaris Administration: Oracle Solaris Zones, Oracle Solaris 10 Zones, and Resource Management.

How to Enable a Zone to Run in a Failover Configuration

  1. Register the SUNW.HAStoragePlus resource type.
    # clresourcetype register SUNW.HAStoragePlus
  2. Create a failover resource group.
    # clresourcegroup create solaris-zone-resource-group
  3. Create a resource for the zone's disk storage.

    This HAStoragePlus resource is for the zonepath. The file system must be a failover file system.

    # clresource create \
    -g solaris-zone-resource-group \
    -t SUNW.HAStoragePlus \
    -p Zpools=solaris-zone-instance-zpool \
    solaris-zone-has-resource-name
  4. (Optional) Create a resource for the zone's logical hostname.
    # clreslogicalhostname create \
    -g solaris-zone-resource-group \
    -h solaris-zone-logical-hostname \
    solaris-zone-logical-hostname-resource-name
  5. Enable the failover resource group.
    # clresourcegroup online -M solaris-zone-resource-group
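
To confirm that the failover resource group and its resources are online, you can optionally check their status. The names shown are the placeholders that are used in the preceding steps:

    # clresourcegroup status solaris-zone-resource-group
    # clresource status -g solaris-zone-resource-group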

How to Enable a Zone to Run in a Multiple-Masters Configuration

  1. Create a scalable resource group.
    # clresourcegroup create \
    -p Maximum_primaries=max-number \
    -p Desired_primaries=desired-number \
    solaris-zone-resource-group
  2. Enable the scalable resource group.
    # clresourcegroup online -M solaris-zone-resource-group
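
For example, in Step 1, to allow the zone to run on both nodes of a two-node cluster at the same time, you could set both properties to 2. The resource group name is the placeholder that is used in the preceding steps:

    # clresourcegroup create \
    -p Maximum_primaries=2 \
    -p Desired_primaries=2 \
    solaris-zone-resource-group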

How to Install a Zone and Perform the Initial Internal Zone Configuration

Perform this task on each node that is to host the zone.


Note - For complete information about installing a zone, see Oracle Solaris Administration: Oracle Solaris Zones, Oracle Solaris 10 Zones, and Resource Management.


Before You Begin

Consult Configuration Restrictions and then determine the following requirements for the deployment of the zone with Oracle Solaris Cluster:

  • Ensure that the zone is configured.

  • If the zone that you are installing is to run in a failover configuration, configure the zone's zone path to specify a file system on a zpool. The zpool must be managed by the SUNW.HAStoragePlus resource that you created in How to Enable a Zone to Run in a Failover Configuration. An example of creating such a zpool follows this list.

For detailed information about configuring a zone before installation of the zone, see Oracle Solaris Administration: Oracle Solaris Zones, Oracle Solaris 10 Zones, and Resource Management.
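
If you have not yet created the zpool for the failover zone path, you can create it on shared storage before you register it with the SUNW.HAStoragePlus resource. In the following example, the device name c1t1d0 is only a placeholder for a shared device in your configuration, and the pool name is the placeholder that is used earlier in this chapter:

    # zpool create solaris-zone-instance-zpool c1t1d0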


Note - This procedure assumes that you are performing it on a two-node cluster. If you perform this procedure on a cluster that has more than two nodes, perform on all nodes any step that this procedure directs you to perform on both nodes.


  1. Become superuser on one node of the cluster.

    Alternatively, if your user account is assigned the System Administrator profile, issue commands as non-root through a profile shell, or prefix the command with the pfexec command.
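
    For example, a user who is assigned that rights profile could run a cluster status command without becoming superuser. The command shown here is only an illustration:

    phys-schost-1$ pfexec clresourcegroup status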

  2. If you will use a solaris10 brand zone, set up the system image.

    Follow procedures in Creating the Image for Directly Migrating Oracle Solaris 10 Systems Into Zones in Oracle Solaris Administration: Oracle Solaris Zones, Oracle Solaris 10 Zones, and Resource Management.
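
    For example, on the source Oracle Solaris 10 system, you might create a flash archive of the system image with the flarcreate command. The system name and archive path shown here are only placeholders:

    s10-system# flarcreate -S -n s10-system /net/storagehost/export/images/s10-system.flar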

  3. Create an HAStoragePlus resource.

    Specify the ZFS storage pool and the resource group that you created.

    phys-schost-1# clresource create -t SUNW.HAStoragePlus \
    -g resourcegroup -p Zpools=pool hasp-resource
  4. Bring the resource group online.
    phys-schost-1# clresourcegroup online -eM resourcegroup
  5. Create a ZFS file-system dataset on the ZFS storage pool that you created.

    You will use this file system as the zone root path for the solaris brand zone that you create later in this procedure.

    phys-schost-1# zfs create pool/filesystem
  6. For a solaris brand zone, ensure that the universally unique ID (UUID) of each node's boot-environment (BE) root dataset is the same value.
    1. Determine the UUID of the BE root dataset on the node where you initially create the zone.

      Output is similar to the following:

      phys-schost-1# beadm list -H
      b101b-SC;8fe53702-16c3-eb21-ed85-d19af92c6bbd;NR;/;756…

      In this example output, the UUID is 8fe53702-16c3-eb21-ed85-d19af92c6bbd and the BE is b101b-SC.

    2. Set the same UUID on the second node.
      phys-schost-2# zfs set org.opensolaris.libbe:uuid=uuid rpool/ROOT/BE

      Note - If you use a multiple-masters configuration, you do not need to set the UUID as described in this step.
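
      For example, if the active BE on the second node is also named b101b-SC, you would apply the UUID from the sample output in the previous step. Substitute the UUID and BE name from your own configuration:

      phys-schost-2# zfs set org.opensolaris.libbe:uuid=8fe53702-16c3-eb21-ed85-d19af92c6bbd rpool/ROOT/b101b-SC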


  7. On both nodes, configure the solaris or solaris10 brand non-global zone.

    Set the zone root path to the file system that you created on the ZFS storage pool.

    1. Configure the zone.

      Note - You must define the osc-ha-zone attribute in the zone configuration, setting type to boolean and value to true.


      • For a solaris brand zone, use the following command.
        phys-schost# zonecfg -z zonename \
        'create ; add attr; set name=osc-ha-zone; set type=boolean; set value=true; end;
        set zonepath=/pool/filesystem/zonename ; set autoboot=false'
      • For a solaris10 brand zone, use the following command.
        phys-schost# zonecfg -z zonename \
        'create ; set brand=solaris10; set zonepath=/pool/filesystem/zonename ;
        add attr; set name=osc-ha-zone; set type=boolean;
        set value=true; end; set autoboot=false'
    2. Verify the zone configuration.
      phys-schost# zoneadm list -cv
        ID NAME          STATUS        PATH                           BRAND    IP    
         0 global        running       /                              solaris  shared
         - zonename      configured   /pool/filesystem/zonename         brand    shared
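
      You can also confirm that the osc-ha-zone attribute is set by displaying the attr resources of the zone configuration. Output is similar to the following:

      phys-schost# zonecfg -z zonename info attr
      attr:
              name: osc-ha-zone
              type: boolean
              value: true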
  8. From the node that masters the HAStoragePlus resource, install the solaris or solaris10 brand non-global zone.

    Note - For a multiple-masters configuration, you do not need an HAStoragePlus resource as described in Step 3, and you do not need to perform the switchover that is described in Step 9.


    1. Determine which node masters the HAStoragePlus resource.

      Output is similar to the following:

      phys-schost# clresource status
      === Cluster Resources ===

      Resource Name             Node Name       Status        Message
      --------------            ----------      -------       -------
      hasp-resource             phys-schost-1   Online        Online
                                phys-schost-2   Offline       Offline

      Perform the remaining tasks in this step from the node that masters the HAStoragePlus resource.

    2. Install the zone on the node that masters the HAStoragePlus resource for the ZFS storage pool.
      • For a solaris brand zone, use the following command.
        phys-schost-1# zoneadm -z zonename install
      • For a solaris10 brand zone, use the following command.
        phys-schost-1# zoneadm -z zonename install -a flarimage -u
    3. Verify that the zone is installed.
      phys-schost-1# zoneadm list -cv
        ID NAME           STATUS       PATH                           BRAND    IP    
         0 global         running      /                              solaris  shared
         - zonename       installed    /pool/filesystem/zonename        brand    shared
    4. Boot the zone that you created and verify that the zone is running.
      phys-schost-1# zoneadm -z zonename boot
      phys-schost-1# zoneadm list -cv
        ID NAME           STATUS       PATH                           BRAND    IP    
         0 global         running      /                              solaris  shared
         - zonename       running      /pool/filesystem/zonename        brand    shared
    5. Open a new terminal window and log in to the zone console.

      Follow the interactive steps to finish the zone configuration.
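
      For example, you can connect to the zone console with the zlogin command, just as is done on the second node later in this procedure:

      phys-schost-1# zlogin -C zonename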

    6. Halt the zone.

      The zone's status should return to installed.

      phys-schost-1# zoneadm -z zonename halt
    7. Forcibly detach the zone.
      phys-schost-1# zoneadm -z zonename detach -F

      The zone state changes from installed to configured.

  9. Switch the resource group to the other node and forcibly attach the zone.
    1. Switch over the resource group.

      Input is similar to the following, where phys-schost-1 is the node that currently masters the resource group and phys-schost-2 is the node to which you switch the resource group.

      phys-schost-1# clresourcegroup switch -n phys-schost-2 resourcegroup

      Perform the remaining tasks in this step from the node to which you switch the resource group.

    2. Forcibly attach the zone to the node to which you switched the resource group.
      phys-schost-2# zoneadm -z zonename attach -F
    3. Verify that the zone is installed on the node.

      Output is similar to the following:

      phys-schost-2# zoneadm list -cv
        ID NAME           STATUS       PATH                           BRAND    IP    
         0 global         running      /                              solaris  shared
         - zonename       installed    /pool/filesystem/zonename        brand    shared
    4. Boot the zone.
      phys-schost-2# zoneadm -z zonename boot
    5. Open a new terminal window and log in to the zone.

      Perform this step to verify that the zone is functional.

      phys-schost-2# zlogin -C zonename
    6. Halt the zone.
      phys-schost-2# zoneadm -z zonename halt
    7. Forcibly detach the zone.
      phys-schost-2# zoneadm -z zonename detach -F

      The zone state changes from installed to configured.
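
      You can optionally confirm that the zone has returned to the configured state on this node. Output is similar to the following:

      phys-schost-2# zoneadm list -cv
        ID NAME           STATUS       PATH                           BRAND    IP    
         0 global         running      /                              solaris  shared
         - zonename       configured   /pool/filesystem/zonename      brand    shared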