
Configuring a Zone Cluster

This section provides procedures to configure a cluster of Oracle Solaris Containers non-global zones, called a zone cluster.

Overview of the clzonecluster Utility

The clzonecluster utility creates, modifies, and removes a zone cluster. The clzonecluster utility actively manages a zone cluster. For example, the clzonecluster utility both boots and halts a zone cluster. Progress messages for the clzonecluster utility are output to the console, but are not saved in a log file.

The utility operates in the following levels of scope, similar to the zonecfg utility:

  • The cluster scope, which affects the entire zone cluster.

  • The node scope, which affects only the single zone-cluster node that is specified.

  • The resource scope, which affects either a specific node or the entire zone cluster, depending on whether you enter the resource scope from the node scope or from the cluster scope.
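For example, the following commands, run from a global-cluster node, report the status of a zone cluster, boot it on all of its configured nodes, and then halt it; the zone-cluster name sczone is used only for illustration.

phys-schost# clzonecluster status sczone
phys-schost# clzonecluster boot sczone
phys-schost# clzonecluster halt sczone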

Establishing the Zone Cluster

This section describes how to configure a cluster of non-global zones.

How to Prepare for Trusted Extensions Use With Zone Clusters

This procedure prepares the global cluster to use the Trusted Extensions feature of Oracle Solaris software with zone clusters and enables the Trusted Extensions feature.

If you do not plan to enable Trusted Extensions, proceed to How to Create a Zone Cluster.

Perform this procedure on each node in the global cluster.

Before You Begin

Perform the following tasks:

  1. Become superuser on a node of the global cluster.
  2. Disable the Trusted Extensions zoneshare and zoneunshare scripts.

    The Trusted Extensions zoneshare and zoneunshare scripts support the ability to export home directories on the system. An Oracle Solaris Cluster configuration does not support this feature.

    Disable this feature by replacing each script with a symbolic link to the /bin/true utility. Do this on each global-cluster node.

    phys-schost# ln -fs /bin/true /usr/lib/zones/zoneshare
    phys-schost# ln -fs /bin/true /usr/lib/zones/zoneunshare
  3. Configure all logical-hostname shared-IP addresses that are in the global cluster.

    See Run the txzonemgr Script in Trusted Extensions Configuration Guide.

  4. Ensure that the administrative console is defined in the /etc/security/tsol/tnrhdb file as admin_low.
    ipaddress:admin_low
  5. Ensure that no /etc/hostname.interface file contains the -failover option in an entry.

    Delete the -failover option from any entry that contains that option.

  6. Modify the /etc/security/tsol/tnrhdb file to authorize communication with global-cluster components.

    Use the Security Templates wizard in Solaris Management Console as described in How to Construct a Remote Host Template in Trusted Extensions Administrator’s Procedures to perform the following tasks.

    • Create a new entry for IP addresses used by cluster components and assign each entry a CIPSO template.

      Add entries for each of the following IP addresses that exist in the global-cluster node's /etc/inet/hosts file:

      • Each global-cluster node private IP address

      • All cl_privnet IP addresses in the global cluster

      • Each logical-hostname public IP address for the global cluster

      • Each shared-address public IP address for the global cluster

      Entries would look similar to the following.

      127.0.0.1:cipso
      172.16.4.1:cipso
      172.16.4.2:cipso
      …
    • Add an entry to make the default template internal.

      0.0.0.0:internal

    For more information about CIPSO templates, see Configure the Domain of Interpretation in Trusted Extensions Configuration Guide.

  7. Enable the Trusted Extensions SMF service and reboot the global-cluster node.
    phys-schost# svcadm enable -s svc:/system/labeld:default
    phys-schost# shutdown -g0 -y -i6

    For more information, see Enable Trusted Extensions in Trusted Extensions Configuration Guide.

  8. Verify that the Trusted Extensions SMF service is enabled.
    phys-schost# svcs labeld
    STATE          STIME    FMRI
    online         17:52:55 svc:/system/labeld:default
  9. Repeat Step 1 through Step 8 on each remaining node of the global cluster.

    When all steps are completed on all global-cluster nodes, perform the remaining steps of this procedure on each node of the global cluster.

  10. Add the IP address of the Trusted Extensions-enabled LDAP server to the /etc/inet/hosts file on each global-cluster node.

    The LDAP server is used by the global zone and by the nodes of the zone cluster.

  11. Enable remote login by the LDAP server to the global-cluster node.
    1. In the /etc/default/login file, comment out the CONSOLE entry.
    2. Enable remote login.
      phys-schost# svcadm enable rlogin
    3. Modify the /etc/pam.conf file.

      Modify the account management entries by appending a Tab and typing allow_remote or allow_unlabeled respectively, as shown below.

      other   account requisite       pam_roles.so.1        Tab  allow_remote
      other   account required        pam_unix_account.so.1 Tab  allow_unlabeled
  12. Modify the /etc/nsswitch.ldap file.
    • Ensure that the passwd and group lookup entries have files first in the lookup order.

      …
      passwd:      files ldap
      group:       files ldap
      …
    • Ensure that the hosts and netmasks lookup entries have cluster listed first in the lookup order.

      …
      hosts:       cluster files ldap
      …
      netmasks:    cluster files ldap
      …
  13. Make the global-cluster node an LDAP client.

    See Make the Global Zone an LDAP Client in Trusted Extensions in Trusted Extensions Configuration Guide.

  14. Add Trusted Extensions users to the /etc/security/tsol/tnzonecfg file.

    Use the Add User wizard in Solaris Management Console as described in Creating Roles and Users in Trusted Extensions in Trusted Extensions Configuration Guide.

Next Steps

Create the zone cluster. Go to How to Create a Zone Cluster.

How to Create a Zone Cluster

Perform this procedure to create a cluster of non-global zones.

To modify the zone cluster after it is installed, see Performing Zone Cluster Administrative Tasks in Oracle Solaris Cluster System Administration Guide and the clzonecluster(1CL) man page.

Before You Begin

  1. Become superuser on an active member node of a global cluster.

    Note - Perform all steps of this procedure from a node of the global cluster.


  2. Ensure that the node of the global cluster is in cluster mode.

    If any node is in noncluster mode, changes that you make are propagated when the node returns to cluster mode. Therefore, you can create a zone cluster even if some global-cluster nodes are in noncluster mode. When those nodes return to cluster mode, the system performs zone-cluster creation tasks on those nodes.

    phys-schost# clnode status
    === Cluster Nodes ===
    
    --- Node Status ---
    
    Node Name                                       Status
    ---------                                       ------
    phys-schost-2                                   Online
    phys-schost-1                                   Online
  3. Start the clsetup utility.
    phys-schost# clsetup

    The Main Menu is displayed.

  4. Choose the Zone Cluster menu item.
  5. Choose the Create a Zone Cluster menu item.
  6. Type the name of the zone cluster you want to add.

    A zone cluster name can contain ASCII letters (a-z and A-Z), numbers, a dash, or an underscore. The maximum length of the name is 20 characters.

  7. Choose the property you want to change.

    Note - The brand and ip-type properties are set by default and cannot be changed.


    You can set the following properties:


    zonepath=zone-cluster-node-path

    Specifies the path to the zone cluster node. For example, /zones/sczone.

    enable_priv_net=value

    When set to true, Oracle Solaris Cluster private network communication is enabled between the nodes of the zone cluster. The Oracle Solaris Cluster private hostnames and IP addresses for the zone-cluster nodes are automatically generated by the system. Private network communication is disabled if the value is set to false. The default value is true.

    limitpriv=privilege[,…]

    Specifies the maximum set of privileges any process in this zone can obtain. See the zonecfg(1M) man page for more information.
  8. (Optional) Choose the Zone System Resource Control properties that you want to change.

    You can set the following properties:


    max-lwps=value

    Specifies the maximum number of lightweight processes (LWPs) simultaneously available to this zone cluster.

    max-shm-memory=value

    Specifies the maximum amount of shared memory, in GBytes, allowed for this zone cluster.

    max-shm-ids=value

    Specifies the maximum number of shared memory IDs allowed for this zone cluster.

    max-msg-ids=value

    Specifies the maximum number of message queue IDs allowed for this zone cluster.

    max-sem-ids=value

    Specifies the maximum number of semaphore IDs allowed for this zone cluster.

    cpu-shares=value

    Specifies the number of Fair Share Scheduler (FSS) shares to allocate to this zone cluster.
  9. (Optional) Choose the Zone CPU Resource Control property that you want to change.

    You can set the following properties:


    scope=scope-type

    Specifies whether the ncpus property that is used in the zone cluster is dedicated-cpu or capped-cpu.

    ncpus=value

    Specifies the limit for the scope type.

    • If the scope property is set to dedicated-cpu, the ncpus property sets a limit on the number of CPUs that should be assigned for this zone's exclusive use. The zone will create a pool and processor set when it boots. See the pooladm(1M) and poolcfg(1M) man pages for more information on resource pools.

    • If the scope property is set to capped-cpu, the ncpus property sets a limit on the amount of CPU time that can be used by a zone cluster. The unit used translates to the percentage of a single CPU that can be used by all user threads in a zone, expressed as a fraction (for example, .75) or a mixed number (whole number and fraction, for example, 1.25). An ncpus value of 1 means 100% of a CPU. See the pooladm(1M) and poolcfg(1M) man pages for more information on resource pools.

  10. (Optional) Choose the capped-memory property that you want to change.

    You can set the following properties:


    physical=value

    Specifies the GByte limit for physical memory.

    swap=value

    Specifies the GByte limit for swap memory.

    locked=value

    Specifies the GByte limit for locked memory.
  11. Choose a physical host from the list of available physical hosts.

    You can select one or all of the available physical nodes (or hosts), and then configure one zone-cluster node at a time.

    You can set the following properties:


    hostname=hostname

    Specifies the zone-cluster node hostname. For example, zc-host-1.

    address=public-network-address

    Specifies the public-network address for the zone-cluster node on a shared-IP type zone cluster. For example, 172.1.1.1.

    physical=physical-interface

    Specifies a physical interface for the public network, chosen from the network interfaces that are discovered on the physical nodes. For example, bge0.

    defrouter=default-router

    Specifies the default router for the network address, if your zone is configured in a different subnet. Each zone or set of zones that uses a different defrouter setting must be on a different subnet, for example, 192.168.0.1. See the zonecfg(1M) man page for more information about the defrouter property.
  12. Specify the network addresses for the zone cluster.

    The network addresses can be used to configure a logical hostname or shared-IP cluster resources in the zone cluster. The network address is in the zone cluster global scope.

  13. At the Review Configuration screen, press Return to continue and then type c to create the zone cluster.

    The results of your configuration change are displayed, similar to the following:

     >>> Result of the Creation for the Zone Cluster(sczone) <<<
    
        The zone cluster is being created with the following configuration
    
            /usr/cluster/bin/clzonecluster configure sczone
            create
            set brand=cluster
            set zonepath=/zones/sczone
            set ip-type=shared
            set enable_priv_net=true
            add capped-memory
            set physical=2G
            end
            add node
            set physical-host=phys-schost-1
            set hostname=zc-host-1
            add net
            set address=172.1.1.1
            set physical=net0
            end
            end
            add net
            set address=172.1.1.2
            end

        Zone cluster sczone has been created and configured successfully.
    
        Continue to install the zone cluster(yes/no) ?
  14. Type yes to continue.

    The clsetup utility performs a standard installation of a zone cluster and you cannot specify any options.

  15. When finished, exit the clsetup utility.
  16. Verify the zone cluster configuration.

    The verify subcommand checks for the availability of the specified resources. If the clzonecluster verify command succeeds, there is no output.

    phys-schost-1# clzonecluster verify zoneclustername
    phys-schost-1# clzonecluster status zoneclustername
    === Zone Clusters ===
    
    --- Zone Cluster Status ---
    
    Name      Node Name   Zone HostName   Status    Zone Status
    ----      ---------   -------------   ------    -----------
    zone      basenode1    zone-1        Offline   Configured
              basenode2    zone-2        Offline   Configured
  17. For Trusted Extensions, make the password files writable on each zone-cluster node.

    From the global zone, launch the txzonemgr GUI.

    phys-schost# txzonemgr

    Select the global zone, then select the item, Configure per-zone name service.

  18. Install the zone cluster.
    phys-schost-1# clzonecluster install [-c config-profile.xml] zoneclustername
    Waiting for zone install commands to complete on all the nodes 
    of the zone cluster "zoneclustername"...

    The -c config-profile.xml option specifies a configuration profile for all non-global zones of the zone cluster. Using this option changes only the hostname of the zone, which is unique for each zone in the zone cluster. All profiles must have a .xml extension.

    Installation of the zone cluster might take several minutes.

  19. Boot the zone cluster.
    phys-schost-1# clzonecluster boot zoneclustername
    Waiting for zone boot commands to complete on all the nodes of 
    the zone cluster "zoneclustername"...
  20. If you did not use the -c config-profile.xml option when you installed the zone cluster, perform sysid configuration.

    Perform the following steps on each zone-cluster node.


    Note - In the following steps, the non-global zone zcnode and zone-cluster-name share the same name.


    1. Unconfigure the Oracle Solaris instance and reboot the zone.
      phys-schost# zlogin zcnode
      zcnode# sysconfig unconfigure
      zcnode# reboot

      The zlogin session terminates during the reboot.

    2. Issue the zlogin command and progress through the interactive screens.
      phys-schost# zlogin -C zcnode
    3. When finished, exit the zone console.

      For information about methods to exit from a non-global zone, see How to Exit a Non-Global Zone in System Administration Guide: Oracle Solaris Containers-Resource Management and Oracle Solaris Zones.

    4. Repeat for each remaining zone-cluster node.
  21. If you use Trusted Extensions, complete IP-address mappings for the zone cluster.

    Perform this step on each node of the zone cluster.

    1. From a node of the global cluster, display the node's ID.
      phys-schost# cat /etc/cluster/nodeid
      N
    2. Log in to a zone-cluster node on the same global-cluster node.

      Ensure that the SMF service has been imported and all services are up before you log in.

    3. Determine the IP addresses used by this zone-cluster node for the private interconnect.

      The cluster software automatically assigns these IP addresses when the cluster software configures a zone cluster.

      In the ifconfig -a output, locate the clprivnet0 logical interface that belongs to the zone cluster. The value for inet is the IP address that was assigned to support the use of the cluster private interconnect by this zone cluster.

      zc1# ifconfig -a
      lo0:3: flags=20010008c9<UP,LOOPBACK,RUNNING,NOARP,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
              zone zc1
              inet 127.0.0.1 netmask ff000000
      bge0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
              inet 10.11.166.105 netmask ffffff00 broadcast 10.11.166.255
              groupname sc_ipmp0
              ether 0:3:ba:19:fa:b7
      ce0: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 4
              inet 10.11.166.109 netmask ffffff00 broadcast 10.11.166.255
              groupname sc_ipmp0
              ether 0:14:4f:24:74:d8
      ce0:3: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
              zone zc1
              inet 10.11.166.160 netmask ffffff00 broadcast 10.11.166.255
      clprivnet0: flags=1009843<UP,BROADCAST,RUNNING,MULTICAST,MULTI_BCAST,PRIVATE,IPv4> mtu 1500 index 7
              inet 172.16.0.18 netmask fffffff8 broadcast 172.16.0.23
              ether 0:0:0:0:0:2
      clprivnet0:3: flags=1009843<UP,BROADCAST,RUNNING,MULTICAST,MULTI_BCAST,PRIVATE,IPv4> mtu 1500 index 7
              zone zc1
              inet 172.16.0.22 netmask fffffffc broadcast 172.16.0.23
    4. Add the following IP addresses of the zone-cluster node to that node's /etc/inet/hosts file:
      • The hostname for the private interconnect, which is clusternodeN-priv, where N is the global-cluster node ID

        172.16.0.22    clusternodeN-priv 
      • Each net resource that was specified to the clzonecluster command when you created the zone cluster

    5. Repeat on the remaining zone-cluster nodes.
  22. Modify the /etc/security/tsol/tnrhdb file to authorize communication with zone-cluster components.

    Use the Security Templates wizard in Solaris Management Console as described in How to Construct a Remote Host Template in Trusted Extensions Administrator’s Procedures to perform the following tasks.

    • Create a new entry for IP addresses used by zone-cluster components and assign each entry a CIPSO template.

      Add entries for each of the following IP addresses that exist in the zone-cluster node's /etc/inet/hosts file:

      • Each zone-cluster node private IP address

      • All cl_privnet IP addresses in the zone cluster

      • Each logical-hostname public IP address for the zone cluster

      • Each shared-address public IP address for the zone cluster

      Entries would look similar to the following.

      127.0.0.1:cipso
      172.16.4.1:cipso
      172.16.4.2:cipso
      …
    • Add an entry to make the default template internal.

      0.0.0.0:internal

    For more information about CIPSO templates, see Configure the Domain of Interpretation in Trusted Extensions Configuration Guide.

  23. Enable DNS and rlogin access to the zone-cluster nodes.

    Perform the following commands on each node of the zone cluster.

    phys-schost# zlogin zcnode
    zcnode# svcadm enable svc:/network/dns/client:default
    zcnode# svcadm enable svc:/network/login:rlogin
    zcnode# reboot

Example 6-2 Configuration File to Create a Zone Cluster

The following example shows the contents of a command file that can be used with the clzonecluster utility to create a zone cluster. The file contains the series of clzonecluster commands that you would input manually.

In the following configuration, the zone cluster sczone is created on the global-cluster node phys-schost-1. The zone cluster uses /zones/sczone as the zone path and public IP address 172.16.2.2. The first node of the zone cluster is assigned the hostname zc-host-1 and uses the network address 172.16.0.1 and the bge0 adapter. The second node of the zone cluster is created on the global-cluster node phys-schost-2. This second zone-cluster node is assigned the hostname zc-host-2 and uses the network address 172.16.0.2 and the bge1 adapter.

create
set zonepath=/zones/sczone
add net
set address=172.16.2.2
end
add node
set physical-host=phys-schost-1
set hostname=zc-host-1
add net
set address=172.16.0.1
set physical=bge0
end
end
add sysid
set root_password=encrypted_password
end
add node
set physical-host=phys-schost-2
set hostname=zc-host-2
add net
set address=172.16.0.2
set physical=bge1
end
end
commit
exit
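To apply a command file like this one, save it on the global-cluster node and pass it to the configure subcommand with the -f option; the path /tmp/sczone-config below is only a placeholder.

phys-schost-1# clzonecluster configure -f /tmp/sczone-config sczone
phys-schost-1# clzonecluster verify sczone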

Next Steps

To add the use of a file system to the zone cluster, go to Adding File Systems to a Zone Cluster.

To add the use of global storage devices to the zone cluster, go to Adding Storage Devices to a Zone Cluster.

See Also

To patch a zone cluster, follow procedures in Chapter 11, Patching Oracle Solaris Cluster Software and Firmware, in Oracle Solaris Cluster System Administration Guide. These procedures include special instructions for zone clusters, where needed.

Adding File Systems to a Zone Cluster

This section provides procedures to add file systems for use by the zone cluster.

After a file system is added to a zone cluster and brought online, the file system is authorized for use from within that zone cluster. To mount the file system for use, configure the file system by using cluster resources such as SUNW.HAStoragePlus or SUNW.ScalMountPoint.

The following procedures are in this section:

  • How to Add a Highly Available Local File System to a Zone Cluster

  • How to Add a ZFS Storage Pool to a Zone Cluster

  • How to Add a Cluster File System to a Zone Cluster

In addition, to configure a ZFS storage pool to be highly available in a zone cluster, see How to Set Up the HAStoragePlus Resource Type to Make a Local Solaris ZFS Highly Available in Oracle Solaris Cluster Data Services Planning and Administration Guide.

How to Add a Highly Available Local File System to a Zone Cluster

Perform this procedure to configure a highly available local file system on the global cluster for use by the zone cluster. The file system is added to the zone cluster and is configured with an HAStoragePlus resource to make the local file system highly available.

Perform all steps of the procedure from a node of the global cluster.

  1. Become superuser on a node of the global cluster that hosts the zone cluster.

    Note - Perform all steps of the procedure from a node of the global cluster.


  2. Start the clsetup utility.
    phys-schost# clsetup

    The Main Menu is displayed.


    Tip - To return to a previous screen, type the < key and press Return.


  3. Choose the Zone Cluster menu item.

    The Zone Cluster Tasks Menu is displayed.

  4. Choose the Add File System/Storage Device to a Zone Cluster menu item.

    The Select Zone Cluster menu is displayed.

  5. Choose the zone cluster where you want to add the file system.

    The Storage Type Selection menu is displayed.

  6. Choose the File System menu item.

    The File System Selection for the Zone Cluster menu is displayed.

  7. Choose the file system you want to add to the zone cluster.

    The file systems in the list are those that are configured on the shared disks and can be accessed by the nodes where the zone cluster is configured. You can also type e to manually specify all properties for a file system.

    The Mount Type Selection menu is displayed.

  8. Choose the Loopback mount type.

    The File System Properties for the Zone Cluster menu is displayed.

  9. Change the properties that you are allowed to change for the file system you are adding.

    Note - For UFS file systems, enable logging.


    When finished, type d and press Return.

  10. Type c to save the configuration change.

    The results of your configuration change are displayed.

  11. When finished, exit the clsetup utility.
  12. Verify the addition of the file system.
    phys-schost# clzonecluster show -v zoneclustername

Example 6-3 Adding a Highly Available Local File System to a Zone Cluster

This example adds the local file system /global/oracle/d1 for use by the sczone zone cluster.

phys-schost-1# clzonecluster configure sczone
clzc:sczone> add fs
clzc:sczone:fs> set dir=/global/oracle/d1
clzc:sczone:fs> set special=/dev/md/oracle/dsk/d1
clzc:sczone:fs> set raw=/dev/md/oracle/rdsk/d1
clzc:sczone:fs> set type=ufs
clzc:sczone:fs> add options [logging]
clzc:sczone:fs> end
clzc:sczone> verify
clzc:sczone> commit
clzc:sczone> exit

phys-schost-1# clzonecluster show -v sczone
…
  Resource Name:                            fs
    dir:                                       /global/oracle/d1
    special:                                   /dev/md/oracle/dsk/d1
    raw:                                       /dev/md/oracle/rdsk/d1
    type:                                      ufs
    options:                                   [logging]
    cluster-control:                           [true]
…

Next Steps

Configure the file system to be highly available by using an HAStoragePlus resource. The HAStoragePlus resource manages the mounting of the file system on the zone-cluster node that currently hosts the applications that are configured to use the file system. See Enabling Highly Available Local File Systems in Oracle Solaris Cluster Data Services Planning and Administration Guide.
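The following is a minimal sketch of that configuration, run from the global zone and assuming a zone cluster named sczone, a new resource group named oracle-rg, and a resource named hasp-rs; see the referenced guide for the complete procedure.

phys-schost# clresourcetype register -Z sczone SUNW.HAStoragePlus
phys-schost# clresourcegroup create -Z sczone oracle-rg
phys-schost# clresource create -Z sczone -g oracle-rg -t SUNW.HAStoragePlus \
-p FileSystemMountPoints=/global/oracle/d1 hasp-rs
phys-schost# clresourcegroup online -eM -Z sczone oracle-rg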

How to Add a ZFS Storage Pool to a Zone Cluster

Perform this procedure to add a ZFS storage pool for use by a zone cluster. The pool can be local to a single zone-cluster node or configured with HAStoragePlus to be highly available.

The clsetup utility discovers and displays all configured ZFS pools on the shared disks that can be accessed by the nodes where the selected zone cluster is configured. After you use the clsetup utility to add a ZFS storage pool in cluster scope to an existing zone cluster, you can use the clzonecluster command to modify the configuration or to add a ZFS storage pool in node scope.

Before You Begin

Ensure that the ZFS pool is configured on shared disks that are connected to all nodes of the zone cluster. See Oracle Solaris ZFS Administration Guide for procedures to create a ZFS pool.
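For example, a mirrored pool might be created from two shared disks similar to the following, where the pool name and device names are placeholders only.

phys-schost# zpool create zpool1 mirror c1t1d0 c2t1d0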

  1. Become superuser on a node of the global cluster that hosts the zone cluster.

    Note - Perform all steps of this procedure from a node of the global zone.


  2. Start the clsetup utility.
    phys-schost# clsetup

    The Main Menu is displayed.


    Tip - To return to a previous screen, type the < key and press Return.


  3. Choose the Zone Cluster menu item.

    The Zone Cluster Tasks Menu is displayed.

  4. Choose the Add File System/Storage Device to a Zone Cluster menu item.

    The Select Zone Cluster menu is displayed.

  5. Choose the zone cluster where you want to add the ZFS storage pool.

    The Storage Type Selection menu is displayed.

  6. Choose the ZFS menu item.

    The ZFS Pool Selection for the Zone Cluster menu is displayed.

  7. Choose the ZFS pool you want to add to the zone cluster.

    The ZFS pools in the list are those that are configured on the shared disks and can be accessed by the nodes where the zone cluster is configured. You can also type e to manually specify properties for a ZFS pool.

    The ZFS Pool Dataset Property for the Zone Cluster menu is displayed. The selected ZFS pool is assigned to the name property.

  8. Type d and press Return.

    The Review File Systems/Storage Devices for the Zone Cluster menu is displayed.

  9. Type c to save the configuration change.

    The results of your configuration change are displayed. For example:

     >>> Result of Configuration Change to the Zone Cluster(sczone) <<<
    
        Adding file systems or storage devices to sczone zone cluster...
    
        The zone cluster is being created with the following configuration
    
            /usr/cluster/bin/clzonecluster configure sczone
            add dataset
            set name=myzpool5
            end
    
        Configuration change to sczone zone cluster succeeded.
  10. When finished, exit the clsetup utility.
  11. Verify the addition of the ZFS storage pool.
    phys-schost# clzonecluster show -v zoneclustername

Example 6-4 Adding a ZFS Storage Pool to a Zone Cluster

The following example shows the ZFS storage pool zpool1 added to the zone cluster sczone.

phys-schost-1# clzonecluster configure sczone
clzc:sczone> add dataset
clzc:sczone:dataset> set name=zpool1
clzc:sczone:dataset> end
clzc:sczone> verify
clzc:sczone> commit
clzc:sczone> exit

phys-schost-1# clzonecluster show -v sczone
…
  Resource Name:                                dataset
    name:                                          zpool1
…

Next Steps

Configure the ZFS storage pool to be highly available by using an HAStoragePlus resource. The HAStoragePlus resource manages the mounting of the file systems that are in the pool on the zone-cluster node that currently hosts the applications that are configured to use those file systems. See Enabling Highly Available Local File Systems in Oracle Solaris Cluster Data Services Planning and Administration Guide.
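The following is a minimal sketch of that configuration, assuming a zone cluster named sczone, a resource group named zpool-rg, and a resource named hasp-zpool-rs; the Zpools extension property names the pool that was added to the zone cluster.

phys-schost# clresourcetype register -Z sczone SUNW.HAStoragePlus
phys-schost# clresourcegroup create -Z sczone zpool-rg
phys-schost# clresource create -Z sczone -g zpool-rg -t SUNW.HAStoragePlus \
-p Zpools=zpool1 hasp-zpool-rs
phys-schost# clresourcegroup online -eM -Z sczone zpool-rg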

How to Add a Cluster File System to a Zone Cluster

The clsetup utility discovers and displays the available file systems that are configured on the cluster nodes where the selected zone cluster is configured. When you use the clsetup utility to add a file system, the file system is added in cluster scope.

You can add the following types of cluster file systems to a zone cluster:

  • UFS cluster file systems

  • Shared QFS file systems

  • Oracle ACFS file systems

Before You Begin

Ensure that the cluster file system you want to add to the zone cluster is configured. See Planning Cluster File Systems and Chapter 5, Creating a Cluster File System.

  1. Become superuser on a node of the global cluster that hosts the zone cluster.

    Note - Perform all steps of this procedure from a voting node of the global cluster.


  2. On each node of the global cluster that hosts a zone-cluster node, add an entry to the /etc/vfstab file for the file system that you want to mount on the zone cluster.
    phys-schost# vi /etc/vfstab
    • For a UFS entry, include the global mount option, similar to the following example:
      /dev/md/datadg/dsk/d0 /dev/md/datadg/rdsk/d0 /global/fs ufs 2 no global,logging
    • For a shared QFS entry, include the shared mount option, similar to the following example:
      Data-cz1    -    /db_qfs/Data1 samfs - no shared,notrace
  3. On the global cluster, start the clsetup utility.
    phys-schost# clsetup

    The Main Menu is displayed.


    Tip - To return to a previous screen, type the < key and press Return.


  4. Choose the Zone Cluster menu item.

    The Zone Cluster Tasks Menu is displayed.

  5. Choose the Add File System/Storage Device to a Zone Cluster menu item.

    The Select Zone Cluster menu is displayed.

  6. Choose the zone cluster where you want to add the file system.

    The Storage Type Selection menu is displayed.

  7. Choose the File System menu item.

    The File System Selection for the Zone Cluster menu is displayed.

  8. Choose a file system from the list.

    The Mount Type Selection menu is displayed.

    You can also type e to manually specify all properties for a file system.


    Note - If you are using an ACFS file system, type a to select Discover ACFS and then specify the ORACLE_HOME directory.


  9. Choose the Loopback file system mount type for the zone cluster.

    Note - If you chose an ACFS file system in Step 8, the clsetup utility skips this step because ACFS supports only the direct mount type.


    For more information about creating loopback file systems, see How to Create and Mount an LOFS File System in System Administration Guide: Devices and File Systems.

    The File System Properties for the Zone Cluster menu is displayed.

  10. Specify the mount point directory.

    Type the number for the dir property and press Return. Then type the LOFS mount point directory name in the New Value field and press Return.

    When finished, type d and press Return. The Review File Systems/Storage Devices for the Zone Cluster menu is displayed.

  11. Type c to save the configuration change.

    The results of your configuration change are displayed. For example:

      >>> Result of Configuration Change to the Zone Cluster(sczone) <<<
    
        Adding file systems or storage devices to sczone zone cluster...
    
        The zone cluster is being created with the following configuration
    
            /usr/cluster/bin/clzonecluster configure sczone
            add fs
            set dir=/dev/md/ddg/dsk/d9
            set special=/dev/md/ddg/dsk/d10
            set raw=/dev/md/ddg/rdsk/d10
            set type=lofs
            end
    
        Configuration change to sczone zone cluster succeeded.
  12. When finished, exit the clsetup utility.
  13. Verify the addition of the LOFS file system.
    phys-schost# clzonecluster show -v zoneclustername

Next Steps

(Optional) Configure the cluster file system to be managed by an HAStoragePlus resource. The HAStoragePlus resource manages the cluster file system by mounting it in the global cluster and then performing a loopback mount on the zone-cluster nodes that currently host the applications that are configured to use the file system. For more information, see Configuring an HAStoragePlus Resource for Cluster File Systems in Oracle Solaris Cluster Data Services Planning and Administration Guide.

Adding Local File Systems to a Specific Zone-Cluster Node

This section describes how to add file systems that are dedicated to a single zone-cluster node. To instead configure file systems for use by the entire zone cluster, go to Adding File Systems to a Zone Cluster.

This section contains the following procedures:

  • How to Add a Local File System to a Specific Zone-Cluster Node

  • How to Add a Local ZFS Storage Pool to a Specific Zone-Cluster Node

How to Add a Local File System to a Specific Zone-Cluster Node

Perform this procedure to add a local file system to a single, specific zone-cluster node of a specific zone cluster. The file system is not managed by Oracle Solaris Cluster software but is instead passed to the underlying Oracle Solaris zone.


Note - To add a highly available local file system to a zone cluster, perform procedures in How to Add a Highly Available Local File System to a Zone Cluster.


  1. Become superuser on a node of the global cluster that hosts the zone cluster.

    Note - Perform all steps of the procedure from a node of the global cluster.


  2. Create the local file system that you want to configure to a specific zone-cluster node.

    Use local disks of the global-cluster node that hosts the intended zone-cluster node.

  3. Add the file system to the zone-cluster configuration in the node scope.
    phys-schost# clzonecluster configure zoneclustername
    clzc:zoneclustername> select node physical-host=baseclusternode
    clzc:zoneclustername:node> add fs
    clzc:zoneclustername:node:fs> set dir=mountpoint
    clzc:zoneclustername:node:fs> set special=disk-device-name
    clzc:zoneclustername:node:fs> set raw=raw-disk-device-name
    clzc:zoneclustername:node:fs> set type=FS-type
    clzc:zoneclustername:node:fs> end
    clzc:zoneclustername:node> end
    clzc:zoneclustername> verify
    clzc:zoneclustername> commit
    clzc:zoneclustername> exit
    dir=mountpoint

    Specifies the file-system mount point

    special=disk-device-name

    Specifies the name of the disk device

    raw=raw-disk-device-name

    Specifies the name of the raw-disk device

    type=FS-type

    Specifies the type of file system


    Note - Enable logging for UFS file systems.


  4. Verify the addition of the file system.
    phys-schost# clzonecluster show -v zoneclustername

Example 6-5 Adding a Local File System to a Zone-Cluster Node

This example adds a local UFS file system /local/data for use by a node of the sczone zone cluster. This zone-cluster node is hosted on global-cluster node phys-schost-1.

phys-schost-1# clzonecluster configure sczone
clzc:sczone> select node physical-host=phys-schost-1
clzc:sczone:node> add fs
clzc:sczone:node:fs> set dir=/local/data
clzc:sczone:node:fs> set special=/dev/md/localdg/dsk/d1
clzc:sczone:node:fs> set raw=/dev/md/localdg/rdsk/d1
clzc:sczone:node:fs> set type=ufs
clzc:sczone:node:fs> add options [logging]
clzc:sczone:node:fs> end
clzc:sczone:node> end
clzc:sczone> verify
clzc:sczone> commit
clzc:sczone> exit

phys-schost-1# clzonecluster show -v sczone
…
     --- Solaris Resources for phys-schost-1 --- 
…
   Resource Name:                                fs
     dir:                                           /local/data
     special:                                       /dev/md/localdg/dsk/d1
     raw:                                           /dev/md/localdg/rdsk/d1
     type:                                          ufs
     options:                                       [logging]
     cluster-control:                               false
…

How to Add a Local ZFS Storage Pool to a Specific Zone-Cluster Node

Perform this procedure to add a local ZFS storage pool to a specific zone-cluster node. The local ZFS pool is not managed by Oracle Solaris Cluster software but is instead passed to the underlying Oracle Solaris zone.


Note - To add a highly available local ZFS pool to a zone cluster, see How to Add a Highly Available Local File System to a Zone Cluster.


Perform all steps of the procedure from a node of the global cluster.

  1. Become superuser on a node of the global cluster that hosts the zone cluster.

    Note - Perform all steps of the procedure from a node of the global cluster.


  2. Create the local ZFS pool that you want to configure to a specific zone-cluster node.

    Use local disks of the global-cluster node that hosts the intended zone-cluster node.

  3. Add the pool to the zone-cluster configuration in the node scope.
    phys-schost# clzonecluster configure zoneclustername
    clzc:zoneclustername> select node physical-host=baseclusternode
    clzc:zoneclustername:node> add dataset
    clzc:zoneclustername:node:dataset> set name=localZFSpoolname
    clzc:zoneclustername:node:dataset> end
    clzc:zoneclustername:node> end
    clzc:zoneclustername> verify
    clzc:zoneclustername> commit
    clzc:zoneclustername> exit
    set name=localZFSpoolname

    Specifies the name of the local ZFS pool

  4. Verify the addition of the ZFS pool.
    phys-schost# clzonecluster show -v zoneclustername

Example 6-6 Adding a Local ZFS Pool to a Zone-Cluster Node

This example adds the local ZFS pool local_pool for use by a node of the sczone zone cluster. This zone-cluster node is hosted on global-cluster node phys-schost-1.

phys-schost-1# clzonecluster configure sczone
clzc:sczone> select node physical-host=phys-schost-1
clzc:sczone:node> add dataset
clzc:sczone:node:dataset> set name=local_pool
clzc:sczone:node:dataset> end
clzc:sczone:node> end
clzc:sczone> verify
clzc:sczone> commit
clzc:sczone> exit

phys-schost-1# clzonecluster show -v sczone
…
     --- Solaris Resources for phys-schost-1 --- 
…
   Resource Name:                                dataset
     name:                                          local_pool

Adding Storage Devices to a Zone Cluster

This section describes how to add the direct use of global storage devices by a zone cluster or add storage devices that are dedicated to a single zone-cluster node. Global devices are devices that can be accessed by more than one node in the cluster, either one node at a time or multiple nodes concurrently.

After a device is added to a zone cluster, the device is visible only from within that zone cluster.
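After the zone cluster reboots to make the change effective, one way to confirm that the device is visible inside the zone cluster is to list the matching device node from a zone-cluster node; the zone name zcnode and the device c1t1d0s0 below are placeholders only.

phys-schost# zlogin zcnode ls -l /dev/rdsk/c1t1d0s0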

This section contains the following procedures:

  • How to Add a Global Storage Device to a Zone Cluster

  • How to Add a Raw-Disk Device to a Specific Zone-Cluster Node

How to Add a Global Storage Device to a Zone Cluster

Perform this procedure to add one of the following types of storage devices in cluster scope:


Note - To add a raw-disk device to a specific zone-cluster node, go instead to How to Add a Raw-Disk Device to a Specific Zone-Cluster Node.


The clsetup utility discovers and displays the available storage devices that are configured on the cluster nodes where the selected zone cluster is configured. After you use the clsetup utility to add a storage device to an existing zone cluster, use the clzonecluster command to modify the configuration. For instructions on using the clzonecluster command to remove a storage device from a zone cluster, see How to Remove a Storage Device From a Zone Cluster in Oracle Solaris Cluster System Administration Guide.
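For example, a device entry that was added in cluster scope could later be removed with a clzonecluster session similar to the following, where the zone-cluster name and match value are only illustrations.

phys-schost# clzonecluster configure sczone
clzc:sczone> remove device match=/dev/md/ddg/*dsk/*
clzc:sczone> commit
clzc:sczone> exit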

  1. Become superuser on a node of the global cluster that hosts the zone cluster.

    Note - Perform all steps of the procedure from a node of the global cluster.


  2. Identify the device to add to the zone cluster and determine whether it is online.
    phys-schost# cldevicegroup status
  3. If the device that you are adding is not online, bring it online.
    phys-schost# cldevicegroup online device
  4. Start the clsetup utility.
    phys-schost# clsetup

    The Main Menu is displayed.


    Tip - To return to a previous screen, type the < key and press Return.


  5. Choose the Zone Cluster menu item.

    The Zone Cluster Tasks Menu is displayed.

  6. Choose the Add File System/Storage Device to a Zone Cluster menu item.

    The Select Zone Cluster menu is displayed.

  7. Choose the zone cluster where you want to add the storage device.

    The Storage Type Selection menu is displayed.

  8. Choose the Device menu item.

    A list of the available devices is displayed.

  9. Choose a storage device from the list.

    You can also type e to manually specify properties for a storage device.

    The Storage Device Property for the Zone Cluster menu is displayed.

  10. Add or change any properties for the storage device you are adding.

    Note - An asterisk (*) is used as a wildcard character in the path name.


    When finished, type d and press Return. The Review File Systems/Storage Devices for the Zone Cluster menu is displayed.

  11. Type c to save the configuration change.

    The results of your configuration change are displayed. For example:

     >>> Result of Configuration Change to the Zone Cluster(sczone) <<<
    
        Adding file systems or storage devices to sczone zone cluster...
    
        The zone cluster is being created with the following configuration
    
            /usr/cluster/bin/clzonecluster configure sczone
            add device
            set match=/dev/md/ddg/*dsk/*
            end
            add device
            set match=/dev/md/shared/1/*dsk/*
            end
    
        Configuration change to sczone zone cluster succeeded.
        The change will become effective after the zone cluster reboots.
  12. When finished, exit the clsetup utility.
  13. Verify the addition of the device.
    phys-schost# clzonecluster show -v zoneclustername

How to Add a Raw-Disk Device to a Specific Zone-Cluster Node

Perform this procedure to add a raw-disk device to a specific zone-cluster node. This device would not be under Oracle Solaris Cluster control. Perform all steps of the procedure from a node of the global cluster.


Note - To add a raw-disk device for use by the full zone cluster, go instead to How to Add a Global Storage Device to a Zone Cluster.


  1. Become superuser on a node of the global cluster that hosts the zone cluster.

    Note - Perform all steps of the procedure from a node of the global cluster.


  2. Identify the device (cNtXdYsZ) to add to the zone cluster and determine whether it is online.
  3. Add the device to the zone-cluster configuration in the node scope.

    Note - An asterisk (*) is used as a wildcard character in the path name.


    phys-schost# clzonecluster configure zone-cluster-name
    clzc:zone-cluster-name> select node physical-host=baseclusternode
    clzc:zone-cluster-name:node> add device
    clzc:zone-cluster-name:node:device> set match=/dev/*dsk/cNtXdYs*
    clzc:zone-cluster-name:node:device> end
    clzc:zone-cluster-name:node> end
    clzc:zone-cluster-name> verify
    clzc:zone-cluster-name> commit
    clzc:zone-cluster-name> exit
    match=/dev/*dsk/cNtXdYs*

    Specifies the full device path of the raw-disk device

  4. Verify the addition of the device.
    phys-schost# clzonecluster show -v zoneclustername

Example 6-7 Adding a Raw-Disk Device to a Specific Zone-Cluster Node

The following example adds the raw-disk device c1t1d0s0 for use by a node of the sczone zone cluster. This zone-cluster node is hosted on global-cluster node phys-schost-1.

phys-schost-1# clzonecluster configure sczone
clzc:sczone> select node physical-host=phys-schost-1
clzc:sczone:node> add device
clzc:sczone:node:device> set match=/dev/*dsk/c1t1d0s0
clzc:sczone:node:device> end
clzc:sczone:node> end
clzc:sczone> verify
clzc:sczone> commit
clzc:sczone> exit

phys-schost-1# clzonecluster show -v sczone
…
     --- Solaris Resources for phys-schost-1 --- 
…
   Resource Name:                                device
     name:                                          /dev/*dsk/c1t1d0s0