Oracle Solaris Cluster Software Installation Guide

Configuring a Zone Cluster

This section provides procedures to configure a cluster of Solaris Containers non-global zones, called a zone cluster.

Overview of the clzonecluster Utility

The clzonecluster utility creates, modifies, and removes a zone cluster. The clzonecluster utility actively manages a zone cluster. For example, the clzonecluster utility both boots and halts a zone cluster. Progress messages for the clzonecluster utility are output to the console, but are not saved in a log file.
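For example, assuming a zone cluster named sczone already exists, the following commands boot the zone cluster, halt it, and display its status (the zone-cluster name is illustrative only):

phys-schost# clzonecluster boot sczone
phys-schost# clzonecluster halt sczone
phys-schost# clzonecluster status sczone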

The utility operates in the following levels of scope, similar to the zonecfg utility:

• The cluster scope affects the entire zone cluster.

• The node scope affects only the one zone-cluster node that is specified.

• The resource scope affects either a specific node or the entire zone cluster, depending on the scope from which you enter the resource scope.

Establishing the Zone Cluster

This section describes how to configure a cluster of non-global zones.

How to Prepare for Trusted Extensions Use With Zone Clusters

This procedure prepares the global cluster to use the Trusted Extensions feature of Oracle Solaris with zone clusters and enables the Trusted Extensions feature.

If you do not plan to enable Trusted Extensions, proceed to How to Create a Zone Cluster.

Perform this procedure on each node in the global cluster.

Before You Begin

Perform the following tasks:

  1. Become superuser on a node of the global cluster.
  2. Disable the Trusted Extensions zoneshare and zoneunshare scripts.

    The Trusted Extensions zoneshare and zoneunshare scripts support the ability to export home directories on the system. An Oracle Solaris Cluster configuration does not support this feature.

    Disable this feature by replacing each script with a symbolic link to the /bin/true utility. Do this on each global-cluster node.

    phys-schost# ln -sf /bin/true /usr/lib/zones/zoneshare
    phys-schost# ln -sf /bin/true /usr/lib/zones/zoneunshare
  3. Configure all logical-hostname shared-IP addresses that are in the global cluster.

    See Run the txzonemgr Script in Oracle Solaris Trusted Extensions Configuration Guide.

  4. Ensure that the administrative console is defined in the /etc/security/tsol/tnrhdb file as admin_low.
    ipaddress:admin_low
  5. Ensure that no /etc/hostname.interface file contains the -failover option in an entry.

    Delete the -failover option from any entry that contains that option.
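    For example, the following command, a sketch you can adapt to your configuration, lists any /etc/hostname.interface files that contain the failover keyword:

    phys-schost# grep -l failover /etc/hostname.*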

  6. Modify the /etc/security/tsol/tnrhdb file to authorize communication with global-cluster components.

    Use the Security Templates wizard in Solaris Management Console as described in How to Construct a Remote Host Template in Oracle Solaris Trusted Extensions Administrator’s Procedures to perform the following tasks.

    • Create a new entry for IP addresses used by cluster components and assign each entry a CIPSO template.

      Add entries for each of the following IP addresses that exist in the global-cluster node's /etc/inet/hosts file:

      • Each global-cluster node private IP address

      • All cl_privnet IP addresses in the global cluster

      • Each logical-hostname public IP address for the global cluster

      • Each shared-address public IP address for the global cluster

      Entries would look similar to the following.

      127.0.0.1:cipso
      172.16.4.1:cipso
      172.16.4.2:cipso
      …
    • Add an entry to make the default template internal.

      0.0.0.0:internal

    For more information about CIPSO templates, see Configure the Domain of Interpretation in Oracle Solaris Trusted Extensions Configuration Guide.

  7. Enable the Trusted Extensions SMF service and reboot the global-cluster node.
    phys-schost# svcadm enable -s svc:/system/labeld:default
    phys-schost# shutdown -g0 -y -i6

    For more information, see Enable Trusted Extensions in Oracle Solaris Trusted Extensions Configuration Guide.

  8. Verify that the Trusted Extensions SMF service is enabled.
    phys-schost# svcs labeld
    STATE          STIME    FMRI
    online         17:52:55 svc:/system/labeld:default
  9. Repeat Step 1 through Step 8 on each remaining node of the global cluster.

    When the SMF service is enabled on all global-cluster nodes, perform the remaining steps of this procedure on each node of the global cluster.

  10. Add the IP address of the Trusted Extensions-enabled LDAP server to the /etc/inet/hosts file on each global-cluster node.

    The LDAP server is used by the global zone and by the nodes of the zone cluster.
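    An entry would look similar to the following, where the IP address and hostname are examples only:

    192.168.10.25    ldapserver-1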

  11. Enable remote login by the LDAP server to the global-cluster node.
    1. In the /etc/default/login file, comment out the CONSOLE entry.
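      On Oracle Solaris systems this entry is typically CONSOLE=/dev/console; the commented-out entry would look similar to the following:

      #CONSOLE=/dev/console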
    2. Enable remote login.
      phys-schost# svcadm enable rlogin
    3. Modify the /etc/pam.conf file.

      Modify the account management entries by appending a Tab and typing allow_remote or allow_unlabeled respectively, as shown below.

      other   account requisite       pam_roles.so.1        Tab  allow_remote
      other   account required        pam_unix_account.so.1 Tab  allow_unlabeled
  12. Modify the /etc/nsswitch.ldap file.
    • Ensure that the passwd and group lookup entries have files first in the lookup order.

      …
      passwd:      files ldap
      group:       files ldap
      …
    • Ensure that the hosts and netmasks lookup entries have cluster listed first in the lookup order.

      …
      hosts:       cluster files ldap
      …
      netmasks:    cluster files ldap
      …
  13. Make the global-cluster node an LDAP client.

    See Make the Global Zone an LDAP Client in Trusted Extensions in Oracle Solaris Trusted Extensions Configuration Guide.

  14. Add Trusted Extensions users to the /etc/security/tsol/tnzonecfg file.

    Use the Add User wizard in Solaris Management Console as described in Creating Roles and Users in Trusted Extensions in Solaris Trusted Extensions Installation and Configuration for Solaris 10 11/06 and Solaris 10 8/07 Releases.

Next Steps

Create the zone cluster. Go to How to Create a Zone Cluster.

How to Create a Zone Cluster

Perform this procedure to create a cluster of non-global zones.

Before You Begin

  1. Become superuser on an active member node of a global cluster.

    Note - Perform all steps of this procedure from a node of the global cluster.


  2. Ensure that the node of the global cluster is in cluster mode.

    If any node is in noncluster mode, changes that you make are propagated when the node returns to cluster mode. Therefore, you can create a zone cluster even if some global-cluster nodes are in noncluster mode. When those nodes return to cluster mode, the system performs zone-cluster creation tasks on those nodes.

    phys-schost# clnode status
    === Cluster Nodes ===
    
    --- Node Status ---
    
    Node Name                                       Status
    ---------                                       ------
    phys-schost-2                                   Online
    phys-schost-1                                   Online
  3. Create the zone cluster.

    Observe the following special instructions:

    • If Trusted Extensions is enabled, zoneclustername must be the same name as a Trusted Extensions security label that has the security levels that you want to assign to the zone cluster. These security labels are configured in the /etc/security/tsol/tnrhtp files on the global cluster.

    • By default, sparse root zones are created. To create whole root zones, add the -b option to the create command.

    phys-schost-1# clzonecluster configure zoneclustername
    clzc:zoneclustername> create
    
    Set the zone path for the entire zone cluster
    clzc:zoneclustername> set zonepath=/zones/zoneclustername
    
    Add the first node and specify node-specific settings
    clzc:zoneclustername> add node
    clzc:zoneclustername:node> set physical-host=baseclusternode1
    clzc:zoneclustername:node> set hostname=hostname1
    clzc:zoneclustername:node> add net
    clzc:zoneclustername:node:net> set address=public_netaddr
    clzc:zoneclustername:node:net> set physical=adapter
    clzc:zoneclustername:node:net> end
    clzc:zoneclustername:node> end
    
    Add authorization for the public-network addresses that the zone cluster is allowed to use
    clzc:zoneclustername> add net
    clzc:zoneclustername:net> set address=ipaddress1
    clzc:zoneclustername:net> end
    
    Set the root password globally for all nodes in the zone cluster
    clzc:zoneclustername> add sysid
    clzc:zoneclustername:sysid> set root_password=encrypted_password
    clzc:zoneclustername:sysid> end
    
    Save the configuration and exit the utility
    clzc:zoneclustername> commit
    clzc:zoneclustername> exit
  4. If Trusted Extensions is enabled, add the /var/tsol/doors file system as a read-only loopback mount and set the name-service property to NONE.
    phys-schost-1# clzonecluster configure zoneclustername
    clzc:zoneclustername> add fs
    clzc:zoneclustername:fs> set dir=/var/tsol/doors
    clzc:zoneclustername:fs> set special=/var/tsol/doors
    clzc:zoneclustername:fs> set type=lofs
    clzc:zoneclustername:fs> add options ro
    clzc:zoneclustername:fs> end
    
    clzc:zoneclustername> commit
    clzc:zoneclustername> exit
  5. (Optional) Add one or more additional nodes to the zone cluster.
    phys-schost-1# clzonecluster configure zoneclustername
    clzc:zoneclustername> add node
    clzc:zoneclustername:node> set physical-host=baseclusternode2
    clzc:zoneclustername:node> set hostname=hostname2
    clzc:zoneclustername:node> add net
    clzc:zoneclustername:node:net> set address=public_netaddr
    clzc:zoneclustername:node:net> set physical=adapter
    clzc:zoneclustername:node:net> end
    clzc:zoneclustername:node> end
    clzc:zoneclustername> commit
    clzc:zoneclustername> exit
  6. If Trusted Extensions is enabled, on each global-cluster node add or modify the following entries in the /zones/zoneclustername/root/etc/sysidcfg file.
    phys-schost-1# clzonecluster configure zoneclustername
    clzc:zoneclustername> add sysid
    clzc:zoneclustername:sysid> set name_service=LDAP
    clzc:zoneclustername:sysid> set domain_name=domainorg.domainsuffix
    clzc:zoneclustername:sysid> set proxy_dn="cn=proxyagent,ou=profile,dc=domainorg,dc=domainsuffix"
    clzc:zoneclustername:sysid> set proxy_password="proxypassword"
    clzc:zoneclustername:sysid> set profile=ldap-server
    clzc:zoneclustername:sysid> set profile_server=txldapserver_ipaddress
    clzc:zoneclustername:sysid> end
    
    clzc:zoneclustername> commit
    clzc:zoneclustername> exit
  7. Verify the zone cluster configuration.

    The verify subcommand checks for the availability of the specified resources. If the clzonecluster verify command succeeds, there is no output.

    phys-schost-1# clzonecluster verify zoneclustername
    phys-schost-1# clzonecluster status zoneclustername
    === Zone Clusters ===
    
    --- Zone Cluster Status ---
    
    Name      Node Name   Zone HostName   Status    Zone Status
    ----      ---------   -------------   ------    -----------
    zone      basenode1    zone-1        Offline   Configured
              basenode2    zone-2        Offline   Configured
  8. Install the zone cluster.

    Installation of the zone cluster might take several minutes.

    phys-schost-1# clzonecluster install zoneclustername
    Waiting for zone install commands to complete on all the nodes 
    of the zone cluster "zoneclustername"...
  9. Boot the zone cluster.
    phys-schost-1# clzonecluster boot zoneclustername
    Waiting for zone boot commands to complete on all the nodes of 
    the zone cluster "zoneclustername"...
  10. If you use Trusted Extensions, complete IP-address mappings for the zone cluster.

    Perform this step on each node of the zone cluster.

    1. From a node of the global cluster, display the node's ID.
      phys-schost# cat /etc/cluster/nodeid
      N
    2. Log in to a zone-cluster node on the same global-cluster node.

      Ensure that the SMF services have been imported and all services are up before you log in.
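      For example, you can run the svcs -x command in the zone; the command reports only services that are not running correctly, so no output indicates that all services are up:

      zc1# svcs -x
      zc1#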

    3. Determine the IP addresses used by this zone-cluster node for the private interconnect.

      The cluster software automatically assigns these IP addresses when the cluster software configures a zone cluster.

      In the ifconfig -a output, locate the clprivnet0 logical interface that belongs to the zone cluster. The value for inet is the IP address that was assigned to support the use of the cluster private interconnect by this zone cluster.

      zc1# ifconfig -a
      lo0:3: flags=20010008c9<UP,LOOPBACK,RUNNING,NOARP,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
              zone zc1
              inet 127.0.0.1 netmask ff000000
      bge0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
              inet 10.11.166.105 netmask ffffff00 broadcast 10.11.166.255
              groupname sc_ipmp0
              ether 0:3:ba:19:fa:b7
      ce0: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 4
              inet 10.11.166.109 netmask ffffff00 broadcast 10.11.166.255
              groupname sc_ipmp0
              ether 0:14:4f:24:74:d8
      ce0:3: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
              zone zc1
              inet 10.11.166.160 netmask ffffff00 broadcast 10.11.166.255
      clprivnet0: flags=1009843<UP,BROADCAST,RUNNING,MULTICAST,MULTI_BCAST,PRIVATE,IPv4> mtu 1500 index 7
              inet 172.16.0.18 netmask fffffff8 broadcast 172.16.0.23
              ether 0:0:0:0:0:2
      clprivnet0:3: flags=1009843<UP,BROADCAST,RUNNING,MULTICAST,MULTI_BCAST,PRIVATE,IPv4> mtu 1500 index 7
              zone zc1
              inet 172.16.0.22 netmask fffffffc broadcast 172.16.0.23
    4. Add the following IP addresses of the zone-cluster node to the node's /etc/inet/hosts file.
      • The hostname for the private interconnect, which is clusternodeN-priv, where N is the global-cluster node ID

        172.16.0.22    clusternodeN-priv 
      • Each net resource that was specified to the clzonecluster command when you created the zone cluster

    5. Repeat on the remaining zone-cluster nodes.
  11. Modify the /etc/security/tsol/tnrhdb file to authorize communication with zone-cluster components.

    Use the Security Templates wizard in Solaris Management Console as described in How to Construct a Remote Host Template in Oracle Solaris Trusted Extensions Administrator’s Procedures to perform the following tasks.

    • Create a new entry for IP addresses used by zone-cluster components and assign each entry a CIPSO template.

      Add entries for each of the following IP addresses that exist in the zone-cluster node's /etc/inet/hosts file:

      • Each zone-cluster node private IP address

      • All cl_privnet IP addresses in the zone cluster

      • Each logical-hostname public IP address for the zone cluster

      • Each shared-address public IP address for the zone cluster

      Entries would look similar to the following.

      127.0.0.1:cipso
      172.16.4.1:cipso
      172.16.4.2:cipso
      …
    • Add an entry to make the default template internal.

      0.0.0.0:internal

    For more information about CIPSO templates, see Configure the Domain of Interpretation in Oracle Solaris Trusted Extensions Configuration Guide.

  12. After all zone-cluster nodes are modified, reboot the global-cluster nodes to initialize the changes to the zone-cluster /etc/inet/hosts files.
    phys-schost# shutdown -g0 -y -i6
  13. Enable DNS and rlogin access to the zone-cluster nodes.

    Perform the following commands on each node of the zone cluster.

    phys-schost# zlogin zcnode
    zcnode# svcadm enable svc:/network/dns/client:default
    zcnode# svcadm enable svc:/network/login:rlogin
    zcnode# reboot

Example 7-2 Configuration File to Create a Zone Cluster

The following example shows the contents of a command file that can be used with the clzonecluster utility to create a zone cluster. The file contains the series of clzonecluster commands that you would input manually.

In the following configuration, the zone cluster sczone is created on the global-cluster node phys-schost-1. The zone cluster uses /zones/sczone as the zone path and public IP address 172.16.2.2. The first node of the zone cluster is assigned the hostname zc-host-1 and uses the network address 172.16.0.1 and the bge0 adapter. The second node of the zone cluster is created on the global-cluster node phys-schost-2. This second zone-cluster node is assigned the hostname zc-host-2 and uses the network address 172.16.0.2 and the bge1 adapter.

create
set zonepath=/zones/sczone
add net
set address=172.16.2.2
end
add node
set physical-host=phys-schost-1
set hostname=zc-host-1
add net
set address=172.16.0.1
set physical=bge0
end
end
add sysid
set root_password=encrypted_password
end
add node
set physical-host=phys-schost-2
set hostname=zc-host-2
add net
set address=172.16.0.2
set physical=bge1
end
end
commit
exit

Example 7-3 Creating a Zone Cluster by Using a Configuration File

The following example shows the commands to create the new zone cluster sczone on the global-cluster node phys-schost-1 by using the configuration file sczone-config. The hostnames of the zone-cluster nodes are zc-host-1 and zc-host-2.

phys-schost-1# clzonecluster configure -f sczone-config sczone
phys-schost-1# clzonecluster verify sczone
phys-schost-1# clzonecluster install sczone
Waiting for zone install commands to complete on all the nodes of the 
zone cluster "sczone"...
phys-schost-1# clzonecluster boot sczone
Waiting for zone boot commands to complete on all the nodes of the 
zone cluster "sczone"...
phys-schost-1# clzonecluster status sczone
=== Zone Clusters ===

--- Zone Cluster Status ---

Name      Node Name        Zone HostName    Status    Zone Status
----      ---------        -------------    ------    -----------
sczone    phys-schost-1    zc-host-1        Offline   Running
          phys-schost-2    zc-host-2        Offline   Running

Next Steps

To add a file system for use by the zone cluster, go to Adding File Systems to a Zone Cluster.

To add global storage devices for use by the zone cluster, go to Adding Storage Devices to a Zone Cluster.

Adding File Systems to a Zone Cluster

This section provides procedures to add file systems for use by the zone cluster.

After a file system is added to a zone cluster and brought online, the file system is authorized for use from within that zone cluster. To mount the file system for use, configure the file system by using cluster resources such as SUNW.HAStoragePlus or SUNW.ScalMountPoint.


Note - You cannot use the clzonecluster command to add a local file system, which is mounted on a single global-cluster node, to a zone cluster. Instead, use the zonecfg command as you normally would in a stand-alone system. The local file system would not be under cluster control.


The following procedures are in this section:

• How to Add a Local File System to a Zone Cluster

• How to Add a ZFS Storage Pool to a Zone Cluster

• How to Add a QFS Shared File System to a Zone Cluster

• How to Add a Cluster File System to a Zone Cluster

In addition, to configure a ZFS storage pool to be highly available in a zone cluster, see How to Set Up the HAStoragePlus Resource Type to Make a Local Solaris ZFS Highly Available in Oracle Solaris Cluster Data Services Planning and Administration Guide.

How to Add a Local File System to a Zone Cluster

Perform this procedure to add a local file system on the global cluster for use by the zone cluster.


Note - To add a ZFS pool to a zone cluster, instead perform procedures in How to Add a ZFS Storage Pool to a Zone Cluster.

Alternatively, to configure a ZFS storage pool to be highly available in a zone cluster, see How to Set Up the HAStoragePlus Resource Type to Make a Local Solaris ZFS Highly Available in Oracle Solaris Cluster Data Services Planning and Administration Guide.


  1. Become superuser on a node of the global cluster that hosts the zone cluster.

    Note - Perform all steps of the procedure from a node of the global cluster.


  2. On the global cluster, create a file system that you want to use in the zone cluster.

    Ensure that the file system is created on shared disks.

  3. Add the file system to the zone-cluster configuration.
    phys-schost# clzonecluster configure zoneclustername
    clzc:zoneclustername> add fs
    clzc:zoneclustername:fs> set dir=mountpoint
    clzc:zoneclustername:fs> set special=disk-device-name
    clzc:zoneclustername:fs> set raw=raw-disk-device-name
    clzc:zoneclustername:fs> set type=FS-type
    clzc:zoneclustername:fs> end
    clzc:zoneclustername> verify
    clzc:zoneclustername> commit
    clzc:zoneclustername> exit
    dir=mountpoint

    Specifies the file system mount point

    special=disk-device-name

    Specifies the name of the disk device

    raw=raw-disk-device-name

    Specifies the name of the raw disk device

    type=FS-type

    Specifies the type of file system


    Note - Enable logging for UFS and VxFS file systems.


  4. Verify the addition of the file system.
    phys-schost# clzonecluster show -v zoneclustername

Example 7-4 Adding a Local File System to a Zone Cluster

This example adds the local file system /global/oracle/d1 for use by the sczone zone cluster.

phys-schost-1# clzonecluster configure sczone
clzc:sczone> add fs
clzc:sczone:fs> set dir=/global/oracle/d1
clzc:sczone:fs> set special=/dev/md/oracle/dsk/d1
clzc:sczone:fs> set raw=/dev/md/oracle/rdsk/d1
clzc:sczone:fs> set type=ufs
clzc:sczone:fs> add options [logging]
clzc:sczone:fs> end
clzc:sczone> verify
clzc:sczone> commit
clzc:sczone> exit

phys-schost-1# clzonecluster show -v sczone
…
  Resource Name:                            fs
    dir:                                       /global/oracle/d1
    special:                                   /dev/md/oracle/dsk/d1
    raw:                                       /dev/md/oracle/rdsk/d1
    type:                                      ufs
    options:                                   [logging]
    cluster-control:                           [true]
…

Next Steps

Configure the file system to be highly available by using an HAStoragePlus resource. The HAStoragePlus resource manages the mounting of the file system on the zone-cluster node that currently hosts the applications that are configured to use the file system. See Enabling Highly Available Local File Systems in Oracle Solaris Cluster Data Services Planning and Administration Guide.
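As a sketch, assuming the sczone zone cluster from the previous example and hypothetical resource-group and resource names, the resource could be created from the global cluster as follows; see the referenced guide for the complete supported procedure:

phys-schost-1# clresourcetype register -Z sczone SUNW.HAStoragePlus
phys-schost-1# clresourcegroup create -Z sczone oracle-rg
phys-schost-1# clresource create -Z sczone -g oracle-rg -t SUNW.HAStoragePlus \
-p FilesystemMountPoints=/global/oracle/d1 hasp-rs
phys-schost-1# clresourcegroup online -eM -Z sczone oracle-rg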

How to Add a ZFS Storage Pool to a Zone Cluster

Perform this procedure to add a ZFS storage pool for use by a zone cluster.


Note - To configure a ZFS storage pool to be highly available in a zone cluster, see How to Set Up the HAStoragePlus Resource Type to Make a Local Solaris ZFS Highly Available in Oracle Solaris Cluster Data Services Planning and Administration Guide.


  1. Become superuser on a node of the global cluster that hosts the zone cluster.

    Note - Perform all steps of this procedure from a node of the global cluster.


  2. Create the ZFS storage pool on the global cluster.

    Note - Ensure that the pool is created on shared disks that are connected to all nodes of the zone cluster.


    See Oracle Solaris ZFS Administration Guide for procedures to create a ZFS pool.
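    For example, a mirrored pool could be created on two shared disks with a command similar to the following, where the pool and device names are examples only:

    phys-schost# zpool create zpool1 mirror c1t1d0 c2t1d0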

  3. Add the pool to the zone-cluster configuration.
    phys-schost# clzonecluster configure zoneclustername
    clzc:zoneclustername> add dataset
    clzc:zoneclustername:dataset> set name=ZFSpoolname
    clzc:zoneclustername:dataset> end
    clzc:zoneclustername> verify
    clzc:zoneclustername> commit
    clzc:zoneclustername> exit
  4. Verify the addition of the ZFS storage pool.
    phys-schost# clzonecluster show -v zoneclustername

Example 7-5 Adding a ZFS Storage Pool to a Zone Cluster

The following example shows the ZFS storage pool zpool1 added to the zone cluster sczone.

phys-schost-1# clzonecluster configure sczone
clzc:sczone> add dataset
clzc:sczone:dataset> set name=zpool1
clzc:sczone:dataset> end
clzc:sczone> verify
clzc:sczone> commit
clzc:sczone> exit

phys-schost-1# clzonecluster show -v sczone
…
  Resource Name:                                dataset
    name:                                          zpool1
…

Next Steps

Configure the ZFS storage pool to be highly available by using an HAStoragePlus resource. The HAStoragePlus resource manages the mounting of the file systems that are in the pool on the zone-cluster node that currently hosts the applications that are configured to use the file systems. See Enabling Highly Available Local File Systems in Oracle Solaris Cluster Data Services Planning and Administration Guide.
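As a sketch, assuming the sczone zone cluster and hypothetical resource-group and resource names, the resource could be created from the global cluster as follows (skip the register step if the resource type is already registered in the zone cluster):

phys-schost-1# clresourcetype register -Z sczone SUNW.HAStoragePlus
phys-schost-1# clresourcegroup create -Z sczone app-rg
phys-schost-1# clresource create -Z sczone -g app-rg -t SUNW.HAStoragePlus \
-p Zpools=zpool1 hasp-zpool-rs
phys-schost-1# clresourcegroup online -eM -Z sczone app-rg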

How to Add a QFS Shared File System to a Zone Cluster

Perform this procedure to add a Sun QFS shared file system for use by a zone cluster.


Note - At this time, QFS shared file systems are only supported for use in clusters that are configured with Oracle Real Application Clusters (RAC). On clusters that are not configured with Oracle RAC, you can use a single-machine QFS file system that is configured as a highly available local file system.


  1. Become superuser on a voting node of the global cluster that hosts the zone cluster.

    Note - Perform all steps of this procedure from a voting node of the global cluster.


  2. On the global cluster, configure the QFS shared file system that you want to use in the zone cluster.

    Follow procedures for shared file systems in Configuring Sun QFS File Systems With Sun Cluster.

  3. On each node of the global cluster that hosts a zone-cluster node, add an entry to the /etc/vfstab file for the file system that you want to mount on the zone cluster.
    phys-schost# vi /etc/vfstab
  4. If you are adding a QFS shared file system as a loopback file system to a zone cluster, go to Step 6.
  5. Add the file system to the zone cluster configuration.
    phys-schost# clzonecluster configure zoneclustername
    clzc:zoneclustername> add fs
    clzc:zoneclustername:fs> set dir=mountpoint
    clzc:zoneclustername:fs> set special=QFSfilesystemname
    clzc:zoneclustername:fs> set type=samfs
    clzc:zoneclustername:fs> end
    clzc:zoneclustername> verify
    clzc:zoneclustername> commit
    clzc:zoneclustername> exit

    Go to Step 7.

  6. Configure the QFS file system as a loopback file system for the zone cluster.
    phys-schost# clzonecluster configure zoneclustername
    clzc:zoneclustername> add fs
    clzc:zoneclustername:fs> set dir=lofs-mountpoint
    clzc:zoneclustername:fs> set special=QFS-mountpoint
    clzc:zoneclustername:fs> set type=lofs
    clzc:zoneclustername:fs> end
    clzc:zoneclustername> verify
    clzc:zoneclustername> commit
    clzc:zoneclustername> exit
  7. Verify the addition of the file system.
    phys-schost# clzonecluster show -v zoneclustername

Example 7-6 Adding a QFS Shared File System as a Direct Mount to a Zone Cluster

The following example shows the QFS shared file system Data-cz1 added to the zone cluster sczone. From the global cluster, the mount point of the file system is /zones/sczone/root/db_qfs/Data1, where /zones/sczone/root/ is the zone's root path. From the zone-cluster node, the mount point of the file system is /db_qfs/Data1.

phys-schost-1# vi /etc/vfstab
#device     device    mount   FS      fsck    mount     mount
#to mount   to fsck   point   type    pass    at boot   options
#          
Data-cz1 - /zones/sczone/root/db_qfs/Data1 samfs - no shared,notrace

phys-schost-1# clzonecluster configure sczone
clzc:sczone> add fs
clzc:sczone:fs> set dir=/db_qfs/Data1
clzc:sczone:fs> set special=Data-cz1
clzc:sczone:fs> set type=samfs
clzc:sczone:fs> end
clzc:sczone> verify
clzc:sczone> commit
clzc:sczone> exit

phys-schost-1# clzonecluster show -v sczone
…
  Resource Name:                            fs
    dir:                                       /db_qfs/Data1
    special:                                   Data-cz1
    raw:                                       
    type:                                      samfs
    options:                                   []
…

Example 7-7 Adding a QFS Shared File System as a Loopback File System to a Zone Cluster

The following example shows the QFS shared file system with the mount point /db_qfs/Data1 added to the zone cluster sczone. The file system is available to the zone cluster through the loopback mount mechanism at the mount point /db_qfs/Data-cz1.

phys-schost-1# vi /etc/vfstab
#device     device    mount   FS      fsck    mount     mount
#to mount   to fsck   point   type    pass    at boot   options
#          
Data-cz1 - /db_qfs/Data1 samfs - no shared,notrace

phys-schost-1# clzonecluster configure sczone
clzc:sczone> add fs
clzc:sczone:fs> set dir=/db_qfs/Data-cz1
clzc:sczone:fs> set special=/db_qfs/Data1
clzc:sczone:fs> set type=lofs
clzc:sczone:fs> end
clzc:sczone> verify
clzc:sczone> commit
clzc:sczone> exit

phys-schost-1# clzonecluster show -v sczone
…
  Resource Name:                            fs
    dir:                                       /db_qfs/Data-cz1
    special:                                   /db_qfs/Data1
    raw:                                       
    type:                                      lofs
    options:                                   []
    cluster-control:                           [true]
…

How to Add a Cluster File System to a Zone Cluster

Perform this procedure to add a cluster file system for use by a zone cluster.

  1. Become superuser on a voting node of the global cluster that hosts the zone cluster.

    Note - Perform all steps of this procedure from a voting node of the global cluster.


  2. On the global cluster, configure the cluster file system that you want to use in the zone cluster.
  3. On each node of the global cluster that hosts a zone-cluster node, add an entry to the /etc/vfstab file for the file system that you want to mount on the zone cluster.
    phys-schost# vi /etc/vfstab
    …
    /dev/global/dsk/d12s0 /dev/global/rdsk/d12s0 /global/fs ufs 2 no global,logging
  4. Configure the cluster file system as a loopback file system for the zone cluster.
    phys-schost# clzonecluster configure zoneclustername
    clzc:zoneclustername> add fs
    clzc:zoneclustername:fs> set dir=zonecluster-lofs-mountpoint
    clzc:zoneclustername:fs> set special=globalcluster-mountpoint
    clzc:zoneclustername:fs> set type=lofs
    clzc:zoneclustername:fs> end
    clzc:zoneclustername> verify
    clzc:zoneclustername> commit
    clzc:zoneclustername> exit
    dir=zonecluster-lofs-mountpoint

    Specifies the file system mount point for LOFS to make the cluster file system available to the zone cluster.

    special=globalcluster-mountpoint

    Specifies the file system mount point of the original cluster file system in the global cluster.

    For more information about creating loopback file systems, see How to Create and Mount an LOFS File System in System Administration Guide: Devices and File Systems.

  5. Verify the addition of the LOFS file system.
    phys-schost# clzonecluster show -v zoneclustername

Example 7-8 Adding a Cluster File System to a Zone Cluster

The following example shows how to add a cluster file system with mount point /global/apache to a zone cluster. The file system is available to a zone cluster using the loopback mount mechanism at the mount point /zone/apache.

phys-schost-1# vi /etc/vfstab
#device     device    mount   FS      fsck    mount     mount
#to mount   to fsck   point   type    pass    at boot   options
#          
/dev/md/oracle/dsk/d1 /dev/md/oracle/rdsk/d1 /global/apache ufs 2 yes global,logging

phys-schost-1# clzonecluster configure sczone
clzc:sczone> add fs
clzc:sczone:fs> set dir=/zone/apache
clzc:sczone:fs> set special=/global/apache
clzc:sczone:fs> set type=lofs
clzc:sczone:fs> end
clzc:sczone> verify
clzc:sczone> commit
clzc:sczone> exit

phys-schost-1# clzonecluster show -v sczone
…
  Resource Name:                            fs
    dir:                                       /zone/apache
    special:                                   /global/apache
    raw:                                       
    type:                                      lofs
    options:                                   []
    cluster-control:                           true
…

Next Steps

Configure the cluster file system to be available in the zone cluster by using an HAStoragePlus resource. The HAStoragePlus resource manages the file system by mounting it in the global cluster and then performing a loopback mount on the zone-cluster nodes that currently host the applications that are configured to use the file system. For more information, see Configuring an HAStoragePlus Resource for Cluster File Systems in Oracle Solaris Cluster Data Services Planning and Administration Guide.
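As a sketch with hypothetical resource-group and resource names, an HAStoragePlus resource that manages the global mount point from Example 7-8 could be created as follows; consult the referenced guide for the supported resource-group configuration:

phys-schost-1# clresource create -g apache-rg -t SUNW.HAStoragePlus \
-p FilesystemMountPoints=/global/apache hasp-rs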

Adding Storage Devices to a Zone Cluster

This section describes how to add global storage devices for direct use by a zone cluster. Global devices are devices that can be accessed by more than one node in the cluster, either by one node at a time or by multiple nodes concurrently.

After a device is added to a zone cluster, the device is visible only from within that zone cluster.

This section contains the following procedures:

• How to Add an Individual Metadevice to a Zone Cluster (Solaris Volume Manager)

• How to Add a Disk Set to a Zone Cluster (Solaris Volume Manager)

• How to Add a DID Device to a Zone Cluster

• How to Add a Raw-Disk Device to a Zone Cluster

How to Add an Individual Metadevice to a Zone Cluster (Solaris Volume Manager)

Perform this procedure to add an individual metadevice of a Solaris Volume Manager disk set to a zone cluster.

  1. Become superuser on a node of the global cluster that hosts the zone cluster.

    You perform all steps of this procedure from a node of the global cluster.

  2. Identify the disk set that contains the metadevice to add to the zone cluster and determine whether it is online.
    phys-schost# cldevicegroup status
  3. If the disk set that you are adding is not online, bring it online.
    phys-schost# cldevicegroup online diskset
  4. Determine the set number that corresponds to the disk set to add.
    phys-schost# ls -l /dev/md/diskset
    lrwxrwxrwx  1 root root  8 Jul 22 23:11 /dev/md/diskset -> shared/setnumber 
  5. Add the metadevice for use by the zone cluster.

    You must use a separate add device session for each set match= entry.


    Note - An asterisk (*) is used as a wildcard character in the path name.


    phys-schost# clzonecluster configure zoneclustername
    clzc:zoneclustername> add device
    clzc:zoneclustername:device> set match=/dev/md/diskset/*dsk/metadevice
    clzc:zoneclustername:device> end
    clzc:zoneclustername> add device
    clzc:zoneclustername:device> set match=/dev/md/shared/setnumber/*dsk/metadevice
    clzc:zoneclustername:device> end
    clzc:zoneclustername> verify
    clzc:zoneclustername> commit
    clzc:zoneclustername> exit
    match=/dev/md/diskset/*dsk/metadevice

    Specifies the full logical device path of the metadevice

    match=/dev/md/shared/N/*dsk/metadevice

    Specifies the full physical device path of the disk set number

  6. Reboot the zone cluster.

    The change becomes effective after the zone cluster reboots.

    phys-schost# clzonecluster reboot zoneclustername

Example 7-9 Adding a Metadevice to a Zone Cluster

The following example adds the metadevice d1 in the disk set oraset to the sczone zone cluster. The set number of the disk set is 3.

phys-schost-1# clzonecluster configure sczone
clzc:sczone> add device
clzc:sczone:device> set match=/dev/md/oraset/*dsk/d1
clzc:sczone:device> end
clzc:sczone> add device
clzc:sczone:device> set match=/dev/md/shared/3/*dsk/d1
clzc:sczone:device> end
clzc:sczone> verify
clzc:sczone> commit
clzc:sczone> exit

phys-schost-1# clzonecluster reboot sczone

How to Add a Disk Set to a Zone Cluster (Solaris Volume Manager)

Perform this procedure to add an entire Solaris Volume Manager disk set to a zone cluster.

  1. Become superuser on a node of the global cluster that hosts the zone cluster.

    You perform all steps of this procedure from a node of the global cluster.

  2. Identify the disk set to add to the zone cluster and determine whether it is online.
    phys-schost# cldevicegroup status
  3. If the disk set that you are adding is not online, bring it online.
    phys-schost# cldevicegroup online diskset
  4. Determine the set number that corresponds to the disk set to add.
    phys-schost# ls -l /dev/md/diskset
    lrwxrwxrwx  1 root root  8 Jul 22 23:11 /dev/md/diskset -> shared/setnumber 
  5. Add the disk set for use by the zone cluster.

    You must use a separate add device session for each set match= entry.


    Note - An asterisk (*) is used as a wildcard character in the path name.


    phys-schost# clzonecluster configure zoneclustername
    clzc:zoneclustername> add device
    clzc:zoneclustername:device> set match=/dev/md/diskset/*dsk/*
    clzc:zoneclustername:device> end
    clzc:zoneclustername> add device
    clzc:zoneclustername:device> set match=/dev/md/shared/setnumber/*dsk/*
    clzc:zoneclustername:device> end
    clzc:zoneclustername> verify
    clzc:zoneclustername> commit
    clzc:zoneclustername> exit
    match=/dev/md/diskset/*dsk/*

    Specifies the full logical device path of the disk set

    match=/dev/md/shared/N/*dsk/*

    Specifies the full physical device path of the disk set number

  6. Reboot the zone cluster.

    The change becomes effective after the zone cluster reboots.

    phys-schost# clzonecluster reboot zoneclustername

Example 7-10 Adding a Disk Set to a Zone Cluster

The following example adds the disk set oraset to the sczone zone cluster. The set number of the disk set is 3.

phys-schost-1# clzonecluster configure sczone
clzc:sczone> add device
clzc:sczone:device> set match=/dev/md/oraset/*dsk/*
clzc:sczone:device> end
clzc:sczone> add device
clzc:sczone:device> set match=/dev/md/shared/3/*dsk/*
clzc:sczone:device> end
clzc:sczone> verify
clzc:sczone> commit
clzc:sczone> exit

phys-schost-1# clzonecluster reboot sczone

How to Add a DID Device to a Zone Cluster

Perform this procedure to add a DID device to a zone cluster.

  1. Become superuser on a node of the global cluster that hosts the zone cluster.

    You perform all steps of this procedure from a node of the global cluster.

  2. Identify the DID device to add to the zone cluster.

    The device you add must be connected to all nodes of the zone cluster.

    phys-schost# cldevice list -v
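    The output lists each DID device with its full device path on each connected node and would look similar to the following illustrative excerpt, where the device and node names are examples only:

    DID Device          Full Device Path
    ----------          ----------------
    d10                 phys-schost-1:/dev/rdsk/c1t1d0
    d10                 phys-schost-2:/dev/rdsk/c1t1d0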
  3. Add the DID device for use by the zone cluster.

    Note - An asterisk (*) is used as a wildcard character in the path name.


    phys-schost# clzonecluster configure zoneclustername
    clzc:zoneclustername> add device
    clzc:zoneclustername:device> set match=/dev/did/*dsk/dNs*
    clzc:zoneclustername:device> end
    clzc:zoneclustername> verify
    clzc:zoneclustername> commit
    clzc:zoneclustername> exit
    match=/dev/did/*dsk/dNs*

    Specifies the full device path of the DID device

  4. Reboot the zone cluster.

    The change becomes effective after the zone cluster reboots.

    phys-schost# clzonecluster reboot zoneclustername

Example 7-11 Adding a DID Device to a Zone Cluster

The following example adds the DID device d10 to the sczone zone cluster.

phys-schost-1# clzonecluster configure sczone
clzc:sczone> add device
clzc:sczone:device> set match=/dev/did/*dsk/d10s*
clzc:sczone:device> end
clzc:sczone> verify
clzc:sczone> commit
clzc:sczone> exit

phys-schost-1# clzonecluster reboot sczone

How to Add a Raw-Disk Device to a Zone Cluster