Sun Cluster Software Installation Guide for Solaris OS

Chapter 6 Creating Cluster File Systems, Non-Global Zones, and Zone Clusters

This chapter describes the following topics:

  • Creating Cluster File Systems

  • Configuring a Non-Global Zone on a Global-Cluster Node

  • Configuring a Zone Cluster

Creating Cluster File Systems

This section provides procedures to create cluster file systems to support data services.

Procedure: How to Create Cluster File Systems

Perform this procedure for each cluster file system that you want to create. Unlike a local file system, a cluster file system is accessible from any node in the global cluster.


Note –

Alternatively, you can use a highly available local file system to support a data service. For information about choosing between creating a cluster file system or a highly available local file system to support a particular data service, see the manual for that data service. For general information about creating a highly available local file system, see Enabling Highly Available Local File Systems in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.

You cannot add a cluster file system to a zone cluster.


Before You Begin

Perform the following tasks:

  1. Become superuser on any node in the cluster.

    For Solaris, you must perform this procedure from the global zone if non-global zones are configured in the cluster.


    Tip –

    For faster file-system creation, become superuser on the current primary of the global device for which you create a file system.


  2. Create a file system.


    Caution –

    Any data on the disks is destroyed when you create a file system. Be sure that you specify the correct disk device name. If you specify the wrong device name, you might erase data that you did not intend to delete.


    • For a UFS file system, use the newfs(1M) command.


      phys-schost# newfs raw-disk-device
      

      The following table shows examples of names for the raw-disk-device argument. Note that naming conventions differ for each volume manager.

      Volume Manager              Sample Disk Device Name        Description
      Solaris Volume Manager      /dev/md/nfs/rdsk/d1            Raw disk device d1 within the nfs disk set
      Veritas Volume Manager      /dev/vx/rdsk/oradg/vol01       Raw disk device vol01 within the oradg disk group
      None                        /dev/global/rdsk/d1s3          Raw disk device d1s3

    • SPARC: For a Veritas File System (VxFS) file system, follow the procedures that are provided in your VxFS documentation.
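
      For example, a VxFS file system could be created on a VxVM volume with a command similar to the following. This is only a sketch; the oradg disk group and vol01 volume from the preceding table are placeholders, and your VxFS documentation remains the authoritative procedure.

      phys-schost# mkfs -F vxfs /dev/vx/rdsk/oradg/vol01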

  3. On each node in the cluster, create a mount-point directory for the cluster file system.

    A mount point is required on each node, even if the cluster file system is not accessed on that node.


    Tip –

    For ease of administration, create the mount point in the /global/device-group/ directory. This location enables you to easily distinguish cluster file systems, which are globally available, from local file systems.



    phys-schost# mkdir -p /global/device-group/mountpoint/
    
    device-group

    Name of the directory that corresponds to the name of the device group that contains the device.

    mountpoint

    Name of the directory on which to mount the cluster file system.

  4. On each node in the cluster, add an entry to the /etc/vfstab file for the mount point.

    See the vfstab(4) man page for details.


    Note –

    If non-global zones are configured in the cluster, ensure that you mount cluster file systems in the global zone on a path in the global zone's root directory.


    1. In each entry, specify the required mount options for the type of file system that you use.


      Note –

      Do not use the logging mount option for Solaris Volume Manager transactional volumes. Transactional volumes provide their own logging.

      In addition, Solaris Volume Manager transactional-volume logging is removed from the Solaris 10 OS. Solaris UFS logging provides the same capabilities but superior performance, as well as lower system administration requirements and overhead.


    2. To automatically mount the cluster file system, set the mount at boot field to yes.

    3. Ensure that, for each cluster file system, the information in its /etc/vfstab entry is identical on each node.

    4. Ensure that the entries in each node's /etc/vfstab file list devices in the same order.

    5. Check the boot order dependencies of the file systems.

      For example, consider the scenario where phys-schost-1 mounts disk device d0 on /global/oracle/, and phys-schost-2 mounts disk device d1 on /global/oracle/logs/. With this configuration, phys-schost-2 can boot and mount /global/oracle/logs/ only after phys-schost-1 boots and mounts /global/oracle/.

  5. On any node in the cluster, run the configuration check utility.


    phys-schost# sccheck
    

    The configuration check utility verifies that the mount points exist. The utility also verifies that /etc/vfstab file entries are correct on all nodes of the cluster. If no errors occur, nothing is returned.

    For more information, see the sccheck(1M) man page.

  6. Mount the cluster file system.


    phys-schost# mount /global/device-group/mountpoint/
    
    • For UFS, mount the cluster file system from any node in the cluster.

    • SPARC: For VxFS, mount the cluster file system from the current master of device-group to ensure that the file system mounts successfully.

      In addition, unmount a VxFS file system from the current master of device-group to ensure that the file system unmounts successfully.


      Note –

      To manage a VxFS cluster file system in a Sun Cluster environment, run administrative commands only from the primary node on which the VxFS cluster file system is mounted.


  7. On each node of the cluster, verify that the cluster file system is mounted.

    You can use either the df command or mount command to list mounted file systems. For more information, see the df(1M) man page or mount(1M) man page.

    For the Solaris 10 OS, cluster file systems are accessible from both the global zone and the non-global zone.


Example 6–1 Creating a Cluster File System

The following example creates a UFS cluster file system on the Solaris Volume Manager volume /dev/md/oracle/rdsk/d1. An entry for the cluster file system is added to the vfstab file on each node. The sccheck command is then run from one node. After configuration check processing completes successfully, the cluster file system is mounted from one node and verified on all nodes.


phys-schost# newfs /dev/md/oracle/rdsk/d1
…
phys-schost# mkdir -p /global/oracle/d1
phys-schost# vi /etc/vfstab
#device           device        mount   FS      fsck    mount   mount
#to mount         to fsck       point   type    pass    at boot options
#                     
/dev/md/oracle/dsk/d1 /dev/md/oracle/rdsk/d1 /global/oracle/d1 ufs 2 yes global,logging
…
phys-schost# sccheck
phys-schost# mount /global/oracle/d1
phys-schost# mount
…
/global/oracle/d1 on /dev/md/oracle/dsk/d1 read/write/setuid/global/logging/largefiles
on Sun Oct 3 08:56:16 2005

Next Steps

Determine from the following list the next task to perform that applies to your cluster configuration. If you need to perform more than one task from this list, go to the first of those tasks in this list.

Configuring a Non-Global Zone on a Global-Cluster Node

This section provides procedures to create a non-global zone on a global-cluster node.

Procedure: How to Create a Non-Global Zone on a Global-Cluster Node

Perform this procedure for each non-global zone that you create in the global cluster.


Note –

For complete information about installing a zone, refer to System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.


You can configure a Solaris 10 non-global zone, simply referred to as a zone, on a cluster node while the node is booted in either cluster mode or noncluster mode.

Before You Begin

Perform the following tasks:

For additional information, see Zone Components in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.

  1. Become superuser on the global-cluster node where you are creating the non-voting node.

    You must be working in the global zone.

  2. For the Solaris 10 OS, verify on each node that multiuser services for the Service Management Facility (SMF) are online.

    If services are not yet online for a node, wait until the state becomes online before you proceed to the next step.


    phys-schost# svcs multi-user-server node
    STATE          STIME    FMRI
    online         17:52:55 svc:/milestone/multi-user-server:default
  3. Configure, install, and boot the new zone.


    Note –

    You must set the autoboot property to true to support resource-group functionality in the non-voting node on the global cluster.


    Follow procedures in the Solaris documentation:

    1. Perform procedures in Chapter 18, Planning and Configuring Non-Global Zones (Tasks), in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.

    2. Perform procedures in Installing and Booting Zones in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.

    3. Perform procedures in How to Boot a Zone in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.

  4. Verify that the zone is in the ready state.


    phys-schost# zoneadm list -v
    ID  NAME     STATUS       PATH
     0  global   running      /
     1  my-zone  ready        /zone-path
    
  5. For a whole-root zone with the ip-type property set to exclusive: If the zone might host a logical-hostname resource, configure a file system resource that mounts the method directory from the global zone.


    phys-schost# zonecfg -z sczone
    zonecfg:sczone> add fs
    zonecfg:sczone:fs> set dir=/usr/cluster/lib/rgm
    zonecfg:sczone:fs> set special=/usr/cluster/lib/rgm
    zonecfg:sczone:fs> set type=lofs
    zonecfg:sczone:fs> end
    zonecfg:sczone> exit
    
  6. (Optional) For a shared-IP zone, assign a private IP address and a private hostname to the zone.

    The following command chooses and assigns an available IP address from the cluster's private IP-address range. The command also assigns the specified private hostname, or host alias, to the zone and maps it to the assigned private IP address.


    phys-schost# clnode set -p zprivatehostname=hostalias node:zone
    
    -p

    Specifies a property.

    zprivatehostname=hostalias

    Specifies the zone private hostname, or host alias.

    node

    The name of the node.

    zone

    The name of the global-cluster non-voting node.

  7. Perform the initial internal zone configuration.

    Follow the procedures in Performing the Initial Internal Zone Configuration in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones. Choose either of the following methods:

    • Log in to the zone.

    • Use an /etc/sysidcfg file.
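
      If you use an /etc/sysidcfg file, a minimal sketch of such a file might look like the following. All values are placeholders; see the sysidcfg(4) man page for the keywords that apply to your configuration.

      system_locale=C
      terminal=xterm
      network_interface=NONE {hostname=zone-hostname}
      name_service=NONE
      security_policy=NONE
      nfs4_domain=dynamic
      timezone=US/Pacific
      root_password=encrypted_password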

  8. In the non-voting node, modify the nsswitch.conf file.

    These changes enable the zone to resolve searches for cluster-specific hostnames and IP addresses.

    1. Log in to the zone.


      phys-schost# zlogin zonename
      
    2. Open the /etc/nsswitch.conf file for editing.


      sczone# vi /etc/nsswitch.conf
      
    3. Add the cluster switch to the beginning of the lookups for the hosts and netmasks entries, followed by the files switch.

      The modified entries should appear similar to the following:


      …
      hosts:      cluster files nis [NOTFOUND=return]
      …
      netmasks:   cluster files nis [NOTFOUND=return]
      …
    4. For all other entries, ensure that the files switch is the first switch that is listed in the entry.

    5. Exit the zone.

  9. If you created an exclusive-IP zone, configure IPMP groups in each /etc/hostname.interface file that is on the zone.

    You must configure an IPMP group for each public-network adapter that is used for data-service traffic in the zone. This information is not inherited from the global zone. See Public Networks for more information about configuring IPMP groups in a cluster.
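
    As a minimal sketch, a single-adapter IPMP group without a test address could be defined in the zone's /etc/hostname.bge0 file with an entry similar to the following. The bge0 adapter, the zc-host-1 hostname, and the sc_ipmp0 group name are placeholders for your own configuration.

    zc-host-1 netmask + broadcast + group sc_ipmp0 up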

  10. Set up name-to-address mappings for all logical hostname resources that are used by the zone.

    1. Add name-to-address mappings to the /etc/inet/hosts file on the zone.

      This information is not inherited from the global zone.
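
      For example, an entry for a hypothetical logical hostname might look like the following; the address and hostname are placeholders.

      192.168.10.50   lh-oracle-1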

    2. If you use a name server, add the name-to-address mappings.

Next Steps

To install an application in a non-global zone, use the same procedure as for a stand-alone system. See your application's installation documentation for procedures to install the software in a non-global zone. Also see Adding and Removing Packages and Patches on a Solaris System With Zones Installed (Task Map) in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.

To install and configure a data service in a non-global zone, see the Sun Cluster manual for the individual data service.

Configuring a Zone Cluster

This section provides procedures to configure a cluster of non-global zones.

Overview of the clzonecluster Utility

The clzonecluster utility creates, modifies, and removes a zone cluster. The clzonecluster utility actively manages a zone cluster. For example, the clzonecluster utility both boots and halts a zone cluster. Progress messages for the clzonecluster utility are output to the console, but are not saved in a log file.
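
For example, after a zone cluster is configured and installed, the following commands halt, boot, and display the status of all nodes of the zone cluster. The zoneclustername operand is a placeholder; these subcommands appear in context in the procedures later in this chapter.

phys-schost# clzonecluster halt zoneclustername
phys-schost# clzonecluster boot zoneclustername
phys-schost# clzonecluster status zoneclustername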

The utility operates in the following levels of scope, similar to the zonecfg utility:

Establishing the Zone Cluster

This section describes how to configure a cluster of non-global zones.

Procedure: How to Create a Zone Cluster

Perform this procedure to create a cluster of non-global zones.

Before You Begin
  1. Become superuser on an active member node of a global cluster.

    You perform all steps of this procedure from a node of the global cluster.

  2. Ensure that the node of the global cluster is in cluster mode.

    If any node is in noncluster mode, changes that you make are propagated when the node returns to cluster mode. Therefore, you can create a zone cluster even if some global-cluster nodes are in noncluster mode. When those nodes return to cluster mode, the system performs zone-cluster creation tasks on those nodes.


    phys-schost# clnode status
    === Cluster Nodes ===
    
    --- Node Status ---
    
    Node Name                                       Status
    ---------                                       ------
    phys-schost-2                                   Online
    phys-schost-1                                   Online
  3. Create the zone cluster.


    Note –

    By default, sparse root zones are created. To create whole root zones, add the -b option to the create command.



    phys-schost-1# clzonecluster configure zoneclustername
    clzc:zoneclustername> create
    
    Set the zone path for the entire zone cluster
    clzc:zoneclustername> set zonepath=/zones/zoneclustername
    
    Add the first node and specify node-specific settings
    clzc:zoneclustername> add node
    clzc:zoneclustername:node> set physical-host=baseclusternode1
    clzc:zoneclustername:node> set hostname=hostname1
    clzc:zoneclustername:node> add net
    clzc:zoneclustername:node:net> set address=public_netaddr
    clzc:zoneclustername:node:net> set physical=adapter
    clzc:zoneclustername:node:net> end
    clzc:zoneclustername:node> end
    
    Set the root password globally for all nodes in the zone cluster
    clzc:zoneclustername> add sysid
    clzc:zoneclustername:sysid> set root_password=encrypted_password
    clzc:zoneclustername:sysid> end
    
    Save the configuration and exit the utility
    clzc:zoneclustername> commit
    clzc:zoneclustername> exit
    
  4. (Optional) Add one or more additional nodes to the zone cluster.


    phys-schost-1# clzonecluster configure zoneclustername
    clzc:zoneclustername> add node
    clzc:zoneclustername:node> set physical-host=baseclusternode2
    clzc:zoneclustername:node> set hostname=hostname2
    clzc:zoneclustername:node> add net
    clzc:zoneclustername:node:net> set address=public_netaddr
    clzc:zoneclustername:node:net> set physical=adapter
    clzc:zoneclustername:node:net> end
    clzc:zoneclustername:node> end
    clzc:zoneclustername> exit
    
  5. Verify the zone cluster configuration.

    The verify subcommand checks for the availability of the specified resources. If the clzonecluster verify command succeeds, there is no output.


    phys-schost-1# clzonecluster verify zoneclustername
    phys-schost-1# clzonecluster status zoneclustername
    === Zone Clusters ===
    
    --- Zone Cluster Status ---
    
    Name      Node Name   Zone HostName   Status    Zone Status
    ----      ---------   -------------   ------    -----------
    zone      basenode1    zone-1        Offline   Running
              basenode2    zone-2        Offline   Running
  6. Install and boot the cluster.


    phys-schost-1# clzonecluster install zoneclustername
    Waiting for zone install commands to complete on all the nodes 
    of the zone cluster "zoneclustername"...
    
    Installation of the zone cluster might take several minutes
    phys-schost-1# clzonecluster boot zoneclustername
    Waiting for zone boot commands to complete on all the nodes of 
    the zone cluster "zoneclustername"...

Example 6–2 Configuration File to Create a Zone Cluster

The following example shows the contents of a command file that can be used with the clzonecluster utility to create a zone cluster. The file contains the series of clzonecluster commands that you would input manually.

In the following configuration, the zone cluster sczone is created on the global-cluster node phys-schost-1. The zone cluster uses /zones/sczone as the zone path. The first node of the zone cluster is assigned the hostname zc-host-1 and uses the public-network address 172.16.0.1 and the bge0 adapter. The second node of the zone cluster is created on the global-cluster node phys-schost-2. This second zone-cluster node is assigned the hostname zc-host-2 and uses the public-network address 172.16.0.2 and the bge1 adapter.

create
set zonepath=/zones/sczone
add node
set physical-host=phys-schost-1
set hostname=zc-host-1
add net
set address=172.16.0.1
set physical=bge0
end
end
add sysid
set root_password=encrypted_password
end
add node
set physical-host=phys-schost-2
set hostname=zc-host-2
add net
set address=172.16.0.2
set physical=bge1
end
end
commit
exit


Example 6–3 Creating a Zone Cluster by Using a Configuration File

The following example shows the commands to create the new zone cluster sczone on the global-cluster node phys-schost-1 by using the configuration file sczone-config. The hostnames of the zone-cluster nodes are zc-host-1 and zc-host-2.


phys-schost-1# clzonecluster configure -f sczone-config sczone
phys-schost-1# clzonecluster verify sczone
phys-schost-1# clzonecluster install sczone
Waiting for zone install commands to complete on all the nodes of the 
zone cluster "sczone"...
phys-schost-1# clzonecluster boot sczone
Waiting for zone boot commands to complete on all the nodes of the 
zone cluster "sczone"...
phys-schost-1# clzonecluster status sczone
=== Zone Clusters ===

--- Zone Cluster Status ---

Name      Node Name        Zone HostName    Status    Zone Status
----      ---------        -------------    ------    -----------
sczone    phys-schost-1    zc-host-1        Offline   Running
          phys-schost-2    zc-host-2        Offline   Running

Next Steps

To add the use of a file system to the zone cluster, go to Adding File Systems to a Zone Cluster.

To add the use of global storage devices to the zone cluster, go to Adding Storage Devices to a Zone Cluster.

Adding File Systems to a Zone Cluster

This section provides procedures to add file systems for use by the zone cluster.

After a file system is added to a zone cluster and brought online, the file system is visible only from within that zone cluster.


Note –

You cannot use the clzonecluster command to add a local file system, which is mounted on a single global-cluster node, to a zone cluster. Instead, use the zonecfg command as you normally would in a stand-alone system. The local file system would not be under cluster control.

You cannot add a cluster file system to a zone cluster.
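
As a minimal sketch of the zonecfg approach that is described in the preceding note, a local file system could be added to the zone on a single node with commands similar to the following. The zone name, mount point, and device names are placeholders.

phys-schost# zonecfg -z zonename
zonecfg:zonename> add fs
zonecfg:zonename:fs> set dir=/local/data
zonecfg:zonename:fs> set special=/dev/md/localset/dsk/d10
zonecfg:zonename:fs> set raw=/dev/md/localset/rdsk/d10
zonecfg:zonename:fs> set type=ufs
zonecfg:zonename:fs> end
zonecfg:zonename> commit
zonecfg:zonename> exit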


The following procedures are in this section:

  • How to Add a Highly Available Local File System to a Zone Cluster

  • How to Add a ZFS Storage Pool to a Zone Cluster

  • How to Add a QFS Shared File System to a Zone Cluster

Procedure: How to Add a Highly Available Local File System to a Zone Cluster

Perform this procedure to add a highly available local file system on the global cluster for use by the zone cluster.


Note –

To add a ZFS pool to a zone cluster, instead perform procedures in How to Add a ZFS Storage Pool to a Zone Cluster.


  1. On the global cluster, configure the highly available local file system that you want to use in the zone cluster.

    See Enabling Highly Available Local File Systems in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.

  2. Become superuser on a node of the global cluster that hosts the zone cluster.

    You perform all steps of the procedure from a node of the global cluster.

  3. Display the /etc/vfstab entry for the file system that you want to mount on the zone cluster.


    phys-schost# vi /etc/vfstab
    
  4. Add the file system to the zone-cluster configuration.


    phys-schost# clzonecluster configure zoneclustername
    clzc:zoneclustername> add fs
    clzc:zoneclustername:fs> set dir=mountpoint
    clzc:zoneclustername:fs> set special=disk-device-name
    clzc:zoneclustername:fs> set raw=raw-disk-device-name
    clzc:zoneclustername:fs> set type=FS-type
    clzc:zoneclustername:fs> end
    clzc:zoneclustername> exit
    
    dir=mountpoint

    Specifies the file system mount point

    special=disk-device-name

    Specifies the name of the disk device

    raw=raw-disk-device-name

    Specifies the name of the raw disk device

    type=FS-type

    Specifies the type of file system

  5. Verify the addition of the file system.


    phys-schost# clzonecluster show -v zoneclustername
    

Example 6–4 Adding a Highly Available Local File System to a Zone Cluster

This example adds the highly available local file system /global/oracle/d1 for use by the sczone zone cluster.


phys-schost-1# vi /etc/vfstab
#device           device        mount   FS      fsck    mount   mount
#to mount         to fsck       point   type    pass    at boot options
#                     
/dev/md/oracle/dsk/d1 /dev/md/oracle/rdsk/d1 /global/oracle/d1 ufs 5 no logging

phys-schost-1# clzonecluster configure sczone
clzc:sczone> add fs
clzc:sczone:fs> set dir=/global/oracle/d1
clzc:sczone:fs> set special=/dev/md/oracle/dsk/d1
clzc:sczone:fs> set raw=/dev/md/oracle/rdsk/d1
clzc:sczone:fs> set type=ufs
clzc:sczone:fs> end
clzc:sczone> exit

phys-schost-1# clzonecluster show -v sczone
…
  Resource Name:                            fs
    dir:                                       /global/oracle/d1
    special:                                   /dev/md/oracle/dsk/d1
    raw:                                       /dev/md/oracle/rdsk/d1
    type:                                      ufs
    options:                                   []
…

Procedure: How to Add a ZFS Storage Pool to a Zone Cluster

Perform this procedure to add a ZFS storage pool for use by a zone cluster.

  1. Configure the ZFS storage pool on the global cluster.


    Note –

    Ensure that the pool is created on shared disks that are connected to all nodes of the zone cluster.


    See the Solaris ZFS Administration Guide for procedures to create a ZFS pool.
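
    As a minimal sketch, a mirrored pool could be created on two shared disks with a command similar to the following. The zpool1 pool name and the disk names are placeholders; use disks that are accessible from every node of the zone cluster.

    phys-schost# zpool create zpool1 mirror c1t3d0 c2t3d0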

  2. Become superuser on a node of the global cluster that hosts the zone cluster.

  3. Add the pool to the zone-cluster configuration.


    phys-schost# clzonecluster configure zoneclustername
    clzc:zoneclustername> add dataset
    clzc:zoneclustername:dataset> set name=ZFSpoolname
    clzc:zoneclustername:dataset> end
    clzc:zoneclustername> exit
    
  4. Verify the addition of the file system.


    phys-schost# clzonecluster show -v zoneclustername
    

Example 6–5 Adding a ZFS Storage Pool to a Zone Cluster

The following example shows the ZFS storage pool zpool1 added to the zone cluster sczone.


phys-schost-1# clzonecluster configure sczone
clzc:sczone> add dataset
clzc:sczone:dataset> set name=zpool1
clzc:sczone:dataset> end
clzc:sczone> exit

phys-schost-1# clzonecluster show -v sczone
…
  Resource Name:                                dataset
    name:                                          zpool1
…

Procedure: How to Add a QFS Shared File System to a Zone Cluster

Perform this procedure to add a Sun StorageTek QFS shared file system for use by a zone cluster.


Note –

At this time, QFS shared file systems are only supported for use in clusters that are configured with Oracle Real Application Clusters (RAC). On clusters that are not configured with Oracle RAC, you can use a single-machine QFS file system that is configured as a highly available local file system.


  1. On the global cluster, configure the QFS shared file system that you want to use in the zone cluster.

    Follow procedures in Tasks for Configuring the Sun StorEdge QFS Shared File System for Oracle Files in Sun Cluster Data Service for Oracle RAC Guide for Solaris OS.

  2. Become superuser on a voting node of the global cluster that hosts the zone cluster.

    You perform all remaining steps of this procedure from a voting node of the global cluster.

  3. Display the /etc/vfstab entry for the file system that you want to mount on the zone cluster.

    You will use information from the entry to specify the file system to the zone-cluster configuration.


    phys-schost# vi /etc/vfstab
    
  4. Add the file system to the zone cluster configuration.


    phys-schost# clzonecluster configure zoneclustername
    clzc:zoneclustername> add fs
    clzc:zoneclustername:fs> set dir=mountpoint
    clzc:zoneclustername:fs> set special=QFSfilesystemname
    clzc:zoneclustername:fs> set type=samfs
    clzc:zoneclustername:fs> end
    clzc:zoneclustername> exit
    
  5. Verify the addition of the file system.


    phys-schost# clzonecluster show -v zoneclustername
    

Example 6–6 Adding a QFS Shared File System to a Zone Cluster

The following example shows the QFS shared file system Data-cz1 added to the zone cluster sczone. From the global cluster, the mount point of the file system is /zones/sczone/root/db_qfs/Data1, where /zones/sczone/root/ is the zone's root path. From the zone-cluster node, the mount point of the file system is db_qfs/Data1.


phys-schost-1# vi /etc/vfstab
#device           device        mount   FS      fsck    mount     mount
#to mount         to fsck       point   type    pass    at boot   options
#                     
Data-cz1          -            /zones/sczone/root/db_qfs/Data1 samfs - no shared,notrace

phys-schost-1# clzonecluster configure sczone
clzc:sczone> add fs
clzc:sczone:fs> set dir=/db_qfs/Data1
clzc:sczone:fs> set special=Data-cz1
clzc:sczone:fs> set type=samfs
clzc:sczone:fs> end
clzc:sczone> exit

phys-schost-1# clzonecluster show -v sczone
…
  Resource Name:                            fs
    dir:                                       /db_qfs/Data1
    special:                                   Data-cz1
    raw:                                       
    type:                                      samfs
    options:                                   []
…

Adding Storage Devices to a Zone Cluster

This section describes how to add global storage devices for direct use by a zone cluster. Global devices are devices that can be accessed by more than one node in the cluster, either one node at a time or multiple nodes concurrently.

After a device is added to a zone cluster, the device is visible only from within that zone cluster.

This section contains the following procedures:

  • How to Add an Individual Metadevice to a Zone Cluster (Solaris Volume Manager)

  • How to Add a Disk Set to a Zone Cluster (Solaris Volume Manager)

  • How to Add a DID Device to a Zone Cluster

  • How to Add a Raw-Disk Device to a Zone Cluster

Procedure: How to Add an Individual Metadevice to a Zone Cluster (Solaris Volume Manager)

Perform this procedure to add an individual metadevice of a Solaris Volume Manager disk set to a zone cluster.

  1. Become superuser on a node of the global cluster that hosts the zone cluster.

    You perform all steps of this procedure from a node of the global cluster.

  2. Identify the disk set that contains the metadevice to add to the zone cluster and determine whether it is online.


    phys-schost# cldevicegroup status
    
  3. If the disk set that you are adding is not online, bring it online.


    phys-schost# cldevicegroup online diskset
    
  4. Determine the set number that corresponds to the disk set to add.


    phys-schost# ls -l /dev/md/diskset
    lrwxrwxrwx  1 root root  8 Jul 22 23:11 /dev/md/diskset -> shared/setnumber 
    
  5. Add the metadevice for use by the zone cluster.

    You must use a separate add device session for each set match= entry.


    Note –

    An asterisk (*) is used as a wildcard character in the path name.



    phys-schost# clzonecluster configure zoneclustername
    clzc:zoneclustername> add device
    clzc:zoneclustername:device> set match=/dev/md/diskset/*dsk/metadevice
    clzc:zoneclustername:device> end
    clzc:zoneclustername> add device
    clzc:zoneclustername:device> set match=/dev/md/shared/setnumber/*dsk/metadevice
    clzc:zoneclustername:device> end
    clzc:zoneclustername:> exit
    
    match=/dev/md/diskset/*dsk/metadevice

    Specifies the full logical device path of the metadevice

    match=/dev/md/shared/N/*dsk/metadevice

    Specifies the full physical device path of the disk set number

  6. Reboot the zone cluster.

    The change becomes effective after the zone cluster reboots.


    phys-schost# clzonecluster reboot zoneclustername
    

Example 6–7 Adding a Metadevice to a Zone Cluster

The following example adds the metadevice d1 in the disk set oraset to the sczone zone cluster. The set number of the disk set is 3.


phys-schost-1# clzonecluster configure sczone
clzc:sczone> add device
clzc:sczone:device> set match=/dev/md/oraset/*dsk/d1
clzc:sczone:device> end
clzc:sczone> add device
clzc:sczone:device> set match=/dev/md/shared/3/*dsk/d1
clzc:sczone:device> end
clzc:sczone:> exit

phys-schost-1# clzonecluster reboot sczone

Procedure: How to Add a Disk Set to a Zone Cluster (Solaris Volume Manager)

Perform this procedure to add an entire Solaris Volume Manager disk set to a zone cluster.

  1. Become superuser on a node of the global cluster that hosts the zone cluster.

    You perform all steps of this procedure from a node of the global cluster.

  2. Identify the disk set to add to the zone cluster and determine whether it is online.


    phys-schost# cldevicegroup status
    
  3. If the disk set that you are adding is not online, bring it online.


    phys-schost# cldevicegroup online diskset
    
  4. Determine the set number that corresponds to the disk set to add.


    phys-schost# ls -l /dev/md/diskset
    lrwxrwxrwx  1 root root  8 Jul 22 23:11 /dev/md/diskset -> shared/setnumber 
    
  5. Add the disk set for use by the zone cluster.

    You must use a separate add device session for each set match= entry.


    Note –

    An asterisk (*) is used as a wildcard character in the path name.



    phys-schost# clzonecluster configure zoneclustername
    clzc:zoneclustername> add device
    clzc:zoneclustername:device> set match=/dev/md/diskset/*dsk/*
    clzc:zoneclustername:device> end
    clzc:zoneclustername> add device
    clzc:zoneclustername:device> set match=/dev/md/shared/setnumber/*dsk/*
    clzc:zoneclustername:device> end
    clzc:zoneclustername:> exit
    
    match=/dev/md/diskset/*dsk/*

    Specifies the full logical device path of the disk set

    match=/dev/md/shared/N/*dsk/*

    Specifies the full physical device path of the disk set number

  6. Reboot the zone cluster.

    The change becomes effective after the zone cluster reboots.


    phys-schost# clzonecluster reboot zoneclustername
    

Example 6–8 Adding a Disk Set to a Zone Cluster

The following example adds the disk set oraset to the sczone zone cluster. The set number of the disk set is 3.


phys-schost-1# clzonecluster configure sczone
clzc:sczone> add device
clzc:sczone:device> set match=/dev/md/oraset/*dsk/*
clzc:sczone:device> end
clzc:sczone> add device
clzc:sczone:device> set match=/dev/md/shared/3/*dsk/*
clzc:sczone:device> end
clzc:sczone:> exit

phys-schost-1# clzonecluster reboot sczone

Procedure: How to Add a DID Device to a Zone Cluster

Perform this procedure to add a DID device to a zone cluster.

  1. Become superuser on a node of the global cluster that hosts the zone cluster.

    You perform all steps of this procedure from a node of the global cluster.

  2. Identify the DID device to add to the zone cluster.

    The device you add must be connected to all nodes of the zone cluster.


    phys-schost# cldevice list -v
    
  3. Add the DID device for use by the zone cluster.


    Note –

    An asterisk (*) is used as a wildcard character in the path name.



    phys-schost# clzonecluster configure zoneclustername
    clzc:zoneclustername> add device
    clzc:zoneclustername:device> set match=/dev/did/*dsk/dNs*
    clzc:zoneclustername:device> end
    clzc:zoneclustername:> exit
    
    match=/dev/did/*dsk/dNs*

    Specifies the full device path of the DID device

  4. Reboot the zone cluster.

    The change becomes effective after the zone cluster reboots.


    phys-schost# clzonecluster reboot zoneclustername
    

Example 6–9 Adding a DID Device to a Zone Cluster

The following example adds the DID device d10 to the sczone zone cluster.


phys-schost-1# clzonecluster configure sczone
clzc:sczone> add device
clzc:sczone:device> set match=/dev/did/*dsk/d10s*
clzc:sczone:device> end
clzc:sczone:> exit

phys-schost-1# clzonecluster reboot sczone

Procedure: How to Add a Raw-Disk Device to a Zone Cluster

  1. Use the zonecfg command to export raw-disk devices (cNtXdYsZ) to a zone-cluster node, as you normally would for other brands of non-global zones.

    Such devices would not be under the control of the clzonecluster command, but would be treated as local devices of the node. See How to Import Raw and Block Devices by Using zonecfg in System Administration Guide: Solaris Containers-Resource Management and Solaris Zonesfor more information about exporting raw-disk devices to a non-global zone.