
Installing and Configuring an Oracle® Solaris Cluster 4.4 Environment

Updated: November 2019
 
 

Adding File Systems to a Zone Cluster

After a file system is added to a zone cluster and brought online, the file system is authorized for use from within that zone cluster. To mount the file system for use, configure the file system by using cluster resources such as SUNW.HAStoragePlus or SUNW.ScalMountPoint.


Note -  To add a file system whose use is limited to a single zone-cluster node, see instead Adding Local File Systems to a Specific Zone-Cluster Node.

This section provides the following procedures to add file systems for use by the zone cluster:

  • How to Add a Highly Available Local File System to a Zone Cluster (clsetup)

  • How to Add a Highly Available Local File System to a Zone Cluster (CLI)

  • How to Add a ZFS Storage Pool to a Zone Cluster (clsetup)

  • How to Add a ZFS Storage Pool to a Zone Cluster (CLI)

  • How to Add a Cluster File System to a Zone Cluster (clsetup)

  • How to Add a ZFS-based Cluster File System to a Zone Cluster (CLI)

  • How to Add a UFS Cluster File System to a Zone Cluster (CLI)

  • How to Add an Oracle HSM Shared File System to a Zone Cluster (CLI)

  • How to Add an Oracle ACFS File System to a Zone Cluster (CLI)

You can also use Oracle Solaris Cluster Manager to add a file system to a zone cluster. For the browser interface log-in instructions, see How to Access Oracle Solaris Cluster Manager in Administering an Oracle Solaris Cluster 4.4 Configuration.

How to Add a Highly Available Local File System to a Zone Cluster (clsetup)

Perform this procedure to configure a highly available local file system on the global cluster for use by a zone cluster. The file system is added to the zone cluster and is configured with an HAStoragePlus resource to make the local file system highly available.


Note -  Alternatively, you can use either the command line or Oracle Solaris Cluster Manager to perform this task.

To use the command line to perform this task, see How to Add a Highly Available Local File System to a Zone Cluster (CLI).

To use the Oracle Solaris Cluster Manager browser interface to perform this task, click Zone Clusters, click the zone cluster name to go to its page, click the Solaris Resources tab, then in the File Systems section click Add to start the file systems wizard. For Oracle Solaris Cluster Manager log-in instructions, see How to Access Oracle Solaris Cluster Manager in Administering an Oracle Solaris Cluster 4.4 Configuration.


Perform all steps of the procedure from a node of the global cluster.

  1. Assume the root role on a node of the global cluster that hosts the zone cluster.
  2. On the global cluster, create a file system that you want to use in the zone cluster.

    Ensure that the file system is created on shared disks.

  3. Start the clsetup utility.
    phys-schost# clsetup

    The Main Menu is displayed.


    Tip  -  To return to a previous screen, type the < key and press Return.
  4. Choose the Zone Cluster menu item.

    The Zone Cluster Tasks Menu is displayed.

  5. Choose the Add File System/Storage Device to a Zone Cluster menu item.

    The Select Zone Cluster menu is displayed.

  6. Choose the zone cluster where you want to add the file system.

    The Storage Type Selection menu is displayed.

  7. Choose the File System menu item.

    The File System Selection for the Zone Cluster menu is displayed.

  8. Choose the file system you want to add to the zone cluster.

    The file systems in the list are those that are configured on the shared disks and can be accessed by the nodes where the zone cluster is configured. You can also type e to manually specify all properties for a file system.

    The Mount Type Selection menu is displayed.

  9. Choose the Loopback mount type.

    The File System Properties for the Zone Cluster menu is displayed.

  10. Change any properties that you need to modify for the file system you are adding.

    Note -  For UFS file systems, enable logging.

    When finished, type d and press Return.

  11. Type c to save the configuration change.

    The results of your configuration change are displayed.

  12. When finished, exit the clsetup utility.
  13. Verify the addition of the file system.
    phys-schost# clzonecluster show -v zone-cluster-name

Next Steps

Configure the file system to be highly available by using an HAStoragePlus resource. The HAStoragePlus resource manages the mounting of the file system on the zone-cluster node that currently hosts the applications that are configured to use the file system. See Enabling Highly Available Local File Systems in Planning and Administering Data Services for Oracle Solaris Cluster 4.4.
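The commands below sketch these Next Steps, assuming a hypothetical zone cluster sczone, resource group app-rg, resource name hasp-rs, and the mount point /global/oracle/d1; adjust all names to your configuration.

```shell
# Create a failover resource group in the zone cluster (names are examples).
phys-schost# clresourcegroup create -Z sczone app-rg

# Register the HAStoragePlus resource type in the zone cluster, if not already registered.
phys-schost# clresourcetype register -Z sczone SUNW.HAStoragePlus

# Create an HAStoragePlus resource that mounts the file system added above.
phys-schost# clresource create -Z sczone -g app-rg -t SUNW.HAStoragePlus \
-p FileSystemMountPoints=/global/oracle/d1 hasp-rs

# Bring the resource group online in a managed state.
phys-schost# clresourcegroup online -Z sczone -eM app-rg
```

The -Z option runs each command from a global-cluster node against the named zone cluster, matching the convention used elsewhere in this document.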

How to Add a Highly Available Local File System to a Zone Cluster (CLI)

Perform this procedure to add a highly available local file system on the global cluster for use by the zone cluster.


Note -  Alternatively, you can use the clsetup utility to perform this task. See How to Add a Highly Available Local File System to a Zone Cluster (clsetup).

To add a ZFS pool to a zone cluster, instead perform procedures in How to Add a ZFS Storage Pool to a Zone Cluster (clsetup). Or, to configure a ZFS storage pool to be highly available in a zone cluster, see How to Set Up the HAStoragePlus Resource Type to Make a Local ZFS File System Highly Available in Planning and Administering Data Services for Oracle Solaris Cluster 4.4.


  1. Assume the root role on a node of the global cluster that hosts the zone cluster.

    You perform all steps of the procedure from a node of the global cluster.

  2. On the global cluster, create a file system that you want to use in the zone cluster.

    Ensure that the file system is created on shared disks.

  3. Add the file system to the zone-cluster configuration.
    phys-schost# clzonecluster configure zone-cluster-name
    clzc:zone-cluster-name> add fs
    clzc:zone-cluster-name:fs> set dir=mount-point
    clzc:zone-cluster-name:fs> set special=disk-device-name
    clzc:zone-cluster-name:fs> set raw=raw-disk-device-name
    clzc:zone-cluster-name:fs> set type=FS-type
    clzc:zone-cluster-name:fs> end
    clzc:zone-cluster-name> verify
    clzc:zone-cluster-name> commit
    clzc:zone-cluster-name> exit
    dir=mount-point

    Specifies the file system mount point

    special=disk-device-name

    Specifies the name of the disk device

    raw=raw-disk-device-name

    Specifies the name of the raw disk device

    type=FS-type

    Specifies the type of file system


    Note -  Enable logging for UFS file systems.
  4. Verify the addition of the file system.
    phys-schost# clzonecluster show -v zone-cluster-name
Example 14  Adding a Highly Available Local File System to a Zone Cluster (CLI)

This example adds the local file system /global/oracle/d1 for use by the sczone zone cluster.

phys-schost-1# clzonecluster configure sczone
clzc:sczone> add fs
clzc:sczone:fs> set dir=/global/oracle/d1
clzc:sczone:fs> set special=/dev/md/oracle/dsk/d1
clzc:sczone:fs> set raw=/dev/md/oracle/rdsk/d1
clzc:sczone:fs> set type=ufs
clzc:sczone:fs> add options [logging]
clzc:sczone:fs> end
clzc:sczone> verify
clzc:sczone> commit
clzc:sczone> exit

phys-schost-1# clzonecluster show -v sczone
…
Resource Name:                            fs
dir:                                       /global/oracle/d1
special:                                   /dev/md/oracle/dsk/d1
raw:                                       /dev/md/oracle/rdsk/d1
type:                                      ufs
options:                                   [logging]
cluster-control:                           [true]
…

Next Steps

Configure the file system to be highly available by using an HAStoragePlus resource. The HAStoragePlus resource manages the mounting of the file system on the zone-cluster node that currently hosts the applications that are configured to use the file system. See Enabling Highly Available Local File Systems in Planning and Administering Data Services for Oracle Solaris Cluster 4.4.

How to Add a ZFS Storage Pool to a Zone Cluster (clsetup)

Perform this procedure to add a ZFS storage pool to a zone cluster. The pool can be local to a single zone-cluster node or configured with HAStoragePlus to be highly available.

The clsetup utility discovers and displays all configured ZFS pools on the shared disks that can be accessed by the nodes where the selected zone cluster is configured. After you use the clsetup utility to add a ZFS storage pool in cluster scope to an existing zone cluster, you can use the clzonecluster command to modify the configuration or to add a ZFS storage pool in node scope.


Note -  Alternatively, you can use either the command line or Oracle Solaris Cluster Manager to perform this task.

To use the command line to perform this task, see How to Add a ZFS Storage Pool to a Zone Cluster (CLI).

To use the Oracle Solaris Cluster Manager browser interface to perform this task, click Zone Clusters, click the zone cluster name to go to its page, click the Solaris Resources tab, then in the Datasets for ZFS Storage Pools section, click Add. For Oracle Solaris Cluster Manager log-in instructions, see How to Access Oracle Solaris Cluster Manager in Administering an Oracle Solaris Cluster 4.4 Configuration.


Before You Begin

Ensure that the ZFS pool is created on shared disks that are connected to all nodes of the zone cluster. See Managing ZFS File Systems in Oracle Solaris 11.4 for procedures to create a ZFS pool.
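As an illustrative sketch only (the pool name and device names below are hypothetical), a pool on shared devices might be created as follows:

```shell
# Identify shared DID devices that all zone-cluster nodes can access.
phys-schost# cldevice list -v

# Create a mirrored ZFS pool on two of those shared devices.
phys-schost# zpool create myzpool5 mirror /dev/did/dsk/d3s0 /dev/did/dsk/d4s0
```

A mirrored layout is one common choice; any pool layout on disks that are visible to all zone-cluster nodes satisfies the prerequisite.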

  1. Assume the root role on a node of the global cluster that hosts the zone cluster.

    You perform all steps of this procedure from a node of the global cluster.

  2. Start the clsetup utility.
    phys-schost# clsetup

    The Main Menu is displayed.


    Tip  -  To return to a previous screen, type the < key and press Return.
  3. Choose the Zone Cluster menu item.

    The Zone Cluster Tasks Menu is displayed.

  4. Choose the Add File System/Storage Device to a Zone Cluster menu item.

    The Select Zone Cluster menu is displayed.

  5. Choose the zone cluster where you want to add the ZFS storage pool.

    The Storage Type Selection menu is displayed.

  6. Choose the ZFS menu item.

    The ZFS Pool Selection for the Zone Cluster menu is displayed.

  7. Choose the ZFS pool you want to add to the zone cluster.

    The ZFS pools in the list are those that are configured on the shared disks and can be accessed by the nodes where the zone cluster is configured. You can also type e to manually specify properties for a ZFS pool.

    The ZFS Pool Dataset Property for the Zone Cluster menu is displayed. The selected ZFS pool is assigned to the name property.

  8. Type d and press Return.

    The Review File Systems/Storage Devices for the Zone Cluster menu is displayed.

  9. Type c to save the configuration change.

    The results of your configuration change are displayed. For example:

     >>> Result of Configuration Change to the Zone Cluster(sczone) <<<
    
    Adding file systems or storage devices to sczone zone cluster...
    
    The zone cluster is being created with the following configuration
    
    /usr/cluster/bin/clzonecluster configure sczone
    add dataset
    set name=myzpool5
    end
    
    Configuration change to sczone zone cluster succeeded.
  10. When finished, exit the clsetup utility.
  11. Verify the addition of the file system.
    phys-schost# clzonecluster show -v zone-cluster-name
  12. To make the ZFS storage pool highly available, configure the pool with an HAStoragePlus resource.

    The HAStoragePlus resource manages the mounting of file systems in the pool on the zone-cluster node that currently hosts the applications that are configured to use the file system. See Enabling Highly Available Local File Systems in Planning and Administering Data Services for Oracle Solaris Cluster 4.4.

How to Add a ZFS Storage Pool to a Zone Cluster (CLI)

Perform this procedure to add a ZFS storage pool to a zone cluster.


Note -  Alternatively, you can use the clsetup utility to perform this task. See How to Add a ZFS Storage Pool to a Zone Cluster (clsetup).

To configure a ZFS storage pool to be highly available in a zone cluster, see How to Set Up the HAStoragePlus Resource Type to Make a Local ZFS File System Highly Available in Planning and Administering Data Services for Oracle Solaris Cluster 4.4.


  1. Assume the root role on a node of the global cluster that hosts the zone cluster.

    You perform all steps of this procedure from a node of the global cluster.

  2. Create the ZFS storage pool on the global cluster.

    Ensure that the pool is created on shared disks that are connected to all nodes of the zone cluster.

    See Managing ZFS File Systems in Oracle Solaris 11.4 for procedures to create a ZFS pool.

  3. Add the pool to the zone-cluster configuration.
    phys-schost# clzonecluster configure zone-cluster-name
    clzc:zone-cluster-name> add dataset
    clzc:zone-cluster-name:dataset> set name=ZFSpoolname
    clzc:zone-cluster-name:dataset> end
    clzc:zone-cluster-name> verify
    clzc:zone-cluster-name> commit
    clzc:zone-cluster-name> exit
  4. Verify the addition of the file system.
    phys-schost# clzonecluster show -v zone-cluster-name
Example 15  Adding a ZFS Storage Pool to a Zone Cluster (CLI)

The following example shows the ZFS storage pool zpool1 added to the zone cluster sczone.

phys-schost-1# clzonecluster configure sczone
clzc:sczone> add dataset
clzc:sczone:dataset> set name=zpool1
clzc:sczone:dataset> end
clzc:sczone> verify
clzc:sczone> commit
clzc:sczone> exit

phys-schost-1# clzonecluster show -v sczone
…
Resource Name:                                dataset
name:                                          zpool1
…

Next Steps

Configure the ZFS storage pool to be highly available by using an HAStoragePlus resource. The HAStoragePlus resource manages the mounting of file systems in the pool on the zone-cluster node that currently hosts the applications that are configured to use the file system. See Enabling Highly Available Local File Systems in Planning and Administering Data Services for Oracle Solaris Cluster 4.4.
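A sketch of these Next Steps for the pool zpool1 from the example above, using hypothetical resource group and resource names:

```shell
# Create a failover resource group in the zone cluster (names are examples;
# register SUNW.HAStoragePlus first with clresourcetype register if needed).
phys-schost# clresourcegroup create -Z sczone pool-rg

# Create an HAStoragePlus resource that imports the pool on the node where
# the resource group is online, and exports it on switchover.
phys-schost# clresource create -Z sczone -g pool-rg -t SUNW.HAStoragePlus \
-p Zpools=zpool1 pool-hasp-rs

# Bring the resource group online in a managed state.
phys-schost# clresourcegroup online -Z sczone -eM pool-rg
```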

How to Add a Cluster File System to a Zone Cluster (clsetup)

The clsetup utility discovers and displays the available file systems that are configured on the cluster nodes where the selected zone cluster is configured. When you use the clsetup utility to add a file system, the file system is added in cluster scope.

You can add the following types of cluster file systems to a zone cluster:

  • ZFS cluster file system - The ZFS storage pool is imported in the global zone, and ZFS automatically mounts the file system datasets configured in the pool.

  • UFS cluster file system - You specify the file system type in the /etc/vfstab file, using the global mount option. This file system can be located on the shared disk or on a Solaris Volume Manager device.

  • Oracle HSM shared file system - You specify the file system type in the /etc/vfstab file, using the shared mount option.

  • ACFS - Discovered automatically, based on the ORACLE_HOME path you provide.


Note -  Alternatively, you can use either the command line or Oracle Solaris Cluster Manager to perform this task.

To use the command line to perform this task, see one of the following procedures:

  • How to Add a ZFS-based Cluster File System to a Zone Cluster (CLI)

  • How to Add a UFS Cluster File System to a Zone Cluster (CLI)

  • How to Add an Oracle HSM Shared File System to a Zone Cluster (CLI)

  • How to Add an Oracle ACFS File System to a Zone Cluster (CLI)

To use the Oracle Solaris Cluster Manager browser interface to perform this task, click Zone Clusters, click the zone cluster name to go to its page, click the Solaris Resources tab, then in the File Systems section click Add to start the file systems wizard. For Oracle Solaris Cluster Manager log-in instructions, see How to Access Oracle Solaris Cluster Manager in Administering an Oracle Solaris Cluster 4.4 Configuration.


Before You Begin

Ensure that the cluster file system you want to add to the zone cluster is configured. See Planning Cluster File Systems and Creating a Cluster File System.

  1. Assume the root role on a node of the global cluster that hosts the zone cluster.

    You perform all steps of this procedure from a node of the global cluster.

  2. (For non-ZFS file systems only) On each node of the global cluster that hosts a zone-cluster node, add an entry to the /etc/vfstab file for the file system that you want to mount on the zone cluster.
    phys-schost# vi /etc/vfstab
    • For a UFS entry, include the global mount option, similar to the following example:
      /dev/md/datadg/dsk/d0 /dev/md/datadg/rdsk/d0 /global/fs ufs 2 no global,logging
    • For a shared QFS entry, include the shared mount option, similar to the following example:
      Data-cz1    -    /db_qfs/Data1 samfs - no shared,notrace
  3. On the global cluster, start the clsetup utility.
    phys-schost# clsetup

    The Main Menu is displayed.


    Tip  -  To return to a previous screen, type the < key and press Return.
  4. Choose the Zone Cluster menu item.

    The Zone Cluster Tasks Menu is displayed.

  5. Choose the Add File System/Storage Device to a Zone Cluster menu item.

    The Select Zone Cluster menu is displayed.

  6. Choose the zone cluster where you want to add the file system.

    The Storage Type Selection menu is displayed.

  7. Choose the File System menu item.

    The File System Selection for the Zone Cluster menu is displayed.

  8. Choose a file system from the list.

    You can also type e to manually specify all properties for a file system. If you are using an ACFS file system, you can select Discover ACFS and then specify the ORACLE_HOME directory.

    The Mount Type Selection menu is displayed.

  9. Choose the Loopback file system mount type for the zone cluster.

    If you chose ACFS in Step 8, the clsetup utility skips this step because ACFS supports only the direct mount type.

    For more information about creating loopback file systems, see the lofiadm(8) man page.

    The File System Properties for the Zone Cluster menu is displayed.

  10. Specify the mount point directory.

    Type the number for the dir property and press Return. Then type the LOFS mount point directory name in the New Value field and press Return.

    When finished, type d and press Return. The Review File Systems/Storage Devices for the Zone Cluster menu is displayed.

  11. Type c to save the configuration change.

    The results of your configuration change are displayed. For example:

      >>> Result of Configuration Change to the Zone Cluster(sczone) <<<
    
    Adding file systems or storage devices to sczone zone cluster...
    
    The zone cluster is being created with the following configuration
    
    /usr/cluster/bin/clzonecluster configure sczone
    add fs
    set dir=/zones/sczone/dsk/d0
    set special=/global/fs
    set type=lofs
    end
    
    Configuration change to sczone zone cluster succeeded.
  12. When finished, exit the clsetup utility.
  13. Verify the addition of the LOFS file system.
    phys-schost# clzonecluster show -v zone-cluster-name

Next Steps

(Optional) Configure the cluster file system to be managed by an HAStoragePlus resource. The HAStoragePlus resource manages the mounting of the file systems in the global cluster, and later performs a loopback mount on the zone-cluster nodes that currently host the applications that are configured to use the file system. For more information, see Configuring an HAStoragePlus Resource for Cluster File Systems in Planning and Administering Data Services for Oracle Solaris Cluster 4.4.
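As a sketch, using the mount point /zones/sczone/dsk/d0 from the sample clsetup output above and hypothetical resource names:

```shell
# In the zone cluster, create a resource group and an HAStoragePlus resource
# that performs the loopback mount of the cluster file system (names are examples).
phys-schost# clresourcegroup create -Z sczone cfs-rg
phys-schost# clresource create -Z sczone -g cfs-rg -t SUNW.HAStoragePlus \
-p FileSystemMountPoints=/zones/sczone/dsk/d0 cfs-hasp-rs

# Bring the resource group online in a managed state.
phys-schost# clresourcegroup online -Z sczone -eM cfs-rg
```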

How to Add a ZFS-based Cluster File System to a Zone Cluster (CLI)

This procedure shows how to add a ZFS cluster file system for use by a zone cluster.


Note -  Alternatively, you can use the clsetup utility to perform this task. See How to Add a Cluster File System to a Zone Cluster (clsetup).
  1. Assume the root role on a node of the global cluster that hosts the zone cluster.

    Perform all steps of this procedure from a global cluster node.

  2. On the global cluster, configure the ZFS pool and the file system that you want to use in the zone cluster.
    phys-schost# zpool create poolname device
    phys-schost# zfs create poolname/fsname

    For information on how to set up a ZFS pool, see the zfs(8) and the zpool(8) man pages.

  3. Add the file system to the zone cluster configuration.
    phys-schost# clzonecluster configure zone-cluster-name
    clzc:zone-cluster-name> add fs
    clzc:zone-cluster-name:fs> set dir=zone-cluster-lofs-mountpoint
    clzc:zone-cluster-name:fs> set special=global-cluster-mount-point
    clzc:zone-cluster-name:fs> set type=lofs
    clzc:zone-cluster-name:fs> end
    clzc:zone-cluster-name> verify
    clzc:zone-cluster-name> commit
    clzc:zone-cluster-name> exit
  4. Verify the addition of the LOFS file system.
    phys-schost# clzonecluster show -v zone-cluster-name
  5. (Optional) Create an HAStoragePlus resource to manage and monitor the global ZFS pool in the global zone.

    Although this step is optional, performing it is recommended as a best practice.

    phys-schost# clresource create -t HAStoragePlus \
    -p GlobalZpools=poolname \
    -g resource-group-name global-zpool-hastorageplus-resource-name

    Then bring the resource group online by using the clresourcegroup online command.

    Proceed to Step 6 only if you skipped Step 5. Otherwise, proceed to Step 7.

  6. If the global ZFS pool is not managed by HAStoragePlus, then create the zpool device group and bring it online.
    phys-schost# cldevicegroup create -p poolaccess=global \
    -n phys-schost-1,phys-schost-2 -t zpool global-zfs-pool
    phys-schost# cldevicegroup online global-zfs-pool

    For more information, see How to Configure a zpool for Globally Mounted ZFS File Systems Without HAStoragePlus in Administering an Oracle Solaris Cluster 4.4 Configuration.

  7. Ensure that the global zone file system is globally mounted by looking at the mount flags for the word global.
    phys-schost# mount -p | grep global-cluster-mount-point
    global-zfs-dataset - global-cluster-mount-point zfs - no rw,devices,setuid,nonbmand,exec,
    rstchown,noxattr,atime,global
  8. Configure an HAStoragePlus resource in the zone cluster to perform the loopback mount of the global file system in the zone cluster node.

    Note -  The global file system is mounted on the nodes where the HAStoragePlus resource is brought online. If you require the file system to be online on multiple zone-cluster nodes, the resource group must be scalable.
    ZC-schost# clresource create -g resource-group-name -t HAStoragePlus \
    -p FileSystemMountPoints=zone-cluster-lofs-mountpoint hastorageplus-resource-name
  9. If an HAStoragePlus resource was created to manage and monitor the global ZFS pool in the global zone as shown in Step 5, then set the offline restart dependency.
    phys-schost# clresource set -Z zone-cluster-name \
    -p Resource_dependencies_offline_restart=global:global-zpool-hastorageplus-resource-name
    \
    hastorageplus-resource-name
Example 16  Adding a ZFS-based Cluster File System to a Zone Cluster (CLI)

The following example shows how to add a ZFS cluster file system in the global ZFS pool globalpool1, with mount point /globalpool1/apache, to the zone cluster sczone. The file system is available to the zone cluster using the loopback mount mechanism at the mount point /zone/apache.

phys-schost-1# mount -p | grep 'globalpool1/apache'
...
globalpool1/apache - /globalpool1/apache zfs - no
rw,devices,setuid,nonbmand,exec,rstchown,noxattr,atime,global
...
phys-schost-1# clzonecluster configure sczone
clzc:sczone> add fs
clzc:sczone:fs> set dir=/zone/apache
clzc:sczone:fs> set special=/globalpool1/apache
clzc:sczone:fs> set type=lofs
clzc:sczone:fs> end
clzc:sczone> verify
clzc:sczone> commit
clzc:sczone> exit

phys-schost-1# clzonecluster show -v sczone
…
Resource Name:                            fs
dir:                                       /zone/apache
special:                                   /globalpool1/apache
raw:
type:                                      lofs
options:                                   []
cluster-control:                           true
…
phys-schost-1# clresourcegroup create gpoolRG
phys-schost-1# clresource create -t HAStoragePlus -p GlobalZpools=globalpool1 \
-g gpoolRG gpoolResource
phys-schost-1# mount -p | grep '/globalpool1/apache'
globalpool1/apache - /globalpool1/apache zfs - no
rw,devices,setuid,nonbmand,exec,rstchown,noxattr,atime,global

phys-schost-1# clresource create -Z sczone -g zone-rg -t HAStoragePlus \
-p FileSystemMountPoints=/globalpool1/apache -p \ 
Resource_dependencies_offline_restart=global:gpoolResource zc-hasp-resource

Data service resources in the zone cluster that require access to the global ZFS file system must declare a resource dependency on this HAStoragePlus resource zc-hasp-resource.

How to Add a UFS Cluster File System to a Zone Cluster (CLI)

Perform this procedure to add a UFS cluster file system for use by a zone cluster.


Note -  Alternatively, you can use the clsetup utility to perform this task. See How to Add a Cluster File System to a Zone Cluster (clsetup).
  1. Assume the root role on a voting node of the global cluster that hosts the zone cluster.

    You perform all steps of this procedure from a voting node of the global cluster.

  2. On the global cluster, configure the cluster file system that you want to use in the zone cluster.
  3. On each node of the global cluster that hosts a zone-cluster node, add an entry to the /etc/vfstab file for the file system that you want to mount on the zone cluster.
    phys-schost# vi /etc/vfstab
    …
    /dev/global/dsk/d12s0 /dev/global/rdsk/d12s0 /global/fs ufs 2 no global,logging
  4. Configure the cluster file system as a loopback file system for the zone cluster.
    phys-schost# clzonecluster configure zone-cluster-name
    clzc:zone-cluster-name> add fs
    clzc:zone-cluster-name:fs> set dir=zone-cluster-lofs-mountpoint
    clzc:zone-cluster-name:fs> set special=global-cluster-mount-point
    clzc:zone-cluster-name:fs> set type=lofs
    clzc:zone-cluster-name:fs> end
    clzc:zone-cluster-name> verify
    clzc:zone-cluster-name> commit
    clzc:zone-cluster-name> exit
    dir=zone-cluster-lofs-mount-point

    Specifies the file system mount point for LOFS to make the cluster file system available to the zone cluster.

    special=global-cluster-mount-point

    Specifies the file system mount point of the original cluster file system in the global cluster.

    For more information about creating loopback file systems, see the lofiadm(8) man page.

  5. Verify the addition of the LOFS file system.
    phys-schost# clzonecluster show -v zone-cluster-name
Example 17  Adding a UFS Cluster File System to a Zone Cluster (CLI)

The following example shows how to add a cluster file system with mount point /global/apache to a zone cluster. The file system is available to a zone cluster using the loopback mount mechanism at the mount point /zone/apache.

phys-schost-1# vi /etc/vfstab
#device     device    mount   FS      fsck    mount     mount
#to mount   to fsck   point   type    pass    at boot   options
#
/dev/md/oracle/dsk/d1 /dev/md/oracle/rdsk/d1 /global/apache ufs 2 yes global,logging

phys-schost-1# clzonecluster configure sczone
clzc:sczone> add fs
clzc:sczone:fs> set dir=/zone/apache
clzc:sczone:fs> set special=/global/apache
clzc:sczone:fs> set type=lofs
clzc:sczone:fs> end
clzc:sczone> verify
clzc:sczone> commit
clzc:sczone> exit

phys-schost-1# clzonecluster show -v sczone
…
Resource Name:                            fs
dir:                                       /zone/apache
special:                                   /global/apache
raw:
type:                                      lofs
options:                                   []
cluster-control:                           true
…

Next Steps

Configure the cluster file system to be available in the zone cluster by using an HAStoragePlus resource. The HAStoragePlus resource manages the mounting of the file systems in the global cluster, and later performs a loopback mount on the zone-cluster nodes that currently host the applications that are configured to use the file system. For more information, see Configuring an HAStoragePlus Resource for Cluster File Systems in Planning and Administering Data Services for Oracle Solaris Cluster 4.4.

How to Add an Oracle HSM Shared File System to a Zone Cluster (CLI)

Perform this task to add an Oracle HSM shared file system for use by a zone cluster.


Note -  Alternatively, you can use the clsetup utility to perform this task. See How to Add a Cluster File System to a Zone Cluster (clsetup).

At this time, Oracle HSM shared file systems are supported only for use in clusters that are configured with Oracle RAC. On clusters that are not configured with Oracle RAC, you can use a single-machine Oracle HSM file system that is configured as a highly available local file system.

  1. Assume the root role on a node of the global cluster that hosts the zone cluster.

    You perform all steps of this procedure from a node of the global cluster.

  2. On the global cluster, configure the Oracle HSM shared file system that you want to use in the zone cluster.

    Follow procedures for shared file systems in your Oracle HSM documentation.

  3. On each node of the global cluster that hosts a zone-cluster node, add an entry to the /etc/vfstab file for the file system that you want to mount on the zone cluster.
  4. Add the file system to the zone cluster configuration.
    phys-schost# clzonecluster configure zone-cluster-name
    clzc:zone-cluster-name> add fs
    clzc:zone-cluster-name:fs> set dir=mount-point
    clzc:zone-cluster-name:fs> set special=QFS-file-system-name
    clzc:zone-cluster-name:fs> set type=samfs
    clzc:zone-cluster-name:fs> end
    clzc:zone-cluster-name> verify
    clzc:zone-cluster-name> commit
    clzc:zone-cluster-name> exit
  5. Verify the addition of the file system.
    phys-schost# clzonecluster show -v zone-cluster-name
Example 18  Adding an Oracle HSM Shared File System as a Direct Mount to a Zone Cluster (CLI)

The following example shows the Oracle HSM shared file system Data-cz1 added to the zone cluster sczone. From the global cluster, the mount point of the file system is /zones/sczone/root/db_qfs/Data1, where /zones/sczone/root/ is the zone's root path. From the zone-cluster node, the mount point of the file system is /db_qfs/Data1.

phys-schost-1# vi /etc/vfstab
#device     device    mount   FS      fsck    mount     mount
#to mount   to fsck   point   type    pass    at boot   options
#
Data-cz1    -    /zones/sczone/root/db_qfs/Data1 samfs - no shared,notrace

phys-schost-1# clzonecluster configure sczone
clzc:sczone> add fs
clzc:sczone:fs> set dir=/db_qfs/Data1
clzc:sczone:fs> set special=Data-cz1
clzc:sczone:fs> set type=samfs
clzc:sczone:fs> end
clzc:sczone> verify
clzc:sczone> commit
clzc:sczone> exit

phys-schost-1# clzonecluster show -v sczone
…
Resource Name:                            fs
dir:                                       /db_qfs/Data1
special:                                   Data-cz1
raw:
type:                                      samfs
options:                                   []
…

How to Add an Oracle ACFS File System to a Zone Cluster (CLI)

Perform this procedure to add an Oracle ACFS file system for use by a zone cluster.


Note -  Alternatively, you can use the clsetup utility to perform this task. See How to Add a Cluster File System to a Zone Cluster (clsetup).

Before You Begin

Ensure that the Oracle ACFS file system is created and ready for use by a zone cluster. See How to Create an Oracle ACFS File System.

  1. Assume the root role or become an administrator that provides solaris.cluster.admin and solaris.cluster.modify authorizations.
  2. Add the Oracle ACFS file system to the zone cluster.

    Perform this step from the global zone of one node.

    # clzonecluster configure zonecluster
    clzc:zonecluster> add fs
    clzc:zonecluster:fs> set dir=mountpoint
    clzc:zonecluster:fs> set special=/dev/asm/volume-dev-path
    clzc:zonecluster:fs> set type=acfs
    clzc:zonecluster:fs> end
    clzc:zonecluster> exit
  3. Verify that the file system is added to the zone cluster.
    # clzonecluster show zonecluster
    …
    Resource Name:                fs
    dir:                          mountpoint
    special:                      /dev/asm/volume-dev-path
    raw:
    type:                         acfs
    options:                      []
    cluster-control:              true
    …