After a file system is added to a zone cluster and brought online, the file system is authorized for use from within that zone cluster. To mount the file system for use, configure the file system by using cluster resources such as SUNW.HAStoragePlus or SUNW.ScalMountPoint.
This section provides the following procedures to add file systems for use by the zone cluster:
How to Add a Highly Available Local File System to a Zone Cluster (clsetup)
How to Add a Highly Available Local File System to a Zone Cluster (CLI)
How to Add a StorageTek QFS Shared File System to a Zone Cluster (CLI)
How to Add a Cluster File System to a Zone Cluster (clsetup)
How to Add a UFS Cluster File System to a Zone Cluster (CLI)
How to Add an Oracle ACFS File System to a Zone Cluster (CLI)
You can also use Oracle Solaris Cluster Manager to add a file system to a zone cluster. For the browser interface log-in instructions, see How to Access Oracle Solaris Cluster Manager in Oracle Solaris Cluster 4.3 System Administration Guide.
Perform this procedure to configure a highly available local file system on the global cluster for use by a zone cluster. The file system is added to the zone cluster and is configured with an HAStoragePlus resource to make the local file system highly available.
To use the command line to perform this task, see How to Add a Highly Available Local File System to a Zone Cluster (CLI).
To use the Oracle Solaris Cluster Manager browser interface to perform this task, click Zone Clusters, click the zone cluster name to go to its page, click the Solaris Resources tab, then in the File Systems section click Add to start the file systems wizard. For Oracle Solaris Cluster Manager log-in instructions, see How to Access Oracle Solaris Cluster Manager in Oracle Solaris Cluster 4.3 System Administration Guide.
Perform all steps of the procedure from a node of the global cluster.
Ensure that the file system is created on shared disks.
phys-schost# clsetup
The Main Menu is displayed.
The Zone Cluster Tasks Menu is displayed.
The Select Zone Cluster menu is displayed.
The Storage Type Selection menu is displayed.
The File System Selection for the Zone Cluster menu is displayed.
The file systems in the list are those that are configured on the shared disks and can be accessed by the nodes where the zone cluster is configured. You can also type e to manually specify all properties for a file system.
The Mount Type Selection menu is displayed.
The File System Properties for the Zone Cluster menu is displayed.
When finished, type d and press Return.
The results of your configuration change are displayed.
phys-schost# clzonecluster show -v zone-cluster-name
Next Steps
Configure the file system to be highly available by using an HAStoragePlus resource. The HAStoragePlus resource manages the mounting of the file system on the zone-cluster node that currently hosts the applications that are configured to use the file system. See Enabling Highly Available Local File Systems in Oracle Solaris Cluster 4.3 Data Services Planning and Administration Guide.
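As a rough sketch of that next step, the command sequence looks like the following. All names here (the zone cluster sczone, resource group oracle-rg, resource hasp-rs, and the mount point) are hypothetical examples, not part of this procedure. The commands are staged into a script because they can run only on a global-cluster node with Oracle Solaris Cluster installed.

```shell
# Sketch only: sczone, oracle-rg, hasp-rs, and the mount point are
# example names. Run the generated script on a global-cluster node.
cat > /tmp/hasp-setup.sh <<'EOF'
#!/bin/sh
# Register the HAStoragePlus resource type in the zone cluster.
clresourcetype register -Z sczone SUNW.HAStoragePlus
# Create a failover resource group in the zone cluster.
clresourcegroup create -Z sczone oracle-rg
# Create the HAStoragePlus resource that manages the file system mount.
clresource create -Z sczone -g oracle-rg -t SUNW.HAStoragePlus \
    -p FileSystemMountPoints=/global/oracle/d1 hasp-rs
# Bring the resource group online and enable its resources.
clresourcegroup online -Z sczone -eM oracle-rg
EOF
```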
Perform this procedure to add a highly available local file system on the global cluster for use by the zone cluster.
To add a ZFS pool to a zone cluster, instead perform procedures in How to Add a ZFS Storage Pool to a Zone Cluster (clsetup). Or, to configure a ZFS storage pool to be highly available in a zone cluster, see How to Set Up the HAStoragePlus Resource Type to Make a Local ZFS File System Highly Available in Oracle Solaris Cluster 4.3 Data Services Planning and Administration Guide.
You perform all steps of the procedure from a node of the global cluster.
Ensure that the file system is created on shared disks.
phys-schost# clzonecluster configure zone-cluster-name
clzc:zone-cluster-name> add fs
clzc:zone-cluster-name:fs> set dir=mount-point
clzc:zone-cluster-name:fs> set special=disk-device-name
clzc:zone-cluster-name:fs> set raw=raw-disk-device-name
clzc:zone-cluster-name:fs> set type=FS-type
clzc:zone-cluster-name:fs> end
clzc:zone-cluster-name> verify
clzc:zone-cluster-name> commit
clzc:zone-cluster-name> exit
Specifies the file system mount point
Specifies the name of the disk device
Specifies the name of the raw disk device
Specifies the type of file system
phys-schost# clzonecluster show -v zone-cluster-name
This example adds the local file system /global/oracle/d1 for use by the sczone zone cluster.
phys-schost-1# clzonecluster configure sczone
clzc:sczone> add fs
clzc:sczone:fs> set dir=/global/oracle/d1
clzc:sczone:fs> set special=/dev/md/oracle/dsk/d1
clzc:sczone:fs> set raw=/dev/md/oracle/rdsk/d1
clzc:sczone:fs> set type=ufs
clzc:sczone:fs> add options [logging]
clzc:sczone:fs> end
clzc:sczone> verify
clzc:sczone> commit
clzc:sczone> exit

phys-schost-1# clzonecluster show -v sczone
…
  Resource Name:       fs
    dir:               /global/oracle/d1
    special:           /dev/md/oracle/dsk/d1
    raw:               /dev/md/oracle/rdsk/d1
    type:              ufs
    options:           [logging]
    cluster-control:   [true]
…
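The interactive session above can also be scripted. A minimal sketch, assuming your clzonecluster version accepts a command file via -f (as zonecfg does) and reusing the example names from this procedure; the file path /tmp/add-fs.cfg is hypothetical:

```shell
# Sketch: stage the same subcommands in a command file instead of
# typing them interactively. /tmp/add-fs.cfg is an example path.
cat > /tmp/add-fs.cfg <<'EOF'
add fs
set dir=/global/oracle/d1
set special=/dev/md/oracle/dsk/d1
set raw=/dev/md/oracle/rdsk/d1
set type=ufs
add options [logging]
end
commit
EOF
# On a global-cluster node, assuming -f is supported:
# clzonecluster configure -f /tmp/add-fs.cfg sczone
```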
Next Steps
Configure the file system to be highly available by using an HAStoragePlus resource. The HAStoragePlus resource manages the mounting of the file system on the zone-cluster node that currently hosts the applications that are configured to use the file system. See Enabling Highly Available Local File Systems in Oracle Solaris Cluster 4.3 Data Services Planning and Administration Guide.
Perform this procedure to add a ZFS storage pool to a zone cluster. The pool can be local to a single zone-cluster node or configured with HAStoragePlus to be highly available.
The clsetup utility discovers and displays all configured ZFS pools on the shared disks that can be accessed by the nodes where the selected zone cluster is configured. After you use the clsetup utility to add a ZFS storage pool in cluster scope to an existing zone cluster, you can use the clzonecluster command to modify the configuration or to add a ZFS storage pool in node-scope.
To use the command line to perform this task, see How to Add a ZFS Storage Pool to a Zone Cluster (CLI).
To use the Oracle Solaris Cluster Manager browser interface to perform this task, click Zone Clusters, click the zone cluster name to go to its page, click the Solaris Resources tab, then in the Datasets for ZFS Storage Pools section, click Add. For Oracle Solaris Cluster Manager log-in instructions, see How to Access Oracle Solaris Cluster Manager in Oracle Solaris Cluster 4.3 System Administration Guide.
Before You Begin
Ensure that the ZFS pool resides on shared disks that are connected to all nodes of the zone cluster. See Managing ZFS File Systems in Oracle Solaris 11.3 for procedures to create a ZFS pool.
You perform all steps of this procedure from a node of the global cluster.
phys-schost# clsetup
The Main Menu is displayed.
The Zone Cluster Tasks Menu is displayed.
The Select Zone Cluster menu is displayed.
The Storage Type Selection menu is displayed.
The ZFS Pool Selection for the Zone Cluster menu is displayed.
The ZFS pools in the list are those that are configured on the shared disks and can be accessed by the nodes where the zone cluster is configured. You can also type e to manually specify properties for a ZFS pool.
The ZFS Pool Dataset Property for the Zone Cluster menu is displayed. The selected ZFS pool is assigned to the name property.
The Review File Systems/Storage Devices for the Zone Cluster menu is displayed.
The results of your configuration change are displayed. For example:
>>> Result of Configuration Change to the Zone Cluster(sczone) <<<

Adding file systems or storage devices to sczone zone cluster...

The zone cluster is being created with the following configuration

    /usr/cluster/bin/clzonecluster configure sczone
    add dataset
    set name=myzpool5
    end

Configuration change to sczone zone cluster succeeded.
phys-schost# clzonecluster show -v zone-cluster-name
The HAStoragePlus resource manages the mounting of file systems in the pool on the zone-cluster node that currently hosts the applications that are configured to use the file system. See Enabling Highly Available Local File Systems in Oracle Solaris Cluster 4.3 Data Services Planning and Administration Guide.
Perform this procedure to add a ZFS storage pool to a zone cluster.
To configure a ZFS storage pool to be highly available in a zone cluster, see How to Set Up the HAStoragePlus Resource Type to Make a Local ZFS File System Highly Available in Oracle Solaris Cluster 4.3 Data Services Planning and Administration Guide.
You perform all steps of this procedure from a node of the global cluster.
Ensure that the pool resides on shared disks that are connected to all nodes of the zone cluster.
See Managing ZFS File Systems in Oracle Solaris 11.3 for procedures to create a ZFS pool.
phys-schost# clzonecluster configure zone-cluster-name
clzc:zone-cluster-name> add dataset
clzc:zone-cluster-name:dataset> set name=ZFSpoolname
clzc:zone-cluster-name:dataset> end
clzc:zone-cluster-name> verify
clzc:zone-cluster-name> commit
clzc:zone-cluster-name> exit
phys-schost# clzonecluster show -v zone-cluster-name
The following example shows the ZFS storage pool zpool1 added to the zone cluster sczone.
phys-schost-1# clzonecluster configure sczone
clzc:sczone> add dataset
clzc:sczone:dataset> set name=zpool1
clzc:sczone:dataset> end
clzc:sczone> verify
clzc:sczone> commit
clzc:sczone> exit

phys-schost-1# clzonecluster show -v sczone
…
  Resource Name:    dataset
    name:           zpool1
…
Next Steps
Configure the ZFS storage pool to be highly available by using an HAStoragePlus resource. The HAStoragePlus resource manages the mounting of file systems in the pool on the zone-cluster node that currently hosts the applications that are configured to use the file system. See Enabling Highly Available Local File Systems in Oracle Solaris Cluster 4.3 Data Services Planning and Administration Guide.
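As a sketch of that next step, the HAStoragePlus resource for a ZFS pool uses the Zpools property rather than FileSystemMountPoints. All names here (sczone, zpool-rg, zpool-hasp-rs, zpool1) are hypothetical examples; the commands are staged into a script because they require a global-cluster node with Oracle Solaris Cluster installed.

```shell
# Sketch only: sczone, zpool-rg, zpool-hasp-rs, and zpool1 are example
# names. Run the generated script on a global-cluster node.
cat > /tmp/zpool-hasp.sh <<'EOF'
#!/bin/sh
clresourcetype register -Z sczone SUNW.HAStoragePlus
clresourcegroup create -Z sczone zpool-rg
# The Zpools property makes HAStoragePlus import the pool on the node
# where the resource group comes online, and export it on failover.
clresource create -Z sczone -g zpool-rg -t SUNW.HAStoragePlus \
    -p Zpools=zpool1 zpool-hasp-rs
clresourcegroup online -Z sczone -eM zpool-rg
EOF
```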
The clsetup utility discovers and displays the available file systems that are configured on the cluster nodes where the selected zone cluster is configured. When you use the clsetup utility to add a file system, the file system is added in cluster scope.
You can add the following types of cluster file systems to a zone cluster:
UFS cluster file system - You specify the file system type in the /etc/vfstab file, using the global mount option. This file system can be located on the shared disk or on a Solaris Volume Manager device.
StorageTek QFS shared file system - You specify the file system type in the /etc/vfstab file, using the shared mount option.
ACFS - Discovered automatically, based on the ORACLE_HOME path you provide.
To use the command line to perform this task, see one of the following procedures:
How to Add a UFS Cluster File System to a Zone Cluster (CLI)
How to Add a StorageTek QFS Shared File System to a Zone Cluster (CLI)
How to Add an Oracle ACFS File System to a Zone Cluster (CLI)
To use the Oracle Solaris Cluster Manager browser interface to perform this task, click Zone Clusters, click the zone cluster name to go to its page, click the Solaris Resources tab, then in the File Systems section click Add to start the file systems wizard. For Oracle Solaris Cluster Manager log-in instructions, see How to Access Oracle Solaris Cluster Manager in Oracle Solaris Cluster 4.3 System Administration Guide.
Before You Begin
Ensure that the cluster file system you want to add to the zone cluster is configured. See Planning Cluster File Systems and Creating a Cluster File System.
You perform all steps of this procedure from a node of the global cluster.
phys-schost# vi /etc/vfstab
/dev/md/datadg/dsk/d0 /dev/md/datadg/rdsk/d0 /global/fs ufs 2 no global,logging
Data-cz1 - /db_qfs/Data1 samfs - no shared,notrace
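As an illustrative aside (not part of the procedure), a vfstab entry must have exactly seven whitespace-separated fields, so the mount options column must contain no spaces, and a cluster file system must carry the global option. This awk one-liner checks the sample UFS entry above; the QFS entry's shared option could be checked the same way.

```shell
# Validate the sample UFS vfstab entry: seven fields, and the options
# field (field 7) must include "global". A space after the comma in the
# options column would split it into an eighth field and fail the check.
line='/dev/md/datadg/dsk/d0 /dev/md/datadg/rdsk/d0 /global/fs ufs 2 no global,logging'
echo "$line" | awk '
NF == 7 && $7 ~ /(^|,)global(,|$)/ { print "ok: 7 fields, global option present"; next }
{ print "bad entry" }'
# prints: ok: 7 fields, global option present
```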
phys-schost# clsetup
The Main Menu is displayed.
The Zone Cluster Tasks Menu is displayed.
The Select Zone Cluster menu is displayed.
The Storage Type Selection menu is displayed.
The File System Selection for the Zone Cluster menu is displayed.
You can also type e to manually specify all properties for a file system. If you are using an ACFS file system, you can select Discover ACFS and then specify the ORACLE_HOME directory.
The Mount Type Selection menu is displayed.
If you chose ACFS in Step 7, the clsetup utility skips this step because ACFS supports only the direct mount type.
For more information about creating loopback file systems, see How to Create and Mount an LOFS File System in Managing File Systems in Oracle Solaris 11.3.
The File System Properties for the Zone Cluster menu is displayed.
Type the number for the dir property and press Return. Then type the LOFS mount point directory name in the New Value field and press Return.
When finished, type d and press Return. The Review File Systems/Storage Devices for the Zone Cluster menu is displayed.
The results of your configuration change are displayed. For example:
>>> Result of Configuration Change to the Zone Cluster(sczone) <<<

Adding file systems or storage devices to sczone zone cluster...

The zone cluster is being created with the following configuration

    /usr/cluster/bin/clzonecluster configure sczone
    add fs
    set dir=/zones/sczone/dsk/d0
    set special=/global/fs
    set type=lofs
    end

Configuration change to sczone zone cluster succeeded.
phys-schost# clzonecluster show -v zone-cluster-name
Next Steps
(Optional) Configure the cluster file system to be managed by an HAStoragePlus resource. The HAStoragePlus resource manages the mounting of the file systems in the global cluster, and later performs a loopback mount on the zone-cluster nodes that currently host the applications that are configured to use the file system. For more information, see Configuring an HAStoragePlus Resource for Cluster File Systems in Oracle Solaris Cluster 4.3 Data Services Planning and Administration Guide.
Perform this procedure to add a UFS cluster file system for use by a zone cluster.
You perform all steps of this procedure from a voting node of the global cluster.
phys-schost# vi /etc/vfstab
…
/dev/global/dsk/d12s0 /dev/global/rdsk/d12s0 /global/fs ufs 2 no global,logging
phys-schost# clzonecluster configure zone-cluster-name
clzc:zone-cluster-name> add fs
clzc:zone-cluster-name:fs> set dir=zone-cluster-lofs-mountpoint
clzc:zone-cluster-name:fs> set special=global-cluster-mount-point
clzc:zone-cluster-name:fs> set type=lofs
clzc:zone-cluster-name:fs> end
clzc:zone-cluster-name> verify
clzc:zone-cluster-name> commit
clzc:zone-cluster-name> exit
Specifies the file system mount point for LOFS to make the cluster file system available to the zone cluster.
Specifies the file system mount point of the original cluster file system in the global cluster.
For more information about creating loopback file systems, see How to Create and Mount an LOFS File System in Managing File Systems in Oracle Solaris 11.3.
phys-schost# clzonecluster show -v zone-cluster-name
The following example shows how to add a cluster file system with mount point /global/apache to a zone cluster. The file system is available to a zone cluster using the loopback mount mechanism at the mount point /zone/apache.
phys-schost-1# vi /etc/vfstab
#device                device                  mount           FS    fsck  mount    mount
#to mount              to fsck                 point           type  pass  at boot  options
#
/dev/md/oracle/dsk/d1  /dev/md/oracle/rdsk/d1  /global/apache  ufs   2     yes      global,logging

phys-schost-1# clzonecluster configure sczone
clzc:sczone> add fs
clzc:sczone:fs> set dir=/zone/apache
clzc:sczone:fs> set special=/global/apache
clzc:sczone:fs> set type=lofs
clzc:sczone:fs> end
clzc:sczone> verify
clzc:sczone> commit
clzc:sczone> exit

phys-schost-1# clzonecluster show -v sczone
…
  Resource Name:       fs
    dir:               /zone/apache
    special:           /global/apache
    raw:
    type:              lofs
    options:           []
    cluster-control:   true
…
Next Steps
Configure the cluster file system to be available in the zone cluster by using an HAStoragePlus resource. The HAStoragePlus resource manages the mounting of the file systems in the global cluster, and later performs a loopback mount on the zone-cluster nodes that currently host the applications that are configured to use the file system. For more information, see Configuring an HAStoragePlus Resource for Cluster File Systems in Oracle Solaris Cluster 4.3 Data Services Planning and Administration Guide.
Perform this task to add a StorageTek QFS shared file system for use by a zone cluster.
At this time, StorageTek QFS shared file systems are only supported for use in clusters that are configured with Oracle RAC. On clusters that are not configured with Oracle RAC, you can use a single-machine StorageTek QFS file system that is configured as a highly available local file system.
You perform all steps of this procedure from a node of the global cluster.
Follow procedures for shared file systems in your StorageTek QFS documentation.
phys-schost# clzonecluster configure zone-cluster-name
clzc:zone-cluster-name> add fs
clzc:zone-cluster-name:fs> set dir=mount-point
clzc:zone-cluster-name:fs> set special=QFS-file-system-name
clzc:zone-cluster-name:fs> set type=samfs
clzc:zone-cluster-name:fs> end
clzc:zone-cluster-name> verify
clzc:zone-cluster-name> commit
clzc:zone-cluster-name> exit
phys-schost# clzonecluster show -v zone-cluster-name
The following example shows the StorageTek QFS shared file system Data-cz1 added to the zone cluster sczone. From the global cluster, the mount point of the file system is /zones/sczone/root/db_qfs/Data1, where /zones/sczone/root/ is the zone's root path. From the zone-cluster node, the mount point of the file system is /db_qfs/Data1.
phys-schost-1# vi /etc/vfstab
#device    device   mount                            FS     fsck  mount    mount
#to mount  to fsck  point                            type   pass  at boot  options
#
Data-cz1   -        /zones/sczone/root/db_qfs/Data1  samfs  -     no       shared,notrace

phys-schost-1# clzonecluster configure sczone
clzc:sczone> add fs
clzc:sczone:fs> set dir=/db_qfs/Data1
clzc:sczone:fs> set special=Data-cz1
clzc:sczone:fs> set type=samfs
clzc:sczone:fs> end
clzc:sczone> verify
clzc:sczone> commit
clzc:sczone> exit

phys-schost-1# clzonecluster show -v sczone
…
  Resource Name:    fs
    dir:            /db_qfs/Data1
    special:        Data-cz1
    raw:
    type:           samfs
    options:        []
…
Perform this procedure to add an Oracle ACFS file system for use by a zone cluster.
Before You Begin
Ensure that the Oracle ACFS file system is created and ready for use by a zone cluster. See How to Create an Oracle ACFS File System.
Perform this step from the global zone of one node.
# clzonecluster configure zonecluster
clzc:zonecluster> add fs
clzc:zonecluster:fs> set dir=mountpoint
clzc:zonecluster:fs> set special=/dev/asm/volume-dev-path
clzc:zonecluster:fs> set type=acfs
clzc:zonecluster:fs> end
clzc:zonecluster> exit
# clzonecluster show zonecluster
…
  Resource Name:       fs
    dir:               mountpoint
    special:           /dev/asm/volume-dev-path
    raw:
    type:              acfs
    options:           []
    cluster-control:   true
…