This section provides procedures to add file systems for use by the zone cluster.
After a file system is added to a zone cluster and brought online, the file system is authorized for use from within that zone cluster. To mount the file system for use, configure the file system by using cluster resources such as SUNW.HAStoragePlus or SUNW.ScalMountPoint.
You cannot use the clzonecluster command to add to a zone cluster a local file system that is mounted on a single global-cluster node. Instead, use the zonecfg command as you normally would in a stand-alone system. Such a local file system is not under cluster control.
You cannot add a cluster file system to a zone cluster.
The following procedures are in this section:
In addition, to configure a ZFS storage pool to be highly available in a zone cluster, see How to Set Up the HAStoragePlus Resource Type to Make a Local Solaris ZFS Highly Available in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.
Perform this procedure to add a local file system on the global cluster for use by the zone cluster.
To add a ZFS pool to a zone cluster, instead perform procedures in How to Add a ZFS Storage Pool to a Zone Cluster.
Alternatively, to configure a ZFS storage pool to be highly available in a zone cluster, see How to Set Up the HAStoragePlus Resource Type to Make a Local Solaris ZFS Highly Available in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.
Become superuser on a node of the global cluster that hosts the zone cluster.
Perform all steps of the procedure from a node of the global cluster.
On the global cluster, create a file system that you want to use in the zone cluster.
Ensure that the file system is created on shared disks.
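As an illustration only, a UFS file system could be created on a shared Solaris Volume Manager metadevice along these lines. The metadevice name matches the example later in this procedure; substitute the device for your own configuration.

```shell
# Hedged sketch: create a UFS file system on a shared Solaris Volume
# Manager metadevice. The device /dev/md/oracle/rdsk/d1 is the example
# device used later in this procedure; replace it with your own.
phys-schost# newfs /dev/md/oracle/rdsk/d1

# Optionally confirm the file system is clean before adding it to vfstab.
phys-schost# fsck -y /dev/md/oracle/rdsk/d1
```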
On each node of the global cluster that hosts a zone-cluster node, add an entry to the /etc/vfstab file for the file system that you want to mount on the zone cluster.
phys-schost# vi /etc/vfstab
Add the file system to the zone-cluster configuration.
phys-schost# clzonecluster configure zoneclustername
clzc:zoneclustername> add fs
clzc:zoneclustername:fs> set dir=mountpoint
clzc:zoneclustername:fs> set special=disk-device-name
clzc:zoneclustername:fs> set raw=raw-disk-device-name
clzc:zoneclustername:fs> set type=FS-type
clzc:zoneclustername:fs> end
clzc:zoneclustername> verify
clzc:zoneclustername> commit
clzc:zoneclustername> exit
dir=mountpoint
    Specifies the file system mount point
special=disk-device-name
    Specifies the name of the disk device
raw=raw-disk-device-name
    Specifies the name of the raw disk device
type=FS-type
    Specifies the type of file system
Verify the addition of the file system.
phys-schost# clzonecluster show -v zoneclustername
This example adds the local file system /global/oracle/d1 for use by the sczone zone cluster.
phys-schost-1# vi /etc/vfstab
#device                 device                  mount              FS    fsck  mount    mount
#to mount               to fsck                 point              type  pass  at boot  options
#
/dev/md/oracle/dsk/d1   /dev/md/oracle/rdsk/d1  /global/oracle/d1  ufs   5     no       logging

phys-schost-1# clzonecluster configure sczone
clzc:sczone> add fs
clzc:sczone:fs> set dir=/global/oracle/d1
clzc:sczone:fs> set special=/dev/md/oracle/dsk/d1
clzc:sczone:fs> set raw=/dev/md/oracle/rdsk/d1
clzc:sczone:fs> set type=ufs
clzc:sczone:fs> end
clzc:sczone> verify
clzc:sczone> commit
clzc:sczone> exit

phys-schost-1# clzonecluster show -v sczone
…
  Resource Name:      fs
    dir:              /global/oracle/d1
    special:          /dev/md/oracle/dsk/d1
    raw:              /dev/md/oracle/rdsk/d1
    type:             ufs
    options:          []
…
Configure the file system to be highly available by using an HAStoragePlus resource. The HAStoragePlus resource manages the mounting of the file system on the zone-cluster node that currently hosts the applications that are configured to use the file system. See Enabling Highly Available Local File Systems in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.
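The step above can be sketched with the Sun Cluster command line. This is an illustration under assumptions, not part of the procedure: the zone cluster sczone and mount point /global/oracle/d1 come from the preceding example, while the resource-group name app-rg and resource name hasp-rs are placeholders.

```shell
# Hedged sketch: place the file system under HAStoragePlus control
# inside the zone cluster. "app-rg" and "hasp-rs" are placeholder names.
phys-schost# clresourcetype register -Z sczone SUNW.HAStoragePlus
phys-schost# clresourcegroup create -Z sczone app-rg
phys-schost# clresource create -Z sczone -g app-rg -t SUNW.HAStoragePlus \
    -p FileSystemMountPoints=/global/oracle/d1 hasp-rs

# Bring the resource group online; HAStoragePlus then mounts the file
# system on whichever zone-cluster node hosts the group.
phys-schost# clresourcegroup online -eM -Z sczone app-rg
```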
Perform this procedure to add a ZFS storage pool for use by a zone cluster.
To configure a ZFS storage pool to be highly available in a zone cluster, see How to Set Up the HAStoragePlus Resource Type to Make a Local Solaris ZFS Highly Available in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.
Become superuser on a node of the global cluster that hosts the zone cluster.
Perform all steps of this procedure from a node of the global cluster.
Create the ZFS storage pool on the global cluster.
Ensure that the pool is created on shared disks that are connected to all nodes of the zone cluster.
See Solaris ZFS Administration Guide for procedures to create a ZFS pool.
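As an illustration of this step (the disk names are placeholders for your shared devices, and the pool name zpool1 matches the example later in this procedure), a mirrored pool could be created like this:

```shell
# Hedged sketch: create a mirrored ZFS storage pool on two shared disks.
# c1t0d0 and c2t0d0 are placeholder device names; use disks that are
# connected to all nodes of the zone cluster.
phys-schost# zpool create zpool1 mirror c1t0d0 c2t0d0

# Confirm the pool is healthy before adding it to the zone cluster.
phys-schost# zpool status zpool1
```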
Add the pool to the zone-cluster configuration.
phys-schost# clzonecluster configure zoneclustername
clzc:zoneclustername> add dataset
clzc:zoneclustername:dataset> set name=ZFSpoolname
clzc:zoneclustername:dataset> end
clzc:zoneclustername> verify
clzc:zoneclustername> commit
clzc:zoneclustername> exit
Verify the addition of the ZFS storage pool.
phys-schost# clzonecluster show -v zoneclustername
The following example shows the ZFS storage pool zpool1 added to the zone cluster sczone.
phys-schost-1# clzonecluster configure sczone
clzc:sczone> add dataset
clzc:sczone:dataset> set name=zpool1
clzc:sczone:dataset> end
clzc:sczone> verify
clzc:sczone> commit
clzc:sczone> exit

phys-schost-1# clzonecluster show -v sczone
…
  Resource Name:      dataset
    name:             zpool1
…
Configure the ZFS storage pool to be highly available by using an HAStoragePlus resource. The HAStoragePlus resource manages the mounting of the file systems that are in the pool on the zone-cluster node that currently hosts the applications that are configured to use those file systems. See Enabling Highly Available Local File Systems in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.
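The step above can be sketched as follows. This is an illustration under assumptions, not part of the procedure: the zone cluster sczone and pool zpool1 come from the preceding example, while the resource-group name app-rg and resource name hasp-zpool-rs are placeholders. For a ZFS pool, HAStoragePlus uses the Zpools extension property rather than FileSystemMountPoints.

```shell
# Hedged sketch: make the ZFS pool highly available in the zone cluster.
# "app-rg" and "hasp-zpool-rs" are placeholder names.
phys-schost# clresourcetype register -Z sczone SUNW.HAStoragePlus
phys-schost# clresourcegroup create -Z sczone app-rg
phys-schost# clresource create -Z sczone -g app-rg -t SUNW.HAStoragePlus \
    -p Zpools=zpool1 hasp-zpool-rs

# Bring the group online; HAStoragePlus imports the pool and mounts its
# file systems on the node that hosts the group.
phys-schost# clresourcegroup online -eM -Z sczone app-rg
```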
Perform this procedure to add a Sun QFS shared file system for use by a zone cluster.
At this time, QFS shared file systems are supported for use only in clusters that are configured with Oracle Real Application Clusters (RAC). On clusters that are not configured with Oracle RAC, you can use a single-machine QFS file system that is configured as a highly available local file system.
Become superuser on a voting node of the global cluster that hosts the zone cluster.
Perform all steps of this procedure from a voting node of the global cluster.
On the global cluster, configure the QFS shared file system that you want to use in the zone cluster.
Follow procedures for shared file systems in Configuring Sun QFS File Systems With Sun Cluster.
On each node of the global cluster that hosts a zone-cluster node, add an entry to the /etc/vfstab file for the file system that you want to mount on the zone cluster.
phys-schost# vi /etc/vfstab
Add the file system to the zone-cluster configuration.
phys-schost# clzonecluster configure zoneclustername
clzc:zoneclustername> add fs
clzc:zoneclustername:fs> set dir=mountpoint
clzc:zoneclustername:fs> set special=QFSfilesystemname
clzc:zoneclustername:fs> set type=samfs
clzc:zoneclustername:fs> end
clzc:zoneclustername> verify
clzc:zoneclustername> commit
clzc:zoneclustername> exit
Verify the addition of the file system.
phys-schost# clzonecluster show -v zoneclustername
The following example shows the QFS shared file system Data-cz1 added to the zone cluster sczone. From the global cluster, the mount point of the file system is /zones/sczone/root/db_qfs/Data1, where /zones/sczone/root/ is the zone's root path. From the zone-cluster node, the mount point of the file system is /db_qfs/Data1.
phys-schost-1# vi /etc/vfstab
#device     device    mount                            FS     fsck  mount    mount
#to mount   to fsck   point                            type   pass  at boot  options
#
Data-cz1    -         /zones/sczone/root/db_qfs/Data1  samfs  -     no       shared,notrace

phys-schost-1# clzonecluster configure sczone
clzc:sczone> add fs
clzc:sczone:fs> set dir=/db_qfs/Data1
clzc:sczone:fs> set special=Data-cz1
clzc:sczone:fs> set type=samfs
clzc:sczone:fs> end
clzc:sczone> verify
clzc:sczone> commit
clzc:sczone> exit

phys-schost-1# clzonecluster show -v sczone
…
  Resource Name:      fs
    dir:              /db_qfs/Data1
    special:          Data-cz1
    raw:
    type:             samfs
    options:          []
…