This section provides procedures to add file systems for use by the zone cluster.
After a file system is added to a zone cluster and brought online, the file system is visible from within that zone cluster.
You cannot use the clzonecluster command to add to a zone cluster a local file system that is mounted on a single global-cluster node. Instead, use the zonecfg command as you would on a stand-alone system. The local file system is not under cluster control.
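As a minimal sketch of that alternative, the zonecfg session below adds a local UFS file system to a non-global zone. The zone name myzone, the mount point, and the device paths are hypothetical; substitute the values for your system.

```shell
# Sketch only - the zone name "myzone" and the device paths are hypothetical.
# Run zonecfg on the single global-cluster node where the device is attached;
# the resulting file system is local to that node and not under cluster control.
zonecfg -z myzone <<'EOF'
add fs
set dir=/local/data
set special=/dev/dsk/c0t0d0s6
set raw=/dev/rdsk/c0t0d0s6
set type=ufs
end
exit
EOF
```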
You cannot add a cluster file system to a zone cluster.
The following procedures are in this section:
Perform this procedure to add a highly available local file system on the global cluster for use by the zone cluster.
To add a ZFS pool to a zone cluster, instead perform the procedures in How to Add a ZFS Storage Pool to a Zone Cluster.
On the global cluster, configure the highly available local file system that you want to use in the zone cluster.
Become superuser on a node of the global cluster that hosts the zone cluster.
You perform all steps of the procedure from a node of the global cluster.
Display the /etc/vfstab entry for the file system that you want to mount on the zone cluster.
phys-schost# vi /etc/vfstab
Add the file system to the zone-cluster configuration.
phys-schost# clzonecluster configure zoneclustername
clzc:zoneclustername> add fs
clzc:zoneclustername:fs> set dir=mountpoint
clzc:zoneclustername:fs> set special=disk-device-name
clzc:zoneclustername:fs> set raw=raw-disk-device-name
clzc:zoneclustername:fs> set type=FS-type
clzc:zoneclustername:fs> end
clzc:zoneclustername> exit
dir=mountpoint
    Specifies the file system mount point
special=disk-device-name
    Specifies the name of the disk device
raw=raw-disk-device-name
    Specifies the name of the raw disk device
type=FS-type
    Specifies the type of file system
Verify the addition of the file system.
phys-schost# clzonecluster show -v zoneclustername
This example adds the highly available local file system /global/oracle/d1 for use by the sczone zone cluster.
phys-schost-1# vi /etc/vfstab
#device                 device                   mount              FS   fsck  mount   mount
#to mount               to fsck                  point              type pass  at boot options
#
/dev/md/oracle/dsk/d1   /dev/md/oracle/rdsk/d1   /global/oracle/d1  ufs  5     no      logging

phys-schost-1# clzonecluster configure sczone
clzc:sczone> add fs
clzc:sczone:fs> set dir=/global/oracle/d1
clzc:sczone:fs> set special=/dev/md/oracle/dsk/d1
clzc:sczone:fs> set raw=/dev/md/oracle/rdsk/d1
clzc:sczone:fs> set type=ufs
clzc:sczone:fs> end
clzc:sczone> exit

phys-schost-1# clzonecluster show -v sczone
…
  Resource Name:    fs
    dir:            /global/oracle/d1
    special:        /dev/md/oracle/dsk/d1
    raw:            /dev/md/oracle/rdsk/d1
    type:           ufs
    options:        []
…
Perform this procedure to add a ZFS storage pool for use by a zone cluster.
Configure the ZFS storage pool on the global cluster.
Ensure that the pool resides on shared disks that are connected to all nodes of the zone cluster.
See the Solaris ZFS Administration Guide for procedures to create a ZFS pool.
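As a minimal sketch of this step, the commands below create a pool on a shared device and check its health before the pool is handed to the zone cluster. The pool name zpool1 matches the later example, but the DID device path is hypothetical; use a shared device that all zone-cluster nodes can access.

```shell
# Sketch only - the DID device path is hypothetical; pick a shared device
# that is connected to every node of the zone cluster.
zpool create zpool1 /dev/did/dsk/d10s0

# Confirm the pool is online and healthy before adding it to the zone cluster.
zpool status zpool1
```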
Become superuser on a node of the global cluster that hosts the zone cluster.
Add the pool to the zone-cluster configuration.
phys-schost# clzonecluster configure zoneclustername
clzc:zoneclustername> add dataset
clzc:zoneclustername:dataset> set name=ZFSpoolname
clzc:zoneclustername:dataset> end
clzc:zoneclustername> exit
Verify the addition of the ZFS storage pool.
phys-schost# clzonecluster show -v zoneclustername
The following example shows the ZFS storage pool zpool1 added to the zone cluster sczone.
phys-schost-1# clzonecluster configure sczone
clzc:sczone> add dataset
clzc:sczone:dataset> set name=zpool1
clzc:sczone:dataset> end
clzc:sczone> exit

phys-schost-1# clzonecluster show -v sczone
…
  Resource Name:    dataset
    name:           zpool1
…
Perform this procedure to add a Sun StorageTek QFS shared file system for use by a zone cluster.
At this time, QFS shared file systems are only supported for use in clusters that are configured with Oracle Real Application Clusters (RAC). On clusters that are not configured with Oracle RAC, you can use a single-machine QFS file system that is configured as a highly available local file system.
On the global cluster, configure the QFS shared file system that you want to use in the zone cluster.
Follow procedures in Tasks for Configuring the Sun StorEdge QFS Shared File System for Oracle Files in Sun Cluster Data Service for Oracle RAC Guide for Solaris OS.
Become superuser on a voting node of the global cluster that hosts the zone cluster.
You perform all remaining steps of this procedure from a voting node of the global cluster.
Display the /etc/vfstab entry for the file system that you want to mount on the zone cluster.
You will use information from the entry to specify the file system to the zone-cluster configuration.
phys-schost# vi /etc/vfstab
Add the file system to the zone-cluster configuration.
phys-schost# clzonecluster configure zoneclustername
clzc:zoneclustername> add fs
clzc:zoneclustername:fs> set dir=mountpoint
clzc:zoneclustername:fs> set special=QFSfilesystemname
clzc:zoneclustername:fs> set type=samfs
clzc:zoneclustername:fs> end
clzc:zoneclustername> exit
Verify the addition of the file system.
phys-schost# clzonecluster show -v zoneclustername
The following example shows the QFS shared file system Data-cz1 added to the zone cluster sczone. From the global cluster, the mount point of the file system is /zones/sczone/root/db_qfs/Data1, where /zones/sczone/root/ is the zone's root path. From the zone-cluster node, the mount point of the file system is db_qfs/Data1.
phys-schost-1# vi /etc/vfstab
#device    device    mount                             FS     fsck  mount   mount
#to mount  to fsck   point                             type   pass  at boot options
#
Data-cz1   -         /zones/sczone/root/db_qfs/Data1   samfs  -     no      shared,notrace

phys-schost-1# clzonecluster configure sczone
clzc:sczone> add fs
clzc:sczone:fs> set dir=/db_qfs/Data1
clzc:sczone:fs> set special=Data-cz1
clzc:sczone:fs> set type=samfs
clzc:sczone:fs> end
clzc:sczone> exit

phys-schost-1# clzonecluster show -v sczone
…
  Resource Name:    fs
    dir:            /db_qfs/Data1
    special:        Data-cz1
    raw:
    type:           samfs
    options:        []
…