Sun Cluster Software Installation Guide for Solaris OS

Adding File Systems to a Zone Cluster

This section provides procedures to add file systems for use by the zone cluster.

After a file system is added to a zone cluster and brought online, the file system is visible only from within that zone cluster.


Note –

You cannot use the clzonecluster command to add a local file system, which is mounted on a single global-cluster node, to a zone cluster. Instead, use the zonecfg command as you would on a stand-alone system. Such a local file system is not under cluster control.

You cannot add a cluster file system to a zone cluster.


The following procedures are in this section:

  - How to Add a Highly Available Local File System to a Zone Cluster

  - How to Add a ZFS Storage Pool to a Zone Cluster

  - How to Add a QFS Shared File System to a Zone Cluster

How to Add a Highly Available Local File System to a Zone Cluster

Perform this procedure to add a highly available local file system on the global cluster for use by the zone cluster.


Note –

To add a ZFS storage pool to a zone cluster, instead perform the procedure in How to Add a ZFS Storage Pool to a Zone Cluster.


  1. On the global cluster, configure the highly available local file system that you want to use in the zone cluster.

    See Enabling Highly Available Local File Systems in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.

  2. Become superuser on a node of the global cluster that hosts the zone cluster.

    You perform all steps of the procedure from a node of the global cluster.

  3. Display the /etc/vfstab entry for the file system that you want to mount on the zone cluster.

    You will use information from the entry to specify the file system to the zone-cluster configuration.


    phys-schost# vi /etc/vfstab
    
  4. Add the file system to the zone-cluster configuration.


    phys-schost# clzonecluster configure zoneclustername
    clzc:zoneclustername> add fs
    clzc:zoneclustername:fs> set dir=mountpoint
    clzc:zoneclustername:fs> set special=disk-device-name
    clzc:zoneclustername:fs> set raw=raw-disk-device-name
    clzc:zoneclustername:fs> set type=FS-type
    clzc:zoneclustername:fs> end
    clzc:zoneclustername> exit
    
    dir=mountpoint

    Specifies the file system mount point

    special=disk-device-name

    Specifies the name of the disk device

    raw=raw-disk-device-name

    Specifies the name of the raw disk device

    type=FS-type

    Specifies the type of file system

  5. Verify the addition of the file system.


    phys-schost# clzonecluster show -v zoneclustername
    

Example 6–4 Adding a Highly Available Local File System to a Zone Cluster

This example adds the highly available local file system /global/oracle/d1 for use by the sczone zone cluster.


phys-schost-1# vi /etc/vfstab
#device           device        mount   FS      fsck    mount   mount
#to mount         to fsck       point   type    pass    at boot options
#                     
/dev/md/oracle/dsk/d1 /dev/md/oracle/rdsk/d1 /global/oracle/d1 ufs 5 no logging

phys-schost-1# clzonecluster configure sczone
clzc:sczone> add fs
clzc:sczone:fs> set dir=/global/oracle/d1
clzc:sczone:fs> set special=/dev/md/oracle/dsk/d1
clzc:sczone:fs> set raw=/dev/md/oracle/rdsk/d1
clzc:sczone:fs> set type=ufs
clzc:sczone:fs> end
clzc:sczone> exit

phys-schost-1# clzonecluster show -v sczone
…
  Resource Name:                            fs
    dir:                                       /global/oracle/d1
    special:                                   /dev/md/oracle/dsk/d1
    raw:                                       /dev/md/oracle/rdsk/d1
    type:                                      ufs
    options:                                   []
…
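The correspondence between the /etc/vfstab fields and the fs resource properties that are set in Step 4 can be sketched in shell. This is a sketch only, using the sample vfstab entry from this example; on a real cluster you would read the entry from /etc/vfstab rather than from a variable.

```shell
# Sample /etc/vfstab entry from Example 6-4.  Fields: device to mount,
# device to fsck, mount point, FS type, fsck pass, mount at boot, options.
entry='/dev/md/oracle/dsk/d1 /dev/md/oracle/rdsk/d1 /global/oracle/d1 ufs 5 no logging'

# Split the entry into its fields and map them to the fs-resource properties.
set -- $entry
echo "set special=$1"   # device to mount  -> special
echo "set raw=$2"       # device to fsck   -> raw
echo "set dir=$3"       # mount point      -> dir
echo "set type=$4"      # FS type          -> type
```

The printed lines match the property assignments entered in the clzonecluster fs resource scope above.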

How to Add a ZFS Storage Pool to a Zone Cluster

Perform this procedure to add a ZFS storage pool for use by a zone cluster.

  1. Configure the ZFS storage pool on the global cluster.


    Note –

    Ensure that the pool resides on shared disks that are connected to all nodes of the zone cluster.


    See Solaris ZFS Administration Guide for procedures to create a ZFS pool.

  2. Become superuser on a node of the global cluster that hosts the zone cluster.

  3. Add the pool to the zone-cluster configuration.


    phys-schost# clzonecluster configure zoneclustername
    clzc:zoneclustername> add dataset
    clzc:zoneclustername:dataset> set name=ZFSpoolname
    clzc:zoneclustername:dataset> end
    clzc:zoneclustername> exit
    
  4. Verify the addition of the dataset.


    phys-schost# clzonecluster show -v zoneclustername
    

Example 6–5 Adding a ZFS Storage Pool to a Zone Cluster

The following example shows the ZFS storage pool zpool1 added to the zone cluster sczone.


phys-schost-1# clzonecluster configure sczone
clzc:sczone> add dataset
clzc:sczone:dataset> set name=zpool1
clzc:sczone:dataset> end
clzc:sczone> exit

phys-schost-1# clzonecluster show -v sczone
…
  Resource Name:                                dataset
    name:                                          zpool1
…
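The verification in Step 4 can also be scripted. The following is a minimal sketch that checks captured clzonecluster show -v output for the dataset resource; the output fragment here is the one shown in this example, assumed to have been captured beforehand.

```shell
# Output fragment as shown in Example 6-5 (normally captured with
# clzonecluster show -v sczone).
show_output='  Resource Name:                                dataset
    name:                                          zpool1'

# Confirm that the zpool1 dataset resource appears in the configuration.
if printf '%s\n' "$show_output" | grep -q 'name:[[:space:]]*zpool1'; then
    echo "dataset zpool1 is configured"
else
    echo "dataset zpool1 is missing"
fi
```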

How to Add a QFS Shared File System to a Zone Cluster

Perform this procedure to add a Sun StorageTek QFS shared file system for use by a zone cluster.


Note –

At this time, QFS shared file systems are supported only for use in clusters that are configured with Oracle Real Application Clusters (RAC). On clusters that are not configured with Oracle RAC, you can use a single-machine QFS file system that is configured as a highly available local file system.


  1. On the global cluster, configure the QFS shared file system that you want to use in the zone cluster.

    Follow procedures in Tasks for Configuring the Sun StorEdge QFS Shared File System for Oracle Files in Sun Cluster Data Service for Oracle RAC Guide for Solaris OS.

  2. Become superuser on a voting node of the global cluster that hosts the zone cluster.

    You perform all remaining steps of this procedure from a voting node of the global cluster.

  3. Display the /etc/vfstab entry for the file system that you want to mount on the zone cluster.

    You will use information from the entry to specify the file system to the zone-cluster configuration.


    phys-schost# vi /etc/vfstab
    
  4. Add the file system to the zone-cluster configuration.


    phys-schost# clzonecluster configure zoneclustername
    clzc:zoneclustername> add fs
    clzc:zoneclustername:fs> set dir=mountpoint
    clzc:zoneclustername:fs> set special=QFSfilesystemname
    clzc:zoneclustername:fs> set type=samfs
    clzc:zoneclustername:fs> end
    clzc:zoneclustername> exit
    
  5. Verify the addition of the file system.


    phys-schost# clzonecluster show -v zoneclustername
    

Example 6–6 Adding a QFS Shared File System to a Zone Cluster

The following example shows the QFS shared file system Data-cz1 added to the zone cluster sczone. From the global cluster, the mount point of the file system is /zones/sczone/root/db_qfs/Data1, where /zones/sczone/root/ is the zone's root path. From a zone-cluster node, the mount point of the same file system is /db_qfs/Data1.


phys-schost-1# vi /etc/vfstab
#device           device        mount   FS      fsck    mount     mount
#to mount         to fsck       point   type    pass    at boot   options
#                     
Data-cz1          -            /zones/sczone/root/db_qfs/Data1 samfs - no shared,notrace

phys-schost-1# clzonecluster configure sczone
clzc:sczone> add fs
clzc:sczone:fs> set dir=/db_qfs/Data1
clzc:sczone:fs> set special=Data-cz1
clzc:sczone:fs> set type=samfs
clzc:sczone:fs> end
clzc:sczone> exit

phys-schost-1# clzonecluster show -v sczone
…
  Resource Name:                            fs
    dir:                                       /db_qfs/Data1
    special:                                   Data-cz1
    raw:                                       
    type:                                      samfs
    options:                                   []
…
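The relationship between the two mount points described above is simple path concatenation: the global-cluster view of the file system is the zone's root path prepended to the dir value set in the zone-cluster configuration. A sketch using the values from this example:

```shell
zonepath=/zones/sczone/root   # the zone's root path (from Example 6-6)
dir=/db_qfs/Data1             # mount point as set with "set dir=" above

# From the global cluster, the file system appears under the zone's root path.
global_mountpoint="${zonepath}${dir}"
echo "$global_mountpoint"     # /zones/sczone/root/db_qfs/Data1
```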