Sun Cluster Software Installation Guide for Solaris OS

How to Add a Local File System to a Zone Cluster

Perform this procedure to add a local file system on the global cluster for use by the zone cluster.


Note –

To add a ZFS storage pool to a zone cluster, perform instead the procedures in How to Add a ZFS Storage Pool to a Zone Cluster.

Alternatively, to configure a ZFS storage pool to be highly available in a zone cluster, see How to Set Up the HAStoragePlus Resource Type to Make a Local Solaris ZFS Highly Available in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.


  1. Become superuser on a node of the global cluster that hosts the zone cluster.


    Note –

    Perform all steps of the procedure from a node of the global cluster.


  2. On the global cluster, create a file system that you want to use in the zone cluster.

    Ensure that the file system is created on shared disks.
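
    For example, if the shared disks are managed by Solaris Volume Manager, you might create a UFS file system on a metadevice with the newfs command. The following line is a sketch only; the disk set and metadevice names are placeholders, so substitute the values for your configuration.


    phys-schost# newfs /dev/md/diskset/rdsk/dN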

  3. On each node of the global cluster that hosts a zone-cluster node, add an entry to the /etc/vfstab file for the file system that you want to mount on the zone cluster.


    phys-schost# vi /etc/vfstab
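
    An entry for a UFS file system on a Solaris Volume Manager metadevice might look like the following line, which follows the format shown in Example 7–4. The device paths and mount point are placeholders for the values in your configuration.


    /dev/md/diskset/dsk/dN /dev/md/diskset/rdsk/dN /mountpoint ufs 5 no logging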
    
  4. Add the file system to the zone-cluster configuration.


    phys-schost# clzonecluster configure zoneclustername
    clzc:zoneclustername> add fs
    clzc:zoneclustername:fs> set dir=mountpoint
    clzc:zoneclustername:fs> set special=disk-device-name
    clzc:zoneclustername:fs> set raw=raw-disk-device-name
    clzc:zoneclustername:fs> set type=FS-type
    clzc:zoneclustername:fs> end
    clzc:zoneclustername> verify
    clzc:zoneclustername> commit
    clzc:zoneclustername> exit
    
    dir=mountpoint

    Specifies the file system mount point

    special=disk-device-name

    Specifies the name of the disk device

    raw=raw-disk-device-name

    Specifies the name of the raw disk device

    type=FS-type

    Specifies the type of file system

  5. Verify the addition of the file system.


    phys-schost# clzonecluster show -v zoneclustername
    

Example 7–4 Adding a Local File System to a Zone Cluster

This example adds the local file system /global/oracle/d1 for use by the sczone zone cluster.


phys-schost-1# vi /etc/vfstab
#device           device        mount   FS      fsck    mount   mount
#to mount         to fsck       point   type    pass    at boot options
#                     
/dev/md/oracle/dsk/d1 /dev/md/oracle/rdsk/d1 /global/oracle/d1 ufs 5 no logging

phys-schost-1# clzonecluster configure sczone
clzc:sczone> add fs
clzc:sczone:fs> set dir=/global/oracle/d1
clzc:sczone:fs> set special=/dev/md/oracle/dsk/d1
clzc:sczone:fs> set raw=/dev/md/oracle/rdsk/d1
clzc:sczone:fs> set type=ufs
clzc:sczone:fs> end
clzc:sczone> verify
clzc:sczone> commit
clzc:sczone> exit

phys-schost-1# clzonecluster show -v sczone
…
  Resource Name:                            fs
    dir:                                       /global/oracle/d1
    special:                                   /dev/md/oracle/dsk/d1
    raw:                                       /dev/md/oracle/rdsk/d1
    type:                                      ufs
    options:                                   []
…

Next Steps

Configure the file system to be highly available by using an HAStoragePlus resource. The HAStoragePlus resource manages the mounting of the file system on the zone-cluster node that currently hosts the applications that are configured to use the file system. See Enabling Highly Available Local File Systems in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.
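
The following commands sketch one way to create such a resource from a node of the global cluster. The resource-group and hasp-resource names are placeholders, and the exact options can vary by release, so treat this as an outline and follow the referenced guide for the authoritative procedure.


phys-schost# clresourcetype register -Z zoneclustername SUNW.HAStoragePlus
phys-schost# clresourcegroup create -Z zoneclustername resource-group
phys-schost# clresource create -Z zoneclustername -g resource-group \
-t SUNW.HAStoragePlus -p FilesystemMountPoints=mountpoint hasp-resource
phys-schost# clresourcegroup online -Z zoneclustername -eM resource-group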