Sun Cluster 3.0 Installation Guide

How to Add Cluster File Systems

Perform this task for each cluster file system you add.


Caution -

Creating a file system destroys any data on the disks. Be sure you specify the correct disk device name. If you specify the wrong device name, you will erase the contents of that device when the new file system is created.


  1. Become superuser on any node in the cluster.


    Tip -

    For faster file-system creation, become superuser on the current primary of the global device for which you are creating a file system.


  2. Create a file system by using the newfs(1M) command.


    # newfs raw-disk-device
    

    The following table shows examples of names for the raw-disk-device argument. Note that naming conventions differ for each volume manager.

    Table 2-3 Sample Raw Disk Device Names

    Volume Manager            Sample Disk Device Name     Description
    Solstice DiskSuite        /dev/md/oracle/rdsk/d1      Raw disk device d1 within the oracle diskset
    VERITAS Volume Manager    /dev/vx/rdsk/oradg/vol01    Raw disk device vol01 within the oradg disk group
    None                      /dev/global/rdsk/d1s3       Raw disk device d1s3
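
    For example, to create a UFS file system on the Solstice DiskSuite metadevice /dev/md/oracle/rdsk/d1 shown in the table (the same hypothetical device used in the example at the end of this procedure), you would run a command like the following.


    # newfs /dev/md/oracle/rdsk/d1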

  3. On each node in the cluster, create a mount-point directory for the cluster file system.

    A mount point is required on each node, even if the cluster file system will not be accessed on that node.


    # mkdir -p /global/device-group/mount-point
    
    device-group

    Name of the directory that corresponds to the name of the device group that contains the device

    mount-point

    Name of the directory on which to mount the cluster file system


    Tip -

    For ease of administration, create the mount point in the /global/device-group directory. This location enables you to easily distinguish cluster file systems, which are globally available, from local file systems.
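
    For example, assuming the hypothetical device group oracle and mount point d1 used in the example at the end of this procedure, you would run the following on every node.


    # mkdir -p /global/oracle/d1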


  4. On each node in the cluster, add an entry to the /etc/vfstab file for the mount point.


    Note -

    The syncdir mount option is not required for cluster file systems. If you specify syncdir, you are guaranteed POSIX-compliant file system behavior. If you do not, you get the same behavior as with UFS file systems. Not specifying syncdir can significantly improve the performance of writes that allocate disk blocks, such as when appending data to a file. However, in some cases, without syncdir you would not discover an out-of-space condition until you close the file. With syncdir (and POSIX behavior), the out-of-space condition would be discovered before the close. The cases in which you could have problems if you do not specify syncdir are rare.


    1. To automatically mount the cluster file system, set the mount at boot field to yes.

    2. Use the following required mount options.

      • If you are using Solaris UFS logging, use the global,logging mount options.

      • If a cluster file system uses a Solstice DiskSuite trans metadevice, use the global mount option (do not use the logging mount option). Refer to Solstice DiskSuite documentation for information about setting up trans metadevices.


      Note -

      Logging is required for all cluster file systems.


    3. Ensure that, for each cluster file system, the information in its /etc/vfstab entry is identical on each node.

    4. Check the boot order dependencies of the file systems.

      For example, consider a scenario in which phys-schost-1 mounts disk device d0 on /global/oracle, and phys-schost-2 mounts disk device d1 on /global/oracle/logs. With this configuration, phys-schost-2 can boot and mount /global/oracle/logs only after phys-schost-1 boots and mounts /global/oracle. See the sample vfstab entries after this list.

    5. Make sure the entries in each node's /etc/vfstab file list devices in the same order.

    Refer to the vfstab(4) man page for details.
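
    The following sketch shows how the /etc/vfstab entries for the boot-order scenario described above might look. The Solstice DiskSuite device paths and the diskset names oracle and logs are hypothetical illustrations; substitute the device paths that apply to your configuration. Both entries appear identically on every node.

    # Hypothetical entries; both lines appear in /etc/vfstab on all nodes
    #device                device                  mount                FS    fsck  mount    mount
    #to mount              to fsck                 point                type  pass  at boot  options
    /dev/md/oracle/dsk/d0  /dev/md/oracle/rdsk/d0  /global/oracle       ufs   2     yes      global,logging
    /dev/md/logs/dsk/d1    /dev/md/logs/rdsk/d1    /global/oracle/logs  ufs   2     yes      global,logging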

  5. On any node in the cluster, verify that mount points exist and /etc/vfstab file entries are correct on all nodes of the cluster.


    # sccheck
    

    If no errors occur, nothing is returned.

  6. From any node in the cluster, mount the cluster file system.


    # mount /global/device-group/mount-point
    
  7. On each node of the cluster, verify that the cluster file system is mounted.

    You can use either the df(1M) or mount(1M) command to list mounted file systems.
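
    For example, assuming the hypothetical mount point /global/oracle/d1 used in the example below, the following command run on each node should show the cluster file system in its output.


    # df -k /global/oracle/d1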

Example--Creating a Cluster File System

The following example creates a UFS cluster file system on the Solstice DiskSuite metadevice /dev/md/oracle/rdsk/d1.


# newfs /dev/md/oracle/rdsk/d1
...
 
(on each node:)
# mkdir -p /global/oracle/d1
# vi /etc/vfstab
#device           device        mount   FS      fsck    mount   mount
#to mount         to fsck       point   type    pass    at boot options
#                       
/dev/md/oracle/dsk/d1 /dev/md/oracle/rdsk/d1 /global/oracle/d1 ufs 2 yes global,logging
(save and exit)
 
(on one node:)
# sccheck
# mount /global/oracle/d1
# mount
...
/global/oracle/d1 on /dev/md/oracle/dsk/d1 read/write/setuid/global/logging/
largefiles on Sun Oct 3 08:56:16 1999

Where to Go From Here

If your cluster nodes are connected to more than one public subnet, go to "How to Configure Additional Public Network Adapters" to configure additional public network adapters.

Otherwise, go to "How to Configure Public Network Management (PNM)" to configure PNM and set up NAFO groups.