Sun Cluster 3.0 System Administration Guide

3.4 Administering Cluster File Systems

Table 3-2 Task Map: Administering Cluster File Systems

Task: Add cluster file systems after the initial Sun Cluster installation
    - Use newfs and mkdir
For instructions, go to "3.4.1 How to Add an Additional Cluster File System"

Task: Remove a cluster file system
    - Use fuser and umount
For instructions, go to "3.4.2 How to Remove a Cluster File System"

Task: Check global mount points in a cluster for consistency across nodes
    - Use sccheck
For instructions, go to "3.4.3 How to Check Global Mounts in a Cluster"

3.4.1 How to Add an Additional Cluster File System

Perform this task for each cluster file system you create after your initial Sun Cluster installation.


Caution -

Be sure you have specified the correct disk device name. Creating a cluster file system destroys any data on the disks. If you specify the wrong device name, you will erase data that you may not intend to delete.


To add an additional cluster file system, perform the following steps:

  1. Become superuser on any node in the cluster.


    Tip -

    For faster file system creation, become superuser on the current primary of the global device for which you are creating a file system.


  2. Create a file system using the newfs(1M) command.


    # newfs raw-disk-device
    

    Table 3-3 shows examples of names for the raw-disk-device argument. Note that naming conventions differ for each volume manager.

    Table 3-3 Sample Raw Disk Device Names

    If Your Volume Manager Is ...    A Disk Device Name Might Be ...    Description

    Solstice DiskSuite               /dev/md/oracle/rdsk/d1             Raw disk device d1 within the oracle metaset.
    VERITAS Volume Manager           /dev/vx/rdsk/oradg/vol01           Raw disk device vol01 within the oradg disk group.
    None                             /dev/global/rdsk/d1s3              Raw disk device for block slice d1s3.
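
    For example, using the sample VERITAS Volume Manager volume name from Table 3-3 (a hypothetical device), the command might be:


    # newfs /dev/vx/rdsk/oradg/vol01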

  3. On each node in the cluster, create a mount point directory for the cluster file system.

    A mount point is required on each node, even if the cluster file system will not be accessed on that node.


    # mkdir -p /global/device-group/mount-point
    
    device-group

    Name of the directory that corresponds to the device group that contains the device.

    mount-point

    Name of the directory on which to mount the cluster file system.


    Tip -

    For ease of administration, create the mount point in the /global/device-group directory. This enables you to easily distinguish cluster file systems, which are globally available, from local file systems.
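
    For example, for a hypothetical oradg device group and vol01 volume, you might create the mount point on each node as follows:


    # mkdir -p /global/oradg/vol01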


  4. On each node in the cluster, add an entry to the /etc/vfstab file for the mount point.

    1. To automatically mount a cluster file system, set the mount at boot field to yes.

    2. Use the following required mount options:

      • The global mount option is required for all cluster file systems. This option identifies the file system as a cluster file system.

      • File system logging is required for all cluster file systems. UFS logging can be done either through Solstice DiskSuite metatrans devices or directly through the Solaris UFS logging mount option, but the two approaches must not be combined. If you use Solaris UFS logging directly, specify the logging mount option. If you use metatrans devices, no additional mount option is needed.

    3. Ensure that, for each cluster file system, the information in its /etc/vfstab entry is identical on each node that has that entry.

    4. Pay attention to boot order dependencies of the file systems.

      Normally, you should not nest the mount points for cluster file systems. For example, consider the scenario where phys-schost-1 mounts disk device d0 on /global/oracle, and phys-schost-2 mounts disk device d1 on /global/oracle/logs. With this configuration, phys-schost-2 can boot up and mount /global/oracle/logs only after phys-schost-1 boots and mounts /global/oracle.

    5. Make sure the entries in each node's /etc/vfstab file list common devices in the same order.

      For example, if phys-schost-1 and phys-schost-2 have a physical connection to devices d0, d1, and d2, the entries in their respective /etc/vfstab files should be listed as d0, d1, and d2.

    Refer to the vfstab(4) man page for details.
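
    For example, a vfstab entry for the hypothetical VxVM volume vol01 in the oradg disk group, mounted globally with Solaris UFS logging, might look like the following (shown here as a single line):


    /dev/vx/dsk/oradg/vol01 /dev/vx/rdsk/oradg/vol01 /global/oradg/vol01 ufs 2 yes global,logging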

  5. On any node in the cluster, verify that mount points exist and /etc/vfstab file entries are correct on all nodes of the cluster.


    # sccheck
    

    If there are no errors, nothing is returned.

  6. From any node in the cluster, mount the cluster file system.


    # mount /global/device-group/mount-point
    
  7. On each node of the cluster, verify that the cluster file system is mounted.

    You can use either the df(1M) or mount(1M) command to list mounted file systems.
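
    For example, you might run the following commands on each node and confirm that the new mount point (here, the hypothetical /global/oradg/vol01) appears in the output:


    # df -k
    # mount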

3.4.1.1 Example--Adding a Cluster File System

The following example creates a UFS cluster file system on the Solstice DiskSuite metadevice /dev/md/oracle/rdsk/d1.


# newfs /dev/md/oracle/rdsk/d1
...
 
[on each node:]
# mkdir -p /global/oracle/d1
 
# vi /etc/vfstab
#device           device       mount   FS      fsck    mount   mount
#to mount        to fsck       point   type    pass    at boot options
#                       
/dev/md/oracle/dsk/d1 /dev/md/oracle/rdsk/d1 /global/oracle/d1 ufs 2 yes global,logging
[save and exit]
 
[on one node:]
# sccheck
 
# mount /global/oracle/d1
# mount
...
/global/oracle/d1 on /dev/md/oracle/dsk/d1 read/write/setuid/global/logging/largefiles on Sun Oct 3 08:56:16 1999

3.4.2 How to Remove a Cluster File System

You "remove" a cluster file system by merely unmounting it. If you also want to remove or delete the data, remove the underlying disk device (or metadevice or volume) from the system.


Note -

Cluster file systems are automatically unmounted as part of the system shutdown that occurs when you run scshutdown(1M) to stop the entire cluster. A cluster file system is not unmounted when you run shutdown to stop a single node. However, if the node being shut down is the only node with a connection to the disk, any attempt to access the cluster file system on that disk results in an error.


To remove a cluster file system, perform the following steps:

  1. Become superuser on a node in the cluster.

  2. Determine which cluster file systems are mounted.


    # mount -v
    
  3. On each node, list all processes that are using the cluster file system, so you know which processes you are going to stop.


    # fuser -c [ -u ] mount-point
    
    -c

    Reports on files that are mount points for file systems and any files within those mounted file systems.

    -u

    (Optional) Displays the user login name for each process ID.

    mount-point

    Specifies the name of the cluster file system for which you want to stop processes.
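
    For example, to list the processes and their user login names for the /global/oracle/d1 mount point used in the example below:


    # fuser -c -u /global/oracle/d1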

  4. On each node, stop all processes for the cluster file system.

    Use your preferred method for stopping processes. If necessary, use the following command to force termination of processes associated with the cluster file system.


    # fuser -c -k mount-point
    

    A SIGKILL is sent to each process using the cluster file system.

  5. On each node, verify that no processes are using the file system.


    # fuser -c mount-point
    
  6. From just one node, unmount the file system.


    # umount mount-point
    
    mount-point

    Specifies the name of the cluster file system you want to unmount. This can be either the directory name where the cluster file system is mounted, or the device name path of the file system.
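
    For example, either of the following forms could be used for the cluster file system in the example below (the second form assumes the Solstice DiskSuite metadevice shown there):


    # umount /global/oracle/d1
    # umount /dev/md/oracle/dsk/d1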

  7. (Optional) Edit the /etc/vfstab file to delete the entry for the cluster file system being removed.

    Perform this step on each cluster node that has an entry for this cluster file system in its /etc/vfstab file.

  8. (Optional) Remove the disk device group, metadevice, or plex.

    See your volume manager documentation for more information.

3.4.2.1 Example--Removing a Cluster File System

The following example removes a UFS cluster file system mounted on the Solstice DiskSuite metadevice /dev/md/oracle/rdsk/d1.


# mount -v
...
/global/oracle/d1 on /dev/md/oracle/dsk/d1 read/write/setuid/global/logging/largefiles on Sun Oct  3 08:56:16 1999
# fuser -c /global/oracle/d1
/global/oracle/d1: 4006c
# fuser -c -k /global/oracle/d1
/global/oracle/d1: 4006c
# fuser -c /global/oracle/d1
/global/oracle/d1:
# umount /global/oracle/d1
 
[On each node, remove the /global/oracle/d1 entry:]
# vi /etc/vfstab
#device           device        mount   FS      fsck    mount   mount
#to mount         to fsck       point   type    pass    at boot options
#                       
/dev/md/oracle/dsk/d1 /dev/md/oracle/rdsk/d1 /global/oracle/d1 ufs 2 yes global,logging
[Save and exit.]

Note -

To remove the data on the cluster file system, remove the underlying device. See your volume manager documentation for more information.


3.4.3 How to Check Global Mounts in a Cluster

The sccheck(1M) utility verifies the syntax of the entries for cluster file systems in the /etc/vfstab file. If there are no errors, nothing is returned.


Note -

Run sccheck after making cluster configuration changes, such as removing a cluster file system, that affect devices or volume management components.


  1. Become superuser on a node in the cluster.

  2. Check the cluster global mounts.


    # sccheck
    

3.4.4 How to Remove a Node From a Disk Device Group (Solstice DiskSuite)

Use this procedure to remove a cluster node from disk device groups (disksets) running Solstice DiskSuite.

  1. Determine the disk device group(s) of which the node to be removed is a member.


    # scstat -D
    
  2. Become superuser on the node that currently owns the disk device group from which you want to remove the node.

  3. From the disk device group, delete the hostname of the node that you are removing.

    Repeat this step for each disk device group from which the node is being removed.


    # metaset -s setname -d -f -h node
    
    -s setname

    Specifies the disk device group (diskset) name

    -d

    Deletes from the disk device group

    -f

    Force

    -h node

    Removes the node from the list of nodes that can master the disk device group


    Note -

    The update can take several minutes to complete.
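
    For example, to remove node phys-schost-2 (the node used in the example that follows) from both dg-schost-1 and a hypothetical second diskset dg-schost-2, assuming the same node currently owns both disksets, you might run:


    # metaset -s dg-schost-1 -d -f -h phys-schost-2
    # metaset -s dg-schost-2 -d -f -h phys-schost-2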


  4. Verify that the node has been removed from the disk device group.

    The disk device group name will match the diskset name specified with metaset.


    # scstat -D
    

3.4.4.1 Example--Removing a Node From a Disk Device Group (SDS)

The following example shows the removal of the hostname from a disk device group (diskset) and verifies that the node has been removed from the disk device group. Although the example shows the removal of a node from a single disk device group, a node can belong to more than one disk device group at a time. Repeat the metaset command for each disk device group from which you want to remove the node.


[Determine the disk device group(s) for the node:]
# scstat -D
  -- Device Group Servers --
                      Device Group  Primary       Secondary
                      ------------  -------       ---------
  Device group servers: dg-schost-1  phys-schost-1  phys-schost-2
[Become superuser.]
[Remove the hostname from all disk device groups:]
# metaset -s dg-schost-1 -d -f -h phys-schost-2
[Verify removal of the node:]
# scstat -D
  -- Device Group Servers --
                       Device Group  Primary       Secondary
                       ------------  -------       ---------
  Device group servers: dg-schost-1  phys-schost-1  -

3.4.5 How to Remove a Node From a Disk Device Group (VERITAS Volume Manager)

Use this procedure to remove a cluster node from an existing cluster disk device group (disk group) running VERITAS Volume Manager (VxVM).

  1. Determine the disk device group of which the node to be removed is a member.


    # scstat -D
    
  2. Become superuser on a current cluster member node.

  3. Execute the scsetup utility.


    # scsetup
    

    The Main Menu appears.

  4. Reconfigure a disk device group by entering 3 (Device groups and volumes).

  5. Remove the node from the VxVM disk device group by entering 5 (Remove a node from a VxVM device group).

    Follow the prompts to remove the cluster node from the disk device group. You will be asked for information about the following:

    VxVM device group

    Node name

  6. Verify that the node has been removed from the VxVM disk device group:


    # scstat -D	
      ...
      Device group name: devicegroupname
      Device group type: VxVM
      Device group failback enabled: no
      Device group node list: nodename
      Diskgroup name: diskgroupname
      ...
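
    As an alternative to walking through the menus, note that scsetup builds and runs an equivalent scconf(1M) command; for the device group and node used in the example that follows, that command would be:


    # scconf -r -D name=dg1,nodelist=phys-schost-4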

3.4.5.1 Example--Removing a Node From a Disk Device Group (VxVM)

This example shows removal of the node named phys-schost-4 from the dg1 VxVM disk device group.


[Determine the disk device group for the node:]
# scstat -D
  -- Device Group Servers --
                       Device Group  Primary        Secondary
                       ------------  -------        ---------
  Device group servers: dg-schost-1  phys-schost-1  phys-schost-2
[Become superuser and execute the scsetup utility:]
# scsetup
[Select option 3:]
*** Main Menu ***
    Please select from one of the following options:
      ...
      3) Device groups and volumes
      ...
    Option: 3
[Select option 5:]
*** Device Groups Menu ***
    Please select from one of the following options:
      ...
      5) Remove a node from a VxVM device group
      ...
    Option:  5
[Answer the questions to remove the node:]
>>> Remove a Node from a VxVM Device Group <<<
    ...
    Is it okay to continue (yes/no) [yes]? yes
    ...
    Name of the VxVM device group from which you want to remove a node?  dg1
    Name of the node to remove from this group?  phys-schost-4
    Is it okay to proceed with the update (yes/no) [yes]? yes
 
scconf -r -D name=dg1,nodelist=phys-schost-4
 
    Command completed successfully.
    Hit ENTER to continue: 

[Quit the scsetup Device Groups Menu and Main Menu:]
    ...
    Option:  q
[Verify that the node was removed:]
# scstat -D
  ...
  Device group name:               dg1
  Device group type:               VxVM
  Device group failback enabled:   no
  Device group node list:          phys-schost-3
  Diskgroup name:                  dg1
  ...