Sun Cluster 3.0 System Administration Guide

Chapter 3 Administering Global Devices and Cluster File Systems

This chapter provides the procedures for administering global devices and cluster file systems.

For a high-level description of the related procedures in this chapter, see Table 3-1 and Table 3-2.

See the Sun Cluster 3.0 Concepts document for conceptual information related to global devices, the global namespace, disk device groups, and the cluster file system.

3.1 Administering Global Devices and the Global Namespace Overview

Administration of Sun Cluster disk device groups depends on the volume manager installed on the cluster. Solstice DiskSuite is "cluster-aware," so you add, register, and remove disk device groups by using the Solstice DiskSuite metaset(1M) command. With VERITAS Volume Manager (VxVM), you create disk groups by using VxVM commands. Then you register the disk groups as Sun Cluster disk device groups through the scsetup(1M) utility. When removing VxVM disk device groups, you use both the scsetup utility and VxVM commands.

When administering disk device groups (volume manager disk groups), you need to be on the cluster node that is the primary node for the group.

Normally, you do not need to administer the global device namespace because the global namespace is automatically set up during installation and automatically updated during Solaris operating environment reconfiguration reboots. However, if the global namespace needs to be regenerated or updated, you can run the scgdevs(1M) command from any cluster node. This causes the global namespace to be updated on all other cluster members, as well as on nodes that might join the cluster in the future.

3.1.1 Global Device Permissions for Solstice DiskSuite

For Solstice DiskSuite, changes made to global device permissions are not automatically propagated to all the nodes in the cluster. If you want to change permissions on a global device, you must manually change the permissions on all the nodes in the cluster. For example, if you want to change permissions on global device /dev/global/dsk/d3s0 to 644, you must execute

# chmod 644 /dev/global/dsk/d3s0

on all nodes in the cluster.
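
If remote shell access (rsh(1)) is configured between the cluster nodes, one way to apply such a change everywhere is to loop over the nodes from a single host. This is a minimal sketch only; the node names phys-schost-1 and phys-schost-2 are hypothetical.


# for node in phys-schost-1 phys-schost-2 ; do rsh $node chmod 644 /dev/global/dsk/d3s0 ; done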

VxVM does not support the chmod command. To change global device permissions in VxVM, consult the VxVM administrator's guide.

3.1.2 VERITAS Volume Manager Administration Considerations

For Sun Cluster to correctly maintain the VxVM namespace, when any configuration information for a disk group or volume is changed, you must register the Sun Cluster disk device group configuration changes. This ensures that the namespace on all cluster nodes is updated. Examples of configuration changes that impact the namespace include adding, removing, or renaming a volume; and changing the volume permissions, owner, or group ID.


Note -

Never import or deport a VxVM disk group using VxVM commands once the disk group has been registered with the cluster as a Sun Cluster disk device group. The Sun Cluster software handles all cases where disk groups need to be imported or deported.


Each VxVM disk group must have a cluster-wide unique minor number. By default, when a disk group is created, VxVM chooses a random number that is a multiple of 1000 as that disk group's base minor number. For most configurations with only a small number of disk groups, this is sufficient to guarantee uniqueness. However, it is possible that the minor number for a newly-created disk group will conflict with the minor number of a pre-existing disk group imported on a different cluster node. In this case, attempting to register the Sun Cluster disk device group will fail. To fix this problem, the new disk group should be given a new minor number that is a unique value and then registered as a Sun Cluster disk device group.

If you are setting up a mirrored volume, Dirty Region Logging (DRL) can be used to decrease volume recovery time in the event of a system crash. Use of DRL is strongly recommended.
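
For example, a dirty region log can be added to an existing mirrored volume with the vxassist addlog operation. This is a sketch only; the disk group and volume names (oradg, vol01) are hypothetical.


# vxassist -g oradg addlog vol01 logtype=drl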

3.2 Administering Cluster File Systems Overview

You use standard Solaris file system commands, such as mount and newfs, to administer cluster file systems. You mount a cluster file system by specifying the -g option to the mount command. Cluster file systems can also be mounted automatically at boot.


Note -

No special Sun Cluster commands are necessary for cluster file system administration. You administer a cluster file system as you would any other Solaris file system.
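
For example, a cluster file system on a hypothetical Solstice DiskSuite metadevice might be mounted manually with the -g option, or with the equivalent global mount option.


# mount -g /dev/md/oracle/dsk/d1 /global/oracle/d1
# mount -o global /dev/md/oracle/dsk/d1 /global/oracle/d1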


3.3 Administering Disk Device Groups


Note -

The scsetup(1M) utility is an interactive interface to the scconf(1M) command. When scsetup runs, it generates scconf commands. These generated commands are shown in the examples at the end of some procedures.


Table 3-1 Task Map: Administering Disk Device Groups

Task 

For Instructions, Go To... 

Update the global device namespace (without a reconfiguration reboot) 

    - Use scgdevs

"3.3.1 How to Update the Global Device Namespace"

Add Solstice DiskSuite disksets and register them as disk device groups 

    - Use metaset

"3.3.2 How to Add and Register a Disk Device Group (Solstice DiskSuite)"

Add VERITAS Volume Manager disk groups as disk device groups 

    - Use VxVM commands and scsetup

"3.3.3 How to Create a New Disk Group When Initializing Disks (VERITAS Volume Manager)"

 

"3.3.4 How to Create a New Disk Group When Encapsulating Disks (VERITAS Volume Manager)"

 

"3.3.5 How to Add a New Volume to an Existing Disk Device Group (VERITAS Volume Manager)"

 

"3.3.6 How to Make an Existing Disk Group Into a Disk Device Group (VERITAS Volume Manager)"

 

"3.3.7 How to Assign a New Minor Number to a Disk Device Group (VERITAS Volume Manager)"

 

"3.3.8 How to Register a Disk Group as a Disk Device Group (VERITAS Volume Manager)"

 

"3.3.9 How to Register Disk Group Configuration Changes (VERITAS Volume Manager)"

Remove Solstice DiskSuite disk device groups from the configuration 

    - Use metaset and metaclear

"3.3.10 How to Remove and Unregister a Disk Device Group (Solstice DiskSuite)"

Remove VERITAS Volume Manager disk device groups from the configuration 

    - Use scsetup (to generate scconf)

"3.3.11 How to Remove a Volume From a Disk Device Group (VERITAS Volume Manager)"

 

"3.3.12 How to Remove and Unregister a Disk Device Group (VERITAS Volume Manager)"

Add a node to a VERITAS Volume Manager disk device group 

    - Use scsetup (to generate scconf)

"3.3.13 How to Add a Node to a Disk Device Group (VERITAS Volume Manager)"

Change disk device group properties 

    - Use scsetup (to generate scconf)

"3.3.14 How to Change Disk Device Properties "

Display disk device groups and properties 

    - Use scconf

"3.3.15 How to List a Disk Device Group Configuration"

Switch the primary for a disk device group 

    - Use scswitch

"3.3.16 How to Switch the Primary for a Device Group"

3.3.1 How to Update the Global Device Namespace

When adding a new global device, manually update the global device namespace by running scgdevs(1M).


Note -

The scgdevs command does not have any effect if the node running the command is not currently a cluster member or if the /global/.devices/node@nodeID file system is not mounted.


  1. Become superuser on a node of the cluster.

  2. Use scgdevs to reconfigure the namespace.


    # scgdevs
    

3.3.1.1 Example--Updating the Global Device Namespace

The following example shows output generated by a successful run of scgdevs.


# scgdevs 
Configuring the /dev/global directory (global devices)...
obtaining access to all attached disks
reservation program successfully exiting

3.3.2 How to Add and Register a Disk Device Group (Solstice DiskSuite)

Disk device groups map directly to Solstice DiskSuite disksets. When you create a diskset using metaset(1M), you also create the disk device group with the same name and register it as a Sun Cluster disk device group.

  1. Become superuser on the node connected to the disks where you want to create the diskset.

  2. Use metaset to add the Solstice DiskSuite diskset and register it as a disk device group with Sun Cluster.


    # metaset -s diskset -a -h node-list
    
    -s diskset

    Specifies the diskset to be created.

    -a -h node-list

    Adds the list of nodes that can master the diskset.

  3. Verify that the disk device group has been added.

    The disk device group name will match the diskset name specified with metaset.


    # scconf -p | egrep disk-device-group
    

3.3.2.1 Example--Adding a Solstice DiskSuite Disk Device Group

The following example shows the creation of the diskset and disk device group and verifies that the disk device group has been created.


# metaset -s dg-schost-1 -a -h phys-schost-1
# scconf -p | egrep dg-schost-1
Device group name: dg-schost-1

3.3.3 How to Create a New Disk Group When Initializing Disks (VERITAS Volume Manager)


Note -

This procedure is only for initializing disks. If you are encapsulating disks, use the procedure "3.3.4 How to Create a New Disk Group When Encapsulating Disks (VERITAS Volume Manager)".


After adding the VxVM disk group, you need to register the disk device group.

  1. Become superuser on a node of the cluster that is physically connected to the disks that make up the disk group being added.

  2. Create the VxVM disk group and volume.

    Use your preferred method to create the disk group and volume.


    Note -

    If you are setting up a mirrored volume, we strongly recommend that Dirty Region Logging (DRL) be used to decrease volume recovery time in the event of a system crash.


    See the VERITAS Volume Manager documentation for the procedures to complete this step.
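
    For example, one possible sequence, shown here only as a sketch with hypothetical disk, disk group, and volume names, initializes two disks into a new disk group (vxdiskadd prompts for the disk group name, dg1 in this sketch) and then creates a mirrored 2-Gbyte volume.


    # vxdiskadd c1t1d0 c2t1d0
    # vxassist -g dg1 make vol01 2g
    # vxassist -g dg1 mirror vol01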

3.3.3.1 Where to Go From Here

The VxVM disk group must be registered as a Sun Cluster disk device group. See "3.3.8 How to Register a Disk Group as a Disk Device Group (VERITAS Volume Manager)".

3.3.4 How to Create a New Disk Group When Encapsulating Disks (VERITAS Volume Manager)


Note -

This procedure is only for encapsulating disks. If you are initializing disks, use the procedure "3.3.3 How to Create a New Disk Group When Initializing Disks (VERITAS Volume Manager)".


You can make non-root disks into Sun Cluster disk device groups by first encapsulating them as VxVM disk groups, then registering them as Sun Cluster disk device groups.

Disk encapsulation is supported only during initial creation of a VxVM disk group. Once a disk group is created and registered, only disks that can be initialized should be added to the disk group.

  1. Become superuser on a node of the cluster.

  2. If the disk being encapsulated has file system entries in the /etc/vfstab file, make sure that the mount at boot option is set to no.

    This can be set back to yes once the disk has been encapsulated and registered as a Sun Cluster disk device group.

  3. Encapsulate the disks.

    Use vxdiskadm menus or the graphical user interface to encapsulate the disks. VxVM requires two free partitions as well as unassigned cylinders at the beginning or the end of the disk. Slice 2 must also be set to the entire disk. See the vxdiskadm(1M) man page for more information.

  4. Shut down and restart the node.

    The scswitch(1M) command will switch over all resource groups and device groups from the node to the next preferred node. Then shutdown(1M) is used to shut down and restart the node.


    # scswitch -S -h node
    # shutdown -g 0 -i 6 -y
    
  5. If necessary, switch all resource groups, and device groups back.

    If the resource groups and device groups were initially configured to fail back to the primary node, this step is not necessary.


    # scswitch -z -h node -D devgrp1 [ ,devgrp2,... ]
    # scswitch -z -h node -g resgrp1 [ ,resgrp2,... ]

3.3.4.1 Where to Go From Here

The VxVM disk group must be registered as a Sun Cluster disk device group. See "3.3.8 How to Register a Disk Group as a Disk Device Group (VERITAS Volume Manager)".

3.3.5 How to Add a New Volume to an Existing Disk Device Group (VERITAS Volume Manager)


Note -

After adding the volume, you need to register the configuration change by using the procedure "3.3.9 How to Register Disk Group Configuration Changes (VERITAS Volume Manager)".


When you add a new volume to an existing VxVM disk device group, you need to perform the procedure from the primary node for the disk device group, and the disk device group must be online.

  1. Become superuser on a node of the cluster.

  2. Determine the primary node for the disk device group.


    # scstat -D
    
  3. Determine if the disk device group is offline.

    • If no, proceed to Step 4.

    • If yes, bring the disk group online.


    # scswitch -z -D disk-device-group -h node
    
    -z -D disk-device-group

    Switches the specified device group.

    -h node

    Specifies the name of the node to switch the disk device group to.

  4. From the primary node (the node currently mastering the disk device group), create the VxVM volume in the disk group.

    Refer to your VERITAS Volume Manager documentation for the procedure used to create the VxVM volume.

3.3.5.1 Where to Go From Here

The change to the VxVM disk group must be registered to update the global namespace. See "3.3.9 How to Register Disk Group Configuration Changes (VERITAS Volume Manager)".

3.3.6 How to Make an Existing Disk Group Into a Disk Device Group (VERITAS Volume Manager)

You can make an existing VxVM disk group into a Sun Cluster disk device group by first importing the disk group onto the current node, then registering the disk group as a Sun Cluster disk device group.

  1. Become superuser on a node of the cluster.

  2. Import the VxVM disk group onto the current node.


    # vxdg import diskgroup
    

3.3.6.1 Where to Go From Here

The VxVM disk group must be registered as a Sun Cluster disk device group. See "3.3.8 How to Register a Disk Group as a Disk Device Group (VERITAS Volume Manager)".

3.3.7 How to Assign a New Minor Number to a Disk Device Group (VERITAS Volume Manager)

If registering a VxVM disk device group fails due to a minor number conflict with another disk group, the new disk group must be assigned a new, unused minor number. After assigning the new minor number, you then rerun the procedure to register the disk group as a Sun Cluster disk device group.

  1. Become superuser on a node of the cluster.

  2. Determine the minor numbers in use.


    # ls -l /dev/vx/dsk/*
    
  3. Choose any other multiple of 1000 that is not in use as the base minor number for the new disk group.

  4. Assign the new minor number to the disk group.


    # vxdg reminor diskgroup base_minor_number
    

3.3.7.1 Example--How to Assign a New Minor Number to a Disk Device Group

This example shows minor numbers 16000-16002 and 4000-4001 already in use. The vxdg reminor command assigns the new disk device group dg3 the base minor number 5000.


# ls -l /dev/vx/dsk/*
/dev/vx/dsk/dg1
brw-------   1 root     root      56,16000 Oct  7 11:32 dg1v1
brw-------   1 root     root      56,16001 Oct  7 11:32 dg1v2
brw-------   1 root     root      56,16002 Oct  7 11:32 dg1v3
 
/dev/vx/dsk/dg2
brw-------   1 root     root      56,4000 Oct  7 11:32 dg2v1
brw-------   1 root     root      56,4001 Oct  7 11:32 dg2v2
# vxdg reminor dg3 5000

3.3.7.2 Where to Go From Here

The VxVM disk group must be registered as a Sun Cluster disk device group. See "3.3.8 How to Register a Disk Group as a Disk Device Group (VERITAS Volume Manager)".

3.3.8 How to Register a Disk Group as a Disk Device Group (VERITAS Volume Manager)

This procedure uses the scsetup(1M) utility to register the associated VxVM disk group as a Sun Cluster disk device group.


Note -

Once a disk device group has been registered, if you make a change to the VxVM disk group or volume, use the procedure "3.3.9 How to Register Disk Group Configuration Changes (VERITAS Volume Manager)" to register the disk device group configuration changes. This will ensure that the global namespace is in the correct state.


The prerequisites to register a VxVM disk device group are:

  • Superuser privilege on a node in the cluster.

  • The name of the VxVM disk group to be registered as a disk device group.

  • A preferred order of nodes to master the disk device group.

When you define the preference order, you also specify whether you want the disk device group to be switched back to the most preferred node in the event that the most preferred node goes down and later returns to the cluster.

See scconf(1M) for more information on node preference and failback options.
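
For example, the registration command that scsetup generates, including the preference and failback settings, might look like the following. The group and node names are hypothetical.


# scconf -a -D type=vxvm,name=dg1,nodelist=phys-schost-1:phys-schost-2,\
preferenced=true,failback=enabled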

  1. Become superuser on a node of the cluster.

  2. Enter the scsetup utility.


    # scsetup
    

    The Main Menu appears.

  3. To work with VxVM disk device groups, enter 3 (Device groups and volumes).

    The Device Groups Menu appears.

  4. To register a VxVM disk device group, enter 1 (Register a VxVM disk group as a device group).

    Follow the instructions and enter the VxVM disk group to be registered as a Sun Cluster disk device group. If you encounter the following error while attempting to register the disk device group, use the procedure "3.3.7 How to Assign a New Minor Number to a Disk Device Group (VERITAS Volume Manager)". This procedure will enable you to assign a new minor number that does not conflict with a minor number used by existing disk device groups.


    scconf: Failed to add device group - in use

  5. Verify that the disk device group has been registered and brought online.

    Look for the information for the new disk device group in the output of the following command.


    # scstat -D
    

3.3.8.1 Example--Registering a VERITAS Volume Manager Disk Device Group

The following example shows the scconf command generated by scsetup when it registers a VxVM disk device group (dg1), and the verification step. This example assumes that the VxVM disk group and volume were created previously.


# scconf -a -D type=vxvm,name=dg1,nodelist=phys-schost-1:phys-schost-2
# scstat -D
-- Device Group Servers --
 
                         Device Group        Primary             Secondary
                         ------------        -------             ---------
Device group servers:    dg1                 phys-schost-1       phys-schost-2
 
-- Device Group Status --
 
                              Device Group        Status              
                              ------------        ------              
  Device group status:        dg1                Online

3.3.8.2 Where to Go From Here

To create a cluster file system on the VxVM disk device group, see "3.4.1 How to Add an Additional Cluster File System". If there are problems with the minor number, see "3.3.7 How to Assign a New Minor Number to a Disk Device Group (VERITAS Volume Manager)".

3.3.9 How to Register Disk Group Configuration Changes (VERITAS Volume Manager)

When you change any configuration information for a VxVM disk group or volume, you need to register the configuration changes for the Sun Cluster disk device group. This ensures that the global namespace is in the correct state.

  1. Become superuser on a node in the cluster.

  2. Enter the scsetup(1M) utility.


    # scsetup
    

    The Main Menu appears.

  3. To work with VxVM disk device groups, enter 3 (Device groups and volumes).

    The Device Groups Menu appears.

  4. To register configuration changes, enter 2 (Synchronize volume information for a VxVM device group).

    Follow the instructions and enter the VxVM disk group that has changed configuration.

3.3.9.1 Example--Registering VERITAS Volume Manager Disk Group Configuration Changes

The following example shows the scconf command generated by scsetup when it registers a changed VxVM disk device group (dg1). This example assumes that the VxVM disk group and volume were created previously.


# scconf -c -D name=dg1,sync

3.3.10 How to Remove and Unregister a Disk Device Group (Solstice DiskSuite)

Disk device groups map directly to Solstice DiskSuite disksets. Thus, to remove a Solstice DiskSuite disk device group, you use the metaclear(1M) and metaset(1M) commands. These commands remove the disk device group with the same name and unregister the disk group as a Sun Cluster disk device group.

Refer to the Solstice DiskSuite documentation for the steps to remove a diskset.
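
For example, removing a hypothetical diskset dg-schost-1 that contains one metadevice (d1), two drives (c1t1d0 and c2t1d0), and two hosts might look like the following sketch: clear the metadevice, remove the drives, then forcibly remove the hosts, which dissolves the diskset.


# metaclear -s dg-schost-1 d1
# metaset -s dg-schost-1 -d c1t1d0 c2t1d0
# metaset -s dg-schost-1 -d -f -h phys-schost-1 phys-schost-2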

3.3.11 How to Remove a Volume From a Disk Device Group (VERITAS Volume Manager)


Note -

After removing the volume from the disk device group, you must register the configuration changes to the disk device group using the procedure "3.3.9 How to Register Disk Group Configuration Changes (VERITAS Volume Manager)".


  1. Become superuser on a node of the cluster.

  2. Determine the primary node for the disk device group.


    # scstat -D
    
  3. Determine if the disk device group is offline.

    • If no, proceed to Step 4.

    • If yes, bring the disk group online.


    # scswitch -z -D disk-device-group -h node
    
    -z

    Performs the switch.

    -D disk-device-group

    Specifies the device group to switch.

    -h node

    Specifies the name of the node to become the new primary.

  4. From the primary node (the node currently mastering the disk device group), remove the VxVM volume in the disk group.


    # vxedit -g diskgroup -rf rm volume
    
    -g diskgroup

    Specifies the VxVM disk group containing the volume.

    -rf rm volume

    Removes the specified volume.

3.3.11.1 Where to Go From Here

After removing a volume, you must register the configuration changes to the disk device group. To register the configuration changes, see "3.3.9 How to Register Disk Group Configuration Changes (VERITAS Volume Manager)".

3.3.12 How to Remove and Unregister a Disk Device Group (VERITAS Volume Manager)

Removing a Sun Cluster disk device group will cause the corresponding VxVM disk group to be deported, not destroyed. However, even though the VxVM disk group still exists, it cannot be used in the cluster unless re-registered.

This procedure uses the scsetup(1M) utility to remove a VxVM disk group and unregister it as a Sun Cluster disk device group.

  1. Become superuser on a node of the cluster.

  2. Take the disk device group offline.


    # scswitch -F -D disk-device-group
    
    -F

    Places the disk device group offline.

    -D disk-device-group

    Specifies the device group to take offline.

  3. Enter the scsetup utility.

    The Main Menu appears.


    # scsetup
    
  4. To work with VxVM device groups, enter 3 (Device groups and volumes).

    The Device Groups Menu appears.

  5. To unregister a VxVM disk group, enter 3 (Unregister a VxVM device group).

    Follow the instructions and enter the VxVM disk group to be unregistered.

3.3.12.1 Example--Removing and Unregistering a VERITAS Volume Manager Disk Device Group

The following example shows the VxVM disk device group dg1 taken offline, and the scconf(1M) command generated by scsetup when it removes and unregisters the disk device group.


# scswitch -F -D dg1
# scconf -r -D name=dg1

3.3.13 How to Add a Node to a Disk Device Group (VERITAS Volume Manager)

This procedure adds a node to a disk device group using the scsetup(1M) utility.

The prerequisites to add a node to a VxVM disk device group are:

  • Superuser privilege on a node in the cluster.

  • The name of the VxVM disk device group to which the node will be added.

  • The name of the node to add.

  1. Become superuser on a node of the cluster.

  2. Enter the scsetup(1M) utility.

    The Main Menu appears.


    # scsetup
    
  3. To work with VxVM disk device groups, enter 3 (Device groups and volumes).

    The Device Groups Menu appears.

  4. To add a node to a VxVM disk device group, enter 4 (Add a node to a VxVM device group).

    Follow the instructions and enter the device group and node names.

  5. Verify that the node has been added.

    Look for the device group information that includes the new node in the output of the following command.


    # scconf -p 
    

3.3.13.1 Example--Adding a Node to a VERITAS Volume Manager Disk Device Group

The following example shows the scconf command generated by scsetup when it adds a node (phys-schost-3) to a VxVM disk device group (dg1), and the verification step.


# scconf -a -D type=vxvm,name=dg1,nodelist=phys-schost-3
# scconf -p 
...
Device group name:                              dg1
   Device type:                                 VXVM
   Failback enabled:                            yes
   Node preference list:                        phys-schost-1, phys-schost-3

3.3.14 How to Change Disk Device Properties

The method for establishing the primary ownership of a disk device group is based on the setting of an ownership preference attribute called preferenced. If the attribute is not set, the primary owner of an otherwise unowned disk device group is the first node that attempts to access a disk in that group. However, if this attribute is set, you must specify the preferred order in which nodes attempt to establish ownership.

If you disable the preferenced attribute, then the failback attribute is also automatically disabled. However, if you attempt to enable or re-enable the preferenced attribute, you have the choice of enabling or disabling the failback attribute.

If the preferenced attribute is either enabled or re-enabled, you are required to re-establish the order of nodes in the primary ownership preference list.

This procedure uses scsetup(1M) to set or unset the preferenced attribute and the failback attribute for Solstice DiskSuite or VxVM disk device groups.

To run this procedure, you need the name of the disk device group for which you are changing attribute values.

  1. Become superuser on a node of the cluster.

  2. Enter the scsetup(1M) utility.

    The Main Menu appears.


    # scsetup
    
  3. To work with disk device groups, enter 3 (Device groups and volumes).

    The Device Groups Menu appears.

  4. To change a device group property, enter 6 (Change key properties of a VxVM or Solstice DiskSuite device group).

    Follow the instructions to set the preferenced and failback options for a device group.

  5. Verify that the disk device group attributes have been changed.

    Look for the device group information displayed by the following command.


    # scconf -p 
    

3.3.14.1 Example--Changing Disk Device Group Properties

The following example shows the scconf command generated by scsetup when it sets the attribute values for a disk device group (dg-schost-1).


# scconf -c -D name=dg-schost-1,nodelist=phys-schost-1:phys-schost-2,\
preferenced=true,failback=enabled
# scconf -p
Device group name:                             dg-schost-1
   Device type:                                SDS
   Failback enabled:                           yes
   Node preference list:                       phys-schost-1, phys-schost-2
   Diskset name:                               dg-schost-1

3.3.15 How to List a Disk Device Group Configuration

You do not need to be superuser to list the configuration.

    Use scconf(1M) to list the disk device group configuration.


    % scconf -p
    

3.3.15.1 Example--Listing the Disk Device Group Configuration

When using the scconf command, look for the information listed under device groups.


# scconf -p
...
Device group name: dg-schost-1
	Device type: SDS
	Failback enabled: yes
	Node preference list: phys-schost-2, phys-schost-3
	Diskset name: dg-schost-1

3.3.16 How to Switch the Primary for a Device Group

This procedure can also be used to start (bring online) an inactive device group.

  1. Become superuser on a node of the cluster.

  2. Use scswitch(1M) to switch the disk device group primary.


    # scswitch -z -D disk-device-group -h node
    
    -z

    Performs the switch.

    -D disk-device-group

    Specifies the device group to switch.

    -h node

    Specifies the name of the node to become the new primary.

  3. Verify that the disk device group has been switched to the new primary.

    Look for the disk device information for the device group displayed by the following command.


    # scstat -D
    

3.3.16.1 Example--Switching the Primary for a Disk Device Group

The following example shows how to switch the primary for a disk device group and verify the change.


# scswitch -z -D dg-schost-1 -h phys-schost-1
# scstat -D
...
Device Group Name:                             dg-schost-1
   Status:                                     Online
   Primary:                                    phys-schost-1

3.4 Administering Cluster File Systems

Table 3-2 Task Map: Administering Cluster File Systems

Task 

For Instructions, Go To... 

Add cluster file systems after the initial Sun Cluster installation 

    - Use newfs and mkdir

"3.4.1 How to Add an Additional Cluster File System"

Remove a cluster file system 

    - Use fuser and umount

"3.4.2 How to Remove a Cluster File System"

Check global mount points in a cluster for consistency across nodes 

    - Use sccheck

"3.4.3 How to Check Global Mounts in a Cluster"

3.4.1 How to Add an Additional Cluster File System

Perform this task for each cluster file system you create after your initial Sun Cluster installation.


Caution -

Be sure you have specified the correct disk device name. Creating a cluster file system destroys any data on the disks. If you specify the wrong device name, you will erase data that you did not intend to delete.


The prerequisites to add an additional cluster file system are:

  • Superuser privilege on a node in the cluster.

  • Volume manager software installed and configured on the cluster.

  • An existing device group (Solstice DiskSuite diskset or VxVM disk group), or block disk slice, upon which to create the cluster file system.

  1. Become superuser on any node in the cluster.


    Tip -

    For faster file system creation, become superuser on the current primary of the global device for which you are creating a file system.


  2. Create a file system using the newfs(1M) command.


    # newfs raw-disk-device
    

    Table 3-3 shows examples of names for the raw-disk-device argument. Note that naming conventions differ for each volume manager.

    Table 3-3 Sample Raw Disk Device Names

    If Your Volume Manager Is ...   A Disk Device Name Might Be ...   Description
    Solstice DiskSuite              /dev/md/oracle/rdsk/d1            Raw disk device d1 within the oracle diskset.
    VERITAS Volume Manager          /dev/vx/rdsk/oradg/vol01          Raw disk device vol01 within the oradg disk group.
    None                            /dev/global/rdsk/d1s3             Raw disk device for block slice d1s3.

  3. On each node in the cluster, create a mount point directory for the cluster file system.

    A mount point is required on each node, even if the cluster file system will not be accessed on that node.


    # mkdir -p /global/device-group/mount-point
    
    device-group

    Name of the directory that corresponds to the name of the device group which contains the device.

    mount-point

    Name of the directory on which to mount the cluster file system.


    Tip -

    For ease of administration, create the mount point in the /global/device-group directory. This enables you to easily distinguish cluster file systems, which are globally available, from local file systems.


  4. On each node in the cluster, add an entry to the /etc/vfstab file for the mount point.

    1. To automatically mount a cluster file system, set the mount at boot field to yes.

    2. Use the following required mount options:

      • The global mount option is required for all cluster file systems. This option identifies the file system as a cluster file system.

      • File system logging is required for all cluster file systems. UFS logging can be done either through Solstice DiskSuite metatrans devices or directly through the Solaris UFS logging mount option, but the two approaches must not be combined. If Solaris UFS logging is used directly, specify the logging mount option; if metatrans file system logging is used, no additional mount option is needed. (Sample entries illustrating both approaches follow this list.)

    3. Ensure that, for each cluster file system, the information in its /etc/vfstab entry is identical on each node that has that entry.

    4. Pay attention to boot order dependencies of the file systems.

      Normally, you should not nest the mount points for cluster file systems. For example, consider the scenario where phys-schost-1 mounts disk device d0 on /global/oracle, and phys-schost-2 mounts disk device d1 on /global/oracle/logs. With this configuration, phys-schost-2 can boot up and mount /global/oracle/logs only after phys-schost-1 boots and mounts /global/oracle.

    5. Make sure the entries in each node's /etc/vfstab file list common devices in the same order.

      For example, if phys-schost-1 and phys-schost-2 have a physical connection to devices d0, d1, and d2, the entries in their respective /etc/vfstab files should be listed as d0, d1, and d2.

    Refer to the vfstab(4) man page for details.
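
    The following sample /etc/vfstab entries illustrate the two logging approaches. The device names are hypothetical; the first entry uses the Solaris UFS logging mount option directly, while the second assumes that d10 is a Solstice DiskSuite metatrans device and therefore needs no logging option.


    /dev/md/oracle/dsk/d1 /dev/md/oracle/rdsk/d1 /global/oracle/d1 ufs 2 yes global,logging
    /dev/md/oracle/dsk/d10 /dev/md/oracle/rdsk/d10 /global/oracle/d10 ufs 2 yes global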

  5. On any node in the cluster, verify that mount points exist and /etc/vfstab file entries are correct on all nodes of the cluster.


    # sccheck
    

    If there are no errors, nothing is returned.

  6. From any node in the cluster, mount the cluster file system.


    # mount /global/device-group/mount-point
    
  7. On each node of the cluster, verify that the cluster file system is mounted.

    You can use either the df(1M) or mount(1M) command to list mounted file systems.

3.4.1.1 Example--Adding a Cluster File System

The following example creates a UFS cluster file system on the Solstice DiskSuite metadevice /dev/md/oracle/rdsk/d1.


# newfs /dev/md/oracle/rdsk/d1
...
 
[on each node:]
# mkdir -p /global/oracle/d1
 
# vi /etc/vfstab
#device           device       mount   FS      fsck    mount   mount
#to mount        to fsck       point   type    pass    at boot options
#                       
/dev/md/oracle/dsk/d1 /dev/md/oracle/rdsk/d1 /global/oracle/d1 ufs 2 yes global,logging
[save and exit]
 
[on one node:]
# sccheck
 
# mount /global/oracle/d1
# mount
...
/global/oracle/d1 on /dev/md/oracle/dsk/d1 read/write/setuid/global/logging/
largefiles on Sun Oct 3 08:56:16 1999

3.4.2 How to Remove a Cluster File System

You "remove" a cluster file system by merely unmounting it. If you want to remove or delete the data as well, remove the underlying disk device (or metadevice or volume) from the system.


Note -

Cluster file systems are automatically unmounted as part of the system shutdown that occurs when you run scshutdown(1M) to stop the entire cluster. A cluster file system is not unmounted when you run shutdown to stop a single node. However, if the node being shut down is the only node with a connection to the disk, any attempt to access the cluster file system on that disk results in an error.


The prerequisites to unmount cluster file systems are:

  • Superuser privilege on a node in the cluster.

  • The mount point or device name of each cluster file system to be unmounted.

  1. Become superuser on a node in the cluster.

  2. Determine which cluster file systems are mounted.


    # mount -v
    
  3. On each node, list all processes that are using the cluster file system, so you know which processes you are going to stop.


    # fuser -c [ -u ] mount-point
    
    -c

    Reports on files that are mount points for file systems and any files within those mounted file systems.

    -u

    (Optional) Displays the user login name for each process ID.

    mount-point

    Specifies the name of the cluster file system for which you want to stop processes.

  4. On each node, stop all processes for the cluster file system.

    Use your preferred method for stopping processes. If necessary, use the following command to force termination of processes associated with the cluster file system.


    # fuser -c -k mount-point
    

    A SIGKILL is sent to each process using the cluster file system.

  5. On each node, verify that no processes are using the file system.


    # fuser -c mount-point
    
  6. From just one node, unmount the file system.


    # umount mount-point
    
    mount-point

    Specifies the name of the cluster file system you want to unmount. This can be either the directory name where the cluster file system is mounted, or the device name path of the file system.

  7. (Optional) Edit the /etc/vfstab file to delete the entry for the cluster file system being removed.

    Perform this step on each cluster node that has an entry for this cluster file system in its /etc/vfstab file.

  8. (Optional) Remove the disk device group/metadevice/plex.

    See your volume manager documentation for more information.

3.4.2.1 Example--Removing a Cluster File System

The following example removes a UFS cluster file system mounted on the Solstice DiskSuite metadevice /dev/md/oracle/rdsk/d1.


# mount -v
...
/global/oracle/d1 on /dev/md/oracle/dsk/d1 read/write/setuid/global/logging/largefiles on Sun Oct  3 08:56:16 1999
# fuser -c /global/oracle/d1
/global/oracle/d1: 4006c
# fuser -c -k /global/oracle/d1
/global/oracle/d1: 4006c
# fuser -c /global/oracle/d1
/global/oracle/d1:
# umount /global/oracle/d1
 
[On each node, remove the entry for /global/oracle/d1 from /etc/vfstab:]
# vi /etc/vfstab
#device           device        mount   FS      fsck    mount   mount
#to mount         to fsck       point   type    pass    at boot options
#                       
/dev/md/oracle/dsk/d1 /dev/md/oracle/rdsk/d1 /global/oracle/d1 ufs 2 yes global,logging
[Save and exit.]

Note -

To remove the data on the cluster file system, remove the underlying device. See your volume manager documentation for more information.
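
For example, deleting a hypothetical Solstice DiskSuite metadevice d1 in the oracle diskset, or a hypothetical VxVM volume vol01 in the oradg disk group, might look like one of the following. For VxVM, remember to register the configuration change afterward, as described in "3.3.9 How to Register Disk Group Configuration Changes (VERITAS Volume Manager)".


# metaclear -s oracle d1
# vxedit -g oradg -rf rm vol01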


3.4.3 How to Check Global Mounts in a Cluster

The sccheck(1M) utility verifies the syntax of the entries for cluster file systems in the /etc/vfstab file. If there are no errors, nothing is returned.


Note -

Run sccheck after making cluster configuration changes, such as removing a cluster file system, that have affected devices or volume management components.


  1. Become superuser on a node in the cluster.

  2. Check the cluster global mounts.


    # sccheck
    

3.4.4 How to Remove a Node From a Disk Device Group (Solstice DiskSuite)

Use this procedure to remove a cluster node from disk device groups (disksets) under Solstice DiskSuite.

  1. Determine the disk device group(s) of which the node to be removed is a member.


    # scstat -D
    
  2. Become superuser on the node that currently owns the disk device group from which you want to remove the node.

  3. Delete from the disk device group the hostname of the node being removed.

    Repeat this step for each disk device group from which the node is being removed.


    # metaset -s setname -d -f -h node
    
    -s setname

    Specifies the disk device group (diskset) name.

    -d

    Deletes the host from the disk device group.

    -f

    Forces the removal.

    -h node

    Removes the node from the list of nodes that can master the disk device group.


    Note -

    The update can take several minutes to complete.


  4. Verify that the node has been removed from the disk device group.

    The disk device group name will match the diskset name specified with metaset.


    # scstat -D
    

3.4.4.1 Example--Removing a Node From a Disk Device Group (SDS)

The following example shows the removal of the host name from a disk device group (metaset) and verifies that the node has been removed from the disk device group. Although the example shows the removal of a node from a single disk device group, a node can belong to more than one disk device group at a time. Repeat the metaset command for each disk device group from which you want to remove the node.


[Determine the disk device group(s) for the node:]
# scstat -D
  -- Device Group Servers --
                      Device Group  Primary       Secondary
                      ------------  -------       ---------
  Device group servers: dg-schost-1  phys-schost-1  phys-schost-2
[Become superuser.]
[Remove the hostname from all disk device groups:]
# metaset -s dg-schost-1 -d -f -h phys-schost-2
[Verify removal of the node:]
# scstat -D
  -- Device Group Servers --
                       Device Group  Primary       Secondary
                       ------------  -------       ---------
  Device group servers: dg-schost-1  phys-schost-1  -

3.4.5 How to Remove a Node From a Disk Device Group (VERITAS Volume Manager)

Use this procedure to remove a cluster node from an existing disk device group (disk group) under VERITAS Volume Manager (VxVM).

  1. Determine the disk device group of which the node to be removed is a member.


    # scstat -D
    
  2. Become superuser on a current cluster member node.

  3. Execute the scsetup utility.


    # scsetup
    

    The Main Menu appears.

  4. Reconfigure a disk device group by entering 3 (Device groups and volumes).

  5. Remove the node from the VxVM disk device group by entering 5 (Remove a node from a VxVM device group).

    Follow the prompts to remove the cluster node from the disk device group. You will be asked for information about the following:

    VxVM device group

    Node name

  6. Verify that the node has been removed from the VxVM disk device group:


    # scstat -D	
      ...
      Device group name: devicegroupname
      Device group type: VxVM
      Device group failback enabled: no
      Device group node list: nodename
      Diskgroup name: diskgroupname
      ...

3.4.5.1 Example--Removing a Node From a Disk Device Group (VxVM)

This example shows removal of the node named phys-schost-4 from the dg1 VxVM disk device group.


[Determine the disk device group for the node:]
# scstat -D
  -- Device Group Servers --
                       Device Group  Primary        Secondary
                       ------------  -------        ---------
  Device group servers: dg-schost-1  phys-schost-1  phys-schost-2
[Become superuser and execute the scsetup utility:]
# scsetup
[Select option 3:]
*** Main Menu ***
    Please select from one of the following options:
      ...
      3) Device groups and volumes
      ...
    Option: 3
[Select option 5:]
*** Device Groups Menu ***
    Please select from one of the following options:
      ...
      5) Remove a node from a VxVM device group
      ...
    Option:  5
[Answer the questions to remove the node:]
>>> Remove a Node from a VxVM Device Group <<<
    ...
    Is it okay to continue (yes/no) [yes]? yes
    ...
    Name of the VxVM device group from which you want to remove a node?  dg1
    Name of the node to remove from this group?  phys-schost-4
    Is it okay to proceed with the update (yes/no) [yes]? yes
 
scconf -r -D name=dg1,nodelist=phys-schost-4
 
    Command completed successfully.
    Hit ENTER to continue: 

[Quit the scsetup Device Groups Menu and Main Menu:]
    ...
    Option:  q
[Verify that the node was removed:]
# scstat -D
  ...
  Device group name:                 dg1
  Device group type:                 VxVM
  Device group failback enabled:     no
  Device group node list:            phys-schost-3
  Diskgroup name:                    dg1
  ...