Sun Cluster 3.1 System Administration Guide

Chapter 3 Administering Global Devices and Cluster File Systems

This chapter provides the procedures for administering global devices and cluster file systems.

Following is a list of the procedures in this chapter.

For a high-level description of the related procedures in this chapter, see Table 3–2.

See the Sun Cluster 3.1 Concepts Guide for conceptual information related to global devices, the global namespace, disk device groups, and the cluster file system.

Administering Global Devices and the Global Namespace Overview

Administration of Sun Cluster disk device groups depends on the volume manager that is installed on the cluster. Solstice DiskSuite/Solaris Volume Manager is “cluster-aware,” so you add, register, and remove disk device groups by using the Solstice DiskSuite/Solaris Volume Manager metaset(1M) command. With VERITAS Volume Manager (VxVM), you create disk groups by using VxVM commands. You register the disk groups as Sun Cluster disk device groups through the scsetup(1M) utility. When removing VxVM disk device groups, you use both the scsetup(1M) utility and VxVM commands.

Sun Cluster software automatically creates a rawdisk device group for each disk and tape device in the cluster. However, cluster device groups remain in an offline state until you access the groups as global devices. When administering disk device groups, or volume manager disk groups, you need to be on the cluster node that is the primary node for the group.
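
For example, to see which node currently serves as the primary for each device group, you can run the scstat(1M) command, which is used throughout this chapter, from any cluster node.


# scstat -D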

Normally, you do not need to administer the global device namespace. The global namespace is automatically set up during installation and automatically updated during Solaris operating environment reboots. However, if the global namespace needs to be updated, you can run the scgdevs(1M) command from any cluster node. This command causes the global namespace to be updated on all other cluster node members, as well as on nodes that might join the cluster in the future.

Global Device Permissions for Solstice DiskSuite/Solaris Volume Manager

Changes made to global device permissions are not automatically propagated to all the nodes in the cluster for Solstice DiskSuite/Solaris Volume Manager and disk devices. If you want to change permissions on global devices, you must manually change the permissions on all the nodes in the cluster. For example, if you want to change permissions on global device /dev/global/dsk/d3s0 to 644, you must execute

# chmod 644 /dev/global/dsk/d3s0

on all nodes in the cluster.
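
One way to apply the change on every node from a single login is a short loop over the cluster nodes with a remote shell. The following is only a sketch: the node names are hypothetical, and it assumes that rsh (or ssh) root access between the cluster nodes is configured and that you run it as superuser.


# Node names below are examples only; substitute your own cluster nodes.
# Assumes rsh (or ssh) root access is configured between the cluster nodes.
for node in phys-schost-1 phys-schost-2 ; do
    rsh $node chmod 644 /dev/global/dsk/d3s0
done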

VxVM does not support the chmod command. To change global device permissions in VxVM, consult the VxVM administrator's guide.

Dynamic Reconfiguration With Global Devices

Following are issues that you must consider when completing dynamic reconfiguration (DR) operations on disk and tape devices in a cluster.


Caution –

If the current primary node fails while you are performing the DR operation on a secondary node, cluster availability is impacted. The primary node will have no place to fail over until a new secondary node is provided.


To perform DR operations on global devices, complete the following steps in the order indicated.

Table 3–1 Task Map: Dynamic Reconfiguration with Disk and Tape Devices

Task 

For Instructions 

1. If a DR operation that affects an active device group must be performed on the current primary node, switch the primary and secondary nodes before performing the DR remove operation on the device. 

How to Switch the Primary for a Device Group

2. Perform the DR removal operation on the device being removed. 

Sun Enterprise 10000 DR Configuration Guide and the Sun Enterprise 10000 Dynamic Reconfiguration Reference Manual in the Solaris 8 on Sun Hardware and Solaris 9 on Sun Hardware collections.

VERITAS Volume Manager Administration Considerations

Creating Shared Disk Groups for Oracle Parallel Server/Real Application Clusters

If you use VxVM to set up shared disk groups for Oracle Parallel Server/Real Application Clusters, use the cluster functionality of VxVM as described in the VERITAS Volume Manager Administrator's Reference Guide. Differences between creating shared disk groups for Oracle Parallel Server/Real Application Clusters and creating other disk groups include the following items.

To create other VxVM disk groups, see How to Create a New Disk Group When Initializing Disks (VERITAS Volume Manager).

Administering Cluster File Systems Overview

No special Sun Cluster commands are necessary for cluster file system administration. Administer a cluster file system as you would any other Solaris file system, using standard Solaris file system commands, such as mount, newfs, and so on. Mount cluster file systems by specifying the -g option to the mount command. Cluster file systems can also be automatically mounted at boot.
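
For example, assuming a cluster file system on the global device /dev/global/dsk/d2s7 with the mount point /global/data (both names are hypothetical), you could mount it manually with the -g option, or have it mounted automatically at boot by adding an /etc/vfstab entry that uses the global mount option on each node.


# mount -g /dev/global/dsk/d2s7 /global/data

[Sample /etc/vfstab entry (one line) on each node:]
/dev/global/dsk/d2s7  /dev/global/rdsk/d2s7  /global/data  ufs  2  yes  global,logging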


Note –

When the cluster file system reads files, the file system does not update the access time on those files.


Guidelines to Support VxFS

The following VxFS features are not supported in a Sun Cluster 3.1 configuration.

All other VxFS features and options that are supported in a cluster configuration are supported by Sun Cluster 3.1 software. See VxFS documentation for details about VxFS options that are supported in a cluster configuration.

The following guidelines for how to use VxFS to create highly available cluster file systems are specific to a Sun Cluster 3.1 configuration.

The following guidelines for how to administer VxFS cluster file systems are not specific to Sun Cluster 3.1 software. However, the guidelines are different from the way you administer UFS cluster file systems.

Administering Disk Device Groups

The scsetup(1M) utility is an interactive interface to the scconf(1M) command. scsetup generates scconf commands. Generated commands are shown in the examples at the end of some procedures.


Note –

Sun Cluster software automatically creates a raw disk device group for each disk and tape device in the cluster. However, cluster device groups remain in an offline state until you access the groups as global devices.


Table 3–2 Task List: Administering Disk Device Groups

Task 

For Instructions, Go To… 

Update the global device namespace without a reconfiguration reboot 

    - Use scgdevs(1M)

How to Update the Global Device Namespace

Add Solstice DiskSuite/Solaris Volume Manager disksets and register them as disk device groups 

    - Use metaset(1M)

How to Add and Register a Disk Device Group (Solstice DiskSuite/Solaris Volume Manager)

Remove Solstice DiskSuite/Solaris Volume Manager disk device groups from the configuration 

    - Use metaset and metaclear(1M)

How to Remove and Unregister a Disk Device Group (Solstice DiskSuite/Solaris Volume Manager)

Remove a node from all disk device groups 

    - Use scconf, metaset, and scsetup

How to Remove a Node From All Disk Device Groups

Remove a node from a Solstice DiskSuite/Solaris Volume Manager disk device group 

    - Use metaset

How to Remove a Node From a Disk Device Group (Solstice DiskSuite/Solaris Volume Manager)

Add VERITAS Volume Manager disk groups as disk device groups 

    - Use VxVM commands and scsetup(1M)

How to Create a New Disk Group When Initializing Disks (VERITAS Volume Manager)

 

How to Create a New Disk Group When Encapsulating Disks (VERITAS Volume Manager)

 

How to Add a New Volume to an Existing Disk Device Group (VERITAS Volume Manager)

 

How to Make an Existing Disk Group Into a Disk Device Group (VERITAS Volume Manager)

 

How to Assign a New Minor Number to a Disk Device Group (VERITAS Volume Manager)

 

How to Register a Disk Group as a Disk Device Group (VERITAS Volume Manager)

 

How to Register Disk Group Configuration Changes (VERITAS Volume Manager)

Remove VERITAS Volume Manager disk device groups from the configuration 

    - Use scsetup (to generate scconf)

How to Remove a Volume From a Disk Device Group (VERITAS Volume Manager)

 

How to Remove and Unregister a Disk Device Group (VERITAS Volume Manager)

Add a node to a VERITAS Volume Manager disk device group 

    - Use scsetup to generate scconf

How to Add a Node to a Disk Device Group (VERITAS Volume Manager)

Remove a node from a VERITAS Volume Manager disk device group 

    - Use scsetup to generate scconf

How to Remove a Node From a Disk Device Group (VERITAS Volume Manager)

Remove a node from a raw disk device group 

    - Use scconf(1M)

How to Remove a Node From a Raw Disk Device Group

Change disk device group properties 

    - Use scsetup to generate scconf

How to Change Disk Device Properties

Display disk device groups and properties 

    - Use scconf

How to List a Disk Device Group Configuration

Change the desired number of secondaries for a device group 

    - Use scsetup to generate scconf

How to Change the Desired Number of Secondaries for a Device Group

Switch the primary for a disk device group 

    - Use scswitch(1M)

How to Switch the Primary for a Device Group

Put a disk device group in maintenance state 

    - Use metaset or vxdg

How to Put a Disk Device Group in Maintenance State

How to Update the Global Device Namespace

When adding a new global device, manually update the global device namespace by running scgdevs(1M).


Note –

The scgdevs command does not have any effect if the node that is running the command is not currently a cluster member. The command also has no effect if the /global/.devices/node@nodeID file system is not mounted.
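
If you are unsure whether that file system is mounted on the node, you can check before you run scgdevs. For example (the node ID and output vary with your configuration):


# mount | grep /global/.devices/node@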


  1. Become superuser on any node of the cluster.

  2. Use the scgdevs command to reconfigure the namespace.


    # scgdevs
    

Example—Updating the Global Device Namespace

The following example shows output generated by a successful run of scgdevs.


# scgdevs 
Configuring the /dev/global directory (global devices)...
obtaining access to all attached disks
reservation program successfully exiting

How to Add and Register a Disk Device Group (Solstice DiskSuite/Solaris Volume Manager)

Use the metaset(1M) command to create a Solstice DiskSuite/Solaris Volume Manager diskset and register the disk set as a Sun Cluster disk device group. When you register the diskset, the name that you assigned to the diskset is automatically assigned to the disk device group.

  1. Become superuser on the node connected to the disks where you want to create the diskset.

  2. Calculate the number of metadevice names needed for your configuration, and modify the /kernel/drv/md.conf file on each node.

    See “How to Set the Number of Metadevice Names and Disksets” in the Sun Cluster 3.1 Software Installation Guide.

  3. Use the metaset(1M) command to add the Solstice DiskSuite/Solaris Volume Manager diskset and register it as a disk device group with Sun Cluster.


    # metaset -s diskset -a -h nodelist
    

    -s diskset

    Specifies the diskset to be created.

    -a -h nodelist

    Adds the list of nodes that can master the diskset.


    Note –

    Running the metaset command to set up a Solstice DiskSuite/Solaris Volume Manager device group on a cluster results in one secondary by default, regardless of the number of nodes that are included in that device group. You can change the desired number of secondary nodes by using the scsetup(1M) utility after the device group has been created. Refer to How to Change the Desired Number of Secondaries for a Device Group for more information about disk failover.


  4. Verify that the disk device group has been added.

    The disk device group name matches the diskset name that is specified with metaset.


    # scconf -p | grep disk-device-group
    

Example—Adding a Solstice DiskSuite/Solaris Volume Manager Disk Device Group

The following example shows the creation of the diskset and disk device group and verifies that the disk device group has been created.


# metaset -s dg-schost-1 -a -h phys-schost-1
# scconf -p | grep dg-schost-1
Device group name: dg-schost-1

How to Remove and Unregister a Disk Device Group (Solstice DiskSuite/Solaris Volume Manager)

Disk device groups are Solstice DiskSuite/Solaris Volume Manager disksets that have been registered with Sun Cluster. To remove a Solstice DiskSuite/Solaris Volume Manager disk device group, use the metaclear(1M) and metaset(1M) commands. These commands remove the disk device group with the same name and unregister the disk group as a Sun Cluster disk device group.

Refer to the Solstice DiskSuite/Solaris Volume Manager documentation for the steps to remove a diskset.
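
As a rough, hypothetical sketch only (the diskset name, drive name, and node names are examples, and the authoritative steps are in the volume manager documentation), the sequence is typically to clear all metadevices in the set, remove the drives, and then remove the hosts. Removing the last host deletes the diskset and unregisters the disk device group; the -f option might be required if a command fails.


[Delete all metadevices in the diskset:]
# metaclear -s dg-schost-1 -a
[Remove the drives from the diskset:]
# metaset -s dg-schost-1 -d /dev/did/rdsk/d3
[Remove the hosts; removing the last host deletes the diskset:]
# metaset -s dg-schost-1 -d -h phys-schost-1 phys-schost-2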

How to Remove a Node From All Disk Device Groups

Use this procedure to remove a cluster node from all disk device groups that list the node in their lists of potential primaries.

  1. Become superuser on the node you want to remove as a potential primary of all disk device groups.

  2. Determine the disk device group(s) of which the node to be removed is a member.

    Look for the node name in the Device group node list for each disk device group.


    # scconf -p | grep "Device group"
    

  3. Are any of the disk device groups identified in Step 2 of the device group type SDS/SVM?

    If yes, remove the node from each such group by using the procedure How to Remove a Node From a Disk Device Group (Solstice DiskSuite/Solaris Volume Manager).

  4. Are any of the disk device groups identified in Step 2 of the device group type VxVM?

    If yes, remove the node from each such group by using the procedure How to Remove a Node From a Disk Device Group (VERITAS Volume Manager).

  5. Determine the raw disk device groups of which the node to be removed is a member.

    Note that the following command contains two “v”s in -pvv. The second “v” is needed to display raw disk device groups.


    # scconf -pvv | grep "Device group"
    

  6. Are any of the disk device groups listed in Step 5 of the device group types Disk, Local_Disk, or both?

    If yes, remove the node from each such group by using the procedure How to Remove a Node From a Raw Disk Device Group.

  7. Verify that the node has been removed from the potential primaries list of all disk device groups.

    The command returns nothing if the node is no longer listed as a potential primary of any disk device group.


    # scconf -pvv | grep "Device group" | grep nodename
    

How to Remove a Node From a Disk Device Group (Solstice DiskSuite/Solaris Volume Manager)

Use this procedure to remove a cluster node from the list of potential primaries of a Solstice DiskSuite/Solaris Volume Manager disk device group. Repeat the metaset command for each disk device group from which you want to remove the node.

  1. Verify that the node is still a member of the group and that the group is an SDS/SVM device group.

    Device group type SDS/SVM indicates a Solstice DiskSuite/Solaris Volume Manager disk device group.


    phys-suncluster-1% scconf -pv | grep '(global-galileo)'
      (global-galileo) Device group type:              SDS/SVM
      (global-galileo) Device group failback enabled:  no
      (global-galileo) Device group node list:         phys-suncluster-1, phys-suncluster-2
      (global-galileo) Diskset name:                   global-galileo
    phys-suncluster-1%

  2. Determine which node is the current primary for the device group.


    # scstat -D
    
  3. Become superuser on the node that currently owns the disk device group that you want to modify.

  4. Delete the node's hostname from the disk device group.


    # metaset -s setname -d -h nodelist
    

    -s setname

    Specifies the disk device group name

    -d

    Deletes from the disk device group the nodes identified with -h

    -h nodelist

    Removes the node from the list of nodes that can master the disk device group


    Note –

    The update can take several minutes to complete.


    If the command fails, add the -f (Force) option to the command.


    # metaset -s setname -d -f -h nodelist
    

  5. Repeat Step 4 for each disk device group from which the node is being removed as a potential primary.

  6. Verify that the node has been removed from the disk device group.

    The disk device group name matches the diskset name that is specified with metaset.


    phys-suncluster-1% scconf -pv | grep global-galileo
      (global-galileo) Device group node list:         phys-suncluster-1
    phys-suncluster-1%

Example—Removing a Node From a Disk Device Group (Solstice DiskSuite/Solaris Volume Manager)

The following example shows the removal of the host name phys-schost-2 from a disk device group configuration. This example eliminates phys-schost-2 as a potential primary for the designated disk device group. Verify removal of the node by running the scstat -D command. Check that the removed node is no longer displayed in the screen text.


[Determine the Solstice DiskSuite/Solaris Volume Manager disk device group(s) for the node:]
# scconf -pv | grep Device
  Device group name:                 dg-schost-1
    Device group type:               SDS/SVM
    Device group failback enabled:   no
    Device group node list:          phys-schost-1, phys-schost-2
    Device group ordered node list:  yes
    Device group diskset name:       dg-schost-1
[Determine which node is the current primary for the disk device group:]
# scstat -D
  -- Device Group Servers --
                       Device Group  Primary        Secondary
                       ------------  -------        ---------
  Device group servers: dg-schost-1  phys-schost-1  phys-schost-2
[Become superuser.]
[Remove the hostname from the disk device group:]
# metaset -s dg-schost-1 -d -h phys-schost-2
[Verify removal of the node:]
# scstat -D
  -- Device Group Servers --
                       Device Group  Primary        Secondary
                       ------------  -------        ---------
  Device group servers: dg-schost-1  phys-schost-1  -

How to Create More Than Three Disksets in a Cluster

If you intend to create more than three disksets in the cluster, perform the following steps before you create the disksets. Follow these steps if you are installing disksets for the first time or if you are adding more disksets to a fully configured cluster.

  1. Ensure that the value of the md_nsets variable is high enough. The value should accommodate the total number of disksets you intend to create in the cluster.

    1. On any node of the cluster, check the value of the md_nsets variable in the /kernel/drv/md.conf file.

    2. If the number of disksets in the cluster will be greater than the existing value of md_nsets minus one, increase the value of md_nsets on each node.

      The maximum permissible number of disksets is the value of md_nsets minus one. The maximum possible value of md_nsets is 32.
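
      For example, md_nsets is set on the name="md" line of the /kernel/drv/md.conf file. The line below is an illustration only (the nmd value shown is arbitrary); a setting of md_nsets=6 would allow up to five disksets.


      name="md" parent="pseudo" nmd=128 md_nsets=6;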

    3. Ensure that the /kernel/drv/md.conf file is identical on each node of the cluster.


      Caution –

      Failure to follow this guideline can result in serious Solstice DiskSuite/Solaris Volume Manager errors and possible loss of data.


    4. From one node, shut down the cluster.


      # scshutdown -g0 -y
      

    5. Reboot each node of the cluster.


      ok boot
      

  2. On each node in the cluster, run the devfsadm(1M) command.

    You can run this command on all nodes in the cluster at the same time.

  3. From one node of the cluster, run the scgdevs(1M) command.

  4. On each node, verify that the scgdevs command has completed before you attempt to create any disksets.

    The scgdevs(1M) command calls itself remotely on all nodes, even when the command is run from just one node. To determine whether the scgdevs command has completed processing, run the following command on each node of the cluster.


    % ps -ef | grep scgdevs
    

How to Create a New Disk Group When Initializing Disks (VERITAS Volume Manager)


Note –

This procedure is only for initializing disks. If you are encapsulating disks, use the procedure How to Create a New Disk Group When Encapsulating Disks (VERITAS Volume Manager).


After adding the VxVM disk group, you need to register the disk device group.

If you use VxVM to set up shared disk groups for Oracle Parallel Server/Real Application Clusters, use the cluster functionality of VxVM as described in the VERITAS Volume Manager Administrator's Reference Guide. See Creating Shared Disk Groups for Oracle Parallel Server/Real Application Clusters for more information.

  1. Become superuser on any cluster node that is physically connected to disks that make up the disk group being added.

  2. Create the VxVM disk group and volume.

    Use your preferred method to create the disk group and volume.


    Note –

    If you are setting up a mirrored volume, use Dirty Region Logging (DRL) to decrease volume recovery time after a node failure. However, DRL might decrease I/O throughput.


    See the VERITAS Volume Manager documentation for the procedures to complete this step.

  3. Register the VxVM disk group as a Sun Cluster disk device group.

    See How to Register a Disk Group as a Disk Device Group (VERITAS Volume Manager).

    Do not register the Oracle Parallel Server/Real Application Clusters shared disk groups with the cluster framework.

How to Create a New Disk Group When Encapsulating Disks (VERITAS Volume Manager)


Note –

This procedure is only for encapsulating disks. If you are initializing disks, use the procedure How to Create a New Disk Group When Initializing Disks (VERITAS Volume Manager).


You can make non-root disks into Sun Cluster disk device groups by encapsulating the disks as VxVM disk groups, then registering the disk groups as Sun Cluster disk device groups.

Disk encapsulation is only supported during initial creation of a VxVM disk group. After a VxVM disk group is created and registered as a Sun Cluster disk device group, only disks which can be initialized should be added to the disk group.

If you use VxVM to set up shared disk groups for Oracle Parallel Server/Real Application Clusters, use the cluster functionality of VxVM as described in the VERITAS Volume Manager Administrator's Reference Guide. See Creating Shared Disk Groups for Oracle Parallel Server/Real Application Clusters for more information.

  1. Become superuser on any node of the cluster.

  2. If the disk being encapsulated has file system entries in the /etc/vfstab file, make sure that the mount at boot option is set to no.

    Set back to yes once the disk is encapsulated and registered as a Sun Cluster disk device group.
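
    For example, a vfstab entry for a file system on the disk being encapsulated (the device and mount-point names are hypothetical) would temporarily have no in the mount-at-boot field:


    /dev/dsk/c1t3d0s4  /dev/rdsk/c1t3d0s4  /export/data  ufs  2  no  logging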

  3. Encapsulate the disks.

    Use vxdiskadm menus or the graphical user interface to encapsulate the disks. VxVM requires two free partitions as well as unassigned cylinders at the beginning or the end of the disk. Slice two must also be set to the entire disk. See the vxdiskadm(1M) man page for more information.
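
    For example, assuming the disk to encapsulate is c1t3d0 (a hypothetical device name), you can inspect its label to confirm that slice 2 maps the entire disk and that two slices are free:


    # prtvtoc /dev/rdsk/c1t3d0s2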

  4. Shut down and restart the node.

    The scswitch(1M) command switches all resource groups and device groups from the primary node to the next preferred node. Use shutdown(1M) to shut down and restart the node.


    # scswitch -S -h node[,...]
    # shutdown -g0 -y -i6
    

  5. If necessary, switch all resource groups and device groups back to the original node.

    If the resource groups and device groups were initially configured to fail back to the primary node, this step is not necessary.


    # scswitch -z -D disk-device-group -h node[,...] 
    # scswitch -z -g resource-group -h node[,...] 
    

  6. Register the VxVM disk group as a Sun Cluster disk device group.

    See How to Register a Disk Group as a Disk Device Group (VERITAS Volume Manager).

    Do not register the Oracle Parallel Server/Real Application Clusters shared disk groups with the cluster framework.

How to Add a New Volume to an Existing Disk Device Group (VERITAS Volume Manager)


Note –

After adding the volume, you need to register the configuration change by using the procedure How to Register Disk Group Configuration Changes (VERITAS Volume Manager).


When you add a new volume to an existing VxVM disk device group, perform the procedure from the primary node of the online disk device group.

  1. Become superuser on any node of the cluster.

  2. Determine the primary node for the disk device group to which you are adding the new volume.


    # scstat -D
    

  3. If the disk device group is offline, bring the device group online.


    # scswitch -z -D disk-device-group -h node[,...]
    

    -z -D disk-device-group

    Switches the specified device group.

    -h node

    Specifies the name of the node to switch the disk device group to. This node becomes the new primary.

  4. From the primary node (the node currently mastering the disk device group), create the VxVM volume in the disk group.

    Refer to your VERITAS Volume Manager documentation for the procedure used to create the VxVM volume.

  5. Register the VxVM disk group changes so the global namespace gets updated.

    See How to Register Disk Group Configuration Changes (VERITAS Volume Manager).

How to Make an Existing Disk Group Into a Disk Device Group (VERITAS Volume Manager)

You can make an existing VxVM disk group into a Sun Cluster disk device group by importing the disk group onto the current node, then registering the disk group as a Sun Cluster disk device group.

  1. Become superuser on any node of the cluster.

  2. Import the VxVM disk group onto the current node.


    # vxdg import diskgroup
    

  3. Register the VxVM disk group as a Sun Cluster disk device group.

    See How to Register a Disk Group as a Disk Device Group (VERITAS Volume Manager).

How to Assign a New Minor Number to a Disk Device Group (VERITAS Volume Manager)

If disk device group registration fails because of a minor number conflict with another disk group, you must assign the new disk group a new, unused minor number. After assigning the new minor number, rerun the procedure to register the disk group as a Sun Cluster disk device group.

  1. Become superuser on any node of the cluster.

  2. Determine the minor numbers in use.


    # ls -l /global/.devices/node@nodeid/dev/vx/dsk/*
    

  3. Choose another multiple of 1000 not in use as the base minor number for the new disk group.

  4. Assign the new minor number to the disk group.


    # vxdg reminor diskgroup base-minor-number
    

  5. Register the VxVM disk group as a Sun Cluster disk device group.

    See How to Register a Disk Group as a Disk Device Group (VERITAS Volume Manager).

Example—How to Assign a New Minor Number to a Disk Device Group

This example uses the minor numbers 16000-16002 and 4000-4001. The vxdg reminor command is used to assign the base minor number 5000 to the new disk device group.


# ls -l /global/.devices/node@nodeid/dev/vx/dsk/*
/global/.devices/node@nodeid/dev/vx/dsk/dg1
brw-------   1 root     root      56,16000 Oct  7 11:32 dg1v1
brw-------   1 root     root      56,16001 Oct  7 11:32 dg1v2
brw-------   1 root     root      56,16002 Oct  7 11:32 dg1v3
 
/global/.devices/node@nodeid/dev/vx/dsk/dg2
brw-------   1 root     root      56,4000 Oct  7 11:32 dg2v1
brw-------   1 root     root      56,4001 Oct  7 11:32 dg2v2
# vxdg reminor dg3 5000

How to Register a Disk Group as a Disk Device Group (VERITAS Volume Manager)

This procedure uses the scsetup(1M) utility to register the associated VxVM disk group as a Sun Cluster disk device group.


Note –

After a disk device group has been registered with the cluster, never import or deport a VxVM disk group by using VxVM commands. If you make a change to the VxVM disk group or volume, use the procedure How to Register Disk Group Configuration Changes (VERITAS Volume Manager) to register the disk device group configuration changes. This procedure ensures that the global namespace is in the correct state.


The prerequisites to register a VxVM disk device group are:

When you define the preference order, you also specify whether you want the disk device group to be switched back to the most preferred node in the event that the most preferred node goes down and later returns to the cluster.

See scconf(1M) for more information on node preference and failback options.

Non-primary cluster nodes (spares) transition to secondary according to the node preference order. The default number of secondaries for a device group is normally set to one. This default setting minimizes performance degradation caused by primary checkpointing of multiple secondary nodes during normal operation. For example, in a four node cluster, the default behavior configures one primary, one secondary, and two spare nodes. See also How to Set the Desired Number of Secondaries (VERITAS Volume Manager).

  1. Become superuser on any node of the cluster.

  2. Enter the scsetup utility.


    # scsetup
    

    The Main Menu is displayed.

  3. To work with VxVM disk device groups, type 4 (Device groups and volumes).

    The Device Groups Menu is displayed.

  4. To register a VxVM disk device group, type 1 (Register a VxVM disk group as a device group).

    Follow the instructions and enter the name of the VxVM disk group to be registered as a Sun Cluster disk device group.

    If you use VxVM to set up shared disk groups for Oracle Parallel Server/Real Application Clusters, you do not register the shared disk groups with the cluster framework. Use the cluster functionality of VxVM as described in the VERITAS Volume Manager Administrator's Reference Guide.

  5. If you encounter the following error while attempting to register the disk device group, reminor the disk device group.


    scconf: Failed to add device group - in use

    To reminor the disk device group, use the procedure How to Assign a New Minor Number to a Disk Device Group (VERITAS Volume Manager). This procedure enables you to assign a new minor number that does not conflict with a minor number used by an existing disk device group.

  6. Verify that the disk device group is registered and online.

    If the disk device group is properly registered, information for the new disk device group displays when using the following command.


    # scstat -D
    


    Note –

    If you change any configuration information for a VxVM disk group or volume that is registered with the cluster, you must reregister the disk device group by using scsetup(1M). Such configuration changes include adding or removing volumes, as well as changing the group, owner, or permissions of existing volumes. Reregistration after configuration changes ensures that the global namespace is in the correct state. See How to Update the Global Device Namespace.


Example—Registering a VERITAS Volume Manager Disk Device Group

The following example shows the scconf command generated by scsetup when registering a VxVM disk device group (dg1), and the verification step. This example assumes that the VxVM disk group and volume were created previously.


# scsetup

scconf -a -D type=vxvm,name=dg1,nodelist=phys-schost-1:phys-schost-2

# scstat -D
-- Device Group Servers --
                         Device Group      Primary           Secondary
                         ------------      -------           ---------
Device group servers:    dg1              phys-schost-1      phys-schost-2
 
-- Device Group Status --
                              Device Group        Status              
                              ------------        ------              
  Device group status:        dg1                 Online

Where to Go From Here

To create a cluster file system on the VxVM disk device group, see How to Add a Cluster File System.

If there are problems with the minor number, see How to Assign a New Minor Number to a Disk Device Group (VERITAS Volume Manager).

How to Register Disk Group Configuration Changes (VERITAS Volume Manager)

When you change any configuration information for a VxVM disk group or volume, you need to register the configuration changes for the Sun Cluster disk device group. Registration ensures that the global namespace is in the correct state.

  1. Become superuser on any node in the cluster.

  2. Run the scsetup(1M) utility.


    # scsetup
    

    The Main Menu is displayed.

  3. To work with VxVM disk device groups, type 4 (Device groups and volumes).

    The Device Groups Menu is displayed.

  4. To register configuration changes, type 2 (Synchronize volume information for a VxVM device group).

    Follow the instructions and enter the VxVM disk group that has changed configuration.

Example—Registering VERITAS Volume Manager Disk Group Configuration Changes

The following example shows the scconf command generated by scsetup when registering a changed VxVM disk device group (dg1). This example assumes that the VxVM disk group and volume were created previously.


# scsetup
 
scconf -c -D name=dg1,sync

How to Set the Desired Number of Secondaries (VERITAS Volume Manager)

The numsecondaries property specifies the number of nodes within a device group that can master the group if the primary node fails. The default number of secondaries for device services is one. The value can be set to any integer between one and the number of operational non-primary provider nodes in the device group.

This setting is an important factor in balancing cluster performance and availability. For example, increasing the desired number of secondaries increases the device group's opportunity to survive multiple failures that occur simultaneously within a cluster. Increasing the number of secondaries also degrades performance during normal operation. A smaller number of secondaries typically results in better performance, but reduces availability. However, a larger number of secondaries does not always result in greater availability of the file system or device group in question. Refer to “Key Concepts – Administration and Application Development” in Sun Cluster 3.1 Concepts Guide for more information.

  1. Become superuser on any node of the cluster.

  2. Run the scsetup(1M) utility.


    # scsetup
    

    The Main Menu is displayed.

  3. To work with VxVM disk device groups, type 4 (Device groups and volumes).

    The Device Groups Menu is displayed.

  4. To change key properties of a device group, type 6 (Change key properties of a device group).

    The Change Key Properties Menu is displayed.

  5. To change the desired number of secondaries, type 2 (Change the numsecondaries property).

    Follow the instructions and type the desired number of secondaries to be configured for the disk device group. After an appropriate value has been typed, the corresponding scconf command is executed, a log is printed, and you are returned to the previous menu.

  6. Validate the device group configuration by using the scconf -p command.


    # scconf -p | grep Device
    Device group name:                             dg-schost-1
       Device group type:                          VxVM
       Device group failback enabled:              yes
       Device group node list:                     phys-schost-1, phys-schost-2, phys-schost-3
       Device group ordered node list:             yes
       Device group desired number of secondaries: 1
       Device group diskset name:                  dg-schost-1


    Note –

    If you change any configuration information for a VxVM disk group or volume that is registered with the cluster, you must reregister the disk device group by using scsetup. Such configuration changes include adding or removing volumes, as well as changing the group, owner, or permissions of existing volumes. Reregistration after configuration changes ensures that the global namespace is in the correct state. See How to Update the Global Device Namespace.


  7. Verify the primary node and status for the disk device group.


    # scstat -D
    

Example—Setting the Desired Number of Secondaries (VERITAS Volume Manager)

The following example shows the scconf command that is generated by scsetup when it configures the desired number of secondaries for a device group (diskgrp1). See How to Change the Desired Number of Secondaries for a Device Group for information about changing the desired number of secondaries after a device group is created.


# scconf -a -D type=vxvm,name=diskgrp1,\
nodelist=host1:host2:host3,preferenced=true,failback=enabled,numsecondaries=2
 

How to Remove a Volume From a Disk Device Group (VERITAS Volume Manager)


Note –

After removing the volume from the disk device group, you must register the configuration changes to the disk device group using the procedure How to Register Disk Group Configuration Changes (VERITAS Volume Manager).


  1. Become superuser on any node of the cluster.

  2. Determine the primary node and status for the disk device group.


    # scstat -D
    

  3. If the disk device group is offline, bring it online.


    # scswitch -z -D disk-device-group -h node[,...]
    

    -z

    Performs the switch.

    -D disk-device-group

    Specifies the device group to switch.

    -h node

    Specifies the name of the node to switch to. This node becomes the new primary.

  4. From the primary node (the node currently mastering the disk device group), remove the VxVM volume in the disk group.


    # vxedit -g diskgroup -rf rm volume
    

    -g diskgroup

    Specifies the VxVM disk group containing the volume.

    -rf rm volume

    Removes the specified volume.

  5. Register the disk device group configuration changes to update the global namespace, using scsetup(1M).

    See How to Register Disk Group Configuration Changes (VERITAS Volume Manager).

How to Remove and Unregister a Disk Device Group (VERITAS Volume Manager)

Removing a Sun Cluster disk device group will cause the corresponding VxVM disk group to be deported, not destroyed. However, even though the VxVM disk group still exists, it cannot be used in the cluster unless re-registered.

This procedure uses the scsetup(1M) utility to remove a VxVM disk group and unregister it as a Sun Cluster disk device group.

  1. Become superuser on any node of the cluster.

  2. Take the disk device group offline.


    # scswitch -F -D disk-device-group
    

    -F

    Places the disk device group offline.

    -D disk-device-group

    Specifies the device group to take offline.

  3. Enter the scsetup utility.

    The Main Menu is displayed.


    # scsetup
    

  4. To work with VxVM device groups, type 4 (Device groups and volumes).

    The Device Groups Menu is displayed.

  5. To unregister a VxVM disk group, type 3 (Unregister a VxVM device group).

    Follow the instructions and enter the VxVM disk group to be unregistered.

Example—Removing and Unregistering a VERITAS Volume Manager Disk Device Group

The following example shows the VxVM disk device group dg1 taken offline, and the scconf(1M) command generated by scsetup when it removes and unregisters the disk device group.


# scswitch -F -D dg1
# scsetup

   scconf -r -D name=dg1

How to Add a Node to a Disk Device Group (VERITAS Volume Manager)

This procedure adds a node to a disk device group using the scsetup(1M) utility.

The prerequisites to add a node to a VxVM disk device group are:

  1. Become superuser on any node of the cluster.

  2. Enter the scsetup(1M) utility.

    The Main Menu is displayed.


    # scsetup
    

  3. To work with VxVM disk device groups, type 4 (Device groups and volumes).

    The Device Groups Menu is displayed.

  4. To add a node to a VxVM disk device group, type 4 (Add a node to a VxVM device group).

    Follow the instructions and enter the device group and node names.

  5. Verify that the node has been added.

    Look for the device group information for the new disk displayed by the following command.


    # scconf -p 
    

Example—Adding a Node to a VERITAS Volume Manager Disk Device Group

The following example shows the scconf command generated by scsetup when it adds a node (phys-schost-3) to a VxVM disk device group (dg1), and the verification step.


# scsetup
 
scconf -a -D type=vxvm,name=dg1,nodelist=phys-schost-3
  
# scconf -p 
Device group name:                               dg1
   Device group type:                            VXVM
   Device group failback enabled:                yes
   Device group node list:                       phys-schost-1, phys-schost-3

How to Remove a Node From a Disk Device Group (VERITAS Volume Manager)

Use this procedure to remove a cluster node from the list of potential primaries of a VERITAS Volume Manager (VxVM) disk device group (disk group).

  1. Verify that the node is still a member of the group and that the group is a VxVM device group.

    Device group type VxVM indicates a VxVM disk device group.


    phys-suncluster-1% scconf -pv | grep '(global-galileo)'
      (global-galileo) Device group type:              VxVM
      (global-galileo) Device group failback enabled:  no
      (global-galileo) Device group node list:         phys-suncluster-1, phys-suncluster-2
      (global-galileo) Diskset name:                   global-galileo
    phys-suncluster-1%

  2. Become superuser on a current cluster member node.

  3. Execute the scsetup(1M) command.


    # scsetup
    

    The Main Menu is displayed.

  4. To reconfigure a disk device group, type 4 (Device groups and volumes).

  5. To remove the node from the VxVM disk device group, type 5 (Remove a node from a VxVM device group).

    Follow the prompts to remove the cluster node from the disk device group. You will be asked for information about the following:

    • VxVM device group

    • Node name

  6. Verify that the node has been removed from the VxVM disk device group(s).


    # scconf -p | grep Device
    

Example—Removing a Node From a Disk Device Group (VxVM)

This example shows removal of the node named phys-schost-1 from the dg1 VxVM disk device group.


[Determine the VxVM disk device group for the node:]
# scconf -p | grep Device
  Device group name:                 dg1
    Device group type:               VxVM
    Device group failback enabled:   no
    Device group node list:          phys-schost-1, phys-schost-2
    Device group diskset name:    	dg1
[Become superuser and execute the scsetup utility:]
# scsetup
 Select Device groups and volumes>Remove a node from a VxVM device group.
Answer the questions when prompted. 
You will need the following information.
  You Will Need:            Example:
  VxVM device group name    dg1
  node names                phys-schost-1
[Verify that the scconf command executed properly:]
 
scconf -r -D name=dg1,nodelist=phys-schost-1
 
    Command completed successfully.
Quit the scsetup Device Groups Menu and Main Menu.
[Verify that the node was removed:]
# scconf -p | grep Device
  Device group name:                 dg1
    Device group type:               VxVM
    Device group failback enabled:   no
    Device group node list:          phys-schost-2
    Device group diskset name:    	dg1

How to Remove a Node From a Raw Disk Device Group

Use this procedure to remove a cluster node from the list of potential primaries of a raw disk device group.

  1. Become superuser on a node in the cluster other than the node to remove.

  2. Identify the disk device groups that are connected to the node being removed.

    Look for the node name in the Device group node list entry.


    # scconf -pvv | grep nodename | grep "Device group node list"
    

  3. Determine which disk device groups identified in Step 2 are raw disk device groups.

    Raw disk device groups are of the Disk or Local_Disk device group type.


    # scconf -pvv | grep "group type"
    

  4. Disable the localonly property of each Local_Disk raw disk device group.


    # scconf -c -D name=rawdisk-device-group,localonly=false
    

    See the scconf_dg_rawdisk(1M) man page for more information about the localonly property.

  5. Verify that you have disabled the localonly property of all raw disk device groups that are connected to the node being removed.

    The Disk device group type indicates that the localonly property is disabled for that raw disk device group.


    # scconf -pvv | grep "group type"
    

  6. Remove the node from all raw disk device groups identified in Step 3.

    You must complete this step for each raw disk device group that is connected to the node being removed.


    # scconf -r -D name=rawdisk-device-group,nodelist=nodename
    

Example—Removing a Node From a Raw Disk Device Group

This example shows how to remove a node (phys-schost-2) from a raw disk device group. All commands are run from another node of the cluster (phys-schost-1).


[Identify the disk device groups connected to the node being removed:]
phys-schost-1# scconf -pvv | grep phys-schost-2 | grep "Device group node list"
	(dsk/d4) Device group node list:  phys-schost-2
	(dsk/d2) Device group node list:  phys-schost-1, phys-schost-2
	(dsk/d1) Device group node list:  phys-schost-1, phys-schost-2
[Identify the raw disk device groups:]
phys-schost-1# scconf -pvv | grep "group type"
	(dsk/d4) Device group type:          Local_Disk
	(dsk/d8) Device group type:          Local_Disk
[Disable the localonly flag for each local disk on the node:]
phys-schost-1# scconf -c -D name=dsk/d4,localonly=false
[Verify that the localonly flag is disabled:]
phys-schost-1# scconf -pvv | grep "group type"
    (dsk/d4) Device group type:          Disk
    (dsk/d8) Device group type:          Local_Disk
[Remove the node from all raw disk device groups:]
phys-schost-1# scconf -r -D name=dsk/d4,nodelist=phys-schost-2
phys-schost-1# scconf -r -D name=dsk/d2,nodelist=phys-schost-2
phys-schost-1# scconf -r -D name=dsk/d1,nodelist=phys-schost-2

How to Change Disk Device Properties

The method for establishing the primary ownership of a disk device group is based on the setting of an ownership preference attribute called preferenced. If the attribute is not set, the primary owner of an otherwise unowned disk device group is the first node that attempts to access a disk in that group. However, if this attribute is set, you must specify the preferred order in which nodes attempt to establish ownership.

If you disable the preferenced attribute, then the failback attribute is also automatically disabled. However, if you attempt to enable or re-enable the preferenced attribute, you have the choice of enabling or disabling the failback attribute.

If the preferenced attribute is either enabled or re-enabled, you are required to re-establish the order of nodes in the primary ownership preference list.

This procedure uses scsetup(1M) to set or unset the preferenced attribute and the failback attribute for Solstice DiskSuite/Solaris Volume Manager or VxVM disk device groups.

To run this procedure, you need the name of the disk device group for which you are changing attribute values.

  1. Become superuser on any node of the cluster.

  2. Run the scsetup(1M) utility.

    The Main Menu is displayed.


    # scsetup
    

  3. To work with disk device groups, type 4 (Device groups and volumes).

    The Device Groups Menu is displayed.

  4. To change key properties of a device group, type 6 (Change key properties of a VxVM or Solstice DiskSuite/Solaris Volume Manager device group).

    The Change Key Properties Menu is displayed.

  5. To change a device group property, type 1 (Change the preferenced and/or failback properties).

    Follow the instructions to set the preferenced and failback options for a device group.

  6. Verify that the disk device group attributes have been changed.

    Look for the device group information displayed by the following command.


    # scconf -p 
    

Example—Changing Disk Device Group Properties

The following example shows the scconf command generated by scsetup when it sets the attribute values for a disk device group (dg-schost-1).


# scconf -c -D name=dg-schost-1,nodelist=phys-schost-1:phys-schost-2,\
preferenced=true,failback=enabled,numsecondaries=1

# scconf -p | grep Device
Device group name:                             dg-schost-1
   Device group type:                          SDS
   Device group failback enabled:              yes
   Device group node list:                     phys-schost-1, phys-schost-2
   Device group ordered node list:             yes
   Device group desired number of secondaries: 1
   Device group diskset name:                  dg-schost-1

How to Change the Desired Number of Secondaries for a Device Group

The default number of secondary nodes for a device group is set to one. This setting specifies the number of nodes within a device group that can become primary owner of the group if the primary node fails. The desired number of secondaries value can be set to any integer between one and the number of non-primary provider nodes in the device group.

If the numsecondaries property is changed, secondary nodes are added or removed from the device group if the change causes a mismatch between the actual number of secondaries and the desired number.

This procedure uses scsetup(1M) to set or unset the numsecondaries property for Solstice DiskSuite/Solaris Volume Manager or VxVM disk device groups. Refer to scconf_dg_rawdisk(1M), scconf_dg_sds(1M), scconf_dg_svm (1M) and scconf_dg_vxvm(1M) for information about disk device group options when configuring any device group.

  1. Become superuser on any node of the cluster.

  2. Run the scsetup utility.


    # scsetup
    

    The Main Menu is displayed.

  3. To work with disk device groups, type 4 (Device groups and volumes).

    The Device Groups Menu is displayed.

  4. To change key properties of a device group, type 6 (Change key properties of a device group).

    The Change Key Properties Menu is displayed.

  5. To change the desired number of secondaries, type 2 (Change the numsecondaries property).

    Follow the instructions and type the desired number of secondaries to be configured for the disk device group. After an appropriate value has been entered, the corresponding scconf command is executed, a log is printed, and the user returns to the previous menu.

  6. Verify that the disk device group attribute has been changed.

    Look for the device group information that is displayed by the following command.


    # scconf -p 
    

Example—Changing the Desired Number of Secondaries

The following example shows the scconf command that is generated by scsetup when it configures the desired number of secondaries for a device group (dg-schost-1). This example assumes that the disk group and volume were created previously.


# scconf -c -D name=dg-schost-1,nodelist=phys-schost-1:phys-schost-2:phys-schost-3,\
preferenced=true,failback=enabled,numsecondaries=1

# scconf -p | grep Device
Device group name:                             dg-schost-1
   Device group type:                          SDS/SVM
   Device group failback enabled:              yes
   Device group node list:                     phys-schost-1, phys-schost-2, phys-schost-3
   Device group ordered node list:             yes
   Device group desired number of secondaries: 1
   Device group diskset name:                  dg-schost-1

The following example shows use of a null string value to configure the default number of secondaries. The device group will be configured to use the default value, even if the default value changes.


# scconf -c -D name=diskgrp1,nodelist=host1:host2:host3,\
preferenced=false,failback=enabled,numsecondaries=
 
# scconf -p | grep Device
Device group name:                             dg-schost-1
   Device group type:                          SDS/SVM
   Device group failback enabled:              yes
   Device group node list:                     phys-schost-1, phys-schost-2, phys-schost-3
   Device group ordered node list:             yes
   Device group desired number of secondaries: 1
   Device group diskset name:                  dg-schost-1

How to List a Disk Device Group Configuration

You do not need to be superuser to list the configuration.

There are three ways you can list disk device group configuration information.

  • Use the SunPlex Manager GUI.

    See the SunPlex Manager online help for more information.

  • Use scstat(1M) to list the disk device group configuration.


    % scstat -D
    

  • Use scconf(1M) to list the disk device group configuration.


    % scconf -p
    

Example—Listing the Disk Device Group Configuration By Using scstat

Using the scstat -D command displays the following information.


-- Device Group Servers --
                         Device Group      Primary             Secondary
                         ------------      -------             ---------
  Device group servers:  schost-2          -                   -
  Device group servers:  schost-1          phys-schost-2       phys-schost-3
  Device group servers:  schost-3          -                   -
-- Device Group Status --
                              Device Group      Status              
                              ------------      ------              
  Device group status:        schost-2          Offline
  Device group status:        schost-1          Online
  Device group status:        schost-3          Offline

Example—Listing the Disk Device Group Configuration By Using scconf

When using the scconf command, look for the information listed under device groups.


# scconf -p
...
Device group name: dg-schost-1
	Device group type:              SDS/SVM
	Device group failback enabled:  yes
	Device group node list:         phys-schost-2, phys-schost-3
	Device group diskset name:      dg-schost-1

How to Switch the Primary for a Device Group

This procedure can also be used to start (bring online) an inactive device group.

You can also bring an inactive device group online, or switch the primary for a device group, by using the SunPlex Manager GUI. See the SunPlex Manager online help for more information.

  1. Become superuser on any node of the cluster.

  2. Use scswitch(1M) to switch the disk device group primary.


    # scswitch -z -D disk-device-group -h node
    

    -z

    Performs the switch.

    -D disk-device-group

    Specifies the device group to switch.

    -h node

    Specifies the name of the node to switch to. This node becomes the new primary.

  3. Verify that the disk device group has been switched to the new primary.

    If the disk device group is properly registered, information for the new disk device group displays when using the following command.


    # scstat -D
    

Example—Switching the Primary for a Disk Device Group

The following example shows how to switch the primary for a disk device group and verify the change.


# scswitch -z -D dg-schost-1 -h phys-schost-1
# scstat -D

-- Device Group Servers --
                         Device Group        Primary             Secondary
                         ------------        -------             ---------
  Device group servers:  dg-schost-1         phys-schost-1       phys-schost-2
 
-- Device Group Status --
                              Device Group        Status              
                              ------------        ------              
  Device group status:        dg-schost-1         Online

How to Put a Disk Device Group in Maintenance State

Putting a device group in maintenance state prevents that device group from automatically being brought online whenever one of its devices is accessed. You should put a device group in maintenance state when completing repair procedures that require that all I/O activity be quiesced until completion of the repair. Putting a device group in maintenance state also helps prevent data loss by ensuring that a disk device group is not brought online on one node while the diskset or disk group is being repaired on another node.


Note –

Before a device group can be placed in maintenance state, all access to its devices must be stopped, and all dependent file systems must be unmounted.
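
For example, if a dependent cluster file system is mounted at /global/oracle (a hypothetical mount point), you can check for remaining activity and then unmount the file system before placing the device group in maintenance state:


# fuser -c /global/oracle
# umount /global/oracle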


  1. Place the device group in maintenance state.


    # scswitch -m -D disk-device-group
    

  2. If the repair procedure being performed requires ownership of a diskset or disk group, manually import that diskset or disk group.

    • For Solstice DiskSuite/Solaris Volume Manager:


      # metaset -C take -f -s diskset
      


    Caution –

    If you are taking ownership of a Solstice DiskSuite/Solaris Volume Manager diskset, the metaset -C take command must be used when the device group is in maintenance state. Using metaset -t will bring the device group online as part of taking ownership. If you are importing a VxVM disk group, the -t flag must be used when importing the disk group. This prevents the disk group from automatically being imported if this node is rebooted.


    • For VERITAS Volume Manager:


      # vxdg -t import disk-group-name
      

  3. Complete whatever repair procedure you need to perform.

  4. Release ownership of the diskset or disk group.


    Caution –

    Before taking the disk device group out of maintenance state, you must release ownership of the diskset or disk group. Failure to do so may result in data loss.


    • For Solstice DiskSuite/Solaris Volume Manager:


      # metaset -C release -s diskset
      

    • For VERITAS Volume Manager:


      # vxdg deport disk-group-name
      

  5. Bring the disk device group online.


    # scswitch -z -D disk-device-group -h node
    

Example—Putting a Disk Device Group in Maintenance State

This example shows how to put disk device group dg-schost-1 into maintenance state, and then bring the disk device group back online.


[Place the disk device group in maintenance state.]
# scswitch -m -D dg-schost-1
 
[If needed, manually import the diskset or disk group.]
For Solstice DiskSuite/Solaris Volume Manager:
  # metaset -C take -f -s dg-schost-1
For VERITAS Volume Manager:
  # vxdg -t import dg1
  
[Complete all necessary repair procedures.]
  
[Release ownership.]
For Solstice DiskSuite/Solaris Volume Manager:
  # metaset -C release -s dg-schost-1
For VERITAS Volume Manager:
  # vxdg deport dg1
  
[Bring the disk device group online.]
# scswitch -z -D dg-schost-1 -h phys-schost-1

Administering Cluster File Systems

Table 3–3 Task Map: Administering Cluster File Systems

Task 

For Instructions, Go To… 

Add cluster file systems after the initial Sun Cluster installation 

    - Use newfs(1M) and mkdir

How to Add a Cluster File System

Remove a cluster file system 

    - Use fuser(1M) and umount(1M)

How to Remove a Cluster File System

Check global mount points in a cluster for consistency across nodes 

    - Use sccheck(1M)

How to Check Global Mounts in a Cluster

How to Add a Cluster File System

Perform this task for each cluster file system you create after your initial Sun Cluster installation.


Caution – Caution –

Be sure you specify the correct disk device name. Creating a cluster file system destroys any data on the disks. If you specify the wrong device name, you will erase data that you did not intend to delete.


The prerequisites to add an additional cluster file system are:

If you used SunPlex Manager to install data services, one or more cluster file systems might already exist, provided that there were sufficient shared disks on which to create them.

  1. Become superuser on any node in the cluster.


    Tip –

    For faster file system creation, become superuser on the current primary of the global device for which you are creating a file system.
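
    If you are not sure which node is currently the primary for the device group, you can check before you begin. The following line is a sketch only; dg-schost-1 is a hypothetical device group name.

    [hypothetical device group name:]
    # scstat -D | grep dg-schost-1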


  2. Create a file system using the newfs(1M) command.


    Note –

    The newfs(1M) command is valid only for creating new UFS file systems. To create a new VxFS file system, follow the procedures provided in your VxFS documentation; a brief sketch follows Table 3–4 below.



    # newfs raw-disk-device
    

    The following table shows examples of names for the raw-disk-device argument. Note that naming conventions differ for each volume manager.

    Table 3–4 Sample Raw Disk Device Names

    If Your Volume Manager Is … 

    A Disk Device Name Might Be … 

    Description 

    Solstice DiskSuite/Solaris Volume Manager 

    /dev/md/oracle/rdsk/d1

    Raw disk device d1 within the oracle diskset.

    VERITAS Volume Manager 

    /dev/vx/rdsk/oradg/vol01

    Raw disk device vol01 within the oradg disk group.

    None 

    /dev/global/rdsk/d1s3

    Raw disk device d1s3.
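
    If you are creating a VxFS file system instead of UFS, as mentioned in the note earlier in this step, the command generally takes a form similar to the following. This is a hypothetical sketch only; the raw device name is the VxVM example from Table 3–4, and you should confirm the exact command and options in your VxFS documentation.

    [hypothetical VxFS example:]
    # mkfs -F vxfs /dev/vx/rdsk/oradg/vol01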

     

  3. On each node in the cluster, create a mount point directory for the cluster file system.

    A mount point is required on each node, even if the cluster file system will not be accessed on that node.


    Tip –

    For ease of administration, create the mount point in the /global/device-group directory. Using this location enables you to easily distinguish cluster file systems, which are globally available, from local file systems.



    # mkdir -p /global/device-group/mountpoint
    
    device-group

    Name of the directory that corresponds to the name of the device group that contains the device.

    mountpoint

    Name of the directory on which to mount the cluster file system.

  4. On each node in the cluster, add an entry to the /etc/vfstab file for the mount point.

    1. Use the following required mount options.


      Note –

      Logging is required for all cluster file systems.


      • Solaris UFS logging – Use the global,logging mount options. See the mount_ufs(1M) man page for more information about UFS mount options.


        Note –

        The syncdir mount option is not required for UFS cluster file systems. If you specify syncdir, you are guaranteed POSIX-compliant file system behavior. If you do not, you will have the same behavior that is seen with UFS file systems. When you do not specify syncdir, performance of writes that allocate disk blocks, such as when appending data to a file, can significantly improve. However, in some cases, without syncdir you would not discover an out-of-space condition until you close a file. The cases in which you could have problems if you do not specify syncdir are rare. With syncdir (and POSIX behavior), the out-of-space condition would be discovered before the close.


      • Solstice DiskSuite/Solaris Volume Manager trans metadevice or transactional volume – Use the global mount option (do not use the logging mount option). See your Solstice DiskSuite/Solaris Volume Manager documentation for information about setting up trans metadevices and transactional volumes.


        Note –

        Transactional volumes are scheduled to be removed from the Solaris operating environment in an upcoming Solaris release. Solaris UFS logging, available since the Solaris 8 release, provides the same capabilities with superior performance, as well as lower system administration requirements and overhead.


      • VxFS logging – Use the global, log mount options. See the mount_vxfs(1M) man page for more information about VxFS mount options.

    2. To automatically mount the cluster file system, set the mount at boot field to yes.

    3. Ensure that, for each cluster file system, the information in its /etc/vfstab entry is identical on each node.

    4. Ensure that the entries in each node's /etc/vfstab file list devices in the same order.

    5. Check the boot order dependencies of the file systems.

      For example, consider the scenario where phys-schost-1 mounts disk device d0 on /global/oracle, and phys-schost-2 mounts disk device d1 on /global/oracle/logs. With this configuration, phys-schost-2 can boot and mount /global/oracle/logs only after phys-schost-1 boots and mounts /global/oracle.

    See the vfstab(4) man page for details.
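
    The following entries sketch what the /etc/vfstab entries for the scenario above might look like. The slice numbers and mount options shown are assumptions for illustration, not values taken from an actual configuration.

    [hypothetical entries, identical on each node:]
    /dev/global/dsk/d0s0 /dev/global/rdsk/d0s0 /global/oracle      ufs 2 yes global,logging
    /dev/global/dsk/d1s0 /dev/global/rdsk/d1s0 /global/oracle/logs ufs 2 yes global,logging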

  5. On any node in the cluster, verify that mount points exist and /etc/vfstab file entries are correct on all nodes of the cluster.


    # sccheck
    

    If there are no errors, nothing is returned.

  6. From any node in the cluster, mount the cluster file system.


    # mount /global/device-group/mountpoint
    

  7. On each node of the cluster, verify that the cluster file system is mounted.

    You can use either the df(1M) or mount(1M) command to list mounted file systems.

    To manage a VxFS cluster file system in a Sun Cluster environment, run administrative commands only from the primary node on which the VxFS cluster file system is mounted.
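
    For example, on each node you might confirm the mount with a command similar to the following. The mount point shown is the placeholder used earlier in this procedure.

    [placeholder mount point:]
    # df -k /global/device-group/mountpoint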

Example—Adding a Cluster File System

The following example creates a UFS cluster file system on the Solstice DiskSuite/Solaris Volume Manager metadevice /dev/md/oracle/rdsk/d1.


# newfs /dev/md/oracle/rdsk/d1
...
 
[on each node:]
# mkdir -p /global/oracle/d1
 
# vi /etc/vfstab
#device                device                 mount            FS  fsck  mount          mount
#to mount              to fsck                point           type pass  at boot      options
#                       
/dev/md/oracle/dsk/d1 /dev/md/oracle/rdsk/d1 /global/oracle/d1 ufs  2    yes         global,logging
[save and exit]
 
[on one node:]
# sccheck
# mount /global/oracle/d1
# mount
...
/global/oracle/d1 on /dev/md/oracle/dsk/d1 read/write/setuid/global/logging/
largefiles on Sun Oct 3 08:56:16 2001

How to Remove a Cluster File System

You remove a cluster file system by merely unmounting it. If you want to also remove or delete the data, remove the underlying disk device (or metadevice or volume) from the system.


Note –

Cluster file systems are automatically unmounted as part of the system shutdown that occurs when you run scshutdown(1M) to stop the entire cluster. A cluster file system is not unmounted when you run shutdown to stop a single node. However, if the node being shut down is the only node with a connection to the disk, any attempt to access the cluster file system on that disk results in an error.


The prerequisites to unmount cluster file systems are:

  1. Become superuser on any node in the cluster.

  2. Determine which cluster file systems are mounted.


    # mount -v
    

  3. On each node, list all processes that are using the cluster file system, so you know which processes you are going to stop.


    # fuser -c [ -u ] mountpoint
    

    -c

    Reports on files that are mount points for file systems and any files within those mounted file systems.

    -u

    (Optional) Displays the user login name for each process ID.

    mountpoint

    Specifies the name of the cluster file system for which you want to stop processes.

  4. On each node, stop all processes for the cluster file system.

    Use your preferred method for stopping processes. If necessary, use the following command to force termination of processes associated with the cluster file system.


    # fuser -c -k mountpoint
    

    A SIGKILL is sent to each process using the cluster file system.

  5. On each node, verify that no processes are using the file system.


    # fuser -c mountpoint
    

  6. From just one node, unmount the file system.


    # umount mountpoint
    

    mountpoint

    Specifies the name of the cluster file system you want to unmount. This can be either the directory name where the cluster file system is mounted, or the device name path of the file system.

  7. (Optional) Edit the /etc/vfstab file to delete the entry for the cluster file system being removed.

    Perform this step on each cluster node that has an entry for this cluster file system in its /etc/vfstab file.

  8. (Optional) Remove the disk device group/metadevice/plex.

    See your volume manager documentation for more information.
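
    For example, if the cluster file system was built on a Solstice DiskSuite/Solaris Volume Manager metadevice, a command of the following general form removes that metadevice. This is a sketch only; the diskset name oracle and metadevice d1 are the hypothetical names used in the example that follows, and you should verify the procedure in your volume manager documentation before running it.

    [hypothetical diskset and metadevice names:]
    # metaclear -s oracle d1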

Example—Removing a Cluster File System

The following example removes a UFS cluster file system mounted on the Solstice DiskSuite/Solaris Volume Manager metadevice /dev/md/oracle/rdsk/d1.


# mount -v
...
/global/oracle/d1 on /dev/md/oracle/dsk/d1 read/write/setuid/global/logging/largefiles 
# fuser -c /global/oracle/d1
/global/oracle/d1: 4006c
# fuser -c -k /global/oracle/d1
/global/oracle/d1: 4006c
# fuser -c /global/oracle/d1
/global/oracle/d1:
# umount /global/oracle/d1
 
[On each node, remove the following entry:]
# vi /etc/vfstab
#device           device        mount   FS      fsck    mount   mount
#to mount         to fsck       point   type    pass    at boot options
#                       
/dev/md/oracle/dsk/d1 /dev/md/oracle/rdsk/d1 /global/oracle/d1 ufs 2 yes global,logging
[Save and exit.]

Note –

To remove the data on the cluster file system, remove the underlying device. See your volume manager documentation for more information.


How to Check Global Mounts in a Cluster

The sccheck(1M) utility verifies the syntax of the entries for cluster file systems in the /etc/vfstab file. If there are no errors, nothing is returned.


Note –

Run sccheck after making cluster configuration changes, such as removing a cluster file system, that have affected devices or volume management components.


  1. Become superuser on any node in the cluster.

  2. Check the cluster global mounts.


    # sccheck