Sun Cluster 3.0 12/01 System Administration Guide

Chapter 3 Administering Global Devices and Cluster File Systems

This chapter provides the procedures for administering global devices and cluster file systems.

For a list of the procedures in this chapter and a high-level description of the related tasks, see Table 3-2.

See the Sun Cluster 3.0 12/01 Concepts document for conceptual information related to global devices, the global namespace, disk device groups, and the cluster file system.

3.1 Administering Global Devices and the Global Namespace Overview

Administration of Sun Cluster disk device groups depends on the volume manager installed on the cluster. Solstice DiskSuite is "cluster-aware," so you add, register, and remove disk device groups by using the Solstice DiskSuite metaset(1M) command. With VERITAS Volume Manager (VxVM), you create disk groups by using VxVM commands. Then you register the disk groups as Sun Cluster disk device groups through the scsetup(1M) utility. When removing VxVM disk device groups, you use both the scsetup utility and VxVM commands.

Sun Cluster software automatically creates a rawdisk device group for each disk and tape device in the cluster. However, these cluster device groups remain in an offline state until you access them as global devices. When administering disk device groups, or volume manager disk groups, you need to be on the cluster node that is the primary node for the group.
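
For example, to see which node is currently the primary for a disk device group, you can run the scstat(1M) command from any node. The output below is a minimal illustration; the device group and node names are placeholders.

# scstat -D
  -- Device Group Servers --
                       Device Group  Primary        Secondary
                       ------------  -------        ---------
  Device group servers: dg-schost-1  phys-schost-1  phys-schost-2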

Normally, you do not need to administer the global device namespace because the global namespace is automatically set up during installation and automatically updated during Solaris operating environment reconfiguration reboots. However, if the global namespace needs to be regenerated or updated, you can run the scgdevs(1M) command from any cluster node. This causes the global namespace to be updated on all other cluster node members, as well as on nodes that might join the cluster in the future.

3.1.1 Global Device Permissions for Solstice DiskSuite

Changes made to global device permissions are not automatically propagated to all the nodes in the cluster for Solstice DiskSuite and disk devices. If you want to change permissions on global devices, you must manually change the permissions on all the nodes in the cluster. For example, if you want to change permissions on global device /dev/global/dsk/d3s0 to 644, you must execute

# chmod 644 /dev/global/dsk/d3s0

on all nodes in the cluster.

VxVM does not support the chmod command. To change global device permissions in VxVM, consult the VxVM administrator's guide.
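
As an illustration only (the vxedit set usage and the oradg and vol01 names are assumptions; verify the exact syntax in your VxVM documentation), the owner, group, and permissions of a VxVM volume might be changed with a command of the following form. Afterward, register the change as described in "3.3.11 How to Register Disk Group Configuration Changes (VERITAS Volume Manager)".

# vxedit -g oradg set user=oracle group=dba mode=0600 vol01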

3.1.2 Dynamic Reconfiguration With Global Devices

There are a few issues you must consider when completing dynamic reconfiguration (DR) operations on disk and tape devices in a cluster.


Caution -

If the current primary node fails while you are performing the DR operation on a secondary node, cluster availability is impacted. The primary node will have no place to fail over to until a new secondary node is provided.


To perform DR operations on global devices, complete the following steps in the order indicated.

Table 3-1 Task Map: Dynamic Reconfiguration with Disk and Tape Devices

Task 

For Instructions, Go To... 

1. If a DR operation that affects an active device group must be performed on the current primary node, switch the primary and secondary nodes before performing the DR remove operation on the device. 

"3.3.18 How to Switch the Primary for a Device Group"

2. Perform the DR remove operation on the device being removed. 

Sun Enterprise 10000 Dynamic Reconfiguration User Guide and the Sun Enterprise 10000 Dynamic Reconfiguration Reference Manual (from the Solaris 8 on Sun Hardware collection)

3.1.3 VERITAS Volume Manager Administration Considerations

For Sun Cluster to maintain the VxVM namespace, you must register any VxVM disk group or volume changes as Sun Cluster disk device group configuration changes. Registering these changes ensures that the namespace on all cluster nodes is updated. Examples of configuration changes that impact the namespace include adding, removing, or renaming a volume; and changing the volume permissions, owner, or group ID.
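
For example, the synchronization command that scsetup generates after such a change has the following form, where dg1 is a placeholder disk group name. See "3.3.11 How to Register Disk Group Configuration Changes (VERITAS Volume Manager)" for the full procedure.

# scconf -c -D name=dg1,sync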


Note -

Never import or deport VxVM disk groups using VxVM commands once the disk group has been registered with the cluster as a Sun Cluster disk device group. The Sun Cluster software will handle all cases where disk groups need to be imported or deported.


Each VxVM disk group must have a cluster-wide unique minor number. By default, when a disk group is created, VxVM chooses a random number that is a multiple of 1000 as that disk group's base minor number. For most configurations with only a small number of disk groups, this is sufficient to guarantee uniqueness. However, it is possible that the minor number for a newly-created disk group will conflict with the minor number of a pre-existing disk group imported on a different cluster node. In this case, attempting to register the Sun Cluster disk device group will fail. To fix this problem, the new disk group should be given a new minor number that is a unique value and then registered as a Sun Cluster disk device group.

If you are setting up a mirrored volume, Dirty Region Logging (DRL) can be used to decrease volume recovery time after a node failure. Use of DRL is strongly recommended, although it could decrease I/O throughput.
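
As a sketch only (the vxassist usage, the oradg disk group name, and the volume size are assumptions; see the VERITAS Volume Manager documentation for the supported options), a mirrored volume with a DRL log might be created as follows.

# vxassist -g oradg make vol01 2g layout=mirror,log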

3.1.3.1 Creating Shared Disk Groups for Oracle Parallel Server/Real Application Clusters

If you use VxVM to set up shared disk groups for Oracle Parallel Server/Real Application Clusters, use the cluster functionality of VxVM as described in the VERITAS Volume Manager Administrator's Reference Guide. One key difference between creating shared disk groups for Oracle Parallel Server/Real Application Clusters and creating other disk groups is that you do not register the shared disk groups with the cluster framework.

To create other VxVM disk groups, see "3.3.5 How to Create a New Disk Group When Initializing Disks (VERITAS Volume Manager)".

3.2 Administering Cluster File Systems Overview

No special Sun Cluster commands are necessary for cluster file system administration. Administer a cluster file system as you would any other Solaris file system, using standard Solaris file system commands, such as mount, newfs, and so on. Mount cluster file systems by specifying the -g option to the mount command. Cluster file systems can also be automatically mounted at boot.
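
For example, if a cluster file system already has an /etc/vfstab entry with the global,logging mount options, you can mount it by mount point alone; otherwise, specify the options explicitly. The device and mount point names below are placeholders.

[If the file system has an /etc/vfstab entry:]
# mount /global/oracle/d1
[Otherwise, specify the options on the command line:]
# mount -o global,logging /dev/md/oracle/dsk/d1 /global/oracle/d1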


Note -

When the cluster file system reads files, it does not update the access time on those files.


3.3 Administering Disk Device Groups

The scsetup(1M) utility is an interactive interface to the scconf(1M) command. When scsetup runs, it generates scconf commands. These generated commands are shown in the examples at the end of some procedures.


Note -

Sun Cluster software automatically creates a rawdisk device group for each disk and tape device in the cluster. However, these cluster device groups remain in an offline state until you access them as global devices.


Table 3-2 Task List: Administering Disk Device Groups

Task 

For Instructions, Go To... 

Update the global device namespace (without a reconfiguration reboot) 

    - Use scgdevs

"3.3.1 How to Update the Global Device Namespace"

Add Solstice DiskSuite disksets and register them as disk device groups 

    - Use metaset

"3.3.2 How to Add and Register a Disk Device Group (Solstice DiskSuite)"

Remove Solstice DiskSuite disk device groups from the configuration 

    - Use metaset and metaclear

"3.3.3 How to Remove and Unregister a Disk Device Group (Solstice DiskSuite)"

Remove a node from a Solstice DiskSuite disk device group 

    - Use metaset 

"3.3.4 How to Remove a Node From a Disk Device Group (Solstice DiskSuite)"

Add VERITAS Volume Manager disk groups as disk device groups 

    - Use VxVM commands and scsetup

"3.3.5 How to Create a New Disk Group When Initializing Disks (VERITAS Volume Manager)"

 

"3.3.6 How to Create a New Disk Group When Encapsulating Disks (VERITAS Volume Manager)"

 

"3.3.7 How to Add a New Volume to an Existing Disk Device Group (VERITAS Volume Manager)"

 

"3.3.8 How to Make an Existing Disk Group Into a Disk Device Group (VERITAS Volume Manager)"

 

"3.3.9 How to Assign a New Minor Number to a Disk Device Group (VERITAS Volume Manager)"

 

"3.3.10 How to Register a Disk Group as a Disk Device Group (VERITAS Volume Manager)"

 

"3.3.11 How to Register Disk Group Configuration Changes (VERITAS Volume Manager)"

Remove VERITAS Volume Manager disk device groups from the configuration 

    - Use scsetup (to generate scconf)

"3.3.12 How to Remove a Volume From a Disk Device Group (VERITAS Volume Manager)"

 

"3.3.13 How to Remove and Unregister a Disk Device Group (VERITAS Volume Manager)"

Add a node to a VERITAS Volume Manager disk device group 

    - Use scsetup (to generate scconf)

"3.3.14 How to Add a Node to a Disk Device Group (VERITAS Volume Manager)"

Remove a node from a VERITAS Volume Manager disk device group 

    - Use scsetup (to generate scconf) 

"3.3.15 How to Remove a Node From a Disk Device Group (VERITAS Volume Manager)"

Change disk device group properties 

    - Use scsetup (to generate scconf)

"3.3.16 How to Change Disk Device Properties "

Display disk device groups and properties 

    - Use scconf

"3.3.17 How to List a Disk Device Group Configuration"

Switch the primary for a disk device group 

    - Use scswitch

"3.3.18 How to Switch the Primary for a Device Group"

Put a disk device group in maintenance state 

    - Use metaset or vxdg

"3.3.19 How to Put a Disk Device Group in Maintenance State"

3.3.1 How to Update the Global Device Namespace

When adding a new global device, manually update the global device namespace by running scgdevs(1M).


Note -

The scgdevs command does not have any effect if the node running the command is not currently a cluster member or if the /global/.devices/node@nodeID file system is not mounted.


  1. Become superuser on any node of the cluster.

  2. Use the scgdevs command to reconfigure the namespace.


    # scgdevs
    

3.3.1.1 Example--Updating the Global Device Namespace

The following example shows output generated by a successful run of scgdevs.


# scgdevs 
Configuring the /dev/global directory (global devices)...
obtaining access to all attached disks
reservation program successfully exiting

3.3.2 How to Add and Register a Disk Device Group (Solstice DiskSuite)

Use the metaset(1M) command to create a Solstice DiskSuite diskset and register it as a Sun Cluster disk device group. When you register the diskset, the name you assigned to the diskset will automatically be assigned to the disk device group.

  1. Become superuser on the node connected to the disks where you want to create the diskset.

  2. Calculate the number of metadevice names needed for your configuration, and modify the /kernel/drv/md.conf file on each node.

    See "How to Set the Number of Metadevice Names and Disksets" in the Sun Cluster 3.0 12/01 Software Installation Guide.

  3. Use the metaset command to add the Solstice DiskSuite diskset and register it as a disk device group with Sun Cluster.


    # metaset -s diskset -a -h nodelist
    

    -s diskset

    Specifies the diskset to be created.

    -a -h nodelist

    Adds the list of nodes that can master the diskset.

  4. Verify that the disk device group has been added.

    The disk device group name will match the diskset name specified with metaset.


    # scconf -p | grep disk-device-group
    

3.3.2.1 Example--Adding a Solstice DiskSuite Disk Device Group

The following example shows the creation of the diskset and disk device group and verifies that the disk device group has been created.


# metaset -s dg-schost-1 -a -h phys-schost-1
# scconf -p | grep dg-schost-1
Device group name: dg-schost-1

3.3.3 How to Remove and Unregister a Disk Device Group (Solstice DiskSuite)

Disk device groups are Solstice DiskSuite disksets that have been registered with Sun Cluster. To remove a Solstice DiskSuite disk device group, use the metaclear(1M) and metaset(1M) commands. These commands remove the disk device group with the same name and unregister the disk group as a Sun Cluster disk device group.

Refer to the Solstice DiskSuite documentation for the steps to remove a diskset.
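
As a rough outline only (the diskset and node names are placeholders, and your configuration might require additional steps, such as removing drives from the diskset; follow the Solstice DiskSuite documentation), the removal typically clears all metadevices in the diskset and then deletes the hosts from it, which also unregisters the disk device group.

# metaclear -s dg-schost-1 -a
# metaset -s dg-schost-1 -d -f -h phys-schost-1 phys-schost-2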

3.3.4 How to Remove a Node From a Disk Device Group (Solstice DiskSuite)

Use this procedure to remove a cluster node from the list of potential primaries of a disk device group, on a cluster running Solstice DiskSuite. A node can belong to more than one disk device group at a time, so repeat the metaset command for each disk device group from which you want to remove the node.

  1. Determine the disk device group(s) of which the node to be removed is a member.


    # scstat -D
    

  2. Become superuser on the node that currently owns the disk device group you want to modify.

  3. Delete the node's hostname from the disk device group.


    # metaset -s setname -d -f -h nodelist
    

    -s setname

    Specifies the disk device group name

    -d

    Deletes from the disk device group the nodes identified with -h

    -f

    Force

    -h nodelist

    Removes the node from the list of nodes that can master the disk device group


    Note -

    The update can take several minutes to complete.


  4. Repeat Step 3 for each disk device group from which the node is being removed as a potential primary.

  5. Verify that the node has been removed from the disk device group.

    The disk device group name will match the diskset name specified with metaset.


    # scstat -D
    

3.3.4.1 Example--Removing a Node From a Disk Device Group (SDS)

The following example shows the removal of the host name phys-schost-2 from a disk device group configuration. This eliminates phys-schost-2 as a potential primary for the designated disk device group. Verify removal of the node by running the scstat -D command and checking that the removed node no longer appears in the output.


[Determine the disk device group(s) for the node:]
# scstat -D
  -- Device Group Servers --
                       Device Group  Primary        Secondary
                       ------------  -------        ---------
  Device group servers: dg-schost-1  phys-schost-1  phys-schost-2
[Become superuser.]
[Remove the hostname from all disk device groups:]
# metaset -s dg-schost-1 -d -f -h phys-schost-2
[Verify removal of the node:]
# scstat -D
  -- Device Group Servers --
                       Device Group  Primary       Secondary
                       ------------  -------       ---------
  Device group servers: dg-schost-1  phys-schost-1  -

3.3.5 How to Create a New Disk Group When Initializing Disks (VERITAS Volume Manager)


Note -

This procedure is only for initializing disks. If you are encapsulating disks, use the procedure "3.3.6 How to Create a New Disk Group When Encapsulating Disks (VERITAS Volume Manager)".


After adding the VxVM disk group, you need to register the disk device group.

If you use VxVM to set up shared disk groups for Oracle Parallel Server/Real Application Clusters, use the cluster functionality of VxVM as described in the VERITAS Volume Manager Administrator's Reference Guide. See "3.1.3.1 Creating Shared Disk Groups for Oracle Parallel Server/Real Application Clusters" for more information.

  1. Become superuser on any node of the cluster that is physically connected to the disks that make up the disk group being added.

  2. Create the VxVM disk group and volume.

    Use your preferred method to create the disk group and volume.


    Note -

    If you are setting up a mirrored volume, use Dirty Region Logging (DRL) to decrease volume recovery time after a node failure. However, DRL might decrease I/O throughput.


    See the VERITAS Volume Manager documentation for the procedures to complete this step.

  3. Register the VxVM disk group as a Sun Cluster disk device group.

    See "3.3.10 How to Register a Disk Group as a Disk Device Group (VERITAS Volume Manager)".

    Do not register the Oracle Parallel Server/Real Application Clusters shared disk groups with the cluster framework.
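
The following is a minimal sketch of Step 2 only. The vxdg and vxassist usage, the disk names, and the volume size are assumptions, and the disks are assumed to be already initialized for VxVM use; use whichever method you prefer and consult the VERITAS Volume Manager documentation.

# vxdg init newdg newdg01=c1t1d0 newdg02=c2t1d0
# vxassist -g newdg make vol01 2g layout=mirror,log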

3.3.6 How to Create a New Disk Group When Encapsulating Disks (VERITAS Volume Manager)


Note -

This procedure is only for encapsulating disks. If you are initializing disks, use the procedure "3.3.5 How to Create a New Disk Group When Initializing Disks (VERITAS Volume Manager)".


You can make non-root disks into Sun Cluster disk device groups by first encapsulating them as VxVM disk groups, then registering them as Sun Cluster disk device groups.

Disk encapsulation is only supported during initial creation of a VxVM disk group. Once a VxVM disk group is created and registered as a Sun Cluster disk device group, only disks which can be initialized should be added to the disk group.

If you use VxVM to set up shared disk groups for Oracle Parallel Server/Real Application Clusters, use the cluster functionality of VxVM as described in the VERITAS Volume Manager Administrator's Reference Guide. See "3.1.3.1 Creating Shared Disk Groups for Oracle Parallel Server/Real Application Clusters" for more information.

  1. Become superuser on any node of the cluster.

  2. If the disk being encapsulated has file system entries in the /etc/vfstab file, make sure that the mount at boot option is set to no.

    This can be set back to yes once the disk has been encapsulated and registered as a Sun Cluster disk device group.

  3. Encapsulate the disks.

    Use vxdiskadm menus or the graphical user interface to encapsulate the disks. VxVM requires two free partitions as well as unassigned cylinders at the beginning or the end of the disk. Slice 2 must also be set to the entire disk. See the vxdiskadm(1M) man page for more information.

  4. Shut down and restart the node.

    The scswitch(1M) command switches all resource groups and device groups from the primary node to the next preferred node. Then shutdown(1M) is used to shut down and restart the node.


    # scswitch -S -h nodelist
    # shutdown -g0 -y -i6
    

  5. If necessary, switch all resource groups and device groups back to the original node.

    If the resource groups and device groups were initially configured to fail back to the primary node, this step is not necessary.


    # scswitch -z -h nodelist -D disk-device-group
    # scswitch -z -h nodelist -g resource-group
    

  6. Register the VxVM disk group as a Sun Cluster disk device group.

    See "3.3.10 How to Register a Disk Group as a Disk Device Group (VERITAS Volume Manager)".

    Do not register the Oracle Parallel Server/Real Application Clusters shared disk groups with the cluster framework.

3.3.7 How to Add a New Volume to an Existing Disk Device Group (VERITAS Volume Manager)


Note -

After adding the volume, you need to register the configuration change by using the procedure "3.3.11 How to Register Disk Group Configuration Changes (VERITAS Volume Manager)".


When you add a new volume to an existing VxVM disk device group, you need to perform the procedure from the primary node for the disk device group, and the disk device group must be online.

  1. Become superuser on any node of the cluster.

  2. Determine the primary node and status for the disk device group to which you are adding the new volume.


    # scstat -D
    

  3. If the disk device group is offline, bring it online.


    # scswitch -z -D disk-device-group -h nodelist
    

    -z -D disk-device-group

    Switches the specified device group.

    -h nodelist

    Specifies the name of the node to switch the disk device group to. This node becomes the new primary.

  4. From the primary node (the node currently mastering the disk device group), create the VxVM volume in the disk group.

    Refer to your VERITAS Volume Manager documentation for the procedure used to create the VxVM volume.

  5. Register the VxVM disk group changes so the global namespace gets updated.

    See "3.3.11 How to Register Disk Group Configuration Changes (VERITAS Volume Manager)".

3.3.8 How to Make an Existing Disk Group Into a Disk Device Group (VERITAS Volume Manager)

You can make an existing VxVM disk group into a Sun Cluster disk device group by first importing the disk group onto the current node, then registering the disk group as a Sun Cluster disk device group.

  1. Become superuser on any node of the cluster.

  2. Import the VxVM disk group onto the current node.


    # vxdg import diskgroup
    

  3. Register the VxVM disk group as a Sun Cluster disk device group.

    See "3.3.10 How to Register a Disk Group as a Disk Device Group (VERITAS Volume Manager)".

3.3.9 How to Assign a New Minor Number to a Disk Device Group (VERITAS Volume Manager)

If disk device group registration fails because of a minor number conflict with another disk group, you must assign the new disk group a new, unused minor number. After assigning the new minor number, rerun the procedure to register the disk group as a Sun Cluster disk device group.

  1. Become superuser on any node of the cluster.

  2. Determine the minor numbers in use.


    # ls -l /global/.devices/node@nodeid/dev/vx/dsk/*
    

  3. Choose any other multiple of 1000 that is not in use as the base minor number for the new disk group.

  4. Assign the new minor number to the disk group.


    # vxdg reminor diskgroup base-minor-number
    

  5. Register the VxVM disk group as a Sun Cluster disk device group.

    See "3.3.10 How to Register a Disk Group as a Disk Device Group (VERITAS Volume Manager)".

3.3.9.1 Example--How to Assign a New Minor Number to a Disk Device Group

This example uses the minor numbers 16000-16002 and 4000-4001. The vxdg reminor command is used to assign the base minor number 5000 to the new disk device group.


# ls -l /global/.devices/node@nodeid/dev/vx/dsk/*
/global/.devices/node@nodeid/dev/vx/dsk/dg1
brw-------   1 root     root      56,16000 Oct  7 11:32 dg1v1
brw-------   1 root     root      56,16001 Oct  7 11:32 dg1v2
brw-------   1 root     root      56,16002 Oct  7 11:32 dg1v3
 
/global/.devices/node@nodeid/dev/vx/dsk/dg2
brw-------   1 root     root      56,4000 Oct  7 11:32 dg2v1
brw-------   1 root     root      56,4001 Oct  7 11:32 dg2v2
# vxdg reminor dg3 5000

3.3.10 How to Register a Disk Group as a Disk Device Group (VERITAS Volume Manager)

This procedure uses the scsetup(1M) utility to register the associated VxVM disk group as a Sun Cluster disk device group.


Note -

Once a disk device group has been registered with the cluster, never import or deport a VxVM disk group using VxVM commands. If you make a change to the VxVM disk group or volume, use the procedure "3.3.11 How to Register Disk Group Configuration Changes (VERITAS Volume Manager)" to register the disk device group configuration changes. This will ensure that the global namespace is in the correct state.


The prerequisites to register a VxVM disk device group are superuser privilege on a node in the cluster, the name of the VxVM disk group to be registered, and a preferred order of nodes to master the disk device group.

When you define the preference order, you also specify whether you want the disk device group to be switched back to the most preferred node in the event that the most preferred node goes down and later returns to the cluster.

See scconf(1M) for more information on node preference and failback options.
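
As an illustration only (the dg1 group and node names are placeholders, and the exact property syntax should be confirmed against scconf(1M)), a registration that sets an ordered node list with failback enabled might generate a command of this form.

# scconf -a -D type=vxvm,name=dg1,nodelist=phys-schost-1:phys-schost-2,preferenced=true,failback=enabled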

  1. Become superuser on any node of the cluster.

  2. Enter the scsetup utility.


    # scsetup
    

    The Main Menu is displayed.

  3. To work with VxVM disk device groups, type 4 (Device groups and volumes).

    The Device Groups Menu is displayed.

  4. To register a VxVM disk device group, type 1 (Register a VxVM disk group as a device group).

    Follow the instructions and enter the name of the VxVM disk group to be registered as a Sun Cluster disk device group.

    If you use VxVM to set up shared disk groups for Oracle Parallel Server/Real Application Clusters, you do not register the shared disk groups with the cluster framework. Use the cluster functionality of VxVM as described in the VERITAS Volume Manager Administrator's Reference Guide.

  5. If you encounter the following error while attempting to register the disk device group, reminor the disk device group.


    scconf: Failed to add device group - in use

    To reminor the disk device group, use the procedure "3.3.9 How to Assign a New Minor Number to a Disk Device Group (VERITAS Volume Manager)". This procedure enables you to assign a new minor number that does not conflict with a minor number used by an existing disk device group.

  6. Verify that the disk device group is registered and online.

    If the disk device group is properly registered, information for the new disk device group displays when using the following command.


    # scstat -D
    


    Note -

    If you change any configuration information for a VxVM disk group or volume that is registered with the cluster, you must reregister the disk device group by using scsetup. Such configuration changes include adding or removing volumes, as well as changing the group, owner, or permissions of existing volumes. Reregistration after configuration changes ensures that the global namespace is in the correct state. See "3.3.11 How to Register Disk Group Configuration Changes (VERITAS Volume Manager)".


3.3.10.1 Example--Registering a VERITAS Volume Manager Disk Device Group

The following example shows the scconf command generated by scsetup when it registers a VxVM disk device group (dg1), and the verification step. This example assumes that the VxVM disk group and volume were created previously.


# scsetup

scconf -a -D type=vxvm,name=dg1,nodelist=phys-schost-1:phys-schost-2

# scstat -D
-- Device Group Servers --
                         Device Group      Primary           Secondary
                         ------------      -------           ---------
Device group servers:    dg1              phys-schost-1      phys-schost-2
 
-- Device Group Status --
                              Device Group        Status              
                              ------------        ------              
  Device group status:        dg1                 Online

3.3.10.2 Where to Go From Here

To create a cluster file system on the VxVM disk device group, see "3.4.1 How to Add a Cluster File System".

If there are problems with the minor number, see "3.3.9 How to Assign a New Minor Number to a Disk Device Group (VERITAS Volume Manager)".

3.3.11 How to Register Disk Group Configuration Changes (VERITAS Volume Manager)

When you change any configuration information for a VxVM disk group or volume, you need to register the configuration changes for the Sun Cluster disk device group. This ensures that the global namespace is in the correct state.

  1. Become superuser on any node in the cluster.

  2. Enter the scsetup(1M) utility.


    # scsetup
    

    The Main Menu is displayed.

  3. To work with VxVM disk device groups, type 4 (Device groups and volumes).

    The Device Groups Menu is displayed.

  4. To register configuration changes, type 2 (Synchronize volume information for a VxVM device group).

    Follow the instructions and enter the VxVM disk group that has changed configuration.

3.3.11.1 Example--Registering VERITAS Volume Manager Disk Group Configuration Changes

The following example shows the scconf command generated by scsetup when it registers a changed VxVM disk device group (dg1). This example assumes that the VxVM disk group and volume were created previously.


# scsetup
 
scconf -c -D name=dg1,sync

3.3.12 How to Remove a Volume From a Disk Device Group (VERITAS Volume Manager)


Note -

After removing the volume from the disk device group, you must register the configuration changes to the disk device group using the procedure "3.3.11 How to Register Disk Group Configuration Changes (VERITAS Volume Manager)".


  1. Become superuser on any node of the cluster.

  2. Determine the primary node and status for the disk device group.


    # scstat -D
    

  3. If the disk device group is offline, bring it online.


    # scswitch -z -D disk-device-group -h nodelist
    

    -z

    Performs the switch.

    -D disk-device-group

    Specifies the device group to switch.

    -h nodelist

    Specifies the name of the node to switch to. This node becomes the new primary.

  4. From the primary node (the node currently mastering the disk device group), remove the VxVM volume in the disk group.


    # vxedit -g diskgroup -rf rm volume
    

    -g diskgroup

    Specifies the VxVM disk group containing the volume.

    -rf rm volume

    Removes the specified volume.

  5. Register the disk device group configuration changes to update the global namespace, using scsetup.

    See "3.3.11 How to Register Disk Group Configuration Changes (VERITAS Volume Manager)".

3.3.13 How to Remove and Unregister a Disk Device Group (VERITAS Volume Manager)

Removing a Sun Cluster disk device group will cause the corresponding VxVM disk group to be deported, not destroyed. However, even though the VxVM disk group still exists, it cannot be used in the cluster unless re-registered.
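
If you later want to use the disk group in the cluster again, a minimal sketch (dg1 is a placeholder) is to import it on a cluster node and then re-register it as described in "3.3.8 How to Make an Existing Disk Group Into a Disk Device Group (VERITAS Volume Manager)".

# vxdg import dg1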

This procedure uses the scsetup(1M) utility to remove a VxVM disk group and unregister it as a Sun Cluster disk device group.

  1. Become superuser on any node of the cluster.

  2. Take the disk device group offline.


    # scswitch -F -D disk-device-group
    

    -F

    Places the disk device group offline.

    -D disk-device-group

    Specifies the device group to take offline.

  3. Enter the scsetup utility.

    The Main Menu is displayed.


    # scsetup
    

  4. To work with VxVM device groups, type 4 (Device groups and volumes).

    The Device Groups Menu is displayed.

  5. To unregister a VxVM disk group, type 3 (Unregister a VxVM device group).

    Follow the instructions and enter the VxVM disk group to be unregistered.

3.3.13.1 Example--Removing and Unregistering a VERITAS Volume Manager Disk Device Group

The following example shows the VxVM disk device group dg1 taken offline, and the scconf(1M) command generated by scsetup when it removes and unregisters the disk device group.


# scswitch -F -D dg1
# scsetup

   scconf -r -D name=dg1

3.3.14 How to Add a Node to a Disk Device Group (VERITAS Volume Manager)

This procedure adds a node to a disk device group using the scsetup(1M) utility.

The prerequisites to add a node to a VxVM disk device group are:

  1. Become superuser on any node of the cluster.

  2. Enter the scsetup(1M) utility.

    The Main Menu is displayed.


    # scsetup
    

  3. To work with VxVM disk device groups, type 4 (Device groups and volumes).

    The Device Groups Menu is displayed.

  4. To add a node to a VxVM disk device group, type 4 (Add a node to a VxVM device group).

    Follow the instructions and enter the device group and node names.

  5. Verify that the node has been added.

    Look for the device group information for the new node in the output of the following command.


    # scconf -p 
    

3.3.14.1 Example--Adding a Node to a VERITAS Volume Manager Disk Device Group

The following example shows the scconf command generated by scsetup when it adds a node (phys-schost-3) to a VxVM disk device group (dg1), and the verification step.


# scsetup
 
scconf -a -D type=vxvm,name=dg1,nodelist=phys-schost-3
  
# scconf -p 
Device group name:                               dg1
   Device group type:                            VXVM
   Device group failback enabled:                yes
   Device group node list:                       phys-schost-1, phys-schost-3

3.3.15 How to Remove a Node From a Disk Device Group (VERITAS Volume Manager)

Use this procedure to remove a cluster node from an existing cluster disk device group (disk group) running VERITAS Volume Manager (VxVM).

  1. Determine the disk device group of which the node to be removed is a member.


    # scstat -D
    

  2. Become superuser on a current cluster member node.

  3. Execute the scsetup utility.


    # scsetup
    

    The Main Menu is displayed.

  4. To reconfigure a disk device group, type 4 (Device groups and volumes).

  5. To remove the node from the VxVM disk device group, type 5 (Remove a node from a VxVM device group).

    Follow the prompts to remove the cluster node from the disk device group. You will be asked for information about the following:

    VxVM device group

    Node name

  6. Verify that the node has been removed from the VxVM disk device group:


    # scconf -p | grep Device
    

3.3.15.1 Example--Removing a Node From a Disk Device Group (VxVM)

This example shows removal of the node named phys-schost-4 from the dg1 VxVM disk device group.


[Determine the disk device group for the node:]
# scstat -D
  -- Device Group Servers --
                       Device Group  Primary        Secondary
                       ------------  -------        ---------
  Device group servers: dg1          phys-schost-3  phys-schost-4
[Become superuser and execute the scsetup utility:]
# scsetup
 Select Device groups and volumes>Remove a node from a VxVM device group.
Answer the questions when prompted. 
You will need the following information.
  You Will Need:            Example:
  VxVM device group name    dg1
  node names                phys-schost-4
[Verify that the scconf command executed properly:]
 
scconf -r -D name=dg1,nodelist=phys-schost-4
 
    Command completed successfully.
Quit the scsetup Device Groups Menu and Main Menu.
[Verify that the node was removed:]
# scconf -p | grep Device
  Device group name:                 dg1
    Device group type:               VxVM
    Device group failback enabled:   no
    Device group node list:          phys-schost-3
    Device group diskset name:    	dg1

3.3.16 How to Change Disk Device Properties

The method for establishing the primary ownership of a disk device group is based on the setting of an ownership preference attribute called preferenced. If the attribute is not set, the primary owner of an otherwise unowned disk device group is the first node that attempts to access a disk in that group. However, if this attribute is set, you must specify the preferred order in which nodes attempt to establish ownership.

If you disable the preferenced attribute, then the failback attribute is also automatically disabled. However, if you attempt to enable or re-enable the preferenced attribute, you have the choice of enabling or disabling the failback attribute.

If the preferenced attribute is either enabled or re-enabled, you are required to re-establish the order of nodes in the primary ownership preference list.

This procedure uses scsetup(1M) to set or unset the preferenced attribute and the failback attribute for Solstice DiskSuite or VxVM disk device groups.

To run this procedure, you need the name of the disk device group for which you are changing attribute values.

  1. Become superuser on any node of the cluster.

  2. Enter the scsetup(1M) utility.

    The Main Menu is displayed.


    # scsetup
    

  3. To work with disk device groups, type 4 (Device groups and volumes).

    The Device Groups Menu is displayed.

  4. To change a device group property, type 6 (Change key properties of a VxVM or Solstice DiskSuite device group).

    Follow the instructions to set the preferenced and failback options for a device group.

  5. Verify that the disk device group attributes have been changed.

    Look for the device group information displayed by the following command.


    # scconf -p 
    

3.3.16.1 Example--Changing Disk Device Group Properties

The following example shows the scconf command generated by scsetup when it sets the attribute values for a disk device group (dg-schost-1).


# scconf -c -D name=dg-schost-1,nodelist=phys-schost-1:phys-schost-2,\
preferenced=true,failback=enabled

# scconf -p | grep Device
Device group name:                             dg-schost-1
   Device group type:                          SDS
   Device group failback enabled:              yes
   Device group node list:                     phys-schost-1, phys-schost-2
   Device group ordered node list:             yes
   Device group diskset name:                  dg-schost-1

3.3.17 How to List a Disk Device Group Configuration

You do not need to be superuser to list the configuration.

There are three ways you can list disk device group configuration information.

  • Use the SunPlex Manager GUI.

    See the SunPlex Manager online help for more information.

  • Use scstat(1M) to list the disk device group configuration.


    % scstat -D
    

  • Use scconf(1M) to list the disk device group configuration.


    % scconf -p
    

3.3.17.1 Example--Listing the Disk Device Group Configuration By Using scstat

Using the scstat -D command displays the following information.


-- Device Group Servers --
                         Device Group      Primary             Secondary
                         ------------      -------             ---------
  Device group servers:  schost-2          -                   -
  Device group servers:  schost-1          phys-schost-2       phys-schost-3
  Device group servers:  schost-3          -                   -
-- Device Group Status --
                              Device Group      Status              
                              ------------      ------              
  Device group status:        schost-2          Offline
  Device group status:        schost-1          Online
  Device group status:        schost-3          Offline

3.3.17.2 Example--Listing the Disk Device Group Configuration By Using scconf

When using the scconf command, look for the information listed under device groups.


# scconf -p
...
Device group name: dg-schost-1
	Device group type:              SDS
	Device group failback enabled:  yes
	Device group node list:         phys-schost-2, phys-schost-3
	Device group diskset name:      dg-schost-1

3.3.18 How to Switch the Primary for a Device Group

This procedure can also be used to start (bring online) an inactive device group.

You can also bring an inactive device group online, or switch the primary for a device group, by using the SunPlex Manager GUI. See the SunPlex Manager online help for more information.

  1. Become superuser on any node of the cluster.

  2. Use scswitch(1M) to switch the disk device group primary.


    # scswitch -z -D disk-device-group -h nodelist
    

    -z

    Performs the switch.

    -D disk-device-group

    Specifies the device group to switch.

    -h nodelist

    Specifies the name of the node to switch to. This node becomes the new primary.

  3. Verify that the disk device group has been switched to the new primary.

    If the switch was successful, the output of the following command shows the new primary for the disk device group.


    # scstat -D
    

3.3.18.1 Example--Switching the Primary for a Disk Device Group

The following example shows how to switch the primary for a disk device group and verify the change.


# scswitch -z -D dg-schost-1 -h phys-schost-1
# scstat -D

-- Device Group Servers --
                         Device Group        Primary             Secondary
                         ------------        -------             ---------
Device group servers:    dg-schost-1         phys-schost-1       phys-schost-2
 
-- Device Group Status --
                              Device Group        Status              
                              ------------        ------              
  Device group status:        dg-schost-1         Online

3.3.19 How to Put a Disk Device Group in Maintenance State

Putting a device group in maintenance state prevents that device group from automatically being brought online whenever one of its devices is accessed. You should put a device group in maintenance state when completing repair procedures that require all I/O activity to be quiesced until the repair is complete. Putting a device group in maintenance state also helps prevent data loss by ensuring that a disk device group is not brought online on one node while the diskset or disk group is being repaired on another node.


Note -

Before a device group can be placed in maintenance state, all access to its devices must be stopped, and all dependent file systems must be unmounted.


  1. Place the device group in maintenance state.


    # scswitch -m -D disk-device-group
    

  2. If the repair procedure being performed requires ownership of a diskset or disk group, manually import that diskset or disk group.

    • For Solstice DiskSuite:


      # metaset -C take -f -s diskset
      


    Caution -

    If you are taking ownership of an SDS diskset, the metaset -C take command must be used when the device group is in maintenance state. Using metaset -t will bring the device group online as part of taking ownership. If you are importing a VxVM disk group, the -t flag must be used when importing the disk group. This prevents the disk group from automatically being imported if this node is rebooted.


    • For VERITAS Volume Manager:


      # vxdg -t import disk-group-name
      

  3. Complete whatever repair procedure you need to perform.

  4. Release ownership of the diskset or disk group.


    Caution -

    Before taking the disk device group out of maintenance state, you must release ownership of the diskset or disk group. Failure to do so may result in data loss.


    • For Solstice DiskSuite:


      # metaset -C release -s diskset
      

    • For VERITAS Volume Manager:


      # vxdg deport disk-group-name
      

  5. Bring the disk device group online.


    # scswitch -z -D disk-device-group -h nodelist
    

3.3.19.1 Example--Putting a Disk Device Group in Maintenance State

This example shows how to put disk device group dg-schost-1 into maintenance state, and remove the disk device group from maintenance state.


[Place the disk device group in maintenance state.]
# scswitch -m -D dg-schost-1
 
[If needed, manually import the diskset or disk group.]
For Solstice DiskSuite:
  # metaset -C take -f -s dg-schost-1
For VERITAS Volume Manager:
  # vxdg -t import dg1
  
[Complete all necessary repair procedures.]
  
[Release ownership.]
For Solstice DiskSuite:
  # metaset -C release -s dg-schost-1
For VERITAS Volume Manager:
  # vxdg deport dg1
  
[Bring the disk device group online.]
# scswitch -z -D dg-schost-1 -h phys-schost-1

3.4 Administering Cluster File Systems

Table 3-3 Task Map: Administering Cluster File Systems

Task 

For Instructions, Go To... 

Add cluster file systems after the initial Sun Cluster installation 

    - Use newfs and mkdir

"3.4.1 How to Add a Cluster File System"

Remove a cluster file system 

    - Use fuser and umount

"3.4.2 How to Remove a Cluster File System"

Check global mount points in a cluster for consistency across nodes 

    - Use sccheck

"3.4.3 How to Check Global Mounts in a Cluster"

3.4.1 How to Add a Cluster File System

Perform this task for each cluster file system you create after your initial Sun Cluster installation.


Caution -

Be sure you specify the correct disk device name. Creating a cluster file system destroys any data on the disks. If you specify the wrong device name, you will erase data that you did not intend to delete.


The prerequisites to add an additional cluster file system are:

If you used SunPlex Manager to install data services, one or more cluster file systems already exist if there were sufficient shared disks on which to create the cluster file systems.

  1. Become superuser on any node in the cluster.


    Tip -

    For faster file system creation, become superuser on the current primary of the global device for which you are creating a file system.


  2. Create a file system using the newfs(1M) command.


    # newfs raw-disk-device
    

    The following table shows examples of names for the raw-disk-device argument. Note that naming conventions differ for each volume manager.

    Table 3-4 Sample Raw Disk Device Names

    If Your Volume Manager Is ... 

    A Disk Device Name Might Be ... 

    Description 

    Solstice DiskSuite 

    /dev/md/oracle/rdsk/d1

    Raw disk device d1 within the oracle diskset.

    VERITAS Volume Manager 

    /dev/vx/rdsk/oradg/vol01

    Raw disk device vol01 within the oradg disk group.

    None 

    /dev/global/rdsk/d1s3

    Raw disk device for block slice d1s3.

     

  3. On each node in the cluster, create a mount point directory for the cluster file system.

    A mount point is required on each node, even if the cluster file system will not be accessed on that node.


    Tip -

    For ease of administration, create the mount point in the /global/device-group directory. Using this location enables you to easily distinguish cluster file systems, which are globally available, from local file systems.



    # mkdir -p /global/device-group/mountpoint
    
    device-group

    Name of the directory that corresponds to the name of the device group that contains the device.

    mountpoint

    Name of the directory on which to mount the cluster file system.

  4. On each node in the cluster, add an entry to the /etc/vfstab file for the mount point.

    1. Use the following required mount options.


      Note -

      Logging is required for all cluster file systems.


      • Solaris UFS logging - Use the global,logging mount options. See the mount_ufs(1M) man page for more information about UFS mount options.


        Note -

        The syncdir mount option is not required for UFS cluster file systems. If you specify syncdir, you are guaranteed POSIX-compliant file system behavior. If you do not, you will have the same behavior that is seen with UFS file systems. When you do not specify syncdir, performance of writes that allocate disk blocks, such as when appending data to a file, can significantly improve. However, in some cases, without syncdir you would not discover an out-of-space condition until you close a file. The cases in which you could have problems if you do not specify syncdir are rare. With syncdir (and POSIX behavior), the out-of-space condition would be discovered before the close.


      • Solstice DiskSuite trans metadevice - Use the global mount option (do not use the logging mount option). See your Solstice DiskSuite documentation for information about setting up trans metadevices.

      • VxFS logging - Use the global, log mount options. See the mount_vxfs(1M) man page for more information about VxFS mount options.

    2. To automatically mount the cluster file system, set the mount at boot field to yes.

    3. Ensure that, for each cluster file system, the information in its /etc/vfstab entry is identical on each node.

    4. Ensure that the entries in each node's /etc/vfstab file list devices in the same order.

    5. Check the boot order dependencies of the file systems.

      For example, consider the scenario where phys-schost-1 mounts disk device d0 on /global/oracle, and phys-schost-2 mounts disk device d1 on /global/oracle/logs. With this configuration, phys-schost-2 can boot and mount /global/oracle/logs only after phys-schost-1 boots and mounts /global/oracle.

    See the vfstab(4) man page for details.

  5. On any node in the cluster, verify that mount points exist and /etc/vfstab file entries are correct on all nodes of the cluster.


    # sccheck
    

    If there are no errors, nothing is returned.

  6. From any node in the cluster, mount the cluster file system.


    # mount /global/device-group/mountpoint
    

  7. On each node of the cluster, verify that the cluster file system is mounted.

    You can use either the df(1M) or mount(1M) command to list mounted file systems.

    To manage a VxFS cluster file system in a Sun Cluster environment, run administrative commands only from the primary node on which the VxFS cluster file system is mounted.

3.4.1.1 Example--Adding a Cluster File System

The following example creates a UFS cluster file system on the Solstice DiskSuite metadevice /dev/md/oracle/rdsk/d1.


# newfs /dev/md/oracle/rdsk/d1
...
 
[on each node:]
# mkdir -p /global/oracle/d1
 
# vi /etc/vfstab
#device                device                 mount            FS  fsck  mount          mount
#to mount              to fsck                point           type pass  at boot      options
#                       
/dev/md/oracle/dsk/d1 /dev/md/oracle/rdsk/d1 /global/oracle/d1 ufs  2    yes         global,logging
[save and exit]
 
[on one node:]
# sccheck
# mount /global/oracle/d1
# mount
...
/global/oracle/d1 on /dev/md/oracle/dsk/d1 read/write/setuid/global/logging/
largefiles on Sun Oct 3 08:56:16 2001

3.4.2 How to Remove a Cluster File System

You "remove" a cluster file system by merely unmounting it. If you want to also remove or delete the data, remove the underlying disk device (or metadevice or volume) from the system.


Note -

Cluster file systems are automatically unmounted as part of the system shutdown that occurs when you run scshutdown(1M) to stop the entire cluster. A cluster file system is not unmounted when you run shutdown to stop a single node. However, if the node being shut down is the only node with a connection to the disk, any attempt to access the cluster file system on that disk results in an error.


The prerequisites to unmount cluster file systems are:

  1. Become superuser on any node in the cluster.

  2. Determine which cluster file systems are mounted.


    # mount -v
    

  3. On each node, list all processes that are using the cluster file system, so you know which processes you are going to stop.


    # fuser -c [ -u ] mountpoint
    

    -c

    Reports on files that are mount points for file systems and any files within those mounted file systems.

    -u

    (Optional) Displays the user login name for each process ID.

    mountpoint

    Specifies the name of the cluster file system for which you want to stop processes.

  4. On each node, stop all processes for the cluster file system.

    Use your preferred method for stopping processes. If necessary, use the following command to force termination of processes associated with the cluster file system.


    # fuser -c -k mountpoint
    

    A SIGKILL is sent to each process using the cluster file system.

  5. On each node, verify that no processes are using the file system.


    # fuser -c mountpoint
    

  6. From just one node, unmount the file system.


    # umount mountpoint
    

    mountpoint

    Specifies the name of the cluster file system you want to unmount. This can be either the directory name where the cluster file system is mounted, or the device name path of the file system.

  7. (Optional) Edit the /etc/vfstab file to delete the entry for the cluster file system being removed.

    Perform this step on each cluster node that has an entry for this cluster file system in its /etc/vfstab file.

  8. (Optional) Remove the disk device group/metadevice/plex.

    See your volume manager documentation for more information.

3.4.2.1 Example--Removing a Cluster File System

The following example removes a UFS cluster file system mounted on the Solstice DiskSuite metadevice /dev/md/oracle/rdsk/d1.


# mount -v
...
/global/oracle/d1 on /dev/md/oracle/dsk/d1 read/write/setuid/global/logging/largefiles on Sun Oct  3 08:56:16 1999
# fuser -c /global/oracle/d1
/global/oracle/d1: 4006c
# fuser -c -k /global/oracle/d1
/global/oracle/d1: 4006c
# fuser -c /global/oracle/d1
/global/oracle/d1:
# umount /global/oracle/d1
 
[On each node, remove the /etc/vfstab entry for the cluster file system:]
# vi /etc/vfstab
#device           device        mount   FS      fsck    mount   mount
#to mount         to fsck       point   type    pass    at boot options
#                       
/dev/md/oracle/dsk/d1 /dev/md/oracle/rdsk/d1 /global/oracle/d1 ufs 2 yes global,logging
[Save and exit.]

Note -

To remove the data on the cluster file system, remove the underlying device. See your volume manager documentation for more information.


3.4.3 How to Check Global Mounts in a Cluster

The sccheck(1M) utility verifies the syntax of the entries for cluster file systems in the /etc/vfstab file. If there are no errors, nothing is returned.


Note -

Run sccheck after making cluster configuration changes, such as removing a cluster file system, that have affected devices or volume management components.


  1. Become superuser on any node in the cluster.

  2. Check the cluster global mounts.


    # sccheck