Oracle Solaris Cluster System Administration Guide (Oracle Solaris Cluster 4.0)

Administering Device Groups

As your cluster requirements change, you might need to add, remove, or modify the device groups on your cluster. Oracle Solaris Cluster provides an interactive interface called clsetup that you can use to make these changes. clsetup generates cluster commands. Generated commands are shown in the examples at the end of some procedures. The following table lists tasks for administering device groups and provides links to the appropriate procedures in this section.


Caution

Caution - Do not run metaset -s setname -f -t on a cluster node that is booted outside the cluster if other nodes are active cluster members and at least one of them owns the disk set.



Note - Oracle Solaris Cluster software automatically creates a raw-disk device group for each disk and tape device in the cluster. However, cluster device groups remain in an offline state until you access the groups as global devices.


Table 5-2 Task Map: Administering Device Groups

Task: Update the global-devices namespace without a reconfiguration reboot by using the cldevice populate command.
Instructions: How to Update the Global-Devices Namespace

Task: Change the size of a lofi device that is used for the global-devices namespace.
Instructions: How to Change the Size of a lofi Device That Is Used for the Global-Devices Namespace

Task: Move an existing global-devices namespace.
Instructions: Migrating the Global-Devices Namespace

Task: Add Solaris Volume Manager disk sets and register them as device groups by using the metaset command.
Instructions: How to Add and Register a Device Group (Solaris Volume Manager)

Task: Add and register a raw-disk device group by using the cldevicegroup command.
Instructions: How to Add and Register a Device Group (Raw-Disk)

Task: Add a named device group for ZFS by using the cldevicegroup command.
Instructions: How to Add and Register a Replicated Device Group (ZFS)

Task: Remove Solaris Volume Manager device groups from the configuration by using the metaset and metaclear commands.
Instructions: How to Remove and Unregister a Device Group (Solaris Volume Manager)

Task: Remove a node from all device groups by using the cldevicegroup, metaset, and clsetup commands.
Instructions: How to Remove a Node From All Device Groups

Task: Remove a node from a Solaris Volume Manager device group by using the metaset command.
Instructions: How to Remove a Node From a Device Group (Solaris Volume Manager)

Task: Remove a node from a raw-disk device group by using the cldevicegroup command.
Instructions: How to Remove a Node From a Raw-Disk Device Group

Task: Change device group properties by using clsetup to generate cldevicegroup.
Instructions: How to Change Device Group Properties

Task: Display device groups and properties by using the cldevicegroup show command.
Instructions: How to List a Device Group Configuration

Task: Change the desired number of secondaries for a device group by using clsetup to generate cldevicegroup.
Instructions: How to Set the Desired Number of Secondaries for a Device Group

Task: Switch the primary for a device group by using the cldevicegroup switch command.
Instructions: How to Switch the Primary for a Device Group

Task: Put a device group in maintenance state by using the metaset or vxdg command.
Instructions: How to Put a Device Group in Maintenance State

How to Update the Global-Devices Namespace

When adding a new global device, manually update the global-devices namespace by running the cldevice populate command.


Note - The cldevice populate command does not have any effect if the node that is running the command is not currently a cluster member. The command also has no effect if the /global/.devices/node@nodeID file system is not mounted.
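
If you are unsure whether these conditions are met on a node, you can check them before you run the command. A minimal sketch (clnode, clinfo, and df are standard Oracle Solaris Cluster and Oracle Solaris commands):

# clnode status
# df -k /global/.devices/node@`clinfo -n`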


  1. Become superuser or assume a role that provides solaris.cluster.modify RBAC authorization on any node of the cluster.
  2. On each node in the cluster, run the devfsadm command.

    You can run this command on all nodes in the cluster at the same time. For more information, see the devfsadm(1M) man page.

  3. Reconfigure the namespace.
    # cldevice populate
  4. On each node, verify that the cldevice populate command has completed processing before you attempt to create any disk sets.

    The cldevice command calls itself remotely on all nodes, even when the command is run from just one node. To determine whether the cldevice populate command has completed processing, run the following command on each node of the cluster.

    # ps -ef | grep cldevice populate

Example 5-1 Updating the Global-Devices Namespace

The following example shows the output generated by successfully running the cldevice populate command.

# devfsadm
# cldevice populate
Configuring the /dev/global directory (global devices)...
obtaining access to all attached disks
reservation program successfully exiting
# ps -ef | grep cldevice populate

How to Change the Size of a lofi Device That Is Used for the Global-Devices Namespace

If you use a lofi device for the global-devices namespace on one or more nodes of the global cluster, perform this procedure to change the size of the device.

  1. Become superuser or assume a role that provides solaris.cluster.modify RBAC authorization on a node whose lofi device for the global-devices namespace you want to resize.
  2. Evacuate services off the node and reboot the node into noncluster mode.

    Do this to ensure that global devices are not served from this node while you perform this procedure. For instructions, see How to Boot a Node in Noncluster Mode.

  3. Unmount the global-devices file system and detach its lofi device.

    The global-devices file system mounts locally.

    phys-schost# umount /global/.devices/node\@`clinfo -n` > /dev/null 2>&1
    
    Ensure that the lofi device is detached.
    phys-schost# lofiadm -d /.globaldevices
    The command returns no output if the device is detached.

    Note - If the file system is mounted by using the -m option, no entry is added to the mnttab file. The umount command might report a warning similar to the following:

    umount: warning: /global/.devices/node@2 not in mnttab
    not mounted

    This warning is safe to ignore.


  4. Delete and recreate the /.globaldevices file with the required size.

    The following example shows the creation of a new /.globaldevices file that is 200 Mbytes in size.

    phys-schost# rm /.globaldevices
    phys-schost# mkfile 200M /.globaldevices
  5. Create a new file system for the global-devices namespace.
    phys-schost# lofiadm -a /.globaldevices
    phys-schost# newfs `lofiadm /.globaldevices` < /dev/null
  6. Boot the node into cluster mode.

    The global devices are now populated on the new file system.

    phys-schost# reboot
  7. Migrate to the node any services that you want to run on that node.
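
Steps 2 and 7 do not show specific commands. A minimal sketch of evacuating the node beforehand and moving work back afterward, assuming a node named phys-schost-1 and a resource group named oracle-rg (both are placeholders; clnode evacuate and clresourcegroup switch are standard Oracle Solaris Cluster commands):

# clnode evacuate phys-schost-1
[Perform Steps 3 through 6.]
# clresourcegroup switch -n phys-schost-1 oracle-rg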

Migrating the Global-Devices Namespace

You can create a namespace on a loopback file interface (lofi) device, rather than creating a global-devices namespace on a dedicated partition.


Note - ZFS for root file systems is supported, with one significant exception. If you use a dedicated partition of the boot disk for the global-devices file system, you must use only UFS as its file system. The global-devices namespace requires the proxy file system (PxFS) running on a UFS file system. However, a UFS file system for the global-devices namespace can coexist with a ZFS file system for the root (/) file system and other root file systems, for example, /var or /home. Alternatively, if you instead use a lofi device to host the global-devices namespace, there is no limitation on the use of ZFS for root file systems.
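
Before you choose a migration direction, it can help to confirm where the global-devices namespace currently resides on a node. A minimal check (a sketch; lofiadm with no arguments lists active lofi devices, and the grep shows this node's vfstab entry, if any):

# lofiadm
# grep "node@`clinfo -n`" /etc/vfstab

If the namespace is on a lofi device, lofiadm lists /.globaldevices; if it is on a dedicated partition, the /etc/vfstab entry shows the partition's block and raw devices.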


The following procedures describe how to move an existing global-devices namespace from a dedicated partition to a lofi device or vice versa:

How to Migrate the Global-Devices Namespace From a Dedicated Partition to a lofi Device

  1. Become superuser on the global-cluster voting node whose namespace location you want to change.
  2. Evacuate services off the node and reboot the node into noncluster mode.

    Do this to ensure that global devices are not served from this node while you perform this procedure. For instructions, see How to Boot a Node in Noncluster Mode.

  3. Ensure that a file named /.globaldevices does not exist on the node.

    If the file does exist, delete it.

  4. Create the lofi device.
    # mkfile 100m /.globaldevices
    # lofiadm -a /.globaldevices
    # LOFI_DEV=`lofiadm /.globaldevices`
    # newfs `echo ${LOFI_DEV} | sed -e 's/lofi/rlofi/g'` < /dev/null
    # lofiadm -d /.globaldevices
  5. In the /etc/vfstab file, comment out the global-devices namespace entry.

    This entry has a mount path that begins with /global/.devices/node@nodeID.

  6. Unmount the global-devices partition /global/.devices/node@nodeID.
  7. Disable and re-enable the globaldevices and scmountdev SMF services.
    # svcadm disable globaldevices
    # svcadm disable scmountdev
    # svcadm enable scmountdev
    # svcadm enable globaldevices

    A lofi device is now created on /.globaldevices and mounted as the global-devices file system.

  8. Repeat these steps on other nodes whose global-devices namespace you want to migrate from a partition to a lofi device.
  9. From one node, populate the global-devices namespace.
    # /usr/cluster/bin/cldevice populate

    On each node, verify that the command has completed processing before you perform any further actions on the cluster.

    # ps -ef | grep cldevice populate

    The global-devices namespace now resides on a lofi device.

  10. Migrate to the node any services that you want to run on that node.

How to Migrate the Global-Devices Namespace From a lofi Device to a Dedicated Partition

  1. Become superuser on the global-cluster voting node whose namespace location you want to change.
  2. Evacuate services off the node and reboot the node into noncluster mode.

    Do this to ensure that global devices are not served from this node while you perform this procedure. For instructions, see How to Boot a Node in Noncluster Mode.

  3. On a local disk of the node, create a new partition that meets the following requirements (see the sketch after this procedure for creating the file system):
    • Is at least 512 Mbytes in size

    • Uses the UFS file system

  4. Add an entry to the /etc/vfstab file for the new partition to be mounted as the global-devices file system.
    • Determine the current node's node ID.
      # /usr/sbin/clinfo -n
    • Create the new entry in the /etc/vfstab file, using the following format:
      blockdevice rawdevice /global/.devices/node@nodeID ufs 2 no global

    For example, if the partition that you choose to use is /dev/did/rdsk/d5s3, the new entry to add to the /etc/vfstab file would be as follows:

      /dev/did/dsk/d5s3 /dev/did/rdsk/d5s3 /global/.devices/node@3 ufs 2 no global

  5. Unmount the global-devices partition /global/.devices/node@nodeID.
  6. Remove the lofi device that is associated with the /.globaldevices file.
    # lofiadm -d /.globaldevices
  7. Delete the /.globaldevices file.
    # rm /.globaldevices
  8. Disable and re-enable the globaldevices and scmountdev SMF services.
    # svcadm disable globaldevices
    # svcadm disable scmountdev
    # svcadm enable scmountdev
    # svcadm enable globaldevices

    The partition is now mounted as the global-devices namespace file system.

  9. Repeat these steps on other nodes whose global-devices namespace you might want to migrate from a lofi device to a partition.
  10. Boot into cluster mode and populate the global-devices namespace.
    1. From one node in the cluster, populate the global-devices namespace.
      # /usr/cluster/bin/cldevice populate
    2. Ensure that the process completes on all nodes of the cluster before you perform any further action on any of the nodes.
      # ps -ef | grep cldevice populate

      The global-devices namespace now resides on the dedicated partition.

  11. Migrate to the node any services that you want to run on that node.
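
For Step 3, after you create the slice (for example, with the format utility), you typically create the UFS file system on it before you add the /etc/vfstab entry in Step 4. A minimal sketch, assuming the same hypothetical partition /dev/did/rdsk/d5s3 that appears in the Step 4 example:

# newfs /dev/did/rdsk/d5s3 < /dev/null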

Adding and Registering Device Groups

You can add and register device groups for Solaris Volume Manager, ZFS, or raw-disk.

How to Add and Register a Device Group (Solaris Volume Manager)

Use the metaset command to create a Solaris Volume Manager disk set and register the disk set as an Oracle Solaris Cluster device group. When you register the disk set, the name that you assigned to the disk set is automatically assigned to the device group.

The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.


Caution

Caution - If you are creating a device group on replicated devices, the name of the Oracle Solaris Cluster device group that you create (Solaris Volume Manager or raw-disk) must be the same as the name of the replicated device group.


  1. Become superuser or assume a role that provides solaris.cluster.modify RBAC authorization on one of the nodes connected to the disks where you are creating the disk set.
  2. Add the Solaris Volume Manager disk set and register it as a device group with Oracle Solaris Cluster.

    To create a multi-owner disk group, use the -M option.

    # metaset -s diskset -a -M -h nodelist
    -s diskset

    Specifies the disk set to be created.

    -a -h nodelist

    Adds the list of nodes that can master the disk set.

    -M

    Designates the disk group as multi-owner.


    Note - Running the metaset command to set up a Solaris Volume Manager device group on a cluster results in one secondary by default, regardless of the number of nodes that are included in that device group. You can change the desired number of secondary nodes by using the clsetup utility after the device group has been created. Refer to How to Set the Desired Number of Secondaries for a Device Group for more information about disk failover.


  3. If you are configuring a replicated device group, set the replication property for the device group.
    # cldevicegroup sync devicegroup
  4. Verify that the device group has been added.

    The device group name matches the disk set name that is specified with metaset.

    # cldevicegroup list
  5. List the DID mappings.
    # cldevice show | grep Device
    • Choose drives that are shared by the cluster nodes that will master or potentially master the disk set.

    • Use the full DID device name, which has the form /dev/did/rdsk/dN, when you add a drive to a disk set.

    In the following example, the entries for DID device /dev/did/rdsk/d3 indicate that the drive is shared by phys-schost-1 and phys-schost-2.

    === DID Device Instances ===                   
    DID Device Name:                                /dev/did/rdsk/d1
      Full Device Path:                               phys-schost-1:/dev/rdsk/c0t0d0
    DID Device Name:                                /dev/did/rdsk/d2
      Full Device Path:                               phys-schost-1:/dev/rdsk/c0t6d0
    DID Device Name:                                /dev/did/rdsk/d3
      Full Device Path:                               phys-schost-1:/dev/rdsk/c1t1d0
      Full Device Path:                               phys-schost-2:/dev/rdsk/c1t1d0
    …
  6. Add the drives to the disk set.

    Use the full DID path name.

    # metaset -s setname -a /dev/did/rdsk/dN
    -s setname

    Specifies the disk set name, which is the same as the device group name.

    -a

    Adds the drive to the disk set.


    Note - Do not use the lower-level device name (cNtXdY) when you add a drive to a disk set. Because the lower-level device name is a local name and not unique throughout the cluster, using this name might prevent the metaset from being able to switch over.


  7. Verify the status of the disk set and drives.
    # metaset -s setname

Example 5-2 Adding a Solaris Volume Manager Device Group

The following example shows the creation of the disk set and device group with the disk drives /dev/did/rdsk/d1 and /dev/did/rdsk/d2 and verifies that the device group has been created.

# metaset -s dg-schost-1 -a -h phys-schost-1

# cldevicegroup list
dg-schost-1 

# metaset -s dg-schost-1 -a /dev/did/rdsk/d1 /dev/did/rdsk/d2
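
To finish the example, you could verify the disk set and drives as described in Step 7, for example:

# metaset -s dg-schost-1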

How to Add and Register a Device Group (Raw-Disk)

Oracle Solaris Cluster software supports the use of raw-disk device groups in addition to other volume managers. When you initially configure Oracle Solaris Cluster, device groups are automatically configured for each raw device in the cluster. Use this procedure to reconfigure these automatically created device groups for use with Oracle Solaris Cluster software.

Create a new device group of the raw-disk type for the following reasons:


Caution

Caution - If you are creating a device group on replicated devices, the name of the device group that you create (Solaris Volume Manager or raw-disk) must be the same as the name of the replicated device group.


  1. Identify the devices that you want to use and unconfigure any predefined device groups.

    The following commands remove the predefined device groups for d7 and d8.

    paris-1# cldevicegroup disable dsk/d7 dsk/d8
    paris-1# cldevicegroup offline dsk/d7 dsk/d8
    paris-1# cldevicegroup delete dsk/d7 dsk/d8
  2. Create the new raw-disk device group, including the desired devices.

    The following command creates a global device group, rawdg, which contains d7 and d8.

    paris-1# cldevicegroup create -n phys-paris-1,phys-paris-2 -t rawdisk -d d7,d8 rawdg
    paris-1# /usr/cluster/lib/dcs/cldg show rawdg -d d7 rawdg
    paris-1# /usr/cluster/lib/dcs/cldg show rawdg -d d8 rawdg
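
To confirm the new group, you can use the standard cldevicegroup subcommands that are used elsewhere in this chapter, for example:

paris-1# cldevicegroup status rawdg
paris-1# cldevicegroup show rawdg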

How to Add and Register a Replicated Device Group (ZFS)

To replicate ZFS, you must create a named device group and list the disks that belong to the zpool. A device can belong to only one device group at a time, so if you already have an Oracle Solaris Cluster device group that contains the device, you must delete the group before you add that device to a new ZFS device group.

The name of the Oracle Solaris Cluster device group that you create (Solaris Volume Manager or raw-disk) must be the same as the name of the replicated device group.

  1. Delete the default device groups that correspond to the devices in the zpool.

    For example, if you have a zpool called mypool that contains two devices /dev/did/dsk/d2 and /dev/did/dsk/d13, you must delete the two default device groups called d2 and d13.

    # cldevicegroup offline dsk/d2 dsk/d13
    # cldevicegroup delete dsk/d2 dsk/d13
  2. Create a named device group with DIDs that correspond to those in the device group you removed in Step 1.
    # cldevicegroup create -n pnode1,pnode2 -d d2,d13 -t rawdisk mypool

    This action creates a device group called mypool (with the same name as the zpool), which manages the raw devices /dev/did/dsk/d2 and /dev/did/dsk/d13.

  3. Create a zpool that contains those devices.
    # zpool create mypool mirror /dev/did/dsk/d2 /dev/did/dsk/d13
  4. Create a resource group to manage migration of the replicated devices (in the device group) with only global zones in its nodelist.
    # clrg create -n pnode1,pnode2 migrate_truecopydg-rg
  5. Create a hasp-rs resource in the resource group you created in Step 4, setting the globaldevicepaths property to a device group of type raw-disk.

    You created this device group in Step 2.

    # clrs create -t HAStoragePlus -x globaldevicepaths=mypool -g \
    migrate_truecopydg-rg hasp2migrate_mypool
  6. Set a strong positive affinity (+++) in the RG_affinities property from this resource group to the resource group that you created in Step 4.
    # clrg create -n pnode1:zone-1,pnode2:zone-2 -p \
    RG_affinities=+++migrate_truecopydg-rg sybase-rg
  7. Create an HAStoragePlus resource (hasp-rs) for the zpool you created in Step 3 in the resource group that you created in either Step 4 or Step 6.

    Set the resource_dependencies property to the hasp-rs resource that you created in Step 5.

    # clrs create -g sybase-rg -t HAStoragePlus -p zpools=mypool \
    -p resource_dependencies=hasp2migrate_mypool \
    -p ZpoolsSearchDir=/dev/did/dsk hasp2import_mypool
  8. Use the new resource group name where a device group name is required.

Maintaining Device Groups

You can perform a variety of administrative tasks for your device groups.

How to Remove and Unregister a Device Group (Solaris Volume Manager)

Device groups are Solaris Volume Manager disk sets that have been registered with Oracle Solaris Cluster. To remove a Solaris Volume Manager device group, use the metaclear and metaset commands. These commands remove the device group with the same name and unregister the disk set as an Oracle Solaris Cluster device group.

Refer to the Solaris Volume Manager documentation for the steps to remove a disk set.
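
As a rough sketch only (the disk set name dg-schost-1, its drives, and its hosts are placeholders; see the Solaris Volume Manager documentation for the authoritative steps), the removal typically clears the volumes in the disk set, removes the drives, and then removes the hosts, which deletes the disk set and unregisters the device group:

# metaclear -s dg-schost-1 -a
# metaset -s dg-schost-1 -d /dev/did/rdsk/d1 /dev/did/rdsk/d2
# metaset -s dg-schost-1 -d -h phys-schost-1 phys-schost-2
# cldevicegroup list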

How to Remove a Node From All Device Groups

Use this procedure to remove a cluster node from all device groups that list the node in their lists of potential primaries.

The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.

  1. Become superuser or assume a role that provides solaris.cluster.modify RBAC authorization on the node that you are removing as a potential primary of all device groups.
  2. Determine the device group or groups of which the node to be removed is a member.

    Look for the node name in the Device group node list for each device group.

    # cldevicegroup list -v
  3. If any of the device groups identified in Step 2 are of the device group type SVM, perform the steps in How to Remove a Node From a Device Group (Solaris Volume Manager) for each device group of that type.
  4. Determine the raw-device disk groups of which the node to be removed is a member.
    # cldevicegroup list -v
  5. If any of the device groups listed in Step 4 are of the device group types Disk or Local_Disk, perform the steps in How to Remove a Node From a Raw-Disk Device Group for each of these device groups.
  6. Verify that the node has been removed from the potential primaries list of all device groups.

    The command returns nothing if the node is no longer listed as a potential primary of any device group.

    # cldevicegroup list -v nodename

How to Remove a Node From a Device Group (Solaris Volume Manager)

Use this procedure to remove a cluster node from the list of potential primaries of a Solaris Volume Manager device group. Repeat the metaset command for each device group from which you want to remove the node.


Caution

Caution - Do not run metaset -s setname -f -t on a cluster node that is booted outside the cluster if other nodes are active cluster members and at least one of them owns the disk set.


The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.

  1. Verify that the node is still a member of the device group and that the device group is a Solaris Volume Manager device group.

    Device group type SDS/SVM indicates a Solaris Volume Manager device group.

    phys-schost-1% cldevicegroup show devicegroup
  2. Determine which node is the current primary for the device group.
    # cldevicegroup status devicegroup
  3. Become superuser on the node that currently owns the device group that you want to modify.
  4. Delete the node's hostname from the device group.
    # metaset -s setname -d -h nodelist
    -s setname

    Specifies the device group name.

    -d

    Deletes from the device group the nodes identified with -h.

    -h nodelist

    Specifies the node name of the node or nodes that will be removed.


    Note - The update can take several minutes to complete.


    If the command fails, add the -f (force) option to the command.

    # metaset -s setname -d -f -h nodelist
  5. Repeat Step 4 for each device group from which the node is being removed as a potential primary.
  6. Verify that the node has been removed from the device group.

    The device group name matches the disk set name that is specified with metaset.

    phys-schost-1% cldevicegroup list -v devicegroup

Example 5-3 Removing a Node From a Device Group (Solaris Volume Manager)

The following example shows the removal of the hostname phys-schost-2 from a device group configuration. This example eliminates phys-schost-2 as a potential primary for the designated device group. Verify removal of the node by running the cldevicegroup show command. Check that the removed node is no longer displayed in the screen text.

[Determine the Solaris Volume Manager device group for the node:]
# cldevicegroup show dg-schost-1
=== Device Groups ===                          

Device Group Name:                    dg-schost-1
  Type:                                 SVM
  failback:                             no
  Node List:                            phys-schost-1, phys-schost-2
  preferenced:                          yes
  numsecondaries:                       1
  diskset name:                         dg-schost-1
[Determine which node is the current primary for the device group:]
# cldevicegroup status dg-schost-1
=== Cluster Device Groups ===

--- Device Group Status ---

Device Group Name    Primary         Secondary      Status
-----------------    -------         ---------      ------
dg-schost-1          phys-schost-1   phys-schost-2  Online
[Become superuser on the node that currently owns the device group.]
[Remove the host name from the device group:]
# metaset -s dg-schost-1 -d -h phys-schost-2
[Verify removal of the node:]
phys-schost-1% cldevicegroup list -v dg-schost-1
=== Cluster Device Groups ===

--- Device Group Status ---

Device Group Name    Primary         Secondary      Status
-----------------    -------         ---------      ------
dg-schost-1          phys-schost-1   -              Online

How to Remove a Node From a Raw-Disk Device Group

Use this procedure to remove a cluster node from the list of potential primaries of a raw-disk device group.

The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.

  1. Become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization on a node in the cluster other than the node to remove.
  2. Identify the device groups that are connected to the node being removed, and determine which are raw-disk device groups.
    # cldevicegroup show -n nodename -t rawdisk +
  3. Disable the localonly property of each Local_Disk raw-disk device group.
    # cldevicegroup set -p localonly=false devicegroup

    See the cldevicegroup(1CL) man page for more information about the localonly property.

  4. Verify that you have disabled the localonly property of all raw-disk device groups that are connected to the node being removed.

    The Disk device group type indicates that the localonly property is disabled for that raw-disk device group.

    # cldevicegroup show -n nodename -t rawdisk -v + 
  5. Remove the node from all raw-disk device groups that are identified in Step 2.

    You must complete this step for each raw-disk device group that is connected to the node being removed.

    # cldevicegroup remove-node -n nodename devicegroup

Example 5-4 Removing a Node From a Raw Device Group

This example shows how to remove a node (phys-schost-2) from a raw-disk device group. All commands are run from another node of the cluster (phys-schost-1).

[Identify the device groups connected to the node being removed, and determine which are raw-disk
    device groups:]
phys-schost-1# cldevicegroup show -n phys-schost-2 -t rawdisk -v +    
Device Group Name:                              dsk/d4
  Type:                                           Disk
  failback:                                       false
  Node List:                                      phys-schost-2
  preferenced:                                    false
  localonly:                                      false
  autogen:                                        true
  numsecondaries:                                 1
  device names:                                   phys-schost-2

Device Group Name:                              dsk/d1
  Type:                                           SVM
  failback:                                       false
  Node List:                                      pbrave1, pbrave2
  preferenced:                                    true
  localonly:                                      false
  autogen:                                        true
  numsecondaries:                                 1
  diskset name:                                   ms1
    (dsk/d4) Device group node list:  phys-schost-2
    (dsk/d2) Device group node list:  phys-schost-1, phys-schost-2
    (dsk/d1) Device group node list:  phys-schost-1, phys-schost-2
[Disable the localonly flag for each local disk on the node:]
phys-schost-1# cldevicegroup set -p localonly=false dsk/d4
[Verify that the localonly flag is disabled:]
phys-schost-1# cldevicegroup show -n phys-schost-2 -t rawdisk +   
 (dsk/d4) Device group type:          Disk
 (dsk/d8) Device group type:          Local_Disk
[Remove the node from all raw-disk device groups:]

phys-schost-1# cldevicegroup remove-node -n phys-schost-2 dsk/d4
phys-schost-1# cldevicegroup remove-node -n phys-schost-2 dsk/d2
phys-schost-1# cldevicegroup remove-node -n phys-schost-2 dsk/d1

How to Change Device Group Properties

The method for establishing the primary ownership of a device group is based on the setting of an ownership preference attribute called preferenced. If the attribute is not set, the primary owner of an otherwise unowned device group is the first node that attempts to access a disk in that group. However, if this attribute is set, you must specify the preferred order in which nodes attempt to establish ownership.

If you disable the preferenced attribute, then the failback attribute is also automatically disabled. However, if you attempt to enable or re-enable the preferenced attribute, you have the choice of enabling or disabling the failback attribute.

If the preferenced attribute is either enabled or re-enabled, you are required to reestablish the order of nodes in the primary ownership preference list.

This procedure uses the clsetup utility to set or unset the preferenced attribute and the failback attribute for Solaris Volume Manager device groups.
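
The clsetup session ultimately runs the cldevicegroup set command, as shown in Example 5-5. If you prefer the command line, a sketch of a direct invocation with placeholder node names is:

# cldevicegroup set -p preferenced=true -p failback=true \
-p nodelist=phys-schost-1,phys-schost-2 devicegroup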

Before You Begin

To perform this procedure, you need the name of the device group for which you are changing attribute values.

The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.

  1. Become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization on any node of the cluster.
  2. Start the clsetup utility.
    # clsetup

    The Main Menu is displayed.

  3. To work with device groups, type the number for the option for device groups and volumes.

    The Device Groups Menu is displayed.

  4. To change key properties of a device group, type the number for the option for changing key properties of a Solaris Volume Manager device group.

    The Change Key Properties Menu is displayed.

  5. To change a device group property, type the number for the option for changing the preferenced and/or failback properties.

    Follow the instructions to set the preferenced and failback options for a device group.

  6. Verify that the device group attributes have been changed.

    Look for the device group information displayed by the following command.

    # cldevicegroup show -v devicegroup 

Example 5-5 Changing Device Group Properties

The following example shows the cldevicegroup command generated by clsetup when it sets the attribute values for a device group (dg-schost-1).

# cldevicegroup set -p preferenced=true -p failback=true -p numsecondaries=1 \
-p nodelist=phys-schost-1,phys-schost-2 dg-schost-1
# cldevicegroup show dg-schost-1

=== Device Groups ===                          

Device Group Name:                        dg-schost-1
  Type:                                     SVM
  failback:                                 yes
  Node List:                                phys-schost-1, phys-schost-2
  preferenced:                              yes
  numsecondaries:                           1
  diskset names:                            dg-schost-1

How to Set the Desired Number of Secondaries for a Device Group

The numsecondaries property specifies the number of nodes within a device group that can master the group if the primary node fails. The default number of secondaries for device services is one. You can set the value to any integer between one and the number of operational nonprimary provider nodes in the device group.

This setting is an important factor in balancing cluster performance and availability. For example, increasing the desired number of secondaries increases the device group's opportunity to survive multiple failures that occur simultaneously within a cluster. Increasing the number of secondaries also decreases performance during normal operation. A smaller number of secondaries typically results in better performance but reduces availability. However, a larger number of secondaries does not always result in greater availability of the file system or device group in question. Refer to Chapter 3, Key Concepts for System Administrators and Application Developers, in Oracle Solaris Cluster Concepts Guide for more information.

If you change the numsecondaries property, secondary nodes are added or removed from the device group if the change causes a mismatch between the actual number of secondaries and the desired number.
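
As with the other key properties, clsetup generates a cldevicegroup set command for this change (see Example 5-6). A sketch of setting the property directly, using 2 only as an illustrative value:

# cldevicegroup set -p numsecondaries=2 devicegroup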

This procedure uses the clsetup utility to set the numsecondaries property for all types of device groups. Refer to cldevicegroup(1CL) for information about device group options when configuring any device group.

The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.

  1. Become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization on any node of the cluster.
  2. Start the clsetup utility.
    # clsetup

    The Main Menu is displayed.

  3. To work with device groups, select the Device Groups and Volumes menu item.

    The Device Groups Menu is displayed.

  4. To change key properties of a device group, select the Change Key Properties of a Device Group menu item.

    The Change Key Properties Menu is displayed.

  5. To change the desired number of secondaries, type the number for the option for changing the numsecondaries property.

    Follow the instructions and type the desired number of secondaries to be configured for the device group. The corresponding cldevicegroup command is then executed, a log is printed, and the utility returns to the previous menu.

  6. Validate the device group configuration.
    # cldevicegroup show dg-schost-1
    === Device Groups ===                          
    
    Device Group Name:                    dg-schost-1
      Type:                                 Local_Disk 
      failback:                             yes
      Node List:                            phys-schost-1, phys-schost-2, phys-schost-3
      preferenced:                          yes
      numsecondaries:                       1
      diskgroup names:                      dg-schost-1

    Note - Configuration changes include adding or removing volumes, as well as changing the group, owner, or permissions of existing volumes. Reregistration after such configuration changes ensures that the global namespace is in the correct state. See How to Update the Global-Devices Namespace.


  7. Verify that the device group attribute has been changed.

    Look for the device group information that is displayed by the following command.

    # cldevicegroup show -v devicegroup 

Example 5-6 Changing the Desired Number of Secondaries (Solaris Volume Manager)

The following example shows the cldevicegroup command that is generated by clsetup when it configures the desired number of secondaries for a device group (dg-schost-1). This example assumes that the disk group and volume were created previously.

# cldevicegroup set -p numsecondaries=1 dg-schost-1
# cldevicegroup show -v dg-schost-1

=== Device Groups ===                          

Device Group Name:                        dg-schost-1
  Type:                                     SVM
  failback:                                 yes
  Node List:                                phys-schost-1, phys-schost-2
  preferenced:                              yes
  numsecondaries:                           1
  diskset names:                            dg-schost-1

Example 5-7 Setting the Desired Number of Secondaries to the Default Value

The following example shows use of a null string value to configure the default number of secondaries. The device group will be configured to use the default value, even if the default value changes.

# cldevicegroup set -p numsecondaries= dg-schost-1
# cldevicegroup show -v dg-schost-1

=== Device Groups ===                          

Device Group Name:                        dg-schost-1
  Type:                                     SVM
  failback:                                 yes
  Node List:                                phys-schost-1, phys-schost-2, phys-schost-3
  preferenced:                              yes
  numsecondaries:                           1
  diskset names:                            dg-schost-1

How to List a Device Group Configuration

You do not need to be superuser to list the configuration. However, you do need solaris.cluster.read authorization.

The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.
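
The listing itself is a single cldevicegroup command run from any node; the following examples use the status and show subcommands:

# cldevicegroup status +
# cldevicegroup show devicegroup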

Example 5-8 Listing the Status of All Device Groups

# cldevicegroup status +

=== Cluster Device Groups ===

--- Device Group Status ---

Device Group Name    Primary         Secondary        Status
-----------------    -------         ---------        ------
dg-schost-1          phys-schost-2   phys-schost-1    Online
dg-schost-2          phys-schost-1   --               Offline
dg-schost-3          phys-schost-3   phys-schost-2    Online

Example 5-9 Listing the Configuration of a Particular Device Group

# cldevicegroup show dg-schost-1

=== Device Groups ===                          

Device Group Name:                              dg-schost-1
  Type:                                           SVM
  failback:                                       yes
  Node List:                                      phys-schost-2, phys-schost-3
  preferenced:                                    yes
  numsecondaries:                                 1
  diskset names:                                  dg-schost-1

How to Switch the Primary for a Device Group

This procedure can also be used to start (bring online) an inactive device group.

The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.

  1. Become superuser or assume a profile that provides solaris.cluster.modify RBAC authorization on any node of the cluster.
  2. Use cldevicegroup switch to switch the device group primary.
    # cldevicegroup switch -n nodename devicegroup 
    -n nodename

    Specifies the name of the node to switch to. This node becomes the new primary.

    devicegroup

    Specifies the device group to switch.

  3. Verify that the device group has been switched to the new primary.

    If the device group is properly registered, information for the new device group is displayed when you use the following command.

    # cldevicegroup status devicegroup

Example 5-10 Switching the Primary for a Device Group

The following example shows how to switch the primary for a device group and verify the change.

# cldevicegroup switch -n phys-schost-1 dg-schost-1

# cldevicegroup status dg-schost-1

=== Cluster Device Groups ===

--- Device Group Status ---

Device Group Name    Primary        Secondary       Status
-----------------    -------        ---------       ------
dg-schost-1          phys-schost-1   phys-schost-2  Online

How to Put a Device Group in Maintenance State

Putting a device group in maintenance state prevents that device group from automatically being brought online whenever one of its devices is accessed. You should put a device group in maintenance state when completing repair procedures that require that all I/O activity be quiesced until completion of the repair. Putting a device group in maintenance state also helps prevent data loss by ensuring that a device group is not brought online on one node while the disk set or disk group is being repaired on another node.

For instructions on how to restore a corrupted diskset, see Restoring a Corrupted Diskset.


Note - Before a device group can be placed in maintenance state, all access to its devices must be stopped, and all dependent file systems must be unmounted.


The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.

  1. Place the device group in maintenance state.
    1. If the device group is enabled, disable the device group.
      # cldevicegroup disable devicegroup
    2. Take the device group offline.
      # cldevicegroup offline devicegroup
  2. If the repair procedure being performed requires ownership of a disk set or disk group, manually import that disk set or disk group.

    For Solaris Volume Manager:

    # metaset -C take -f -s diskset

    Caution

    Caution - If you are taking ownership of a Solaris Volume Manager disk set, you must use the metaset -C take command when the device group is in maintenance state. Using metaset -t brings the device group online as part of taking ownership.


  3. Complete the repair procedure that you need to perform.
  4. Release ownership of the disk set or disk group.

    Caution

    Caution - Before taking the device group out of maintenance state, you must release ownership of the disk set or disk group. Failure to release ownership can result in data loss.


    • For Solaris Volume Manager:

      # metaset -C release -s diskset
  5. Bring the device group online.
    # cldevicegroup online devicegroup
    # cldevicegroup enable devicegroup

Example 5-11 Putting a Device Group in Maintenance State

This example shows how to put device group dg-schost-1 in maintenance state, and remove the device group from maintenance state.

[Place the device group in maintenance state.]
# cldevicegroup disable dg-schost-1
# cldevicegroup offline dg-schost-1 
[If needed, manually import the disk set or disk group.]
For Solaris Volume Manager:
  # metaset -C take -f -s dg-schost-1
  
[Complete all necessary repair procedures.]  
[Release ownership.]
For Solaris Volume Manager:
  # metaset -C release -s dg-schost-1
  
[Bring the device group online.]
# cldevicegroup online dg-schost-1
# cldevicegroup enable dg-schost-1