Oracle Solaris Cluster System Administration Guide
5. Administering Global Devices, Disk-Path Monitoring, and Cluster File Systems
Overview of Administering Global Devices and the Global Namespace
Global Device Permissions for Solaris Volume Manager
Dynamic Reconfiguration With Global Devices
Veritas Volume Manager Administration Considerations
Administering Storage-Based Replicated Devices
Administering Hitachi TrueCopy Replicated Devices
How to Configure a Hitachi TrueCopy Replication Group
How to Configure DID Devices for Replication Using Hitachi TrueCopy
How to Verify a Hitachi TrueCopy Replicated Global Device Group Configuration
Example: Configuring a TrueCopy Replication Group for Oracle Solaris Cluster
Administering EMC Symmetrix Remote Data Facility Replicated Devices
How to Configure an EMC SRDF Replication Group
How to Configure DID Devices for Replication Using EMC SRDF
How to Verify EMC SRDF Replicated Global Device Group Configuration
Example: Configuring an SRDF Replication Group for Oracle Solaris Cluster
Overview of Administering Cluster File Systems
Cluster File System Restrictions
How to Update the Global-Devices Namespace
How to Change the Size of a lofi Device That Is Used for the Global-Devices Namespace
Migrating the Global-Devices Namespace
How to Migrate the Global-Devices Namespace From a Dedicated Partition to a lofi Device
How to Migrate the Global-Devices Namespace From a lofi Device to a Dedicated Partition
Adding and Registering Device Groups
How to Add and Register a Device Group (Solaris Volume Manager)
How to Add and Register a Device Group (Raw-Disk)
How to Add and Register a Replicated Device Group (ZFS)
How to Create a New Disk Group When Initializing Disks (Veritas Volume Manager)
How to Remove and Unregister a Device Group (Solaris Volume Manager)
How to Remove a Node From All Device Groups
How to Remove a Node From a Device Group (Solaris Volume Manager)
How to Create a New Disk Group When Encapsulating Disks (Veritas Volume Manager)
How to Add a New Volume to an Existing Device Group (Veritas Volume Manager)
How to Convert an Existing Disk Group to a Device Group (Veritas Volume Manager)
How to Assign a New Minor Number to a Device Group (Veritas Volume Manager)
How to Register a Disk Group as a Device Group (Veritas Volume Manager)
How to Register Disk Group Configuration Changes (Veritas Volume Manager)
How to Convert a Local Disk Group to a Device Group (VxVM)
How to Convert a Device Group to a Local Disk Group (VxVM)
How to Remove a Volume From a Device Group (Veritas Volume Manager)
How to Remove and Unregister a Device Group (Veritas Volume Manager)
How to Add a Node to a Device Group (Veritas Volume Manager)
How to Remove a Node From a Device Group (Veritas Volume Manager)
How to Remove a Node From a Raw-Disk Device Group
How to Change Device Group Properties
How to Set the Desired Number of Secondaries for a Device Group
How to List a Device Group Configuration
Administering the SCSI Protocol Settings for Storage Devices
How to Display the Default Global SCSI Protocol Settings for All Storage Devices
How to Display the SCSI Protocol of a Single Storage Device
How to Change the Default Global Fencing Protocol Settings for All Storage Devices
How to Change the Fencing Protocol for a Single Storage Device
Administering Cluster File Systems
How to Add a Cluster File System
How to Remove a Cluster File System
How to Check Global Mounts in a Cluster
Administering Disk-Path Monitoring
How to Print Failed Disk Paths
How to Resolve a Disk-Path Status Error
How to Monitor Disk Paths From a File
How to Enable the Automatic Rebooting of a Node When All Monitored Shared-Disk Paths Fail
How to Disable the Automatic Rebooting of a Node When All Monitored Shared-Disk Paths Fail
As your cluster requirements change, you might need to add, remove, or modify the device groups on your cluster. Oracle Solaris Cluster provides an interactive interface called clsetup that you can use to make these changes. clsetup generates cluster commands. Generated commands are shown in the examples at the end of some procedures. The following table lists tasks for administering device groups and provides links to the appropriate procedures in this section.
Caution - Do not run metaset -s setname -f -t on a cluster node that is booted outside the cluster if other nodes are active cluster members and at least one of them owns the disk set.
Note - Oracle Solaris Cluster software automatically creates a raw-disk device group for each disk and tape device in the cluster. However, cluster device groups remain in an offline state until you access the groups as global devices.
Table 5-4 Task Map: Administering Device Groups
When adding a new global device, manually update the global-devices namespace by running the cldevice populate command.
Note - The cldevice populate command does not have any effect if the node that is running the command is not currently a cluster member. The command also has no effect if the /global/.devices/node@ nodeID file system is not mounted.
You can run this command on all nodes in the cluster at the same time.
# cldevice populate
The cldevice command calls itself remotely on all nodes, even when the command is run from just one node. To determine whether the cldevice populate command has completed processing, run the following command on each node of the cluster.
# ps -ef | grep cldevice populate
Example 5-21 Updating the Global-Devices Namespace
The following example shows the output generated by successfully running the cldevice populate command.
# devfsadm
# cldevice populate
Configuring the /dev/global directory (global devices)...
obtaining access to all attached disks
reservation program successfully exiting
# ps -ef | grep "cldevice populate"
If you use a lofi device for the global-devices namespace on one or more nodes of the global cluster, perform this procedure to change the size of the device.
Do this to ensure that global devices are not served from this node while you perform this procedure. For instructions, see How to Boot a Node in Noncluster Mode.
The global-devices file system mounts locally.
phys-schost# umount /global/.devices/node\@`clinfo -n` > /dev/null 2>&1

Ensure that the lofi device is detached:

phys-schost# lofiadm -d /.globaldevices

The command returns no output if the device is detached.
Note - If the file system is mounted by using the -m option, no entry is added to the mnttab file. The umount command might report a warning similar to the following:
umount: warning: /global/.devices/node@2 not in mnttab ====>>>> not mounted
This warning is safe to ignore.
The following example shows the creation of a new /.globaldevices file that is 200 Mbytes in size.
phys-schost# rm /.globaldevices
phys-schost# mkfile 200M /.globaldevices
phys-schost# lofiadm -a /.globaldevices
phys-schost# newfs `lofiadm /.globaldevices` < /dev/null
The global devices are now populated on the new file system.
phys-schost# reboot
You can create a namespace on a loopback file interface (lofi) device, rather than creating a global-devices namespace on a dedicated partition. This feature is useful if you are installing Oracle Solaris Cluster software on systems that are pre-installed with the Oracle Solaris 10 OS.
Note - ZFS for root file systems is supported, with one significant exception. If you use a dedicated partition of the boot disk for the global-devices file system, you must use only UFS as its file system. The global-devices namespace requires the proxy file system (PxFS) running on a UFS file system. However, a UFS file system for the global-devices namespace can coexist with a ZFS file system for the root (/) file system and other root file systems, for example, /var or /home. Alternatively, if you instead use a lofi device to host the global-devices namespace, there is no limitation on the use of ZFS for root file systems.
The following procedures describe how to move an existing global-devices namespace from a dedicated partition to a lofi device, or from a lofi device back to a dedicated partition:
How to Migrate the Global-Devices Namespace From a Dedicated Partition to a lofi Device
How to Migrate the Global-Devices Namespace From a lofi Device to a Dedicated Partition
Do this to ensure that global devices are not served from this node while you perform this procedure. For instructions, see How to Boot a Node in Noncluster Mode.
# mkfile 100m /.globaldevices
# lofiadm -a /.globaldevices
# LOFI_DEV=`lofiadm /.globaldevices`
# newfs `echo ${LOFI_DEV} | sed -e 's/lofi/rlofi/g'` < /dev/null
# lofiadm -d /.globaldevices
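The sed substitution in the newfs step converts the block lofi device path that lofiadm returns into the matching raw (character) device path that newfs expects. A minimal sketch, assuming lofiadm returned /dev/lofi/1:

```shell
# Assumed value: lofiadm -a /.globaldevices returned /dev/lofi/1.
LOFI_DEV=/dev/lofi/1
RAW_DEV=`echo ${LOFI_DEV} | sed -e 's/lofi/rlofi/g'`
echo ${RAW_DEV}    # /dev/rlofi/1
```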
# svcadm disable globaldevices
# svcadm disable scmountdev
# svcadm enable scmountdev
# svcadm enable globaldevices
A lofi device is now created on /.globaldevices and mounted as the global-devices file system.
# /usr/cluster/bin/cldevice populate
On each node, verify that the command has completed processing before you perform any further actions on the cluster.
# ps -ef | grep "cldevice populate"
The global-devices namespace now resides on a lofi device.
Do this to ensure that global devices are not served from this node while you perform this procedure. For instructions, see How to Boot a Node in Noncluster Mode.
Create a partition that meets the following requirements:
Is at least 512 Mbytes in size
Uses the UFS file system
Determine the node ID of the node:
# /usr/sbin/clinfo -n
blockdevice rawdevice /global/.devices/node@nodeID ufs 2 no global
For example, if the partition that you choose to use is /dev/did/rdsk/d5s3, the new entry to add to the /etc/vfstab file would then be as follows:
/dev/did/dsk/d5s3 /dev/did/rdsk/d5s3 /global/.devices/node@3 ufs 2 no global
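As an illustration only, the vfstab entry can be generated from the chosen partition and node ID. The device name d5s3 and node ID 3 below are the hypothetical values from the example above (the node ID comes from /usr/sbin/clinfo -n):

```shell
# Hypothetical values from the example: partition d5s3, node ID 3.
DEV=d5s3
NODEID=3
printf '/dev/did/dsk/%s /dev/did/rdsk/%s /global/.devices/node@%s ufs 2 no global\n' \
    "$DEV" "$DEV" "$NODEID"
# Append the printed line to /etc/vfstab on the node.
```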
# lofiadm -d /.globaldevices
# rm /.globaldevices
# svcadm disable globaldevices
# svcadm disable scmountdev
# svcadm enable scmountdev
# svcadm enable globaldevices
The partition is now mounted as the global-devices namespace file system.
# /usr/cluster/bin/cldevice populate
Ensure that the process completes on all nodes of the cluster before you perform any further action on any of the nodes.
# ps -ef | grep "cldevice populate"
The global-devices namespace now resides on the dedicated partition.
You can add and register device groups for Solaris Volume Manager, ZFS, Veritas Volume Manager, or raw-disk.
Use the metaset command to create a Solaris Volume Manager disk set and register the disk set as an Oracle Solaris Cluster device group. When you register the disk set, the name that you assigned to the disk set is automatically assigned to the device group.
The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.
Caution - The name of the Oracle Solaris Cluster device group that you create (Solaris Volume Manager, Veritas Volume Manager, or raw-disk) must be the same as the name of the replicated device group. |
# metaset -s diskset -a -M -h nodelist
-s diskset
Specifies the disk set to be created.
-a -h nodelist
Adds the list of nodes that can master the disk set.
-M
Designates the disk set as multi-owner.
Note - Running the metaset command to set up a Solaris Volume Manager device group on a cluster results in one secondary by default, regardless of the number of nodes that are included in that device group. You can change the desired number of secondary nodes by using the clsetup utility after the device group has been created. Refer to How to Set the Desired Number of Secondaries for a Device Group for more information about disk failover.
# cldevicegroup sync devicegroup
The device group name matches the disk set name that is specified with metaset.
# cldevicegroup list
# cldevice show | grep Device
Choose drives that are shared by the cluster nodes that will master or potentially master the disk set.
Use the full DID device name, which has the form /dev/did/rdsk/dN, when you add a drive to a disk set.
In the following example, the entries for DID device /dev/did/rdsk/d3 indicate that the drive is shared by phys-schost-1 and phys-schost-2.
=== DID Device Instances ===
DID Device Name:          /dev/did/rdsk/d1
  Full Device Path:         phys-schost-1:/dev/rdsk/c0t0d0
DID Device Name:          /dev/did/rdsk/d2
  Full Device Path:         phys-schost-1:/dev/rdsk/c0t6d0
DID Device Name:          /dev/did/rdsk/d3
  Full Device Path:         phys-schost-1:/dev/rdsk/c1t1d0
  Full Device Path:         phys-schost-2:/dev/rdsk/c1t1d0
…
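Shared drives can be picked out of such output mechanically, because they list more than one Full Device Path. The following awk sketch runs against the excerpt above inline; in practice you would pipe `cldevice show | grep Device` into the same program:

```shell
# Print DID devices that are visible from more than one node.
awk '
    /DID Device Name:/  { if (paths > 1) print dev; dev = $NF; paths = 0 }
    /Full Device Path:/ { paths++ }
    END                 { if (paths > 1) print dev }
' <<'EOF'
DID Device Name:      /dev/did/rdsk/d1
  Full Device Path:   phys-schost-1:/dev/rdsk/c0t0d0
DID Device Name:      /dev/did/rdsk/d2
  Full Device Path:   phys-schost-1:/dev/rdsk/c0t6d0
DID Device Name:      /dev/did/rdsk/d3
  Full Device Path:   phys-schost-1:/dev/rdsk/c1t1d0
  Full Device Path:   phys-schost-2:/dev/rdsk/c1t1d0
EOF
```

For the sample data, only /dev/did/rdsk/d3 is printed, matching the shared drive identified in the text.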
Use the full DID path name.
# metaset -s setname -a /dev/did/rdsk/dN
-s setname
Specifies the disk set name, which is the same as the device group name.
-a
Adds the drive to the disk set.
Note - Do not use the lower-level device name (cNtXdY) when you add a drive to a disk set. Because the lower-level device name is a local name and not unique throughout the cluster, using this name might prevent the metaset from being able to switch over.
# metaset -s setname
Example 5-22 Adding a Solaris Volume Manager Device Group
The following example shows the creation of the disk set and device group with the disk drives /dev/did/rdsk/d1 and /dev/did/rdsk/d2 and verifies that the device group has been created.
# metaset -s dg-schost-1 -a -h phys-schost-1
# cldevicegroup list
dg-schost-1
# metaset -s dg-schost-1 -a /dev/did/rdsk/d1 /dev/did/rdsk/d2
Oracle Solaris Cluster software supports the use of raw-disk device groups in addition to other volume managers. When you initially configure Oracle Solaris Cluster, device groups are automatically configured for each raw device in the cluster. Use this procedure to reconfigure these automatically created device groups for use with Oracle Solaris Cluster software.
Create a new device group of the raw-disk type for the following reasons:
You want to add more than one DID to the device group
You need to change the name of the device group
You want to create a list of device groups without using the -v option of the cldg command
Caution - If you are creating a device group on replicated devices, the name of the device group that you create (Solaris Volume Manager, Veritas Volume Manager, or raw-disk) must be the same as the name of the replicated device group. |
The following commands remove the predefined device groups for d7 and d8.
paris-1# cldevicegroup disable dsk/d7 dsk/d8
paris-1# cldevicegroup offline dsk/d7 dsk/d8
paris-1# cldevicegroup delete dsk/d7 dsk/d8
The following command creates a global device group, rawdg, which contains d7 and d8.
paris-1# cldevicegroup create -n phys-paris-1,phys-paris-2 -t rawdisk -d d7,d8 rawdg
paris-1# /usr/cluster/lib/dcs/cldg show rawdg -d d7 rawdg
paris-1# /usr/cluster/lib/dcs/cldg show rawdg -d d8 rawdg
To replicate ZFS, you must create a named device group and list the disks that belong to the zpool. A device can belong to only one device group at a time, so if you already have an Oracle Solaris Cluster device group that contains the device, you must delete the group before you add that device to a new ZFS device group.
The name of the Oracle Solaris Cluster device group that you create (Solaris Volume Manager, Veritas Volume Manager, or raw-disk) must be the same as the name of the replicated device group.
Caution - Full support for ZFS with third-party data-replication technologies is pending. See the latest Oracle Solaris Cluster Release Notes for updates on ZFS support. |
For example, if you have a zpool called mypool that contains two devices /dev/did/dsk/d2 and /dev/did/dsk/d13, you must delete the two default device groups called d2 and d13.
# cldevicegroup offline dsk/d2 dsk/d13
# cldevicegroup remove dsk/d2 dsk/d13
# cldevicegroup create -d d2,d13 -t rawdisk mypool
This action creates a device group called mypool (with the same name as the zpool), which manages the raw devices /dev/did/dsk/d2 and /dev/did/dsk/d13.
# zpool create mypool mirror /dev/did/dsk/d2 /dev/did/dsk/d13
# clrg create -n pnode1,pnode2 migrate_truecopydg-rg
# clrs create -t HAStoragePlus -x globaldevicepaths=mypool -g \ migrate_truecopydg-rg hasp2migrate_mypool
# clrg create -n pnode1:zone-1,pnode2:zone-2 -p \ RG_affinities=+++migrate_truecopydg-rg sybase-rg
# clrs create -g sybase-rg -t HAStoragePlus -p zpools=mypool \ -p resource_dependencies=hasp2migrate_mypool \ -p ZpoolsSearchDir=/dev/did/dsk hasp2import_mypool
Note - This procedure is only for initializing disks. If you are encapsulating disks, use the procedure How to Create a New Disk Group When Encapsulating Disks (Veritas Volume Manager).
After adding the VxVM disk group, you need to register the device group.
If you use VxVM to set up shared disk groups for Oracle RAC, use the cluster functionality of VxVM as described in the Veritas Volume Manager Administrator's Reference Guide.
Use your preferred method to create the disk group and volume.
Note - If you are setting up a mirrored volume, use Dirty Region Logging (DRL) to decrease volume recovery time after a node failure. However, DRL might decrease I/O throughput.
See the Veritas Volume Manager documentation for the procedures to complete this step.
See How to Register a Disk Group as a Device Group (Veritas Volume Manager).
Do not register the Oracle RAC shared disk groups with the cluster framework.
You can perform a variety of administrative tasks for your device groups.
Device groups are Solaris Volume Manager disk sets that have been registered with Oracle Solaris Cluster. To remove a Solaris Volume Manager device group, use the metaclear and metaset commands. These commands delete the disk set and unregister the device group of the same name from Oracle Solaris Cluster.
Refer to the Solaris Volume Manager documentation for the steps to remove a disk set.
Use this procedure to remove a cluster node from all device groups that list the node in their lists of potential primaries.
The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.
Look for the node name in the Device group node list for each device group.
# cldevicegroup list -v
The command returns nothing if the node is no longer listed as a potential primary of any device group.
# cldevicegroup list -v nodename
Use this procedure to remove a cluster node from the list of potential primaries of a Solaris Volume Manager device group. Repeat the metaset command for each device group from which you want to remove the node.
Caution - Do not run metaset -s setname -f -t on a cluster node that is booted outside the cluster if other nodes are active cluster members and at least one of them owns the disk set.
The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.
Device group type SDS/SVM indicates a Solaris Volume Manager device group.
phys-schost-1% cldevicegroup show devicegroup
# cldevicegroup status devicegroup
# metaset -s setname -d -h nodelist
-s setname
Specifies the device group name.
-d
Deletes from the device group the nodes identified with -h.
-h nodelist
Specifies the node name of the node or nodes that will be removed.
Note - The update can take several minutes to complete.
If the command fails, add the -f (force) option to the command.
# metaset -s setname -d -f -h nodelist
The device group name matches the disk set name that is specified with metaset.
phys-schost-1% cldevicegroup list -v devicegroup
Example 5-23 Removing a Node From a Device Group (Solaris Volume Manager)
The following example shows the removal of the hostname phys-schost-2 from a device group configuration. This example eliminates phys-schost-2 as a potential primary for the designated device group. Verify removal of the node by running the cldevicegroup show command. Check that the removed node is no longer displayed in the screen text.
[Determine the Solaris Volume Manager device group for the node:]
# cldevicegroup show dg-schost-1

=== Device Groups ===

Device Group Name:                       dg-schost-1
  Type:                                    SVM
  failback:                                no
  Node List:                               phys-schost-1, phys-schost-2
  preferenced:                             yes
  numsecondaries:                          1
  diskset name:                            dg-schost-1

[Determine which node is the current primary for the device group:]
# cldevicegroup status dg-schost-1

=== Cluster Device Groups ===

--- Device Group Status ---

Device Group Name    Primary          Secondary        Status
-----------------    -------          ---------        ------
dg-schost-1          phys-schost-1    phys-schost-2    Online

[Become superuser on the node that currently owns the device group.]
[Remove the host name from the device group:]
# metaset -s dg-schost-1 -d -h phys-schost-2
[Verify removal of the node:]
phys-schost-1% cldevicegroup list -v dg-schost-1

=== Cluster Device Groups ===

--- Device Group Status ---

Device Group Name    Primary          Secondary    Status
-----------------    -------          ---------    ------
dg-schost-1          phys-schost-1    -            Online
Note - This procedure is only for encapsulating disks. If you are initializing disks, use the procedure How to Create a New Disk Group When Initializing Disks (Veritas Volume Manager).
You can convert nonroot disks to Oracle Solaris Cluster device groups by encapsulating the disks as VxVM disk groups, then registering the disk groups as Oracle Solaris Cluster device groups.
Disk encapsulation is supported only during initial creation of a VxVM disk group. After a VxVM disk group is created and registered as an Oracle Solaris Cluster device group, only disks that can be initialized should be added to the disk group.
If you use VxVM to set up shared disk groups for Oracle RAC, use the cluster functionality of VxVM as described in the Veritas Volume Manager Administrator's Reference Guide.
The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.
Set back to yes after the disk is encapsulated and registered as an Oracle Solaris Cluster device group.
Use vxdiskadm menus or the graphical user interface to encapsulate the disks. VxVM requires two free partitions as well as unassigned cylinders at the beginning or the end of the disk. Slice two must also be set to the entire disk. See the vxdiskadm man page for more information.
The clnode evacuate command switches all resource groups and device groups, including those on all non-voting nodes of the global cluster, from the specified node to the next-preferred node. Use the shutdown command to shut down and restart the node.
# clnode evacuate node[,...]
# shutdown -g0 -y -i6
If the resource groups and device groups were initially configured to fail back to the primary node, this step is not necessary.
# cldevicegroup switch -n node devicegroup
# clresourcegroup switch -z zone -n node resourcegroup
node
The name of the node.
zone
The name of the non-voting node, zone, that can master the resource group. Specify zone only if you specified a non-voting node when you created the resource group.
See How to Register a Disk Group as a Device Group (Veritas Volume Manager).
Do not register the Oracle RAC shared disk groups with the cluster framework.
When you add a new volume to an existing VxVM device group, perform the procedure from the primary node of the online device group.
Note - After adding the volume, you need to register the configuration change by using the procedure How to Register Disk Group Configuration Changes (Veritas Volume Manager).
The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.
# cldevicegroup status
# cldevicegroup switch -n nodename devicegroup
Specifies the name of the node to which to switch the device group. This node becomes the new primary.
Specifies the device group to switch.
Refer to your Veritas Volume Manager documentation for the procedure used to create the VxVM volume.
# cldevicegroup sync
See How to Register Disk Group Configuration Changes (Veritas Volume Manager).
You can convert an existing VxVM disk group to an Oracle Solaris Cluster device group by importing the disk group onto the current node, then registering the disk group as an Oracle Solaris Cluster device group.
# vxdg import diskgroup
See How to Register a Disk Group as a Device Group (Veritas Volume Manager).
If device group registration fails because of a minor number conflict with another disk group, you must assign the new disk group a new, unused minor number. After assigning the new minor number, rerun the procedure to register the disk group as an Oracle Solaris Cluster device group.
# ls -l /global/.devices/node@nodeid/dev/vx/dsk/*
# vxdg reminor diskgroup base-minor-number
See How to Register a Disk Group as a Device Group (Veritas Volume Manager).
Example 5-24 How to Assign a New Minor Number to a Device Group
This example uses the minor numbers 16000-16002 and 4000-4001. The vxdg reminor command is used to assign the base minor number 5000 to the new device group.
# ls -l /global/.devices/node@nodeid/dev/vx/dsk/*
/global/.devices/node@nodeid/dev/vx/dsk/dg1
brw------- 1 root root 56,16000 Oct 7 11:32 dg1v1
brw------- 1 root root 56,16001 Oct 7 11:32 dg1v2
brw------- 1 root root 56,16002 Oct 7 11:32 dg1v3
/global/.devices/node@nodeid/dev/vx/dsk/dg2
brw------- 1 root root 56,4000 Oct 7 11:32 dg2v1
brw------- 1 root root 56,4001 Oct 7 11:32 dg2v2
# vxdg reminor dg3 5000
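Before running vxdg reminor, you can check a candidate base minor number against the minors already in use by parsing the ls -l listing. This is only a sketch: it assumes the major,minor pair appears in the size column of the listing, and it reserves 1000 minors per group as a working assumption, not a documented limit:

```shell
# Check whether base minor 5000 collides with any existing VxVM volume
# minor numbers parsed from ls -l listings like the one above.
# Assumption: each device group may use up to 1000 minors from its base.
BASE=5000
ls -l /global/.devices/node@*/dev/vx/dsk/* 2>/dev/null |
awk -v base=$BASE '
    $1 ~ /^b/ {                      # block device entries only
        split($5 $6, a, ",")         # handles "56,16000" and "56, 16000"
        minor = a[2] + 0
        if (minor >= base && minor < base + 1000) conflict = 1
    }
    END { print (conflict ? "conflict" : "ok") }'
```

If the script prints "conflict", pick a different base minor number before reminoring.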
This procedure uses the clsetup utility to register the associated VxVM disk group as an Oracle Solaris Cluster device group.
Note - After a device group has been registered with the cluster, never import or export a VxVM disk group by using VxVM commands. If you make a change to the VxVM disk group or volume, follow the procedure How to Register Disk Group Configuration Changes (Veritas Volume Manager) to register the device group configuration changes. This procedure ensures that the global namespace is in the correct state.
The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.
Before You Begin
Ensure that the following prerequisites have been completed prior to registering a VxVM device group:
Superuser privilege on a node in the cluster.
The name of the VxVM disk group to be registered as a device group.
A preferred order of nodes to master the device group.
A desired number of secondary nodes for the device group.
When you define the preference order, you also specify whether the device group should be switched back to the most preferred node if that node fails and later returns to the cluster.
See cldevicegroup(1CL) for more information about node preference and failback options.
Nonprimary cluster nodes (spares) transition to secondary according to the node preference order. The default number of secondaries for a device group is normally set to one. This default setting minimizes performance degradation that is caused by primary checkpointing of multiple secondary nodes during normal operation. For example, in a four-node cluster, the default behavior configures one primary, one secondary, and two spare nodes. See also How to Set the Desired Number of Secondaries for a Device Group.
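The four-node arithmetic above can be made explicit; this toy sketch just restates the default split described in the text (values are illustrative, not queried from a cluster):

```shell
# Default split for a 4-node cluster with numsecondaries=1:
# one primary, numsecondaries secondaries, and the rest as spares.
NODES=4
NUMSECONDARIES=1
SPARES=$((NODES - 1 - NUMSECONDARIES))
echo "primary=1 secondaries=$NUMSECONDARIES spares=$SPARES"
# prints: primary=1 secondaries=1 spares=2
```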
# clsetup
The Main Menu is displayed.
The Device Groups Menu is displayed.
Follow the instructions and type the name of the VxVM disk group to be registered as an Oracle Solaris Cluster device group.
If this device group is replicated by using storage-based replication, this name must match the replication group name.
If you use VxVM to set up shared disk groups for Oracle Parallel Server/Oracle RAC, you do not register the shared disk groups with the cluster framework. Use the cluster functionality of VxVM as described in the Veritas Volume Manager Administrator's Reference Guide.
cldevicegroup: Failed to add device group - in use
To reminor the device group, use the procedure How to Assign a New Minor Number to a Device Group (Veritas Volume Manager). This procedure enables you to assign a new minor number that does not conflict with a minor number that an existing device group uses.
# cldevicegroup sync devicegroup
If the device group is properly registered, information for the new device group is displayed when you use the following command.
# cldevicegroup status devicegroup
Note - If you change any configuration information for a VxVM disk group or volume that is registered with the cluster, you must synchronize the device group by using clsetup. Such configuration changes include adding or removing volumes, as well as changing the group, owner, or permissions of existing volumes. Reregistration after configuration changes ensures that the global namespace is in the correct state. See How to Update the Global-Devices Namespace.
Example 5-25 Registering a Veritas Volume Manager Device Group
The following example shows the cldevicegroup command generated by clsetup when it registers a VxVM device group (dg1), and the verification step. This example assumes that the VxVM disk group and volume were created previously.
# clsetup
# cldevicegroup create -t vxvm -n phys-schost-1,phys-schost-2 -p failback=true dg1
# cldevicegroup status dg1

=== Cluster Device Groups ===

--- Device Group Status ---

Device Group Name    Primary          Secondary        Status
-----------------    -------          ---------        ------
dg1                  phys-schost-1    phys-schost-2    Online
See Also
To create a cluster file system on the VxVM device group, see How to Add a Cluster File System.
If problems occur with the minor number, see How to Assign a New Minor Number to a Device Group (Veritas Volume Manager).
When you change any configuration information for a VxVM disk group or volume, you need to register the configuration changes for the Oracle Solaris Cluster device group. Registration ensures that the global namespace is in the correct state.
The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.
# clsetup
The Main Menu is displayed.
The Device Groups Menu is displayed.
Follow the instructions and type the name of the VxVM disk group that has changed configuration.
Example 5-26 Registering Veritas Volume Manager Disk Group Configuration Changes
The following example shows the cldevicegroup command generated by clsetup when a changed VxVM device group (dg1) is registered. This example assumes that the VxVM disk group and volume were created previously.
# clsetup
cldevicegroup sync dg1
Perform this procedure to change a local VxVM disk group to a globally accessible VxVM device group.
# clsetup
phys-schost# cldevicegroup show
Perform this procedure to change a VxVM device group to a local VxVM disk group that is not managed by Oracle Solaris Cluster software. The local disk group can have more than one node in its node list, but it can be mastered by only one node at a time.
phys-schost# cldevicegroup offline devicegroup
phys-schost# clsetup
phys-schost# cldevicegroup status
Command output should no longer show the device group that you unregistered.
phys-schost# vxdg import diskgroup
phys-schost# clsetup
phys-schost# vxdg list diskgroup
Note - After removing the volume from the device group, you must register the configuration changes to the device group by using the procedure How to Register Disk Group Configuration Changes (Veritas Volume Manager).
The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.
# cldevicegroup status devicegroup
# cldevicegroup online devicegroup
# vxedit -g diskgroup -rf rm volume
Specifies the VxVM disk group that contains the volume.
Removes the specified volume. The -r option makes the operation recursive. The -f option is required to remove an enabled volume.
See How to Register Disk Group Configuration Changes (Veritas Volume Manager).
Removing an Oracle Solaris Cluster device group causes the corresponding VxVM disk group to be exported, not destroyed. However, even though the VxVM disk group still exists, it cannot be used in the cluster until it is reregistered.
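As a sketch of how an exported disk group might later be brought back into the cluster, assuming an illustrative disk group name (dg1) and node names (phys-schost-1 and phys-schost-2), the disk group would be imported and then reregistered as a device group:

# vxdg import dg1
# cldevicegroup create -t vxvm -n phys-schost-1,phys-schost-2 dg1
# cldevicegroup status dg1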
This procedure uses the clsetup utility to remove a VxVM disk group and unregister it as an Oracle Solaris Cluster device group.
The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.
# cldevicegroup offline devicegroup
# clsetup
The Main Menu is displayed.
The Device Groups Menu is displayed.
Follow the instructions and type the name of the VxVM disk group to be unregistered.
Example 5-27 Removing and Unregistering a Veritas Volume Manager Device Group
The following example shows the VxVM device group dg1 taken offline, and the cldevicegroup command generated by clsetup when it removes and unregisters the device group.
# cldevicegroup offline dg1
# clsetup

cldevicegroup delete dg1
This procedure adds a node to a device group using the clsetup utility.
The prerequisites to add a node to a VxVM device group are:
Superuser privilege on a node in the cluster
The name of the VxVM device group to which the node will be added
The name or node ID of the nodes to add
The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.
# clsetup
The Main Menu is displayed.
The Device Groups Menu is displayed.
Follow the instructions and type the device group and node names.
Look for the device group information for the new node displayed by the following command.
# cldevicegroup show devicegroup
Example 5-28 Adding a Node to a Veritas Volume Manager Device Group
The following example shows the cldevicegroup command generated by clsetup when it adds a node (phys-schost-3) to a VxVM device group (dg1), and the verification step.
# clsetup

cldevicegroup add-node -n phys-schost-3 dg1

# cldevicegroup show dg1

=== Device Groups ===

Device Group Name:                dg1
  Type:                           VxVM
  failback:                       yes
  Node List:                      phys-schost-1, phys-schost-3
  preferenced:                    no
  numsecondaries:                 1
  diskgroup names:                dg1
Use this procedure to remove a cluster node from the list of potential primaries of a Veritas Volume Manager (VxVM) device group (disk group).
The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.
Device group type VxVM indicates a VxVM device group.
phys-schost-1% cldevicegroup show devicegroup
# clsetup
The Main Menu is displayed.
Follow the prompts to remove the cluster node from the device group. You are asked for information about the following:
VxVM device group
Node name
# cldevicegroup show devicegroup
Example 5-29 Removing a Node From a Device Group (VxVM)
This example shows the removal of the node named phys-schost-1 from the dg1 VxVM device group.
[Determine the VxVM device group for the node:]
# cldevicegroup show dg1

=== Device Groups ===

Device Group Name:                dg1
  Type:                           VXVM
  failback:                       no
  Node List:                      phys-schost-1, phys-schost-2
  preferenced:                    no
  numsecondaries:                 1
  diskgroup names:                dg1

[Become superuser and start the clsetup utility:]
# clsetup

Select Device groups and volumes > Remove a node from a VxVM device group.

Answer the questions when prompted. You will need the following information.

  Name:                     Example:
  VxVM device group name    dg1
  node names                phys-schost-1

[Verify that the cldevicegroup command executed properly:]

cldevicegroup remove-node -n phys-schost-1 dg1
    Command completed successfully.

Dismiss the clsetup Device Groups Menu and Main Menu.

[Verify that the node was removed:]
# cldevicegroup show dg1

=== Device Groups ===

Device Group Name:                dg1
  Type:                           VXVM
  failback:                       no
  Node List:                      phys-schost-2
  preferenced:                    no
  numsecondaries:                 1
  device names:                   dg1
Use this procedure to remove a cluster node from the list of potential primaries of a raw-disk device group.
The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.
# cldevicegroup show -n nodename -t rawdisk +
# cldevicegroup set -p localonly=false devicegroup
See the cldevicegroup(1CL) man page for more information about the localonly property.
The Disk device group type indicates that the localonly property is disabled for that raw-disk device group.
# cldevicegroup show -n nodename -t rawdisk -v +
You must complete this step for each raw-disk device group that is connected to the node being removed.
# cldevicegroup remove-node -n nodename devicegroup
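Because this command must be repeated for each raw-disk device group connected to the node, a shell loop can shorten the task. The following sketch assumes illustrative device group names (dsk/d1, dsk/d2, and dsk/d4); substitute the groups identified earlier for your node:

# for dg in dsk/d1 dsk/d2 dsk/d4 ; do
>    cldevicegroup remove-node -n nodename $dg
> done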
Example 5-30 Removing a Node From a Raw Device Group
This example shows how to remove a node (phys-schost-2) from a raw-disk device group. All commands are run from another node of the cluster (phys-schost-1).
[Identify the device groups connected to the node being removed,
and determine which are raw-disk device groups:]
phys-schost-1# cldevicegroup show -n phys-schost-2 -t rawdisk -v +

Device Group Name:                dsk/d4
  Type:                           Disk
  failback:                       false
  Node List:                      phys-schost-2
  preferenced:                    false
  localonly:                      false
  autogen:                        true
  numsecondaries:                 1
  device names:                   phys-schost-2

Device Group Name:                dsk/d2
  Type:                           VxVM
  failback:                       true
  Node List:                      pbrave2
  preferenced:                    false
  localonly:                      false
  autogen:                        true
  numsecondaries:                 1
  diskgroup name:                 vxdg1

Device Group Name:                dsk/d1
  Type:                           SVM
  failback:                       false
  Node List:                      pbrave1, pbrave2
  preferenced:                    true
  localonly:                      false
  autogen:                        true
  numsecondaries:                 1
  diskset name:                   ms1

(dsk/d4) Device group node list:  phys-schost-2
(dsk/d2) Device group node list:  phys-schost-1, phys-schost-2
(dsk/d1) Device group node list:  phys-schost-1, phys-schost-2

[Disable the localonly flag for each local disk on the node:]
phys-schost-1# cldevicegroup set -p localonly=false dsk/d4

[Verify that the localonly flag is disabled:]
phys-schost-1# cldevicegroup show -n phys-schost-2 -t rawdisk +
 (dsk/d4) Device group type:          Disk
 (dsk/d8) Device group type:          Local_Disk

[Remove the node from all raw-disk device groups:]
phys-schost-1# cldevicegroup remove-node -n phys-schost-2 dsk/d4
phys-schost-1# cldevicegroup remove-node -n phys-schost-2 dsk/d2
phys-schost-1# cldevicegroup remove-node -n phys-schost-2 dsk/d1
The method for establishing the primary ownership of a device group is based on the setting of an ownership preference attribute called preferenced. If the attribute is not set, the primary owner of an otherwise unowned device group is the first node that attempts to access a disk in that group. However, if this attribute is set, you must specify the preferred order in which nodes attempt to establish ownership.
If you disable the preferenced attribute, then the failback attribute is also automatically disabled. However, if you attempt to enable or re-enable the preferenced attribute, you have the choice of enabling or disabling the failback attribute.
If the preferenced attribute is either enabled or re-enabled, you are required to reestablish the order of nodes in the primary ownership preference list.
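The interplay described above can also be sketched with the cldevicegroup command directly, which is what clsetup generates behind the menus. Assuming an illustrative device group name (dg-schost-1) and node names, re-enabling the preferenced attribute requires restating the node order, and the failback attribute can be set in the same command:

# cldevicegroup set -p preferenced=true -p failback=false \
-p nodelist=phys-schost-1,phys-schost-2 dg-schost-1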
This procedure uses clsetup to set or unset the preferenced attribute and the failback attribute for Solaris Volume Manager or VxVM device groups.
Before You Begin
To perform this procedure, you need the name of the device group for which you are changing attribute values.
The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.
# clsetup
The Main Menu is displayed.
The Device Groups Menu is displayed.
The Change Key Properties Menu is displayed.
Follow the instructions to set the preferenced and failback options for a device group.
Look for the device group information displayed by the following command.
# cldevicegroup show -v devicegroup
Example 5-31 Changing Device Group Properties
The following example shows the cldevicegroup command generated by clsetup when it sets the attribute values for a device group (dg-schost-1).
# cldevicegroup set -p preferenced=true -p failback=true -p numsecondaries=1 \
-p nodelist=phys-schost-1,phys-schost-2 dg-schost-1
# cldevicegroup show dg-schost-1

=== Device Groups ===

Device Group Name:                dg-schost-1
  Type:                           SVM
  failback:                       yes
  Node List:                      phys-schost-1, phys-schost-2
  preferenced:                    yes
  numsecondaries:                 1
  diskset names:                  dg-schost-1
The numsecondaries property specifies the number of nodes within a device group that can master the group if the primary node fails. The default number of secondaries for device services is one. You can set the value to any integer between one and the number of operational nonprimary provider nodes in the device group.
This setting is an important factor in balancing cluster performance and availability. For example, increasing the desired number of secondaries increases the device group's opportunity to survive multiple simultaneous failures within the cluster. However, increasing the number of secondaries also degrades performance during normal operation. A smaller number of secondaries typically results in better performance but reduces availability, and a larger number of secondaries does not always result in greater availability of the file system or device group in question. Refer to Chapter 3, Key Concepts for System Administrators and Application Developers, in Oracle Solaris Cluster Concepts Guide for more information.
If you change the numsecondaries property, secondary nodes are added or removed from the device group if the change causes a mismatch between the actual number of secondaries and the desired number.
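As a sketch of this behavior, assuming an illustrative three-node device group named dg-schost-1, raising numsecondaries from one to two causes the cluster to promote an additional spare node to secondary, which can then be observed in the status output:

# cldevicegroup set -p numsecondaries=2 dg-schost-1
# cldevicegroup status dg-schost-1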
This procedure uses the clsetup utility to set the numsecondaries property for all types of device groups. Refer to cldevicegroup(1CL) for information about device group options when configuring any device group.
The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.
# clsetup
The Main Menu is displayed.
The Device Groups Menu is displayed.
The Change Key Properties Menu is displayed.
Follow the instructions and type the desired number of secondaries to be configured for the device group. The corresponding cldevicegroup command is then executed, a log is printed, and the utility returns to the previous menu.
# cldevicegroup show dg-schost-1

=== Device Groups ===

Device Group Name:                dg-schost-1
  Type:                           VxVM    (This might also be SDS or Local_Disk.)
  failback:                       yes
  Node List:                      phys-schost-1, phys-schost-2, phys-schost-3
  preferenced:                    yes
  numsecondaries:                 1
  diskgroup names:                dg-schost-1
Note - If you change any configuration information for a VxVM disk group or volume that is registered with the cluster, you must reregister the device group by using clsetup. Such configuration changes include adding or removing volumes, as well as changing the group, owner, or permissions of existing volumes. Reregistration after configuration changes ensures that the global namespace is in the correct state. See How to Update the Global-Devices Namespace.
Look for the device group information that is displayed by the following command.
# cldevicegroup show -v devicegroup
Example 5-32 Changing the Desired Number of Secondaries (Solaris Volume Manager)
The following example shows the cldevicegroup command that is generated by clsetup when it configures the desired number of secondaries for a device group (dg-schost-1). This example assumes that the disk group and volume were created previously.
# cldevicegroup set -p numsecondaries=1 dg-schost-1
# cldevicegroup show -v dg-schost-1

=== Device Groups ===

Device Group Name:                dg-schost-1
  Type:                           SVM
  failback:                       yes
  Node List:                      phys-schost-1, phys-schost-2
  preferenced:                    yes
  numsecondaries:                 1
  diskset names:                  dg-schost-1
Example 5-33 Setting the Desired Number of Secondaries (Veritas Volume Manager)
The following example shows the cldevicegroup command that is generated by clsetup when it sets the desired number of secondaries for a device group (dg-schost-1) to two. See How to Set the Desired Number of Secondaries for a Device Group for information about changing the desired number of secondaries after a device group is created.
# cldevicegroup set -p numsecondaries=2 dg-schost-1
# cldevicegroup show dg-schost-1

=== Device Groups ===

Device Group Name:                dg-schost-1
  Type:                           VxVM
  failback:                       yes
  Node List:                      phys-schost-1, phys-schost-2
  preferenced:                    yes
  numsecondaries:                 2
  diskgroup names:                dg-schost-1
Example 5-34 Setting the Desired Number of Secondaries to the Default Value
The following example shows use of a null string value to configure the default number of secondaries. The device group will be configured to use the default value, even if the default value changes.
# cldevicegroup set -p numsecondaries= dg-schost-1
# cldevicegroup show -v dg-schost-1

=== Device Groups ===

Device Group Name:                dg-schost-1
  Type:                           SVM
  failback:                       yes
  Node List:                      phys-schost-1, phys-schost-2, phys-schost-3
  preferenced:                    yes
  numsecondaries:                 1
  diskset names:                  dg-schost-1
You do not need to be superuser to list the configuration. However, you do need solaris.cluster.read authorization.
The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.
See the Oracle Solaris Cluster Manager online help for more information.
Use cldevicegroup show to list the configuration for all device groups in the cluster.
Use cldevicegroup show devicegroup to list the configuration of a single device group.
Use cldevicegroup status devicegroup to determine the status of a single device group.
Use cldevicegroup status + to determine the status of all device groups in the cluster.
Use the -v option with any of these commands to obtain more detailed information.
Example 5-35 Listing the Status of All Device Groups
# cldevicegroup status +

=== Cluster Device Groups ===

--- Device Group Status ---

Device Group Name    Primary          Secondary        Status
-----------------    -------          ---------        ------
dg-schost-1          phys-schost-2    phys-schost-1    Online
dg-schost-2          phys-schost-1    --               Offline
dg-schost-3          phys-schost-3    phys-schost-2    Online
Example 5-36 Listing the Configuration of a Particular Device Group
# cldevicegroup show dg-schost-1

=== Device Groups ===

Device Group Name:                dg-schost-1
  Type:                           SVM
  failback:                       yes
  Node List:                      phys-schost-2, phys-schost-3
  preferenced:                    yes
  numsecondaries:                 1
  diskset names:                  dg-schost-1
This procedure can also be used to start (bring online) an inactive device group.
You can also bring an inactive device group online or switch the primary for a device group by using the Oracle Solaris Cluster Manager GUI. See the Oracle Solaris Cluster Manager online help for more information.
The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.
# cldevicegroup switch -n nodename devicegroup
Specifies the name of the node to switch to. This node becomes the new primary.
Specifies the device group to switch.
If the device group is properly registered, information for the new primary is displayed when you use the following command.
# cldevicegroup status devicegroup
Example 5-37 Switching the Primary for a Device Group
The following example shows how to switch the primary for a device group and verify the change.
# cldevicegroup switch -n phys-schost-1 dg-schost-1
# cldevicegroup status dg-schost-1

=== Cluster Device Groups ===

--- Device Group Status ---

Device Group Name    Primary          Secondary        Status
-----------------    -------          ---------        ------
dg-schost-1          phys-schost-1    phys-schost-2    Online
Putting a device group in maintenance state prevents that device group from automatically being brought online whenever one of its devices is accessed. You should put a device group in maintenance state when completing repair procedures that require that all I/O activity be acquiesced until completion of the repair. Putting a device group in maintenance state also helps prevent data loss by ensuring that a device group is not brought online on one node while the disk set or disk group is being repaired on another node.
For instructions on how to restore a corrupted diskset, see Restoring a Corrupted Diskset.
Note - Before a device group can be placed in maintenance state, all access to its devices must be stopped, and all dependent file systems must be unmounted.
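For example, assuming a cluster file system mounted at /global/dg-schost-1 (an illustrative mount point) depends on the device group, the quiescing that the note requires might look like the following before the device group is placed in maintenance state:

# umount /global/dg-schost-1
# cldevicegroup disable dg-schost-1
# cldevicegroup offline dg-schost-1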
The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.
# cldevicegroup disable devicegroup
# cldevicegroup offline devicegroup
For Solaris Volume Manager:
# metaset -C take -f -s diskset
Caution - If you are taking ownership of a Solaris Volume Manager disk set, you must use the metaset -C take command when the device group is in maintenance state. Using metaset -t brings the device group online as part of taking ownership. If you are importing a VxVM disk group, you must use the -t flag when importing the disk group. Using the -t flag prevents the disk group from automatically being imported if this node is rebooted.
For Veritas Volume Manager:
# vxdg -t import disk-group-name
Caution - Before taking the device group out of maintenance state, you must release ownership of the disk set or disk group. Failure to release ownership can result in data loss.
For Solaris Volume Manager:
# metaset -C release -s diskset
For Veritas Volume Manager:
# vxdg deport diskgroupname
# cldevicegroup online devicegroup
# cldevicegroup enable devicegroup
Example 5-38 Putting a Device Group in Maintenance State
This example shows how to put the device group dg-schost-1 in maintenance state and then remove it from maintenance state.
[Place the device group in maintenance state.]
# cldevicegroup disable dg-schost-1
# cldevicegroup offline dg-schost-1

[If needed, manually import the disk set or disk group.]
For Solaris Volume Manager:
  # metaset -C take -f -s dg-schost-1
For Veritas Volume Manager:
  # vxdg -t import dg1

[Complete all necessary repair procedures.]

[Release ownership.]
For Solaris Volume Manager:
  # metaset -C release -s dg-schost-1
For Veritas Volume Manager:
  # vxdg deport dg1

[Bring the device group online.]
# cldevicegroup online dg-schost-1
# cldevicegroup enable dg-schost-1