Oracle Solaris Cluster System Administration Guide, Oracle Solaris Cluster 3.3 3/13
1. Introduction to Administering Oracle Solaris Cluster
2. Oracle Solaris Cluster and RBAC
3. Shutting Down and Booting a Cluster
4. Data Replication Approaches
5. Administering Global Devices, Disk-Path Monitoring, and Cluster File Systems
Overview of Administering Global Devices and the Global Namespace
Global Device Permissions for Solaris Volume Manager
Dynamic Reconfiguration With Global Devices
Administering Storage-Based Replicated Devices
Administering Hitachi TrueCopy Replicated Devices
How to Configure a Hitachi TrueCopy Replication Group
How to Configure DID Devices for Replication Using Hitachi TrueCopy
How to Verify a Hitachi TrueCopy Replicated Global Device Group Configuration
Example: Configuring a TrueCopy Replication Group for Oracle Solaris Cluster
Administering EMC Symmetrix Remote Data Facility Replicated Devices
How to Configure an EMC SRDF Replication Group
How to Configure DID Devices for Replication Using EMC SRDF
How to Verify EMC SRDF Replicated Global Device Group Configuration
Example: Configuring an SRDF Replication Group for Oracle Solaris Cluster
Overview of Administering Cluster File Systems
Cluster File System Restrictions
How to Update the Global-Devices Namespace
How to Change the Size of a lofi Device That Is Used for the Global-Devices Namespace
Migrating the Global-Devices Namespace
How to Migrate the Global-Devices Namespace From a Dedicated Partition to a lofi Device
How to Migrate the Global-Devices Namespace From a lofi Device to a Dedicated Partition
Adding and Registering Device Groups
How to Add and Register a Device Group (Solaris Volume Manager)
How to Add and Register a Device Group (Raw-Disk)
How to Add and Register a Replicated Device Group (ZFS)
How to Remove and Unregister a Device Group (Solaris Volume Manager)
How to Remove a Node From All Device Groups
How to Remove a Node From a Device Group (Solaris Volume Manager)
How to Remove a Node From a Raw-Disk Device Group
How to Change Device Group Properties
How to Set the Desired Number of Secondaries for a Device Group
How to List a Device Group Configuration
Administering the SCSI Protocol Settings for Storage Devices
How to Display the Default Global SCSI Protocol Settings for All Storage Devices
How to Display the SCSI Protocol of a Single Storage Device
How to Change the Default Global Fencing Protocol Settings for All Storage Devices
How to Change the Fencing Protocol for a Single Storage Device
Administering Cluster File Systems
How to Add a Cluster File System
How to Remove a Cluster File System
How to Check Global Mounts in a Cluster
Administering Disk-Path Monitoring
How to Print Failed Disk Paths
How to Resolve a Disk-Path Status Error
How to Monitor Disk Paths From a File
How to Enable the Automatic Rebooting of a Node When All Monitored Shared-Disk Paths Fail
How to Disable the Automatic Rebooting of a Node When All Monitored Shared-Disk Paths Fail
7. Administering Cluster Interconnects and Public Networks
10. Configuring Control of CPU Usage
11. Patching Oracle Solaris Cluster Software and Firmware
12. Backing Up and Restoring a Cluster
13. Administering Oracle Solaris Cluster With the Graphical User Interfaces
As your cluster requirements change, you might need to add, remove, or modify the device groups on your cluster. Oracle Solaris Cluster provides an interactive interface called clsetup that you can use to make these changes. clsetup generates cluster commands. Generated commands are shown in the examples at the end of some procedures. The following table lists tasks for administering device groups and provides links to the appropriate procedures in this section.
Caution - Do not run metaset -s setname -f -t on a cluster node that is booted outside the cluster if other nodes are active cluster members and at least one of them owns the disk set.
Note - Oracle Solaris Cluster software automatically creates a raw-disk device group for each disk and tape device in the cluster. However, cluster device groups remain in an offline state until you access the groups as global devices.
Table 5-4 Task Map: Administering Device Groups
When adding a new global device, manually update the global-devices namespace by running the cldevice populate command.
Note - The cldevice populate command does not have any effect if the node that is running the command is not currently a cluster member. The command also has no effect if the /global/.devices/node@nodeID file system is not mounted.
You can run this command on all nodes in the cluster at the same time.
# cldevice populate
The cldevice command calls itself remotely on all nodes, even when the command is run from just one node. To determine whether the cldevice populate command has completed processing, run the following command on each node of the cluster.
# ps -ef | grep "cldevice populate"
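Checking each node by hand can be tedious, so the wait can be scripted. The following is a minimal sketch, assuming a POSIX shell with pgrep available; the helper name is ours, not part of the cluster software:

```shell
# Sketch: poll until no process matching a pattern remains, then return.
# "cldevice populate" is the process the guide says to check for; the
# helper itself is generic.
wait_for_completion() {
  pattern="$1"
  while pgrep -f "$pattern" >/dev/null 2>&1; do
    sleep 1
  done
}

# On each cluster node you would run (commented out here):
# wait_for_completion "cldevice populate"
```

Run on every node, this returns only after the local cldevice populate processing has finished.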
Example 5-21 Updating the Global-Devices Namespace
The following example shows the output generated by successfully running the cldevice populate command.
# cldevice populate
Configuring the /dev/global directory (global devices)...
obtaining access to all attached disks
reservation program successfully exiting
# ps -ef | grep "cldevice populate"
If you use a lofi device for the global-devices namespace on one or more nodes of the global cluster, perform this procedure to change the size of the device.
Do this to ensure that global devices are not served from this node while you perform this procedure. For instructions, see How to Boot a Node in Noncluster Mode.
The global-devices file system mounts locally.
phys-schost# umount /global/.devices/node\@`clinfo -n` > /dev/null 2>&1
Ensure that the lofi device is detached:
phys-schost# lofiadm -d /.globaldevices
The command returns no output if the device is detached.
Note - If the file system is mounted by using the -m option, no entry is added to the mnttab file. The umount command might report a warning similar to the following:
umount: warning: /global/.devices/node@2 not in mnttab ====>>>> not mounted
This warning is safe to ignore.
The following example shows the creation of a new /.globaldevices file that is 200 Mbytes in size.
phys-schost# rm /.globaldevices
phys-schost# mkfile 200M /.globaldevices
phys-schost# lofiadm -a /.globaldevices
phys-schost# newfs `lofiadm /.globaldevices` < /dev/null
The global devices are now populated on the new file system.
phys-schost# reboot
You can create a namespace on a loopback file interface (lofi) device, rather than creating a global-devices namespace on a dedicated partition. This feature is useful if you are installing Oracle Solaris Cluster software on systems that are pre-installed with the Oracle Solaris 10 OS.
Note - ZFS for root file systems is supported, with one significant exception. If you use a dedicated partition of the boot disk for the global-devices file system, you must use only UFS as its file system. The global-devices namespace requires the proxy file system (PxFS) running on a UFS file system. However, a UFS file system for the global-devices namespace can coexist with a ZFS file system for the root (/) file system and other root file systems, for example, /var or /home. Alternatively, if you instead use a lofi device to host the global-devices namespace, there is no limitation on the use of ZFS for root file systems.
The following procedures describe how to move an existing global-devices namespace from a dedicated partition to a lofi device or vice versa:
How to Migrate the Global-Devices Namespace From a Dedicated Partition to a lofi Device
How to Migrate the Global-Devices Namespace From a lofi Device to a Dedicated Partition
Do this to ensure that global devices are not served from this node while you perform this procedure. For instructions, see How to Boot a Node in Noncluster Mode.
# mkfile 100m /.globaldevices
# lofiadm -a /.globaldevices
# LOFI_DEV=`lofiadm /.globaldevices`
# newfs `echo ${LOFI_DEV} | sed -e 's/lofi/rlofi/g'` < /dev/null
# lofiadm -d /.globaldevices
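The newfs step needs the raw (character) lofi device, which the guide derives from the block device name with sed. The following standalone check of that name transformation uses the typical first lofi device path as an example:

```shell
# The sed expression turns the block lofi path into its raw counterpart
# (/dev/lofi/N -> /dev/rlofi/N). Device path here is an illustrative value.
LOFI_DEV="/dev/lofi/1"
RAW_DEV=$(echo "${LOFI_DEV}" | sed -e 's/lofi/rlofi/g')
echo "${RAW_DEV}"
```

On a live node, LOFI_DEV would instead come from `lofiadm /.globaldevices`, as shown in the procedure.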
# svcadm disable globaldevices
# svcadm disable scmountdev
# svcadm enable scmountdev
# svcadm enable globaldevices
A lofi device is now created on /.globaldevices and mounted as the global-devices file system.
# /usr/cluster/bin/cldevice populate
On each node, verify that the command has completed processing before you perform any further actions on the cluster.
# ps -ef | grep "cldevice populate"
The global-devices namespace now resides on a lofi device.
Do this to ensure that global devices are not served from this node while you perform this procedure. For instructions, see How to Boot a Node in Noncluster Mode.
Is at least 512 MByte in size
Uses the UFS file system
# /usr/sbin/clinfo -n
The command prints the node ID.
blockdevice rawdevice /global/.devices/node@nodeID ufs 2 no global
For example, if the partition that you choose to use is /dev/did/rdsk/d5s3, the new entry to add to the /etc/vfstab file would then be as follows: /dev/did/dsk/d5s3 /dev/did/rdsk/d5s3 /global/.devices/node@3 ufs 2 no global
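The vfstab entry can be derived mechanically from the chosen raw DID slice, since the block device differs only in dsk versus rdsk. The following is a small sketch using the example slice and node ID from the text:

```shell
# Build the vfstab entry for the global-devices file system from the raw
# DID slice. Slice name and node ID are the example values, not fixed.
raw_dev="/dev/did/rdsk/d5s3"
node_id=3
block_dev=$(printf '%s\n' "$raw_dev" | sed 's,/rdsk/,/dsk/,')
entry="$block_dev $raw_dev /global/.devices/node@$node_id ufs 2 no global"
echo "$entry"
```

The printed line is what you would append to /etc/vfstab for that node.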
# lofiadm -d /.globaldevices
# rm /.globaldevices
# svcadm disable globaldevices
# svcadm disable scmountdev
# svcadm enable scmountdev
# svcadm enable globaldevices
The partition is now mounted as the global-devices namespace file system.
# /usr/cluster/bin/cldevice populate
Ensure that the process completes on all nodes of the cluster before you perform any further action on any of the nodes.
# ps -ef | grep "cldevice populate"
The global-devices namespace now resides on the dedicated partition.
You can add and register device groups for Solaris Volume Manager, ZFS, or raw-disk.
Use the metaset command to create a Solaris Volume Manager disk set and register the disk set as an Oracle Solaris Cluster device group. When you register the disk set, the name that you assigned to the disk set is automatically assigned to the device group.
The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.
Caution - The name of the Oracle Solaris Cluster device group that you create (Solaris Volume Manager or raw-disk) must be the same as the name of the replicated device group.
# metaset -s diskset -a -M -h nodelist
-s diskset
Specifies the disk set to be created.
-a -h nodelist
Adds the list of nodes that can master the disk set.
-M
Designates the disk set as multi-owner.
Note - Running the metaset command to set up a Solaris Volume Manager device group on a cluster results in one secondary by default, regardless of the number of nodes that are included in that device group. You can change the desired number of secondary nodes by using the clsetup utility after the device group has been created. Refer to How to Set the Desired Number of Secondaries for a Device Group for more information about disk failover.
# cldevicegroup sync devicegroup
The device group name matches the disk set name that is specified with metaset.
# cldevicegroup list
# cldevice show | grep Device
Choose drives that are shared by the cluster nodes that will master or potentially master the disk set.
Use the full DID device name, which has the form /dev/did/rdsk/dN, when you add a drive to a disk set.
In the following example, the entries for DID device /dev/did/rdsk/d3 indicate that the drive is shared by phys-schost-1 and phys-schost-2.
=== DID Device Instances === DID Device Name: /dev/did/rdsk/d1 Full Device Path: phys-schost-1:/dev/rdsk/c0t0d0 DID Device Name: /dev/did/rdsk/d2 Full Device Path: phys-schost-1:/dev/rdsk/c0t6d0 DID Device Name: /dev/did/rdsk/d3 Full Device Path: phys-schost-1:/dev/rdsk/c1t1d0 Full Device Path: phys-schost-2:/dev/rdsk/c1t1d0 …
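Because shared drives are exactly those DID instances that list a Full Device Path on more than one node, the check can be automated. The sketch below parses sample cldevice show text mirroring the output above; a live run would pipe the command output through the same awk program:

```shell
# Sketch: print DID devices whose entries list paths on more than one node,
# i.e. drives shared between cluster nodes. Sample text stands in for the
# real "cldevice show | grep Device" output.
sample='DID Device Name: /dev/did/rdsk/d1
Full Device Path: phys-schost-1:/dev/rdsk/c0t0d0
DID Device Name: /dev/did/rdsk/d3
Full Device Path: phys-schost-1:/dev/rdsk/c1t1d0
Full Device Path: phys-schost-2:/dev/rdsk/c1t1d0'
shared=$(printf '%s\n' "$sample" | awk '
  /DID Device Name/  { if (paths > 1) print dev; dev = $NF; paths = 0 }
  /Full Device Path/ { paths++ }
  END                { if (paths > 1) print dev }')
echo "$shared"
```

Here only d3 qualifies, because it is the only drive with paths on both phys-schost-1 and phys-schost-2.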
Use the full DID path name.
# metaset -s setname -a /dev/did/rdsk/dN
Specifies the disk set name, which is the same as the device group name.
Adds the drive to the disk set.
Note - Do not use the lower-level device name (cNtXdY) when you add a drive to a disk set. Because the lower-level device name is a local name and not unique throughout the cluster, using this name might prevent the metaset from being able to switch over.
# metaset -s setname
Example 5-22 Adding a Solaris Volume Manager Device Group
The following example shows the creation of the disk set and device group with the disk drives /dev/did/rdsk/d1 and /dev/did/rdsk/d2 and verifies that the device group has been created.
# metaset -s dg-schost-1 -a -h phys-schost-1
# metaset -s dg-schost-1 -a /dev/did/rdsk/d1 /dev/did/rdsk/d2
# cldevicegroup list
dg-schost-1
Oracle Solaris Cluster software supports the use of raw-disk device groups in addition to other volume managers. When you initially configure Oracle Solaris Cluster, device groups are automatically configured for each raw device in the cluster. Use this procedure to reconfigure these automatically created device groups for use with Oracle Solaris Cluster software.
Create a new device group of the raw-disk type for the following reasons:
You want to add more than one DID to the device group
You need to change the name of the device group
You want to create a list of device groups without using the -v option of the cldg command
Caution - If you are creating a device group on replicated devices, the name of the device group that you create (Solaris Volume Manager or raw-disk) must be the same as the name of the replicated device group.
The following commands remove the predefined device groups for d7 and d8.
paris-1# cldevicegroup disable dsk/d7 dsk/d8
paris-1# cldevicegroup offline dsk/d7 dsk/d8
paris-1# cldevicegroup delete dsk/d7 dsk/d8
The following command creates a global device group, rawdg, which contains d7 and d8.
paris-1# cldevicegroup create -n phys-paris-1,phys-paris-2 -t rawdisk -d d7,d8 rawdg
paris-1# /usr/cluster/lib/dcs/cldg show rawdg -d d7
rawdg
paris-1# /usr/cluster/lib/dcs/cldg show rawdg -d d8
rawdg
To replicate ZFS, you must create a named device group and list the disks that belong to the zpool. A device can belong to only one device group at a time, so if you already have an Oracle Solaris Cluster device group that contains the device, you must delete the group before you add that device to a new ZFS device group.
The name of the Oracle Solaris Cluster device group that you create (Solaris Volume Manager or raw-disk) must be the same as the name of the replicated device group.
Caution - Full support for ZFS with third-party data-replication technologies is pending. See the latest Oracle Solaris Cluster Release Notes for updates on ZFS support.
For example, if you have a zpool called mypool that contains two devices /dev/did/dsk/d2 and /dev/did/dsk/d13, you must delete the two default device groups called d2 and d13.
# cldevicegroup offline dsk/d2 dsk/d13
# cldevicegroup delete dsk/d2 dsk/d13
# cldevicegroup create -n pnode1,pnode2 -d d2,d13 -t rawdisk mypool
This action creates a device group called mypool (with the same name as the zpool), which manages the raw devices /dev/did/dsk/d2 and /dev/did/dsk/d13.
# zpool create mypool mirror /dev/did/dsk/d2 /dev/did/dsk/d13
# clrg create -n pnode1,pnode2 migrate_truecopydg-rg
# clrs create -t HAStoragePlus -x globaldevicepaths=mypool -g \ migrate_truecopydg-rg hasp2migrate_mypool
# clrg create -n pnode1:zone-1,pnode2:zone-2 -p \ RG_affinities=+++migrate_truecopydg-rg sybase-rg
# clrs create -g sybase-rg -t HAStoragePlus -p zpools=mypool \ -p resource_dependencies=hasp2migrate_mypool \ -p ZpoolsSearchDir=/dev/did/dsk hasp2import_mypool
You can perform a variety of administrative tasks for your device groups.
Device groups are Solaris Volume Manager disk sets that have been registered with Oracle Solaris Cluster. To remove a Solaris Volume Manager device group, use the metaclear and metaset commands. These commands remove the device group of the same name and unregister the disk set as an Oracle Solaris Cluster device group.
Refer to the Solaris Volume Manager documentation for the steps to remove a disk set.
Use this procedure to remove a cluster node from all device groups that list the node in their lists of potential primaries.
The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.
Look for the node name in the Device group node list for each device group.
# cldevicegroup list -v
# cldevicegroup list -v
The command returns nothing if the node is no longer listed as a potential primary of any device group.
# cldevicegroup list -v nodename
Use this procedure to remove a cluster node from the list of potential primaries of a Solaris Volume Manager device group. Repeat the metaset command for each device group from which you want to remove the node.
Caution - Do not run metaset -s setname -f -t on a cluster node that is booted outside the cluster if other nodes are active cluster members and at least one of them owns the disk set.
The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.
Device group type SDS/SVM indicates a Solaris Volume Manager device group.
phys-schost-1% cldevicegroup show devicegroup
# cldevicegroup status devicegroup
# metaset -s setname -d -h nodelist
-s setname
Specifies the device group name.
-d
Deletes from the device group the nodes identified by -h.
-h nodelist
Specifies the node name of the node or nodes to remove.
Note - The update can take several minutes to complete.
If the command fails, add the -f (force) option to the command.
# metaset -s setname -d -f -h nodelist
The device group name matches the disk set name that is specified with metaset.
phys-schost-1% cldevicegroup list -v devicegroup
Example 5-23 Removing a Node From a Device Group (Solaris Volume Manager)
The following example shows the removal of the hostname phys-schost-2 from a device group configuration. This example eliminates phys-schost-2 as a potential primary for the designated device group. Verify removal of the node by running the cldevicegroup show command. Check that the removed node is no longer displayed in the screen text.
[Determine the Solaris Volume Manager device group for the node:]
# cldevicegroup show dg-schost-1
=== Device Groups ===
Device Group Name:            dg-schost-1
Type:                         SVM
failback:                     no
Node List:                    phys-schost-1, phys-schost-2
preferenced:                  yes
numsecondaries:               1
diskset name:                 dg-schost-1
[Determine which node is the current primary for the device group:]
# cldevicegroup status dg-schost-1
=== Cluster Device Groups ===
--- Device Group Status ---
Device Group Name    Primary          Secondary        Status
-----------------    -------          ---------        ------
dg-schost-1          phys-schost-1    phys-schost-2    Online
[Become superuser on the node that currently owns the device group.]
[Remove the host name from the device group:]
# metaset -s dg-schost-1 -d -h phys-schost-2
[Verify removal of the node:]
phys-schost-1% cldevicegroup list -v dg-schost-1
=== Cluster Device Groups ===
--- Device Group Status ---
Device Group Name    Primary          Secondary        Status
-----------------    -------          ---------        ------
dg-schost-1          phys-schost-1    -                Online
Use this procedure to remove a cluster node from the list of potential primaries of a raw-disk device group.
The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.
# cldevicegroup show -n nodename -t rawdisk +
# cldevicegroup set -p localonly=false devicegroup
See the cldevicegroup(1CL) man page for more information about the localonly property.
The Disk device group type indicates that the localonly property is disabled for that raw-disk device group.
# cldevicegroup show -n nodename -t rawdisk -v +
You must complete this step for each raw-disk device group that is connected to the node being removed.
# cldevicegroup remove-node -n nodename devicegroup
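When the node appears in several raw-disk device groups, the repetition can be scripted. To stay on the safe side, this sketch only echoes the commands it would run; node and group names are placeholders, and you would drop the echo to execute for real:

```shell
# Sketch: print one cldevicegroup remove-node command per device group.
# Remove the echo to actually run the commands on a cluster node.
remove_node_from_groups() {
  node="$1"; shift
  for dg in "$@"; do
    echo cldevicegroup remove-node -n "$node" "$dg"
  done
}

remove_node_from_groups phys-schost-2 dsk/d4 dsk/d2 dsk/d1
```

Reviewing the echoed commands before executing them guards against removing the node from the wrong group.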
Example 5-24 Removing a Node From a Raw Device Group
This example shows how to remove a node (phys-schost-2) from a raw-disk device group. All commands are run from another node of the cluster (phys-schost-1).
[Identify the device groups connected to the node being removed, and determine which are raw-disk device groups:]
phys-schost-1# cldevicegroup show -n phys-schost-2 -t rawdisk -v +
Device Group Name:            dsk/d4
Type:                         Disk
failback:                     false
Node List:                    phys-schost-2
preferenced:                  false
localonly:                    false
autogen:                      true
numsecondaries:               1
device names:                 phys-schost-2
Device Group Name:            dsk/d2
Type:                         Disk
failback:                     true
Node List:                    pbrave2
preferenced:                  false
localonly:                    false
autogen:                      true
numsecondaries:               1
diskgroup name:               vxdg1
Device Group Name:            dsk/d1
Type:                         SVM
failback:                     false
Node List:                    pbrave1, pbrave2
preferenced:                  true
localonly:                    false
autogen:                      true
numsecondaries:               1
diskset name:                 ms1
(dsk/d4) Device group node list: phys-schost-2
(dsk/d2) Device group node list: phys-schost-1, phys-schost-2
(dsk/d1) Device group node list: phys-schost-1, phys-schost-2
[Disable the localonly flag for each local disk on the node:]
phys-schost-1# cldevicegroup set -p localonly=false dsk/d4
[Verify that the localonly flag is disabled:]
phys-schost-1# cldevicegroup show -n phys-schost-2 -t rawdisk +
(dsk/d4) Device group type: Disk
(dsk/d8) Device group type: Local_Disk
[Remove the node from all raw-disk device groups:]
phys-schost-1# cldevicegroup remove-node -n phys-schost-2 dsk/d4
phys-schost-1# cldevicegroup remove-node -n phys-schost-2 dsk/d2
phys-schost-1# cldevicegroup remove-node -n phys-schost-2 dsk/d1
The method for establishing the primary ownership of a device group is based on the setting of an ownership preference attribute called preferenced. If the attribute is not set, the primary owner of an otherwise unowned device group is the first node that attempts to access a disk in that group. However, if this attribute is set, you must specify the preferred order in which nodes attempt to establish ownership.
If you disable the preferenced attribute, then the failback attribute is also automatically disabled. However, if you attempt to enable or re-enable the preferenced attribute, you have the choice of enabling or disabling the failback attribute.
If the preferenced attribute is either enabled or re-enabled, you are required to reestablish the order of nodes in the primary ownership preference list.
This procedure uses clsetup to set or unset the preferenced attribute and the failback attribute for Solaris Volume Manager device groups.
Before You Begin
To perform this procedure, you need the name of the device group for which you are changing attribute values.
The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.
# clsetup
The Main Menu is displayed.
The Device Groups Menu is displayed.
The Change Key Properties Menu is displayed.
Follow the instructions to set the preferenced and failback options for a device group.
Look for the device group information displayed by the following command.
# cldevicegroup show -v devicegroup
Example 5-25 Changing Device Group Properties
The following example shows the cldevicegroup command generated by clsetup when it sets the attribute values for a device group (dg-schost-1).
# cldevicegroup set -p preferenced=true -p failback=true -p numsecondaries=1 \
-p nodelist=phys-schost-1,phys-schost-2 dg-schost-1
# cldevicegroup show dg-schost-1
=== Device Groups ===
Device Group Name:            dg-schost-1
Type:                         SVM
failback:                     yes
Node List:                    phys-schost-1, phys-schost-2
preferenced:                  yes
numsecondaries:               1
diskset names:                dg-schost-1
The numsecondaries property specifies the number of nodes within a device group that can master the group if the primary node fails. The default number of secondaries for device services is one. You can set the value to any integer between one and the number of operational nonprimary provider nodes in the device group.
This setting is an important factor in balancing cluster performance and availability. For example, increasing the desired number of secondaries increases the device group's opportunity to survive multiple failures that occur simultaneously within a cluster. However, increasing the number of secondaries also slightly degrades performance during normal operation. A smaller number of secondaries typically results in better performance but reduces availability, while a larger number of secondaries does not always result in greater availability of the file system or device group in question. Refer to Chapter 3, Key Concepts for System Administrators and Application Developers, in Oracle Solaris Cluster Concepts Guide for more information.
If you change the numsecondaries property, secondary nodes are added or removed from the device group if the change causes a mismatch between the actual number of secondaries and the desired number.
This procedure uses the clsetup utility to set the numsecondaries property for all types of device groups. Refer to cldevicegroup(1CL) for information about device group options when configuring any device group.
The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.
# clsetup
The Main Menu is displayed.
The Device Groups Menu is displayed.
The Change Key Properties Menu is displayed.
Follow the instructions and type the desired number of secondaries to be configured for the device group. The corresponding cldevicegroup command is then executed, a log is printed, and the utility returns to the previous menu.
# cldevicegroup show dg-schost-1
=== Device Groups ===
Device Group Name:            dg-schost-1
Type:                         Local_Disk   (might also be SDS/SVM)
failback:                     yes
Node List:                    phys-schost-1, phys-schost-2, phys-schost-3
preferenced:                  yes
numsecondaries:               1
diskgroup names:              dg-schost-1
Note - If you change any configuration information for a disk group or volume that is registered with the cluster, you must reregister the device group by using clsetup. Such configuration changes include adding or removing volumes, as well as changing the group, owner, or permissions of existing volumes. Reregistration after configuration changes ensures that the global namespace is in the correct state. See How to Update the Global-Devices Namespace.
Look for the device group information that is displayed by the following command.
# cldevicegroup show -v devicegroup
Example 5-26 Changing the Desired Number of Secondaries (Solaris Volume Manager)
The following example shows the cldevicegroup command that is generated by clsetup when it configures the desired number of secondaries for a device group (dg-schost-1). This example assumes that the disk group and volume were created previously.
# cldevicegroup set -p numsecondaries=1 dg-schost-1
# cldevicegroup show -v dg-schost-1
=== Device Groups ===
Device Group Name:            dg-schost-1
Type:                         SVM
failback:                     yes
Node List:                    phys-schost-1, phys-schost-2
preferenced:                  yes
numsecondaries:               1
diskset names:                dg-schost-1
Example 5-27 Setting the Desired Number of Secondaries to the Default Value
The following example shows use of a null string value to configure the default number of secondaries. The device group will be configured to use the default value, even if the default value changes.
# cldevicegroup set -p numsecondaries= dg-schost-1
# cldevicegroup show -v dg-schost-1
=== Device Groups ===
Device Group Name:            dg-schost-1
Type:                         SVM
failback:                     yes
Node List:                    phys-schost-1, phys-schost-2, phys-schost-3
preferenced:                  yes
numsecondaries:               1
diskset names:                dg-schost-1
You do not need to be superuser to list the configuration. However, you do need solaris.cluster.read authorization.
The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.
See the Oracle Solaris Cluster Manager online help for more information.
Use cldevicegroup show to list the configuration for all device groups in the cluster.
Use cldevicegroup show devicegroup to list the configuration of a single device group.
Use cldevicegroup status devicegroup to determine the status of a single device group.
Use cldevicegroup status + to determine the status of all device groups in the cluster.
Use the -v option with any of these commands to obtain more detailed information.
Example 5-28 Listing the Status of All Device Groups
# cldevicegroup status +
=== Cluster Device Groups ===
--- Device Group Status ---
Device Group Name    Primary          Secondary        Status
-----------------    -------          ---------        ------
dg-schost-1          phys-schost-2    phys-schost-1    Online
dg-schost-2          phys-schost-1    -                Offline
dg-schost-3          phys-schost-3    phys-schost-2    Online
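Because the status report is plain columnar text, it can be filtered with standard tools, for instance to flag device groups that are not online. In this sketch, sample rows stand in for live output; a real run would pipe cldevicegroup status + through the same awk program:

```shell
# Sketch: list device groups whose last status column is not "Online".
# The sample rows mirror typical cldevicegroup status output.
status='dg-schost-1 phys-schost-2 phys-schost-1 Online
dg-schost-2 phys-schost-1 - Offline
dg-schost-3 phys-schost-3 phys-schost-2 Online'
not_online=$(printf '%s\n' "$status" | awk '$NF != "Online" { print $1 }')
echo "$not_online"
```

With these sample rows, only dg-schost-2 is reported, because its status is Offline.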
Example 5-29 Listing the Configuration of a Particular Device Group
# cldevicegroup show dg-schost-1
=== Device Groups ===
Device Group Name:            dg-schost-1
Type:                         SVM
failback:                     yes
Node List:                    phys-schost-2, phys-schost-3
preferenced:                  yes
numsecondaries:               1
diskset names:                dg-schost-1
This procedure can also be used to start (bring online) an inactive device group.
You can also bring an inactive device group online or switch the primary for a device group by using the Oracle Solaris Cluster Manager GUI. See the Oracle Solaris Cluster Manager online help for more information.
The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.
# cldevicegroup switch -n nodename devicegroup
Specifies the name of the node to switch to. This node becomes the new primary.
Specifies the device group to switch.
If the device group is properly registered, information for the new device group is displayed when you use the following command.
# cldevicegroup status devicegroup
Example 5-30 Switching the Primary for a Device Group
The following example shows how to switch the primary for a device group and verify the change.
# cldevicegroup switch -n phys-schost-1 dg-schost-1
# cldevicegroup status dg-schost-1
=== Cluster Device Groups ===
--- Device Group Status ---
Device Group Name    Primary          Secondary        Status
-----------------    -------          ---------        ------
dg-schost-1          phys-schost-1    phys-schost-2    Online
Putting a device group in maintenance state prevents that device group from automatically being brought online whenever one of its devices is accessed. You should put a device group in maintenance state when completing repair procedures that require all I/O activity to be quiesced until the repair is complete. Putting a device group in maintenance state also helps prevent data loss by ensuring that a device group is not brought online on one node while the disk set or disk group is being repaired on another node.
For instructions on how to restore a corrupted diskset, see Restoring a Corrupted Diskset.
Note - Before a device group can be placed in maintenance state, all access to its devices must be stopped, and all dependent file systems must be unmounted.
The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.
# cldevicegroup disable devicegroup
# cldevicegroup offline devicegroup
For Solaris Volume Manager:
# metaset -C take -f -s diskset
Caution - If you are taking ownership of a Solaris Volume Manager disk set, you must use the metaset -C take command when the device group is in maintenance state. Using metaset -t brings the device group online as part of taking ownership.
Caution - Before taking the device group out of maintenance state, you must release ownership of the disk set or disk group. Failure to release ownership can result in data loss.
For Solaris Volume Manager:
# metaset -C release -s diskset
# cldevicegroup online devicegroup
# cldevicegroup enable devicegroup
Example 5-31 Putting a Device Group in Maintenance State
This example shows how to put device group dg-schost-1 in maintenance state, and remove the device group from maintenance state.
[Place the device group in maintenance state.]
# cldevicegroup disable dg-schost-1
# cldevicegroup offline dg-schost-1
[If needed, manually import the disk set or disk group.]
For Solaris Volume Manager:
# metaset -C take -f -s dg-schost-1
[Complete all necessary repair procedures.]
[Release ownership.]
For Solaris Volume Manager:
# metaset -C release -s dg-schost-1
[Bring the device group online.]
# cldevicegroup online dg-schost-1
# cldevicegroup enable dg-schost-1