
Reference for Oracle Solaris Cluster 4.4


Updated: August 2018
 
 

cldevicegroup (8CL)

Name

cldevicegroup, cldg - manage Oracle Solaris Cluster device groups

Synopsis

/usr/cluster/bin/cldevicegroup -V
/usr/cluster/bin/cldevicegroup [subcommand] -?
/usr/cluster/bin/cldevicegroup subcommand [options] -v 
     [devicegroup …]
/usr/cluster/bin/cldevicegroup add-device -d device[,…] 
     devicegroup
/usr/cluster/bin/cldevicegroup add-node -n node[,…] 
     [-t devicegroup-type[,…]] {+ | devicegroup ...}
/usr/cluster/bin/cldevicegroup create -n node[,…] 
     -t devicegroup-type [-d device[,…]] [-p name=value] 
     devicegroup ...
/usr/cluster/bin/cldevicegroup create -i {- | clconfigfile} 
     [-d device[,…]] [-n node[,…]] [-p name=value] 
     [-t devicegroup-type[,…]] {+ | devicegroup ...}
/usr/cluster/bin/cldevicegroup delete [-t devicegroup-type[,…]] 
     {+ | devicegroup ...}
/usr/cluster/bin/cldevicegroup disable [-t devicegroup-type[,…]] 
     {+ | devicegroup ...}
/usr/cluster/bin/cldevicegroup enable [-t devicegroup-type[,…]] 
     {+ | devicegroup ...}
/usr/cluster/bin/cldevicegroup export [-n node[,…]] 
     [-o {- | clconfigfile}] [-t devicegroup-type[,…]] 
     {+ | devicegroup ...}
/usr/cluster/bin/cldevicegroup list [-n node[,…]] 
     [-t devicegroup-type[,…]] [+ | devicegroup ...]
/usr/cluster/bin/cldevicegroup offline [-t devicegroup-type[,…]] 
     {+ | devicegroup ...}
/usr/cluster/bin/cldevicegroup online [-e] [-n node] 
     [-t devicegroup-type[,…]] {+ | devicegroup ...}
/usr/cluster/bin/cldevicegroup remove-device -d device[,…] 
     devicegroup
/usr/cluster/bin/cldevicegroup remove-node -n node[,…] 
     [-t devicegroup-type[,…]] {+ | devicegroup ...}
/usr/cluster/bin/cldevicegroup set -p name=value [-p name=value]… 
     [-d device[,…]] [-n node[,…]] [-t devicegroup-type[,…]] 
     {+ | devicegroup ...}
/usr/cluster/bin/cldevicegroup show [-n node[,…]] 
     [-t devicegroup-type[,…]] [+ | devicegroup ...]
/usr/cluster/bin/cldevicegroup status [-n node[,…]] 
     [-t devicegroup-type[,…]] [+ | devicegroup ...]
/usr/cluster/bin/cldevicegroup switch -n node 
     [-t devicegroup-type[,…]] {+ | devicegroup ...}
/usr/cluster/bin/cldevicegroup sync [-t devicegroup-type[,…]] 
     {+ | devicegroup ...}

Description

The cldevicegroup command manages Oracle Solaris Cluster device groups. The cldg command is the short form of the cldevicegroup command. These two commands are identical. You can use either form of the command.

The general form of this command is as follows:

cldevicegroup [subcommand] [options] [operands]

You can omit subcommand only if options is the –? option or the –V option.

Each option of this command has a long form and a short form. Both forms of each option are given with the description of the option in the Options section of this man page.

Except for the list, show, and status subcommands, each subcommand requires at least one operand. Many subcommands accept the plus sign (+) as an operand to indicate all applicable objects. Refer to the Synopsis and other sections of this man page for details.

Each subcommand can be used for all device-group types, except for the following subcommands:

  • The add-device and remove-device subcommands are only valid for the rawdisk type.

  • The add-node, create, delete, and remove-node subcommands are valid for either the rawdisk or the zpool device-group type.

You can use this command only in the global zone.

Subcommands

The following subcommands are supported:

add-device

Adds new member disk devices to an existing raw-disk device group.

You can only use the add-device subcommand on existing device groups of the type rawdisk. For more information about device-group types, see the description of the –t option.

Users other than the root user require solaris.cluster.modify authorization to use this subcommand.

For information about how to remove disk devices from a raw-disk device group, see the description of the remove-device subcommand.
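For example, the following command adds the DID device d5 to an existing raw-disk device group named rawdg1 (both names are hypothetical):

# cldevicegroup add-device -d d5 rawdg1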

add-node

Adds new nodes to an existing device group.

This subcommand supports either the rawdisk or the zpool device-group type. You cannot add a node to an svm or sds device group by using Oracle Solaris Cluster commands. Instead, use Solaris Volume Manager commands to add nodes to Solaris Volume Manager disk sets. Disk sets are automatically registered with Oracle Solaris Cluster software as svm or sds device groups. For more information about device-group types, see the description of the –t option.

You cannot use this subcommand on a device group if the preferenced property for the device group is set to true.

Users other than the root user require solaris.cluster.modify authorization to use this subcommand.

For information about how to remove nodes from a device group, see the description of the remove-node subcommand.

create

Creates a new device group.

You can use this subcommand only in the global zone.

This subcommand supports either the rawdisk or the zpool device-group type. You cannot create an svm or sds device group by using Oracle Solaris Cluster commands. Instead, use Solaris Volume Manager commands to create Solaris Volume Manager disk sets. Disk sets are automatically registered with Oracle Solaris Cluster software as svm or sds device groups. For more information about device-group types, see the description of the –t option.

If you specify a configuration file with the –i option, you can supply a plus sign (+) as the operand. When you use this operand, the command creates all device groups that are specified in the configuration file that do not already exist.

For device groups of type rawdisk, use the –d option with the create subcommand to specify one or more devices to the device group. You cannot create a device group without any device. When you specify devices, use one –d option per command invocation. You cannot create multiple raw-disk device groups in one command invocation unless you use the –i option.

Users other than the root user require solaris.cluster.modify authorization to use this subcommand.

For information about how to delete device groups, see the description of the delete subcommand.
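For example, the following command sketches the creation of a raw-disk device group named rawdg1 from the DID devices d3 and d4, with phys-schost-1 and phys-schost-2 as the potential-primary node list (all names are hypothetical):

# cldevicegroup create -t rawdisk -d d3,d4 -n phys-schost-1,phys-schost-2 rawdg1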

delete

Deletes device groups.

This subcommand supports either the rawdisk or the zpool device-group types.

You cannot delete svm or sds device groups by using Oracle Solaris Cluster commands. To delete svm or sds device groups, instead use Solaris Volume Manager commands to delete the underlying Solaris Volume Manager disk sets.

Device groups must be offline before you can delete them.

If you specify the + operand, only device groups that have the autogen property set to false are affected. To apply the command to device groups that are automatically created by the system at boot time, which have the autogen property set to true, you must explicitly specify each device group.

Users other than the root user require solaris.cluster.modify authorization to use this subcommand.

For information about how to create device groups, see the description of the create subcommand.

disable

Disables offline device groups.

The disabled state of device groups survives reboots.

Before you can disable an online device group, you must first take the device group offline by using the offline subcommand.

If a device group is currently online, the disable action fails and does not disable the specified device groups.

You cannot bring a disabled device group online by using the switch subcommand or the online subcommand. You must first use the enable subcommand to clear the disabled state of the device group.

If you specify the + operand, only device groups that have the autogen property set to false are affected. To apply the command to device groups that are automatically created by the system at boot time, which have the autogen property set to true, you must explicitly specify each device group.

Users other than the root user require solaris.cluster.modify authorization to use this subcommand.

For information about how to enable device groups, see the description of the enable subcommand.
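For example, assuming a device group named rawdg1 (a hypothetical name) that is currently online, the following sequence first takes the device group offline and then disables it:

# cldevicegroup offline rawdg1
# cldevicegroup disable rawdg1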

enable

Enables device groups.

The disabled state of device groups survives reboots.

Before you can bring a disabled device group online, you must first clear the disabled state of the device group by using the enable subcommand.

If you specify the + operand, only device groups that have the autogen property set to false are affected. To apply the command to device groups that are automatically created by the system at boot time, which have the autogen property set to true, you must explicitly specify each device group.

Users other than the root user require solaris.cluster.modify authorization to use this subcommand.

For information about how to disable device groups, see the description of the disable subcommand.

export

Exports the device-group configuration information.

If you specify a file name with the –o option, the configuration information is written to that new file. If you do not supply the –o option, the output is written to standard output.

Users other than the root user require solaris.cluster.read authorization to use this subcommand.

list

Displays a list of device groups.

By default, this subcommand lists all device groups in the cluster for which the autogen property is set to false. To display all device groups in the cluster, also specify the –v option.

If you specify the + operand, only device groups that have the autogen property set to false are affected. To apply the command to device groups that are automatically created by the system at boot time, which have the autogen property set to true, you must explicitly specify each device group.

Users other than the root user require solaris.cluster.read authorization to use this subcommand.
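For example, the following command lists all device groups in the cluster, including system-created device groups that have the autogen property set to true:

# cldevicegroup list -v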

offline

Takes device groups offline.

For device groups of type zpool, the file system datasets of the ZFS pool are unmounted and the pool is exported. If any of the unmounts fail due to the file systems being busy, the device group remains online and an error message is printed.

If a device group is online, you must take it offline by running the offline subcommand before you run the disable subcommand.

To start an offline device group, issue an explicit online subcommand or switch subcommand. For device groups of type other than zpool, you can also use the following methods of starting the device group:

  • Access a device within the device group.

  • Mount a file system that depends on the device group.

If you specify the + operand, only device groups that have the autogen property set to false are affected. To apply the command to device groups that are automatically created by the system at boot time, which have the autogen property set to true, you must explicitly specify each device group.

Users other than the root user require solaris.cluster.admin authorization to use this subcommand.

For information about how to bring device groups online, see the description of the online subcommand.

online

Brings device groups online on a predesignated node.

For device groups of type zpool, the ZFS pool is imported and the file system datasets of the pool are mounted, unless the device group's import-at-boot property is set to nomount.

If a device group is disabled, you must enable it in one of the following ways before you can bring the device group online:

  • Use the –e option with the online subcommand.

  • Run the enable subcommand before you run the online subcommand.

If you specify the + operand, only device groups that have the autogen property set to false are affected. To apply the command to device groups that are automatically created by the system at boot time, which have the autogen property set to true, you must explicitly specify each device group.

Users other than the root user require solaris.cluster.admin authorization to use this subcommand.

For information about how to take device groups offline, see the description of the offline subcommand.
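For example, assuming a disabled device group named rawdg1 and a node named phys-schost-1 (both names are hypothetical), the following command clears the disabled state and brings the device group online on that node:

# cldevicegroup online -e -n phys-schost-1 rawdg1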

remove-device

Removes member disk devices from a raw-disk device group.

You can use this subcommand only in the global zone.

The remove-device subcommand is only valid with device groups of type rawdisk. This subcommand is not valid with svm or sds device-group types.

You cannot use the remove-device subcommand to remove all the devices in a device group. A device group must contain at least one device. To remove all the devices in a device group, use the delete subcommand to remove the device group.

Users other than the root user require solaris.cluster.modify authorization to use this subcommand.

For information about how to add disk devices to a raw-disk device group, see the description of the add-device subcommand.

remove-node

Removes nodes from existing device groups.

This subcommand supports either the rawdisk or the zpool device-group type. You cannot remove a node from an svm or sds device group by using Oracle Solaris Cluster commands. Instead, use Solaris Volume Manager commands to remove nodes from Solaris Volume Manager disk sets. Disk sets are automatically registered with Oracle Solaris Cluster software as svm or sds device groups. For more information about device-group types, see the description of the –t option.

You cannot use the remove-node subcommand on a device group if the preferenced property is set to true.

Users other than the root user require solaris.cluster.modify authorization to use this subcommand.

For information about how to add nodes to a device group, see the description of the add-node subcommand.

set

Modifies attributes that are associated with a device group.

For device groups of type rawdisk, use the –d option with the set subcommand to specify a new list of member disk devices for the specified device group.

If you specify the + operand, only device groups that have the autogen property set to false are affected. To apply the command to device groups that are automatically created by the system at boot time, which have the autogen property set to true, you must explicitly specify each device group.

Users other than the root user require solaris.cluster.modify authorization to use this subcommand.

show

Generates a configuration report for device groups.

By default, this subcommand reports on all device groups in the cluster for which the autogen property is set to false. To display all device groups in the cluster, also specify the –v option.

If you specify the + operand, only device groups that have the autogen property set to false are affected. To apply the command to device groups that are automatically created by the system at boot time, which have the autogen property set to true, you must explicitly specify each device group.

Users other than the root user require solaris.cluster.read authorization to use this subcommand.

status

Generates a status report for device groups.

By default, this subcommand reports on all device groups in the cluster for which the autogen property is set to false. To display all device groups in the cluster, also specify the –v option.

If you specify the + operand, only device groups that have the autogen property set to false are affected. To apply the command to device groups that are automatically created by the system at boot time, which have the autogen property set to true, you must explicitly specify each device group.

Users other than the root user require solaris.cluster.read authorization to use this subcommand.

switch

Transfers device groups from one primary node in an Oracle Solaris Cluster configuration to another node.

If you specify the + operand, only device groups that have the autogen property set to false are affected. To apply the command to device groups that are automatically created by the system at boot time, which have the autogen property set to true, you must explicitly specify each device group.

Users other than the root user require solaris.cluster.modify authorization to use this subcommand.
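For example, assuming a device group named rawdg1 and a node named phys-schost-2 (both names are hypothetical), the following command switches the device group to that node as its new primary:

# cldevicegroup switch -n phys-schost-2 rawdg1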

sync

Synchronizes device-group information with the clustering software.

Use this subcommand whenever you change any volume attribute, such as owner, group, or access permissions.

Also use the sync subcommand to change a device-group configuration to a replicated or non-replicated configuration.

After you create a Solaris Volume Manager disk set that contains disks that are configured for replication, you must run the sync subcommand for the corresponding svm or sds device group. A Solaris Volume Manager disk set is automatically registered with Oracle Solaris Cluster software as an svm or sds device group, but replication information is not synchronized at that time.

For newly created rawdisk device-group types, you do not need to manually synchronize replication information for the disks. When you register a raw-disk device group with Oracle Solaris Cluster software, the software automatically discovers any replication information on the disks.

If you specify the + operand, only device groups that have the autogen property set to false are affected. To apply the command to device groups that are automatically created by the system at boot time, which have the autogen property set to true, you must explicitly specify each device group.

Users other than the root user require solaris.cluster.admin authorization to use this subcommand.

Options

The following options are supported:

–?
--help

Displays help information.

You can use this option either alone or with a subcommand.

  • If you use this option alone, the list of available subcommands is printed.

  • If you use this option with a subcommand, the usage options for that subcommand are printed.

When you use this option, no other processing is performed.

–d device[,…]
--device=device[,…]
--device device[,…]

Specifies the list of disk devices to be members of the specified raw-disk device group.

The –d option is only valid with device groups of type rawdisk.

Specify disks only by the DID global device name, for example, d3. See the did(4) man page for more information.

–e
--enable

Enables a device group. This option is only valid when used with the online subcommand.

If the specified device group is already enabled, the –e option is ignored and the command proceeds to bring the device group online.

–i {- | clconfigfile}
--input={- | clconfigfile}
--input {- | clconfigfile}

Specifies configuration information that is to be used for creating device groups. This information must conform to the format that is defined in the clconfiguration(7CL) man page. This information can be contained in a file or supplied through standard input. To specify standard input, supply the minus sign (-) instead of a file name.

The –i option affects only those device groups that you include in the fully qualified device-group list.

Options that you specify in the command override any options that are set in the configuration file. If configuration parameters are missing in the cluster configuration file, you must specify these parameters on the command line.

–n node[,…]
--node=node[,…]
--node node[,…]

Specifies a node or a list of nodes.

By default, the order of the node list indicates the preferred order in which nodes attempt to take over as the primary node for a device group. The exception is local-only disk groups, which are outside Oracle Solaris Cluster control; the concept of primary and secondary nodes does not apply to them.

If the preferenced property of the device group is set to false, the order of the node list is ignored. Instead, the first node to access a device in the group automatically becomes the primary node for that group. See the –p option for more information about setting the preferenced property for a device-group node list.

You cannot use the –n option to specify the node list of an svm or sds device group. You must instead use Solaris Volume Manager commands or utilities to specify the node list of the underlying disk set.

The create and set subcommands use the –n option to specify a list of potential primary nodes only for a device group of type rawdisk. You must specify the entire node list of the device group. You cannot use the –n option to add or remove an individual node from a node list.

The switch subcommand uses the –n option to specify a single node as the new device-group primary.

The export, list, show, and status subcommands use the –n option to exclude from the output those device groups that are not online on the specified nodes.

The concept of primary and secondary nodes does not apply to localonly disk groups, which are outside the control of Oracle Solaris Cluster.

–o {- | clconfigfile}
--output={- | clconfigfile}
--output {- | clconfigfile}

Displays the device-group configuration in the format that is described by the clconfiguration(7CL) man page. This information can be written to a file or to standard output.

If you supply a file name as the argument to this option, the command creates a new file and the configuration is printed to that file. If a file of the same name already exists, the command exits with an error. No change is made to the existing file.

If you supply the minus sign (-) as the argument to this option, the command displays the configuration information to standard output. All other standard output for the command is suppressed.

The –o option is only valid with the export subcommand.
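For example, the following command exports the configuration of a device group named rawdg1 to a new file (both the group name and the file path are hypothetical):

# cldevicegroup export -o /var/tmp/rawdg1.xml rawdg1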

–p name=value
--property=name=value
--property name=value

Sets the values of device-group properties.

The –p option is only valid with the create and set subcommands. Multiple instances of –p name=value are allowed.

The following properties are supported:

autogen

The autogen property can have a value of true or false. The default is false for manually created device groups. For system-created device groups, the default is true.

The autogen property is an indicator for the list, show, and status subcommands. These subcommands do not list devices that have the autogen property set to true unless you use the –v option. This is a read-only property and cannot be set or modified by the user.

This property is valid for device groups of types rawdisk and zpool. See the –t option for more information about device-group types.

failback

The failback property can have a value of true or false. The default is false.

The failback property specifies the behavior of the system if a device-group primary node leaves the cluster membership and later returns.

When the primary node of a device group leaves the cluster membership, the device group fails over to the secondary node. When the failed node rejoins the cluster membership, the device group can either continue to be mastered by the secondary node or fail back to the original primary node.

  • If the failback property is set to true, the device group becomes mastered by the original primary node.

  • If the failback property is set to false, the device group continues to be mastered by the secondary node.

By default, the failback property is disabled during device group creation. The failback property is not altered during a set operation.

import-at-boot

The import-at-boot property indicates whether the zpool in the device group will be imported when a booting node finds the device group offline at boot time.

The import-at-boot property can have a value of true, false, or nomount. The default is false.

  • When the value is true, the pool is imported and the datasets are mounted.

  • When the value is false, the pool is not imported.

  • When the value is nomount, the pool is imported but the file system datasets are not mounted.

The file system datasets that belong to the Oracle ZFS pool are mounted globally or non-globally depending on the poolaccess property value.

The import-at-boot property also controls whether the file system datasets of the pool get mounted automatically when the online subcommand causes the pool to be imported. The file system datasets are mounted unless the value of import-at-boot is nomount.

This property is valid only for device groups of type zpool.

localonly

The localonly property can have a value of true or false. The default is false.

The localonly property is only valid for disk groups of type rawdisk.

If you want a disk group to be mastered only by a particular node, configure the disk group with the property setting localonly=true. A local-only disk group is outside the control of Oracle Solaris Cluster software. You can specify only one node in the node list of a local-only disk group. When you set the localonly property for a disk group to true, the node list for the disk group must contain only one node.
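For example, the following command sketch creates a local-only raw-disk device group named localdg1 that is mastered only by the node phys-schost-1 (the group, device, and node names are hypothetical):

# cldevicegroup create -t rawdisk -d d6 -n phys-schost-1 -p localonly=true localdg1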

numsecondaries

The numsecondaries property must have an integer value greater than 0 but less than the total number of nodes in the node list. The default is 1.

This property setting can be used to dynamically change the desired number of secondary nodes for a device group. A secondary node of a device group can take over as the primary node if the current primary node fails.

You can use the numsecondaries property to change the number of secondary nodes for a device group while maintaining a given level of availability. If you remove a node from the secondary-nodes list of a device group, that node can no longer take over as a primary node.

The numsecondaries property only applies to the nodes in a device group that are currently in cluster mode. The nodes must also be capable of being used together with the device group's preferenced property. If a group's preferenced property is set to true, the nodes that are least preferred are removed from the secondary-nodes list first. If no node in a device group is flagged as preferred, the cluster randomly picks the node to remove.

When a device group's actual number of secondary nodes drops below the desired level, each eligible node that was removed from the secondary-nodes list is added back to the list. A node must meet all of the following conditions to be eligible to be added back to the secondary-nodes list:

  • The node is currently in the cluster.

  • The node belongs to the device group.

  • The node is not currently a primary node or a secondary node.

The conversion starts with the node in the device group that has the highest preference. More nodes are converted in order of preference until the desired number of secondary nodes is reached.

If a node joins the cluster and has a higher preference in the device group than an existing secondary node, the node with the lesser preference is removed from the secondary-nodes list. The removed node is replaced by the newly added node. This replacement only occurs when more actual secondary nodes exist in the cluster than the desired level.

See the preferenced property for more information about setting the preferenced property for a device-group node list.

poolaccess

The poolaccess property can have a value of noglobal or global. The default is noglobal.

The value noglobal indicates to the cluster that the datasets of the pool are to be made accessible only on the node where the pool is imported.

The value global indicates to the cluster that the datasets of the pool are to be made accessible from all the nodes of the device group, irrespective of the direct storage connection to the nodes. All the nodes must be defined by the –n node[,...] option, as described in the Options section of this man page.

This property is valid only for device groups of type zpool.

preferenced

The preferenced property can have a value of true or false. The default is true.

During the creation of a device group, if the preferenced property is set to true, the node list also indicates the preferred-node order. The preferred-node order determines the order in which each node attempts to take over as the primary node for a device group.

During the creation of a device group, if this property is set to false, the first node to access a device in the group automatically becomes the primary node. The order of nodes in the specified node list is not meaningful. Setting this property back to true without also re-specifying the node list does not reactivate node ordering.

The preferred-node order is not changed during a set operation unless you both specify the preferenced=true property and use the –n option to supply the entire node list for the device group, in the preferred order.
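For example, assuming a raw-disk device group named rawdg1 and nodes phys-schost-1 and phys-schost-2 (all hypothetical names), the following command enables preferred-node ordering and supplies the entire node list in the preferred order:

# cldevicegroup set -p preferenced=true -n phys-schost-2,phys-schost-1 rawdg1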

readonly

The readonly property determines whether the pool is to be imported in read-only mode, thereby preventing any writes to the datasets of the pool.

The readonly property can have a value of true or false.

This property is valid only for device groups of type zpool.

For more information, see the zpool(8) man page.

searchpaths

The searchpaths property contains a list of directory locations or devices that are used to find the Oracle ZFS pool configuration. The /dev/dsk directory location is used if the searchpaths property is empty or not specified.

This property is valid only for device groups of type zpool.

–t devicegroup-type[,…]
--type=devicegroup-type[,…]
--type devicegroup-type[,…]

Specifies a device-group type or a list of device-group types.

For the create subcommand, you can specify only one device-group type. The device group is then created for the type that you specify with this option.

For all other subcommands that accept the –t option, the device-group list that you supply to the command is qualified by this option to include only device groups of the specified type.

Not all subcommands and options are valid for all device-group types. For example, the create subcommand is valid for the zpool and rawdisk device-group types, but not for the svm or sds device-group types.

The –t option supports the following device-group types:

rawdisk

Specifies a raw-disk device group.

A raw disk is a disk that is not part of a volume-manager volume or metadevice. Raw-disk device groups enable you to define a set of disks within a device group. By default, at system boot a raw-disk device group is created for every device ID pseudo driver (DID) device in the configuration. By convention, the raw-disk device group names are assigned at initialization. These names are derived from the DID device names. For every node that you add to a raw-disk device group, the cldevicegroup command verifies that every device in the group is physically ported to the node.

The create subcommand creates a raw-disk device group and adds multiple disk devices to the device group. Before you can create a new raw-disk device group, you must remove each device that you want to add to the new group from the device group that was created for the device at boot time. Then you can create a new raw-disk device group that contains these devices. You specify the list of these devices with the –d option and the potential-primary node-preference list with the –n option.

To master a device group on a single specified node, use the –p option to configure the device group with the property setting localonly=true. You can specify only one node in the node list when you create a local-only device group.

The delete subcommand removes the device-group name from the cluster device-group configuration.

The set subcommand makes the following changes to a raw-disk device group:

  • Changes the preference order of the potential primary node

  • Specifies a new node list

  • Enables or disables failback

  • Sets the desired number of secondaries

  • Adds more global devices to the device group

If a raw-disk device name is registered in a raw-disk device group, you cannot also register the raw-disk device name in a Solaris Volume Manager device group.

sds

Specifies a device group that was originally created with Solstice DiskSuite software. With the exception of multi-owner disk sets, this device-group type is equivalent to the Solaris Volume Manager device-group type, svm. See the description of the svm device-group type for more information.

svm

Specifies a Solaris Volume Manager device group.

A Solaris Volume Manager device group is defined by the following components:

  • A name

  • The nodes upon which the group can be accessed

  • A global list of devices in the disk set

  • A set of properties that control actions such as potential primary preference and failback behavior

Solaris Volume Manager has the concept of a multi-hosted or shared disk set. A shared disk set is a grouping of two or more hosts and disk drives. The disk drives are accessible by all hosts and have the same device names on all hosts. This identical-device-naming requirement is achieved by using the raw-disk devices to form the disk set. The device ID pseudo driver (DID) allows multi-hosted disks to have consistent names across the cluster. Only hosts that are already configured as part of a disk set can be configured into the node list of a Solaris Volume Manager device group. When you add drives to a shared disk set, the drives must not belong to any other shared disk set.

The Solaris Volume Manager metaset command creates the disk set and automatically registers the disk set with Oracle Solaris Cluster software as a Solaris Volume Manager device group. After you create the device group, you must use the set subcommand of the cldevicegroup command to set the node preference list and the preferenced, failback, and numsecondaries properties.

You can assign only one Solaris Volume Manager disk set to a device group. The device-group name must always match the name of the disk set.

You cannot use the add-node or remove-node subcommands to add or remove nodes in a Solaris Volume Manager device group. Instead, use the Solaris Volume Manager metaset command to add or remove nodes in the underlying Solaris Volume Manager disk set.

You cannot use the delete subcommand to remove a Solaris Volume Manager device group from the cluster configuration. Instead, use the Solaris Volume Manager metaset command to remove the underlying Solaris Volume Manager disk set.

Only the export, list, show, status, and sync subcommands work on Solaris Volume Manager multi-owner disk sets. You must use Solaris Volume Manager commands or utilities to create and delete the underlying disk set of a Solaris Volume Manager device group.

zpool

Specifies a ZFS storage pool device group. A storage pool is a collection of devices that provides physical storage and data replication for ZFS datasets.

For more information, see the zpool(8) man page.

–V
–-version

Displays the version of the command.

Do not specify this option with subcommands, operands, or other options. The subcommands, operands, or other options are ignored. The –V option only displays the version of the command. No other operations are performed.

–v
–-verbose

Displays verbose messages to standard output.

You can use this option with any form of the command.

Operands

The following operand is supported:

devicegroup

Specifies a device group.

The cldevicegroup command accepts only Oracle Solaris Cluster device-group names as operands. For most forms of the command that accept more than one device-group name, you can use the plus sign (+) to specify all possible device groups.


Note -  The + operand includes only manually created device groups, but ignores all automatically created device groups, which have the autogen property set to true. Oracle Solaris Cluster software automatically creates such device groups at each system boot. To apply a command to these “hidden” device groups, you must specify each device group explicitly.

Exit Status

The complete set of exit status codes for all commands in this command set are listed on the Intro(8CL) man page.

If the command is successful for all specified operands, it returns zero (CL_NOERR). If an error occurs for an operand, the command processes the next operand in the operand list. The returned exit code always reflects the error that occurred first.

This command returns the following exit status codes:

0 CL_NOERR

No error

1 CL_ENOMEM

Not enough swap space

3 CL_EINVAL

Invalid argument

6 CL_EACCESS

Permission denied

35 CL_EIO

I/O error

36 CL_ENOENT

No such object

39 CL_EEXIST

Object exists

Examples

Example 1 Modifying a Device Group

The following example shows how to set the preferenced property of device group devgrp1 to true and set the numsecondaries property to 2. The command also specifies the desired node list, phys-schost-1,phys-schost-2,phys-schost-3.

# cldevicegroup set -p preferenced=true -p numsecondaries=2 \
-n phys-schost-1,phys-schost-2,phys-schost-3 devgrp1
Example 2 Modifying a Raw-Disk Device Group

The following example shows how to modify the existing raw-disk device group rawdevgrp1. The command specifies devices d3 and d4 in a new-member device list and sets the localonly attribute to true. The node phys-schost-1 is the only primary node that is allowed for the local-only raw-disk device group.

# cldevicegroup set -d d3,d4 \
-p localonly=true -n phys-schost-1 rawdevgrp1
Example 3 Resetting the numsecondaries Attribute of a Device Group

The following example shows how to reset the numsecondaries attribute of device group devgrp1 to the appropriate system default value by specifying no value for that attribute.

# cldevicegroup set -p numsecondaries= devgrp1
Example 4 Switching Over a Device Group

The following example shows how to switch over the device group devgrp1 to a new master node, phys-schost-2.

# cldevicegroup switch -n phys-schost-2 devgrp1
Example 5 Disabling a Device Group

The following example shows how to disable the device group devgrp1.

# cldevicegroup disable devgrp1
Example 6 Taking Offline a Device Group

The following example shows how to take device group devgrp1 offline and then disable it.

# cldevicegroup offline devgrp1
# cldevicegroup disable devgrp1
Example 7 Bringing a Device Group Online on its Primary Node

The following example shows how to bring online the device group devgrp1 on its default primary node. The command first enables the device group.

# cldevicegroup online -e devgrp1
Example 8 Bringing a Device Group Online on a Specified Node

The following example shows how to bring online the device group devgrp1 on phys-schost-2 as its new primary node.

# cldevicegroup switch -n phys-schost-2 devgrp1
Example 9 Adding New Nodes to a Device Group

The following example shows how to add a new node, phys-schost-3, to the device group devgrp1. This device group is not of the device-group type svm.

# cldevicegroup add-node -n phys-schost-3 devgrp1
Example 10 Deleting a Device Group

The following example shows how to delete the device group devgrp1 from the Oracle Solaris Cluster configuration. This device group is not of the device-group type svm.

# cldevicegroup delete devgrp1
Example 11 Synchronizing Replication Information With the Device-Group Configuration

The following example shows how to make Oracle Solaris Cluster software aware of the replication configuration that is used by the disks in the device group devgrp1.

# cldevicegroup sync devgrp1

Attributes

See attributes(7) for descriptions of the following attributes:

ATTRIBUTE TYPE
ATTRIBUTE VALUE
Availability
ha-cluster/system/core
Interface Stability
Evolving

See Also

clconfiguration(7CL), did(4), rbac(7), Intro(8CL), cldevice(8CL), cluster(8CL), metaset(8)

Administering an Oracle Solaris Cluster 4.4 Configuration

Notes

The root user can run any forms of this command.

Any user can also run this command with the following options:

  • –? (help) option

  • –V (version) option

To run this command with other subcommands, users other than the root user require authorizations. See the following table.

Subcommand
Authorization
add-device
solaris.cluster.modify
add-node
solaris.cluster.modify
create
solaris.cluster.modify
delete
solaris.cluster.modify
disable
solaris.cluster.modify
enable
solaris.cluster.modify
export
solaris.cluster.read
list
solaris.cluster.read
offline
solaris.cluster.admin
online
solaris.cluster.admin
remove-device
solaris.cluster.modify
remove-node
solaris.cluster.modify
set
solaris.cluster.modify
show
solaris.cluster.read
status
solaris.cluster.read
switch
solaris.cluster.modify
sync
solaris.cluster.admin