Oracle Solaris Cluster Reference Manual     Oracle Solaris Cluster


Name

cldevice, cldev - manage Oracle Solaris Cluster devices

Synopsis

/usr/cluster/bin/cldevice -V
/usr/cluster/bin/cldevice [subcommand] -?
/usr/cluster/bin/cldevice subcommand [options] -v [+ | device ...]
/usr/cluster/bin/cldevice check [-n node[,…]] [+]
/usr/cluster/bin/cldevice clear [-n node[,…]] [+]
/usr/cluster/bin/cldevice combine -t replication-type
     -g replication-device-group -d destination-device device
/usr/cluster/bin/cldevice export [-o {- | configfile}]
     [-n node[,…]] [+ | device ...]
/usr/cluster/bin/cldevice list [-n node[,…]] [+ | device ...]
/usr/cluster/bin/cldevice monitor [-i {- | clconfigfile}]
     [-n node[,…]] {+ | disk-device ...}
/usr/cluster/bin/cldevice populate
/usr/cluster/bin/cldevice refresh [-n node[,…]] [+]
/usr/cluster/bin/cldevice rename -d destination-device device
/usr/cluster/bin/cldevice repair [-n node[,…]] {+ | device ...}
/usr/cluster/bin/cldevice replicate -t replication-type [-S source-node]
     -D destination-node [+]
/usr/cluster/bin/cldevice set
     -p default_fencing={global | pathcount | scsi3 | nofencing | nofencing-noscrub}
     [-n node[,…]] device ...
/usr/cluster/bin/cldevice show [-n node[,…]] [+ | device ...]
/usr/cluster/bin/cldevice status [-s state] [-n node[,…]] [+ | [disk-device]]
/usr/cluster/bin/cldevice unmonitor [-i {- | clconfigfile}]
     [-n node[,…]] {+ | disk-device ...}

Description

The cldevice command manages devices in the Oracle Solaris Cluster environment. Use this command to administer the Oracle Solaris Cluster device identifier (DID) pseudo device driver and to monitor disk device paths.

The cldev command is the short form of the cldevice command. You can use either form of the command.

With the exception of the list and show subcommands, you must run the cldevice command from a cluster node that is online and in cluster mode.

The general form of this command is as follows:

cldevice [subcommand] [options] [operands]

You can omit subcommand only if options specifies the -? option or the -V option.
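As an illustration of these forms, the following invocations use the -? and -V options as described in the OPTIONS section:

```shell
# Print only the version of the command; no other processing is performed.
/usr/cluster/bin/cldevice -V

# Print the list of available subcommands.
/usr/cluster/bin/cldevice -?

# Print the usage options for a single subcommand.
/usr/cluster/bin/cldevice list -?
```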

Each option of this command has a long form and a short form. Both forms of each option are given with the description of the option in the OPTIONS section of this man page.

See the Intro(1CL) man page for more information.

You can use this command only in the global zone.

Subcommands

The following subcommands are supported:

check

Performs a consistency check to compare the kernel representation of the devices against the physical devices. If a device fails the consistency check, an error message is displayed and the check continues until all devices are checked.

By default, this subcommand affects only the current node. Use the -n option to perform the check operation for devices that are attached to another node.

Users other than superuser require solaris.cluster.read RBAC authorization to use this subcommand.

clear

Removes all DID references to underlying devices that are no longer attached to the current node.

By default, this subcommand affects only the current node. Use the -n option to specify another cluster node on which to perform the clear operation.

Users other than superuser require solaris.cluster.modify RBAC authorization to use this subcommand.

combine

Combines the specified device with the specified destination device.

The combine subcommand combines the path for the source device with the path for the destination device. This combined path results in a single DID instance number, which is the same as the DID instance number of the destination device. Use this subcommand to combine DID instances with SRDF.

You can use the combine subcommand to manually configure DID devices for storage-based replication. However, for TrueCopy replicated devices, use the replicate subcommand to automatically configure replicated devices.

Users other than superuser require solaris.cluster.modify RBAC authorization to use this subcommand.

export

Exports configuration information for a cluster device.

If you specify a file name with the -o option, the configuration information is written to that new file. If you do not supply the -o option, the configuration information is written to standard output.
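As a sketch of both output modes (the file name /tmp/device-config.xml is a hypothetical example):

```shell
# Write configuration information for all devices to a new file.
# The command exits with an error if the file already exists.
/usr/cluster/bin/cldevice export -o /tmp/device-config.xml +

# Without -o, the configuration information is written to standard output.
/usr/cluster/bin/cldevice export +
```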

Users other than superuser require solaris.cluster.read RBAC authorization to use this subcommand.

list

Displays all device paths.

If you supply no operand, or if you supply the plus sign (+) operand, the report includes all devices.

Users other than superuser require solaris.cluster.read RBAC authorization to use this subcommand.

monitor

Turns on monitoring for the specified disk paths.

The monitor subcommand works only on disk devices. Tapes or other devices are not affected by this subcommand.

You can use the monitor subcommand to tune the disk-path-monitoring daemon, scdpmd. See the scdpmd.conf(4) man page for more information on the configuration file.

By default, this subcommand turns on monitoring for paths from all nodes.

Use the -i option to specify a cluster configuration file from which to set the monitor property of disk paths. The -i option starts disk-path monitoring on those disk paths that are marked in the specified file as monitored. No change is made for other disk paths. See the clconfiguration(5CL) man page for more information about the cluster configuration file.
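For example, to apply monitor settings recorded in a configuration file (the file name is a hypothetical example; the file must follow the clconfiguration(5CL) format):

```shell
# Turn on monitoring only for the disk paths that are marked as
# monitored in the configuration file; other disk paths are unchanged.
/usr/cluster/bin/cldevice monitor -i /var/cluster/monitor-config.xml +

# Alternatively, supply the configuration through standard input.
cat /var/cluster/monitor-config.xml | /usr/cluster/bin/cldevice monitor -i - +
```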

Users other than superuser require solaris.cluster.modify RBAC authorization to use this subcommand.

populate

Populates the global-devices namespace.

The global-devices namespace is mounted under the /global directory. The namespace consists of a set of logical links to physical devices. Because the /dev/global directory is visible to each node of the cluster, each physical device is visible across the cluster. This visibility means that any disk, tape, or CD-ROM that is added to the global-devices namespace can be accessed from any node in the cluster.

The populate subcommand enables the administrator to attach new global devices to the global-devices namespace without requiring a system reboot. These devices might be tape drives, CD-ROM drives, or disk drives.

You must execute the devfsadm(1M) command before you run the populate subcommand. Alternatively, you can perform a reconfiguration reboot to rebuild the global-devices namespace and to attach new global devices. See the boot(1M) man page for more information about reconfiguration reboots.

You must run the populate subcommand from a node that is a current cluster member.

The populate subcommand performs its work on remote nodes asynchronously. Therefore, command completion on the node from which you issue the command does not signify that the command has completed operation on all cluster nodes.

Users other than superuser require solaris.cluster.modify RBAC authorization to use this subcommand.

refresh

Updates the device configuration information that is based on the current device trees on a cluster node. The command conducts a thorough search of the rdsk and rmt device trees. For each device identifier that was not previously recognized, the command assigns a new DID instance number. Also, a new path is added for each newly recognized device.

By default, this subcommand affects only the current node. Use the -n option with the refresh subcommand to specify the cluster node on which to perform the refresh operation.

Users other than superuser require solaris.cluster.modify RBAC authorization to use this subcommand.

rename

Moves the specified device to a new DID instance number.

The command removes DID device paths that correspond to the DID instance number of the source device and recreates the device path with the specified destination DID instance number. You can use this subcommand to restore a DID instance number that has been accidentally changed.

After you run the rename subcommand on all cluster nodes that are connected to the shared storage, run the devfsadm and cldevice populate commands to update the global-devices namespace with the configuration change.

Users other than superuser require solaris.cluster.modify RBAC authorization to use this subcommand.

repair

Performs a repair procedure on the specified device.

By default, this subcommand affects only the current node. Use the -n option to specify the cluster node on which to perform the repair operation.

If you supply no operand, or if you supply the plus sign (+) operand, the command updates configuration information on all devices that are connected to the current node.

Users other than superuser require solaris.cluster.modify RBAC authorization to use this subcommand.

replicate

Configures DID devices for use with storage-based replication.


Note - The replicate subcommand is not a supported method for combining DID instances with EMC SRDF and can be used only with Hitachi TrueCopy. Use cldevice combine to combine DID instances with SRDF.


The replicate subcommand combines each DID instance number on the source node with its corresponding DID instance number on the destination node. Each pair of replicated devices is merged into a single logical DID device.

By default, the current node is the source node. Use the -S option to specify a different source node.

Users other than superuser require solaris.cluster.modify RBAC authorization to use this subcommand.

set

Modifies the properties of the specified device.

Use the -p option to specify the property to modify.

Users other than superuser require solaris.cluster.modify RBAC authorization to use this subcommand.

show

Displays a configuration report for all specified device paths.

The report shows the paths to devices and whether the paths are monitored or unmonitored.

By default, the subcommand displays configuration information for all devices.

Users other than superuser require solaris.cluster.read RBAC authorization to use this subcommand.

status

Displays the status of all specified disk-device paths.

By default, the subcommand displays the status of all disk paths from all nodes.

The status subcommand works only on disk devices. The report does not include tapes or other devices.

Users other than superuser require solaris.cluster.read RBAC authorization to use this subcommand.

unmonitor

Turns off monitoring for the disk paths that are specified as operands to the command.

By default, the subcommand turns off monitoring for all paths from all nodes.

The unmonitor subcommand works only on disk devices. Tapes or other devices are not affected by this subcommand.

Use the -i option to specify a cluster configuration file from which to turn off monitoring for disk paths. Disk-path monitoring is turned off for those disk paths that are marked in the specified file as unmonitored. No change is made for other disk paths. See the clconfiguration(5CL) man page for more information.

Users other than superuser require solaris.cluster.modify RBAC authorization to use this subcommand.

Options

The following options are supported:

-?
--help

Displays help information.

This option can be used alone or with a subcommand.

  • If you use this option alone, the list of available subcommands is printed.

  • If you use this option with a subcommand, the usage options for that subcommand are printed.

When this option is used, no other processing is performed.

-D destination-node
--destinationnode=destination-node
--destinationnode destination-node

Specifies a destination node on which to replicate devices. You can specify a node either by its node name or by its node ID.

The -D option is only valid with the replicate subcommand.

-d destination-device
--device=destination-device
--device destination-device

Specifies the DID instance number of the destination device for storage-based replication.

Only use a DID instance number with the -d option. Do not use other forms of the DID name or the full UNIX path name to specify the destination device.

The -d option is only valid with the rename and combine subcommands.

-g replication-device-group

Specifies the replication device group. This option is only valid with the combine subcommand.

-i {- | clconfigfile}
--input={- | clconfigfile}
--input {- | clconfigfile}

Specifies configuration information that is to be used for monitoring or unmonitoring disk paths. This information must conform to the format that is defined in the clconfiguration(5CL) man page. This information can be contained in a file or supplied through standard input. To specify standard input, specify the minus sign (-) instead of a file name.

The -i option is only valid with the monitor and unmonitor subcommands.

Options that you specify in the command override any options that are set in the configuration file. If configuration parameters are missing in the cluster configuration file, you must specify these parameters on the command line.

-n node[,…]
--node=node[,…]
--node node[,…]

Specifies that the subcommand includes only disk paths from nodes that are specified with the -n option. You can specify a node either by its node name or by its node ID.

-o {- | configfile}
--output={- | configfile}
--output {- | configfile}

Writes disk-path configuration information in the format that is defined by the clconfiguration(5CL) man page. This information can be written to a file or to standard output.

The -o option is only valid with the export subcommand.

If you supply a file name as the argument to this option, the command creates a new file and the configuration is printed to that file. If a file of the same name already exists, the command exits with an error. No change is made to the existing file.

If you supply the minus sign (-) as the argument to this option, the command displays the configuration information to standard output. All other standard output for the command is suppressed.

-p default_fencing={global | pathcount | scsi3 | nofencing | nofencing-noscrub}
--property=default_fencing={global|pathcount|scsi3|nofencing|nofencing-noscrub}
--property default_fencing={global|pathcount|scsi3|nofencing|nofencing-noscrub}

Specifies the property to modify.

Use this option with the set subcommand to modify the following property:

default_fencing

Overrides the global default fencing algorithm for the specified device. You cannot change the default fencing algorithm on a device that is configured as a quorum device.

You can set the default fencing algorithm for a device to one of the following values:

global

Uses the global default fencing setting. See the cluster(1CL) man page for information about setting the global default for fencing.

nofencing

After checking for and removing any Persistent Group Reservation (PGR) keys, turns off fencing for the specified device or devices.


Caution

Caution - If you are using a disk that does not support SCSI, such as a Serial Advanced Technology Attachment (SATA) disk, turn off fencing.


nofencing-noscrub

Turns off fencing for the specified device or devices without first checking for or removing PGR keys.


Caution

Caution - If you are using a disk that does not support SCSI, such as a Serial Advanced Technology Attachment (SATA) disk, turn off fencing.


pathcount

Determines the fencing protocol by the number of DID paths that are attached to the shared device.

  • For a device that uses fewer than three DID paths, the command sets the SCSI-2 protocol.

  • For a device that uses three or more DID paths, the command sets the SCSI-3 protocol.

scsi3

Sets the SCSI-3 protocol. If the device does not support the SCSI-3 protocol, the fencing protocol setting remains unchanged.

-S source-node
--sourcenode=source-node
--sourcenode source-node

Specifies the source node from which devices are replicated to a destination node. You can specify a node either by its node name or by its node ID.

The -S option is only valid with the replicate subcommand.

-s state[,…]
--state=state[,…]
--state state[,…]

Displays status information for disk paths that are in the specified state.

The -s option is only valid with the status subcommand. When you supply the -s option, the status output is restricted to disk paths that are in the specified state. The following are the possible values of the state:

  • fail

  • ok

  • unknown

  • unmonitored

-t replication-type

Specifies the replication device type. The -t option is only valid with the combine and replicate subcommands.

-V
--version

Displays the version of the command.

Do not specify this option with subcommands, operands, or other options. The subcommand, operands, or other options are ignored. The -V option only displays the version of the command. No other operations are performed.

-v
--verbose

Displays verbose information to standard output.

You can specify this option with any form of this command.

Operands

The following operands are supported:

device

Specifies the name of a device. The device can be, but is not limited to, disks, tapes, and CD-ROMs.

If the subcommand accepts more than one device, you can use the plus sign (+) to specify all devices.

All subcommands of the cldevice command except the repair subcommand accept device paths as operands. The repair subcommand accepts only device names as operands. The device name can be either the full global path name, the device name, or the DID instance number. Examples of these forms of a device name are /dev/did/dsk/d3, d3, and 3, respectively. See the did(7) man page for more information.

The device name can also be the full UNIX path name, such as /dev/rdsk/c0t0d0s0.
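For example, the following commands all specify the same device to the repair subcommand, using each accepted form of the device name:

```shell
# Full global path name
/usr/cluster/bin/cldevice repair /dev/did/dsk/d3

# DID device name
/usr/cluster/bin/cldevice repair d3

# DID instance number
/usr/cluster/bin/cldevice repair 3
```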

A specified device can have multiple paths that connect the device to nodes. If the -n option is not used, all paths from all nodes to the specified device are selected.

The monitor, unmonitor, and status subcommands only accept disk devices as operands.

Exit Status

The complete set of exit status codes for all commands in this command set is listed on the Intro(1CL) man page.

If the command is successful for all specified operands, it returns zero (CL_NOERR). If an error occurs for an operand, the command processes the next operand in the operand list. The returned exit code always reflects the error that occurred first.

This command returns the following exit status codes:

0 CL_NOERR

No error

1 CL_ENOMEM

Not enough swap space

3 CL_EINVAL

Invalid argument

6 CL_EACCESS

Permission denied

9 CL_ESTATE

Object is in wrong state

15 CL_EPROP

Invalid property

35 CL_EIO

I/O error

36 CL_ENOENT

No such object

37 CL_EOP

Operation not allowed
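
Because the command always returns the first error that occurred, a script can branch on the exit status. A minimal sketch (the node name is reused from the examples in this man page):

```shell
#!/bin/sh
# Run a consistency check and act on the exit status.
# 0 (CL_NOERR) indicates success; any other code is an error
# code defined in the Intro(1CL) man page.
if /usr/cluster/bin/cldevice check -n phys-schost-1; then
    echo "Device check passed."
else
    rc=$?
    echo "Device check failed with exit code $rc." >&2
    exit $rc
fi
```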

Examples

Example 1 Monitoring All Disk Paths in the Cluster

The following example shows how to enable the monitoring of all disk paths that are in the cluster infrastructure.

# cldevice monitor +

Example 2 Monitoring a Single Disk Path

The following example shows how to enable the monitoring of the path to the disk /dev/did/dsk/d3 on all nodes where this path is valid.

# cldevice monitor /dev/did/dsk/d3

Example 3 Monitoring a Disk Path on a Single Node

The following examples show how to enable the monitoring of the path to the disks /dev/did/dsk/d4 and /dev/did/dsk/d5 on the node phys-schost-2.

The first example uses the -n option to limit monitoring to disk paths that are connected to the node phys-schost-2, then further limits monitoring to the specified devices d4 and d5.

# cldevice monitor -n phys-schost-2 d4 d5

The second example specifies the disk paths to monitor by their node:device names, phys-schost-2:d4 and phys-schost-2:d5.

# cldevice monitor phys-schost-2:d4 phys-schost-2:d5

Example 4 Printing All Disk Paths and Their Status

The following example shows how to print all disk paths in the cluster and their status.

# cldevice status
Device Instance             Node                Status
---------------             ----                ------
/dev/did/rdsk/d1            phys-schost-2       Unmonitored

/dev/did/rdsk/d2            phys-schost-2       Unmonitored

/dev/did/rdsk/d3            phys-schost-1       Ok
                            phys-schost-2       Ok

/dev/did/rdsk/d4            phys-schost-1       Ok
                            phys-schost-2       Ok

/dev/did/rdsk/d5            phys-schost-1       Unmonitored

Example 5 Printing All Disk Paths That Have the Status fail

The following example shows how to print all disk paths that are monitored on the node phys-schost-1 and that have the status fail.

# cldevice status -s fail -n phys-schost-1
Device Instance             Node                Status
---------------             ----                ------
/dev/did/rdsk/d3            phys-schost-1       Fail

/dev/did/rdsk/d4            phys-schost-1       Fail

Example 6 Printing the Status of All Disk Paths From a Single Node

The following example shows how to print the path and the status for all disk paths that are online on the node phys-schost-1.

# cldevice status -n phys-schost-1
Device Instance             Node                Status
---------------             ----                ------
/dev/did/rdsk/d3            phys-schost-1       Ok

/dev/did/rdsk/d4            phys-schost-1       Ok

/dev/did/rdsk/d5            phys-schost-1       Unmonitored

Example 7 Adding New Devices to the Device Configuration Database

The following example shows how to update the CCR database with the current device configurations for the node phys-schost-2, from which the command is issued. This command does not update the database for devices that are attached to any other node in the cluster.

phys-schost-2# cldevice refresh

Example 8 Combining Devices Under a Single DID

The following example shows how to combine the path for one device with the path for another device. This combined path results in a single DID instance number, which is the same as the DID instance number of the destination device.

# cldevice combine -t srdf -g devgrp1 -d 20 30

Example 9 Listing the Device Paths For a Device Instance

The following example shows how to list the paths for all devices that correspond to instance 3 of the DID driver.

# cldevice list 3
d3 

Example 10 Listing all Device Paths in the Cluster

The following example shows how to list all device paths for all devices that are connected to any cluster node.

# cldevice list -v
DID Device          Full Device Path
----------          ----------------
d1                  phys-schost-1:/dev/rdsk/c0t0d0
d2                  phys-schost-1:/dev/rdsk/c0t1d0
d3                  phys-schost-1:/dev/rdsk/c1t8d0
d3                  phys-schost-2:/dev/rdsk/c1t8d0
d4                  phys-schost-1:/dev/rdsk/c1t9d0
d4                  phys-schost-2:/dev/rdsk/c1t9d0
d5                  phys-schost-1:/dev/rdsk/c1t10d0
d5                  phys-schost-2:/dev/rdsk/c1t10d0
d6                  phys-schost-1:/dev/rdsk/c1t11d0
d6                  phys-schost-2:/dev/rdsk/c1t11d0
d7                  phys-schost-2:/dev/rdsk/c0t0d0
d8                  phys-schost-2:/dev/rdsk/c0t1d0

Example 11 Displaying Configuration Information About a Device

The following example shows how to display configuration information about device c4t8d0.

# cldevice show /dev/rdsk/c4t8d0

=== DID Device Instances ===

DID Device Name:                                /dev/did/rdsk/d3
  Full Device Path:                               phys-schost1:/dev/rdsk/c4t8d0
  Full Device Path:                               phys-schost2:/dev/rdsk/c4t8d0
  Replication:                                    none
  default_fencing:                                nofencing

Example 12 Configuring Devices for Use With Storage-Based Replication

The following example configures DID devices for use with storage-based replication. The command is run from the source node, which is configured with replicated devices. Each DID instance number on the source node is combined with its corresponding DID instance number on the destination node, phys-schost-1.

# cldevice replicate -t truecopy -D phys-schost-1

Example 13 Setting the SCSI Protocol for a Single Device

The following example sets the fencing protocol for device 11, specified by its DID instance number, to SCSI-3. This device is not a configured quorum device.

# cldevice set -p default_fencing=scsi3 11

Example 14 Turning Fencing Off for a Device Without First Checking PGR Keys

The following example turns fencing off for the disk /dev/did/dsk/d5. The command turns fencing off for the device without first checking for and removing any Persistent Group Reservation (PGR) keys.

# cldevice set -p default_fencing=nofencing-noscrub d5

If you are using a disk that does not support SCSI, such as a Serial Advanced Technology Attachment (SATA) disk, turn off SCSI fencing.

Example 15 Turning Fencing Off for All Devices in Two-Node Cluster phys-schost

The following example turns fencing off for all disks in the two-node cluster phys-schost.

# cluster set -p global_fencing=nofencing
# cldevice set -p default_fencing=global -n phys-schost-1,phys-schost-2 d5

For more information about the cluster command and the global_fencing property, see the cluster(1CL) man page.

If you are using a disk that does not support SCSI, such as a Serial Advanced Technology Attachment (SATA) disk, turn off SCSI fencing.

Example 16 Performing a Repair Procedure By Using the Device Name

The following example shows how to perform a repair procedure on the device identifier that was associated with the device /dev/dsk/c1t4d0. This device was replaced with a new device, with which a new device identifier is now associated. The repair subcommand records in the database that the instance number now corresponds to the new device identifier.

# cldevice repair c1t4d0

Example 17 Performing a Repair Procedure By Using the Instance Number

The following example shows how to provide an alternate method to perform a repair procedure on a device identifier. This example specifies the instance number that is associated with the device path to the replaced device. The instance number for the replaced device is 2.

# cldevice repair 2

Example 18 Populating the Global-Devices Namespace

The following example shows how to populate the global-devices namespace after adding new global devices or moving a DID device to a new instance number.

# devfsadm
# cldevice populate

Example 19 Moving a DID Device

The following example moves the source DID instance, 15, to a new DID instance, 10, then updates the global-devices namespace with the configuration change.

# cldevice rename -d 10 15
# devfsadm
# cldevice populate

Attributes

See attributes(5) for descriptions of the following attributes:

ATTRIBUTE TYPE          ATTRIBUTE VALUE
Availability            SUNWsczu
Interface Stability     Evolving

See Also

Intro(1CL), cluster(1CL), boot(1M), devfsadm(1M), clconfiguration(5CL), rbac(5), did(7)

Notes

The superuser can run all forms of this command.

Any user can run this command with the -? (help) and -V (version) options.

To run this command with other subcommands, users other than superuser require RBAC authorizations. See the following table.

Subcommand      RBAC Authorization
check           solaris.cluster.read
clear           solaris.cluster.modify
combine         solaris.cluster.modify
export          solaris.cluster.read
list            solaris.cluster.read
monitor         solaris.cluster.modify
populate        solaris.cluster.modify
refresh         solaris.cluster.modify
rename          solaris.cluster.modify
repair          solaris.cluster.modify
replicate       solaris.cluster.modify
set             solaris.cluster.modify
show            solaris.cluster.read
status          solaris.cluster.read
unmonitor       solaris.cluster.modify

Disk-path status changes are logged by using the syslogd command.

Each multiported tape drive or CD-ROM drive appears in the namespace once per physical connection.