Oracle® Solaris Cluster Reference Manual

Updated: July 2014, E39662-01

clnode(1CL)

Name

clnode - manage Oracle Solaris Cluster nodes

Synopsis

/usr/cluster/bin/clnode -V
/usr/cluster/bin/clnode [subcommand] -?
/usr/cluster/bin/clnode subcommand [options] -v [node …]
/usr/cluster/bin/clnode add -n sponsornode [-i {- | clconfigfile}]
     [-c clustername] [-G globaldevfs] [-e endpoint,endpoint] node
/usr/cluster/bin/clnode create-loadlimit -p limitname=value
     [-p softlimit=value] [-p hardlimit=value] {+ | node[:zone] …}
/usr/cluster/bin/clnode clear [-F] node …
/usr/cluster/bin/clnode delete-loadlimit -p limitname=value
     {+ | node[:zone] …}
/usr/cluster/bin/clnode evacuate [-T seconds] {+ | node …}
/usr/cluster/bin/clnode export [-o {- | clconfigfile}] [+ | node …]
/usr/cluster/bin/clnode list [-Z {zoneclustername | global | all}]
     [+ | node …]
/usr/cluster/bin/clnode rename -n newnodename [node]
/usr/cluster/bin/clnode remove [-n sponsornode] [-G globaldevfs]
     [-F] [node]
/usr/cluster/bin/clnode set [-p name=value] […] {+ | node …}
/usr/cluster/bin/clnode set-loadlimit -p limitname=value
     [-p softlimit=value] [-p hardlimit=value] {+ | node[:zone] …}
/usr/cluster/bin/clnode show [-p name[,…]] [-Z {zoneclustername |
     global | all}] [+ | node …]
/usr/cluster/bin/clnode show-rev [node]
/usr/cluster/bin/clnode status [-m] [-Z {zoneclustername | global |
     all}] [+ | node …]

Description

This command does the following:

  • Adds a node to the cluster

  • Removes a node from the cluster

  • Attempts to switch over all resource groups and device groups from a node to a new set of primary nodes

  • Modifies the properties of a node

  • Manages load limits on nodes

  • Reports or exports the status and configuration of one or more nodes

Most of the subcommands for the clnode command operate in cluster mode. You can run most of these subcommands from any node in the cluster. However, the add and remove subcommands are exceptions. You must run these subcommands in noncluster mode.

When you run the add and remove subcommands, you must run them on the node that you are adding or removing. The clnode add command also initializes the node itself for joining the cluster. The clnode remove command also performs cleanup operations on the removed node.

You can omit subcommand only if the option that you specify is –? or –V.

Each option has a long and a short form. Both forms of each option are given with the description of the option in OPTIONS.

The clnode command does not have a short form.

You can use some forms of this command in a zone cluster. For more information about valid uses of this command in clusters, see the descriptions of the individual subcommands.

SUBCOMMANDS

The following subcommands are supported:

add

Configures and adds a node to the cluster.

You can use this subcommand only in the global zone. You can use this subcommand only in the global cluster.

You must run this subcommand in noncluster mode.

To configure and add the node, you must use the –n sponsornode option. This option specifies an existing active node as the sponsor node. The sponsor node is always required when you configure nodes in the cluster.

If you do not specify –c clustername, this subcommand uses the name of the first node that you add as the new cluster name.

The operand node is optional. However, if you specify an operand, it must be the host name of the node on which you run the subcommand.


Note -  Run the pkg install command to install the Oracle Solaris Cluster software. Then run the scinstall utility to create a new cluster or add a node to an existing cluster. See the Oracle Solaris Cluster Software Installation Guide for instructions.

Users other than superuser require solaris.cluster.modify role-based access control (RBAC) authorization to use this subcommand. See the rbac(5) man page.

clear

Cleans up or clears any remaining information about cluster nodes after you run the remove subcommand.

You can use this subcommand only in the global zone. You can use this subcommand only in the global cluster.

Users other than superuser require solaris.cluster.modify RBAC authorization to use this subcommand. See the rbac(5) man page.

create-loadlimit

Adds a load limit on a node.

You can use this subcommand in the global zone or in a zone cluster.

See the –p option in OPTIONS.

Users other than superuser require solaris.cluster.modify RBAC authorization to use this subcommand. See the rbac(5) man page.

delete-loadlimit

Removes an existing load limit on a node.

You can use this subcommand in the global zone or in a zone cluster.

See the –p option in OPTIONS.

Users other than superuser require solaris.cluster.modify RBAC authorization to use this subcommand. See the rbac(5) man page.

evacuate

Attempts to switch over all resource groups and device groups from the specified nodes to a new set of primary nodes.

You can use this subcommand in the global zone or in a zone cluster.

The system attempts to select new primary nodes based on configured preferences for each group. Evacuated resource groups are not necessarily all re-mastered by the same primary node. If one or more resource groups or device groups cannot be evacuated from the specified nodes, this subcommand fails. If this subcommand fails, it issues an error message and exits with a nonzero exit code. If this subcommand cannot change primary ownership of a device group to other nodes, the original nodes retain primary ownership of that device group. If the RGM is unable to start an evacuated resource group on a new primary node, the evacuated resource group might end up offline.

You can use the –T option with this subcommand to specify the number of seconds to keep resource groups from switching back. If you do not specify a value, 60 seconds is used by default.

Users other than superuser require solaris.cluster.admin RBAC authorization to use this subcommand. See the rbac(5) man page.

export

Exports the node configuration information to a file or to the standard output (stdout).

You can use this subcommand only in the global zone. You can use this subcommand only in the global cluster.

If you specify the –o option and the name of a file, the configuration information is written to that file.

If you do not provide the –o option and a file name, the output is written to the standard output.

This subcommand does not modify cluster configuration data.

Users other than superuser require solaris.cluster.read RBAC authorization to use this subcommand. See the rbac(5) man page.

list

Displays the names of nodes that are configured in the cluster.

If you specify the –Z option with this subcommand, it lists the names of nodes in the particular cluster or clusters that you specify, as follows:

  • All global-cluster nodes and zone-cluster nodes

  • All global-cluster nodes only

  • Only the nodes in the zone cluster whose name you specify

You can use this subcommand in the global cluster or in a zone cluster.

If you do not specify the node operand, or if you specify the plus sign operand (+), this subcommand displays all node members.

You must run this subcommand in cluster mode.

Users other than superuser require solaris.cluster.read RBAC authorization to use this subcommand. See the rbac(5) man page.

remove

Removes a node from the cluster.

You can use this subcommand only in the global zone. You can use this subcommand only in the global cluster.

You must run this subcommand in noncluster mode.

To remove a node from a cluster, observe the following guidelines. If you do not observe these guidelines, removing the node might compromise quorum in the cluster.

  • Unconfigure the node to be removed from any quorum devices, unless you also specify the –F option.

  • Ensure that the node to be removed is not an active cluster member.

  • Do not remove a node from a three-node cluster unless at least one shared quorum device is configured.

The subcommand attempts to remove a subset of references to the node from the cluster configuration database. If you specify the –F option, this subcommand attempts to remove all references to the node from the cluster configuration database.


Note -  You must run the scinstall -r command to remove cluster software from the node. See the Oracle Solaris Cluster Software Installation Guide for more information.

Users other than superuser require solaris.cluster.modify RBAC authorization to use this subcommand. See the rbac(5) man page.

rename

Renames a node to a new nodename.

You can use this subcommand only in the global zone. You must run this subcommand in noncluster mode.


Note -  You must run this command on the same node where the Oracle Solaris hostname was changed.

To rename the node, you must use the –n newnodename option. The Oracle Solaris host name of the active node must already have been changed from oldnodename to newnodename. All nodes in the cluster must be in noncluster mode for this command to run successfully.

The node operand is optional. If you specify it, it must be the host name of the node on which you run the subcommand.


Note -  Before you can rename a node, you must first run the Oracle Solaris hostname change procedure to rename the cluster nodes in the cluster. For instructions, see How to Change a System’s Identity in Managing System Information, Processes, and Performance in Oracle Solaris 11.2.

Users other than superuser require solaris.cluster.modify RBAC authorization to use this subcommand. See the rbac(5) man page.

set

Modifies the properties that are associated with the node that you specify.

You can use this subcommand only in the global zone. You can use this subcommand only in the global cluster.

See the –p option in OPTIONS.

Users other than superuser require solaris.cluster.modify RBAC authorization to use this subcommand. See the rbac(5) man page.

set-loadlimit

Modifies an existing load limit on a node.

You can use this subcommand in the global zone or in a zone cluster.

See the –p option in OPTIONS.

Users other than superuser require solaris.cluster.modify RBAC authorization to use this subcommand. See the rbac(5) man page.

show

Displays the configuration of, or information about the properties on, the specified node or nodes.

If you specify the –Z option with this subcommand, it displays configuration or property information for the node or nodes in the particular cluster or clusters that you specify, as follows:

  • All global-cluster nodes and zone-cluster nodes

  • All global-cluster nodes only

  • Only the nodes in the zone cluster whose name you specify

You can use this subcommand only in the global zone. You can use this subcommand in the global cluster or in a zone cluster.

If you do not specify operands or if you specify the plus sign (+), this subcommand displays information for all cluster nodes.

Users other than superuser require solaris.cluster.read RBAC authorization to use this subcommand. See the rbac(5) man page.

show-rev

Displays the names of and release information about the Oracle Solaris Cluster packages that are installed on a node.

You can use this subcommand only in the global cluster.

You can run this subcommand in noncluster mode and cluster mode. If you run it in noncluster mode, you can only specify the name of and get information about the node on which you run it. If you run it in cluster mode, you can specify and get information about any node in the cluster.

When you use this subcommand with –v, this subcommand displays the names of packages, their versions, and patches that have been applied to those packages.

Users other than superuser require solaris.cluster.read RBAC authorization to use this subcommand. See the rbac(5) man page.

status

Displays the status of the node or nodes that you specify, or of Internet Protocol (IP) network multipathing (IPMP) groups.

You can use this subcommand in the global cluster or in a zone cluster.

If you do not specify operands or if you specify the plus sign (+), this subcommand displays the status of all cluster nodes. The status of a node can be Online or Offline.

If you specify the –m option with this subcommand, it displays only Oracle Solaris IPMP groups.

If you specify the verbose option –v with this subcommand, it displays both the status of cluster nodes and Oracle Solaris IPMP groups.

If you specify the –Z option with this subcommand, it displays status information for the node or nodes in the particular cluster or clusters that you specify, as follows:

  • All global-cluster nodes and zone-cluster nodes

  • All global-cluster nodes only

  • Only the nodes in the zone cluster whose name you specify

Users other than superuser require solaris.cluster.read RBAC authorization to use this subcommand. See the rbac(5) man page.

Options


Note -  Both the short and long forms of each option are shown in this section.

The following options are supported:

–?
--help

Displays help information.

You can specify this option with or without a subcommand.

If you do not specify a subcommand, the list of all available subcommands is displayed.

If you specify a subcommand, the usage for that subcommand is displayed.

If you specify this option and other options, the other options are ignored.

–c clustername
--clustername=clustername
--clustername clustername

Specifies the name of the cluster to which you want to add a node.

Use this option only with the add subcommand.

If you specify this option, the clustername that you specify must match the name of an existing cluster. Otherwise, an error occurs.

–e endpoint,endpoint
--endpoint=endpoint,endpoint
--endpoint endpoint,endpoint

Specifies transport connections.

Use this option only with the add subcommand. You specify this option to establish the cluster transport topology. You establish the topology by configuring the cables that connect the adapters and the switches. You can specify an adapter or a switch as the endpoint. To indicate a cable, you specify a comma-separated pair of endpoints. The cable establishes a connection from a cluster transport adapter on the current node to one of the following:

  • A port on a cluster transport switch, also called a transport junction.

  • An adapter on another node that is already included in the cluster.

If you do not specify the –e option, the add subcommand attempts to configure a default cable. However, if you configure more than one transport adapter or switch within one instance of the clnode command, clnode cannot construct a default. The default is to configure a cable from the singly configured transport adapter to the singly configured, or default, transport switch.

Every time you specify the –e option, you must specify two endpoints that are separated by a comma. Each pair of endpoints defines a cable. Each individual endpoint is specified in one of the following ways:

  • Adapter endpoint:

    node:adapter

  • Switch endpoint:

    switch[@port]

To specify a tagged-VLAN adapter, use the tagged-VLAN adapter name that is derived from the physical device name and the VLAN instance number. The VLAN instance number is the VLAN ID multiplied by 1000 plus the original physical-unit number. For example, a VLAN ID of 11 on the physical device net2 translates to the tagged-VLAN adapter name net11002.
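This arithmetic can be checked with simple shell arithmetic; the VLAN ID and physical device below are only illustrative example values:

```shell
# Derive a tagged-VLAN adapter name: VLAN ID * 1000 + physical unit number.
# Example values: VLAN ID 11 on physical device net2.
vlan_id=11
phys_unit=2
instance=$((vlan_id * 1000 + phys_unit))
adapter="net${instance}"
echo "$adapter"   # net11002
```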

If you do not specify a port component for a switch endpoint, a default port is assigned.

–F
--force

Forcefully removes or clears the specified node without verifying that global mounts remain on that node.

Use this option only with the clear or the remove subcommand.

–G {lofi | special | mount-point}
--globaldevfs={lofi | special | mount-point}
--globaldevfs {lofi | special | mount-point}

Specifies a lofi device, a raw special disk device, or a dedicated file system for the global-devices mount point.

Use this option only with the add or remove subcommand.

Each cluster node must have a local file system that is mounted globally on /global/.devices/node@nodeID before the node can successfully participate as a cluster member. However, the node ID is unknown until the clnode command runs. By default, the clnode add command looks for an empty file system that is mounted on /globaldevices or on the mount point that is specified with the –G option. If such a file system is provided, the clnode add command makes the necessary changes to the /etc/vfstab file. The file system that you specify is remounted at /global/.devices/node@nodeID. The clnode command attempts to add the entry to the vfstab file when the command cannot find a node ID mount. See the vfstab(4) man page.

If /global/.devices/node@nodeID is not mounted and an empty /globaldevices file system is not provided, the command fails.

If –G lofi is specified, a /.globaldevices file is created. A lofi device is associated with that file, and the global-devices file system is created on the lofi device. No /global/.devices/node@nodeID entry is added to the /etc/vfstab file. For more information about lofi devices, see the lofi(7D) man page.

If a raw special disk device name is specified and /global/.devices/node@nodeID is not mounted, a file system is created on the device by using the newfs command. It is an error to supply the name of a device with an already-mounted file system.

As a guideline, a dedicated file system must be at least 512 Mbytes in size. If this partition or file system is not available or is not large enough, you might need to reinstall the Oracle Solaris OS.

For a namespace that is created on a lofi device, 100 Mbytes of free space is needed in the root file system.

When used with the remove subcommand, if the global-devices namespace is mounted on a dedicated partition, this option specifies the new mount point name to use to restore the former /global/.devices mount point. If you do not specify the –G option and the global-devices namespace is mounted on a dedicated partition, the mount point is renamed /globaldevices by default.

–i {- | clconfigfile}
--input={- | clconfigfile}
--input {- | clconfigfile}

Reads node configuration information from a file or from the standard input (stdin). The format of the configuration information is described in the clconfiguration(5CL) man page.

If you specify a file name with this option, this option reads the node configuration information in the file. If you specify - with this option, the configuration information is read from the standard input (stdin).

–m

Specifies IPMP groups. Use with the status subcommand to display only the status of IPMP groups.

–n newnodename
--newnodename=newnodename
--newnodename newnodename

Specifies the new node name.

This option can be used only with the rename subcommand.

You can specify a new node name for the current node. Before you rename a node to newnodename by using the rename subcommand, the Oracle Solaris host name of the node must already have been changed to newnodename.

–n sponsornode
--sponsornode=sponsornode
--sponsornode sponsornode

Specifies the name of the sponsor node.

You can specify a name or a node identifier for sponsornode. When you add a node to the cluster by using the add subcommand, the sponsor node is the first active node that you add to the cluster. From that point, that node remains the sponsor node for that cluster. When you remove a node by using the remove subcommand, you can specify any active node other than the node to be removed as the sponsor node.

By default, whenever you specify sponsornode with a subcommand, the cluster to which sponsornode belongs is the cluster that is affected by that subcommand.

–o {- | clconfigfile}
--output={- | clconfigfile}
--output {- | clconfigfile}

Writes node configuration information to a file or to the standard output (stdout). The format of the configuration information is described in the clconfiguration(5CL) man page.

If you specify a file name with this option, this option creates a new file. Configuration information is then placed in that file. If you specify - with this option, the configuration information is sent to the standard output (stdout). All other standard output for the command is suppressed.

You can use this option only with the export subcommand.

–p name
--property=name
--property name

Specifies the node properties about which you want to display information with the show subcommand.

For information about the properties that you can add or modify with the set subcommand, see the description of the –p name=value option.

You can specify the following properties with this option:

privatehostname

The private host name is used for IP access of a given node over the private cluster interconnect. By default, when you add a node to a cluster, this option uses the private host name clusternodenodeid-priv.

reboot_on_path_failure

Values to which you can set this property are enabled and disabled.

–p name=value
--property=name=value
--property name=value

Specifies the node properties that you want to add or modify with the set subcommand.

Multiple instances of –p name=value are allowed.

For information about the properties about which you can display information with the show subcommand, see the description of the –p name option.

You can modify the following properties with this option:

defaultpsetmin

Sets the minimum number of CPUs that are available in the default processor set resource.

The default value is 1 and the minimum value is 1. The maximum value is the number of CPUs on the machine (or machines) on which you are setting this property.

globalzoneshares

Sets the number of shares that are assigned to the global zone.

You can specify a value between 1 and 65535, inclusive. To understand this upper limit, see the prctl(1) man page for information about the zone.cpu-shares attribute. The default value for globalzoneshares is 1.

hardlimit

Defines a mandatory upper boundary for resource group load on a node. The total load on the node is never permitted to exceed the hard limit.

The hardlimit property is an unsigned integer. The default value of the hardlimit property is null. A null or empty value indicates that the corresponding limitname is unlimited on the node. If a non-empty value is specified, it must not exceed 10 million.

limitname

The limitname property is a string. The name is associated with two values, a hard load limit and a soft load limit, specified by the hardlimit and softlimit properties, respectively.

For information on how to assign a load factor for each limitname property, see the clresourcegroup(1CL) man page. You can also use the clresourcegroup command to determine priority and preemption mode. For information on how to distribute resource group load across all nodes, see the cluster(1CL) man page.

privatehostname

Is used for IP access of a given node over the private cluster transport. By default, when you add a node to a cluster, this option uses the private host name clusternodenodeid-priv.

Before you modify a private host name, you must disable, on all nodes, all resources or applications that use that private host name. See the example titled “Changing the Private Hostname” in How to Change the Node Private Hostname in Oracle Solaris Cluster System Administration Guide.

Do not store private host names in the hosts database or in any naming services database. See the hosts(4) man page. A special nsswitch command performs all host name lookups for private host names. See the nsswitch.conf(4) man page.

If you do not specify a value, this option uses the default private host name clusternodenodeid-priv.

reboot_on_path_failure

Enables the automatic rebooting of a node when all monitored shared-disk paths fail, provided that the following conditions are met:

  • All monitored shared-disk paths on the node fail.

  • At least one of the disks is accessible from a different node in the cluster. The scdpm daemon uses the private interconnect to check if disks are accessible from a different node in the cluster. If the private interconnect is disabled, the scdpm daemon cannot obtain the status of the disks from another node.

You can use only the set subcommand to modify this property. You can set this property to enabled or to disabled.

Rebooting the node restarts all resource groups and device groups that are mastered on that node on another node.

If all monitored shared-disk paths on a node remain inaccessible after the node automatically reboots, the node does not automatically reboot again. However, if any monitored shared-disk paths become available after the node reboots but then all monitored shared-disk paths again fail, the node automatically reboots again.

When you enable the reboot_on_path_failure property, the states of local-disk paths are not considered when determining if a node reboot is necessary. Only monitored shared disks are affected.

If you set this property to disabled and all monitored shared-disk paths on the node fail, the node does not reboot.

softlimit

Defines an advisory upper boundary for a resource group load on a node. The total load on the node can exceed the soft limit, for example, when there is insufficient cluster capacity to distribute the load. When a soft load limit is exceeded, the condition is flagged in commands or tools that display cluster status.

The softlimit property is an unsigned integer. The default value of the softlimit property is 0. A value of 0 for the soft limit means that no soft limit is imposed; there will be no Softlimit exceeded warnings from status commands. The maximum value for the softlimit property is 10 million. The softlimit property for a specific load limit must be less than or equal to the hardlimit value.
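The interaction of the two limits can be sketched in shell; this is a hypothetical helper, not part of clnode. It mirrors the convention above that a softlimit of 0 imposes no soft limit, and it does not check the hard limit because the RGM never allows the total load to exceed it:

```shell
# Classify a node's total load against its soft load limit.
# A soft limit of 0 means no soft limit is imposed, so the status
# is always OK; the hard limit is enforced by the RGM and is
# therefore never exceeded.
classify_load() {
  load=$1
  soft=$2
  if [ "$soft" -ne 0 ] && [ "$load" -gt "$soft" ]; then
    echo "Softlimit Exceeded"
  else
    echo "OK"
  fi
}

classify_load 23 30   # within the soft limit: OK
classify_load 14 10   # above the soft limit: Softlimit Exceeded
classify_load 99 0    # no soft limit set: OK
```

These example loads and limits match the sample values shown in the status output of Example 7 below.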

–T seconds
--time=seconds
--time seconds

Specifies the number of seconds to keep resource groups from switching back onto a node after you have evacuated resource groups from the node.

You can use this option only with the evacuate subcommand. You must specify an integer value between 0 and 65535 for seconds. If you do not specify a value, 60 seconds is used by default.

Resource groups are prevented from failing over, or automatically being brought online, on the evacuating node for 60 seconds or the specified number of seconds after the evacuation completes.

If, however, you use the switch or online subcommand to switch a resource group online, or the evacuated node reboots, the evacuation timer immediately expires and automatic failovers are again allowed.

–v
--verbose

Displays verbose information on the standard output (stdout).

–V
--version

Displays the version of the command.

If you specify this option with other options, with subcommands, or with operands, they are all ignored. Only the version of the command is displayed. No other processing occurs.

–Z {zoneclustername | global | all}
--zonecluster={zoneclustername | global | all}
--zonecluster {zoneclustername | global | all}

Specifies the cluster or clusters in which the node or nodes about which you want information are located.

If you specify this option, you must also specify one argument from the following list:

zoneclustername

Specifies that information is to be displayed only about nodes in the zone cluster named zoneclustername.

global

Specifies that information about only global-cluster nodes is to be displayed.

all

Specifies that information about all global-cluster and zone-cluster nodes is to be displayed.

Operands

The following operands are supported:

node

The name of the node that you want to manage.

When you use the add subcommand, you specify the host name for node. When you use another subcommand, you specify the node name or node identifier for node.

+

All nodes in the cluster.

Exit Status

The complete set of exit status codes for all commands in this command set is listed on the Intro(1CL) man page.

If the command is successful for all specified operands, it returns zero (CL_NOERR). If an error occurs for an operand, the command processes the next operand in the operand list. The returned exit code always reflects the error that occurred first.

This command returns the following exit status codes:

0 CL_NOERR

No error

The command that you issued completed successfully.

1 CL_ENOMEM

Not enough swap space

A cluster node ran out of swap memory or ran out of other operating system resources.

3 CL_EINVAL

Invalid argument

You typed the command incorrectly, or the syntax of the cluster configuration information that you supplied with the –i option was incorrect.

6 CL_EACCESS

Permission denied

The object that you specified is inaccessible. You might need superuser or RBAC access to issue the command. See the su(1M) and rbac(5) man pages for more information.

15 CL_EPROP

Invalid property

The property or value that you specified with the –p, –y, or –x option does not exist or is not allowed.

35 CL_EIO

I/O error

A physical input/output error has occurred.

36 CL_ENOENT

No such object

The object that you specified cannot be found for one of the following reasons:

  • The object does not exist.

  • A directory in the path to the configuration file that you attempted to create with the –o option does not exist.

  • The configuration file that you attempted to access with the –i option contains errors.

37 CL_EOP

Operation not allowed

You tried to perform an operation on an unsupported configuration, or you performed an unsupported operation.
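When scripting around clnode, the exit codes above can be reported symbolically. The helper below is only an illustrative sketch built from the table above; the function name is hypothetical:

```shell
# Map a clnode exit status to the symbolic name documented above.
cl_exit_name() {
  case "$1" in
    0)  echo "CL_NOERR" ;;
    1)  echo "CL_ENOMEM" ;;
    3)  echo "CL_EINVAL" ;;
    6)  echo "CL_EACCESS" ;;
    15) echo "CL_EPROP" ;;
    35) echo "CL_EIO" ;;
    36) echo "CL_ENOENT" ;;
    37) echo "CL_EOP" ;;
    *)  echo "UNKNOWN" ;;
  esac
}

cl_exit_name 3    # CL_EINVAL
cl_exit_name 36   # CL_ENOENT
```

A script might capture the status immediately after a clnode call, for example: rc=$?; echo "clnode failed: $(cl_exit_name $rc)".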

Examples

Example 1 Adding a Node to a Cluster

The following command configures and adds the node on which you run the command into an existing cluster. By default, this example uses /globaldevices as the global devices mount point. By default, this example also uses clusternode1-priv as the private host name.

This command names the cluster cluster-1 and specifies that the sponsor node is phys-schost-1. This command also specifies that adapter net1 is attached to transport switch switch1. Finally, this command specifies that adapter net2 is attached to transport switch switch2.

# clnode add -c cluster-1 -n phys-schost-1 \
-e phys-schost-2:net1,switch1 -e phys-schost-2:net2,switch2
Example 2 Removing a Node From a Cluster

The following command removes a node from a cluster. This command removes the node on which you run this command. The node is in noncluster mode.

# clnode remove
Example 3 Changing the Private Host Name That Is Associated With a Node

The following command changes the private host name for node phys-schost-1 to the default setting.

# clnode set -p privatehostname= phys-schost-1
Example 4 Changing Private Host Name Settings for All Nodes

The following command changes the private host name settings for all nodes to default values. In this case, you must insert a space between the equal sign (=) and the plus sign (+) to indicate that the + is the plus sign operand.

# clnode set -p privatehostname= +
Example 5 Setting Load Limits on Global-Cluster Nodes and Zone-Cluster Nodes

The following commands set load limits on cluster nodes. The example defines three load limits (mem_load, disk_load, and cpu_load). The mem_load load limit has a soft limit of 11 and a hard limit of 20, while disk_load has no soft limit and cpu_load has no hard limit. The + operand in the first two commands modifies the load limit on all nodes in the global cluster; the third command modifies the cpu_load limit only on the specified zone-cluster nodes.

# clnode set-loadlimit -p limitname=mem_load -p softlimit=11 -p hardlimit=20 +
# clnode set-loadlimit -p limitname=disk_load -p hardlimit=20 +
# clnode set-loadlimit -p limitname=cpu_load -p softlimit=8 node1:zone1 node2:zone2

From the global zone, the following command modifies load limits on a zone-cluster node. The example defines a load limit with a hard limit for the zone-cluster node.

# clnode set-loadlimit -Z zoneclustername -p limitname=zc_disk_load \
-p hardlimit=15 zc-node1
Example 6 Displaying the Status of All Nodes in a Cluster

The following command displays the status of all nodes in a cluster.

# clnode status
=== Cluster Nodes ===

--- Node Status ---

Node Name                                       Status
---------                                       ------
phys-schost-1                                   Online
phys-schost-2                                   Online
Example 7 Displaying the Verbose Status of All Nodes in a Cluster

The following command displays the verbose status of all nodes in a cluster.

# clnode status -v
=== Cluster Nodes ===

--- Node Status ---

Node Name                                                       Status
---------                                                       ------
phys-schost-1                                                   Online
phys-schost-2                                                   Online

--- Node IPMP Group Status ---

Node Name        Group Name        Status        Adapter        Status
---------        ----------        ------        -------        ------
phys-schost-1    sc_ipmp0          Online        net0           Online
phys-schost-2    sc_ipmp0          Online        net0           Online

--- Load Limit Status ---

Node Name      Load Limit Name   Soft Limit/Hard Limit   Load   Status

phys-schost-1  mem_load          30/50                   23     OK
               disk_load         10/15                   14     Softlimit Exceeded
               cpu_load          2/unlimited             1      OK
phys-schost-2  disk_load         90/97                   11     OK
               cpu_load          unlimited/unlimited     0      OK

Example 8 Displaying the Load Limit Status of All Nodes

The following command displays the load limit status of all nodes in a cluster.

# clnode status -l

--- Load Limit Status ---

Node Name      Load Limit Name   Soft Limit/Hard Limit   Load   Status

phys-schost-1  mem_load          30/50                   23     OK
               disk_load         10/15                   14     Softlimit Exceeded
               cpu_load          2/unlimited             1      OK
phys-schost-2  disk_load         90/97                   11     OK
               cpu_load          unlimited/unlimited     0      OK
Example 9 Displaying the Status of All Global-Cluster Nodes and Zone-Cluster Nodes in a Cluster

The following command displays the status of all global-cluster nodes and zone-cluster nodes in a cluster.

# clnode status -Z all

=== Cluster Nodes ===

--- Node Status ---

Node Name                                       Status
---------                                       ------
global:phys-schost-1                            Online
global:phys-schost-2                            Online
global:phys-schost-4                            Online
global:phys-schost-3                            Online


=== Zone Cluster Nodes ===

--- Node Status ---

Node Name                                       Status
---------                                       ------
cz2:phys-schost-1                               Online
cz2:phys-schost-3                               Offline
Example 10 Displaying Configuration Information About All Nodes in a Cluster

The following command displays configuration information about all nodes in a cluster.

# clnode show
=== Cluster Nodes ===

Node Name:                                      phys-schost-1
  Node ID:                                         1
  Enabled:                                         yes
  privatehostname:                                 clusternode1-priv
  reboot_on_path_failure:                          disabled
  globalzoneshares:                                1
  defaultpsetmin:                                  1
  quorum_vote:                                     1
  quorum_defaultvote:                              1
  quorum_resv_key:                                 0x4487349A00000001
  Transport Adapter List:                          net2, net3

Node Name:                                      phys-schost-2
  Node ID:                                         2
  Enabled:                                         yes
  privatehostname:                                 clusternode2-priv
  reboot_on_path_failure:                          disabled
  globalzoneshares:                                1
  defaultpsetmin:                                  1
  quorum_vote:                                     1
  quorum_defaultvote:                              1
  quorum_resv_key:                                 0x4487349A00000002
  Transport Adapter List:                          net2, net3

Example 11 Displaying Configuration Information About a Particular Node in a Cluster

The following command displays configuration information about phys-schost-1 in a cluster.

# clnode show phys-schost-1
=== Cluster Nodes ===                          

Node Name:                                      phys-schost-1
  Node ID:                                         1
  Enabled:                                         yes
  privatehostname:                                 clusternode1-priv
  reboot_on_path_failure:                          disabled
  globalzoneshares:                                1
  defaultpsetmin:                                  1
  quorum_vote:                                     1
  quorum_defaultvote:                              1
  quorum_resv_key:                                 0x4487349A00000001
  Transport Adapter List:                          net2, net3

Attributes

See attributes(5) for descriptions of the following attributes:

ATTRIBUTE TYPE            ATTRIBUTE VALUE
Availability              ha-cluster/system/core
Interface Stability       Evolving

See also

prctl(1), claccess(1CL), clresourcegroup(1CL), cluster(1CL), Intro(1CL), newfs(1M), scinstall(1M), su(1M), hosts(4), nsswitch.conf(4), vfstab(4), attributes(5), rbac(5), clconfiguration(5CL), lofi(7D)

See the example that describes how to change the private hostname in Overview of Administering the Cluster in Oracle Solaris Cluster System Administration Guide.

Notes

The superuser can run all forms of this command.

All users can run this command with the -? (help) or -V (version) option.

To run the clnode command with subcommands, users other than superuser require RBAC authorizations. See the following table.

Subcommand            RBAC Authorization

add                   solaris.cluster.modify
clear                 solaris.cluster.modify
create-loadlimit      solaris.cluster.modify
delete-loadlimit      solaris.cluster.modify
evacuate              solaris.cluster.admin
export                solaris.cluster.read
list                  solaris.cluster.read
remove                solaris.cluster.modify
rename                solaris.cluster.modify
set                   solaris.cluster.modify
set-loadlimit         solaris.cluster.modify
show                  solaris.cluster.read
show-rev              solaris.cluster.read
status                solaris.cluster.read
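For example, a non-superuser account could be granted the read-only subcommands (list, show, show-rev, status, export) by assigning the corresponding authorization with the Solaris usermod command. The account name ops1 below is illustrative, and note that usermod -A replaces the account's existing authorization list rather than appending to it:

# usermod -A solaris.cluster.read ops1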