Oracle Solaris Cluster 4.3 Reference Manual

Updated: September 2015

cluster (1CL)

Name

cluster - manage the global configuration and status of a cluster

Synopsis

/usr/cluster/bin/cluster -V
/usr/cluster/bin/cluster [subcommand] -?
/usr/cluster/bin/cluster subcommand 
     [options] -v [clustername …]
/usr/cluster/bin/cluster check 
     [-F] [-C checkid[,…]|-E checkid[,…]]
     [-e explorerpath[,…]] [-j jarpath[,…]]
     [-k keyword[,…]] [-n node[,…]] [-o outputdir]
     [-s severitylevel] [clustername]
/usr/cluster/bin/cluster create -i {- | clconfigfile} 
     [clustername]
/usr/cluster/bin/cluster export [-o {- | configfile}] 
     [-t objecttype[,…]] [clustername]
/usr/cluster/bin/cluster monitor-heartbeat [-v] [clustername]
/usr/cluster/bin/cluster list [clustername]
/usr/cluster/bin/cluster list-checks [-F] [-K] 
     [-C checkid[,…]|-E checkid[,…]] [-j jar-path[,…]] 
     [-o outputdir] [clustername]
/usr/cluster/bin/cluster list-cmds [clustername]
/usr/cluster/bin/cluster rename -c newclustername [clustername]
/usr/cluster/bin/cluster restore-netprops [clustername]
/usr/cluster/bin/cluster set {-p name=value} [-p name=value] […] 
     [clustername]
/usr/cluster/bin/cluster set-netprops {-p name=value} 
     [-p name=value] […] [clustername]
/usr/cluster/bin/cluster show [-t objecttype[,…]] [clustername]
/usr/cluster/bin/cluster show-netprops [clustername]
/usr/cluster/bin/cluster shutdown [-y] [-g graceperiod] 
     [-m message] [clustername]
/usr/cluster/bin/cluster status [-t objecttype[,…]] [clustername]

Description

The cluster command displays and manages cluster-wide configuration and status information. This command also shuts down a global cluster.

The following cluster subcommands work within a zone cluster:

  • cluster show - Lists the zone cluster, nodes, resource groups, resource types, and resource properties.

  • cluster status - Displays the status of zone cluster components.

  • cluster shutdown - Shuts down the zone cluster in an orderly fashion.

  • cluster list - Displays the name of the zone cluster.

  • cluster list-cmds - Lists the following commands, which are supported inside a zone cluster:

    • clnode

    • clreslogicalhostname

    • clresource

    • clresourcegroup

    • clresourcetype

    • clressharedaddress

    • cluster

Almost all subcommands that you use with the cluster command operate in cluster mode. You can run these subcommands from any node in the cluster. However, the create, set-netprops, and restore-netprops subcommands are an exception. You must run these subcommands in noncluster mode.

You can omit subcommand only if options specifies the –? option or the –V option.

The cluster command does not have a short form.

Each option has a long and a short form. Both forms of each option are given with the description of the option in OPTIONS.

Use this command in the global zone.

Subcommands

The following subcommands are supported:

check

Checks and reports whether the cluster is configured correctly.

You can use this subcommand only in the global zone.

This subcommand has three modes: basic checks, interactive checks, and functional checks.

  • Basic checks are run when neither the -k interactive nor the -k functional keyword is specified. Basic checks read and evaluate certain configuration information to identify possible errors or unmet requirements.

  • Interactive checks are specified by the –k interactive keyword. If neither the –C nor the –E option is specified, all available interactive checks are run.

    Interactive checks are similar to basic checks, but require information from the user that the checks cannot determine. For example, a check might prompt the user to specify the firmware version. Cluster functionality is not interrupted by interactive checks.

  • A functional check is specified by the –k functional -C checkid options. The –k functional keyword requires the –C option with exactly one check ID of a functional check. The –E option is not valid with the –k functional option.

    Functional checks exercise a specific function or behavior of the cluster configuration, such as by triggering a failover or panicking a node. These checks require user input to provide certain cluster configuration information, such as which node to fail over to, and to confirm whether to begin or continue the check.

    Because some functional checks involve interrupting cluster service, do not start a functional check until you have read the detailed description of the check and determined whether to first take the cluster out of production. Use the cluster list-checks -v -C checkID command to display the full description of a functional check.

When issued from an active member of a running cluster, this subcommand runs configuration checks. These checks verify that the cluster meets the minimum requirements to run successfully.

When issued from a node that is not running as an active cluster member, this subcommand runs preinstallation checks on that node. These checks identify vulnerabilities that you should repair to prepare the cluster for installation and to avoid possible loss of availability.

Each configuration check produces a set of reports that are saved in the specified or default output directory. Each report contains a summary that shows the total number of checks that were executed and the number of failures, grouped by severity level.

Each report is produced in both ordinary text and in XML. The DTD for the XML format is available in the /usr/cluster/lib/cfgchk/checkresults.dtd file. The reports are produced in English only.

Users other than superuser require solaris.cluster.read Role-Based Access Control (RBAC) authorization to use this subcommand. See the rbac (5) man page.

create

Creates a new cluster by using configuration information that is stored in a clconfigfile file. The format of this configuration information is described in the clconfiguration(5CL) man page.

You can use this subcommand only in the global zone.

You must run this subcommand in noncluster mode. You must also run this subcommand from a host that is not already configured as part of a cluster. Oracle Solaris Cluster software must already be installed on every node that is going to be a part of the cluster.

If you do not specify a cluster name, the name of the cluster is taken from the clconfigfile file.

Users other than superuser require solaris.cluster.modify role-based access control (RBAC) authorization to use this subcommand. See the rbac (5) man page.

export

Exports the configuration information.

You can use this subcommand only in the global zone.

If you specify a file with the –o option, the configuration information is written to that file. If you do not specify the –o option, the output is written to the standard output (stdout).

The following option limits the information that is exported:

–t objecttype[,…]

Exports configuration information only for components that are of the specified types.

You can export configuration information only for the cluster on which you issue the cluster command. If you specify the name of a cluster other than the one on which you issue the cluster command, this subcommand fails.
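
For example, a command such as the following (the output file name is illustrative only) writes the node and quorum configuration of the current cluster to an XML file:

# cluster export -o /var/tmp/cluster-config.xml -t node,quorum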

Users other than superuser require solaris.cluster.read RBAC authorization to use this subcommand. See the rbac (5) man page.

list

Displays the name of the cluster.

You can use this subcommand in the global zone or in a zone cluster.

Users other than superuser require solaris.cluster.read RBAC authorization to use this subcommand. See the rbac (5) man page.

list-checks

Prints a list with the check ID and description of each available check.

You can use this command only in the global zone.

Check IDs begin with a letter that indicates the type of check.

F

Functional check

I

Interactive check

M

Basic check on multiple nodes

S

Basic check on a single node

The –v option displays details of a check's operation, including a check's keywords. It is important to display the verbose description of a functional check, to determine whether to remove the cluster from production before you run that check.

Users other than superuser require solaris.cluster.read RBAC authorization to use this subcommand. See the rbac (5) man page.

list-cmds

Prints a list of all available Oracle Solaris Cluster commands.

You can use this subcommand in the global zone or in a zone cluster.

Users other than superuser require solaris.cluster.read RBAC authorization to use this subcommand. See the rbac (5) man page.

monitor-heartbeat

Manually re-enables heartbeat timeout monitoring for cluster nodes during Dynamic Reconfiguration (DR).

You can use this subcommand only in the global zone. The monitor-heartbeat subcommand is not supported in an exclusive-IP zone cluster.

When you perform a DR operation on a CPU or memory board, the affected node becomes unresponsive so heartbeat monitoring for that node is suspended on all other nodes. After DR is completed, the heartbeat monitoring of the affected node is automatically re-enabled. If the DR operation does not complete, you might need to manually re-enable the heartbeat monitoring with the monitor-heartbeat subcommand. If the affected node is unable to rejoin the cluster, it is ejected from the cluster membership.

For instructions on re-enabling heartbeat timeout monitoring, see Kernel Cage Dynamic Reconfiguration Recovery in Oracle Solaris Cluster Hardware Administration Manual . For general information about DR, see Dynamic Reconfiguration Support in Oracle Solaris Cluster 4.3 Concepts Guide .
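
For example, if a DR operation does not complete, a command such as the following, run from a global-cluster node, re-enables heartbeat timeout monitoring:

# cluster monitor-heartbeat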

rename

Renames the cluster.

You can use this command only in the global zone.

Use the –c option with this subcommand to specify a new name for the cluster.


Note -  If your cluster is configured as part of an active Oracle Solaris Cluster Geographic Edition partnership, see Renaming a Cluster That Is in a Partnership in Oracle Solaris Cluster 4.3 Geographic Edition System Administration Guide . This section describes how to correctly rename a cluster that is configured as a member of an Oracle Solaris Cluster Geographic Edition partnership.

Users other than superuser require solaris.cluster.modify RBAC authorization to use this subcommand. See the rbac (5) man page.

restore-netprops

Resets the cluster private network settings of the cluster.

You can use this subcommand only in the global zone. You must run this subcommand in noncluster mode.

Use this subcommand only when the set-netprops subcommand fails and the following conditions exist:

  • You are attempting to modify the private network properties.

  • The failure indicates an inconsistent cluster configuration on the nodes. In this situation, you need to run the restore-netprops subcommand.

You must run this subcommand on every node in the cluster. This subcommand repairs the cluster configuration. This subcommand also removes inconsistencies that are caused by the failure of the modification of the IP address range. In case of a failure, any attempts that you make to change the configuration settings are not guaranteed to work.
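
For example, after a failed set-netprops operation, boot each node into noncluster mode and run the following command on every node:

# cluster restore-netprops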

Users other than superuser require solaris.cluster.modify RBAC authorization to use this subcommand. See the rbac (5) man page.

set

Modifies the properties of the cluster.

You can use this subcommand only in the global zone.

Users other than superuser require solaris.cluster.modify RBAC authorization to use this subcommand. See the rbac (5) man page.

set-netprops

Modifies the private network properties.

You can use this subcommand only in the global zone.

You must run this subcommand in noncluster mode, unless you are setting the num_zoneclusters property. To set the num_zoneclusters property, you must run this subcommand only in cluster mode.

Users other than superuser require solaris.cluster.modify RBAC authorization to use this subcommand. See the rbac (5) man page.

show

Displays detailed configuration information about cluster components.

You can use this subcommand only in the global zone.

The following option limits the information that is displayed:

–t objecttype[,…]

Displays configuration information only for components that are of the specified types.

Users other than superuser require solaris.cluster.read RBAC authorization to use this subcommand. See the rbac (5) man page.

show-netprops

Displays information about the private network properties of the cluster.

You can use this subcommand only in the global zone.

Users other than superuser require solaris.cluster.read RBAC authorization to use this subcommand. See the rbac (5) man page.

shutdown

Shuts down the global cluster in an orderly fashion.

You can use this subcommand only in the global zone.

If you issue this subcommand in the global cluster, Oracle Solaris Cluster software shuts down the entire global cluster, including all zone clusters that are associated with that global cluster. You cannot use the cluster command in a zone cluster.

If you provide the name of a cluster other than the cluster on which you issue the cluster command, this subcommand fails.

Run this subcommand from only one node in the cluster.

This subcommand performs the following actions:

  • Takes offline all functioning resource groups in the cluster. If any transitions fail, this subcommand does not complete and displays an error message.

  • Unmounts all cluster file systems. If an unmount fails, this subcommand does not complete and displays an error message.

  • Shuts down all active device services. If any transition of a device fails, this subcommand does not complete and displays an error message.

  • Halts all nodes in the cluster.

Before this subcommand starts to shut down the cluster, it issues a warning message on all nodes. After issuing the warning, this subcommand issues a final message that prompts you to confirm that you want to shut down the cluster. To prevent this final message from being issued, use the –y option.

By default, the shutdown subcommand waits 60 seconds before it shuts down the cluster. You can use the –g option to specify a different delay time.

To specify a message string to appear with the warning, use the –m option.
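
For example, a command such as the following (the grace period and message are illustrative only) shuts down the cluster after 120 seconds, without prompting for confirmation, and displays the specified message with the warning:

# cluster shutdown -y -g 120 -m "Cluster going down for maintenance"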

Users other than superuser require solaris.cluster.admin RBAC authorization to use this subcommand. See the rbac (5) man page.

status

Displays the status of cluster components.

You can use this subcommand in the global zone or in a zone cluster.

The option –t objecttype[,…] displays status information for components that are of the specified types only.

Users other than superuser require solaris.cluster.read RBAC authorization to use this subcommand. See the rbac (5) man page.

Options

The following options are supported:


Note -  Both the short and the long form of each option are shown in this section.

–?
--help

Displays help information.

You can specify this option with or without a subcommand.

If you do not specify a subcommand, the list of all available subcommands is displayed.

If you specify a subcommand, the usage for that subcommand is displayed.

If you specify this option and other options, the other options are ignored.

–C checkid[,…]
--checkID=checkid[,…]
--checkID checkid[,…]

Specifies the checks to run. Checks that are not specified are not run. If the –E option is specified with the –C option, the –C option is ignored.

For the –k functional keyword, the –C option is required and you must specify only one checkid to run.

You can use this option only with the check and list-checks subcommands.

–c newclustername
--newclustername=newclustername
--newclustername newclustername

Specifies a new name for the cluster.

Use this option with the rename subcommand to change the name of the cluster.

–E checkid[,…]
--excludeCheckID=checkid[,…]
--excludeCheckID checkid[,…]

Specifies the checks to exclude. All checks except those specified are run. If the –C option is specified with the –E option, the –C option is ignored.

The –E option is not valid with the –k functional keyword.

You can use this option only with the check and list-checks subcommands.

–e explorerpath[,…]
--explorer=explorerpath[,…]
--explorer explorerpath[,…]

Specifies the path to an unpacked Oracle Explorer or Sun Explorer archive, to use as an alternative source of data for the system. The value of explorerpath must be a fully qualified path location.

You can use this option only with the check subcommand.

–F
--force

Forces the execution of the subcommand by ignoring the /var/cluster/logs/cluster_check/cfgchk.lck file, if it exists. Use this option only if you are sure that the check and list-checks subcommands are not already running.

–g graceperiod
--graceperiod=graceperiod
--graceperiod graceperiod

Changes the length of time before the cluster is shut down from the default setting of 60 seconds.

You specify graceperiod in seconds.

–i {- | clconfigfile}
--input={- | clconfigfile}
--input {- | clconfigfile}

Uses the configuration information in the clconfigfile file. See the clconfiguration(5CL) man page.

To provide configuration information through the standard input (stdin), specify a dash (-) with this option.

If you specify other options, they take precedence over the options and information in the cluster configuration file.

–j jarpath[,…]
--jar=jarpath[,…]
--jar jarpath[,…]

Specifies the path to an additional jar file that contains checks. The jarpath must be fully qualified.

You can use this option only with the check and list-checks subcommands.

–K keyword[,…]
--list-keywords=keyword
--keyword keyword

Lists all keywords in the available checks. This option overrides all other options.

You can use this option only with the list-checks subcommand.
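
For example, the following command lists the keywords that are used by the available checks:

# cluster list-checks -K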

–k keyword[,…]
--keyword=keyword
--keyword keyword

Runs only checks that contain the specified keyword. Use the cluster list-checks -K command to determine which keywords are assigned to the available checks.

The –k functional keyword requires the –C option with a single checkid. You cannot specify more than one functional check at a time or specify any other keyword in the same command.

You can use this option only with the check and list-checks subcommands.

–m message
--message=message
--message message

Specifies a message string that you want to display with the warning that is displayed when you issue the shutdown subcommand.

The standard warning message is system will be shut down in ….

If message contains more than one word, delimit it with single (') quotation marks or double (") quotation marks. The shutdown command issues messages at 7200, 3600, 1800, 1200, 600, 300, 120, 60, and 30 seconds before a shutdown begins.

–n node[,…]
--node=node[,…]
--node node[,…]

Runs checks only on the specified node or list of nodes. The value of node can be the node name or the node ID number.

You can use this option only with the check subcommand.

–o {- | clconfigfile}
--output={- | clconfigfile}
--output {- | clconfigfile}

Writes cluster configuration information to a file or to the standard output (stdout). The format of the configuration information is described in the clconfiguration(5CL) man page.

If you specify a file name with this option, this option creates a new file. Configuration information is then placed in that file. If you specify - with this option, the configuration information is sent to the standard output (stdout). All other standard output for the command is suppressed.

You can use this form of the –o option only with the export subcommand.

–o outputdir
--output=outputdir
--output outputdir

Specifies the directory in which to save the reports that the check subcommand generates.

You can use this form of the –o option only with the check and list-checks subcommands.

The output directory outputdir must already exist or be able to be created. Previous reports that are located in outputdir are overwritten by the new reports.

If you do not specify the –o option, the directory /var/cluster/logs/cluster_check/datestamp/ is used as outputdir by default.

–p name=value
--property=name=value
--property name=value

Modifies cluster-wide properties.

Multiple instances of –p name=value are allowed.

Use this option with the set and the set-netprops subcommands to modify the following properties:

concentrate_load

Specifies how the Resource Group Manager (RGM) distributes the resource group load across the available nodes. The concentrate_load property can be set only in a global cluster. In zone clusters, the concentrate_load property has the default value of FALSE. If the value is set to FALSE, the RGM attempts to spread resource group load evenly across all available nodes or zones in the resource groups' node lists. If the value is set to TRUE in the global cluster, the resource group load is concentrated on the fewest possible nodes or zones without exceeding any configured hard or soft load limits. The default value is FALSE.

If a resource group RG2 declares a ++ or +++ affinity for a resource group RG1, avoid setting any nonzero load factors for RG2. Instead, set larger load factors for RG1 to account for the additional load that would be imposed by RG2 coming online on the same node as RG1. This allows the Concentrate_load feature to work as intended. Alternately, you can set load factors on RG2 but avoid setting any hard load limits for those load factors; set only soft limits. This allows RG2 to come online even if the soft load limit is exceeded.

Hard and soft load limits for each node are created and modified with the clnode create-loadlimit, clnode set-loadlimit, and clnode delete-loadlimit commands. See the clnode(1CL) man page for instructions.
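
For example, the following command concentrates resource group load on the fewest possible nodes in the global cluster:

# cluster set -p concentrate_load=TRUE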

global_fencing

Specifies the global default fencing algorithm for all shared devices.

Acceptable values for this property are nofencing, nofencing-noscrub, pathcount, or prefer3.

After checking for and removing any Persistent Group Reservation (PGR) keys, the nofencing setting turns off fencing for the shared device.

The nofencing-noscrub setting turns off fencing for the shared device without first checking for or removing PGR keys.

The pathcount setting determines the fencing protocol by the number of DID paths that are attached to the shared device. For devices that use three or more DID paths, this property is set to the SCSI-3 protocol.

The prefer3 setting specifies the SCSI-3 protocol for device fencing for all devices. The pathcount setting is assigned to any devices that do not support the SCSI-3 protocol.

By default, this property is set to prefer3.
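
For example, the following command sets the global default fencing algorithm to pathcount:

# cluster set -p global_fencing=pathcount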

heartbeat_quantum

Defines how often to send heartbeats, in milliseconds.

Oracle Solaris Cluster software uses a 1 second, or 1,000 milliseconds, heartbeat quantum by default. Specify a value between 100 milliseconds and 10,000 milliseconds.

heartbeat_timeout

Defines the time interval, in milliseconds, after which the corresponding path is declared down if no heartbeats have been received from the peer node.

Oracle Solaris Cluster software uses a 10 second, or 10,000 millisecond, heartbeat timeout by default. Specify a value between 2,500 milliseconds and 60,000 milliseconds.

The set subcommand allows you to modify the global heartbeat parameters of a cluster, across all the adapters.

Oracle Solaris Cluster software relies on heartbeats over the private interconnect to detect communication failures among cluster nodes. If you reduce the heartbeat timeout, Oracle Solaris Cluster software detects failures, and therefore recovers from them, more quickly. Faster recovery increases the availability of your cluster.

Even under ideal conditions, when you reduce the values of heartbeat parameters by using the set subcommand, there is always a risk that spurious path timeouts and node panics might occur. Always test and thoroughly qualify the lower values of heartbeat parameters under relevant workload conditions before actually implementing them in your cluster.

The value that you specify for heartbeat_timeout must always be greater than or equal to five times the value that you specify for heartbeat_quantum (heartbeat_timeout >= (5*heartbeat_quantum)).
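
For example, a command such as the following (the values are illustrative only and must be tested and qualified for your workload) sets a 500-millisecond heartbeat quantum and a 5,000-millisecond heartbeat timeout, which satisfies the heartbeat_timeout >= (5*heartbeat_quantum) constraint:

# cluster set -p heartbeat_quantum=500 -p heartbeat_timeout=5000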

installmode

Specifies the installation-mode setting for the cluster. You can specify either enabled or disabled for the installmode property.

While the installmode property is enabled, nodes do not attempt to reset their quorum configurations at boot time. Also, while in this mode, many administrative functions are blocked. When you first install a cluster, the installmode property is enabled.

After all nodes have joined the cluster for the first time, and shared quorum devices have been added to the configuration, you must explicitly disable the installmode property. When you disable the installmode property, the quorum vote counts are set to default values. If quorum is automatically configured during cluster creation, the installmode property is disabled as well after quorum has been configured.

resource_security

Specifies a security policy for execution of programs by RGM resources. Permissible values of resource_security are SECURE, WARN, OVERRIDE, or COMPATIBILITY.

Resource methods such as Start and Validate always run as root. If the method executable file has non-root ownership or group or world write permissions, an insecurity exists. In this case, if the resource_security property is set to SECURE, execution of the resource method fails at run time and an error is returned. If resource_security has any other setting, the resource method is allowed to execute with a warning message. For maximum security, set resource_security to SECURE.

The resource_security setting also modifies the behavior of resource types that declare the application_user resource property. A resource type that declares the application_user resource property is typically an agent that uses the scha_check_app_user(1HA) interface to perform additional checks on the executable file ownership and permissions of application programs. For more information, see the application_user section of the r_properties(5) man page.
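
For example, the following command enforces the most restrictive security policy for resource method execution:

# cluster set -p resource_security=SECURE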

udp_session_timeout

Specifies the time lapse, in seconds, after which any inactive UDP sessions are removed.

This property can optionally be set to any integer.

This property applies only to UDP services and to the load-balancing policy Lb_weighted for which the round-robin load-balancing scheme is enabled.

By default, this property is set to 480 (8 minutes).
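
For example, the following command (the value is illustrative only) removes inactive UDP sessions after 10 minutes:

# cluster set -p udp_session_timeout=600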

Private network properties

You modify private network properties with the set-netprops subcommand only.

You must modify these private network settings only if the default private network address collides with an address that is already in use. You must also modify these private network settings if the existing address range is not sufficient to accommodate the growing cluster configuration.

All nodes of the cluster are expected to be available and in noncluster mode when you modify network properties. You modify the private network settings on only one node of the cluster, as the settings are propagated to all nodes.

When you set the private_netaddr property, you can also set the private_netmask property or the max_nodes and max_privatenets properties, or all properties. If you attempt to set the private_netmask property and either the max_nodes or the max_privatenets property, an error occurs. You must always set both the max_nodes and the max_privatenets properties together.

The default private network address is 172.16.0.0, with a default netmask of 255.255.240.0.

If you fail to set a property because of an inconsistent cluster configuration, run the cluster restore-netprops command in noncluster mode on each node.

Private network properties are as follows:

max_nodes

Specifies the maximum number of nodes that you expect to be a part of the cluster. You can set this property only in conjunction with the private_netaddr and max_privatenets properties, and optionally with the private_netmask property. The maximum value for max_nodes is 64. The minimum value is 2.

max_privatenets

Specifies the maximum number of private networks that you expect to be used in the cluster. You can set this property only in conjunction with the private_netaddr and max_nodes properties, and optionally with the private_netmask property. The maximum value for max_privatenets is 128. The minimum value is 2.

num_zoneclusters

Specifies the number of zone clusters that you intend to configure for a global cluster. Oracle Solaris Cluster software uses a combination of this value, the number of nodes, and the number of private networks that you specify for the global cluster to calculate the private network netmask.

Oracle Solaris Cluster software uses the private network netmask to determine the range of private network IP addresses to hold for cluster use.

You can set this property in cluster mode only.

If you do not specify a value for this property, it is set to 12 by default. You can specify 0 for this property.

private_netaddr

Specifies the private network address.

private_netmask

Specifies the cluster private network mask. The value that you specify in this case must be equal to or greater than the default netmask 255.255.240.0. You can set this property only in conjunction with the private_netaddr property.

If you want to assign a smaller IP address range than the default, you can use the max_nodes and max_privatenets properties instead of or in addition to the private_netmask property.

num_xip_zoneclusters

Specifies the number of exclusive-IP zone clusters that can be configured on the physical cluster. The command invokes a shell script called modify_xip_zc, and it updates the clprivnet configuration file with entries for the number of configurable exclusive-IP zone clusters. The num_xip_zoneclusters property must be a subset of the num_zoneclusters property.

The value of the num_xip_zoneclusters property cannot be less than the highest assigned clprivnet instance number.

The command performs the following tasks for each combination of private network properties:

-p private_netaddr=netaddr

The command assigns the default netmask, 255.255.240.0, to the private interconnect. The default IP address range accommodates a maximum of 64 nodes and 10 private networks.

-p private_netaddr=netaddr,private_netmask=netmask

If the specified netmask is less than the default netmask, the command fails and exits with an error.

If the specified netmask is equal to or greater than the default netmask, the command assigns the specified netmask to the private interconnect. The resulting IP address range accommodates a maximum of 64 nodes and 10 private networks.

To assign a smaller IP address range than the default, specify the max_nodes and max_privatenets properties instead of or in addition to the private_netmask property.

-p private_netaddr=netaddr,max_nodes=nodes,
max_privatenets=privatenets,num_xip_zoneclusters=xip_zoneclusters

The command calculates the minimum netmask to support the specified number of nodes and private networks. The command then assigns the calculated netmask to the private interconnect. It also specifies the number of exclusive-IP zone clusters that can be configured on the physical cluster.

-p private_netaddr=netaddr,private_netmask=netmask,
max_nodes=nodes,max_privatenets=privatenets

The command calculates the minimum netmask that supports the specified number of nodes and private networks.

The command compares that calculation to the specified netmask. If the specified netmask is less than the calculated netmask, the command fails and exits with an error. If the specified netmask is equal to or greater than the calculated netmask, the command assigns the specified netmask to the private interconnect.

–s severitylevel
--severity=severitylevel
--severity severitylevel

Reports only violations that are at least the specified severitylevel.

You can use this option only with the check subcommand.

Each check has an assigned severity level. Specifying a severity level excludes any failed checks of lesser severity levels from the report. The value of severitylevel is one of the following values, which are listed in order from lowest severity to highest severity:

information

warning

low

medium

high

critical

When you do not specify this option, a severity level of information is used by default. A severity level of information specifies that failed checks of all severity levels are to be included in the report.

–t objecttype[,…]
--type=objecttype[,…]
--type objecttype[,…]

Specifies object types for the export, show, and status subcommands.

Use this option to limit the output of the export, show, and status subcommands to objects of the specified type only. The following object or component types are supported. Note that the status is not available for some of the object types.

Object Type/Short Object Type     Available Status
access/access                     No
device/dev                        Yes
devicegroup/dg                    Yes
global/global                     No
interconnect/intr                 Yes
nasdevice/nas                     No
node/node                         Yes
quorum/quorum                     Yes
reslogicalhostname/rslh           Yes
resource/rs                       Yes
resourcegroup/rg                  Yes
resourcetype/rt                   No
ressharedaddress/rssa             Yes
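
For example, the following command limits status output to the node and quorum component types:

# cluster status -t node,quorum
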
–v
--verbose

Displays verbose information on the standard output (stdout). When used with the check subcommand, displays verbose progress during execution. When used with the list-checks subcommand, provides more detailed information about checks.

–V
--version

Displays the version of the command.

If you specify this option with other options, with subcommands, or with operands, they are all ignored. Only the version of the command is displayed. No other processing occurs.

–y
--yes

Prevents the prompt that asks you to confirm a shutdown from being issued. The cluster is shut down immediately, without user intervention.

Operands

The following operands are supported:

clustername

The name of the cluster that you want to manage.

For all subcommands except create, the clustername that you specify must match the name of the cluster on which you issue the cluster command.

You specify a new, unique cluster name when you use the create subcommand.

Exit Status

The complete set of exit status codes for all commands in this command set is listed in the Intro(1CL) man page. Returned exit codes are also compatible with the return codes that are described in the scha_calls(3HA) man page.

If the command is successful for all specified operands, it returns zero (CL_NOERR). If an error occurs for an operand, the command processes the next operand in the operand list. The returned exit code always reflects the error that occurred first.

This command returns the following exit status codes:

0 CL_NOERR

No error

The command that you issued completed successfully.

1 CL_ENOMEM

Not enough swap space

A cluster node ran out of swap memory or ran out of other operating system resources.

3 CL_EINVAL

Invalid argument

You typed the command incorrectly, or the syntax of the cluster configuration information that you supplied with the –i option was incorrect.

6 CL_EACCESS

Permission denied

The object that you specified is inaccessible. You might need superuser or RBAC access to issue the command. See the su(1M) and rbac(5) man pages for more information.

35 CL_EIO

I/O error

A physical input/output error has occurred.

36 CL_ENOENT

No such object

The object that you specified cannot be found for one of the following reasons: (1) The object does not exist. (2) A directory in the path to the configuration file that you attempted to create with the –o option does not exist. (3) The configuration file that you attempted to access with the –i option contains errors.

In addition, the check subcommand creates a text file named cluster_check_exit_code.log in the same output directory where it places check reports. If the subcommand itself exits with CL_NOERR, this file reports a code that indicates the highest severity level of all violated checks. The following are the possible check codes:

100

No violations were reported. There might be check output for the information or warning severity level in the report.

101

critical

102

high

103

medium

104

low

Examples

Example 1 Displaying Cluster Configuration Information

The following command displays all available configuration information for the cluster.

# cluster show
=== Cluster ===

Cluster Name:                                   schost
  clusterid:                                       0x4FA7C35F
  installmode:                                     disabled
  heartbeat_timeout:                               9999
  heartbeat_quantum:                               1000
  private_netaddr:                                 172.16.0.0
  private_netmask:                                 255.255.240.0
  max_nodes:                                       64
  max_privatenets:                                 10
  num_zoneclusters:                                12
  udp_session_timeout:                             480
  concentrate_load:                                True
  resource_security:                               SECURE
  global_fencing:                                  prefer3
  Node List:                                       phys-schost-1, phys-schost-2

  === Host Access Control ===                  

  Cluster name:                                 schost
    Allowed hosts:                                 None
    Authentication Protocol:                       sys

  === Cluster Nodes ===                        

  Node Name:                                    phys-schost-1
    Node ID:                                       1
    Enabled:                                       yes
    privatehostname:                               clusternode1-priv
    reboot_on_path_failure:                        disabled
    globalzoneshares:                              1
    defaultpsetmin:                                1
    quorum_vote:                                   1
    quorum_defaultvote:                            1
    quorum_resv_key:                               0x4FA7C35F00000001
    Transport Adapter List:                        net3, net1

  Node Name:                                    phys-schost-2
    Node ID:                                       2
    Enabled:                                       yes
    privatehostname:                               clusternode2-priv
    reboot_on_path_failure:                        disabled
    globalzoneshares:                              1
    defaultpsetmin:                                1
    quorum_vote:                                   1
    quorum_defaultvote:                            1
    quorum_resv_key:                               0x4FA7C35F00000002
    Transport Adapter List:                        net3, net1

  === Transport Cables ===                     

  Transport Cable:                              phys-schost-1:net3,switch1@1
    Endpoint1:                                     phys-schost-1:net3
    Endpoint2:                                     switch1@1
    State:                                         Enabled

  Transport Cable:                              phys-schost-1:net1,switch2@1
    Endpoint1:                                     phys-schost-1:net1
    Endpoint2:                                     switch2@1
    State:                                         Enabled

  Transport Cable:                              phys-schost-2:net3,switch1@2
    Endpoint1:                                     phys-schost-2:net3
    Endpoint2:                                     switch1@2
    State:                                         Enabled

  Transport Cable:                              phys-schost-2:net1,switch2@2
    Endpoint1:                                     phys-schost-2:net1
    Endpoint2:                                     switch2@2
    State:                                         Enabled

  === Transport Switches ===                   

  Transport Switch:                             switch1
    State:                                         Enabled
    Type:                                          switch
    Port Names:                                    1 2
    Port State(1):                                 Enabled
    Port State(2):                                 Enabled

  Transport Switch:                             switch2
    State:                                         Enabled
    Type:                                          switch
    Port Names:                                    1 2
    Port State(1):                                 Enabled
    Port State(2):                                 Enabled

  === Quorum Devices ===                       

  Quorum Device Name:                           d4
    Enabled:                                       yes
    Votes:                                         1
    Global Name:                                   /dev/did/rdsk/d4s2
    Type:                                          shared_disk
    Access Mode:                                   scsi3
    Hosts (enabled):                               phys-schost-1, phys-schost-2

  === Device Groups ===                        

  === Registered Resource Types ===            

  Resource Type:                                SUNW.LogicalHostname:4
    RT_description:                                Logical Hostname Resource Type
    RT_version:                                    4
    API_version:                                   2
    RT_basedir:                                    /usr/cluster/lib/rgm/rt/hafoip
    Single_instance:                               False
    Proxy:                                         False
    Init_nodes:                                    All potential masters
    Installed_nodes:                               <All>
    Failover:                                      True
    Pkglist:                                       <NULL>
    RT_system:                                     True
    Global_zone:                                   True

  Resource Type:                                SUNW.SharedAddress:2
    RT_description:                                HA Shared Address Resource Type
    RT_version:                                    2
    API_version:                                   2
    RT_basedir:                                    /usr/cluster/lib/rgm/rt/hascip
    Single_instance:                               False
    Proxy:                                         False
    Init_nodes:                                    <Unknown>
    Installed_nodes:                               <All>
    Failover:                                      True
    Pkglist:                                       <NULL>
    RT_system:                                     True
    Global_zone:                                   True

  === Resource Groups and Resources ===        

  === DID Device Instances ===                 

  DID Device Name:                         /dev/did/rdsk/d1
    Full Device Path:                         phys-schost-2:/dev/rdsk/
                                              c0t600A0B8000485B6A000058584EDCBD7Ed0
    Full Device Path:                         phys-schost-1:/dev/rdsk/
                                              c0t600A0B8000485B6A000058584EDCBD7Ed0
    Replication:                              none
    default_fencing:                          global

  DID Device Name:                         /dev/did/rdsk/d2
    Full Device Path:                         phys-schost-2:/dev/rdsk/
                                              c0t600A0B8000485B6A0000585A4EDCBDA4d0
    Full Device Path:                         phys-schost-1:/dev/rdsk/
                                              c0t600A0B8000485B6A0000585A4EDCBDA4d0
    Replication:                              none
    default_fencing:                          global

  DID Device Name:                         /dev/did/rdsk/d3
    Full Device Path:                         phys-schost-2:/dev/rdsk/
                                              c0t600A0B8000485B6A0000585C4EDCBDCAd0
    Full Device Path:                         phys-schost-1:/dev/rdsk/
                                              c0t600A0B8000485B6A0000585C4EDCBDCAd0
    Replication:                              none
    default_fencing:                          global

  DID Device Name:                         /dev/did/rdsk/d4
    Full Device Path:                         phys-schost-2:/dev/rdsk/
                                              c0t600A0B8000485B6A0000585E4EDCBDF1d0
    Full Device Path:                         phys-schost-1:/dev/rdsk/
                                              c0t600A0B8000485B6A0000585E4EDCBDF1d0
    Replication:                              none
    default_fencing:                          global

  DID Device Name:                         /dev/did/rdsk/d5
    Full Device Path:                         phys-schost-2:/dev/rdsk/
                                              c0t600A0B8000485B6A000058604EDCBE1Cd0
    Full Device Path:                         phys-schost-1:/dev/rdsk/
                                              c0t600A0B8000485B6A000058604EDCBE1Cd0
    Replication:                              none
    default_fencing:                          global

  DID Device Name:                         /dev/did/rdsk/d6
    Full Device Path:                         phys-schost-2:/dev/rdsk/
                                              c0t600A0B8000486F08000073014EDCBED0d0
    Full Device Path:                         phys-schost-1:/dev/rdsk/
                                              c0t600A0B8000486F08000073014EDCBED0d0
    Replication:                              none
    default_fencing:                          global

  DID Device Name:                         /dev/did/rdsk/d7
    Full Device Path:                         phys-schost-2:/dev/rdsk/
                                              c0t600A0B8000486F08000073034EDCBEFAd0
    Full Device Path:                         phys-schost-1:/dev/rdsk/
                                              c0t600A0B8000486F08000073034EDCBEFAd0
    Replication:                              none
    default_fencing:                          global

  DID Device Name:                         /dev/did/rdsk/d8
    Full Device Path:                         phys-schost-2:/dev/rdsk/
                                              c0t600A0B8000486F08000073054EDCBF1Fd0
    Full Device Path:                         phys-schost-1:/dev/rdsk/
                                              c0t600A0B8000486F08000073054EDCBF1Fd0
    Replication:                              none
    default_fencing:                          global

  DID Device Name:                         /dev/did/rdsk/d9
    Full Device Path:                         phys-schost-2:/dev/rdsk/
                                              c0t600A0B8000486F08000073074EDCBF46d0
    Full Device Path:                         phys-schost-1:/dev/rdsk/
                                              c0t600A0B8000486F08000073074EDCBF46d0
    Replication:                              none
    default_fencing:                          global

  DID Device Name:                         /dev/did/rdsk/d10
    Full Device Path:                         phys-schost-2:/dev/rdsk/
                                              c0t600A0B8000486F08000073094EDCBF71d0
    Full Device Path:                         phys-schost-1:/dev/rdsk/
                                              c0t600A0B8000486F08000073094EDCBF71d0
    Replication:                              none
    default_fencing:                          global

  DID Device Name:                         /dev/did/rdsk/d11
    Full Device Path:                         phys-schost-1:/dev/rdsk/c3t0d0
    Replication:                              none
    default_fencing:                          global

  DID Device Name:                         /dev/did/rdsk/d12
    Full Device Path:                         phys-schost-1:/dev/rdsk/c4t0d0
    Replication:                              none
    default_fencing:                          global

  DID Device Name:                         /dev/did/rdsk/d13
    Full Device Path:                         phys-schost-1:/dev/rdsk/c4t1d0
    Replication:                              none
    default_fencing:                          global

  DID Device Name:                         /dev/did/rdsk/d14
    Full Device Path:                         phys-schost-2:/dev/rdsk/c3t0d0
    Replication:                              none
    default_fencing:                          global

  DID Device Name:                         /dev/did/rdsk/d15
    Full Device Path:                         phys-schost-2:/dev/rdsk/c4t0d0
    Replication:                              none
    default_fencing:                          global

  DID Device Name:                         /dev/did/rdsk/d16
    Full Device Path:                         phys-schost-2:/dev/rdsk/c4t1d0
    Replication:                              none
    default_fencing:                          global

  === NAS Devices ===                          

  Nas Device:                              qualfugu
    Type:                                     sun_uss
    userid:                                   osc_agent

  === Zone Clusters ===                        

  Zone Cluster Name:                            zc1
    zonename:                                      zc1
    zonepath:                                      /zones/zc1
    autoboot:                                      TRUE
    brand:                                         solaris10
    bootargs:                                      <NULL>
    pool:                                          <NULL>
    limitpriv:                                     <NULL>
    scheduling-class:                              <NULL>
    ip-type:                                       shared
    enable_priv_net:                               TRUE
    resource_security:                             COMPATIBILITY

    --- Solaris Resources for zc1 ---          

    Resource Name:                              net
      address:                                     schost-1
      physical:                                    auto

    Resource Name:                              net
      address:                                     schost-2
      physical:                                    auto

    --- Zone Cluster Nodes for zc1 ---         

    Node Name:                                  phys-schost-1
      physical-host:                               phys-schost-1
      hostname:                                    vzschost1a

      --- Solaris Resources for phys-schost-1 ---     

    Node Name:                                  phys-schost-2
      physical-host:                               phys-schost-2
      hostname:                                    vzschost2a

      --- Solaris Resources for phys-schost-2 ---     

  Zone Cluster Name:                            zc2
    zonename:                                      zc2
    zonepath:                                      /zones/zc2
    autoboot:                                      TRUE
    brand:                                         solaris
    bootargs:                                      <NULL>
    pool:                                          <NULL>
    limitpriv:                                     <NULL>
    scheduling-class:                              <NULL>
    ip-type:                                       shared
    enable_priv_net:                               TRUE
    resource_security:                             COMPATIBILITY

    --- Solaris Resources for zc2 ---          

    --- Zone Cluster Nodes for zc2 ---         

    Node Name:                                  phys-schost-1
      physical-host:                               phys-schost-1
      hostname:                                    vzschost1b

      --- Solaris Resources for phys-schost-1 ---     

    Node Name:                                  phys-schost-2
      physical-host:                               phys-schost-2
      hostname:                                    vzschost2b

      --- Solaris Resources for phys-schost-2 ---     

  Zone Cluster Name:                            zc3
    zonename:                                      zc3
    zonepath:                                      /zones/zc3
    autoboot:                                      TRUE
    brand:                                         solaris
    bootargs:                                      <NULL>
    pool:                                          <NULL>
    limitpriv:                                     <NULL>
    scheduling-class:                              <NULL>
    ip-type:                                       shared
    enable_priv_net:                               TRUE
    resource_security:                             COMPATIBILITY

    --- Solaris Resources for zc3 ---          

    --- Zone Cluster Nodes for zc3 ---         

    Node Name:                                  phys-schost-2
      physical-host:                               phys-schost-2
      hostname:                                    vzschost1c

      --- Solaris Resources for phys-schost-2 ---     

Example 2 Displaying Configuration Information About Selected Cluster Components

The following command displays information about resources, resource types, and resource groups. Information is displayed only for this cluster.

# cluster show -t resource,resourcetype,resourcegroup
  Single_instance:                                 False
  Proxy:                                           False
  Init_nodes:                                      <Unknown>
  Installed_nodes:                                 <All>
  Failover:                                        True
  Pkglist:                                         <NULL>
  RT_system:                                       True

Resource Type:                                  SUNW.qfs
  RT_description:                                  SAM-QFS Agent on SunCluster
  RT_version:                                      3.1
  API_version:                                     3
  RT_basedir:                                      /opt/SUNWsamfs/sc/bin
  Single_instance:                                 False
  Proxy:                                           False
  Init_nodes:                                      All potential masters
  Installed_nodes:                                 <All>
  Failover:                                        True
  Pkglist:                                         <NULL>
  RT_system:                                       False

=== Resource Groups and Resources ===

Resource Group:                                 qfs-rg
  RG_description:                                  <NULL>
  RG_mode:                                         Failover
  RG_state:                                        Managed
  Failback:                                        False
  Nodelist:                                        phys-schost-2 phys-schost-1

  --- Resources for Group qfs-rg ---

  Resource:                                     qfs-res
    Type:                                          SUNW.qfs
    Type_version:                                  3.1
    Group:                                         qfs-rg
    R_description:                                 
    Resource_project_name:                         default
    Enabled{phys-schost-2}:                        True
    Enabled{phys-schost-1}:                        True
    Monitored{phys-schost-2}:                      True
    Monitored{phys-schost-1}:                      True
Example 3 Displaying Cluster Status

The following command displays the status of all cluster nodes.

# cluster status -t node
=== Cluster Nodes ===

--- Node Status ---

Node Name                                       Status
---------                                       ------
phys-schost-1                                   Online
phys-schost-2                                   Online

--- Node Status ---

Node Name                                       Status
---------                                       ------

Alternatively, you can display the same information by using the clnode command.

# clnode status
=== Cluster Nodes ===

--- Node Status ---

Node Name                                       Status
---------                                       ------
phys-schost-1                                   Online
phys-schost-2                                   Online
Example 4 Creating a Cluster

The following command creates a cluster that is named cluster-1 from the cluster configuration file suncluster.xml.

# cluster create -i /suncluster.xml cluster-1
Example 5 Changing a Cluster Name

The following command changes the name of the cluster to cluster-2.

# cluster rename -c cluster-2
Example 6 Disabling a Cluster's installmode Property

The following command disables a cluster's installmode property.

# cluster set -p installmode=disabled
Example 7 Modifying the Private Network

The following command modifies the private network settings of a cluster. The command sets the private network address to 172.10.0.0, and it calculates and sets the minimum private netmask that supports the specified eight nodes and four private networks. The command also specifies that eight zone clusters are to be configured for the global cluster and identifies the number of exclusive-IP zone clusters that can be configured on the physical cluster.

You must run this subcommand in non-cluster mode. However, when setting the num_zoneclusters property, you can also run this subcommand in cluster mode.

# cluster set-netprops \
-p private_netaddr=172.10.0.0 \
-p max_nodes=8 \
-p max_privatenets=4 \
-p num_zoneclusters=8 \
-p num_xip_zoneclusters=3
Example 8 Listing Available Checks

The following command lists all checks, shown in single-line format, that are available on the cluster. The actual checks that are available vary by release or update.

# cluster list-checks
 M6336822  :   (Critical)   Global filesystem /etc/vfstab entries are 
not consistent across all Oracle Solaris Cluster nodes.
 S6708689  :   (Variable)   One or more Oracle Solaris Cluster resources 
cannot be validated
 M6708613  :   (Critical)   vxio major numbers are not consistent across 
all Oracle Solaris Cluster nodes.
 S6708255  :   (Critical)   The nsswitch.conf file 'hosts' database 
entry does not have 'cluster' specified first.
 S6708479  :   (Critical)   The /etc/system rpcmod:svc_default_stksize 
parameter is missing or has an incorrect value for Oracle Solaris Cluster.
 F6984121  :   (Critical)   Perform cluster shutdown
 F6984140  :   (Critical)   Induce node panic
…
Example 9 Running Basic Checks on a Cluster

The following command runs in verbose mode all available basic checks on all nodes of the schost cluster, of which phys-schost-1 is a cluster member. The output is redirected to the file basicchks.18Nov2011.schost.

phys-schost-1# cluster check -v -o basicchks.18Nov2011.schost
Example 10 Running Interactive Checks on a Cluster

The following command runs all available interactive checks except those checks that have the vfstab keyword. Output from the check is saved to the file interactive.chk.18Nov2011.

# cluster check -k interactive -E vfstab -o interactive.chk.18Nov2011 cluster-1

User supplies information when prompted
Example 11 Running a Functional Check on a Cluster

The following commands display the detailed description of functional check F6968101 and run the check on the cluster of which phys-schost-1, phys-schost-2, and phys-schost-3 are the cluster members. Output from the check is saved to the file F6968101.failovertest.19Nov2011. Because the check involves failing over a cluster node, do not start the check until after you take the cluster out of production.

phys-schost-1# cluster list-checks -v -C F6968101

 initializing...
 F6968101: (Critical) Perform resource group switchover
Keywords: SolarisCluster4.x, functional
Applicability: Applicable if multi-node cluster running live.
Check Logic: Select a resource group and destination node.
Perform '/usr/cluster/bin/clresourcegroup switch' on specified
resource group either to specified node or to all nodes in succession.
Version: 1.118
Revision Date: 13/07/09

 cleaning up...

Take the cluster out of production

phys-schost-1# cluster check -k functional -C F6968101 \
-o F6968101.failovertest.19Nov2011

  initializing...
  initializing xml output...
  loading auxiliary data...
  starting check run...
     phys-schost-1, phys-schost-2, phys-schost-3:     F6968101.... starting:
 Perform resource group switchover

 ============================================================

   >>> Functional Check <<<

Follow onscreen directions
Example 12 Running Limited Checks on Specified Nodes

The following command runs, in verbose mode, all checks that are of the severity level high or higher. These checks run only on the node phys-schost-1.

# cluster check -v -n phys-schost-1 -s high
 initializing...
 initializing xml output...
 loading auxiliary data...
 filtering out checks with severity less than High
 starting check run...
    phys-schost-1:     M6336822.... starting:  Global filesystem /etc/vfstab entries...
    phys-schost-1:     M6336822       not applicable
    phys-schost-1:     S6708689.... starting:  One or more Oracle Solaris Cluster...
    phys-schost-1:     S6708689       passed
…
    phys-schost-1:     S6708606       skipped: severity too low
    phys-schost-1:     S6708638       skipped: severity too low
    phys-schost-1:     S6708641.... starting:  Cluster failover/switchover might...
    phys-schost-1:     S6708641       passed
…

Files

/usr/cluster/lib/cfgchk/checkresults.dtd

/var/cluster/logs/cluster_check/

/outputdir/cluster_check_exit_code.log

Attributes

See attributes(5) for descriptions of the following attributes:

ATTRIBUTE TYPE           ATTRIBUTE VALUE
Availability             ha-cluster/system/core
Interface Stability      Evolving

See Also

Intro(1CL), init(1M), su(1M), scha_calls(3HA), attributes(5), rbac(5), clconfiguration(5CL)

Notes

The superuser can run all forms of this command.

All users can run this command with the –? (help) or –V (version) option.

To run the cluster command with subcommands, users other than superuser require RBAC authorizations. See the following table.

Subcommand            RBAC Authorization
check                 solaris.cluster.read
create                solaris.cluster.modify
export                solaris.cluster.read
list                  solaris.cluster.read
list-checks           solaris.cluster.read
list-cmds             solaris.cluster.read
rename                solaris.cluster.modify
restore-netprops      solaris.cluster.modify
set                   solaris.cluster.modify
set-netprops          solaris.cluster.modify
show                  solaris.cluster.read
show-netprops         solaris.cluster.read
shutdown              solaris.cluster.admin
status                solaris.cluster.read