/usr/cluster/bin/cluster -V
/usr/cluster/bin/cluster [subcommand] -?
/usr/cluster/bin/cluster subcommand [options] -v [clustername …]
/usr/cluster/bin/cluster check [-C checkid[,…]|-E checkid[,…]] [-e explorerpath[,…]] [-j jarpath[,…]] [-k keyword[,…]] [-n node[,…]] [-o outputdir] [-s severitylevel] [clustername]
/usr/cluster/bin/cluster create -i {- | clconfigfile} [clustername]
/usr/cluster/bin/cluster export [-o {- | configfile}] [-t objecttype[,…]] [clustername]
/usr/cluster/bin/cluster list [clustername]
/usr/cluster/bin/cluster list-checks [-C checkid[,…]|-E checkid[,…]] [-j jarpath[,…]] [-o outputdir] [clustername]
/usr/cluster/bin/cluster list-cmds [clustername]
/usr/cluster/bin/cluster rename -c newclustername [clustername]
/usr/cluster/bin/cluster restore-netprops [clustername]
/usr/cluster/bin/cluster set {-p name=value} [-p name=value] […] [clustername]
/usr/cluster/bin/cluster set-netprops {-p name=value} [-p name=value] […] [clustername]
/usr/cluster/bin/cluster show [-t objecttype[,…]] [clustername]
/usr/cluster/bin/cluster show-netprops [clustername]
/usr/cluster/bin/cluster shutdown [-y] [-g graceperiod] [-m message] [clustername]
/usr/cluster/bin/cluster status [-t objecttype[,…]] [clustername]
The cluster command displays and manages cluster-wide configuration and status information. This command also shuts down a global cluster.
You cannot use the cluster command in a zone cluster.
Almost all subcommands that you use with the cluster command operate in cluster mode. You can run these subcommands from any node in the cluster. However, the create, set-netprops, and restore-netprops subcommands are an exception. You must run these subcommands in noncluster mode.
You can omit subcommand only if the option that you specify is the -? option or the -V option.
The cluster command does not have a short form.
Each option has a long and a short form. Both forms of each option are given with the description of the option in OPTIONS.
For ease of administration, use this command in the global zone.
The following subcommands are supported:
Checks and reports whether the cluster is configured correctly.
You can use this command only in the global zone.
When issued from an active member of a running cluster, this subcommand runs configuration checks. These checks verify that the cluster meets the minimum requirements to run successfully.
When issued from a node that is not running as an active cluster member, this subcommand runs preinstallation checks on that node. These checks identify vulnerabilities that you should repair to prepare the cluster for installation and to avoid possible loss of availability.
Each configuration check produces a set of reports that are saved in the specified or default output directory. Each report contains a summary that shows the total number of checks that were executed and the number of failures, grouped by severity level.
Each report is produced in both ordinary text and in XML. The DTD for the XML format is available in the /usr/cluster/lib/cfgchk/checkresults.dtd file. The reports are produced in English only.
Users other than superuser require solaris.cluster.read Role-Based Access Control (RBAC) authorization to use this subcommand. See the rbac(5) man page.
Creates a new cluster by using configuration information that is stored in a clconfigfile file.
The format of this configuration information is described in the clconfiguration(5CL) man page.
You can use this subcommand only in the global zone.
You must run this subcommand in noncluster mode. You must also run this subcommand from a host that is not already configured as part of a cluster. Sun Cluster software must already be installed on every node that is going to be a part of the cluster.
If you do not specify a cluster name, the name of the cluster is taken from the clconfigfile file.
Users other than superuser require solaris.cluster.modify role-based access control (RBAC) authorization to use this subcommand. See the rbac(5) man page.
Exports the configuration information.
You can use this subcommand only in the global zone.
If you specify a file with the -o option, the configuration information is written to that file. If you do not specify the -o option, the output is written to the standard output (stdout).
The following option limits the information that is exported:
Exports configuration information only for components that are of the specified types.
You can export configuration information only for the cluster on which you issue the cluster command. If you specify the name of a cluster other than the one on which you issue the cluster command, this subcommand fails.
Users other than superuser require solaris.cluster.read RBAC authorization to use this subcommand. See the rbac(5) man page.
Displays the name of the cluster.
You can use this subcommand in the global zone or in a non-global zone. For ease of administration, use this form of the command in the global zone.
Users other than superuser require solaris.cluster.read RBAC authorization to use this subcommand. See the rbac(5) man page.
Prints a list with the check ID and description of each available check.
You can use this command only in the global zone.
Users other than superuser require solaris.cluster.read RBAC authorization to use this subcommand. See the rbac(5) man page.
Prints a list of all available Sun Cluster commands.
You can use this subcommand in the global zone or in a non-global zone. For ease of administration, use this form of the command in the global zone.
Users other than superuser require solaris.cluster.read RBAC authorization to use this subcommand. See the rbac(5) man page.
Renames the cluster.
You can use this subcommand only in the global zone.
Use the -c option with this subcommand to specify a new name for the cluster.
If your cluster is configured as part of an active Sun Cluster Geographic Edition partnership, see Renaming a Cluster in a Partnership in Sun Cluster Geographic Edition System Administration Guide. This section describes how to correctly rename a cluster that is configured as a member of a Sun Cluster Geographic Edition partnership.
Users other than superuser require solaris.cluster.modify RBAC authorization to use this subcommand. See the rbac(5) man page.
Resets the cluster private network settings of the cluster.
You can use this subcommand only in the global zone.
You must run this subcommand in noncluster mode.
Use this subcommand only when the set-netprops subcommand fails and the following conditions exist:
You are attempting to modify the private network properties.
The failure indicates an inconsistent cluster configuration on the nodes. In this situation, you need to run the restore-netprops subcommand.
You must run this subcommand on every node in the cluster. This subcommand repairs the cluster configuration and removes inconsistencies that were caused by a failed modification of the IP address range. Until you run it, further attempts to change the configuration settings are not guaranteed to work.
Users other than superuser require solaris.cluster.modify RBAC authorization to use this subcommand. See the rbac(5) man page.
Modifies the properties of the cluster.
You can use this subcommand only in the global zone.
Users other than superuser require solaris.cluster.modify RBAC authorization to use this subcommand. See the rbac(5) man page.
Modifies the private network properties.
You can use this subcommand only in the global zone.
You must run this subcommand in noncluster mode. However, when setting the num_zoneclusters property, you can also run this subcommand in cluster mode.
Users other than superuser require solaris.cluster.modify RBAC authorization to use this subcommand. See the rbac(5) man page.
Displays detailed configuration information about cluster components.
You can use this subcommand only in the global zone.
The following option limits the information that is displayed:
Displays configuration information only for components that are of the specified types.
Users other than superuser require solaris.cluster.read RBAC authorization to use this subcommand. See the rbac(5) man page.
Displays information about the private network properties of the cluster.
You can use this subcommand only in the global zone.
Users other than superuser require solaris.cluster.read RBAC authorization to use this subcommand. See the rbac(5) man page.
Shuts down the global cluster in an orderly fashion.
You can use this subcommand only in the global zone.
If you issue this subcommand in the global cluster, Sun Cluster software shuts down the entire global cluster, including all zone clusters that are associated with that global cluster. You cannot use the cluster command in a zone cluster.
If you provide the name of a cluster other than the cluster on which you issue the cluster command, this subcommand fails.
Run this subcommand from only one node in the cluster.
This subcommand performs the following actions:
Takes offline all functioning resource groups in the cluster. If any transitions fail, this subcommand does not complete and displays an error message.
Unmounts all cluster file systems. If an unmount fails, this subcommand does not complete and displays an error message.
Shuts down all active device services. If any transition of a device fails, this subcommand does not complete and displays an error message.
Halts all nodes in the cluster.
Before this subcommand starts to shut down the cluster, it issues a warning message on all nodes. After issuing the warning, this subcommand issues a final message that prompts you to confirm that you want to shut down the cluster. To prevent this final message from being issued, use the -y option.
By default, the shutdown subcommand waits 60 seconds before it shuts down the cluster. You can use the -g option to specify a different delay time.
To specify a message string to appear with the warning, use the -m option.
Users other than superuser require solaris.cluster.admin RBAC authorization to use this subcommand. See the rbac(5) man page.
Displays the status of cluster components.
You can use this subcommand in the global zone or in a non-global zone. For ease of administration, use this form of the command in the global zone.
The option -t objecttype[,…] displays status information for components that are of the specified types only.
Users other than superuser require solaris.cluster.read RBAC authorization to use this subcommand. See the rbac(5) man page.
The following options are supported:
Both the short and the long form of each option are shown in this section.
When used with the check or list-checks subcommand, the long form of this option, --help, is not available on the Solaris 9 OS. Instead, use the short form, -?.
Displays help information.
You can specify this option with or without a subcommand.
If you do not specify a subcommand, the list of all available subcommands is displayed.
If you specify a subcommand, the usage for that subcommand is displayed.
If you specify this option and other options, the other options are ignored.
The long form of this option, --checkID, is not available on the Solaris 9 OS. Instead, use the short form, -C.
Specifies the checks to run. Checks that are not specified are not run. If the -E option is specified with the -C option, the -C option is ignored.
You can use this option only with the check and list-checks subcommands.
Specifies a new name for the cluster.
Use this option with the rename subcommand to change the name of the cluster.
The long form of this option, --excludeCheckID, is not available on the Solaris 9 OS. Instead, use the short form, -E.
Specifies the checks to exclude. All checks except those specified are run. If the -C option is specified with the -E option, the -C option is ignored.
You can use this option only with the check and list-checks subcommands.
The long form of this option, --explorer, is not available on the Solaris 9 OS. Instead, use the short form, -e.
Specifies the path to an unpacked Sun Explorer archive, to use as an alternative source of data for the system. The value of explorerpath must be a fully qualified path location.
You can use this option only with the check subcommand.
Changes the length of time before the cluster is shut down from the default setting of 60 seconds.
You specify graceperiod in seconds.
Uses the configuration information in the clconfigfile file. See the clconfiguration(5CL) man page.
To provide configuration information through the standard input (stdin), specify a dash (-) with this option.
If you specify other options, they take precedence over the options and information in the cluster configuration file.
The long form of this option, --jar, is not available on the Solaris 9 OS. Instead, use the short form, -j.
Specifies the path to an additional jar file that contains checks. The jarpath must be fully qualified.
You can use this option only with the check subcommand.
The long form of this option, --keyword, is not available on the Solaris 9 OS. Instead, use the short form, -k.
Runs only checks that contain the specified keyword. Use the cluster list-checks -v command to determine what keywords are assigned to available checks.
You can use this option only with the check subcommand.
Specifies a message string that you want to display with the warning that is displayed when you issue the shutdown subcommand.
The standard warning message is system will be shut down in ….
If message contains more than one word, delimit it with single (') or double (") quotation marks. The shutdown command issues messages at 7200, 3600, 1800, 1200, 600, 300, 120, 60, and 30 seconds before a shutdown begins.
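The documented warning schedule can be modeled in a short sketch. The following hypothetical Python function is not part of Sun Cluster software; it assumes that a warning fires at each documented mark that falls within the shutdown grace period:

```python
# Hypothetical model (not part of Sun Cluster) of the documented warning
# schedule. It assumes a warning fires at each documented mark that falls
# within the shutdown grace period.
WARNING_POINTS = [7200, 3600, 1800, 1200, 600, 300, 120, 60, 30]

def warnings_for(graceperiod):
    """Seconds-before-shutdown marks at which a warning would be issued."""
    return [t for t in WARNING_POINTS if t <= graceperiod]

# With the default 60-second grace period, warnings fire at 60 and 30 seconds.
warnings_for(60)
```

Under this assumption, a one-hour grace period (-g 3600) would produce warnings at every mark from 3600 seconds down to 30 seconds.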
The long form of this option, --node, is not available on the Solaris 9 OS. Instead, use the short form, -n.
Runs checks only on the specified node or list of nodes. The value of node can be the node name or the node ID number.
You can use this option only with the check subcommand.
Writes cluster configuration information to a file or to the standard output (stdout). The format of the configuration information is described in the clconfiguration(5CL) man page.
If you specify a file name with this option, this option creates a new file. Configuration information is then placed in that file. If you specify - with this option, the configuration information is sent to the standard output (stdout). All other standard output for the command is suppressed.
You can use this form of the -o option only with the export subcommand.
The long form of this option, --output, is not available on the Solaris 9 OS. Instead, use the short form, -o.
Specifies the directory in which to save the reports that the check subcommand generates.
You can use this form of the -o option only with the check and list-checks subcommands.
The output directory outputdir must already exist or be able to be created. Previous reports that are located in outputdir are overwritten by the new reports.
If you do not specify the -o option, the directory /var/cluster/logs/cluster/check/datestamp/ is used as outputdir by default.
Modifies cluster-wide properties.
Multiple instances of -p name=value are allowed.
Use this option with the set and the set-netprops subcommands to modify the following properties:
Specify the installation-mode setting for the cluster. You can specify either enabled or disabled for the installmode property.
While the installmode property is enabled, nodes do not attempt to reset their quorum configurations at boot time. Also, while in this mode, many administrative functions are blocked. When you first install a cluster, the installmode property is enabled.
After all nodes have joined the cluster for the first time, and shared quorum devices have been added to the configuration, you must explicitly disable the installmode property. When you disable the installmode property, the quorum vote counts are set to default values. If quorum is automatically configured during cluster creation, the installmode property is disabled as well after quorum has been configured.
Define how often to send heartbeats, in milliseconds.
Sun Cluster software uses a 1 second, or 1,000 milliseconds, heartbeat quantum by default. Specify a value between 100 milliseconds and 10,000 milliseconds.
Define the time interval, in milliseconds, after which, if no heartbeats are received from the peer nodes, the corresponding path is declared as down.
Sun Cluster software uses a 10 second, or 10,000 millisecond, heartbeat timeout by default. Specify a value between 2,500 milliseconds and 60,000 milliseconds.
The set subcommand allows you to modify the global heartbeat parameters of a cluster, across all the adapters.
Sun Cluster software relies on heartbeats over the private interconnect to detect communication failures among cluster nodes. If you reduce the heartbeat timeout, Sun Cluster software detects failures more quickly and therefore recovers from them sooner. Faster recovery increases the availability of your cluster.
Even under ideal conditions, when you reduce the values of heartbeat parameters by using the set subcommand, there is always a risk that spurious path timeouts and node panics might occur. Always test and thoroughly qualify the lower values of heartbeat parameters under relevant workload conditions before actually implementing them in your cluster.
The value that you specify for heartbeat_timeout must always be greater than or equal to five times the value that you specify for heartbeat_quantum (heartbeat_timeout >= (5*heartbeat_quantum)).
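As an illustration of these constraints, the following hypothetical Python validator encodes the documented ranges and the timeout-to-quantum rule; the function name and error messages are invented for this sketch and are not part of Sun Cluster software:

```python
# Hypothetical validator for the documented heartbeat constraints.
# The numeric ranges and the 5x rule come from this man page; the
# function itself is illustrative only.
def validate_heartbeat(quantum_ms, timeout_ms):
    """Check heartbeat_quantum and heartbeat_timeout (both in milliseconds)."""
    if not 100 <= quantum_ms <= 10000:
        raise ValueError("heartbeat_quantum must be between 100 and 10,000 ms")
    if not 2500 <= timeout_ms <= 60000:
        raise ValueError("heartbeat_timeout must be between 2,500 and 60,000 ms")
    if timeout_ms < 5 * quantum_ms:
        raise ValueError("heartbeat_timeout must be >= 5 * heartbeat_quantum")
    return True

# The defaults (1,000 ms quantum, 10,000 ms timeout) satisfy all constraints.
validate_heartbeat(1000, 10000)
```

For example, raising the quantum to 2,500 ms would require a timeout of at least 12,500 ms to satisfy the rule.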
Specify the global default fencing algorithm for all shared devices.
Acceptable values for this property are nofencing, nofencing-noscrub, pathcount, or prefer3.
After checking for and removing any Persistent Group Reservation (PGR) keys, the nofencing setting turns off fencing for the shared device.
The nofencing-noscrub setting turns off fencing for the shared device without first checking for or removing PGR keys.
The pathcount setting determines the fencing protocol by the number of DID paths that are attached to the shared device. For devices that use three or more DID paths, this property is set to the SCSI-3 protocol.
The prefer3 setting specifies the SCSI-3 protocol for device fencing for all devices. The pathcount setting is assigned to any devices that do not support the SCSI-3 protocol.
By default, this property is set to pathcount.
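The selection logic described above can be sketched as follows. This is an illustrative Python function, not Sun Cluster code, and the assumption that devices with fewer than three DID paths fall back to the SCSI-2 protocol is inferred, not stated on this page:

```python
# Illustrative sketch (not Sun Cluster code) of the documented fencing
# selection. The fallback to SCSI-2 for devices with fewer than three DID
# paths is an inference; this page only states the three-or-more case.
def fencing_protocol(setting, num_did_paths, supports_scsi3=True):
    if setting == "prefer3":
        if supports_scsi3:
            return "scsi3"
        setting = "pathcount"  # documented fallback for non-SCSI-3 devices
    if setting == "pathcount":
        # Three or more DID paths select the SCSI-3 protocol.
        return "scsi3" if num_did_paths >= 3 else "scsi2"
    return "none"  # nofencing and nofencing-noscrub turn fencing off
```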
You modify private network properties with the set-netprops subcommand only.
You must modify these private network settings only if the default private network address collides with an address that is already in use. You must also modify these private network settings if the existing address range is not sufficient to accommodate the growing cluster configuration.
All nodes of the cluster are expected to be available and in noncluster mode when you modify network properties. You modify the private network settings on only one node of the cluster, as the settings are propagated to all nodes.
When you set the private_netaddr property, you can also set the private_netmask property or the max_nodes and max_privatenets properties, or all properties. If you attempt to set the private_netmask property and either the max_nodes or the max_privatenets property, an error occurs. You must always set both the max_nodes and the max_privatenets properties together.
The default private network address is 172.16.0.0, with a default netmask of 255.255.240.0 on the Solaris 10 OS or a default netmask of 255.255.248.0 on the Solaris 9 OS.
If you fail to set a property due to an inconsistent cluster configuration, in noncluster mode, run the cluster restore-netprops command on each node.
Private network properties are as follows:
Specify the maximum number of nodes that you expect to be a part of the cluster. Include in this number the expected number of non-global zones that will use the private network. You can set this property only in conjunction with the private_netaddr and max_privatenets properties, and optionally with the private_netmask property. The maximum value for max_nodes is 64. The minimum value is 2.
Specify the maximum number of private networks that you expect to be used in the cluster. You can set this property only in conjunction with the private_netaddr and max_nodes properties, and optionally with the private_netmask property. The maximum value for max_privatenets is 128. The minimum value is 2.
Specify the number of zone clusters that you intend to configure for a global cluster. Sun Cluster software uses a combination of this value, the number of nodes, and the number of private networks that you specify for the global cluster to calculate the private network netmask.
Sun Cluster software uses the private network netmask to determine the range of private network IP addresses to hold for cluster use.
You can set this property in cluster mode or in noncluster mode.
If you do not specify a value for this property, it is set to 12 by default. You can specify 0 for this property.
Specify the private network address.
Specify the cluster private network mask. The value that you specify in this case must be equal to or greater than the default netmask 255.255.240.0 on the Solaris 10 OS or the default netmask 255.255.248.0 on the Solaris 9 OS. You can set this property only in conjunction with the private_netaddr property.
If you want to assign a smaller IP address range than the default, you can use the max_nodes and max_privatenets properties instead of or in addition to the private_netmask property.
The command performs the following tasks for each combination of private network properties:
The command assigns the default netmask, 255.255.240.0 on the Solaris 10 OS or 255.255.248.0 on the Solaris 9 OS, to the private interconnect. The default IP address range accommodates a maximum of 64 nodes and 10 private networks.
If the specified netmask is less than the default netmask, the command fails and exits with an error.
If the specified netmask is equal to or greater than the default netmask, the command assigns the specified netmask to the private interconnect. The resulting IP address range accommodates a maximum of 64 nodes and 10 private networks.
To assign a smaller IP address range than the default, specify the max_nodes and max_privatenets properties instead of or in addition to the private_netmask property.
The command calculates the minimum netmask to support the specified number of nodes and private networks. The command then assigns the calculated netmask to the private interconnect.
The command calculates the minimum netmask that supports the specified number of nodes and private networks.
The command compares that calculation to the specified netmask. If the specified netmask is less than the calculated netmask, the command fails and exits with an error. If the specified netmask is equal to or greater than the calculated netmask, the command assigns the specified netmask to the private interconnect.
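The netmask comparison described above can be illustrated with a short Python sketch. The helper names are hypothetical, and only the documented accept-or-fail rule is modeled; the formula that calculates the minimum netmask is internal to the command and is not reproduced here:

```python
import ipaddress

# Illustrative sketch of the documented accept-or-fail netmask rule.
def mask_bits(netmask):
    """Prefix length of a dotted-decimal netmask, e.g. 255.255.240.0 -> 20."""
    return ipaddress.IPv4Network("0.0.0.0/" + netmask).prefixlen

def accept_netmask(specified, required="255.255.240.0"):
    """Fail if the specified netmask is less than the required netmask;
    otherwise the specified netmask is the one that gets assigned."""
    if mask_bits(specified) < mask_bits(required):
        raise ValueError("specified netmask is less than the required netmask")
    return specified

# 255.255.248.0 (21 bits) is greater than the default 255.255.240.0 (20 bits),
# so it is accepted; 255.255.0.0 (16 bits) would fail.
accept_netmask("255.255.248.0")
```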
The long form of this option, --severity, is not available on the Solaris 9 OS. Instead, use the short form, -s.
Reports only violations that are at least the specified severitylevel.
You can use this option only with the check subcommand.
Each check has an assigned severity level. Specifying a severity level excludes any failed checks of lesser severity levels from the report. The value of severitylevel is one of the following values, which are listed in order from lowest severity to highest severity:
When you do not specify this option, a severity level of information is used by default. A severity level of information specifies that failed checks of all severity levels are to be included in the report.
Specifies object types for the export, show, and status subcommands.
Use this option to limit the output of the export, show, and status subcommands to objects of the specified type only. The following object or component types are supported. Note that the status is not available for some of the object types.
| Object Type | Short Object Type | Available Status |
|---|---|---|
| access | access | No |
| device | dev | Yes |
| devicegroup | dg | Yes |
| global | global | No |
| interconnect | intr | Yes |
| nasdevice | nas | No |
| node | node | Yes |
| quorum | quorum | Yes |
| reslogicalhostname | rslh | Yes |
| resource | rs | Yes |
| resourcegroup | rg | Yes |
| resourcetype | rt | No |
| ressharedaddress | rssa | Yes |
| snmphost | snmphost | No |
| snmpmib | snmpmib | No |
| snmpuser | snmpuser | No |
| telemetryattribute | ta | No |
When used with the check or list-checks subcommand, the long form of this option, --verbose, is not available on the Solaris 9 OS. Instead, use the short form, -v.
Displays verbose information on the standard output (stdout). When used with the check subcommand, displays verbose progress during execution. When used with the list-checks subcommand, provides more detailed information about checks.
Displays the version of the command.
If you specify this option with other options, with subcommands, or with operands, they are all ignored. Only the version of the command is displayed. No other processing occurs.
Prevents the prompt that asks you to confirm a shutdown from being issued. The cluster is shut down immediately, without user intervention.
The following operands are supported:
The name of the cluster that you want to manage.
For all subcommands except create, the clustername that you specify must match the name of the cluster on which you issue the cluster command.
You specify a new and a unique cluster name by using the create subcommand.
The complete set of exit status codes for all commands in this command set are listed in the Intro(1CL) man page. Returned exit codes are also compatible with the return codes that are described in the scha_calls(3HA) man page.
If the command is successful for all specified operands, it returns zero (CL_NOERR). If an error occurs for an operand, the command processes the next operand in the operand list. The returned exit code always reflects the error that occurred first.
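The first-error semantics can be sketched in Python. CL_NOERR comes from this page, while run and process_operand are hypothetical stand-ins, not Sun Cluster code:

```python
# Illustrative sketch (not Sun Cluster code) of the documented exit-status
# behavior: the command keeps processing the remaining operands after an
# error, but the exit code reflects the error that occurred first.
CL_NOERR = 0

def run(operands, process_operand):
    first_error = CL_NOERR
    for op in operands:
        code = process_operand(op)
        if code != CL_NOERR and first_error == CL_NOERR:
            first_error = code  # remember only the first failure
    return first_error
```

For example, if the second of three operands fails with code 3 and the third with code 6, the command still processes all three operands and exits with 3.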
This command returns the following exit status codes:
No error
The command that you issued completed successfully.
Not enough swap space
A cluster node ran out of swap memory or ran out of other operating system resources.
Invalid argument
You typed the command incorrectly, or the syntax of the cluster configuration information that you supplied with the -i option was incorrect.
Permission denied
The object that you specified is inaccessible. You might need superuser or RBAC access to issue the command. See the su(1M) and rbac(5) man pages for more information.
I/O error
A physical input/output error has occurred.
No such object
The object that you specified cannot be found for one of the following reasons:
The object does not exist.
A directory in the path to the configuration file that you attempted to create with the -o option does not exist.
The configuration file that you attempted to access with the -i option contains errors.
In addition, the check subcommand creates a text file named cluster_check_exit_code.log in the same output directory where it places check reports. If the subcommand itself exits CL_NOERR, a code is reported in this file that indicates the highest severity level of all violated checks. The following are the possible check codes:
No violations were reported. There might be check output for the information or warning severity level in the report.
critical
high
medium
low
The following command displays all available configuration information for the cluster.
# cluster show

Enabled: yes
privatehostname: clusternode1-priv
reboot_on_path_failure: disabled
globalzoneshares: 1
defaultpsetmin: 1
quorum_vote: 1
quorum_defaultvote: 1
quorum_resv_key: 0x441699B200000001
Transport Adapter List: hme1, qfe3
Node Zones: phys-schost-1:za

--- Transport Adapters for phys-schost-1 ---

Transport Adapter: hme1
Adapter State: Enabled
Adapter Transport Type: dlpi
Adapter Property(device_name): hme
Adapter Property(device_instance): 1
Adapter Property(lazy_free): 0
Adapter Property(dlpi_heartbeat_timeout): 10000
Adapter Property(dlpi_heartbeat_quantum): 1000
Adapter Property(nw_bandwidth): 80
Adapter Property(bandwidth): 10
Adapter Property(ip_address): 172.16.0.129
Adapter Property(netmask): 255.255.255.128
Adapter Port Names: 0
Adapter Port State(0): Enabled

Transport Adapter: qfe3
Adapter State: Enabled
Adapter Transport Type: dlpi
Adapter Property(device_name): qfe
Adapter Property(device_instance): 3
Adapter Property(lazy_free): 1
Adapter Property(dlpi_heartbeat_timeout): 10000
Adapter Property(dlpi_heartbeat_quantum): 1000
Adapter Property(nw_bandwidth): 80
Adapter Property(bandwidth): 10
Adapter Property(ip_address): 172.16.1.1
Adapter Property(netmask): 255.255.255.128
Adapter Port Names: 0
Adapter Port State(0): Enabled

--- SNMP MIB Configuration on phys-schost-1 ---

SNMP MIB Name: Event
State: SNMPv

--- SNMP Host Configuration on phys-schost-1 ---

--- SNMP User Configuration on phys-schost-1 ---

Node Name: phys-schost-2
Node ID: 2
Type: cluster
Enabled: yes
privatehostname: clusternode2-priv
reboot_on_path_failure: disabled
globalzoneshares: 1
defaultpsetmin: 1
quorum_vote: 1
quorum_defaultvote: 1
quorum_resv_key: 0x441699B200000002
Transport Adapter List: hme1, qfe3
Node Zones: phys-schost-2:za

--- Transport Adapters for phys-schost-2 ---

Transport Adapter: hme1
Adapter State: Enabled
Adapter Transport Type: dlpi
Adapter Property(device_name): hme
Adapter Property(device_instance): 1
Adapter Property(lazy_free): 0
Adapter Property(dlpi_heartbeat_timeout): 10000
Adapter Property(dlpi_heartbeat_quantum): 1000
Adapter Property(nw_bandwidth): 80
Adapter Property(bandwidth): 10
Adapter Property(ip_address): 172.16.0.130
Adapter Property(netmask): 255.255.255.128
Adapter Port Names: 0
Adapter Port State(0): Enabled

Transport Adapter: qfe3
Adapter State: Enabled
Adapter Transport Type: dlpi
Adapter Property(device_name): qfe
Adapter Property(device_instance): 3
Adapter Property(lazy_free): 1
Adapter Property(dlpi_heartbeat_timeout): 10000
Adapter Property(dlpi_heartbeat_quantum): 1000
Adapter Property(nw_bandwidth): 80
Adapter Property(bandwidth): 10
Adapter Property(ip_address): 172.16.1.2
Adapter Property(netmask): 255.255.255.128
Adapter Port Names: 0
Adapter Port State(0): Enabled

--- SNMP MIB Configuration on phys-schost-2 ---

SNMP MIB Name: Event
State: SNMPv

--- SNMP Host Configuration on phys-schost-2 ---

--- SNMP User Configuration on phys-schost-2 ---

=== Transport Cables ===

Transport Cable: phys-schost-1:hme1,switch1@1
Cable Endpoint1: phys-schost-1:hme1
Cable Endpoint2: switch1@1
Cable State: Enabled

Transport Cable: phys-schost-1:qfe3,switch2@1
Cable Endpoint1: phys-schost-1:qfe3
Cable Endpoint2: switch2@1
Cable State: Enabled

Transport Cable: phys-schost-2:hme1,switch1@2
Cable Endpoint1: phys-schost-2:hme1
Cable Endpoint2: switch1@2
Cable State: Enabled

Transport Cable: phys-schost-2:qfe3,switch2@2
Cable Endpoint1: phys-schost-2:qfe3
Cable Endpoint2: switch2@2
Cable State: Enabled

=== Transport Switches ===

Transport Switch: switch1
Switch State: Enabled
Switch Type: switch
Switch Port Names: 1 2
Switch Port State(1): Enabled
Switch Port State(2): Enabled

Transport Switch: switch2
Switch State: Enabled
Switch Type: switch
Switch Port Names: 1 2
Switch Port State(1): Enabled
Switch Port State(2): Enabled

=== Quorum Devices ===

Quorum Device Name: d3
Enabled: yes
Votes: 1
Global Name: /dev/did/rdsk/d3s2
Type: scsi
Access Mode: scsi2
Hosts (enabled): phys-schost-1, phys-schost-2

=== Device Groups ===

Device Group Name: db1
Type: SVM
failback: no
Node List: phys-schost-1, phys-schost-2
preferenced: yes
numsecondaries: 1
diskset name: db1

=== Registered Resource Types ===

Resource Type: SUNW.LogicalHostname:2
RT_description: Logical Hostname Resource Type
RT_version: 2
API_version: 2
RT_basedir: /usr/cluster/lib/rgm/rt/hafoip
Single_instance: False
Proxy: False
Init_nodes: All potential masters
Installed_nodes: <All>
Failover: True
Pkglist: SUNWscu
RT_system: True

Resource Type: SUNW.SharedAddress:2
RT_description: HA Shared Address Resource Type
RT_version: 2
API_version: 2
RT_basedir: /usr/cluster/lib/rgm/rt/hascip
Single_instance: False
Proxy: False
Init_nodes: <Unknown>
Installed_nodes: <All>
Failover: True
Pkglist: SUNWscu
RT_system: True

Resource Type: SUNW.qfs
RT_description: SAM-QFS Agent on SunCluster
RT_version: 3.1
API_version: 3
RT_basedir: /opt/SUNWsamfs/sc/bin
Single_instance: False
Proxy: False
Init_nodes: All potential masters
Installed_nodes: <All>
Failover: True
Pkglist: <NULL>
RT_system: False

=== Resource Groups and Resources ===

Resource Group: qfs-rg
RG_description: <NULL>
RG_mode: Failover
RG_state: Managed
Failback: False
Nodelist: phys-schost-2 phys-schost-1

--- Resources for Group qfs-rg ---

Resource: qfs-res
Type: SUNW.qfs
Type_version: 3.1
Group: qfs-rg
R_description:
Resource_project_name: default
Enabled{phys-schost-2}: True
Enabled{phys-schost-1}: True
Monitored{phys-schost-2}: True
Monitored{phys-schost-1}: True

=== DID Device Instances ===

DID Device Name: /dev/did/rdsk/d1
Full Device Path: phys-schost-1:/dev/rdsk/c0t0d0
Replication: none
default_fencing: global

DID Device Name: /dev/did/rdsk/d2
Full Device Path: phys-schost-1:/dev/rdsk/c0t6d0
Replication: none
default_fencing: global

DID Device Name: /dev/did/rdsk/d3
Full Device Path: phys-schost-2:/dev/rdsk/c1t1d0
Full Device Path: phys-schost-1:/dev/rdsk/c1t1d0
Replication: none
default_fencing: global

DID Device Name: /dev/did/rdsk/d4
Full Device Path: phys-schost-2:/dev/rdsk/c1t2d0
Full Device Path: phys-schost-1:/dev/rdsk/c1t2d0
Replication: none
default_fencing: global

DID Device Name: /dev/did/rdsk/d5
Full Device Path: phys-schost-2:/dev/rdsk/c1t3d0
Full Device Path: phys-schost-1:/dev/rdsk/c1t3d0
Replication: none
default_fencing: global

DID Device Name: /dev/did/rdsk/d6
Full Device Path: phys-schost-2:/dev/rdsk/c6t60020F2000004B843BC38D21000A3116d0
Full Device Path: phys-schost-1:/dev/rdsk/c6t60020F2000004B843BC38D21000A3116d0
Replication: none
default_fencing: scsi3

DID Device Name: /dev/did/rdsk/d7
Full Device Path: phys-schost-2:/dev/rdsk/c6t60020F2000004B843BC3746B000BB4A0d0
Full Device Path: phys-schost-1:/dev/rdsk/c6t60020F2000004B843BC3746B000BB4A0d0
Replication: none
default_fencing: nofencing

DID Device Name: /dev/did/rdsk/d8
Full Device Path: phys-schost-2:/dev/rdsk/c6t60020F2000004B843BC37F8600083E05d0
Full Device Path: phys-schost-1:/dev/rdsk/c6t60020F2000004B843BC37F8600083E05d0
Replication: none
default_fencing: global

DID Device Name: /dev/did/rdsk/d9
Full Device Path: phys-schost-2:/dev/rdsk/c6t60020F2000004B843BC373F10005A987d0
Full Device Path: phys-schost-1:/dev/rdsk/c6t60020F2000004B843BC373F10005A987d0
Replication: none
default_fencing: global

DID Device Name: /dev/did/rdsk/d10
Full Device Path: phys-schost-2:/dev/rdsk/c3t50020F2300004677d1
Full Device Path: phys-schost-1:/dev/rdsk/c3t50020F2300004677d1
Replication: none
default_fencing: global

DID Device Name: /dev/did/rdsk/d11
Full Device Path: phys-schost-2:/dev/rdsk/c3t50020F2300004677d0
Full Device Path: phys-schost-1:/dev/rdsk/c3t50020F2300004677d0
Replication: none
default_fencing: global

DID Device Name: /dev/did/rdsk/d12
Full Device Path: phys-schost-2:/dev/rdsk/c0t0d0
Replication: none
default_fencing: global

DID Device Name: /dev/did/rdsk/d13
Full Device Path: phys-schost-2:/dev/rdsk/c0t1d0
Replication: none
default_fencing: global

DID Device Name: /dev/did/rdsk/d14
Full Device Path: phys-schost-2:/dev/rdsk/c0t6d0
Replication: none
default_fencing: global

=== NAS Devices ===

=== Telemetry Attributes ===
The following command displays information about resources, resource types, and resource groups. Information is displayed for the cluster only.
# cluster show -t resource,resourcetype,resourcegroup
  Single_instance: False
  Proxy: False
  Init_nodes: <Unknown>
  Installed_nodes: <All>
  Failover: True
  Pkglist: SUNWscu
  RT_system: True

Resource Type: SUNW.qfs
  RT_description: SAM-QFS Agent on SunCluster
  RT_version: 3.1
  API_version: 3
  RT_basedir: /opt/SUNWsamfs/sc/bin
  Single_instance: False
  Proxy: False
  Init_nodes: All potential masters
  Installed_nodes: <All>
  Failover: True
  Pkglist: <NULL>
  RT_system: False

=== Resource Groups and Resources ===

Resource Group: qfs-rg
  RG_description: <NULL>
  RG_mode: Failover
  RG_state: Managed
  Failback: False
  Nodelist: phys-schost-2 phys-schost-1

--- Resources for Group qfs-rg ---

Resource: qfs-res
  Type: SUNW.qfs
  Type_version: 3.1
  Group: qfs-rg
  R_description:
  Resource_project_name: default
  Enabled{phys-schost-2}: True
  Enabled{phys-schost-1}: True
  Monitored{phys-schost-2}: True
  Monitored{phys-schost-1}: True
The following command displays the status of all cluster nodes.
# cluster status -t node

=== Cluster Nodes ===

--- Node Status ---

Node Name          Status
---------          ------
phys-schost-1      Online
phys-schost-2      Online
Alternatively, you can display the same information by using the clnode command.
# clnode status

=== Cluster Nodes ===

--- Node Status ---

Node Name          Status
---------          ------
phys-schost-1      Online
phys-schost-2      Online
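Status output like the above is line oriented, so it can be post-processed in a script. A minimal sketch follows; the sample text is embedded directly in the script to stand in for captured command output, since the snippet is not run against a live cluster:

```shell
# Count online nodes from captured 'cluster status -t node' output.
# The here-string below is a stand-in for real command output.
status_output='phys-schost-1  Online
phys-schost-2  Online'
printf '%s\n' "$status_output" |
  awk '$2 == "Online" { n++ } END { print n " node(s) online" }'
```

On a live cluster, the same awk filter could be fed from the command itself, for example by piping the data rows of `cluster status -t node` into it.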
The following command creates a cluster that is named cluster-1 from the cluster configuration file suncluster.xml.
# cluster create -i /suncluster.xml cluster-1
The following command changes the name of the cluster to cluster-2.
# cluster rename -c cluster-2
The following command disables a cluster's installmode property.
# cluster set -p installmode=disabled
The following command modifies the private network settings of a cluster. The command sets the private network address to 172.10.0.0. The command also calculates and sets a minimum private netmask to support the specified eight nodes and four private networks, and specifies that you want to configure eight zone clusters for the global cluster.
# cluster set-netprops \
-p private_netaddr=172.10.0.0 -p max_nodes=8 \
-p max_privatenets=4 -p num_zoneclusters=8
You can also specify this command as follows:
# cluster set-netprops \
-p private_netaddr=172.10.0.0,max_nodes=8,\
max_privatenets=4,num_zoneclusters=8
The following command lists, in single-line format, all checks that are available on the cluster. The actual checks that are available vary by release or update.
# cluster list-checks
M6336822 : (Critical) Global filesystem /etc/vfstab entries are not consistent across all Sun Cluster nodes.
S6708689 : (Variable) One or more Sun Cluster resources cannot be validated
M6708613 : (Critical) vxio major numbers are not consistent across all Sun Cluster nodes.
S6708255 : (Critical) The nsswitch.conf file 'hosts' database entry does not have 'cluster' specified first.
S6708479 : (Critical) The /etc/system rpcmod:svc_default_stksize parameter is missing or has an incorrect value for Sun Cluster.
…
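Because each check is listed as `ID : (Severity) description`, the ID and severity can be extracted with a short awk filter. A sketch, with two sample lines from the output above embedded to stand in for captured output:

```shell
# Extract check ID and severity from captured 'cluster list-checks'
# output. Each line has the form: ID : (Severity) description
checks='M6336822 : (Critical) Global filesystem /etc/vfstab entries are not consistent across all Sun Cluster nodes.
S6708689 : (Variable) One or more Sun Cluster resources cannot be validated'
printf '%s\n' "$checks" |
  awk -F' : ' '{ sev = $2; sub(/^\(/, "", sev); sub(/\).*/, "", sev); print $1, sev }'
```

This prints one `ID Severity` pair per line, which is convenient for sorting or filtering checks by severity before a run.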
The following command runs all available checks on all nodes of the cluster that the command is started from.
# cluster check
The following command runs, in verbose mode, all checks that are of the severity level high or higher. These checks run only on the node phys-schost-1.
# cluster check -v -n phys-schost-1 -s high
initializing...
initializing xml output...
loading auxiliary data...
filtering out checks with severity less than High
starting check run...
phys-schost-1: M6336822.... starting: Global filesystem /etc/vfstab entries…
phys-schost-1: M6336822 not applicable
phys-schost-1: S6708689.... starting: One or more Sun Cluster resources…
phys-schost-1: S6708689 passed
…
phys-schost-1: S6708606 skipped: severity too low
phys-schost-1: S6708638 skipped: severity too low
phys-schost-1: S6708641.... starting: Cluster failover/switchover might…
phys-schost-1: S6708641 passed
…
/usr/cluster/lib/cfgchk/checkresults.dtd
/var/cluster/logs/cluster_check/
/outputdir/cluster_check_exit_code.log
See attributes(5) for descriptions of the following attributes:
ATTRIBUTE TYPE         ATTRIBUTE VALUE
Availability           SUNWsczu
Interface Stability    Evolving
The superuser can run all forms of this command.
All users can run this command with the -? (help) or -V (version) option.
To run the cluster command with subcommands, users other than superuser require RBAC authorizations. See the following table.
Subcommand          RBAC Authorization
check               solaris.cluster.read
create              solaris.cluster.modify
export              solaris.cluster.read
list                solaris.cluster.read
list-checks         solaris.cluster.read
list-cmds           solaris.cluster.read
rename              solaris.cluster.modify
restore-netprops    solaris.cluster.modify
set                 solaris.cluster.modify
set-netprops        solaris.cluster.modify
show                solaris.cluster.read
show-netprops       solaris.cluster.read
shutdown            solaris.cluster.admin
status              solaris.cluster.read
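Before running a modifying subcommand as a non-superuser, it can be useful to confirm that the required authorization is present. The Solaris auths command prints a user's authorizations as a comma-separated list; the sketch below checks such a list for the authorization that the set subcommand requires (the list is hard-coded here for illustration rather than taken from a live auths run):

```shell
# Check a comma-separated authorization list (as printed by the
# Solaris 'auths' command) for the authorization 'cluster set' needs.
required="solaris.cluster.modify"
user_auths="solaris.cluster.read,solaris.cluster.modify"  # stand-in for: auths
case ",$user_auths," in
  *,"$required",*) echo "authorized" ;;
  *)               echo "not authorized" ;;
esac
```

Wrapping the list in leading and trailing commas lets the case pattern match whole authorization names only, so `solaris.cluster.modify` is not falsely matched by a substring of some other entry.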