Beginning to Administer the Cluster

Table 1-2 provides a starting point for administering your cluster.


Note - The Oracle Solaris Cluster commands that you run only from the global-cluster voting node are not valid for use with zone clusters. See the appropriate Oracle Solaris Cluster man page for information about the valid use of a command in zones.


Table 1-2 Oracle Solaris Cluster Administration Tools

Task: Log in to the cluster remotely
Tool: Use the ccp command to launch the Cluster Control Panel (CCP). Then select one of the following icons: cconsole, crlogin, cssh, or ctelnet.

Task: Configure the cluster interactively
Tool: Start the clzonecluster(1CL) utility or the clsetup(1CL) utility.

Task: Display Oracle Solaris Cluster release number and version information
Tool: Use the clnode(1CL) command with the show-rev subcommand and the -v option.

Task: Display installed resources, resource groups, and resource types
Tool: Use the clresource, clresourcegroup, and clresourcetype commands to display the resource information.

Task: Monitor cluster components graphically
Tool: Use Oracle Solaris Cluster Manager.
Instructions: See the online help.

Task: Administer some cluster components graphically
Tool: Use Oracle Solaris Cluster Manager or the Oracle Solaris Cluster module for Sun Management Center, which is available only with Oracle Solaris Cluster on SPARC based systems.
Instructions: For Oracle Solaris Cluster Manager, see the online help. For Sun Management Center, see the Sun Management Center documentation.

Task: Check the status of cluster components
Tool: Use the cluster(1CL) command with the status subcommand.

Task: Check the status of IP network multipathing groups on the public network
Tool: For a global cluster, use the clnode(1CL) status command with the -m option. For a zone cluster, use the clzonecluster(1CL) show command.

Task: View the cluster configuration
Tool: For a global cluster, use the cluster(1CL) command with the show subcommand. For a zone cluster, use the clzonecluster(1CL) command with the show subcommand.

Task: View and display the configured NAS devices
Tool: For a global cluster or a zone cluster, use the clnasdevice(1CL) command with the show subcommand.

Task: Check global mount points or verify the cluster configuration
Tool: For a global cluster, use the cluster(1CL) command with the check subcommand. For a zone cluster, use the clzonecluster(1CL) verify command.

Task: Look at the contents of Oracle Solaris Cluster command logs
Tool: Examine the /var/cluster/logs/commandlog file.

Task: Look at Oracle Solaris Cluster system messages
Tool: Examine the /var/adm/messages file.

Task: Monitor the status of Solaris Volume Manager
Tool: Use the metastat command.

How to Log Into the Cluster Remotely

The Cluster Control Panel (CCP) provides a launchpad for the cconsole, crlogin, cssh, and ctelnet tools. All tools start a multiple-window connection to a set of specified nodes. The multiple-window connection consists of a host window for each of the specified nodes and a common window. Input to the common window is sent to each of the host windows, enabling you to run commands simultaneously on all nodes of the cluster.

You can also start cconsole, crlogin, cssh, or ctelnet sessions from the command line.

By default, the cconsole utility uses a telnet connection to the node consoles. To establish secure shell connections to the consoles instead, enable the Use SSH checkbox in the Options menu of the cconsole window. Or, specify the -s option when you issue the ccp or cconsole command.

See the ccp(1M) and cconsole(1M) man pages for more information.
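For example, to start the CCP launchpad or a secure cconsole session directly from the command line, you might run commands like the following (the cluster name schost is a hypothetical placeholder):

    phys-schost# ccp -s schost
    phys-schost# cconsole -s schost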

The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.
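For example, clresourcegroup and its short form clrg are interchangeable, as the commandlog excerpt in Example 1-13 illustrates. The following two commands, shown only to illustrate the point, report the same resource-group status:

    phys-schost# clresourcegroup status
    phys-schost# clrg status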

Before You Begin

Verify that the prerequisites for using the CCP are met before you start it.

  1. If you have a Sun Enterprise 10000 server platform, log in to the System Service Processor (SSP).
    1. Connect by using the netcon command.
    2. After the connection is made, type Shift~@ to unlock the console and gain write access.
  2. From the administrative console, start the CCP launchpad.
    phys-schost# ccp clustername

    The CCP launchpad is displayed.

  3. To start a remote session with the cluster, click the cconsole, crlogin, cssh, or ctelnet icon in the CCP launchpad.

How to Connect Securely to Cluster Consoles

Perform this procedure to establish secure shell connections to the consoles of the cluster nodes.

Before You Begin

Configure the clusters file, the serialports file, and the nsswitch.conf file if you are using a terminal concentrator. The files can be either /etc files or NIS or NIS+ databases.


Note - In the serialports file, assign the port number to use for secure connection to each console-access device. The default port number for secure shell connection is 22.


See the clusters(4) and serialports(4) man pages for more information.
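The following sketch shows what /etc/serialports entries might look like for two nodes that are reached through a hypothetical console-access device named cadev-schost on secure shell port 22; each line lists the node name, the console-access device host name, and the port. Confirm the exact format in the serialports(4) man page.

    phys-schost-1   cadev-schost   22
    phys-schost-2   cadev-schost   22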

  1. Become superuser on the administrative console.
  2. Start the cconsole utility in secure mode.
    # cconsole -s [-l username] [-p ssh-port]
    -s

    Enables secure shell connection.

    -l username

    Specifies the user name for the remote connections. If the -l option is not specified, the user name that launched the cconsole utility is used.

    -p ssh-port

    Specifies the secure shell port number to use. If the -p option is not specified, the default port number 22 is used for the secure connections.
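For example, the following command, using a hypothetical remote user name admin and cluster name schost, opens secure shell console sessions on the default port 22:

    # cconsole -s -l admin schost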

How to Access the Cluster Configuration Utilities

The clsetup utility enables you to interactively configure quorum, resource group, cluster transport, private hostname, device group, and new node options for the global cluster. The clzonecluster utility performs similar configuration tasks for a zone cluster. For more information, see the clsetup(1CL) and clzonecluster(1CL) man pages.

The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.

  1. Become superuser on an active member node of a global cluster. Perform all steps of this procedure from a node of the global cluster.
  2. Start the configuration utility.
    • For a global cluster, start the utility with the clsetup command.
      phys-schost# clsetup

      The Main Menu is displayed.

    • For a zone cluster, start the utility with the clzonecluster command. The zone cluster in this example is sczone.
      phys-schost# clzonecluster configure sczone

      You can view the available actions in the utility with the following option:

      clzc:sczone> ? 
  3. Choose your configuration from the menu. Follow the onscreen instructions to complete a task. For more detail, see the instructions in Configuring a Zone Cluster in Oracle Solaris Cluster Software Installation Guide.

See Also

See the clsetup or clzonecluster online help for more information.

How to Display Oracle Solaris Cluster Patch Information

You do not need to be logged in as superuser to perform this procedure.

The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.

Example 1-1 Displaying Oracle Solaris Cluster Patch Information

The following example displays information about patch 110648-05.

phys-schost# showrev -p | grep 110648
Patch: 110648-05 Obsoletes:  Requires:  Incompatibles:  Packages: 

How to Display Oracle Solaris Cluster Release and Version Information

You do not need to be logged in as superuser to perform this procedure. Perform all steps of this procedure from a node of the global cluster.

The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.

Example 1-2 Displaying Oracle Solaris Cluster Release and Version Information

The following example displays the cluster's release information and version information for all packages.

phys-schost# clnode show-rev
3.2

phys-schost# clnode show-rev -v
Oracle Solaris Cluster 3.3 for Solaris 10 sparc
SUNWscu:       3.3.0,REV=2010.06.14.03.44
SUNWsccomu:    3.3.0,REV=2010.06.14.03.44
SUNWsczr:      3.3.0,REV=2010.06.14.03.44
SUNWsccomzu:   3.3.0,REV=2010.06.14.03.44
SUNWsczu:      3.3.0,REV=2010.06.14.03.44
SUNWscsckr:    3.3.0,REV=2010.06.14.03.44
SUNWscscku:    3.3.0,REV=2010.06.14.03.44
SUNWscr:       3.3.0,REV=2010.06.14.03.44
SUNWscrtlh:    3.3.0,REV=2010.06.14.03.44
SUNWscnmr:     3.3.0,REV=2010.06.14.03.44
SUNWscnmu:     3.3.0,REV=2010.06.14.03.44
SUNWscdev:     3.3.0,REV=2010.06.14.03.44
SUNWscgds:     3.3.0,REV=2010.06.14.03.44
SUNWscsmf:     3.3.0,REV=2010.06.14.03.44
SUNWscman:     3.3.0,REV=2010.05.21.18.40
SUNWscsal:     3.3.0,REV=2010.06.14.03.44
SUNWscsam:     3.3.0,REV=2010.06.14.03.44
SUNWscvm:      3.3.0,REV=2010.06.14.03.44
SUNWmdmr:      3.3.0,REV=2010.06.14.03.44
SUNWmdmu:      3.3.0,REV=2010.06.14.03.44
SUNWscmasa:    3.3.0,REV=2010.06.14.03.44
SUNWscmasar:   3.3.0,REV=2010.06.14.03.44
SUNWscmasasen: 3.3.0,REV=2010.06.14.03.44
SUNWscmasazu:  3.3.0,REV=2010.06.14.03.44
SUNWscmasau:   3.3.0,REV=2010.06.14.03.44
SUNWscmautil:  3.3.0,REV=2010.06.14.03.44
SUNWscmautilr: 3.3.0,REV=2010.06.14.03.44
SUNWjfreechart: 3.3.0,REV=2010.06.14.03.44
SUNWscspmr:    3.3.0,REV=2010.06.14.03.44
SUNWscspmu:    3.3.0,REV=2010.06.14.03.44
SUNWscderby:   3.3.0,REV=2010.06.14.03.44
SUNWsctelemetry: 3.3.0,REV=2010.06.14.03.44
SUNWscgrepavs: 3.2.3,REV=2009.10.23.12.12
SUNWscgrepsrdf: 3.2.3,REV=2009.10.23.12.12
SUNWscgreptc:  3.2.3,REV=2009.10.23.12.12
SUNWscghb:     3.2.3,REV=2009.10.23.12.12
SUNWscgctl:    3.2.3,REV=2009.10.23.12.12
SUNWscims:     6.0,REV=2003.10.29
SUNWscics:     6.0,REV=2003.11.14
SUNWscapc:     3.2.0,REV=2006.12.06.18.32
SUNWscdns:     3.2.0,REV=2006.12.06.18.32
SUNWschadb:    3.2.0,REV=2006.12.06.18.32
SUNWschtt:     3.2.0,REV=2006.12.06.18.32
SUNWscs1as:    3.2.0,REV=2006.12.06.18.32
SUNWsckrb5:    3.2.0,REV=2006.12.06.18.32
SUNWscnfs:     3.2.0,REV=2006.12.06.18.32
SUNWscor:      3.2.0,REV=2006.12.06.18.32
SUNWscs1mq:    3.2.0,REV=2006.12.06.18.32
SUNWscsap:     3.2.0,REV=2006.12.06.18.32
SUNWsclc:      3.2.0,REV=2006.12.06.18.32
SUNWscsapdb:   3.2.0,REV=2006.12.06.18.32
SUNWscsapenq:  3.2.0,REV=2006.12.06.18.32
SUNWscsaprepl: 3.2.0,REV=2006.12.06.18.32
SUNWscsapscs:  3.2.0,REV=2006.12.06.18.32
SUNWscsapwebas: 3.2.0,REV=2006.12.06.18.32
SUNWscsbl:     3.2.0,REV=2006.12.06.18.32
SUNWscsyb:     3.2.0,REV=2006.12.06.18.32
SUNWscwls:     3.2.0,REV=2006.12.06.18.32
SUNWsc9ias:    3.2.0,REV=2006.12.06.18.32
SUNWscPostgreSQL: 3.2.0,REV=2006.12.06.18.32
SUNWsczone:    3.2.0,REV=2006.12.06.18.32
SUNWscdhc:     3.2.0,REV=2006.12.06.18.32
SUNWscebs:     3.2.0,REV=2006.12.06.18.32
SUNWscmqi:     3.2.0,REV=2006.12.06.18.32
SUNWscmqs:     3.2.0,REV=2006.12.06.18.32
SUNWscmys:     3.2.0,REV=2006.12.06.18.32
SUNWscsge:     3.2.0,REV=2006.12.06.18.32
SUNWscsaa:     3.2.0,REV=2006.12.06.18.32
SUNWscsag:     3.2.0,REV=2006.12.06.18.32
SUNWscsmb:     3.2.0,REV=2006.12.06.18.32
SUNWscsps:     3.2.0,REV=2006.12.06.18.32
SUNWsctomcat:  3.2.0,REV=2006.12.06.18.32

How to Display Configured Resource Types, Resource Groups, and Resources

You can also accomplish this procedure by using the Oracle Solaris Cluster Manager GUI. Refer to Chapter 13, Administering Oracle Solaris Cluster With the Graphical User Interfaces or see the Oracle Solaris Cluster Manager online help for more information.

The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.

Before You Begin

Users other than superuser require solaris.cluster.read RBAC authorization to use this subcommand.
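As a quick check, using the standard Solaris auths(1) command and a hypothetical user name, you can confirm that a non-root user carries the required authorization:

    % auths jbloggs | tr ',' '\n' | grep solaris.cluster.read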

Example 1-3 Displaying Configured Resource Types, Resource Groups, and Resources

The following example shows the resource types (RT Name), resource groups (RG Name), and resources (RS Name) configured for the cluster schost.

phys-schost# cluster show -t resource,resourcetype,resourcegroup


=== Registered Resource Types ===

Resource Type:                                  SUNW.qfs
  RT_description:                                  SAM-QFS Agent on Oracle Solaris Cluster
  RT_version:                                      3.1
  API_version:                                     3
  RT_basedir:                                      /opt/SUNWsamfs/sc/bin
  Single_instance:                                 False
  Proxy:                                           False
  Init_nodes:                                      All potential masters
  Installed_nodes:                                 <All>
  Failover:                                        True
  Pkglist:                                         <NULL>
  RT_system:                                       False

=== Resource Groups and Resources ===

Resource Group:                                 qfs-rg
  RG_description:                                  <NULL>
  RG_mode:                                         Failover
  RG_state:                                        Managed
  Failback:                                        False
  Nodelist:                                        phys-schost-2 phys-schost-1

  --- Resources for Group qfs-rg ---

  Resource:                                     qfs-res
    Type:                                          SUNW.qfs
    Type_version:                                  3.1
    Group:                                         qfs-rg
    R_description:                                 
    Resource_project_name:                         default
    Enabled{phys-schost-2}:                        True
    Enabled{phys-schost-1}:                        True
    Monitored{phys-schost-2}:                      True
    Monitored{phys-schost-1}:                      True
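If you prefer to inspect each object type separately, a minimal sketch, assuming the standard object-oriented commands of your release, is:

    phys-schost# clresourcetype list
    phys-schost# clresourcegroup show
    phys-schost# clresource show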

How to Check the Status of Cluster Components

You can also accomplish this procedure by using the Oracle Solaris Cluster Manager GUI. See the Oracle Solaris Cluster Manager online help for more information.


Note - The cluster status command also shows the status of a zone cluster.


The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.

Before You Begin

Users other than superuser require solaris.cluster.read RBAC authorization to use the status subcommand.

Example 1-4 Checking the Status of Cluster Components

The following example provides a sample of status information for cluster components returned by cluster(1CL) status.

phys-schost# cluster status
=== Cluster Nodes ===

--- Node Status ---

Node Name                                       Status
---------                                       ------
phys-schost-1                                   Online
phys-schost-2                                   Online


=== Cluster Transport Paths ===

Endpoint1               Endpoint2               Status
---------               ---------               ------
phys-schost-1:qfe1      phys-schost-4:qfe1      Path online
phys-schost-1:hme1      phys-schost-4:hme1      Path online


=== Cluster Quorum ===

--- Quorum Votes Summary ---

            Needed   Present   Possible
            ------   -------   --------
            3        3         4


--- Quorum Votes by Node ---

Node Name       Present       Possible       Status
---------       -------       --------       ------
phys-schost-1   1             1              Online
phys-schost-2   1             1              Online


--- Quorum Votes by Device ---

Device Name           Present      Possible          Status
-----------               -------      --------      ------
/dev/did/rdsk/d2s2      1            1                Online
/dev/did/rdsk/d8s2      0            1                Offline


=== Cluster Device Groups ===

--- Device Group Status ---

Device Group Name     Primary          Secondary    Status
-----------------     -------          ---------    ------
schost-2              phys-schost-2     -           Degraded


--- Spare, Inactive, and In Transition Nodes ---

Device Group Name   Spare Nodes   Inactive Nodes   In Transistion Nodes
-----------------   -----------   --------------   --------------------
schost-2            -             -                -


=== Cluster Resource Groups ===

Group Name        Node Name      Suspended      Status
----------        ---------      ---------      ------
test-rg           phys-schost-1       No             Offline
                  phys-schost-2       No             Online

test-rg           phys-schost-1       No             Offline
                  phys-schost-2       No             Error--stop failed

test-rg           phys-schost-1       No             Online
                  phys-schost-2       No             Online


=== Cluster Resources ===

Resource Name     Node Name     Status               Message
-------------     ---------     ------               -------
test_1            phys-schost-1      Offline         Offline
                  phys-schost-2      Online          Online

test_1            phys-schost-1      Offline         Offline
                  phys-schost-2      Stop failed     Faulted

test_1            phys-schost-1      Online          Online
                  phys-schost-2      Online          Online


Device Instance             Node                     Status
---------------             ----                     ------
/dev/did/rdsk/d2            phys-schost-1            Ok

/dev/did/rdsk/d3            phys-schost-1            Ok
                            phys-schost-2            Ok

/dev/did/rdsk/d4            phys-schost-1            Ok
                            phys-schost-2            Ok

/dev/did/rdsk/d6            phys-schost-2            Ok



=== Zone Clusters ===

--- Zone Cluster Status ---

Name      Node Name   Zone HostName   Status    Zone Status
----      ---------   -------------   ------    -----------
sczone    schost-1    sczone-1        Online    Running
          schost-2    sczone-2        Online    Running
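If you want status for only certain kinds of components, the status subcommand also accepts a -t option that limits output to the specified object types. The following is a sketch; confirm the accepted type names in the cluster(1CL) man page.

    phys-schost# cluster status -t node,quorum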

How to Check the Status of the Public Network

You can also accomplish this procedure by using the Oracle Solaris Cluster Manager GUI. See the Oracle Solaris Cluster Manager online help for more information.

The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.

To check the status of the IP Network Multipathing groups, use the clnode(1CL) command with the status subcommand.

Before You Begin

Users other than superuser require solaris.cluster.read RBAC authorization to use this subcommand.

Example 1-5 Checking the Public Network Status

The following example provides a sample of status information for cluster components returned by the clnode status command.

% clnode status -m
--- Node IPMP Group Status ---

Node Name         Group Name    Status    Adapter    Status
---------         ----------    ------    -------    ------
phys-schost-1     test-rg       Online    qfe1       Online
phys-schost-2     test-rg       Online    qfe1       Online 

How to View the Cluster Configuration

You can also perform this procedure by using the Oracle Solaris Cluster Manager GUI. See the Oracle Solaris Cluster Manager online help for more information.

The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.

Before You Begin

Users other than superuser require solaris.cluster.read RBAC authorization to use the status subcommand.

Example 1-6 Viewing the Global Cluster Configuration

The following example lists configuration information about the global cluster. If you have a zone cluster configured, it also lists that information.

phys-schost# cluster show
=== Cluster ===                                

Cluster Name:                                   cluster-1
  installmode:                                     disabled
  heartbeat_timeout:                               10000
  heartbeat_quantum:                               1000
  private_netaddr:                                 172.16.0.0
  private_netmask:                                 255.255.248.0
  max_nodes:                                       64
  max_privatenets:                                 10
  global_fencing:                                  Unknown
  Node List:                                       phys-schost-1
  Node Zones:                                      phys-schost-2:za

  === Host Access Control ===                  

  Cluster name:                                 cluster-1
    Allowed hosts:                                 phys-schost-1, phys-schost-2:za
    Authentication Protocol:                       sys

  === Cluster Nodes ===                        

  Node Name:                                    phys-schost-1
    Node ID:                                       1
    Type:                                          cluster
    Enabled:                                       yes
    privatehostname:                               clusternode1-priv
    reboot_on_path_failure:                        disabled
    globalzoneshares:                              3
    defaultpsetmin:                                1
    quorum_vote:                                   1
    quorum_defaultvote:                            1
    quorum_resv_key:                               0x43CB1E1800000001
    Transport Adapter List:                        qfe3, hme0

    --- Transport Adapters for phys-schost-1 ---    

    Transport Adapter:                          qfe3
      Adapter State:                               Enabled
      Adapter Transport Type:                      dlpi
      Adapter Property(device_name):               qfe
      Adapter Property(device_instance):           3
      Adapter Property(lazy_free):                 1
      Adapter Property(dlpi_heartbeat_timeout):    10000
      Adapter Property(dlpi_heartbeat_quantum):    1000
      Adapter Property(nw_bandwidth):              80
      Adapter Property(bandwidth):                 10
      Adapter Property(ip_address):                172.16.1.1
      Adapter Property(netmask):                   255.255.255.128
      Adapter Port Names:                          0
      Adapter Port State(0):                       Enabled

    Transport Adapter:                          hme0
      Adapter State:                               Enabled
      Adapter Transport Type:                      dlpi
      Adapter Property(device_name):               hme
      Adapter Property(device_instance):           0
      Adapter Property(lazy_free):                 0
      Adapter Property(dlpi_heartbeat_timeout):    10000
      Adapter Property(dlpi_heartbeat_quantum):    1000
      Adapter Property(nw_bandwidth):              80
      Adapter Property(bandwidth):                 10
      Adapter Property(ip_address):                172.16.0.129
      Adapter Property(netmask):                   255.255.255.128
      Adapter Port Names:                          0
      Adapter Port State(0):                       Enabled

    --- SNMP MIB Configuration on phys-schost-1 --- 

    SNMP MIB Name:                              Event
      State:                                       Disabled
      Protocol:                                    SNMPv2

    --- SNMP Host Configuration on phys-schost-1 ---

    --- SNMP User Configuration on phys-schost-1 ---

    SNMP User Name:                             foo
      Authentication Protocol:                     MD5
      Default User:                                No

  Node Name:                                    phys-schost-2:za
    Node ID:                                       2
    Type:                                          cluster
    Enabled:                                       yes
    privatehostname:                               clusternode2-priv
    reboot_on_path_failure:                        disabled
    globalzoneshares:                              1
    defaultpsetmin:                                2
    quorum_vote:                                   1
    quorum_defaultvote:                            1
    quorum_resv_key:                               0x43CB1E1800000002
    Transport Adapter List:                        hme0, qfe3

    --- Transport Adapters for phys-schost-2 ---    

    Transport Adapter:                          hme0
      Adapter State:                               Enabled
      Adapter Transport Type:                      dlpi
      Adapter Property(device_name):               hme
      Adapter Property(device_instance):           0
      Adapter Property(lazy_free):                 0
      Adapter Property(dlpi_heartbeat_timeout):    10000
      Adapter Property(dlpi_heartbeat_quantum):    1000
      Adapter Property(nw_bandwidth):              80
      Adapter Property(bandwidth):                 10
      Adapter Property(ip_address):                172.16.0.130
      Adapter Property(netmask):                   255.255.255.128
      Adapter Port Names:                          0
      Adapter Port State(0):                       Enabled

    Transport Adapter:                          qfe3
      Adapter State:                               Enabled
      Adapter Transport Type:                      dlpi
      Adapter Property(device_name):               qfe
      Adapter Property(device_instance):           3
      Adapter Property(lazy_free):                 1
      Adapter Property(dlpi_heartbeat_timeout):    10000
      Adapter Property(dlpi_heartbeat_quantum):    1000
      Adapter Property(nw_bandwidth):              80
      Adapter Property(bandwidth):                 10
      Adapter Property(ip_address):                172.16.1.2
      Adapter Property(netmask):                   255.255.255.128
      Adapter Port Names:                          0
      Adapter Port State(0):                       Enabled

    --- SNMP MIB Configuration on phys-schost-2 --- 

    SNMP MIB Name:                              Event
      State:                                       Disabled
      Protocol:                                    SNMPv2

    --- SNMP Host Configuration on phys-schost-2 ---

    --- SNMP User Configuration on phys-schost-2 ---

  === Transport Cables ===                     

  Transport Cable:                              phys-schost-1:qfe3,switch2@1
    Cable Endpoint1:                               phys-schost-1:qfe3
    Cable Endpoint2:                               switch2@1
    Cable State:                                   Enabled

  Transport Cable:                              phys-schost-1:hme0,switch1@1
    Cable Endpoint1:                               phys-schost-1:hme0
    Cable Endpoint2:                               switch1@1
    Cable State:                                   Enabled

  Transport Cable:                              phys-schost-2:hme0,switch1@2
    Cable Endpoint1:                               phys-schost-2:hme0
    Cable Endpoint2:                               switch1@2
    Cable State:                                   Enabled

  Transport Cable:                              phys-schost-2:qfe3,switch2@2
    Cable Endpoint1:                               phys-schost-2:qfe3
    Cable Endpoint2:                               switch2@2
    Cable State:                                   Enabled

  === Transport Switches ===                   

  Transport Switch:                             switch2
    Switch State:                                  Enabled
    Switch Type:                                   switch
    Switch Port Names:                             1 2
    Switch Port State(1):                          Enabled
    Switch Port State(2):                          Enabled

  Transport Switch:                             switch1
    Switch State:                                  Enabled
    Switch Type:                                   switch
    Switch Port Names:                             1 2
    Switch Port State(1):                          Enabled
    Switch Port State(2):                          Enabled


  === Quorum Devices ===                       

  Quorum Device Name:                           d3
    Enabled:                                       yes
    Votes:                                         1
    Global Name:                                   /dev/did/rdsk/d3s2
    Type:                                          scsi
    Access Mode:                                   scsi2
    Hosts (enabled):                               phys-schost-1, phys-schost-2

  Quorum Device Name:                           qs1
    Enabled:                                       yes
    Votes:                                         1
    Global Name:                                   qs1
    Type:                                          quorum_server
    Hosts (enabled):                               phys-schost-1, phys-schost-2
    Quorum Server Host:                            10.11.114.83
    Port:                                          9000


  === Device Groups ===                        

  Device Group Name:                            testdg3
    Type:                                          SVM
    failback:                                      no
    Node List:                                     phys-schost-1, phys-schost-2
    preferenced:                                   yes
    numsecondaries:                                1
    diskset name:                                  testdg3

  === Registered Resource Types ===            

  Resource Type:                                SUNW.LogicalHostname:2
    RT_description:                                Logical Hostname Resource Type
    RT_version:                                    2
    API_version:                                   2
    RT_basedir:                                    /usr/cluster/lib/rgm/rt/hafoip
    Single_instance:                               False
    Proxy:                                         False
    Init_nodes:                                    All potential masters
    Installed_nodes:                               <All>
    Failover:                                      True
    Pkglist:                                       SUNWscu
    RT_system:                                     True

  Resource Type:                                SUNW.SharedAddress:2
    RT_description:                                HA Shared Address Resource Type
    RT_version:                                    2
    API_version:                                   2
    RT_basedir:                                    /usr/cluster/lib/rgm/rt/hascip
    Single_instance:                               False
    Proxy:                                         False
    Init_nodes:                                    <Unknown>
    Installed_nodes:                              <All>
    Failover:                                      True
    Pkglist:                                       SUNWscu
    RT_system:                                     True

  Resource Type:                                SUNW.HAStoragePlus:4
    RT_description:                                HA Storage Plus
    RT_version:                                    4
    API_version:                                   2
    RT_basedir:                                    /usr/cluster/lib/rgm/rt/hastorageplus
    Single_instance:                               False
    Proxy:                                         False
    Init_nodes:                                    All potential masters
    Installed_nodes:                               <All>
    Failover:                                      False
    Pkglist:                                       SUNWscu
    RT_system:                                     False

  Resource Type:                                SUNW.haderby
    RT_description:                                haderby server for Oracle Solaris Cluster
    RT_version:                                    1
    API_version:                                   7
    RT_basedir:                                    /usr/cluster/lib/rgm/rt/haderby
    Single_instance:                               False
    Proxy:                                         False
    Init_nodes:                                    All potential masters
    Installed_nodes:                               <All>
    Failover:                                      False
    Pkglist:                                       SUNWscderby
    RT_system:                                     False

  Resource Type:                                SUNW.sctelemetry
    RT_description:                                sctelemetry service for Oracle Solaris Cluster
    RT_version:                                    1
    API_version:                                   7
    RT_basedir:                                    /usr/cluster/lib/rgm/rt/sctelemetry
    Single_instance:                               True
    Proxy:                                         False
    Init_nodes:                                    All potential masters
    Installed_nodes:                               <All>
    Failover:                                      False
    Pkglist:                                       SUNWsctelemetry
    RT_system:                                     False

  === Resource Groups and Resources ===        

  Resource Group:                               HA_RG
    RG_description:                                <Null>
    RG_mode:                                       Failover
    RG_state:                                      Managed
    Failback:                                      False
    Nodelist:                                      phys-schost-1 phys-schost-2

    --- Resources for Group HA_RG ---          

    Resource:                                   HA_R
      Type:                                        SUNW.HAStoragePlus:4
      Type_version:                                4
      Group:                                       HA_RG
      R_description:                               
      Resource_project_name:                       SCSLM_HA_RG
      Enabled{phys-schost-1}:                      True
      Enabled{phys-schost-2}:                      True
      Monitored{phys-schost-1}:                    True
      Monitored{phys-schost-2}:                    True

  Resource Group:                               cl-db-rg
    RG_description:                                <Null>
    RG_mode:                                       Failover
    RG_state:                                      Managed
    Failback:                                      False
    Nodelist:                                      phys-schost-1 phys-schost-2

    --- Resources for Group cl-db-rg ---       

    Resource:                                   cl-db-rs
      Type:                                        SUNW.haderby
      Type_version:                                1
      Group:                                       cl-db-rg
      R_description:                               
      Resource_project_name:                       default
      Enabled{phys-schost-1}:                      True
      Enabled{phys-schost-2}:                      True
      Monitored{phys-schost-1}:                    True
      Monitored{phys-schost-2}:                    True

  Resource Group:                               cl-tlmtry-rg
    RG_description:                                <Null>
    RG_mode:                                       Scalable
    RG_state:                                      Managed
    Failback:                                      False
    Nodelist:                                      phys-schost-1 phys-schost-2

    --- Resources for Group cl-tlmtry-rg ---   

    Resource:                                   cl-tlmtry-rs
      Type:                                        SUNW.sctelemetry
      Type_version:                                1
      Group:                                       cl-tlmtry-rg
      R_description:                               
      Resource_project_name:                       default
      Enabled{phys-schost-1}:                      True
      Enabled{phys-schost-2}:                      True
      Monitored{phys-schost-1}:                    True
      Monitored{phys-schost-2}:                    True

  === DID Device Instances ===                 

  DID Device Name:                              /dev/did/rdsk/d1
    Full Device Path:                              phys-schost-1:/dev/rdsk/c0t2d0
    Replication:                                   none
    default_fencing:                               global

  DID Device Name:                              /dev/did/rdsk/d2
    Full Device Path:                              phys-schost-1:/dev/rdsk/c1t0d0
    Replication:                                   none
    default_fencing:                               global

  DID Device Name:                              /dev/did/rdsk/d3
    Full Device Path:                              phys-schost-2:/dev/rdsk/c2t1d0
    Full Device Path:                              phys-schost-1:/dev/rdsk/c2t1d0
    Replication:                                   none
    default_fencing:                               global

  DID Device Name:                              /dev/did/rdsk/d4
    Full Device Path:                              phys-schost-2:/dev/rdsk/c2t2d0
    Full Device Path:                              phys-schost-1:/dev/rdsk/c2t2d0
    Replication:                                   none
    default_fencing:                               global

  DID Device Name:                              /dev/did/rdsk/d5
    Full Device Path:                              phys-schost-2:/dev/rdsk/c0t2d0
    Replication:                                   none
    default_fencing:                               global

  DID Device Name:                              /dev/did/rdsk/d6
    Full Device Path:                              phys-schost-2:/dev/rdsk/c1t0d0
    Replication:                                   none
    default_fencing:                               global

  === NAS Devices ===                          

  Nas Device:                                   nas_filer1
    Type:                                          netapp
    User ID:                                       root

  Nas Device:                                   nas2
    Type:                                          netapp
    User ID:                                       llai

Example 1-7 Viewing the Zone Cluster Configuration

The following example lists the properties of the zone cluster configuration.

% clzonecluster show
=== Zone Clusters ===

Zone Cluster Name:                              sczone
  zonename:                                        sczone
  zonepath:                                        /zones/sczone
  autoboot:                                        TRUE
  ip-type:                                         shared
  enable_priv_net:                                 TRUE

  --- Solaris Resources for sczone ---

  Resource Name:                                net
    address:                                       172.16.0.1
    physical:                                      auto

  Resource Name:                                net
    address:                                       172.16.0.2
    physical:                                      auto

  Resource Name:                                fs
    dir:                                           /gz/db_qfs/CrsHome
    special:                                       CrsHome
    raw:
    type:                                          samfs
    options:                                       []


  Resource Name:                                fs
    dir:                                           /gz/db_qfs/CrsData
    special:                                       CrsData
    raw:
    type:                                          samfs
    options:                                       []


  Resource Name:                                fs
    dir:                                           /gz/db_qfs/OraHome
    special:                                       OraHome
    raw:
    type:                                          samfs
    options:                                       []


  Resource Name:                                fs
    dir:                                           /gz/db_qfs/OraData
    special:                                       OraData
    raw:
    type:                                          samfs
    options:                                       []


  --- Zone Cluster Nodes for sczone ---

  Node Name:                                    sczone-1
    physical-host:                                 sczone-1
    hostname:                                      lzzone-1

  Node Name:                                    sczone-2
    physical-host:                                 sczone-2
    hostname:                                      lzzone-2

You can also view the NAS devices that are configured for global or zone clusters by using the clnasdevice show subcommand or Oracle Solaris Cluster Manager. See the clnasdevice(1CL) man page for more information.
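For example, a minimal invocation on a global-cluster node, whose output depends on the NAS devices that are configured, is:

    phys-schost# clnasdevice show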

How to Validate a Basic Cluster Configuration

The check subcommand of the cluster(1CL) command validates the basic configuration that is required for a global cluster to function properly. If no checks fail, cluster check returns to the shell prompt. If a check fails, cluster check produces reports in either the specified or the default output directory. If you run cluster check against more than one node, cluster check produces a report for each node and a report for multinode checks. You can also use the cluster list-checks command to display a list of all available cluster checks.

Beginning in the Oracle Solaris Cluster 3.3 5/11 release, the cluster check command is enhanced with new types of checks. In addition to basic checks, which run without user interaction, the command can also run interactive checks and functional checks. Basic checks are run when the -k keyword option is not specified.

You can run the cluster check command in verbose mode with the -v flag to display progress information.


Note - Run cluster check after performing an administration procedure that might result in changes to devices, volume management components, or the Oracle Solaris Cluster configuration.


Running the clzonecluster(1CL) verify command at the global-cluster voting node runs a set of checks to validate the configuration that is required for a zone cluster to function properly. If all checks pass, clzonecluster verify returns to the shell prompt and you can safely install the zone cluster. If a check fails, clzonecluster verify reports on the global-cluster nodes where the verification failed. If you run clzonecluster verify against more than one node, a report is produced for each node, as well as a report for multinode checks. The verify subcommand is not allowed inside a zone cluster.

  1. Become superuser on an active member node of a global cluster. Perform all steps of this procedure from a node of the global cluster.
    phys-schost# su
  2. Ensure that you have the most current checks.

    Go to the Patches & Updates tab of My Oracle Support. Using the Advanced Search, select “Solaris Cluster” as the Product and specify “check” in the Description field to locate Oracle Solaris Cluster patches that contain checks. Apply any patches that are not already installed on your cluster.

  3. Run basic validation checks.
    # cluster check -v -o outputdir
    -v

    Verbose mode

    -o outputdir

    Redirects output to the outputdir subdirectory.

    The command runs all available basic checks. No cluster functionality is affected.

  4. Run interactive validation checks.
    # cluster check -v -k interactive -o outputdir
    -k interactive

    Specifies running interactive validation checks

    The command runs all available interactive checks and prompts you for needed information about the cluster. No cluster functionality is affected.

  5. Run functional validation checks.
    1. List all available functional checks in nonverbose mode.
      # cluster list-checks -k functional
    2. Determine which functional checks perform actions that would interfere with cluster availability or services in a production environment.

      For example, a functional check might trigger a node panic or a failover to another node.

      # cluster list-checks -v -C checkID
      -C checkID

      Specifies a specific check.

    3. If the functional check that you want to perform might interrupt cluster functioning, ensure that the cluster is not in production.
    4. Start the functional check.
      # cluster check -v -k functional -C checkid -o outputdir
      -k functional

      Specifies running functional validation checks

      Respond to the check's prompts to confirm that the check should run, and provide any information or perform any actions that the check requests.

    5. Repeat Step c and Step d for each remaining functional check to run.

      Note - For record-keeping purposes, specify a unique outputdir subdirectory name for each check you run. If you reuse an outputdir name, output for the new check overwrites the existing contents of the reused outputdir subdirectory.


  6. Verify the configuration of the zone cluster to see if a zone cluster can be installed.
    phys-schost# clzonecluster verify zoneclustername
  7. Make a recording of the cluster configuration for future diagnostic purposes.

    See How to Record Diagnostic Data of the Cluster Configuration in Oracle Solaris Cluster Software Installation Guide.

Example 1-8 Checking the Global Cluster Configuration With All Basic Checks Passing

The following example shows cluster check run in verbose mode against nodes phys-schost-1 and phys-schost-2 with all checks passing.

phys-schost# cluster check -v -h phys-schost-1,phys-schost-2

cluster check: Requesting explorer data and node report from phys-schost-1.
cluster check: Requesting explorer data and node report from phys-schost-2.
cluster check: phys-schost-1: Explorer finished.
cluster check: phys-schost-1: Starting single-node checks.
cluster check: phys-schost-1: Single-node checks finished.
cluster check: phys-schost-2: Explorer finished.
cluster check: phys-schost-2: Starting single-node checks.
cluster check: phys-schost-2: Single-node checks finished.
cluster check: Starting multi-node checks.
cluster check: Multi-node checks finished
# 

Example 1-9 Listing Interactive Validation Checks

The following example lists all interactive checks that are available to run on the cluster. Example output shows a sampling of possible checks; actual available checks vary for each configuration.

# cluster list-checks -k interactive
 Some checks might take a few moments to run (use -v to see progress)...
 I6994574  :   (Moderate)   Fix for GLDv3 interfaces on cluster transport vulnerability applied?

Example 1-10 Running a Functional Validation Check

The following example first shows the verbose listing of functional checks. The verbose description is then listed for the check F6968101, which indicates that the check would disrupt cluster services. The cluster is taken out of production. The functional check is then run with verbose output logged to the funct.test.F6968101.12Jan2011 subdirectory. Example output shows a sampling of possible checks; actual available checks vary for each configuration.

# cluster list-checks -k functional
 F6968101  :   (Critical)   Perform resource group switchover
 F6984120  :   (Critical)   Induce cluster transport network failure - single adapter.
 F6984121  :   (Critical)   Perform cluster shutdown
 F6984140  :   (Critical)   Induce node panic
…

# cluster list-checks -v -C F6968101
 F6968101: (Critical) Perform resource group switchover
Keywords: SolarisCluster3.x, functional
Applicability: Applicable if multi-node cluster running live.
Check Logic: Select a resource group and destination node. Perform 
'/usr/cluster/bin/clresourcegroup switch' on specified resource group 
either to specified node or to all nodes in succession.
Version: 1.2
Revision Date: 12/10/10 

Take the cluster out of production

# cluster check -k functional -C F6968101 -o funct.test.F6968101.12Jan2011
F6968101 
  initializing...
  initializing xml output...
  loading auxiliary data...
  starting check run...
     pschost1, pschost2, pschost3, pschost4:     F6968101.... starting:  
Perform resource group switchover           


  ============================================================

   >>> Functional Check <<<

    'Functional' checks exercise cluster behavior. It is recommended that you
    do not run this check on a cluster in production mode.' It is recommended
    that you have access to the system console for each cluster node and
    observe any output on the consoles while the check is executed.

    If the node running this check is brought down during execution the check
    must be rerun from this same node after it is rebooted into the cluster in
    order for the check to be completed.

    Select 'continue' for more details on this check.

          1) continue
          2) exit

          choice: 1


  ============================================================

   >>> Check Description <<<
…
Follow onscreen directions

Example 1-11 Checking the Global Cluster Configuration With a Failed Check

The following example shows that the node phys-schost-2 in the cluster named suncluster is missing the mount point /global/phys-schost-1. Reports are created in the output directory /var/cluster/logs/cluster_check/<timestamp>.

phys-schost# cluster check -v -h phys-schost-1,phys-schost-2 -o /var/cluster/logs/cluster_check/Dec5/

cluster check: Requesting explorer data and node report from phys-schost-1.
cluster check: Requesting explorer data and node report from phys-schost-2.
cluster check: phys-schost-1: Explorer finished.
cluster check: phys-schost-1: Starting single-node checks.
cluster check: phys-schost-1: Single-node checks finished.
cluster check: phys-schost-2: Explorer finished.
cluster check: phys-schost-2: Starting single-node checks.
cluster check: phys-schost-2: Single-node checks finished.
cluster check: Starting multi-node checks.
cluster check: Multi-node checks finished.
cluster check: One or more checks failed.
cluster check: The greatest severity of all check failures was 3 (HIGH).
cluster check: Reports are in /var/cluster/logs/cluster_check/<Dec5>.
# 
# cat /var/cluster/logs/cluster_check/Dec5/cluster_check-results.suncluster.txt
...
===================================================
= ANALYSIS DETAILS =
===================================================
------------------------------------
CHECK ID : 3065
SEVERITY : HIGH
FAILURE  : Global filesystem /etc/vfstab entries are not consistent across 
all Oracle Solaris  Cluster 3.x nodes.
ANALYSIS : The global filesystem /etc/vfstab entries are not consistent across 
all nodes in this cluster.
Analysis indicates:
FileSystem '/global/phys-schost-1' is on 'phys-schost-1' but missing from 'phys-schost-2'.
RECOMMEND: Ensure each node has the correct /etc/vfstab entry for the 
filesystem(s) in question.
...
 #

How to Check the Global Mount Points

The cluster(1CL) command includes checks that examine the /etc/vfstab file for configuration errors with the cluster file system and its global mount points.


Note - Run cluster check after making cluster configuration changes that have affected devices or volume management components.


  1. Become superuser on an active member node of a global cluster.

    Perform all steps of this procedure from a node of the global cluster.

    % su
  2. Verify the global cluster configuration.
    phys-schost# cluster check

Example 1-12 Checking the Global Mount Points

The following example shows that the node phys-schost-2 of the cluster named suncluster is missing the mount point /global/schost-1. Reports are sent to the output directory /var/cluster/logs/cluster_check/<timestamp>/.

phys-schost# cluster check -v -h phys-schost-1,phys-schost-2 -o /var/cluster/logs/cluster_check/Dec5/

cluster check: Requesting explorer data and node report from phys-schost-1.
cluster check: Requesting explorer data and node report from phys-schost-2.
cluster check: phys-schost-1: Explorer finished.
cluster check: phys-schost-1: Starting single-node checks.
cluster check: phys-schost-1: Single-node checks finished.
cluster check: phys-schost-2: Explorer finished.
cluster check: phys-schost-2: Starting single-node checks.
cluster check: phys-schost-2: Single-node checks finished.
cluster check: Starting multi-node checks.
cluster check: Multi-node checks finished.
cluster check: One or more checks failed.
cluster check: The greatest severity of all check failures was 3 (HIGH).
cluster check: Reports are in /var/cluster/logs/cluster_check/Dec5.
# 
# cat /var/cluster/logs/cluster_check/Dec5/cluster_check-results.suncluster.txt

...
===================================================
= ANALYSIS DETAILS =
===================================================
------------------------------------
CHECK ID : 3065
SEVERITY : HIGH
FAILURE  : Global filesystem /etc/vfstab entries are not consistent across 
all Oracle Solaris Cluster 3.x nodes.
ANALYSIS : The global filesystem /etc/vfstab entries are not consistent across 
all nodes in this cluster.
Analysis indicates:
FileSystem '/global/phys-schost-1' is on 'phys-schost-1' but missing from 'phys-schost-2'.
RECOMMEND: Ensure each node has the correct /etc/vfstab entry for the 
filesystem(s) in question.
...
#
# cat /var/cluster/logs/cluster_check/Dec5/cluster_check-results.phys-schost-1.txt

...
===================================================
= ANALYSIS DETAILS =
===================================================
------------------------------------
CHECK ID : 1398
SEVERITY : HIGH
FAILURE  : An unsupported server is being used as an Oracle Solaris Cluster 3.x node.
ANALYSIS : This server may not have been qualified to be used as an Oracle Solaris Cluster 3.x node.
Only servers that have been qualified with Oracle Solaris Cluster 3.x are supported as 
Oracle Solaris Cluster 3.x nodes.
RECOMMEND: Because the list of supported servers is always being updated, check with 
your Oracle representative to get the latest information on what servers 
are currently supported and only use a server that is supported with Oracle Solaris Cluster 3.x.
...
#

How to View the Contents of Oracle Solaris Cluster Command Logs

The /var/cluster/logs/commandlog ASCII text file contains records of selected Oracle Solaris Cluster commands that are executed in a cluster. The logging of commands starts automatically when you set up the cluster and ends when you shut down the cluster. Commands are logged on all nodes that are up and booted in cluster mode.

Commands that are not logged in this file include those commands that display the configuration and current state of the cluster.

Commands that are logged in this file include those commands that configure and change the current state of the cluster.

Records in the commandlog file contain the date and time stamp, the host name of the node from which the command was executed, the process ID of the command, the login name of the user who executed the command, and the command that the user executed, including all options and operands. Each logged command is bracketed by a START record and an END record, as shown in Example 1-13.

By default, the commandlog file is regularly archived once a week. To change the archiving policies for the commandlog file, on each node in the cluster, use the crontab command. See the crontab(1) man page for more information.
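For example, to review the existing root crontab entries on a node before you change the archiving schedule, run the following command; whether an entry for commandlog archiving appears, and what it looks like, varies by installation:

    phys-schost# crontab -l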

Oracle Solaris Cluster software maintains up to eight previously archived commandlog files on each cluster node at any given time. The commandlog file for the current week is named commandlog. The most recent complete week's file is named commandlog.0. The oldest complete week's file is named commandlog.7.
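For example, to list the current and archived command log files on a node:

    phys-schost# ls -l /var/cluster/logs/commandlog*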

Example 1-13 Viewing the Contents of Oracle Solaris Cluster Command Logs

The following example shows the contents of the commandlog file that are displayed by the more command.

more -10 /var/cluster/logs/commandlog
11/11/2006 09:42:51 phys-schost-1 5222 root START - clsetup
11/11/2006 09:43:36 phys-schost-1 5758 root START - clrg add "app-sa-1"
11/11/2006 09:43:36 phys-schost-1 5758 root END 0
11/11/2006 09:43:36 phys-schost-1 5760 root START - clrg set -y
"RG_description=Department Shared Address RG" "app-sa-1"
11/11/2006 09:43:37 phys-schost-1 5760 root END 0
11/11/2006 09:44:15 phys-schost-1 5810 root START - clrg online "app-sa-1"
11/11/2006 09:44:15 phys-schost-1 5810 root END 0
11/11/2006 09:44:19 phys-schost-1 5222 root END -20988320
12/02/2006 14:37:21 phys-schost-1 5542 jbloggs START - clrg -c -g "app-sa-1"
-y "RG_description=Joe Bloggs Shared Address RG"
12/02/2006 14:37:22 phys-schost-1 5542 jbloggs END 0