Sun Cluster System Administration Guide for Solaris OS

Beginning to Administer the Cluster

Table 1–2 provides a starting point for administering your cluster.


Note –

The Sun Cluster commands that you run only from the global-cluster voting node are not valid for use with zone clusters. See the appropriate Sun Cluster man page for information about the valid use of a command in zones.


Table 1–2 Sun Cluster 3.2 Administration Tools

Task 

Tool 

Instructions 

Log in to the cluster remotely 

Use the ccp command to launch the Cluster Control Panel (CCP). Then select one of the following icons: cconsole, crlogin, cssh, or ctelnet.

How to Log Into the Cluster Remotely

How to Connect Securely to Cluster Consoles

Configure the cluster interactively 

Start the clzonecluster(1CL) utility or the clsetup(1CL) utility.

How to Access the Cluster Configuration Utilities

Display Sun Cluster release number and version information 

Use the clnode(1CL) command with the show-rev subcommand and the -v option.

How to Display Sun Cluster Release and Version Information

Display installed resources, resource groups, and resource types 

Use the clresource(1CL), clresourcegroup(1CL), and clresourcetype(1CL) commands with the show subcommand to display the resource information. 

How to Display Configured Resource Types, Resource Groups, and Resources

Monitor cluster components graphically 

Use Sun Cluster Manager. 

See online help 

Administer some cluster components graphically 

Use Sun Cluster Manager or the Sun Cluster module for Sun Management Center, which is available only with Sun Cluster on SPARC-based systems. 

For Sun Cluster Manager, see online help. 

For Sun Management Center, see Sun Management Center documentation. 

Check the status of cluster components 

Use the cluster(1CL) command with the status subcommand.

How to Check the Status of Cluster Components

Check the status of IP network multipathing groups on the public network 

For a global cluster, use the clnode(1CL) status command with the -m option.

For a zone cluster, use the clzonecluster(1CL) show command.

How to Check the Status of the Public Network

View the cluster configuration 

For a global cluster, use the cluster(1CL) command with the show subcommand.

For a zone cluster, use the clzonecluster(1CL) command with the show subcommand.

How to View the Cluster Configuration

Check global mount points or verify the cluster configuration 

For a global cluster, use the cluster(1CL) command with the check subcommand.

For a zone cluster, use the clzonecluster(1CL) verify command.

How to Validate a Basic Cluster Configuration

Look at the contents of Sun Cluster command logs 

Examine the /var/cluster/logs/commandlog file.

How to View the Contents of Sun Cluster Command Logs

Look at Sun Cluster system messages 

Examine the /var/adm/messages file.

Viewing System Messages in System Administration Guide: Advanced Administration

Monitor the status of Solstice DiskSuite 

Use the metastat command.

Solaris Volume Manager documentation 

Monitor the status of Solaris Volume Manager if running Solaris 9 or Solaris 10 

Use the metastat command.

Solaris Volume Manager Administration Guide

How to Log Into the Cluster Remotely

The Cluster Control Panel (CCP) provides a launchpad for the cconsole, crlogin, cssh, and ctelnet tools. All tools start a multiple-window connection to a set of specified nodes. The multiple-window connection consists of a host window for each of the specified nodes and a common window. Input to the common window is sent to each of the host windows, enabling you to run commands simultaneously on all nodes of the cluster.

You can also start cconsole, crlogin, cssh, or ctelnet sessions from the command line.

By default, the cconsole utility uses a telnet connection to the node consoles. To establish secure shell connections to the consoles instead, enable the Use SSH checkbox in the Options menu of the cconsole window. Or, specify the -s option when you issue the ccp or cconsole command.

See the ccp(1M) and cconsole(1M) man pages for more information.
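For example, the following commands (an illustrative sketch; clustername is a placeholder for the name of your cluster as defined in the clusters file) start the cconsole utility directly from the administrative console, first with the default telnet connection and then with secure shell:

# cconsole clustername
# cconsole -s clustername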

The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands.

Before You Begin

Verify that the following prerequisites are met before starting the CCP:

  1. If you have a Sun Enterprise 10000 server platform, log in to the System Service Processor (SSP).

    1. Connect by using the netcon command.

    2. After the connection is made, type Shift~@ to unlock the console and gain write access.

  2. From the administrative console, start the CCP launchpad.


    phys-schost# ccp clustername
    

    The CCP launchpad is displayed.

  3. To start a remote session with the cluster, click the cconsole icon, crlogin icon, cssh icon, or ctelnet icon in the CCP launch pad.

How to Connect Securely to Cluster Consoles

Perform this procedure to establish secure shell connections to the consoles of the cluster nodes.

Before You Begin

Configure the clusters file, the serialports file, and the nsswitch.conf file if you are using a terminal concentrator. The files can be either /etc files or NIS or NIS+ databases.


Note –

In the serialports file, assign the port number to use for secure connection to each console-access device. The default port number for secure shell connection is 22.


See the clusters(4) and serialports(4) man pages for more information.
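The following is a hypothetical serialports layout (node names and console-access device names are placeholders), assuming the whitespace-separated host, console-access device, and port fields that are described in the serialports(4) man page, with port 22 assigned for secure shell connections:

phys-schost-1 ca-dev-1 22
phys-schost-2 ca-dev-2 22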

  1. Become superuser on the administrative console.

  2. Start the cconsole utility in secure mode.


    # cconsole -s [-l username] [-p ssh-port]
    
    -s

    Enables secure shell connection.

    -l username

    Specifies the user name for the remote connections. If the -l option is not specified, the user name that launched the cconsole utility is used.

    -p ssh-port

    Specifies the secure shell port number to use. If the -p option is not specified, the default port number 22 is used for the secure connections.
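    For example, a hypothetical invocation (the user name admin and the cluster name clustername are placeholders) that opens secure console connections as a specific user on the default secure shell port might look like the following:

    # cconsole -s -l admin -p 22 clustername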

How to Access the Cluster Configuration Utilities

The clsetup utility enables you to interactively configure quorum, resource group, cluster transport, private hostname, device group, and new node options for the global cluster. The clzonecluster utility performs similar configuration tasks for a zone cluster. For more information, see the clsetup(1CL) and clzonecluster(1CL) man pages.

The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands.

  1. Become superuser on an active member node of a global cluster. Perform all steps of this procedure from a node of the global cluster.

  2. Start the configuration utility.


    • For a global cluster, start the utility with the clsetup command.


      phys-schost# clsetup
      

      The Main Menu is displayed.

    • For a zone cluster, start the utility with the clzonecluster command. The zone cluster in this example is sczone.


      phys-schost# clzonecluster configure sczone
      

      You can view the available actions in the utility with the following option:


      clzc:sczone> ? 
      
  3. Choose your configuration from the menu. Follow the onscreen instructions to complete a task. For more detail, see the instructions in Configuring a Zone Cluster in Sun Cluster Software Installation Guide for Solaris OS.

See Also

See the clsetup or clzonecluster online help for more information.

How to Display Sun Cluster Patch Information

You do not need to be logged in as superuser to perform this procedure.

The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands.

  1. Display the Sun Cluster patch information:


    phys-schost# showrev -p
    

    Sun Cluster update releases are identified by the main product patch number plus the update version.


Example 1–1 Displaying Sun Cluster Patch Information

The following example displays information about patch 110648-05.


phys-schost# showrev -p | grep 110648
Patch: 110648-05 Obsoletes:  Requires:  Incompatibles:  Packages: 

How to Display Sun Cluster Release and Version Information

You do not need to be logged in as superuser to perform this procedure. Perform all steps of this procedure from a node of the global cluster.

The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands.

  1. Display Sun Cluster release and version information:


    phys-schost# clnode show-rev -v
    

    This command displays Sun Cluster release number and version strings for all Sun Cluster packages.


Example 1–2 Displaying Sun Cluster Release and Version Information

The following example displays the cluster's release information and version information for all packages.


phys-schost# clnode show-rev
3.2

phys-schost# clnode show-rev -v
Sun Cluster 3.2 for Solaris 9 sparc

SUNWscr:       3.2.0,REV=2006.02.17.18.11
SUNWscu:       3.2.0,REV=2006.02.17.18.11
SUNWsczu:      3.2.0,REV=2006.02.17.18.11
SUNWscsck:     3.2.0,REV=2006.02.17.18.11
SUNWscnm:      3.2.0,REV=2006.02.17.18.11
SUNWscdev:     3.2.0,REV=2006.02.17.18.11
SUNWscgds:     3.2.0,REV=2006.02.17.18.11
SUNWscman:     3.2.0,REV=2005.10.18.08.42
SUNWscsal:     3.2.0,REV=2006.02.17.18.11
SUNWscsam:     3.2.0,REV=2006.02.17.18.11
SUNWscvm:      3.2.0,REV=2006.02.17.18.11
SUNWmdm:       3.2.0,REV=2006.02.17.18.11
SUNWscmasa:    3.2.0,REV=2006.02.17.18.11
SUNWscmautil:  3.2.0,REV=2006.02.17.18.11
SUNWscmautilr: 3.2.0,REV=2006.02.17.18.11
SUNWjfreechart: 3.2.0,REV=2006.02.17.18.11
SUNWscva:      3.2.0,REV=2006.02.17.18.11
SUNWscspm:     3.2.0,REV=2006.02.17.18.11
SUNWscspmu:    3.2.0,REV=2006.02.17.18.11
SUNWscspmr:    3.2.0,REV=2006.02.17.18.11
SUNWscderby:   3.2.0,REV=2006.02.17.18.11
SUNWsctelemetry: 3.2.0,REV=2006.02.17.18.11
SUNWscrsm:     3.2.0,REV=2006.02.17.18.11
SUNWcsc:       3.2.0,REV=2006.02.21.10.16
SUNWcscspm:    3.2.0,REV=2006.02.21.10.16
SUNWcscspmu:   3.2.0,REV=2006.02.21.10.16
SUNWdsc:       3.2.0,REV=2006.02.21.10.09
SUNWdscspm:    3.2.0,REV=2006.02.21.10.09
SUNWdscspmu:   3.2.0,REV=2006.02.21.10.09
SUNWesc:       3.2.0,REV=2006.02.21.10.11
SUNWescspm:    3.2.0,REV=2006.02.21.10.11
SUNWescspmu:   3.2.0,REV=2006.02.21.10.11
SUNWfsc:       3.2.0,REV=2006.02.21.10.06
SUNWfscspm:    3.2.0,REV=2006.02.21.10.06
SUNWfscspmu:   3.2.0,REV=2006.02.21.10.06
SUNWhsc:       3.2.0,REV=2006.02.21.10.20
SUNWhscspm:    3.2.0,REV=2006.02.21.10.20
SUNWhscspmu:   3.2.0,REV=2006.02.21.10.20
SUNWjsc:       3.2.0,REV=2006.02.21.10.22
SUNWjscman:    3.2.0,REV=2006.02.21.10.22
SUNWjscspm:    3.2.0,REV=2006.02.21.10.22
SUNWjscspmu:   3.2.0,REV=2006.02.21.10.22
SUNWksc:       3.2.0,REV=2006.02.21.10.14
SUNWkscspm:    3.2.0,REV=2006.02.21.10.14
SUNWkscspmu:   3.2.0,REV=2006.02.21.10.14

How to Display Configured Resource Types, Resource Groups, and Resources

You can also accomplish this procedure by using the Sun Cluster Manager GUI. Refer to Chapter 13, Administering Sun Cluster With the Graphical User Interfaces, or see the Sun Cluster Manager online help for more information.

The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands.

Before You Begin

Users other than superuser require solaris.cluster.read RBAC authorization to use this subcommand.

  1. Display the cluster's configured resource types, resource groups, and resources. Perform all steps of this procedure from a node of the global cluster.


    phys-schost# cluster show -t resource,resourcetype,resourcegroup
    

    For information about individual resources, resource groups, and resource types, use the show subcommand with one of the following commands, as shown in the sketch after this list:

    • clresource

    • clresourcegroup

    • clresourcetype
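    For example, the following sketch uses the object-oriented commands to display each category individually; with no operands, the show subcommand displays everything of that type that is configured:

    phys-schost# clresourcetype show
    phys-schost# clresourcegroup show
    phys-schost# clresource show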


Example 1–3 Displaying Configured Resource Types, Resource Groups, and Resources

The following example shows the resource types (RT Name), resource groups (RG Name), and resources (RS Name) configured for the cluster schost.


phys-schost# cluster show -t resource,resourcetype,resourcegroup


=== Registered Resource Types ===

Resource Type:                                  SUNW.qfs
  RT_description:                                  SAM-QFS Agent on SunCluster
  RT_version:                                      3.1
  API_version:                                     3
  RT_basedir:                                      /opt/SUNWsamfs/sc/bin
  Single_instance:                                 False
  Proxy:                                           False
  Init_nodes:                                      All potential masters
  Installed_nodes:                                 <All>
  Failover:                                        True
  Pkglist:                                         <NULL>
  RT_system:                                       False

=== Resource Groups and Resources ===

Resource Group:                                 qfs-rg
  RG_description:                                  <NULL>
  RG_mode:                                         Failover
  RG_state:                                        Managed
  Failback:                                        False
  Nodelist:                                        phys-schost-2 phys-schost-1

  --- Resources for Group qfs-rg ---

  Resource:                                     qfs-res
    Type:                                          SUNW.qfs
    Type_version:                                  3.1
    Group:                                         qfs-rg
    R_description:                                 
    Resource_project_name:                         default
    Enabled{phys-schost-2}:                        True
    Enabled{phys-schost-1}:                        True
    Monitored{phys-schost-2}:                      True
    Monitored{phys-schost-1}:                      True

How to Check the Status of Cluster Components

You can also accomplish this procedure by using the Sun Cluster Manager GUI. See the Sun Cluster Manager online help for more information.


Note –

The cluster status command also shows the status of a zone cluster.


The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands.

Before You Begin

Users other than superuser require solaris.cluster.read RBAC authorization to use the status subcommand.

  1. Check the status of cluster components. Perform all steps of this procedure from a node of the global cluster.


    phys-schost# cluster status
    

Example 1–4 Checking the Status of Cluster Components

The following example provides a sample of status information for cluster components returned by cluster(1CL) status.


phys-schost# cluster status
=== Cluster Nodes ===

--- Node Status ---

Node Name                                       Status
---------                                       ------
phys-schost-1                                   Online
phys-schost-2                                   Online


=== Cluster Transport Paths ===

Endpoint1               Endpoint2               Status
---------               ---------               ------
phys-schost-1:qfe1      phys-schost-4:qfe1      Path online
phys-schost-1:hme1      phys-schost-4:hme1      Path online


=== Cluster Quorum ===

--- Quorum Votes Summary ---

            Needed   Present   Possible
            ------   -------   --------
            3        3         4


--- Quorum Votes by Node ---

Node Name       Present       Possible       Status
---------       -------       --------       ------
phys-schost-1   1             1              Online
phys-schost-2   1             1              Online


--- Quorum Votes by Device ---

Device Name           Present      Possible          Status
-----------               -------      --------      ------
/dev/did/rdsk/d2s2      1            1                Online
/dev/did/rdsk/d8s2      0            1                Offline


=== Cluster Device Groups ===

--- Device Group Status ---

Device Group Name     Primary          Secondary    Status
-----------------     -------          ---------    ------
schost-2              phys-schost-2     -           Degraded


--- Spare, Inactive, and In Transition Nodes ---

Device Group Name   Spare Nodes   Inactive Nodes   In Transition Nodes
-----------------   -----------   --------------   --------------------
schost-2            -             -                -


=== Cluster Resource Groups ===

Group Name        Node Name      Suspended      Status
----------        ---------      ---------      ------
test-rg           phys-schost-1       No             Offline
                  phys-schost-2       No             Online

test-rg           phys-schost-1       No             Offline
                  phys-schost-2       No             Error--stop failed

test-rg           phys-schost-1       No             Online
                  phys-schost-2       No             Online


=== Cluster Resources ===

Resource Name     Node Name     Status               Message
-------------     ---------     ------               -------
test_1            phys-schost-1      Offline         Offline
                  phys-schost-2      Online          Online

test_1            phys-schost-1      Offline         Offline
                  phys-schost-2      Stop failed     Faulted

test_1            phys-schost-1      Online          Online
                  phys-schost-2      Online          Online


Device Instance             Node                     Status
---------------             ----                     ------
/dev/did/rdsk/d2            phys-schost-1            Ok

/dev/did/rdsk/d3            phys-schost-1            Ok
                            phys-schost-2            Ok

/dev/did/rdsk/d4            phys-schost-1            Ok
                            phys-schost-2            Ok

/dev/did/rdsk/d6            phys-schost-2            Ok



=== Zone Clusters ===

--- Zone Cluster Status ---

Name      Node Name   Zone HostName   Status    Zone Status
----      ---------   -------------   ------    -----------
sczone    schost-1    sczone-1        Online    Running
          schost-2    sczone-2        Online    Running

How to Check the Status of the Public Network

You can also accomplish this procedure by using the Sun Cluster Manager GUI. See the Sun Cluster Manager online help for more information.

The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands.

To check the status of the IP network multipathing groups, use the clnode(1CL) command with the status subcommand and the -m option.

Before You Begin

Users other than superuser require solaris.cluster.read RBAC authorization to use this subcommand.

  1. Check the status of the public network. Perform all steps of this procedure from a node of the global cluster.


    phys-schost# clnode status -m
    

Example 1–5 Checking the Public Network Status

The following example provides a sample of status information for IP network multipathing groups returned by the clnode status -m command.


% clnode status -m
--- Node IPMP Group Status ---

Node Name         Group Name    Status    Adapter    Status
---------         ----------    ------    -------    ------
phys-schost-1     test-rg       Online    qfe1       Online
phys-schost-2     test-rg       Online    qfe1       Online 

How to View the Cluster Configuration

You can also perform this procedure by using the Sun Cluster Manager GUI. See the Sun Cluster Manager online help for more information.

The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands.

Before You Begin

Users other than superuser require solaris.cluster.read RBAC authorization to use the show subcommand.

  1. View the configuration of a global cluster or zone cluster. Perform all steps of this procedure from a node of the global cluster.


    % cluster show

    Running the cluster show command from a global-cluster voting node shows detailed configuration information about the cluster and information for zone clusters, if you have configured them.

    You can also use the clzonecluster show command to view the configuration information for just the zone cluster. Properties for a zone cluster include zone-cluster name, IP type, autoboot, and zone path. The show subcommand runs inside a zone cluster, and applies only to that particular zone cluster. Running the clzonecluster show command from a zone-cluster node retrieves status only about the objects visible to that specific zone cluster.

    To display more detailed output from the cluster command, use the verbose options. See the cluster(1CL) man page for details. See the clzonecluster(1CL) man page for more information about clzonecluster.
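    For example, the following brief sketch shows both forms; the -v verbose flag for cluster show and the sczone operand for clzonecluster show are assumptions based on the respective man pages, not output from this guide:

    phys-schost# cluster show -v
    phys-schost# clzonecluster show sczone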


Example 1–6 Viewing the Global Cluster Configuration

The following example lists configuration information about the global cluster. If you have a zone cluster configured, it also lists that information.


phys-schost# cluster show

=== Cluster ===                                

Cluster Name:                                   cluster-1
  installmode:                                     disabled
  heartbeat_timeout:                               10000
  heartbeat_quantum:                               1000
  private_netaddr:                                 172.16.0.0
  private_netmask:                                 255.255.248.0
  max_nodes:                                       64
  max_privatenets:                                 10
  global_fencing:                                  Unknown
  Node List:                                       phys-schost-1
  Node Zones:                                      phys-schost-2:za

  === Host Access Control ===                  

  Cluster name:                                 cluster-1
    Allowed hosts:                                 phys-schost-1, phys-schost-2:za
    Authentication Protocol:                       sys

  === Cluster Nodes ===                        

  Node Name:                                    phys-schost-1
    Node ID:                                       1
    Type:                                          cluster
    Enabled:                                       yes
    privatehostname:                               clusternode1-priv
    reboot_on_path_failure:                        disabled
    globalzoneshares:                              3
    defaultpsetmin:                                1
    quorum_vote:                                   1
    quorum_defaultvote:                            1
    quorum_resv_key:                               0x43CB1E1800000001
    Transport Adapter List:                        qfe3, hme0

    --- Transport Adapters for phys-schost-1 ---    

    Transport Adapter:                          qfe3
      Adapter State:                               Enabled
      Adapter Transport Type:                      dlpi
      Adapter Property(device_name):               qfe
      Adapter Property(device_instance):           3
      Adapter Property(lazy_free):                 1
      Adapter Property(dlpi_heartbeat_timeout):    10000
      Adapter Property(dlpi_heartbeat_quantum):    1000
      Adapter Property(nw_bandwidth):              80
      Adapter Property(bandwidth):                 10
      Adapter Property(ip_address):                172.16.1.1
      Adapter Property(netmask):                   255.255.255.128
      Adapter Port Names:                          0
      Adapter Port State(0):                       Enabled

    Transport Adapter:                          hme0
      Adapter State:                               Enabled
      Adapter Transport Type:                      dlpi
      Adapter Property(device_name):               hme
      Adapter Property(device_instance):           0
      Adapter Property(lazy_free):                 0
      Adapter Property(dlpi_heartbeat_timeout):    10000
      Adapter Property(dlpi_heartbeat_quantum):    1000
      Adapter Property(nw_bandwidth):              80
      Adapter Property(bandwidth):                 10
      Adapter Property(ip_address):                172.16.0.129
      Adapter Property(netmask):                   255.255.255.128
      Adapter Port Names:                          0
      Adapter Port State(0):                       Enabled

    --- SNMP MIB Configuration on phys-schost-1 --- 

    SNMP MIB Name:                              Event
      State:                                       Disabled
      Protocol:                                    SNMPv2

    --- SNMP Host Configuration on phys-schost-1 ---

    --- SNMP User Configuration on phys-schost-1 ---

    SNMP User Name:                             foo
      Authentication Protocol:                     MD5
      Default User:                                No

  Node Name:                                    phys-schost-2:za
    Node ID:                                       2
    Type:                                          cluster
    Enabled:                                       yes
    privatehostname:                               clusternode2-priv
    reboot_on_path_failure:                        disabled
    globalzoneshares:                              1
    defaultpsetmin:                                2
    quorum_vote:                                   1
    quorum_defaultvote:                            1
    quorum_resv_key:                               0x43CB1E1800000002
    Transport Adapter List:                        hme0, qfe3

    --- Transport Adapters for phys-schost-2 ---    

    Transport Adapter:                          hme0
      Adapter State:                               Enabled
      Adapter Transport Type:                      dlpi
      Adapter Property(device_name):               hme
      Adapter Property(device_instance):           0
      Adapter Property(lazy_free):                 0
      Adapter Property(dlpi_heartbeat_timeout):    10000
      Adapter Property(dlpi_heartbeat_quantum):    1000
      Adapter Property(nw_bandwidth):              80
      Adapter Property(bandwidth):                 10
      Adapter Property(ip_address):                172.16.0.130
      Adapter Property(netmask):                   255.255.255.128
      Adapter Port Names:                          0
      Adapter Port State(0):                       Enabled

    Transport Adapter:                          qfe3
      Adapter State:                               Enabled
      Adapter Transport Type:                      dlpi
      Adapter Property(device_name):               qfe
      Adapter Property(device_instance):           3
      Adapter Property(lazy_free):                 1
      Adapter Property(dlpi_heartbeat_timeout):    10000
      Adapter Property(dlpi_heartbeat_quantum):    1000
      Adapter Property(nw_bandwidth):              80
      Adapter Property(bandwidth):                 10
      Adapter Property(ip_address):                172.16.1.2
      Adapter Property(netmask):                   255.255.255.128
      Adapter Port Names:                          0
      Adapter Port State(0):                       Enabled

    --- SNMP MIB Configuration on phys-schost-2 --- 

    SNMP MIB Name:                              Event
      State:                                       Disabled
      Protocol:                                    SNMPv2

    --- SNMP Host Configuration on phys-schost-2 ---

    --- SNMP User Configuration on phys-schost-2 ---

  === Transport Cables ===                     

  Transport Cable:                              phys-schost-1:qfe3,switch2@1
    Cable Endpoint1:                               phys-schost-1:qfe3
    Cable Endpoint2:                               switch2@1
    Cable State:                                   Enabled

  Transport Cable:                              phys-schost-1:hme0,switch1@1
    Cable Endpoint1:                               phys-schost-1:hme0
    Cable Endpoint2:                               switch1@1
    Cable State:                                   Enabled

  Transport Cable:                              phys-schost-2:hme0,switch1@2
    Cable Endpoint1:                               phys-schost-2:hme0
    Cable Endpoint2:                               switch1@2
    Cable State:                                   Enabled

  Transport Cable:                              phys-schost-2:qfe3,switch2@2
    Cable Endpoint1:                               phys-schost-2:qfe3
    Cable Endpoint2:                               switch2@2
    Cable State:                                   Enabled

  === Transport Switches ===                   

  Transport Switch:                             switch2
    Switch State:                                  Enabled
    Switch Type:                                   switch
    Switch Port Names:                             1 2
    Switch Port State(1):                          Enabled
    Switch Port State(2):                          Enabled

  Transport Switch:                             switch1
    Switch State:                                  Enabled
    Switch Type:                                   switch
    Switch Port Names:                             1 2
    Switch Port State(1):                          Enabled
    Switch Port State(2):                          Enabled


  === Quorum Devices ===                       

  Quorum Device Name:                           d3
    Enabled:                                       yes
    Votes:                                         1
    Global Name:                                   /dev/did/rdsk/d3s2
    Type:                                          scsi
    Access Mode:                                   scsi2
    Hosts (enabled):                               phys-schost-1, phys-schost-2

  Quorum Device Name:                           qs1
    Enabled:                                       yes
    Votes:                                         1
    Global Name:                                   qs1
    Type:                                          quorum_server
    Hosts (enabled):                               phys-schost-1, phys-schost-2
    Quorum Server Host:                            10.11.114.83
    Port:                                          9000


  === Device Groups ===                        

  Device Group Name:                            testdg3
    Type:                                          SVM
    failback:                                      no
    Node List:                                     phys-schost-1, phys-schost-2
    preferenced:                                   yes
    numsecondaries:                                1
    diskset name:                                  testdg3

  === Registered Resource Types ===            

  Resource Type:                                SUNW.LogicalHostname:2
    RT_description:                                Logical Hostname Resource Type
    RT_version:                                    2
    API_version:                                   2
    RT_basedir:                                    /usr/cluster/lib/rgm/rt/hafoip
    Single_instance:                               False
    Proxy:                                         False
    Init_nodes:                                    All potential masters
    Installed_nodes:                               <All>
    Failover:                                      True
    Pkglist:                                       SUNWscu
    RT_system:                                     True

  Resource Type:                                SUNW.SharedAddress:2
    RT_description:                                HA Shared Address Resource Type
    RT_version:                                    2
    API_version:                                   2
    RT_basedir:                                    /usr/cluster/lib/rgm/rt/hascip
    Single_instance:                               False
    Proxy:                                         False
    Init_nodes:                                    <Unknown>
    Installed_nodes:                              <All>
    Failover:                                      True
    Pkglist:                                       SUNWscu
    RT_system:                                     True

  Resource Type:                                SUNW.HAStoragePlus:4
    RT_description:                                HA Storage Plus
    RT_version:                                    4
    API_version:                                   2
    RT_basedir:                                    /usr/cluster/lib/rgm/rt/hastorageplus
    Single_instance:                               False
    Proxy:                                         False
    Init_nodes:                                    All potential masters
    Installed_nodes:                               <All>
    Failover:                                      False
    Pkglist:                                       SUNWscu
    RT_system:                                     False

  Resource Type:                                SUNW.haderby
    RT_description:                                haderby server for Sun Cluster
    RT_version:                                    1
    API_version:                                   7
    RT_basedir:                                    /usr/cluster/lib/rgm/rt/haderby
    Single_instance:                               False
    Proxy:                                         False
    Init_nodes:                                    All potential masters
    Installed_nodes:                               <All>
    Failover:                                      False
    Pkglist:                                       SUNWscderby
    RT_system:                                     False

  Resource Type:                                SUNW.sctelemetry
    RT_description:                                sctelemetry service for Sun Cluster
    RT_version:                                    1
    API_version:                                   7
    RT_basedir:                                    /usr/cluster/lib/rgm/rt/sctelemetry
    Single_instance:                               True
    Proxy:                                         False
    Init_nodes:                                    All potential masters
    Installed_nodes:                               <All>
    Failover:                                      False
    Pkglist:                                       SUNWsctelemetry
    RT_system:                                     False

  === Resource Groups and Resources ===        

  Resource Group:                               HA_RG
    RG_description:                                <Null>
    RG_mode:                                       Failover
    RG_state:                                      Managed
    Failback:                                      False
    Nodelist:                                      phys-schost-1 phys-schost-2

    --- Resources for Group HA_RG ---          

    Resource:                                   HA_R
      Type:                                        SUNW.HAStoragePlus:4
      Type_version:                                4
      Group:                                       HA_RG
      R_description:                               
      Resource_project_name:                       SCSLM_HA_RG
      Enabled{phys-schost-1}:                      True
      Enabled{phys-schost-2}:                      True
      Monitored{phys-schost-1}:                    True
      Monitored{phys-schost-2}:                    True

  Resource Group:                               cl-db-rg
    RG_description:                                <Null>
    RG_mode:                                       Failover
    RG_state:                                      Managed
    Failback:                                      False
    Nodelist:                                      phys-schost-1 phys-schost-2

    --- Resources for Group cl-db-rg ---       

    Resource:                                   cl-db-rs
      Type:                                        SUNW.haderby
      Type_version:                                1
      Group:                                       cl-db-rg
      R_description:                               
      Resource_project_name:                       default
      Enabled{phys-schost-1}:                      True
      Enabled{phys-schost-2}:                      True
      Monitored{phys-schost-1}:                    True
      Monitored{phys-schost-2}:                    True

  Resource Group:                               cl-tlmtry-rg
    RG_description:                                <Null>
    RG_mode:                                       Scalable
    RG_state:                                      Managed
    Failback:                                      False
    Nodelist:                                      phys-schost-1 phys-schost-2

    --- Resources for Group cl-tlmtry-rg ---   

    Resource:                                   cl-tlmtry-rs
      Type:                                        SUNW.sctelemetry
      Type_version:                                1
      Group:                                       cl-tlmtry-rg
      R_description:                               
      Resource_project_name:                       default
      Enabled{phys-schost-1}:                      True
      Enabled{phys-schost-2}:                      True
      Monitored{phys-schost-1}:                    True
      Monitored{phys-schost-2}:                    True

  === DID Device Instances ===                 

  DID Device Name:                              /dev/did/rdsk/d1
    Full Device Path:                              phys-schost-1:/dev/rdsk/c0t2d0
    Replication:                                   none
    default_fencing:                               global

  DID Device Name:                              /dev/did/rdsk/d2
    Full Device Path:                              phys-schost-1:/dev/rdsk/c1t0d0
    Replication:                                   none
    default_fencing:                               global

  DID Device Name:                              /dev/did/rdsk/d3
    Full Device Path:                              phys-schost-2:/dev/rdsk/c2t1d0
    Full Device Path:                              phys-schost-1:/dev/rdsk/c2t1d0
    Replication:                                   none
    default_fencing:                               global

  DID Device Name:                              /dev/did/rdsk/d4
    Full Device Path:                              phys-schost-2:/dev/rdsk/c2t2d0
    Full Device Path:                              phys-schost-1:/dev/rdsk/c2t2d0
    Replication:                                   none
    default_fencing:                               global

  DID Device Name:                              /dev/did/rdsk/d5
    Full Device Path:                              phys-schost-2:/dev/rdsk/c0t2d0
    Replication:                                   none
    default_fencing:                               global

  DID Device Name:                              /dev/did/rdsk/d6
    Full Device Path:                              phys-schost-2:/dev/rdsk/c1t0d0
    Replication:                                   none
    default_fencing:                               global

  === NAS Devices ===                          

  Nas Device:                                   nas_filer1
    Type:                                          netapp
    User ID:                                       root

  Nas Device:                                   nas2
    Type:                                          netapp
    User ID:                                       llai


Example 1–7 Viewing the Zone Cluster Configuration

The following example lists the properties of the zone cluster configuration.


% clzonecluster show
=== Zone Clusters ===

Zone Cluster Name:                              sczone
  zonename:                                        sczone
  zonepath:                                        /zones/sczone
  autoboot:                                        TRUE
  ip-type:                                         shared
  enable_priv_net:                                 TRUE

  --- Solaris Resources for sczone ---

  Resource Name:                                net
    address:                                       172.16.0.1
    physical:                                      auto

  Resource Name:                                net
    address:                                       172.16.0.2
    physical:                                      auto

  Resource Name:                                fs
    dir:                                           /gz/db_qfs/CrsHome
    special:                                       CrsHome
    raw:
    type:                                          samfs
    options:                                       []


  Resource Name:                                fs
    dir:                                           /gz/db_qfs/CrsData
    special:                                       CrsData
    raw:
    type:                                          samfs
    options:                                       []


  Resource Name:                                fs
    dir:                                           /gz/db_qfs/OraHome
    special:                                       OraHome
    raw:
    type:                                          samfs
    options:                                       []


  Resource Name:                                fs
    dir:                                           /gz/db_qfs/OraData
    special:                                       OraData
    raw:
    type:                                          samfs
    options:                                       []


  --- Zone Cluster Nodes for sczone ---

  Node Name:                                    sczone-1
    physical-host:                                 sczone-1
    hostname:                                      lzzone-1

  Node Name:                                    sczone-2
    physical-host:                                 sczone-2
    hostname:                                      lzzone-2

How to Validate a Basic Cluster Configuration

The cluster(1CL) command uses the check subcommand to validate the basic configuration that is required for a global cluster to function properly. If no checks fail, cluster check returns to the shell prompt. If a check fails, cluster check produces reports in either the specified or the default output directory. If you run cluster check against more than one node, cluster check produces a report for each node and a report for multinode checks. You can also use the cluster list-checks command to display a list of all available cluster checks.

You can run the cluster check command in verbose mode with the -v flag to display progress information.
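For example, the following sketch (the output directory name is a placeholder) first lists the available checks, then runs them in verbose mode and directs the reports to a chosen directory with the -o option:

phys-schost# cluster list-checks
phys-schost# cluster check -v -o /var/cluster/logs/cluster_check/mychecks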


Note –

Run cluster check after performing an administration procedure that might result in changes to devices, volume management components, or the Sun Cluster configuration.


Running the clzonecluster(1CL) verify command at the global-cluster voting node runs a set of checks to validate the configuration that is required for a zone cluster to function properly. If all checks pass, clzonecluster verify returns to the shell prompt and you can safely install the zone cluster. If a check fails, clzonecluster verify reports on the global-cluster nodes where the verification failed. If you run clzonecluster verify against more than one node, a report is produced for each node and a report for multinode checks. The verify subcommand is not allowed inside a zone cluster.

  1. Become superuser on an active member node of a global cluster. Perform all steps of this procedure from a node of the global cluster.


    phys-schost# su
    
  2. Verify the cluster configuration.

    • Verify the configuration of the global cluster.


      phys-schost# cluster check
      
    • Verify the configuration of the zone cluster to see if a zone cluster can be installed.


      phys-schost# clzonecluster verify zoneclustername
      

Example 1–8 Checking the Global Cluster Configuration With All Checks Passing

The following example shows cluster check run in verbose mode against nodes phys-schost-1 and phys-schost-2 with all checks passing.


phys-schost# cluster check -v -h phys-schost-1,phys-schost-2

cluster check: Requesting explorer data and node report from phys-schost-1.
cluster check: Requesting explorer data and node report from phys-schost-2.
cluster check: phys-schost-1: Explorer finished.
cluster check: phys-schost-1: Starting single-node checks.
cluster check: phys-schost-1: Single-node checks finished.
cluster check: phys-schost-2: Explorer finished.
cluster check: phys-schost-2: Starting single-node checks.
cluster check: phys-schost-2: Single-node checks finished.
cluster check: Starting multi-node checks.
cluster check: Multi-node checks finished
# 


Example 1–9 Checking the Global Cluster Configuration With a Failed Check

The following example shows that the node phys-schost-2 in the cluster named suncluster is missing the mount point /global/phys-schost-1. Reports are created in the output directory /var/cluster/logs/cluster_check/<timestamp>.


phys-schost# cluster check -v -h phys-schost-1,phys-schost-2 -o /var/cluster/logs/cluster_check/Dec5/

cluster check: Requesting explorer data and node report from phys-schost-1.
cluster check: Requesting explorer data and node report from phys-schost-2.
cluster check: phys-schost-1: Explorer finished.
cluster check: phys-schost-1: Starting single-node checks.
cluster check: phys-schost-1: Single-node checks finished.
cluster check: phys-schost-2: Explorer finished.
cluster check: phys-schost-2: Starting single-node checks.
cluster check: phys-schost-2: Single-node checks finished.
cluster check: Starting multi-node checks.
cluster check: Multi-node checks finished.
cluster check: One or more checks failed.
cluster check: The greatest severity of all check failures was 3 (HIGH).
cluster check: Reports are in /var/cluster/logs/cluster_check/Dec5.
# 
# cat /var/cluster/logs/cluster_check/Dec5/cluster_check-results.suncluster.txt
...
===================================================
= ANALYSIS DETAILS =
===================================================
------------------------------------
CHECK ID : 3065
SEVERITY : HIGH
FAILURE  : Global filesystem /etc/vfstab entries are not consistent across 
all Sun Cluster 3.x nodes.
ANALYSIS : The global filesystem /etc/vfstab entries are not consistent across 
all nodes in this cluster.
Analysis indicates:
FileSystem '/global/phys-schost-1' is on 'phys-schost-1' but missing from 'phys-schost-2'.
RECOMMEND: Ensure each node has the correct /etc/vfstab entry for the 
filesystem(s) in question.
...
 #

How to Check the Global Mount Points

The cluster(1CL) command includes checks that examine the /etc/vfstab file for configuration errors with the cluster file system and its global mount points.
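For reference, a global mount point appears in /etc/vfstab with the global mount option. The following is a hypothetical entry for a UFS cluster file system on a Solaris Volume Manager device (all device and mount-point names are placeholders):

/dev/md/oradg/dsk/d1 /dev/md/oradg/rdsk/d1 /global/oracle ufs 2 yes global,logging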


Note –

Run cluster check after making cluster configuration changes that have affected devices or volume management components.


  1. Become superuser on an active member node of a global cluster. Perform all steps of this procedure from a node of the global cluster.


    % su
    
  2. Verify the global cluster configuration.


    phys-schost# cluster check
    

Example 1–10 Checking the Global Mount Points

The following example shows that the node phys-schost-2 of the cluster named suncluster is missing the mount point /global/schost-1. Reports are sent to the output directory /var/cluster/logs/cluster_check/<timestamp>/.


phys-schost# cluster check -v1 -h phys-schost-1,phys-schost-2 -o /var/cluster/logs/cluster_check/Dec5/

cluster check: Requesting explorer data and node report from phys-schost-1.
cluster check: Requesting explorer data and node report from phys-schost-2.
cluster check: phys-schost-1: Explorer finished.
cluster check: phys-schost-1: Starting single-node checks.
cluster check: phys-schost-1: Single-node checks finished.
cluster check: phys-schost-2: Explorer finished.
cluster check: phys-schost-2: Starting single-node checks.
cluster check: phys-schost-2: Single-node checks finished.
cluster check: Starting multi-node checks.
cluster check: Multi-node checks finished.
cluster check: One or more checks failed.
cluster check: The greatest severity of all check failures was 3 (HIGH).
cluster check: Reports are in /var/cluster/logs/cluster_check/Dec5.
# 
# cat /var/cluster/logs/cluster_check/Dec5/cluster_check-results.suncluster.txt

...
===================================================
= ANALYSIS DETAILS =
===================================================
------------------------------------
CHECK ID : 3065
SEVERITY : HIGH
FAILURE  : Global filesystem /etc/vfstab entries are not consistent across 
all Sun Cluster 3.x nodes.
ANALYSIS : The global filesystem /etc/vfstab entries are not consistent across 
all nodes in this cluster.
Analysis indicates:
FileSystem '/global/phys-schost-1' is on 'phys-schost-1' but missing from 'phys-schost-2'.
RECOMMEND: Ensure each node has the correct /etc/vfstab entry for the 
filesystem(s) in question.
...
#
# cat /var/cluster/logs/cluster_check/Dec5/cluster_check-results.phys-schost-1.txt

...
===================================================
= ANALYSIS DETAILS =
===================================================
------------------------------------
CHECK ID : 1398
SEVERITY : HIGH
FAILURE  : An unsupported server is being used as a Sun Cluster 3.x node.
ANALYSIS : This server may not have been qualified to be used as a Sun Cluster 3.x node.  
Only servers that have been qualified with Sun Cluster 3.x are supported as 
Sun Cluster 3.x nodes.
RECOMMEND: Because the list of supported servers is always being updated, check with 
your Sun Microsystems representative to get the latest information on what servers 
are currently supported and only use a server that is supported with Sun Cluster 3.x.
...
#

How to View the Contents of Sun Cluster Command Logs

The /var/cluster/logs/commandlog ASCII text file contains records of selected Sun Cluster commands that are executed in a cluster. The logging of commands starts automatically when you set up the cluster and ends when you shut down the cluster. Commands are logged on all nodes that are up and booted in cluster mode.

Commands that are not logged in this file include those commands that display the configuration and current state of the cluster.

Commands that are logged in this file include those commands that configure and change the current state of the cluster.

Records in the commandlog file contain elements such as the date and time stamp, the name of the host from which the command was executed, the process ID of the command, the login name of the user, and the command that was executed, including its options and operands.

By default, the commandlog file is archived once a week. To change the archiving policies for the commandlog file, on each node in the cluster, use the crontab command. See the crontab(1) man page for more information.

Sun Cluster software maintains up to eight previously archived commandlog files on each cluster node at any given time. The commandlog file for the current week is named commandlog. The most recent complete week's file is named commandlog.0. The oldest complete week's file is named commandlog.7.
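For example, to see which current and archived command logs are present on a node, list the files; the names follow the rotation scheme just described:

phys-schost# ls /var/cluster/logs/commandlog*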

    View the contents of the current week's commandlog file, one screen at a time.


    phys-schost# more /var/cluster/logs/commandlog
    

Example 1–11 Viewing the Contents of Sun Cluster Command Logs

The following example shows the contents of the commandlog file that are displayed by the more command.


more -lines10 /var/cluster/logs/commandlog
11/11/2006 09:42:51 phys-schost-1 5222 root START - clsetup
11/11/2006 09:43:36 phys-schost-1 5758 root START - clrg add "app-sa-1"
11/11/2006 09:43:36 phys-schost-1 5758 root END 0
11/11/2006 09:43:36 phys-schost-1 5760 root START - clrg set -y
"RG_description=Department Shared Address RG" "app-sa-1"
11/11/2006 09:43:37 phys-schost-1 5760 root END 0
11/11/2006 09:44:15 phys-schost-1 5810 root START - clrg online "app-sa-1"
11/11/2006 09:44:15 phys-schost-1 5810 root END 0
11/11/2006 09:44:19 phys-schost-1 5222 root END -20988320
12/02/2006 14:37:21 phys-schost-1 5542 jbloggs START - clrg -c -g "app-sa-1"
-y "RG_description=Joe Bloggs Shared Address RG"
12/02/2006 14:37:22 phys-schost-1 5542 jbloggs END 0