Sun Cluster 3.0 U1 System Administration Guide

1.4 Beginning to Administer the Cluster

Table 1-2 provides a starting point for administering your cluster.

Table 1-2 Sun Cluster 3.0 Administration Tools

  • To remotely log in to the cluster, use the ccp command to launch the Cluster Control Panel (CCP), then select the cconsole, crlogin, or ctelnet icon. For more information, go to "1.4.1 How to Remotely Log In to Sun Cluster".

  • To interactively configure the cluster, launch the scsetup utility. For more information, go to "1.4.2 How to Access the scsetup Utility".

  • To display the Sun Cluster release number and version information, use the scinstall command with either the -p or -pv option. For more information, go to "1.4.3 How to Display Sun Cluster Release and Version Information".

  • To display installed resources, resource groups, and resource types, use the scrgadm -p command. For more information, go to "1.4.4 How to Display Configured Resource Types, Resource Groups, and Resources".

  • To graphically monitor cluster components, use SunPlex Manager or the Sun Cluster module for Sun Management Center. For more information, see the SunPlex Manager or Sun Cluster module for Sun Management Center online help.

  • To graphically administer some cluster components, use SunPlex Manager or the Sun Cluster module for Sun Management Center. For more information, see the SunPlex Manager or Sun Cluster module for Sun Management Center online help.

  • To check the status of cluster components, use the scstat command. For more information, go to "1.4.5 How to Check the Status of Cluster Components".

  • To view the cluster configuration, use the scconf -p command. For more information, go to "1.4.6 How to View the Cluster Configuration".

  • To check global mount points, use the sccheck command. For more information, go to "1.4.7 How to Check the Global Mount Points".

  • To look at Sun Cluster system messages, examine the /var/adm/messages file. For more information, see the Solaris system administration documentation.

  • To monitor the status of Solstice DiskSuite, use the metastat command. For more information, see the Solstice DiskSuite documentation.

  • To monitor the status of VERITAS Volume Manager, use the vxstat or vxva command. For more information, see the VERITAS Volume Manager documentation.

1.4.1 How to Remotely Log In to Sun Cluster

The Cluster Control Panel (CCP) provides a launch pad for the cconsole, crlogin, and ctelnet tools. All three tools start a multiple-window connection to a set of specified nodes. The multiple-window connection consists of a host window for each of the specified nodes and a common window. Input directed to the common window is sent to each host window, allowing you to run commands simultaneously on all nodes of the cluster. See the ccp(1M) and cconsole(1M) man pages for more information.
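
For example, once a cconsole session is open to all cluster nodes, typing a single command in the common window, such as the hypothetical check below, runs it in every host window at once.


    # mount | grep global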

  1. Verify that the following prerequisites are met before starting the CCP.

    • Install the appropriate Sun Cluster software (SUNWccon package) on the administrative console.

    • Make sure the PATH variable on the administrative console includes the Sun Cluster tools directories, /opt/SUNWcluster/bin and /usr/cluster/bin. You can specify an alternate location for the tools directory by setting the $CLUSTER_HOME environment variable.

    • Configure the clusters file, the serialports file, and the nsswitch.conf file if you are using a terminal concentrator. These can be either /etc files or NIS/NIS+ databases. See the clusters(4) and serialports(4) man pages for more information. A sketch of these settings follows this list.
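
      The following is a minimal sketch of these prerequisites, using hypothetical node names (phys-schost-1, phys-schost-2), a hypothetical terminal concentrator (tc0), and the default tools directories. Adjust all values for your site.


      # Shell profile on the administrative console (Bourne or Korn shell)
      PATH=$PATH:/opt/SUNWcluster/bin:/usr/cluster/bin
      export PATH
      # Hypothetical alternate tools location; set only if not using the default
      # CLUSTER_HOME=/usr/local/SUNWcluster; export CLUSTER_HOME

      # Sample /etc/clusters entry: cluster name followed by its member nodes
      schost phys-schost-1 phys-schost-2

      # Sample /etc/serialports entries: node, terminal concentrator, port
      phys-schost-1 tc0 5002
      phys-schost-2 tc0 5003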

  2. Determine whether you have a Sun Enterprise E10000 server platform.

    • If yes, log in to the System Service Processor (SSP) and connect by using the netcon command. Once connected, type Shift~@ to unlock the console and gain write access.
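
      For example, from a root shell on the SSP (a minimal sketch; netcon is part of the SSP software):


      # netcon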

  3. Start the CCP launch pad.

    From the administrative console, type the following command.


    # ccp clustername
    

    The CCP launch pad is displayed.

  4. To start a remote session with the cluster, click the appropriate icon (cconsole, crlogin, or ctelnet) in the CCP launch pad.

1.4.1.1 Example

The following example shows the Cluster Control Panel.

Figure 1-1 Cluster Control Panel

Graphic

1.4.1.2 Where to Go From Here

You can also start cconsole, crlogin, or ctelnet sessions from the command line. See the cconsole(1M) man page for more information.
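
For example, the following command, run from the administrative console and using a hypothetical cluster name, opens a cconsole session directly, bypassing the CCP launch pad.


    # cconsole schost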

1.4.2 How to Access the scsetup Utility

The scsetup(1M) utility enables you to interactively configure quorum, resource group, cluster transport, private hostname, device group, and new node options for the cluster.

  1. Become superuser on any node in the cluster.

  2. Enter the scsetup utility.


    # scsetup
    

    The Main Menu is displayed.

  3. Make your selection from the menu and follow the onscreen instructions.

    See the scsetup online help for more information.

1.4.3 How to Display Sun Cluster Release and Version Information

You do not need to be logged in as superuser to perform these procedures.

    • Display the Sun Cluster patch numbers.

    Sun Cluster update releases are identified by the main product patch number plus the update version, which is 110648-05 for Sun Cluster 3.0 U1.


    % showrev -p
    

    • Display the Sun Cluster release number and version strings for all Sun Cluster packages.


    % scinstall -pv
    

1.4.3.1 Example--Displaying the Sun Cluster Release Number

The following example displays the cluster's release number.


% showrev -p | grep 110648
Patch: 110648-05 Obsoletes:  Requires:  Incompatibles:  Packages: 

1.4.3.2 Example--Displaying Sun Cluster Release and Version Information

The following example displays the cluster's release information and version information for all packages.


% scinstall -pv
SunCluster 3.0
SUNWscr:       3.0.0,REV=2000.10.01.01.00
SUNWscdev:     3.0.0,REV=2000.10.01.01.00
SUNWscu:       3.0.0,REV=2000.10.01.01.00
SUNWscman:     3.0.0,REV=2000.10.01.01.00
SUNWscsal:     3.0.0,REV=2000.10.01.01.00
SUNWscsam:     3.0.0,REV=2000.10.01.01.00
SUNWscvm:      3.0.0,REV=2000.10.01.01.00
SUNWmdm:       4.2.1,REV=2000.08.08.10.01

1.4.4 How to Display Configured Resource Types, Resource Groups, and Resources

You can also accomplish this procedure by using the SunPlex Manager GUI. See the SunPlex Manager online help for more information.

You do not need to be logged in as superuser to perform this procedure.

    • Display the cluster's configured resource types, resource groups, and resources.


    % scrgadm -p
    

1.4.4.1 Example--Displaying Configured Resource Types, Resource Groups, and Resources

The following example shows the resource types (RT Name), resource groups (RG Name), and resources (RS Name) configured for the cluster schost.


% scrgadm -p
RT Name: SUNW.SharedAddress
  RT Description: HA Shared Address Resource Type 
RT Name: SUNW.LogicalHostname
  RT Description: Logical Hostname Resource Type 
RG Name: schost-sa-1 
  RG Description:  
    RS Name: schost-1
      RS Description: 
      RS Type: SUNW.SharedAddress
      RS Resource Group: schost-sa-1
RG Name: schost-lh-1 
  RG Description:  
    RS Name: schost-3
      RS Description: 
      RS Type: SUNW.LogicalHostname
      RS Resource Group: schost-lh-1
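
The -p option prints a brief listing. For more detail on each resource type, resource group, and resource, scrgadm also accepts a verbose form; this sketch assumes the -pv flag as documented in the scrgadm(1M) man page.


    % scrgadm -pv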

1.4.5 How to Check the Status of Cluster Components

You can also accomplish this procedure by using the SunPlex Manager GUI. See the SunPlex Manager online help for more information.

You do not need to be logged in as superuser to perform this procedure.

    • Check the status of cluster components.


    % scstat -p
    

1.4.5.1 Example--Checking the Status of Cluster Components

The following example provides a sample of status information for cluster components returned by scstat(1M).


% scstat -p
-- Cluster Nodes --
 
                    Node name           Status
                    ---------           ------
  Cluster node:     phys-schost-1      Online
  Cluster node:     phys-schost-2      Online
  Cluster node:     phys-schost-3      Online
  Cluster node:     phys-schost-4      Online
 
------------------------------------------------------------------
 
-- Cluster Transport Paths --
 
                    Endpoint            Endpoint            Status
                    --------            --------            ------
  Transport path:   phys-schost-1:qfe1 phys-schost-4:qfe1 Path online
  Transport path:   phys-schost-1:hme1 phys-schost-4:hme1 Path online
...
 
------------------------------------------------------------------
 
-- Quorum Summary --
 
  Quorum votes possible:      6
  Quorum votes needed:        4
  Quorum votes present:       6
 
-- Quorum Votes by Node --
 
                    Node Name           Present Possible Status
                    ---------           ------- -------- ------
  Node votes:       phys-schost-1      1        1       Online
  Node votes:       phys-schost-2      1        1       Online
...
 
-- Quorum Votes by Device --
 
                    Device Name         Present Possible Status 
                    -----------         ------- -------- ------ 
  Device votes:     /dev/did/rdsk/d2s2  1        1       Online 
  Device votes:     /dev/did/rdsk/d8s2  1        1       Online 
...
 
-- Device Group Servers --
 
                         Device Group        Primary             Secondary
                         ------------        -------             ---------
  Device group servers:  rmt/1               -                   -
  Device group servers:  rmt/2               -                   -
  Device group servers:  schost-1           phys-schost-2      phys-schost-1
  Device group servers:  schost-3           -                   -
 
-- Device Group Status --
 
                              Device Group        Status              
                              ------------        ------              
  Device group status:        rmt/1               Offline
  Device group status:        rmt/2               Offline
  Device group status:        schost-1           Online
  Device group status:        schost-3           Offline
 
------------------------------------------------------------------
 
-- Resource Groups and Resources --
 
            Group Name          Resources
            ----------          ---------
 Resources: test-rg             test_1
 Resources: real-property-rg    -
 Resources: failover-rg         -
 Resources: descript-rg-1       -
...
 
-- Resource Groups --
 
            Group Name          Node Name           State
            ----------          ---------           -----
     Group: test-rg             phys-schost-1      Offline
     Group: test-rg             phys-schost-2      Offline
...
 
-- Resources --
 
            Resource Name       Node Name           State     Status Message
            -------------       ---------           -----     --------------
  Resource: test_1              phys-schost-1      Offline   Offline
  Resource: test_1              phys-schost-2      Offline   Offline
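
To check a single type of component rather than the entire cluster, scstat also accepts per-component options; this sketch assumes the option letters documented in the scstat(1M) man page.


    % scstat -q        # quorum status only
    % scstat -g        # resource groups and resources only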

1.4.6 How to View the Cluster Configuration

You can also accomplish this procedure by using the SunPlex Manager GUI. See the SunPlex Manager online help for more information.

You do not need to be logged in as superuser to perform this procedure.

    • View the cluster configuration.


    % scconf -p
    

1.4.6.1 Example--Viewing the Cluster Configuration

The following example lists the cluster configuration.


% scconf -p
Cluster name:                       cluster-1
Cluster ID:                         0x3908EE1C
Cluster install mode:               disabled
Cluster private net:                172.16.0.0
Cluster private netmask:            255.255.0.0
Cluster new node authentication:    unix
Cluster new node list:              <NULL - Allow any node>
Cluster nodes:                      phys-schost-1 phys-schost-2 phys-schost-3
phys-schost-4
Cluster node name:                                 phys-schost-1
  Node ID:                                         1
  Node enabled:                                    yes
  Node private hostname:                           clusternode1-priv
  Node quorum vote count:                          1
  Node reservation key:                            0x3908EE1C00000001
  Node transport adapters:                         hme1 qfe1 qfe2
 
  Node transport adapter:                          hme1
    Adapter enabled:                               yes
    Adapter transport type:                        dlpi
    Adapter property:                              device_name=hme
    Adapter property:                              device_instance=1
    Adapter property:                              dlpi_heartbeat_timeout=10000
...
Cluster transport junctions:                       hub0 hub1 hub2
 
Cluster transport junction:                        hub0
  Junction enabled:                                yes
  Junction type:                                   switch
  Junction port names:                             1 2 3 4
...
  Junction port:                                   1
    Port enabled:                                  yes
 
  Junction port:                                   2
    Port enabled:                                  yes
...
Cluster transport cables
                    Endpoint            Endpoint        State
                    --------            --------        -----
  Transport cable:  phys-schost-1:hme1@0 hub0@1        Enabled
  Transport cable:  phys-schost-1:qfe1@0 hub1@1        Enabled
  Transport cable:  phys-schost-1:qfe2@0 hub2@1        Enabled
  Transport cable:  phys-schost-2:hme1@0 hub0@2        Enabled
...
Quorum devices:                                    d2 d8
 
Quorum device name:                                d2
  Quorum device votes:                             1
  Quorum device enabled:                           yes
  Quorum device name:                              /dev/did/rdsk/d2s2
  Quorum device hosts (enabled):                   phys-schost-1
 phys-schost-2
  Quorum device hosts (disabled): 
...
Device group name:                                 schost-3
  Device group type:                               SDS
  Device group failback enabled:                   no
  Device group node list:                          phys-schost-3, phys-schost-4
  Diskset name:                                    schost-3
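
To expand the listing with additional configuration detail, scconf accepts a verbose form; this sketch assumes the -pv flag as documented in the scconf(1M) man page.


    % scconf -pv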

1.4.7 How to Check the Global Mount Points

The sccheck(1M) command checks the /etc/vfstab file for configuration errors with the cluster file system and its global mount points. The sccheck command returns errors only; if none are found, it simply returns to the shell prompt.


Note -

Run sccheck after making cluster configuration changes that have affected devices or volume management components.


  1. Become superuser on any node in the cluster.

  2. Verify the cluster configuration.


    # sccheck
    

1.4.7.1 Example--Verifying the Cluster Configuration

The following example shows that the node phys-schost-3 is missing the mount point /global/schost-1.


# sccheck
vfstab-check: WARNING - phys-schost-3 - Missing mount point /global/schost-1