Table 1-2 provides a starting point for administering your cluster.
Table 1-2 Sun Cluster 3.0 Administration Tools

| If You Want To... | Then... | For More Information Go To... |
|---|---|---|
| Remotely Log In to the Cluster | Use the ccp command to launch the Cluster Control Panel (CCP). Then select one of the following icons: cconsole, crlogin, or ctelnet. | |
| Interactively Configure the Cluster | Launch the scsetup utility. | |
| Display Sun Cluster Release Number and Version Information | Use the scinstall command with either the -p or -pv option. | "1.5.3 How to Display Sun Cluster Release and Version Information" |
| Display Installed Resources, Resource Groups, and Resource Types | Use the scrgadm -p command. | "1.5.4 How to Display Configured Resource Types, Resource Groups, and Resources" |
| Graphically Monitor Cluster Components | Use the Sun Cluster module for Sun Management Center. | Sun Cluster module for Sun Management Center online help |
| Check the Status of Cluster Components | Use the scstat command. | |
| View the Cluster Configuration | Use the scconf -p command. | |
| Check Global Mount Points | Use the sccheck command. | |
| Look at Sun Cluster System Messages | Examine the /var/adm/messages file. | Solaris system administration documentation |
| Monitor the Status of Solstice DiskSuite | Use the metastat or metatool commands. | Solstice DiskSuite documentation |
| Monitor the Status of VERITAS Volume Manager | Use the vxstat or vxva commands. | VERITAS Volume Manager documentation |
The Cluster Control Panel (CCP) provides a launch pad for the cconsole, crlogin, and ctelnet tools. All three tools start a multiple-window connection to a set of specified nodes. The multiple-window connection consists of a host window for each of the specified nodes and a common window. Input directed to the common window is sent to each of the host windows. See the ccp(1M) and cconsole(1M) man pages for more information.
1. Verify that the following prerequisites are met. To start the Cluster Control Panel (CCP), you must:
   - Install the appropriate Sun Cluster software (the SUNWccon package) on the administrative console.
   - Make sure the PATH variable on the administrative console includes the Sun Cluster tools directories, /opt/SUNWcluster/bin and /usr/cluster/bin. You can specify an alternate location for the tools directory by setting the $CLUSTER_HOME environment variable. (A sketch of these settings follows this list.)
   - Configure the clusters file, the serialports file, and the nsswitch.conf file if you are using a terminal concentrator. These can be either /etc files or NIS/NIS+ databases. See clusters(4) and serialports(4) for more information.
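The following sketch shows one way these prerequisites might look on the administrative console. The cluster name, node names, terminal-concentrator host, and port numbers are illustrative only; consult clusters(4) and serialports(4) for the exact file formats used at your site.

```
# Add the Sun Cluster tools directories to PATH (Bourne/Korn shell syntax),
# or set CLUSTER_HOME if the tools live in an alternate location.
PATH=$PATH:/opt/SUNWcluster/bin:/usr/cluster/bin
export PATH

# /etc/clusters -- maps a cluster name to its member nodes, for example:
#   schost  phys-schost-1  phys-schost-2

# /etc/serialports -- maps each node to its terminal concentrator and
# telnet port, for example:
#   phys-schost-1  tc-schost  5002
#   phys-schost-2  tc-schost  5003
```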
2. Determine if you have a Sun Enterprise E10000 server platform.
   - If no, proceed to Step 3.
   - If yes, log in to the System Service Processor (SSP) and connect by using the netcon command. Once connected, enter Shift~@ to unlock the console and gain write access, as sketched below.
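The following is a minimal sketch of that connection sequence. The SSP hostname (e10k-ssp) and the use of telnet to reach the SSP are assumptions; use whatever remote-login method your site allows.

```
# From the administrative console, log in to the E10000 SSP
# (the hostname e10k-ssp and the use of telnet are assumptions).
telnet e10k-ssp

# On the SSP, open the domain console; then type Shift~@ in the
# netcon session to unlock the console and gain write access.
netcon
```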
3. Start the CCP launch pad.
From the administrative console, enter the following command.
```
# ccp clustername
```
The CCP launch pad appears.
4. To start a remote session with the cluster, click the appropriate icon (cconsole, crlogin, or ctelnet) in the CCP launch pad.
The following example shows the Cluster Control Panel.
You can also start cconsole, crlogin, or ctelnet sessions from the command line. See cconsole(1M) for more information.
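For example, the following sketch starts a cluster console session directly, without the CCP launch pad. The cluster name schost is illustrative; you can also pass individual node names.

```
# Start cconsole (an X application) for the cluster named schost and put
# it in the background; crlogin and ctelnet accept the same arguments.
cconsole schost &
```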
The scsetup(1M) utility enables you to interactively configure quorum, cluster transport, private hostnames, device groups, and new node options for the cluster.
Become superuser on a node in the cluster.
Start the scsetup utility.
```
# scsetup
```
The Main Menu appears.
Make your selection from the menu and follow the onscreen instructions.
See the scsetup online help for more information.
You do not need to be logged in as superuser to perform these procedures.
Display the Sun Cluster release number.
```
% scinstall -p
```
Display the Sun Cluster release number and version strings for all Sun Cluster packages.
```
% scinstall -pv
```
The following example displays the cluster's release number.
```
% scinstall -p
3.0
```
The following example displays the cluster's release information and version information for all packages.
```
% scinstall -pv
SunCluster 3.0
SUNWscr:       3.0.0,REV=1999.10.20.15.01
SUNWscdev:     3.0.0,REV=1999.10.20.15.01
SUNWscu:       3.0.0,REV=1999.10.20.15.01
SUNWscman:     3.0.0,REV=1999.10.20.15.01
SUNWscsal:     3.0.0,REV=1999.10.20.15.01
SUNWscsam:     3.0.0,REV=1999.10.20.15.01
SUNWrsmop:     3.0.0,REV=1999.10.20.15.01
SUNWsci:       3.0,REV=1999.09.08.17.43
SUNWscid:      3.0,REV=1999.09.08.17.43
SUNWscidx:     3.0,REV=1999.09.08.17.43
SUNWscvm:      3.0.0,REV=1999.10.20.15.01
```
You do not need to be logged in as superuser to perform this procedure.
Display the cluster's configured resource types, resource groups, and resources.
```
% scrgadm -p
```
The following example shows the resource types (RT Name), resource groups (RG Name), and resources (RS Name) configured for the cluster schost.
```
% scrgadm -p
RT Name: SUNW.SharedAddress
  RT Description: HA Shared Address Resource Type
RT Name: SUNW.LogicalHostname
  RT Description: Logical Hostname Resource Type
RG Name: schost-sa-1
  RG Description:
    RS Name: schost-1
      RS Description:
      RS Type: SUNW.SharedAddress
      RS Resource Group: schost-sa-1
RG Name: schost-lh-1
  RG Description:
    RS Name: schost-3
      RS Description:
      RS Type: SUNW.LogicalHostname
      RS Resource Group: schost-lh-1
```
You do not need to be logged in as superuser to perform this procedure.
Check the status of cluster components.
```
% scstat -p
```
The following example provides a sample of status information for cluster components returned by scstat(1M).
```
% scstat -p
-- Cluster Nodes --
                    Node name       Status
                    ---------       ------
  Cluster node:     phys-schost-1   Online
  Cluster node:     phys-schost-2   Online
  Cluster node:     phys-schost-3   Online
  Cluster node:     phys-schost-4   Online

------------------------------------------------------------------

-- Cluster Transport Paths --
                    Endpoint             Endpoint             Status
                    --------             --------             ------
  Transport path:   phys-schost-1:qfe1   phys-schost-4:qfe1   Path online
  Transport path:   phys-schost-1:hme1   phys-schost-4:hme1   Path online
...

------------------------------------------------------------------

-- Quorum Summary --
  Quorum votes possible:   6
  Quorum votes needed:     4
  Quorum votes present:    6

-- Quorum Votes by Node --
                    Node Name        Present  Possible  Status
                    ---------        -------  --------  ------
  Node votes:       phys-schost-1    1        1         Online
  Node votes:       phys-schost-2    1        1         Online
...

-- Quorum Votes by Device --
                    Device Name          Present  Possible  Status  Owner
                    -----------          -------  --------  ------  -----
  Device votes:     /dev/did/rdsk/d2s2   1        1         Online  phys-schost-2
  Device votes:     /dev/did/rdsk/d8s2   1        1         Online  phys-schost-4
...

-- Device Group Servers --
                         Device Group   Primary         Secondary
                         ------------   -------         ---------
  Device group servers:  rmt/1          -               -
  Device group servers:  rmt/2          -               -
  Device group servers:  schost-1       phys-schost-2   phys-schost-1
  Device group servers:  schost-3       -               -

-- Device Group Status --
                        Device Group   Status
                        ------------   ------
  Device group status:  rmt/1          Offline
  Device group status:  rmt/2          Offline
  Device group status:  schost-1       Online
  Device group status:  schost-3       Offline

------------------------------------------------------------------

-- Resource Groups and Resources --
              Group Name         Resources
              ----------         ---------
  Resources:  test-rg            test_1
  Resources:  real-property-rg   -
  Resources:  failover-rg        -
  Resources:  descript-rg-1      -
...

-- Resource Groups --
          Group Name   Node Name       State
          ----------   ---------       -----
  Group:  test-rg      phys-schost-1   Offline
  Group:  test-rg      phys-schost-2   Offline
...

-- Resources --
             Resource Name   Node Name       State     Status Message
             -------------   ---------       -----     --------------
  Resource:  test_1          phys-schost-1   Offline   Offline
  Resource:  test_1          phys-schost-2   Offline   Offline
```
You do not need to be logged in as superuser to perform this procedure.
View the cluster configuration.
```
% scconf -p
```
The following example lists the cluster configuration.
```
% scconf -p
Cluster name:                                      cluster-1
Cluster ID:                                        0x3908EE1C
Cluster install mode:                              disabled
Cluster private net:                               172.16.0.0
Cluster private netmask:                           255.255.0.0
Cluster new node authentication:                   unix
Cluster new node list:                             <NULL - Allow any node>
Cluster nodes:                                     phys-schost-1 phys-schost-2
                                                   phys-schost-3 phys-schost-4

Cluster node name:                                 phys-schost-1
  Node ID:                                         1
  Node enabled:                                    yes
  Node private hostname:                           clusternode1-priv
  Node quorum vote count:                          1
  Node reservation key:                            0x3908EE1C00000001
  Node transport adapters:                         hme1 qfe1 qfe2

Node transport adapter:                            hme1
  Adapter enabled:                                 yes
  Adapter transport type:                          dlpi
  Adapter property:                                device_name=hme
  Adapter property:                                device_instance=1
  Adapter property:                                dlpi_heartbeat_timeout=10000
...

Cluster transport junctions:                       hub0 hub1 hub2

Cluster transport junction:                        hub0
  Junction enabled:                                yes
  Junction type:                                   switch
  Junction port names:                             1 2 3 4
...

Junction port:                                     1
  Port enabled:                                    yes

Junction port:                                     2
  Port enabled:                                    yes
...

Cluster transport cables
                    Endpoint               Endpoint   State
                    --------               --------   -----
  Transport cable:  phys-schost-1:hme1@0   hub0@1     Enabled
  Transport cable:  phys-schost-1:qfe1@0   hub1@1     Enabled
  Transport cable:  phys-schost-1:qfe2@0   hub2@1     Enabled
  Transport cable:  phys-schost-2:hme1@0   hub0@2     Enabled
...

Quorum devices:                                    d2 d8

Quorum device name:                                d2
  Quorum device votes:                             1
  Quorum device enabled:                           yes
  Quorum device name:                              /dev/did/rdsk/d2s2
  Quorum device hosts (enabled):                   phys-schost-1 phys-schost-2
  Quorum device hosts (disabled):
...

Device group name:                                 schost-3
  Device group type:                               SDS
  Device group failback enabled:                   no
  Device group node list:                          phys-schost-3, phys-schost-4
  Diskset name:                                    schost-3
...
```
The sccheck(1M) command checks the /etc/vfstab file for configuration errors in cluster file system entries and their global mount points. The sccheck command returns output only when it finds errors; if no errors are found, it simply returns to the shell prompt.
Run sccheck after making cluster configuration changes that affect devices or volume management components.
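For context, a cluster file system entry in /etc/vfstab with a global mount point looks similar to the following sketch; the metadevice paths, mount point, and mount options shown are illustrative. sccheck verifies that entries such as this are consistent on every node.

```
#device to mount           device to fsck              mount point       FS type  fsck pass  mount at boot  mount options
/dev/md/schost-1/dsk/d100  /dev/md/schost-1/rdsk/d100  /global/schost-1  ufs      2          yes            global,logging
```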
The following example shows that the node phys-schost-3 is missing the mount point /global/schost-1.
```
# sccheck
vfstab-check:  WARNING - phys-schost-3 - Missing mount point /global/schost-1
```