This chapter provides information on preparing to administer the cluster and the procedures for using Sun Cluster administration tools.
This is a list of the procedures in this chapter.
"1.4.3 How to Display Sun Cluster Release and Version Information"
"1.4.4 How to Display Configured Resource Types, Resource Groups, and Resources"
Sun Cluster's highly available environment ensures that critical applications are available to end users. The system administrator's job is to make sure that Sun Cluster is stable and operational.
Before undertaking an administrative task, familiarize yourself with the planning information in the Sun Cluster 3.0 U1 Installation Guide and the glossary in the Sun Cluster 3.0 U1 Concepts document. Sun Cluster administration tasks are organized among the following manuals.
Standard tasks, used to administer and maintain the cluster on a regular, perhaps daily basis. These tasks are described in this guide.
Data service tasks, such as installation, configuration, and changing properties. These tasks are described in the Sun Cluster 3.0 U1 Data Services Installation and Configuration Guide.
Service tasks, such as adding or repairing storage or network hardware. These tasks are described in the Sun Cluster 3.0 U1 Hardware Guide.
For the most part, you can perform Sun Cluster administration tasks while the cluster is operational, with the impact on cluster operation limited to a single node. For those procedures that require that the entire cluster be shut down, schedule downtime for off hours, when there is minimal impact on the system. If you plan to take down the cluster or a cluster node, notify users ahead of time.
You can perform administrative tasks on Sun Cluster using a Graphical User Interface (GUI) or using the command line. This section provides an overview of these tools.
Sun Cluster supports two GUI tools that you can use to perform various administrative tasks on your cluster: SunPlex Manager and Sun Management Center. See Chapter 9, Administering Sun Cluster With the Graphical User Interfaces, for more information and for procedures to configure SunPlex Manager and Sun Management Center. For specific information about how to use these tools, see the online help for each GUI.
You can perform most Sun Cluster administration tasks interactively through the scsetup(1M) utility. Whenever possible, administration procedures in this guide are described using scsetup.
You can administer the following items through the scsetup utility.
Quorum
Resource groups
Cluster interconnect
Device groups and volumes
Private hostnames
New nodes
Other cluster properties
Table 1-1 lists the other commands that you use to administer Sun Cluster. See the man pages for more detailed information.
Table 1-1 Sun Cluster Command-Line Interface Commands

| Command | Description |
| --- | --- |
| ccp(1M) | Starts remote console access to the cluster. |
| pmfadm(1M) | Provides administrative access to the process monitor facility. |
| pnmset(1M) | Configures Public Network Management (PNM). |
| pnmstat(1M) | Reports the status of Network Adapter Failover (NAFO) groups monitored by PNM. |
| sccheck(1M) | Checks and validates the global mount entries in the /etc/vfstab file. |
| scconf(1M) | Updates a Sun Cluster configuration. The -p option lists cluster configuration information. |
| scdidadm(1M) | Provides administrative access to the device ID configuration. |
| scgdevs(1M) | Runs the global device namespace administration script. |
| scinstall(1M) | Installs and configures Sun Cluster software; can be run interactively or non-interactively. The -p option displays release and package version information for the Sun Cluster software. |
| scrgadm(1M) | Manages the registration of resource types, the creation of resource groups, and the activation of resources within a resource group. The -p option displays information on installed resources, resource groups, and resource types. |
| scsetup(1M) | Runs the interactive cluster configuration utility, which generates the scconf command and its various options. |
| scshutdown(1M) | Shuts down the entire cluster. |
| scstat(1M) | Provides a snapshot of the cluster status. |
| scswitch(1M) | Performs changes affecting node mastery and states for resource groups and disk device groups. |
In addition, you use commands to administer the volume manager portion of Sun Cluster. These commands depend on the specific volume manager used in your cluster, either Solstice DiskSuite™ or VERITAS Volume Manager.
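For example, you might check volume status with metastat for Solstice DiskSuite or vxstat for VERITAS Volume Manager; the diskset name (schost-1) and disk group name (schost-dg) shown here are hypothetical.

```
# metastat -s schost-1
# vxstat -g schost-dg
```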
This section describes what to do to prepare for administering your cluster.
As your Sun Cluster configuration grows and changes, documenting the hardware aspects that are unique to your site saves administration time when it becomes necessary to change or upgrade the cluster. Labeling cables and connections between the various cluster components can also make administration easier.
Keeping records of your original cluster configuration, and subsequent changes, can also help to reduce the time required by a third-party service provider when servicing your cluster.
You can use a dedicated SPARC workstation, known as the administrative console, to administer the active cluster. Typically, you install and run the Cluster Control Panel (CCP) and graphical user interface (GUI) tools on the administrative console. For more information on the CCP, see "1.4.1 How to Remotely Log In to Sun Cluster". For instructions on installing the Sun Management Center and SunPlex Manager GUI tools, see the Sun Cluster 3.0 U1 Installation Guide.
The administrative console is not a cluster node. The administrative console is used for remote access to the cluster nodes, either over the public network or through a network-based terminal concentrator.
If your cluster consists of a Sun Enterprise™ 10000 server, you must be able to log in from the administrative console to the System Service Processor (SSP) and connect by using the netcon(1M) command. The default method for netcon to connect to a Sun Enterprise 10000 domain is through the network interface. If the network is inaccessible, cluster console (cconsole) access through the network connection hangs. To prevent this, you can use netcon in "exclusive" mode by setting the -f option or by sending ~* during a normal netcon session. This gives you the option of toggling to the serial interface if the network becomes unreachable. Refer to netcon(1M) for more information.
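The following is a minimal sketch of both approaches, assuming you are already logged in to the SSP with the target domain selected; the ssp% prompt is illustrative, and netcon(1M) is the authoritative reference. To open the console in exclusive mode from the start:

```
ssp% netcon -f
```

Or, during a normal netcon session, type the escape sequence to switch to exclusive mode:

```
~*
```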
Sun Cluster does not require a dedicated administrative console, but using one provides these benefits:
Enables centralized cluster management by grouping console and management tools on the same machine
Provides potentially quicker problem resolution by Enterprise Services or your service provider
It is important to back up your cluster on a regular basis. Even though Sun Cluster provides an HA environment, with mirrored copies of data on the storage devices, do not consider this to be a replacement for regular backups. Sun Cluster can survive multiple failures, but it does not protect against user or program error, or catastrophic failure. Therefore, you must have a backup procedure in place to protect against data loss.
The following information should be included as part of your backup. A sketch of commands that capture each item follows the list.
All file system partitions
All database data if you are running DBMS data services
Disk partition information for all cluster disks
The md.tab file if you are using Solstice DiskSuite as your volume manager
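As a hedged illustration only, the following superuser commands show one way to capture each item; the tape device, disk, diskset, and output file names are hypothetical, and your site's backup procedure may differ. For DBMS data, use the backup utility supplied with your database product.

```
ufsdump 0ucf /dev/rmt/0 /global/schost-1     # dump a cluster file system to tape
prtvtoc /dev/rdsk/c1t3d0s2 > c1t3d0.vtoc     # record a disk's partition table
metastat -s schost-1 -p > md.tab.schost-1    # save DiskSuite config in md.tab format
```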
Table 1-2 provides a starting point for administering your cluster.
Table 1-2 Sun Cluster 3.0 Administration Tools

| If You Want To... | Then... | For More Information Go To... |
| --- | --- | --- |
| Remotely Log in to the Cluster | Use the ccp command to launch the Cluster Control Panel (CCP). Then select one of the following icons: cconsole, crlogin, or ctelnet. | "1.4.1 How to Remotely Log In to Sun Cluster" |
| Interactively Configure the Cluster | Launch the scsetup utility. | |
| Display Sun Cluster Release Number and Version Information | Use the scinstall command with either the -p or -pv option. | "1.4.3 How to Display Sun Cluster Release and Version Information" |
| Display Installed Resources, Resource Groups, and Resource Types | Use the scrgadm -p command. | "1.4.4 How to Display Configured Resource Types, Resource Groups, and Resources" |
| Graphically Monitor Cluster Components | Use SunPlex Manager or the Sun Cluster module for Sun Management Center. | SunPlex Manager or Sun Cluster module for Sun Management Center online help |
| Graphically Administer Some Cluster Components | Use SunPlex Manager or the Sun Cluster module for Sun Management Center. | SunPlex Manager or Sun Cluster module for Sun Management Center online help |
| Check the Status of Cluster Components | Use the scstat command. | |
| View the Cluster Configuration | Use the scconf -p command. | |
| Check Global Mount Points | Use the sccheck command. | |
| Look at Sun Cluster System Messages | Examine the /var/adm/messages file. | Solaris system administration documentation |
| Monitor the Status of Solstice DiskSuite | Use the metastat command. | Solstice DiskSuite documentation |
| Monitor the Status of VERITAS Volume Manager | Use the vxstat or vxva command. | VERITAS Volume Manager documentation |
The Cluster Control Panel (CCP) provides a launch pad for the cconsole, crlogin, and ctelnet tools. All three tools start a multiple-window connection to a set of specified nodes. The multiple-window connection consists of a host window for each of the specified nodes and a common window. Input directed to the common window is sent to each of the host windows, allowing you to run commands simultaneously on all nodes of the cluster. See the ccp(1M) and cconsole(1M) man pages for more information.
Verify that the following prerequisites are met before starting the CCP.
Install the appropriate Sun Cluster software (SUNWccon package) on the administrative console.
Make sure the PATH variable on the administrative console includes the Sun Cluster tools directories, /opt/SUNWcluster/bin and /usr/cluster/bin. You can specify an alternate location for the tools directory by setting the CLUSTER_HOME environment variable. (A configuration sketch follows this list.)
Configure the clusters file, the serialports file, and the nsswitch.conf file if using a terminal concentrator. These can be either /etc files or NIS/NIS+ databases. See clusters(4) and serialports(4) for more information.
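As a sketch of these prerequisites, the following fragments show one possible setup; the cluster name (schost), node names, terminal concentrator name (tc-schost), and port numbers are hypothetical, and the comment headers mark which file each fragment belongs to.

```
# ~/.profile on the administrative console
PATH=$PATH:/opt/SUNWcluster/bin:/usr/cluster/bin
export PATH

# /etc/clusters: cluster name followed by its member nodes
schost phys-schost-1 phys-schost-2

# /etc/serialports: node name, terminal concentrator, and TCP port
phys-schost-1 tc-schost 5002
phys-schost-2 tc-schost 5003
```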
Determine if you have a Sun Enterprise E10000 server platform.
If no, proceed to starting the CCP launch pad.
If yes, log into the System Service Processor (SSP) and connect by using the netcon command. Once connected, type Shift~@ to unlock the console and gain write access.
Start the CCP launch pad.
From the administrative console, type the following command.
```
# ccp clustername
```
The CCP launch pad is displayed.
To start a remote session with the cluster, click the appropriate icon (cconsole, crlogin, or ctelnet) in the CCP launch pad.
Figure 1-1 Cluster Control Panel
You can also start cconsole, crlogin, or ctelnet sessions from the command line. See cconsole(1M) for more information.
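For example, assuming a cluster named schost, you might start the cluster console tool directly as follows.

```
# cconsole schost &
```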
The scsetup(1M) utility enables you to interactively configure quorum, resource group, cluster transport, private hostname, device group, and new node options for the cluster.
Become superuser on any node in the cluster.
Enter the scsetup utility.
```
# scsetup
```
The Main Menu is displayed.
Make your selection from the menu and follow the onscreen instructions.
See the scsetup online help for more information.
You do not need to be logged in as superuser to perform these procedures.
Display the Sun Cluster patch numbers.
Sun Cluster update releases are identified by the main product patch number plus the update version, which is 110648-05 for Sun Cluster 3.0 U1.
```
% showrev -p
```
Display the Sun Cluster release number and version strings for all Sun Cluster packages.
```
% scinstall -pv
```
The following example displays the cluster's release number.
```
% showrev -p | grep 110648
Patch: 110648-05 Obsoletes:  Requires:  Incompatibles:  Packages:
```
The following example displays the cluster's release information and version information for all packages.
```
% scinstall -pv
SunCluster 3.0
SUNWscr:       3.0.0,REV=2000.10.01.01.00
SUNWscdev:     3.0.0,REV=2000.10.01.01.00
SUNWscu:       3.0.0,REV=2000.10.01.01.00
SUNWscman:     3.0.0,REV=2000.10.01.01.00
SUNWscsal:     3.0.0,REV=2000.10.01.01.00
SUNWscsam:     3.0.0,REV=2000.10.01.01.00
SUNWscvm:      3.0.0,REV=2000.10.01.01.00
SUNWmdm:       4.2.1,REV=2000.08.08.10.01
```
You can also accomplish this procedure by using the SunPlex Manager GUI. See the SunPlex Manager online help for more information.
You do not need to be logged in as superuser to perform this procedure.
Display the cluster's configured resource types, resource groups, and resources.
```
% scrgadm -p
```
The following example shows the resource types (RT Name), resource groups (RG Name), and resources (RS Name) configured for the cluster schost.
```
% scrgadm -p
RT Name: SUNW.SharedAddress
  RT Description: HA Shared Address Resource Type
RT Name: SUNW.LogicalHostname
  RT Description: Logical Hostname Resource Type
RG Name: schost-sa-1
  RG Description:
    RS Name: schost-1
      RS Description:
      RS Type: SUNW.SharedAddress
      RS Resource Group: schost-sa-1
RG Name: schost-lh-1
  RG Description:
    RS Name: schost-3
      RS Description:
      RS Type: SUNW.LogicalHostname
      RS Resource Group: schost-lh-1
```
You can also accomplish this procedure by using the SunPlex Manager GUI. See the SunPlex Manager online help for more information.
You do not need to be logged in as superuser to perform this procedure.
Check the status of cluster components.
```
% scstat -p
```
The following example provides a sample of status information for cluster components returned by scstat(1M).
```
% scstat -p
-- Cluster Nodes --

                    Node name      Status
                    ---------      ------
  Cluster node:     phys-schost-1  Online
  Cluster node:     phys-schost-2  Online
  Cluster node:     phys-schost-3  Online
  Cluster node:     phys-schost-4  Online

------------------------------------------------------------------

-- Cluster Transport Paths --

                    Endpoint            Endpoint            Status
                    --------            --------            ------
  Transport path:   phys-schost-1:qfe1  phys-schost-4:qfe1  Path online
  Transport path:   phys-schost-1:hme1  phys-schost-4:hme1  Path online
...

------------------------------------------------------------------

-- Quorum Summary --

  Quorum votes possible:  6
  Quorum votes needed:    4
  Quorum votes present:   6

-- Quorum Votes by Node --

                    Node Name      Present  Possible  Status
                    ---------      -------  --------  ------
  Node votes:       phys-schost-1  1        1         Online
  Node votes:       phys-schost-2  1        1         Online
...

-- Quorum Votes by Device --

                    Device Name         Present  Possible  Status
                    -----------         -------  --------  ------
  Device votes:     /dev/did/rdsk/d2s2  1        1         Online
  Device votes:     /dev/did/rdsk/d8s2  1        1         Online
...

-- Device Group Servers --

                         Device Group  Primary        Secondary
                         ------------  -------        ---------
  Device group servers:  rmt/1         -              -
  Device group servers:  rmt/2         -              -
  Device group servers:  schost-1      phys-schost-2  phys-schost-1
  Device group servers:  schost-3      -              -

-- Device Group Status --

                        Device Group  Status
                        ------------  ------
  Device group status:  rmt/1         Offline
  Device group status:  rmt/2         Offline
  Device group status:  schost-1      Online
  Device group status:  schost-3      Offline

------------------------------------------------------------------

-- Resource Groups and Resources --

              Group Name        Resources
              ----------        ---------
  Resources:  test-rg           test_1
  Resources:  real-property-rg  -
  Resources:  failover-rg       -
  Resources:  descript-rg-1     -
...

-- Resource Groups --

          Group Name  Node Name      State
          ----------  ---------      -----
  Group:  test-rg     phys-schost-1  Offline
  Group:  test-rg     phys-schost-2  Offline
...

-- Resources --

             Resource Name  Node Name      State    Status Message
             -------------  ---------      -----    --------------
  Resource:  test_1         phys-schost-1  Offline  Offline
  Resource:  test_1         phys-schost-2  Offline  Offline
```
You can also accomplish this procedure by using the SunPlex Manager GUI. See the SunPlex Manager online help for more information.
You do not need to be logged in as superuser to perform this procedure.
View the cluster configuration.
```
% scconf -p
```
The following example lists the cluster configuration.
```
% scconf -p
Cluster name:                                      cluster-1
Cluster ID:                                        0x3908EE1C
Cluster install mode:                              disabled
Cluster private net:                               172.16.0.0
Cluster private netmask:                           255.255.0.0
Cluster new node authentication:                   unix
Cluster new node list:                             <NULL - Allow any node>
Cluster nodes:                                     phys-schost-1 phys-schost-2
                                                   phys-schost-3 phys-schost-4

Cluster node name:                                 phys-schost-1
  Node ID:                                         1
  Node enabled:                                    yes
  Node private hostname:                           clusternode1-priv
  Node quorum vote count:                          1
  Node reservation key:                            0x3908EE1C00000001
  Node transport adapters:                         hme1 qfe1 qfe2

Node transport adapter:                            hme1
  Adapter enabled:                                 yes
  Adapter transport type:                          dlpi
  Adapter property:                                device_name=hme
  Adapter property:                                device_instance=1
  Adapter property:                                dlpi_heartbeat_timeout=10000
...
Cluster transport junctions:                       hub0 hub1 hub2

Cluster transport junction:                        hub0
  Junction enabled:                                yes
  Junction type:                                   switch
  Junction port names:                             1 2 3 4
...
Junction port:                                     1
  Port enabled:                                    yes

Junction port:                                     2
  Port enabled:                                    yes
...
Cluster transport cables
                    Endpoint              Endpoint  State
                    --------              --------  -----
  Transport cable:  phys-schost-1:hme1@0  hub0@1    Enabled
  Transport cable:  phys-schost-1:qfe1@0  hub1@1    Enabled
  Transport cable:  phys-schost-1:qfe2@0  hub2@1    Enabled
  Transport cable:  phys-schost-2:hme1@0  hub0@2    Enabled
...
Quorum devices:                                    d2 d8

Quorum device name:                                d2
  Quorum device votes:                             1
  Quorum device enabled:                           yes
  Quorum device name:                              /dev/did/rdsk/d2s2
  Quorum device hosts (enabled):                   phys-schost-1 phys-schost-2
  Quorum device hosts (disabled):
...
Device group name:                                 schost-3
  Device group type:                               SDS
  Device group failback enabled:                   no
  Device group node list:                          phys-schost-3, phys-schost-4
  Diskset name:                                    schost-3
```
The sccheck(1M) command checks the /etc/vfstab file for configuration errors with the cluster file system and its global mount points. The sccheck command only returns errors. If no errors are found, sccheck merely returns to the shell prompt.
Run sccheck after making cluster configuration changes that have affected devices or volume management components.
The following example shows that the node phys-schost-3 is missing the mount point /global/schost-1.
```
# sccheck
vfstab-check:  WARNING - phys-schost-3 - Missing mount point /global/schost-1
```
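For reference, a cluster file system entry in /etc/vfstab of the kind that sccheck validates resembles the following; the metadevice paths and mount point shown are illustrative.

```
/dev/md/schost-1/dsk/d0  /dev/md/schost-1/rdsk/d0  /global/schost-1  ufs  2  yes  global,logging
```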