This chapter provides step-by-step procedures for installing and configuring your cluster.
This chapter contains the following step-by-step procedures.
"How to Install Cluster Control Panel Software on the Administrative Console"
"How to Install the Solaris Operating Environment"
"How to Install Sun Cluster Software and Establish New Cluster Nodes"
"How to Use JumpStart to Install the Solaris Operating Environment and Establish New Cluster Nodes"
"How to Configure the Name Service Switch"
"How to Install Solstice DiskSuite Software"
"How to Install VERITAS Volume Manager Software"
"How to Install Data Service Software Packages"
"How to Install the Sun Cluster Module for Sun Management Center"
"How to Add a Cluster Node as a Sun Management Center Agent Host Object"
Before you begin, read the following manuals for information that will help you plan your cluster configuration and prepare your installation strategy.
Sun Cluster 3.0 Concepts--overview of the Sun Cluster 3.0 product
Sun Cluster 3.0 Release Notes--late-breaking information
This entire manual
The following table lists the tasks you perform to install the software.
Table 2-1 Task Map: Installing the Software
Task | For Instructions, Go To
---|---
Plan the layout of your cluster configuration. | Chapter 1, Planning the Sun Cluster Configuration and "Configuration Worksheets and Examples" in Sun Cluster 3.0 Release Notes
(Optional) Install the Cluster Control Panel (CCP) software on the administrative console. | "How to Install Cluster Control Panel Software on the Administrative Console"
Install the Solaris operating environment and Sun Cluster software using one of two methods. |
Method 1 - Install Solaris software, then install the Sun Cluster software by using the scinstall utility. | "How to Install the Solaris Operating Environment" and "How to Install Sun Cluster Software and Establish New Cluster Nodes"
Method 2 - Install Solaris software and Sun Cluster software in one operation by using the scinstall utility custom JumpStart option. | "How to Use JumpStart to Install the Solaris Operating Environment and Establish New Cluster Nodes"
Configure the name service look-up order. | "How to Configure the Name Service Switch"
Install volume manager software. |
Install Solstice DiskSuite software. | "How to Install Solstice DiskSuite Software" and Solstice DiskSuite documentation
Install VERITAS Volume Manager software. | "How to Install VERITAS Volume Manager Software" and VERITAS Volume Manager documentation
Set up directory paths. |
Install data service software packages. | "How to Install Data Service Software Packages"
Configure the cluster. |
This procedure describes how to install the Cluster Control Panel (CCP) software on the administrative console. The CCP provides a launchpad for the cconsole(1M), ctelnet(1M), and crlogin(1M) tools. Each of these tools provides a multiple-window connection to a set of nodes, plus a common window that sends input to all nodes at one time.
You can use any desktop machine that runs the Solaris 8 operating environment as an administrative console. You can also use the administrative console as a Sun Management Center console or server, and as an AnswerBook server. Refer to Sun Management Center documentation for information about installing Sun Management Center software. Refer to Sun Cluster 3.0 Release Notes for information about installing an AnswerBook server.
You are not required to use an administrative console. If you do not use an administrative console, perform administrative tasks from one designated node in the cluster.
Ensure that the Solaris 8 operating environment and any Solaris patches are installed on the administrative console.
All platforms require Solaris 8 with at least the End User System Support software group.
If you are installing from the CD-ROM, insert the Sun Cluster 3.0 CD-ROM into the CD-ROM drive of the administrative console.
If the Volume Management daemon vold(1M) is running and configured to manage CD-ROM devices, it automatically mounts the CD-ROM on the /cdrom/suncluster_3_0 directory.
Change to the /cdrom_image/suncluster_3_0/SunCluster_3.0/Packages directory.
# cd /cdrom_image/suncluster_3_0/SunCluster_3.0/Packages
Install the SUNWccon package.
# pkgadd -d . SUNWccon
(Optional) Install the SUNWscman package.
# pkgadd -d . SUNWscman
Installing the SUNWscman package on the administrative console enables you to view Sun Cluster man pages from the administrative console prior to installing Sun Cluster software on the cluster nodes.
If you installed from a CD-ROM, eject the CD-ROM.
Create an /etc/clusters file.
Add your cluster name and the physical node name of each cluster node to the file.
# vi /etc/clusters
clustername node1 node2
See the /opt/SUNWcluster/bin/clusters(4) man page for details.
Create an /etc/serialports file.
Add the physical node name of each cluster node, the terminal concentrator (TC) or System Service Processor (SSP) name, and the serial port numbers to the file.
Use the telnet(1) port numbers, not the physical port numbers, for the serial port numbers in the /etc/serialports file. Determine the serial port number by adding 5000 to the physical port number. For example, if a physical port number is 6, the serial port number should be 5006.
# vi /etc/serialports
node1 TC_hostname 500n
node2 TC_hostname 500n
See the /opt/SUNWcluster/bin/serialports(4) man page for details and special considerations for the Sun Enterprise E10000 server.
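The port-number rule above can be sketched in a few lines of shell; the port value used here is illustrative only.

```shell
# Illustrative only: the telnet port number for a console line is the
# physical serial port number plus 5000.
physical_port=6
serial_port=`expr $physical_port + 5000`
echo $serial_port
```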
For convenience, add the /opt/SUNWcluster/bin directory to the PATH and the /opt/SUNWcluster/man directory to the MANPATH on the administrative console.
If you installed the SUNWscman package, also add the /usr/cluster/man directory to the MANPATH.
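For example, lines of the following form in root's shell startup file on the administrative console make the CCP tools and man pages available; this is a sketch that assumes a Bourne-style shell.

```shell
# Sketch for a Bourne-style shell startup file (for example, /.profile
# on the administrative console).
PATH=$PATH:/opt/SUNWcluster/bin
MANPATH=$MANPATH:/opt/SUNWcluster/man
# Add the next directory only if you installed the SUNWscman package.
MANPATH=$MANPATH:/usr/cluster/man
export PATH MANPATH
```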
Start the CCP utility.
# /opt/SUNWcluster/bin/ccp clustername
Refer to the procedure "How to Remotely Log In to Sun Cluster" in Sun Cluster 3.0 System Administration Guide and the /opt/SUNWcluster/bin/ccp(1M) man page for information about using the CCP.
To install Solaris software, go to "How to Install the Solaris Operating Environment". To use the scinstall custom JumpStart option to install Solaris and Sun Cluster software, go to "How to Use JumpStart to Install the Solaris Operating Environment and Establish New Cluster Nodes".
If you are not using the scinstall(1M) custom JumpStart installation method to install software, perform this task on each node in the cluster.
Ensure that the hardware setup is complete and connections are verified before installing Solaris software.
Refer to Sun Cluster 3.0 Hardware Guide and your server and storage device documentation for details.
On each node of the cluster, determine whether the local-mac-address variable is correctly set to false.
# /usr/sbin/eeprom local-mac-address?
If the command returns local-mac-address=false, the variable setting is correct. Proceed to Step 3.
If the command returns local-mac-address=true, change the setting to false.
# /usr/sbin/eeprom local-mac-address?=false
The new setting becomes effective at the next system reboot.
Have available your completed "Local File System Layout Worksheet" from Sun Cluster 3.0 Release Notes.
Update naming services.
Add address-to-name mappings for all public hostnames and logical addresses to any naming services (such as NIS, NIS+, or DNS) used by clients for access to cluster services. See "IP Addresses" for planning guidelines.
You also add these addresses to the local /etc/inet/hosts file on each node during the procedure "How to Configure the Name Service Switch".
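As an illustration, such mappings take the following form in an /etc/inet/hosts file; every address and hostname shown here is hypothetical.

```
192.168.100.11  phys-schost-1   # physical hostname of node 1
192.168.100.12  phys-schost-2   # physical hostname of node 2
192.168.100.21  schost-lh-1     # logical address used by a data service
```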
If you are using a cluster administrative console, display a console screen for each node in the cluster.
If the Cluster Control Panel is installed and configured on your administrative console, you can use the cconsole(1M) utility to display the individual console screens. Otherwise, you must connect to the consoles of each node individually.
To save time, you can use the cconsole utility to install the Solaris operating environment on all nodes at the same time.
Are you installing a new node to an existing cluster?
If no, proceed to Step 7.
If yes, perform the following steps to create a mount point on the new node for each cluster file system in the cluster.
From another, active node of the cluster, display the names of all cluster file systems.
% mount | grep global | egrep -v node@ | awk '{print $1}'
On the node you are adding to the cluster, create a mount point for each cluster file system in the cluster.
% mkdir -p mountpoint
For example, if a file system name returned by the mount command was /global/dg-schost-1, run mkdir -p /global/dg-schost-1 on the node being added to the cluster.
Install the Solaris operating environment as instructed in the Solaris installation documentation.
You must install all nodes in a cluster with the same version of the Solaris operating environment.
You can use any method normally used to install the Solaris operating environment on new nodes in a clustered environment. These methods include the Solaris interactive installation program, Solaris JumpStart, and Solaris Web Start.
During installation, do the following.
Install at least the End User System Support software group. You might need to install other Solaris software packages that are not part of the End User System Support software group, for example, the Apache HTTP server packages. Third-party software, such as Oracle, might also require additional Solaris packages. Refer to third-party documentation for any Solaris software requirements.
Sun Enterprise E10000 servers require the Entire Distribution + OEM software group.
Create a file system of at least 100 MBytes with its mount point set as /globaldevices, as well as any file-system partitions needed to support your volume manager software. Refer to "System Disk Partitions" for partitioning guidelines to support Sun Cluster software.
The /globaldevices file system is required for Sun Cluster software installation to succeed.
Answer no when asked if you want automatic power-saving shutdown. You must disable automatic shutdown in Sun Cluster configurations. Refer to the pmconfig(1M) and power.conf(4) man pages for more information.
For ease of administration, set the same root password on each node.
The Solaris interface groups feature is disabled by default during Solaris software installation. Interface groups are not supported in a Sun Cluster configuration and should not be enabled. Refer to the ifconfig(1M) man page for more information about Solaris interface groups.
Install any Solaris software patches.
Refer to Sun Cluster 3.0 Release Notes for the location of patches and installation instructions.
Install any hardware-related patches and download any needed firmware contained in the hardware patches.
Refer to Sun Cluster 3.0 Release Notes for the location of patches and installation instructions.
To install Sun Cluster software on your cluster nodes, go to "How to Install Sun Cluster Software and Establish New Cluster Nodes".
After installing the Solaris operating environment, perform this task on each node of the cluster.
If you used the scinstall(1M) custom JumpStart method to install software, the Sun Cluster software is already installed. Proceed to "How to Configure the Name Service Switch".
Have available the following completed configuration planning worksheets from Sun Cluster 3.0 Release Notes.
"Cluster and Node Names Worksheet"
"Cluster Interconnect Worksheet"
See Chapter 1, Planning the Sun Cluster Configuration for planning guidelines.
Become superuser on the cluster node.
If you are installing from the CD-ROM, insert the Sun Cluster 3.0 CD-ROM into the CD-ROM drive of the node you want to install and configure.
If the Volume Management daemon vold(1M) is running and configured to manage CD-ROM devices, it automatically mounts the CD-ROM on the /cdrom/suncluster_3_0 directory.
Change to the /cdrom_image/suncluster_3_0/SunCluster_3.0/Tools directory.
# cd /cdrom_image/suncluster_3_0/SunCluster_3.0/Tools
Start the scinstall(1M) utility.
# ./scinstall
Follow these guidelines while using the interactive scinstall utility.
Interactive scinstall enables you to type ahead. Therefore, do not press Return more than once if the next menu screen does not appear immediately.
Unless otherwise noted, pressing Control-D returns you either to the start of a series of related questions or to the Main Menu.
Your session answers are stored as defaults for the next time you run this menu option.
Until the node has successfully booted in cluster mode, you can rerun scinstall and change the configuration information as needed. However, if bad configuration data for the node has been pushed over to the established portion of the cluster, you might first need to remove the bad information. To do this, log in to one of the active cluster nodes, then use the scsetup(1M) utility to remove the bad adapter, junction, or cable information.
To install the first node and establish the new cluster, type 1 (Establish a new cluster).
Follow the prompts to install Sun Cluster software, using the information from your configuration planning worksheets. You will be asked for the following information.
Cluster name
Names of the other nodes that will become part of this cluster
Node authentication
Private network address and netmask--You cannot change the private network address after the cluster has successfully formed
Cluster interconnect (transport adapters and transport junctions)--You can configure no more than two adapters by using the scinstall command, but you can configure more adapters later by using the scsetup utility
Global devices file-system name
Automatic reboot--Do not choose automatic reboot if you have Sun Cluster software patches to install
When you finish answering the prompts, the scinstall command generated from your input is displayed for confirmation. If you choose not to accept the command, the scinstall utility returns you to the Main Menu. From there you can rerun menu option 1 and provide different answers. Your previous entries are displayed as the defaults.
Unless you have installed your own /etc/inet/ntp.conf file, the scinstall command installs a default ntp.conf file for you. Because the default file is shipped with references to eight nodes, the xntpd(1M) daemon might issue error messages regarding some of these references at boot time. You can safely ignore these messages. See "How to Update Network Time Protocol (NTP)" for information on how to suppress these messages under otherwise normal cluster conditions.
To install the second node of the cluster, type 2 (Add this machine as a node).
You can start this step while the first node is still being installed.
Follow the prompts to install Sun Cluster software, using the information from your configuration planning worksheets. You will be asked for the following information.
Name of an existing cluster node, referred to as the sponsor node
Cluster name
Cluster interconnect (transport adapters and transport junctions)
Global devices file-system name
Automatic reboot--Do not choose automatic reboot if you have Sun Cluster software patches to install
When you finish answering the prompts, the scinstall command generated from your input is displayed for confirmation. If you choose not to accept the command, the scinstall utility returns you to the Main Menu. From there you can rerun menu option 2 and provide different answers. Your previous answers are displayed as the defaults.
If you choose to continue installation and the sponsor node is not yet established, scinstall waits for the sponsor node to become available.
Repeat Step 7 on each additional node until all nodes are fully configured.
You do not need to wait for the second node to complete installation before beginning installation on additional nodes.
Install any Sun Cluster software patches.
Refer to Sun Cluster 3.0 Release Notes for the location of patches and installation instructions.
If you installed Sun Cluster software patches, shut down the cluster, then reboot each node in the cluster.
Before rebooting the first node of the cluster, shut down the cluster by using the scshutdown command. Until the cluster nodes are removed from install mode, only the first node, which establishes the cluster (the sponsor node), has a quorum vote. In an established cluster that is still in install mode, if the cluster is not shut down before the first node is rebooted, the remaining cluster nodes cannot obtain quorum and the entire cluster shuts down.
Cluster nodes remain in install mode until the first time you run the scsetup(1M) command, during the procedure "How to Perform Post-Installation Setup".
The following example shows the progress messages displayed as scinstall installation tasks are completed on the node phys-schost-1, which is the first node to be installed in the cluster.
** Installing SunCluster 3.0 **
SUNWscr.....done.
SUNWscdev...done.
SUNWscu.....done.
SUNWscman...done.
SUNWscsal...done.
SUNWscsam...done.
SUNWscrsmop.done.
SUNWsci.....done.
SUNWscid....done.
SUNWscidx...done.
SUNWscvm....done.
SUNWmdm.....done.

Initializing cluster name to "sccluster" ... done
Initializing authentication options ... done
Initializing configuration for adapter "hme2" ... done
Initializing configuration for adapter "hme4" ... done
Initializing configuration for junction "switch1" ... done
Initializing configuration for junction "switch2" ... done
Initializing configuration for cable ... done
Initializing configuration for cable ... done
Setting the node ID for "phys-schost-1" ... done (id=1)

Checking for global devices global file system ... done
Checking device to use for global devices file system ... done
Updating vfstab ... done

Verifying that NTP is configured ... done
Installing a default NTP configuration ... done
Please complete the NTP configuration after scinstall has finished.

Verifying that "cluster" is set for "hosts" in nsswitch.conf ... done
Adding the "cluster" switch to "hosts" in nsswitch.conf ... done
Verifying that "cluster" is set for "netmasks" in nsswitch.conf ... done
Adding the "cluster" switch to "netmasks" in nsswitch.conf ... done

Verifying that power management is NOT configured ... done
Unconfiguring power management ... done
/etc/power.conf has been renamed to /etc/power.conf.060199105132
Power management is incompatible with the HA goals of the cluster.
Please do not attempt to re-configure power management.

Ensure routing is disabled ... done
Network routing has been disabled on this node by creating /etc/notrouter.
Having a cluster node act as a router is not supported by Sun Cluster.
Please do not re-enable network routing.

Log file - /var/cluster/logs/install/scinstall.log.276

Rebooting ...
To set up the name service look-up order, go to "How to Configure the Name Service Switch".
Perform this procedure to use the custom JumpStart installation method. This method installs the Solaris operating environment and Sun Cluster software on all cluster nodes in a single operation.
Ensure that the hardware setup is complete and connections are verified before installing Solaris software.
Refer to Sun Cluster 3.0 Hardware Guide and your server and storage device documentation for details on setting up the hardware.
On each node of the cluster, determine whether the local-mac-address variable is correctly set to false.
# /usr/sbin/eeprom local-mac-address?
If the command returns local-mac-address=false, the variable setting is correct. Proceed to Step 3.
If the command returns local-mac-address=true, change the setting to false.
# /usr/sbin/eeprom local-mac-address?=false
The new setting becomes effective at the next system reboot.
Have available the following information.
The Ethernet address of each cluster node
The following completed configuration planning worksheets from Sun Cluster 3.0 Release Notes.
"Local File System Layout Worksheet"
"Cluster and Node Names Worksheet"
"Cluster Interconnect Worksheet"
See Chapter 1, Planning the Sun Cluster Configuration for planning guidelines.
Update naming services.
Add address-to-name mappings for all public hostnames and logical addresses, as well as the IP address and hostname of the JumpStart server, to any naming services (such as NIS, NIS+, or DNS) used by clients for access to cluster services. See "IP Addresses" for planning guidelines. You also add these addresses to the local /etc/inet/hosts file on each node during the procedure "How to Configure the Name Service Switch".
If you do not use a name service, create jumpstart-dir/autoscinstall.d/nodes/nodename/archive/etc/inet/hosts files on the JumpStart install server, one file for each node of the cluster, where nodename is the name of a node of the cluster. Add the address-to-name mappings there.
As superuser, set up the JumpStart install server for Solaris operating environment installation.
Refer to the setup_install_server(1M) and add_install_client(1M) man pages and Solaris Advanced Installation Guide for instructions on setting up a JumpStart install server.
When setting up the install server, ensure that the following requirements are met.
The install server is on the same subnet as the cluster nodes, but is not itself a cluster node.
The install server installs the release of the Solaris operating environment required by the Sun Cluster software.
A custom JumpStart directory exists for JumpStart installation of Sun Cluster. This jumpstart-dir directory must contain a copy of the check(1M) utility and be NFS exported for reading by the JumpStart install server.
Each new cluster node is configured as a custom JumpStart install client using the custom JumpStart directory set up for Sun Cluster installation.
(Optional) Create a directory on the JumpStart install server to hold your copies of the Sun Cluster and Sun Cluster data services CD-ROMs.
In the following example, the /export/suncluster directory is created for this purpose.
# mkdir -m 755 /export/suncluster
Copy the Sun Cluster CD-ROM to the JumpStart install server.
Insert the Sun Cluster 3.0 CD-ROM into the CD-ROM drive on the JumpStart install server.
If the Volume Management daemon vold(1M) is running and configured to manage CD-ROM devices, it automatically mounts the CD-ROM on the /cdrom/suncluster_3_0 directory.
Change to the /cdrom_image/suncluster_3_0/SunCluster_3.0/Tools directory.
# cd /cdrom_image/suncluster_3_0/SunCluster_3.0/Tools
Copy the CD-ROM to a new directory on the JumpStart install server.
The scinstall command creates the new install directory as it copies the CD-ROM files. The install directory name /export/suncluster/sc30 is used here as an example.
# ./scinstall -a /export/suncluster/sc30
Eject the CD-ROM.
# cd /
# eject cdrom
Ensure that the Sun Cluster 3.0 CD-ROM image on the JumpStart install server is NFS exported for reading by the JumpStart install server.
Refer to NFS Administration Guide and the share(1M) and dfstab(4) man pages for more information about automatic file sharing.
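For example, an entry of the following form in the /etc/dfs/dfstab file on the install server shares the image read-only at every boot; the path shown is the example install directory used earlier in this procedure.

```
share -F nfs -o ro -d "Sun Cluster 3.0 CD-ROM image" /export/suncluster/sc30
```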
From the JumpStart install server, start the scinstall(1M) utility.
The path /export/suncluster/sc30 is used here as an example of the install directory you created.
# cd /export/suncluster/sc30/SunCluster_3.0/Tools
# ./scinstall
Follow these guidelines while using the interactive scinstall utility.
Interactive scinstall enables you to type ahead. Therefore, do not press Return more than once if the next menu screen does not appear immediately.
Unless otherwise noted, pressing Control-D returns you either to the start of a series of related questions or to the Main Menu.
Your session answers are stored as defaults for the next time you run this menu option.
To choose JumpStart installation, type 3 (Configure a cluster to be JumpStarted from this install server).
If option 3 is not preceded by an asterisk, the option is disabled because JumpStart setup is incomplete or contains an error. Exit the scinstall utility, correct the JumpStart setup, then restart the scinstall utility.
Follow the prompts to specify Sun Cluster configuration information.
JumpStart directory name
Cluster name
Cluster node names
Node authentication
Private network address and netmask--You cannot change the private network address after the cluster has successfully formed
Cluster interconnect (transport adapters and transport junctions)--You can configure no more than two adapters by using the scinstall command, but you can configure additional adapters later by using the scsetup utility
Global devices file-system name
Automatic reboot--Do not choose automatic reboot if you have Sun Cluster software patches to install
When finished, the scinstall commands generated from your input are displayed for confirmation. If you choose not to accept one of them, the scinstall utility returns you to the Main Menu. From there you can rerun menu option 3 and provide different answers. Your previous entries are displayed as the defaults.
If necessary, make adjustments to the default class file, or profile, created by scinstall.
The scinstall command creates the following autoscinstall.class default class file in the jumpstart-dir/autoscinstall.d/3.0 directory.
install_type    initial_install
system_type     standalone
partitioning    explicit
filesys         rootdisk.s0 free /
filesys         rootdisk.s1 750 swap
filesys         rootdisk.s3 100 /globaldevices
filesys         rootdisk.s7 10
cluster         SUNWCuser add
package         SUNWman add
The default class file installs the End User System Support software group (SUNWCuser) of Solaris software. For Sun Enterprise E10000 servers, you must install the Entire Distribution + OEM software group. Also, some third-party software, such as Oracle, might require additional Solaris packages. Refer to third-party documentation for any Solaris software requirements.
You can change the profile in one of the following ways.
Edit the autoscinstall.class file directly. These changes are applied to all nodes in all clusters that use this custom JumpStart directory.
Update the rules file to point to other profiles, then run the check utility to validate the rules file.
As long as minimum file-system allocation requirements are met, no restrictions are imposed on changes to the Solaris operating environment install profile. Refer to "System Disk Partitions" for partitioning guidelines and requirements to support Sun Cluster 3.0 software.
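As a sketch of the second approach, with hypothetical file names throughout: derive an alternate class file from the default one, then point a rules entry at it. For demonstration, this sketch starts from only the software-group line of the default class file; on a real install server you would copy the whole jumpstart-dir/autoscinstall.d/3.0/autoscinstall.class file.

```shell
# Hypothetical sketch: e10000.class is a demonstration file name.
echo 'cluster SUNWCuser add' > e10000.class
# Sun Enterprise E10000 servers need the Entire Distribution + OEM
# software group (SUNWCXall) in place of the End User group (SUNWCuser).
sed 's/SUNWCuser/SUNWCXall/' e10000.class > e10000.class.new
mv e10000.class.new e10000.class
cat e10000.class
# Point a rules entry at the new profile, then validate the rules file
# with the check utility shipped in the jumpstart-dir.
```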
Are you installing a new node to an existing cluster?
If no, proceed to Step 12.
If yes, perform the following steps to create a mount point on the new node for each cluster file system in the cluster.
From another, active node of the cluster, display the names of all cluster file systems.
% mount | grep global | egrep -v node@ | awk '{print $1}'
On the node you are adding to the cluster, create a mount point for each cluster file system in the cluster.
% mkdir -p mountpoint
For example, if a file system name returned by the mount command is /global/dg-schost-1, run mkdir -p /global/dg-schost-1 on the node being added to the cluster.
Set up Solaris patch directories.
Create jumpstart-dir/autoscinstall.d/nodes/nodename/patches directories on the JumpStart install server, one directory for each node in the cluster, where nodename is the name of a cluster node.
# mkdir jumpstart-dir/autoscinstall.d/nodes/nodename/patches
Place copies of any Solaris patches into each of these directories. Also place copies of any hardware-related patches that must be installed after Solaris software is installed into each of these directories.
If you do not use a name service, set up files to contain the necessary hostname information.
On the JumpStart install server, create files named jumpstart-dir/autoscinstall.d/nodes/nodename/archive/etc/inet/hosts.
Create one file for each node, where nodename is the name of a cluster node.
Add the following entries into each file.
IP address and hostname of the NFS server that holds a copy of the Sun Cluster CD-ROM image. This could be the JumpStart install server or another machine.
IP address and hostname of each node in the cluster.
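For illustration, such a file might contain entries like the following; every address and hostname shown is hypothetical.

```
192.168.100.10  installserver   # NFS server that holds the CD-ROM image
192.168.100.11  phys-schost-1
192.168.100.12  phys-schost-2
```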
(Optional) Add your own post-installation finish script.
You can add your own finish script, which is run after the standard finish script installed by the scinstall command.
If you are using an administrative console, display a console screen for each node in the cluster.
If cconsole(1M) is installed and configured on your administrative console, you can use it to display the individual console screens. Otherwise, you must connect to the consoles of each node individually.
From the ok PROM prompt on the console of each node, type the boot net - install command to begin the network JumpStart installation of each node.
The dash (-) in the command must have a space on each side.
ok boot net - install
Unless you have installed your own ntp.conf file in the /etc/inet directory, the scinstall command installs a default ntp.conf file for you. Because the default file is shipped with references to eight nodes, the xntpd(1M) daemon might issue error messages regarding some of these references at boot time. You can safely ignore these messages. See "How to Update Network Time Protocol (NTP)" for information on how to suppress these messages under otherwise normal cluster conditions.
When the installation is successfully completed, each node is fully installed as a new cluster node.
The Solaris interface groups feature is disabled by default during Solaris software installation. Interface groups are not supported in a Sun Cluster configuration and should not be enabled. Refer to the ifconfig(1M) man page for more information about Solaris interface groups.
Install any Sun Cluster software patches.
Refer to Sun Cluster 3.0 Release Notes for the location of patches and installation instructions.
If you installed Sun Cluster software patches, shut down the cluster, then reboot each node in the cluster.
Before rebooting the first node of the cluster, shut down the cluster by using the scshutdown command. Until the cluster nodes are removed from install mode, only the first node, which establishes the cluster (the sponsor node), has a quorum vote. In an established cluster that is still in install mode, if the cluster is not shut down before the first node is rebooted, the remaining cluster nodes cannot obtain quorum and the entire cluster shuts down.
Cluster nodes remain in install mode until the first time you run the scsetup(1M) command, during the procedure "How to Perform Post-Installation Setup".
To set up the name service look-up order, go to "How to Configure the Name Service Switch".
Perform this task on each node in the cluster.
Become superuser on the cluster node.
Edit the /etc/nsswitch.conf file.
Verify that cluster is the first source in the look-up order for the hosts and netmasks database entries.
This order is necessary for Sun Cluster software to function properly. The scinstall(1M) command adds cluster to these entries during installation.
(Optional) For the hosts and netmasks database entries, follow cluster with files.
(Optional) For all other database entries, place files first in look-up order.
Performing Step b and Step c can increase data service availability if the naming service becomes unavailable.
The following example shows partial contents of an /etc/nsswitch.conf file. The look-up order for the hosts and netmasks database entries is first cluster, then files. The look-up order for other entries begins with files.
# vi /etc/nsswitch.conf
...
passwd:     files nis
group:      files nis
...
hosts:      cluster files nis
...
netmasks:   cluster files nis
...
Update the /etc/inet/hosts file with all public hostnames and logical addresses for the cluster.
To install Solstice DiskSuite software, go to "How to Install Solstice DiskSuite Software". To install VERITAS Volume Manager software, go to "How to Install VERITAS Volume Manager Software".
Perform this task on each node in the cluster.
Become superuser on the cluster node.
If you are installing from the CD-ROM, insert the Solaris 8 Software 2 of 2 CD-ROM into the CD-ROM drive on the node.
Solstice DiskSuite software packages are now located on the Solaris 8 software CD-ROM.
This step assumes that the Volume Management daemon vold(1M) is running and configured to manage CD-ROM devices.
Install the Solstice DiskSuite software packages.
If you have Solstice DiskSuite software patches to install, do not reboot after installing the Solstice DiskSuite software.
Install software packages in the order shown in the following example.
    # cd /cdrom_image/sol_8_sparc_2/Solaris_8/EA/products/DiskSuite_4.2.1/sparc/Packages
    # pkgadd -d . SUNWmdr SUNWmdu [SUNWmdx] optional-pkgs
The SUNWmdr and SUNWmdu packages are required for all Solstice DiskSuite installations. The SUNWmdx package is also required for the 64-bit Solstice DiskSuite installation. Refer to your Solstice DiskSuite installation documentation for information about optional software packages.
If you installed from a CD-ROM, eject the CD-ROM.
If not already installed, install any Solstice DiskSuite patches.
Refer to Sun Cluster 3.0 Release Notes for the location of patches and installation instructions.
Manually populate the global device namespace for Solstice DiskSuite by running the /usr/cluster/bin/scgdevs command.
If you installed Solstice DiskSuite software patches, shut down the cluster, then reboot each node in the cluster.
Before rebooting the first node of the cluster, shut down the cluster by using the scshutdown command. Until the cluster nodes are removed from install mode, only the first node, the one that established the cluster (the sponsor node), has a quorum vote. If the cluster is still in install mode and you reboot that first node without first shutting down the cluster, the remaining cluster nodes cannot obtain quorum and the entire cluster shuts down.
Cluster nodes remain in install mode until the first time you run the scsetup(1M) command, during the procedure "How to Perform Post-Installation Setup".
Refer to your Solstice DiskSuite installation documentation for complete information about installing Solstice DiskSuite software.
To set up your root user's environment, go to "How to Set Up the Root User's Environment".
Perform this task on each node in the cluster.
Become superuser on the cluster node.
Disable Dynamic Multipathing (DMP).
    # mkdir /dev/vx
    # ln -s /dev/dsk /dev/vx/dmp
    # ln -s /dev/rdsk /dev/vx/rdmp
Insert the VxVM CD-ROM into the CD-ROM drive on the node.
Install the VxVM software packages.
If you have VxVM software patches to install, do not reboot after installing the VxVM software.
    # cd /cdrom_image/volume_manager_3_0_4_solaris/pkgs
    # pkgadd -d . VRTSvxvm VRTSvmdev VRTSvmman
List VRTSvxvm first in the pkgadd(1M) command and VRTSvmdev second. Refer to your VxVM installation documentation for descriptions of the other VxVM software packages.
The VRTSvxvm and VRTSvmdev packages are required for all VxVM installations.
Eject the CD-ROM.
Install any VxVM patches.
Refer to Sun Cluster 3.0 Release Notes for the location of patches and installation instructions.
If you installed VxVM software patches, shut down the cluster, then reboot each node in the cluster.
Before rebooting the first node of the cluster, shut down the cluster by using the scshutdown command. Until the cluster nodes are removed from install mode, only the first node, the one that established the cluster (the sponsor node), has a quorum vote. If the cluster is still in install mode and you reboot that first node without first shutting down the cluster, the remaining cluster nodes cannot obtain quorum and the entire cluster shuts down.
Cluster nodes remain in install mode until the first time you run the scsetup(1M) command, during the procedure "How to Perform Post-Installation Setup".
Refer to your VxVM installation documentation for complete information about installing VxVM software.
To set up your root user's environment, go to "How to Set Up the Root User's Environment".
Perform these tasks on each node in the cluster.
Become superuser on the cluster node.
Set the PATH to include /usr/sbin and /usr/cluster/bin.
For VERITAS Volume Manager, also set your PATH to include /etc/vx/bin. If you installed the VRTSvmsa package, also add /opt/VRTSvmsa/bin to your PATH.
Set the MANPATH to include /usr/cluster/man. Also include the volume manager-specific paths.
For Solstice DiskSuite software, set your MANPATH to include /usr/share/man.
For VERITAS Volume Manager, set your MANPATH to include /opt/VRTSvxvm/man. If you installed the VRTSvmsa package, also add /opt/VRTSvmsa/man to your MANPATH.
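Collected in one place, the settings above might look like the following sketch of root's .profile additions. This is an illustration only, shown for a Solstice DiskSuite node with sh/ksh syntax assumed; substitute the VxVM directories as described above:

```shell
# Sketch of root's PATH/MANPATH additions on a Solstice DiskSuite node.
# (Assumption: sh/ksh syntax. For VxVM, add /etc/vx/bin to PATH and use
# /opt/VRTSvxvm/man, plus the /opt/VRTSvmsa directories if VRTSvmsa is
# installed, as described above.)
PATH=$PATH:/usr/sbin:/usr/cluster/bin
MANPATH=/usr/share/man:/usr/cluster/man
export PATH MANPATH
```

Keeping these lines identical on every node avoids surprises when you administer the cluster from different consoles.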
(Optional) For ease of administration, set the same root password on each node, if you have not already done so.
To install data service software packages, go to "How to Install Data Service Software Packages".
Perform this task on each cluster node.
You must install the same set of data service packages on each node, even if a node is not expected to host resources for an installed data service.
Become superuser on the cluster node.
If you are installing from the CD-ROM, insert the Data Services CD-ROM into the CD-ROM drive on the node.
Start the scinstall(1M) utility.
    # scinstall
Follow these guidelines while using the interactive scinstall utility.
Interactive scinstall enables you to type ahead. Therefore, do not press Return more than once if the next menu screen does not appear immediately.
Unless otherwise noted, pressing Control-D returns you either to the start of a series of related questions or to the Main Menu.
To add data services, type 4 (Add support for a new data service to this cluster node).
Follow the prompts to select all data services you want to install.
If you installed from a CD-ROM, eject the CD-ROM.
Install any Sun Cluster data service patches.
Refer to Sun Cluster 3.0 Release Notes for the location of patches and installation instructions.
You do not have to reboot after installing Sun Cluster data service patches, unless a reboot is specified by the patch's special instructions. If a patch requires a reboot, before rebooting the first node of the cluster, shut down the cluster by using the scshutdown command. Until the cluster nodes are removed from install mode, only the first node, the one that established the cluster (the sponsor node), has a quorum vote. If the cluster is still in install mode and you reboot that first node without first shutting down the cluster, the remaining cluster nodes cannot obtain quorum and the entire cluster shuts down. Cluster nodes remain in install mode until the first time you run the scsetup(1M) command, during the procedure "How to Perform Post-Installation Setup".
For post-installation setup and configuration tasks, see "Configuring the Cluster".
The following table lists the tasks to perform to configure your cluster.
Table 2-2 Task Map: Configuring the Cluster
| Task | For Instructions, Go To ... |
|---|---|
| Perform post-installation setup. | "How to Perform Post-Installation Setup" |
| Configure the Solstice DiskSuite or VERITAS Volume Manager volume manager and device groups. | "How to Configure Volume Manager Software" and volume manager documentation |
| Create and mount cluster file systems. | "How to Add Cluster File Systems" |
| (Optional) Configure additional public network adapters. | "How to Configure Additional Public Network Adapters" |
| Configure Public Network Management (PNM) and set up NAFO groups. | "How to Configure Public Network Management (PNM)" |
| (Optional) Change a node's private hostname. | "How to Change Private Hostnames" |
| Edit the /etc/inet/ntp.conf file to update node name entries. | "How to Update Network Time Protocol (NTP)" |
| (Optional) Install the Sun Cluster module to Sun Management Center software. | "Installation Requirements for Sun Management Center Software for Sun Cluster Monitoring" and Sun Management Center documentation |
| Install and configure third-party applications, data services, and resource groups. | Sun Cluster 3.0 Data Services Installation and Configuration Guide, and third-party application documentation |
Perform this procedure one time only, after the cluster is fully formed.
Verify that all nodes have joined the cluster.
From one node, display a list of cluster nodes to verify that all nodes have joined the cluster.
You do not need to be logged in as superuser to run this command.
    % scstat -n
Output resembles the following.
    -- Cluster Nodes --
                       Node name      Status
                       ---------      ------
      Cluster node:    phys-schost-1  Online
      Cluster node:    phys-schost-2  Online
On each node, display a list of all the devices that the system checks to verify their connectivity to the cluster nodes.
You do not need to be logged in as superuser to run this command.
    % scdidadm -L
The list on each node should be the same. Output resembles the following.
    1  phys-schost-1:/dev/rdsk/c0t0d0  /dev/did/rdsk/d1
    2  phys-schost-1:/dev/rdsk/c1t1d0  /dev/did/rdsk/d2
    2  phys-schost-2:/dev/rdsk/c1t1d0  /dev/did/rdsk/d2
    3  phys-schost-1:/dev/rdsk/c1t2d0  /dev/did/rdsk/d3
    3  phys-schost-2:/dev/rdsk/c1t2d0  /dev/did/rdsk/d3
    ...
Identify from the scdidadm output the global device ID (DID) name of each shared disk you will configure as a quorum device.
For example, the output in the previous substep shows that global device d2 is shared by phys-schost-1 and phys-schost-2. You need this information in Step 4. Refer to "Quorum Devices" for further information about planning quorum devices.
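The shared disks can also be picked out mechanically from saved scdidadm -L output. The following is a hedged sketch of my own, not a Sun Cluster tool; the sample data mirrors the output shown above:

```shell
# Sketch: list DID device names reported by more than one node, that is,
# shared disks that are candidates for quorum devices. Sample data stands
# in for saved "scdidadm -L" output.
cat > /tmp/scdidadm.out <<'EOF'
1  phys-schost-1:/dev/rdsk/c0t0d0  /dev/did/rdsk/d1
2  phys-schost-1:/dev/rdsk/c1t1d0  /dev/did/rdsk/d2
2  phys-schost-2:/dev/rdsk/c1t1d0  /dev/did/rdsk/d2
3  phys-schost-1:/dev/rdsk/c1t2d0  /dev/did/rdsk/d3
3  phys-schost-2:/dev/rdsk/c1t2d0  /dev/did/rdsk/d3
EOF

# Field 1 is the DID instance number, field 3 the DID name; an instance
# counted more than once is attached to more than one node.
awk '
    { count[$1]++; name[$1] = $3 }
    END { for (i in count) if (count[i] > 1) print name[i] }
' /tmp/scdidadm.out | sort
# Prints /dev/did/rdsk/d2 and /dev/did/rdsk/d3 for this sample.
```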
Become superuser on one node of the cluster.
Start the scsetup(1M) utility.
    # scsetup
The Initial Cluster Setup screen is displayed.
If the Main Menu is displayed instead, this procedure has already been successfully performed.
Respond to the prompts.
At the prompt Do you want to add any quorum disks?, configure at least one shared quorum device if your cluster is a two-node cluster.
A two-node cluster remains in install mode until a shared quorum device is configured. After the scsetup utility configures the quorum device, the message Command completed successfully is displayed. If your cluster has three or more nodes, configuring a quorum device is optional.
At the prompt Is it okay to reset "installmode"?, answer Yes.
After the scsetup utility sets quorum configurations and vote counts for the cluster, the message Cluster initialization is complete is displayed and the utility returns you to the Main Menu.
If the quorum setup process is interrupted or fails to complete successfully, rerun Step 3 and Step 4.
From any node, verify that cluster install mode is disabled.
    # scconf -p | grep 'Cluster install mode:'
    Cluster install mode:                                  disabled
To configure volume manager software, go to "How to Configure Volume Manager Software".
Have available the following information.
Mappings of your storage disk drives
The following completed configuration planning worksheets from Sun Cluster 3.0 Release Notes.
"Local File System Layout Worksheet"
"Disk Device Group Configurations Worksheet"
"Volume Manager Configurations Worksheet"
"Metadevices Worksheet (Solstice DiskSuite)"
See Chapter 1, Planning the Sun Cluster Configuration for planning guidelines.
Follow the appropriate configuration procedures for your volume manager.
| Volume Manager | Documentation |
|---|---|
| Solstice DiskSuite | Appendix A, Configuring Solstice DiskSuite Software, and Solstice DiskSuite documentation |
| VERITAS Volume Manager | Appendix B, Configuring VERITAS Volume Manager, and VERITAS Volume Manager documentation |
After configuring your volume manager, to create a cluster file system, go to "How to Add Cluster File Systems".
Perform this task for each cluster file system you add.
Creating a file system destroys any data on the disks. Be sure you have specified the correct disk device name. If you specify the wrong device name, you erase its contents when the new file system is created.
Become superuser on any node in the cluster.
For faster file-system creation, become superuser on the current primary of the global device for which you are creating a file system.
Create a file system by using the newfs(1M) command.
    # newfs raw-disk-device
The following table shows examples of names for the raw-disk-device argument. Note that naming conventions differ for each volume manager.
Table 2-3 Sample Raw Disk Device Names
| Volume Manager | Sample Disk Device Name | Description |
|---|---|---|
| Solstice DiskSuite | /dev/md/oracle/rdsk/d1 | Raw disk device d1 within the oracle diskset |
| VERITAS Volume Manager | /dev/vx/rdsk/oradg/vol01 | Raw disk device vol01 within the oradg disk group |
| None | /dev/global/rdsk/d1s3 | Raw disk device d1s3 |
On each node in the cluster, create a mount-point directory for the cluster file system.
A mount point is required on each node, even if the cluster file system will not be accessed on that node.
    # mkdir -p /global/device-group/mount-point
device-group - The name of the directory that corresponds to the name of the device group that contains the device.
mount-point - The name of the directory on which to mount the cluster file system.
For ease of administration, create the mount point in the /global/device-group directory. This location enables you to easily distinguish cluster file systems, which are globally available, from local file systems.
On each node in the cluster, add an entry to the /etc/vfstab file for the mount point.
The syncdir mount option is not required for cluster file systems. If you specify syncdir, you are guaranteed POSIX-compliant file-system behavior. If you do not, you see the same behavior as with UFS file systems. Omitting syncdir can significantly improve the performance of writes that allocate disk blocks, such as appending data to a file. However, in some cases, without syncdir you would not discover an out-of-space condition until you close the file; with syncdir (and POSIX behavior), the out-of-space condition is discovered before the close. The cases in which you could have problems without syncdir are rare.
To automatically mount the cluster file system, set the mount at boot field to yes.
Use the following required mount options.
If you are using Solaris UFS logging, use the global,logging mount options.
If a cluster file system uses a Solstice DiskSuite trans metadevice, use the global mount option (do not use the logging mount option). Refer to Solstice DiskSuite documentation for information about setting up trans metadevices.
Logging is required for all cluster file systems.
Ensure that, for each cluster file system, the information in its /etc/vfstab entry is identical on each node.
Check the boot order dependencies of the file systems.
For example, consider the scenario where phys-schost-1 mounts disk device d0 on /global/oracle, and phys-schost-2 mounts disk device d1 on /global/oracle/logs. With this configuration, phys-schost-2 can boot and mount /global/oracle/logs only after phys-schost-1 boots and mounts /global/oracle.
Make sure the entries in each node's /etc/vfstab file list devices in the same order.
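One hedged way to verify the requirement that entries be identical on each node is to diff normalized copies of each node's /etc/vfstab. This sketch is my own, not part of the Sun Cluster procedure; fetching each node's file is up to you, and sample data stands in for the fetched copies here:

```shell
# Sketch: compare the cluster-file-system (/global/...) entries of two
# nodes' vfstab copies, squeezing whitespace so formatting differences
# do not produce false mismatches. Sample data replaces fetched files.
cat > /tmp/vfstab.node1 <<'EOF'
/dev/md/oracle/dsk/d1  /dev/md/oracle/rdsk/d1  /global/oracle/d1  ufs  2  yes  global,logging
EOF
cat > /tmp/vfstab.node2 <<'EOF'
/dev/md/oracle/dsk/d1 /dev/md/oracle/rdsk/d1 /global/oracle/d1 ufs 2 yes global,logging
EOF

grep '/global/' /tmp/vfstab.node1 | tr -s ' \t' ' ' > /tmp/v1
grep '/global/' /tmp/vfstab.node2 | tr -s ' \t' ' ' > /tmp/v2
if cmp -s /tmp/v1 /tmp/v2; then
    echo "vfstab cluster entries match"
else
    echo "vfstab cluster entries differ"
fi
```

Note that this only compares textual entries; the device order and boot dependencies described above still need to be reviewed by eye.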
Refer to the vfstab(4) man page for details.
On any node in the cluster, verify that mount points exist and /etc/vfstab file entries are correct on all nodes of the cluster.
    # sccheck
If no errors occur, nothing is returned.
From any node in the cluster, mount the cluster file system.
    # mount /global/device-group/mount-point
On each node of the cluster, verify that the cluster file system is mounted.
You can use either the df(1M) or mount(1M) command to list mounted file systems.
The following example creates a UFS cluster file system on the Solstice DiskSuite metadevice /dev/md/oracle/rdsk/d1.
    # newfs /dev/md/oracle/rdsk/d1
    ...

    (on each node:)
    # mkdir -p /global/oracle/d1
    # vi /etc/vfstab
    #device                 device                  mount              FS   fsck  mount    mount
    #to mount               to fsck                 point              type pass  at boot  options
    #
    /dev/md/oracle/dsk/d1   /dev/md/oracle/rdsk/d1  /global/oracle/d1  ufs  2     yes      global,logging
    (save and exit)

    (on one node:)
    # sccheck
    # mount /global/oracle/d1
    # mount
    ...
    /global/oracle/d1 on /dev/md/oracle/dsk/d1 read/write/setuid/global/logging/largefiles
    on Sun Oct 3 08:56:16 1999
If your cluster nodes are connected to more than one public subnet, to configure additional public network adapters, go to "How to Configure Additional Public Network Adapters".
Otherwise, to configure PNM and set up NAFO groups, go to "How to Configure Public Network Management (PNM)".
If the nodes in the cluster are connected to more than one public subnet, you can configure additional public network adapters for the secondary subnets. However, configuring secondary subnets is not required.
Configure only public network adapters, not private network adapters.
Have available your completed "Public Networks Worksheet" from Sun Cluster 3.0 Release Notes.
Become superuser on the node being configured for additional public network adapters.
Create a file named /etc/hostname.adapter, where adapter is the adapter name.
In each NAFO group, an /etc/hostname.adapter file should exist for only one adapter in the group.
In the /etc/hostname.adapter file, type the hostname that is assigned to the public network adapter's IP address.
For example, the following shows the file /etc/hostname.hme3, created for the adapter hme3, which contains the hostname phys-schost-1.
    # vi /etc/hostname.hme3
    phys-schost-1
On each cluster node, ensure that the /etc/inet/hosts file contains the IP address and corresponding hostname assigned to the public network adapter.
For example, the following shows the entry for phys-schost-1.
    # vi /etc/inet/hosts
    ...
    192.29.75.101 phys-schost-1
    ...
If you use a naming service, this information should also exist in the naming service database.
On each cluster node, turn on the adapter.
    # ifconfig adapter plumb
    # ifconfig adapter hostname netmask + broadcast + -trailers up
Verify that the adapter is configured correctly.
    # ifconfig adapter
The output should contain the correct IP address for the adapter.
Each public network adapter to be managed by the Resource Group Manager (RGM) must belong to a NAFO group. To configure PNM and set up NAFO groups, go to "How to Configure Public Network Management (PNM)".
Perform this task on each node of the cluster.
All public network adapters must belong to a Network Adapter Failover (NAFO) group. Also, each node can have only one NAFO group per subnet.
Have available your completed "Public Networks Worksheet" from Sun Cluster 3.0 Release Notes.
Become superuser on the node being configured for a NAFO group.
Create the NAFO group.
    # pnmset -c nafo_group -o create adapter [adapter ...]
-c nafo_group - Configures the NAFO group nafo_group.
-o create adapter [adapter ...] - Creates a new NAFO group that contains one or more public network adapters.
Refer to the pnmset(1M) man page for more information.
Verify the status of the NAFO group.
    # pnmstat -l
Refer to the pnmstat(1M) man page for more information.
The following example creates NAFO group nafo0, which uses public network adapters qfe1 and qfe5.
    # pnmset -c nafo0 -o create qfe1 qfe5
    # pnmstat -l
    group  adapters   status  fo_time  act_adp
    nafo0  qfe1:qfe5  OK      NEVER    qfe5
    nafo1  qfe6       OK      NEVER    qfe6
If you want to change any private hostnames, go to "How to Change Private Hostnames". Otherwise, to update the /etc/inet/ntp.conf file, go to "How to Update Network Time Protocol (NTP)".
Perform this task if you do not want to use the default private hostnames (clusternodenodeid-priv, where nodeid is the numeric node ID) assigned during Sun Cluster software installation.
This procedure should not be performed after applications and data services have been configured and started. Otherwise, an application or data service might continue using the old private hostname after it has been renamed, causing hostname conflicts. If any applications or data services are running, stop them before performing this procedure.
Become superuser on a node in the cluster.
Start the scsetup(1M) utility.
    # scsetup
To work with private hostnames, type 4 (Private hostnames).
To change a private hostname, type 1 (Change a private hostname).
Follow the prompts to change the private hostname. Repeat for each private hostname you want to change.
Verify the new private hostnames.
    # scconf -pv | grep 'private hostname'
    (phys-schost-1) Node private hostname:      phys-schost-1-priv
    (phys-schost-3) Node private hostname:      phys-schost-3-priv
    (phys-schost-2) Node private hostname:      phys-schost-2-priv
To update the /etc/inet/ntp.conf file, go to "How to Update Network Time Protocol (NTP)".
Perform this task on each node.
Become superuser on the cluster node.
Edit the /etc/inet/ntp.conf file.
The scinstall(1M) command copies a template file, ntp.cluster, to /etc/inet/ntp.conf as part of standard cluster installation. But if an ntp.conf file already exists before Sun Cluster software is installed, that existing file remains unchanged. If cluster packages are installed by using other means, such as direct use of pkgadd(1M), you need to configure NTP.
Remove all entries for private hostnames that are not used by the cluster.
If the ntp.conf file contains private hostnames that do not exist, a node generates error messages at reboot as it attempts to contact those hostnames.
If you changed any private hostnames after Sun Cluster software installation, update each file entry with the new private hostname.
If necessary, make other modifications to meet your NTP requirements.
The primary requirement when configuring NTP, or any time synchronization facility, within the cluster is that all cluster nodes be synchronized to the same time. Consider accuracy of time on individual nodes secondary to the synchronization of time among nodes. You are free to configure NTP as best meets your individual needs, as long as this basic requirement for synchronization is met.
Refer to Sun Cluster 3.0 Concepts for further information about cluster time and to the ntp.cluster template for guidelines on configuring NTP for a Sun Cluster configuration.
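As an illustration only (the ntp.cluster template remains the authoritative model), the peer section of ntp.conf for a two-node cluster that uses the default private hostnames might be reduced to something like:

```
peer clusternode1-priv prefer
peer clusternode2-priv
```

Entries for a third or fourth node would be removed in a two-node cluster, which is exactly the cleanup the preceding step describes.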
Restart the NTP daemon.
    # /etc/init.d/xntpd stop
    # /etc/init.d/xntpd start
If you want to use the Sun Management Center product to configure resource groups or monitor the cluster, go to "Installation Requirements for Sun Management Center Software for Sun Cluster Monitoring".
Otherwise, to install third-party applications, refer to the documentation supplied with the application software and to Sun Cluster 3.0 Data Services Installation and Configuration Guide. To register resource types, set up resource groups, and configure data services, refer to Sun Cluster 3.0 Data Services Installation and Configuration Guide.
The following table lists the tasks to perform to install the Sun Cluster module software for Sun Management Center.
Table 2-4 Task Map: Installing the Sun Cluster Module for Sun Management Center
| Task | For Instructions, Go To ... |
|---|---|
| Install Sun Management Center server, help server, agent, and console packages. | Sun Management Center documentation, and "Installation Requirements for Sun Management Center Software for Sun Cluster Monitoring" |
| Install Sun Cluster module packages. | "How to Install the Sun Cluster Module for Sun Management Center" |
| Start Sun Management Center server, console, and agent processes. | "How to Start Sun Management Center Software" |
| Add each cluster node as a Sun Management Center agent host object. | "How to Add a Cluster Node as a Sun Management Center Agent Host Object" |
| Load the Sun Cluster module to begin monitoring of the cluster. | "How to Load the Sun Cluster Module" |
The Sun Cluster module for the Sun Management Center product (formerly Sun Enterprise SyMON) is used to configure resource groups and monitor clusters. Perform the following tasks before installing the Sun Cluster module packages.
Space requirements - Ensure that 25 Mbytes of space is available on each cluster node for Sun Cluster module packages.
Sun Management Center packages - You must install the Sun Management Center server, help server, and console packages on non-cluster nodes. If you have an administrative console or other dedicated machine, you can realize improved performance by running the console on the administrative console and the server on a separate machine. You must install the Sun Management Center agent package on each cluster node.
Follow procedures in the Sun Management Center documentation to install the Sun Management Center packages.
Simple Network Management Protocol (SNMP) port - When installing the Sun Management Center product on the agent, choose whether to use the default of 161 for the agent (SNMP) communication port or another number. This port number enables the server to communicate with this agent. Record the port number you choose for reference later when configuring the cluster for monitoring.
Perform this procedure to install the Sun Cluster module console, server, and help server packages.
The Sun Cluster module agent packages (SUNWscsal and SUNWscsam) were added to cluster nodes during Sun Cluster software installation.
Ensure that the Sun Management Center core packages are installed.
This step includes installing Sun Management Center agent packages on each cluster node. Refer to Sun Management Center documentation for installation instructions.
On the administrative console, install the Sun Cluster module console package.
Become superuser.
If you are installing from the CD-ROM, insert the Sun Cluster module CD-ROM into the CD-ROM drive.
Change to the /cdrom_image/SunCluster_3.0/Packages directory.
Install the Sun Cluster module console package.
    # pkgadd -d . SUNWscscn
If you installed from a CD-ROM, eject the CD-ROM.
On the server machine, install the Sun Cluster module server package SUNWscssv.
Use the same procedure as in Step 2.
On the help server machine, install the Sun Cluster module help server package SUNWscshl.
Use the same procedure as in Step 2.
Install any Sun Cluster module patches.
Refer to Sun Cluster 3.0 Release Notes for the location of patches and installation instructions.
To start the Sun Management Center software, go to "How to Start Sun Management Center Software".
Perform this procedure to start the Sun Management Center server, agent, and console processes.
As superuser, on the Sun Management Center server machine, start the Sun Management Center server process.
    # /opt/SUNWsymon/sbin/es-start -S
As superuser, on each Sun Management Center agent machine (cluster node), start the Sun Management Center agent process.
    # /opt/SUNWsymon/sbin/es-start -a
On the Sun Management Center console machine (administrative console), start the Sun Management Center console.
You do not need to be superuser to start the console process.
    % /opt/SUNWsymon/sbin/es-start -c
Type your login name, password, and server hostname and click Login.
To add cluster nodes as monitored host objects, go to "How to Add a Cluster Node as a Sun Management Center Agent Host Object".
Perform this procedure to create a Sun Management Center agent host object for a cluster node.
You need only one cluster node host object to use Sun Cluster module monitoring and configuration functions for the entire cluster. However, if that cluster node becomes unavailable, connection to the cluster through that host object also becomes unavailable. Then you need another cluster node host object to reconnect to the cluster.
From the Sun Management Center main window, select a domain from the Sun Management Center Administrative Domains pull-down list.
This domain will contain the Sun Management Center agent host object you are creating. During Sun Management Center software installation, a Default Domain was automatically created for you. You can use this domain, select another existing domain, or create a new one.
Refer to your Sun Management Center documentation for information about creating Sun Management Center domains.
Select Edit>Create an Object from the pull-down menu.
Select the Node tab.
From the Monitor via pull-down list, select Sun Management Center Agent - Host.
Fill in the name of the cluster node (for example, phys-schost-1) in the Node Label and Hostname text fields.
Leave the IP text field blank. The Description text field is optional.
In the Port text field, type the port number you chose during Sun Management Center agent installation.
Click OK.
A Sun Management Center agent host object is created in the domain.
To load the Sun Cluster module, go to "How to Load the Sun Cluster Module".
Perform this procedure to start cluster monitoring.
From the Sun Management Center main window, double-click the agent host object for a cluster node.
The agent host object is shown in two places. You can double-click either one. The Details window of the host object is then displayed.
Select the icon at the root (top) of the hierarchy to highlight it.
This icon is labeled with the cluster node name.
Select Module>Load Module from the pull-down menu.
The Load Module window lists each available Sun Management Center module and whether it is currently loaded.
Select Sun Cluster: Not loaded (usually at the bottom of the list) and click OK.
The Module Loader window shows the current parameter information for the selected module.
Click OK.
After a few moments the module is loaded and a Sun Cluster icon is displayed in the Details window.
In the Details window under the Operating System category, expand the Sun Cluster subtree in either of the following ways.
In the tree hierarchy on the left side of the window, place the cursor over the Sun Cluster module icon and single-click the left mouse button.
In the topology view on the right side of the window, place the cursor over the Sun Cluster module icon and double-click the left mouse button.
Refer to the Sun Cluster module online help for information about using Sun Cluster module features.
To view online help for a specific Sun Cluster module item, place the cursor over the item, click the right mouse button, and select Help from the pop-up menu.
To access the home page for the Sun Cluster module online help, place the cursor over the Cluster Info icon, click the right mouse button, and select Help from the pop-up menu.
To directly access the home page for the Sun Cluster module online help, click the Sun Management Center Help button to launch the help browser, then go to the URL file:/opt/SUNWsymon/lib/locale/C/help/main.top.html.
The Help button in the Sun Management Center browser accesses Sun Management Center online help, not topics specific to the Sun Cluster module.
Refer to Sun Management Center online help and your Sun Management Center documentation for information about using the Sun Management Center product.
To install third-party applications, refer to the documentation supplied with the application software and to Sun Cluster 3.0 Data Services Installation and Configuration Guide. To register resource types, set up resource groups, and configure data services, refer to Sun Cluster 3.0 Data Services Installation and Configuration Guide.