This chapter provides procedures for how to install and configure your cluster. You can also use these procedures to add a new node to an existing cluster. This chapter also provides procedures to uninstall certain cluster software.
The following sections are in this chapter.
This section provides information and procedures to install software on the cluster nodes.
The following task map lists the tasks that you perform to install software on multiple-node or single-node clusters. Complete the procedures in the order that is indicated.
Table 2–1 Task Map: Installing the Software

| Task | Instructions |
|---|---|
| 1. Plan the layout of your cluster configuration and prepare to install software. | |
| 2. (Optional) Install Cluster Control Panel (CCP) software on the administrative console. | How to Install Cluster Control Panel Software on an Administrative Console |
| 3. Install the Solaris OS on all nodes. | |
| 4. (Optional) SPARC: Install Sun StorEdge Traffic Manager software. | |
| 5. (Optional) SPARC: Install VERITAS File System software. | |
| 6. Install Sun Cluster software packages and any Sun Java System data services for the Solaris 8 or Solaris 9 OS that you will use. | How to Install Sun Cluster Framework and Data-Service Software Packages (Java ES installer) |
| 7. Set up directory paths. | |
| 8. Establish the cluster or additional cluster nodes. | |
Before you begin to install software, make the following preparations.
Ensure that the hardware and software that you choose for your cluster configuration are supported for this release of Sun Cluster software.
Contact your Sun sales representative for the most current information about supported cluster configurations.
Read the following manuals for information that can help you plan your cluster configuration and prepare your installation strategy.
Sun Cluster 3.1 8/05 Release Notes for Solaris OS – Restrictions, bug workarounds, and other late-breaking information.
Sun Cluster 3.x Release Notes Supplement – Post-release documentation about additional restrictions, bug workarounds, new features, and other late-breaking information. This document is regularly updated and published online at the following Web site.
Sun Cluster Overview for Solaris OS and Sun Cluster Concepts Guide for Solaris OS – Overviews of the Sun Cluster product.
Sun Cluster Software Installation Guide for Solaris OS (this manual) – Planning guidelines and procedures for installing and configuring Solaris, Sun Cluster, and volume-manager software.
Sun Cluster Data Services Planning and Administration Guide for Solaris OS – Planning guidelines and procedures to install and configure data services.
Have available all related documentation, including third-party documents.
The following is a partial list of products whose documentation you might need to reference during cluster installation:
Solaris OS
Solstice DiskSuite or Solaris Volume Manager software
Sun StorEdge QFS software
SPARC: VERITAS Volume Manager
SPARC: Sun Management Center
Third-party applications
Plan your cluster configuration.
Plan your cluster installation completely. Identify requirements for all data services and third-party products before you begin Solaris and Sun Cluster software installation. Failure to do so might result in installation errors that require that you completely reinstall the Solaris and Sun Cluster software.
For example, the Oracle Real Application Clusters Guard option of Oracle Real Application Clusters has special requirements for the hostnames that you use in the cluster. Another example with special requirements is Sun Cluster HA for SAP. You must accommodate these requirements before you install Sun Cluster software because you cannot change hostnames after you install Sun Cluster software.
Note also that neither Oracle Real Application Clusters nor Sun Cluster HA for SAP is supported for use in x86 based clusters.
Use the planning guidelines in Chapter 1, Planning the Sun Cluster Configuration and in the Sun Cluster Data Services Planning and Administration Guide for Solaris OS to determine how to install and configure your cluster.
Fill out the cluster framework and data-services configuration worksheets that are referenced in the planning guidelines. Use your completed worksheets for reference during the installation and configuration tasks.
Obtain all necessary patches for your cluster configuration.
See Patches and Required Firmware Levels in Sun Cluster 3.1 8/05 Release Notes for Solaris OS for the location of patches and installation instructions.
Copy the patches that are required for Sun Cluster into a single directory.
The directory must be on a file system that is accessible by all nodes. The default patch directory is /var/cluster/patches/.
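As a sketch, the copy step might look like the following. The patch source path and the patch IDs are hypothetical placeholders for your own, and the leading `echo` lets you preview each command before you run it for real:

```shell
# Collect the required patches into the default patch directory.
# /net/patchserver/patches and the patch IDs below are hypothetical.
patchdir=/var/cluster/patches
echo mkdir -p "$patchdir"                      # remove `echo` to run for real
for p in 117950-05 118000-03; do               # hypothetical patch IDs
  echo cp "/net/patchserver/patches/$p.zip" "$patchdir/"
done
```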
After you install Solaris software on a node, you can view the /etc/release file to see the exact version of Solaris software that is installed.
(Optional) If you are using SunPlex Installer, you can create a patch list file.
If you specify a patch list file, SunPlex Installer installs only the patches that are listed in the file. For information about creating a patch list file, refer to the patchadd(1M) man page.
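As described in the patchadd(1M) man page, a patch list file names one patch per line. A hypothetical example, using placeholder patch IDs:

```
117950-05
118000-03
```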
Record the path to the patch directory.
If you want to use Cluster Control Panel software to connect from an administrative console to your cluster nodes, go to How to Install Cluster Control Panel Software on an Administrative Console.
Otherwise, choose the Solaris installation procedure to use.
If you intend to install Sun Cluster software by using either the scinstall(1M) utility (text-based method) or SunPlex Installer (GUI-based method), go to How to Install Solaris Software to first install Solaris software.
If you intend to install Solaris and Sun Cluster software in the same operation (JumpStart method), go to How to Install Solaris and Sun Cluster Software (JumpStart).
You are not required to use an administrative console. If you do not use an administrative console, perform administrative tasks from one designated node in the cluster.
This procedure describes how to install the Cluster Control Panel (CCP) software on an administrative console. The CCP provides a single interface from which to start the cconsole(1M), ctelnet(1M), and crlogin(1M) tools. Each of these tools provides a multiple-window connection to a set of nodes, as well as a common window. You can use the common window to send input to all nodes at one time.
You can use any desktop machine that runs the Solaris 8 or Solaris 9 OS as an administrative console. You can also use the administrative console as a documentation server. If you are using Sun Cluster on a SPARC based system, you can also use the administrative console as a Sun Management Center console or server. See Sun Management Center documentation for information about how to install Sun Management Center software. See the Sun Cluster 3.1 8/05 Release Notes for Solaris OS for additional information about how to install Sun Cluster documentation.
Ensure that a supported version of the Solaris OS and any Solaris patches are installed on the administrative console. All platforms require at least the End User Solaris Software Group.
Become superuser on the administrative console.
Insert the Sun Cluster 2 of 2 CD-ROM in the CD-ROM drive of the administrative console.
If the volume management daemon vold(1M) is running and is configured to manage CD-ROM devices, the daemon automatically mounts the CD-ROM on the /cdrom/cdrom0/ directory.
Change to the Solaris_arch/Product/sun_cluster/Solaris_ver/Packages/ directory, where arch is sparc or x86, and where ver is 8 for Solaris 8, 9 for Solaris 9, or 10 for Solaris 10.

# cd /cdrom/cdrom0/Solaris_arch/Product/sun_cluster/Solaris_ver/Packages/
Install the SUNWccon package.
# pkgadd -d . SUNWccon
(Optional) Install the SUNWscman package.
# pkgadd -d . SUNWscman
When you install the SUNWscman package on the administrative console, you can view Sun Cluster man pages from the administrative console before you install Sun Cluster software on the cluster nodes.
(Optional) Install the Sun Cluster documentation packages.
If you do not install the documentation on your administrative console, you can still view HTML or PDF documentation directly from the CD-ROM. Use a web browser to view the Solaris_arch/Product/sun_cluster/index.html file on the Sun Cluster 2 of 2 CD-ROM, where arch is sparc or x86.
Determine whether the SUNWsdocs package is already installed on the administrative console.
# pkginfo | grep SUNWsdocs
application SUNWsdocs     Documentation Navigation for Solaris 9
If the SUNWsdocs package is not yet installed, you must install it before you install the documentation packages.
Choose the Sun Cluster documentation packages to install.
The following documentation collections are available in both HTML and PDF format:
| Collection Title | HTML Package Name | PDF Package Name |
|---|---|---|
| Sun Cluster 3.1 9/04 Software Collection for Solaris OS (SPARC Platform Edition) | SUNWscsdoc | SUNWpscsdoc |
| Sun Cluster 3.1 9/04 Software Collection for Solaris OS (x86 Platform Edition) | SUNWscxdoc | SUNWpscxdoc |
| Sun Cluster 3.x Hardware Collection for Solaris OS (SPARC Platform Edition) | SUNWschw | SUNWpschw |
| Sun Cluster 3.x Hardware Collection for Solaris OS (x86 Platform Edition) | SUNWscxhw | SUNWpscxhw |
| Sun Cluster 3.1 9/04 Reference Collection for Solaris OS | SUNWscref | SUNWpscref |
Install the SUNWsdocs package, if not already installed, and your choice of Sun Cluster documentation packages.
All documentation packages have a dependency on the SUNWsdocs package. The SUNWsdocs package must exist on the system before you can successfully install a documentation package on that system.
# pkgadd -d . SUNWsdocs pkg-list
Change to a directory that does not reside on the CD-ROM and eject the CD-ROM.
# eject cdrom
Create an /etc/clusters file on the administrative console.
Add your cluster name and the physical node name of each cluster node to the file.
# vi /etc/clusters
clustername node1 node2
See the /opt/SUNWcluster/bin/clusters(4) man page for details.
Create an /etc/serialports file.
Add an entry for each node in the cluster to the file. Specify the physical node name, the hostname of the console-access device, and the port number. Examples of a console-access device are a terminal concentrator (TC), a System Service Processor (SSP), and a Sun Fire system controller.
# vi /etc/serialports
node1 ca-dev-hostname port
node2 ca-dev-hostname port
node1, node2
    Physical names of the cluster nodes
ca-dev-hostname
    Hostname of the console-access device
port
    Serial port number
Note these special instructions to create an /etc/serialports file:
For a Sun Fire 15000 system controller, use telnet(1) port number 23 for the serial port number of each entry.
For all other console-access devices, use the telnet serial port number, not the physical port number. To determine the telnet serial port number, add 5000 to the physical port number. For example, if a physical port number is 6, the telnet serial port number is 5006.
For Sun Enterprise 10000 servers, also see the /opt/SUNWcluster/bin/serialports(4) man page for details and special considerations.
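The add-5000 rule above is simple enough to check from the shell. For example, for physical port 6:

```shell
# Telnet serial port = physical port + 5000
physical_port=6
expr "$physical_port" + 5000     # prints 5006
```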
(Optional) For convenience, set the directory paths on the administrative console.
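For example, a Bourne-shell profile on the administrative console might add the CCP directories to the search paths. This is a sketch; /opt/SUNWcluster/man as the man-page location is an assumption, so verify the paths on your system:

```shell
# Add Cluster Control Panel binaries and man pages to the search paths.
PATH=$PATH:/opt/SUNWcluster/bin
MANPATH=${MANPATH:-/usr/share/man}:/opt/SUNWcluster/man
export PATH MANPATH
```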
Start the CCP utility.

# /opt/SUNWcluster/bin/ccp &
Click the cconsole, crlogin, or ctelnet button in the CCP window to launch that tool. Alternatively, you can start any of these tools directly. For example, to start ctelnet, type the following command:
# /opt/SUNWcluster/bin/ctelnet &
See the procedure “How to Remotely Log In to Sun Cluster” in Beginning to Administer the Cluster in Sun Cluster System Administration Guide for Solaris OS for additional information about how to use the CCP utility. Also see the ccp(1M) man page.
Determine whether the Solaris OS is already installed to meet Sun Cluster software requirements.
If the Solaris OS meets Sun Cluster requirements, go to How to Install Sun Cluster Framework and Data-Service Software Packages (Java ES installer).
If the Solaris OS does not meet Sun Cluster requirements, install, reconfigure, or reinstall the Solaris OS as needed. See Planning the Solaris OS for information about Sun Cluster installation requirements for the Solaris OS. For installation procedures, go to How to Install Solaris Software.
Follow these procedures to install the Solaris OS on each node in the cluster or to install the Solaris OS on the master node that you will flash archive for a JumpStart installation. See How to Install Solaris and Sun Cluster Software (JumpStart) for more information about JumpStart installation of a cluster.
To speed installation, you can install the Solaris OS on each node at the same time.
If your nodes are already installed with the Solaris OS but do not meet Sun Cluster installation requirements, you might need to reinstall the Solaris software. Follow the steps in this procedure to ensure subsequent successful installation of Sun Cluster software. See Planning the Solaris OS for information about required root-disk partitioning and other Sun Cluster installation requirements.
Perform the following tasks:
Ensure that the hardware setup is complete and that connections are verified before you install Solaris software. See the Sun Cluster Hardware Administration Collection and your server and storage device documentation for details.
Ensure that your cluster configuration planning is complete. See How to Prepare for Cluster Software Installation for requirements and guidelines.
Complete the Local File System Layout Worksheet.
If you use a naming service, add address-to-name mappings for all public hostnames and logical addresses to any naming services that clients use for access to cluster services. See IP Addresses for planning guidelines. See your Solaris system-administrator documentation for information about using Solaris naming services.
If you are using a cluster administrative console, display a console screen for each node in the cluster.
If Cluster Control Panel (CCP) software is installed and configured on your administrative console, use the cconsole(1M) utility to display the individual console screens.
Use the following command to start the cconsole utility:
# /opt/SUNWcluster/bin/cconsole clustername &
The cconsole utility also opens a master window from which you can send your input to all individual console windows at the same time.
If you do not use the cconsole utility, connect to the consoles of each node individually.
Install the Solaris OS as instructed in your Solaris installation documentation.
You must install all nodes in a cluster with the same version of the Solaris OS.
You can use any method that is normally used to install Solaris software. During Solaris software installation, perform the following steps:
Install at least the End User Solaris Software Group.
To avoid the need to manually install Solaris software packages, install the Entire Solaris Software Group Plus OEM Support.
See Solaris Software Group Considerations for information about additional Solaris software requirements.
Choose Manual Layout to set up the file systems.
Create a file system of at least 512 Mbytes for use by the global-device subsystem.
If you intend to use SunPlex Installer to install Sun Cluster software, you must create the file system with a mount-point name of /globaldevices. The /globaldevices mount-point name is the default that is used by scinstall.
Sun Cluster software requires a global-devices file system for installation to succeed.
Specify that slice 7 is at least 20 Mbytes in size.
If you intend to use SunPlex Installer to install Solstice DiskSuite software (Solaris 8) or configure Solaris Volume Manager software (Solaris 9 or Solaris 10), also make this file system mount on /sds.
If you intend to use SunPlex Installer to install Sun Cluster HA for NFS or Sun Cluster HA for Apache, SunPlex Installer must also install Solstice DiskSuite software (Solaris 8) or configure Solaris Volume Manager software (Solaris 9 or Solaris 10).
Create any other file-system partitions that you need, as described in System Disk Partitions.
For ease of administration, set the same root password on each node.
If you are adding a node to an existing cluster, prepare the cluster to accept the new node.
On any active cluster member, start the scsetup(1M) utility.
# scsetup
The Main Menu is displayed.
Choose the menu item, New nodes.
Choose the menu item, Specify the name of a machine which may add itself.
Follow the prompts to add the node's name to the list of recognized machines.
The scsetup utility prints the message Command completed successfully if the task is completed without error.
Quit the scsetup utility.
From the active cluster node, display the names of all cluster file systems.
% mount | grep global | egrep -v node@ | awk '{print $1}'
On the new node, create a mount point for each cluster file system in the cluster.
% mkdir -p mountpoint
For example, if the mount command returned the file-system name /global/dg-schost-1, run mkdir -p /global/dg-schost-1 on the new node you are adding to the cluster.
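The two steps above can be combined into one loop. In this sketch the printf lines stand in for real `mount` output from an active cluster member, and the leading `echo` previews the commands instead of creating directories:

```shell
# Sample `mount` output lines from an active cluster member (illustration only).
printf '%s\n' \
  '/global/dg-schost-1 on /global/dg-schost-1 read/write/setuid/global' \
  '/global/.devices/node@1 on /global/.devices/node@1 read/write' |
grep global | egrep -v 'node@' | awk '{print $1}' |
while read mp; do
  echo mkdir -p "$mp"          # remove `echo` to create the mount points
done
```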
If you are adding a node and VxVM is installed on any node in the cluster, perform the following tasks.
Ensure that the same vxio number is used on the VxVM-installed nodes.
# grep vxio /etc/name_to_major
vxio NNN
Ensure that the vxio number is available for use on each of the nodes that do not have VxVM installed.
If the vxio number is already in use on a node that does not have VxVM installed, change the /etc/name_to_major entry to use a different number.
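A quick way to read the vxio major number out of /etc/name_to_major is with awk. The here-document below stands in for the real file on a VxVM-installed node, and 280 is a sample value:

```shell
# Extract the major number of the vxio entry; the file content is a sample.
awk '$1 == "vxio" {print $2}' <<'EOF'
sd 32
vxio 280
EOF
```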
If you installed the End User Solaris Software Group, use the pkgadd command to manually install any additional Solaris software packages that you might need.
The following Solaris packages are required to support some Sun Cluster functionality.
Install packages in the order in which they are listed in the following table.
| Feature | Mandatory Solaris Software Packages |
|---|---|
| RSMAPI, RSMRDT drivers, or SCI-PCI adapters (SPARC based clusters only) | Solaris 8 or Solaris 9: SUNWrsm SUNWrsmx SUNWrsmo SUNWrsmox; Solaris 10: SUNWrsm SUNWrsmo |
| SunPlex Manager | SUNWapchr SUNWapchu |
For the Solaris 8 or Solaris 9 OS, use the following command:
# pkgadd -d . packages
For the Solaris 10 OS, use the following command:
# pkgadd -G -d . packages
You must add these packages only to the global zone. The -G option adds packages to the current zone only. This option also specifies that the packages are not propagated to any existing non-global zone or to any non-global zone that is created later.
Install any required Solaris OS patches and hardware-related firmware and patches, including those for storage-array support. Also download any needed firmware that is contained in the hardware patches.
See Patches and Required Firmware Levels in Sun Cluster 3.1 8/05 Release Notes for Solaris OS for the location of patches and installation instructions.
x86: Set the default boot file to kadb.
# eeprom boot-file=kadb
This setting enables you to reboot the node if you are unable to access a login prompt.
Update the /etc/inet/hosts file on each node with all IP addresses that are used in the cluster.
Perform this step regardless of whether you are using a naming service. See IP Addresses for a listing of Sun Cluster components whose IP addresses you must add.
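A hypothetical /etc/inet/hosts fragment covering physical node names and a logical hostname used by a data service. The host names and addresses here are placeholders, not values from this configuration:

```
192.168.1.10    phys-schost-1    # cluster node
192.168.1.11    phys-schost-2    # cluster node
192.168.1.20    schost-lh-1      # logical hostname resource
```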
If you will use ce adapters for the cluster interconnect, add the following entry to the /etc/system file.
set ce:ce_taskq_disable=1
This entry becomes effective after the next system reboot.
(Optional) On Sun Enterprise 10000 servers, configure the /etc/system file to use dynamic reconfiguration.
Add the following entry to the /etc/system file on each node of the cluster:
set kernel_cage_enable=1
This entry becomes effective after the next system reboot. See your server documentation for more information about dynamic reconfiguration.
If you intend to use Sun multipathing software, go to SPARC: How to Install Sun Multipathing Software.
If you intend to install VxFS, go to SPARC: How to Install VERITAS File System Software.
Otherwise, install the Sun Cluster software packages. Go to How to Install Sun Cluster Framework and Data-Service Software Packages (Java ES installer).
See the Sun Cluster System Administration Guide for Solaris OS for procedures to perform dynamic reconfiguration tasks in a Sun Cluster configuration.
Perform this procedure on each node of the cluster to install and configure Sun multipathing software for fiber channel (FC) storage. Multipathing software manages multiple I/O paths to the shared cluster storage.
For the Solaris 8 or Solaris 9 OS, you install and configure Sun StorEdge Traffic Manager software.
For the Solaris 10 OS, you enable the Solaris multipathing feature, which is installed by default as part of the Solaris 10 software.
Perform the following tasks:
Ensure that the Solaris OS is installed to support Sun Cluster software.
If Solaris software is already installed on the node, you must ensure that the Solaris installation meets the requirements for Sun Cluster software and any other software that you intend to install on the cluster. See How to Install Solaris Software for more information about installing Solaris software to meet Sun Cluster software requirements.
For the Solaris 8 or Solaris 9 OS, have available your software packages, patches, and documentation for Sun StorEdge Traffic Manager software and Sun StorEdge SAN Foundation software. See http://www.sun.com/products-n-solutions/hardware/docs/ for links to documentation.
For the Solaris 10 OS, have available the Solaris Fibre Channel Storage Configuration and Multipathing Administration Guide at http://docs.sun.com/source/819-0139/.
Become superuser.
For the Solaris 8 or Solaris 9 OS, install on each node Sun StorEdge Traffic Manager software and any necessary patches.
For the procedure about how to install Sun StorEdge Traffic Manager software, see the Sun StorEdge Traffic Manager Installation and Configuration Guide at http://www.sun.com/products-n-solutions/hardware/docs/.
For a list of required patches for Sun StorEdge Traffic Manager software, see the Sun StorEdge Traffic Manager Software Release Notes at http://www.sun.com/storage/san/.
Enable multipathing functionality.
For the Solaris 8 or Solaris 9 OS, change the value of the mpxio-disable parameter to no.
Modify this entry in the /kernel/drv/scsi_vhci.conf file on each node.
mpxio-disable="no";
For the Solaris 10 OS, issue the following command on each node:
If Sun Cluster software is already installed, do not issue this command. Running the stmsboot command on an active cluster node might cause Solaris services to go into the maintenance state. Instead, follow instructions in the stmsboot(1M) man page for using the stmsboot command in a Sun Cluster environment.
# /usr/sbin/stmsboot -e
-e
    Enables Solaris I/O multipathing
See the stmsboot(1M) man page for more information.
For the Solaris 8 or Solaris 9 OS, determine whether your version of Sun StorEdge SAN Foundation software includes built-in support for your storage array.
If the software does not include built-in support for your storage array, edit the /kernel/drv/scsi_vhci.conf file on each node to include the necessary entries. For more information, see the release notes for your storage device.
For the Solaris 8 or Solaris 9 OS, shut down each node and perform a reconfiguration boot.
The reconfiguration boot creates the new Solaris device files and links.
# shutdown -y -g0 -i0
ok boot -r
After the reconfiguration reboot is finished on all nodes, perform any additional tasks that are necessary to complete the configuration of your storage array.
See installation instructions for your storage array in the Sun Cluster Hardware Administration Collection for details.
If you installed Sun multipathing software after Sun Cluster software was installed on the cluster, DID mappings might require updating. Issue the following commands on each node of the cluster to regenerate the DID namespace.
# scdidadm -C
# scdidadm -r
(Solaris 8 or Solaris 9 only) # cfgadm -c configure
# scgdevs
See the scdidadm(1M) and scgdevs(1M) man pages for more information.
If you intend to install VxFS, go to SPARC: How to Install VERITAS File System Software.
Otherwise, install the Sun Cluster software packages. Go to How to Install Sun Cluster Framework and Data-Service Software Packages (Java ES installer).
Perform this procedure on each node of the cluster.
Follow the procedures in your VxFS installation documentation to install VxFS software on each node of the cluster.
Install any Sun Cluster patches that are required to support VxFS.
See Patches and Required Firmware Levels in Sun Cluster 3.1 8/05 Release Notes for Solaris OS for the location of patches and installation instructions.
In the /etc/system file on each node, set the following values.
set rpcmod:svc_default_stksize=0x8000
set lwp_default_stksize=0x6000
These changes become effective at the next system reboot.
Sun Cluster software requires a minimum rpcmod:svc_default_stksize setting of 0x8000. Because VxFS installation sets the value of the rpcmod:svc_default_stksize variable to 0x4000, you must manually set the value to 0x8000 after VxFS installation is complete.
You must set the lwp_default_stksize variable in the /etc/system file to override the VxFS default value of 0x4000.
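After editing, you can confirm that both entries are present. In this sketch the here-document stands in for the real /etc/system file on a node:

```shell
# Both stack-size entries should appear; the here-document is sample content.
grep stksize <<'EOF'
set rpcmod:svc_default_stksize=0x8000
set lwp_default_stksize=0x6000
EOF
```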
Install the Sun Cluster software packages. Go to How to Install Sun Cluster Framework and Data-Service Software Packages (Java ES installer).
Follow this procedure to use the Sun Java™ Enterprise System (Java ES) installer program to perform one or more of the following installation tasks:
To install the Sun Cluster framework software packages on each node in the cluster.
To install Sun Cluster framework software on the master node that you will flash archive for a JumpStart installation. See How to Install Solaris and Sun Cluster Software (JumpStart) for more information about JumpStart installation of a cluster.
To install Sun Java System data services for the Solaris 8 or Solaris 9 OS from the Sun Cluster 2 of 2 CD-ROM.
Do not use this procedure to install the following kinds of data service packages:
Data services for the Solaris 10 OS from the Sun Cluster 2 of 2 CD-ROM - Instead, follow procedures in How to Install Data-Service Software Packages (pkgadd).
Data services from the Sun Cluster Agents CD - Instead, follow procedures in How to Install Data-Service Software Packages (scinstall).
For data services for the Solaris 8 or Solaris 9 OS from the Sun Cluster Agents CD, you can alternatively follow procedures in How to Install Data-Service Software Packages (Web Start installer).
Perform the following tasks:
Ensure that the Solaris OS is installed to support Sun Cluster software.
If Solaris software is already installed on the node, you must ensure that the Solaris installation meets the requirements for Sun Cluster software and any other software that you intend to install on the cluster. See How to Install Solaris Software for more information about installing Solaris software to meet Sun Cluster software requirements.
Have available the Sun Cluster 1 of 2 CD-ROM and the Sun Cluster 2 of 2 CD-ROM.
(Optional) To use the installer program with a GUI, ensure that the display environment of the cluster node to install is set to display the GUI.
% xhost +
% setenv DISPLAY nodename:0.0
Become superuser on the cluster node to install.
Insert the Sun Cluster 1 of 2 CD-ROM in the CD-ROM drive.
Change to the directory of the CD-ROM where the installer program resides.
# cd /cdrom/cdrom0/Solaris_arch/
In the Solaris_arch/ directory, arch is sparc or x86.
Start the Java ES installer program.
# ./installer
Follow instructions on the screen to install Sun Cluster framework software and data services on the node.
When prompted whether to configure Sun Cluster framework software, choose Configure Later.
After installation is finished, you can view any available installation log. See the Sun Java Enterprise System 2005Q5 Installation Guide for additional information about using the Java ES installer program.
Install additional packages if you intend to use any of the following features.
Remote Shared Memory Application Programming Interface (RSMAPI)
SCI-PCI adapters for the interconnect transport
RSMRDT drivers
Use of the RSMRDT driver is restricted to clusters that run an Oracle9i release 2 SCI configuration with RSM enabled. Refer to Oracle9i release 2 user documentation for detailed installation and configuration instructions.
Determine which packages you must install.
The following table lists the Sun Cluster 3.1 8/05 packages that each feature requires, in the order in which you must install each group of packages. The Java ES installer program does not automatically install these packages.
Install packages in the order in which they are listed in the following table.
Insert the Sun Cluster 2 of 2 CD-ROM, if it is not already inserted in the CD-ROM drive.
Change to the Solaris_arch/Product/sun_cluster/Solaris_ver/Packages/ directory, where arch is sparc or x86, and where ver is 8 for Solaris 8, 9 for Solaris 9, or 10 for Solaris 10.

# cd /cdrom/cdrom0/Solaris_arch/Product/sun_cluster/Solaris_ver/Packages/
Install the additional packages.
# pkgadd -d . packages
Change to a directory that does not reside on the CD-ROM and eject the CD-ROM.
# eject cdrom
Ensure that the /usr/java/ directory is a symbolic link to the minimum or latest version of Java software.
Sun Cluster software requires at least version 1.4.2_03 of Java software.
Determine what directory the /usr/java/ directory is symbolically linked to.
# ls -l /usr/java
lrwxrwxrwx   1 root   other   9 Apr 19 14:05 /usr/java -> /usr/j2se/
Determine what version or versions of Java software are installed.
The following are examples of commands that you can use to display the version of their related releases of Java software.
# /usr/j2se/bin/java -version
# /usr/java1.2/bin/java -version
# /usr/jdk/jdk1.5.0_01/bin/java -version
If the /usr/java/ directory is not symbolically linked to a supported version of Java software, recreate the symbolic link so that it points to a supported version of Java software.
The following example shows the creation of a symbolic link to the /usr/j2se/ directory, which contains Java 1.4.2_03 software.
# rm /usr/java
# ln -s /usr/j2se /usr/java
If you want to install Sun StorEdge QFS file system software, follow the procedures for initial installation in the Sun StorEdge QFS and Sun StorEdge SAM-FS Software Installation and Configuration Guide.
Otherwise, to set up the root user environment, go to How to Set Up the Root Environment.
In a Sun Cluster configuration, user initialization files for the various shells must verify that they are run from an interactive shell. The files must verify this before they attempt to output to the terminal. Otherwise, unexpected behavior or interference with data services might occur. See Customizing a User's Work Environment in System Administration Guide, Volume 1 (Solaris 8) or in System Administration Guide: Basic Administration (Solaris 9 or Solaris 10) for more information.
Perform this procedure on each node in the cluster.
Become superuser on a cluster node.
Modify PATH and MANPATH entries in the .cshrc or .profile file.
See your volume manager documentation and other application documentation for additional file paths to set.
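For example, a Bourne-shell .profile on a cluster node might include lines such as the following. This is a sketch: /usr/cluster/bin and /usr/cluster/man are the usual Sun Cluster command and man-page directories, but verify the paths on your system and add any volume-manager paths your documentation calls for:

```shell
# Add Sun Cluster commands and man pages to the root search paths.
PATH=$PATH:/usr/cluster/bin
MANPATH=${MANPATH:-/usr/share/man}:/usr/cluster/man
export PATH MANPATH
```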
(Optional) For ease of administration, set the same root password on each node, if you have not already done so.
Configure Sun Cluster software on the cluster nodes. Go to Establishing the Cluster.
This section provides information and procedures to establish a new cluster or to add a node to an existing cluster. Before you start to perform these tasks, ensure that you installed software packages for the Solaris OS, Sun Cluster framework, and other products as described in Installing the Software.
The following task map lists the tasks to perform. Complete the procedures in the order that is indicated.
Table 2–2 Task Map: Establish the Cluster

| Method | Instructions |
|---|---|
| 1. Use one of the following methods to establish a new cluster or add a node to an existing cluster. | How to Configure Sun Cluster Software on All Nodes (scinstall); How to Configure Sun Cluster Software on Additional Cluster Nodes (scinstall) |
| 2. (Oracle Real Application Clusters only) If you added a node to a two-node cluster that runs Sun Cluster Support for Oracle Real Application Clusters and that uses a shared SCSI disk as the quorum device, update the SCSI reservations. | How to Update SCSI Reservations After Adding a Node |
| 3. Install data-service software packages. | How to Install Data-Service Software Packages (pkgadd); How to Install Data-Service Software Packages (scinstall); How to Install Data-Service Software Packages (Web Start installer) |
| 4. Assign quorum votes and remove the cluster from installation mode, if this operation was not already performed. | How to Configure Quorum Devices |
| 5. Validate the quorum configuration. | How to Verify the Quorum Configuration and Installation Mode |
| 6. Configure the cluster. | |
Perform this procedure from one node of the cluster to configure Sun Cluster software on all nodes of the cluster.
Perform the following tasks:
Ensure that the Solaris OS is installed to support Sun Cluster software.
If Solaris software is already installed on the node, you must ensure that the Solaris installation meets the requirements for Sun Cluster software and any other software that you intend to install on the cluster. See How to Install Solaris Software for more information about installing Solaris software to meet Sun Cluster software requirements.
Ensure that Sun Cluster software packages are installed on the node. See How to Install Sun Cluster Framework and Data-Service Software Packages (Java ES installer).
Determine which mode of the scinstall utility you will use: Typical or Custom. For the Typical installation of Sun Cluster software, scinstall automatically specifies the following configuration defaults.
| Component | Default Value |
|---|---|
| Private-network address | 172.16.0.0 |
| Private-network netmask | 255.255.0.0 |
| Cluster-transport junctions | switch1 and switch2 |
| Global-devices file-system name | /globaldevices |
| Installation security (DES) | Limited |
| Solaris and Sun Cluster patch directory | /var/cluster/patches/ |
Complete one of the following cluster configuration worksheets, depending on whether you run the scinstall utility in Typical mode or Custom mode.
Typical Mode - If you will use Typical mode and accept all defaults, complete the following worksheet.
| Component | Description/Example | Answer |
|---|---|---|
| Cluster Name | What is the name of the cluster that you want to establish? | |
| Cluster Nodes | What are the names of the other cluster nodes planned for the initial cluster configuration? | |
| Cluster-Transport Adapters and Cables | What are the names of the two cluster-transport adapters that attach the node to the private interconnect? | First:  Second: |
| | Will this be a dedicated cluster transport adapter? | First: Yes / No  Second: Yes / No |
| | If no, what is the VLAN ID for this adapter? | |
| Quorum Configuration (two-node cluster only) | Do you want to disable automatic quorum device selection? (Answer Yes if any shared storage is not qualified to be a quorum device or if you want to configure a Network Appliance NAS device as a quorum device.) | Yes / No |
| Check | Do you want to interrupt installation for sccheck errors? (sccheck verifies that preconfiguration requirements are met.) | Yes / No |
Custom Mode - If you will use Custom mode and customize the configuration data, complete the following worksheet.
Follow these guidelines to use the interactive scinstall utility in this procedure:
Interactive scinstall enables you to type ahead. Therefore, do not press the Return key more than once if the next menu screen does not appear immediately.
Unless otherwise noted, you can press Control-D to return to either the start of a series of related questions or to the Main Menu.
Default answers or answers to previous sessions are displayed in brackets ([ ]) at the end of a question. Press Return to enter the response that is in brackets without typing it.
If you disabled remote configuration during Sun Cluster software installation, re-enable remote configuration.
Enable remote shell (rsh(1M)) or secure shell (ssh(1)) access for superuser to all cluster nodes.
(Optional) To use the scinstall(1M) utility to install patches, download patches to a patch directory.
If you use Typical mode to install the cluster, use a directory named either /var/cluster/patches/ or /var/patches/ to contain the patches to install.
In Typical mode, the scinstall command checks both those directories for patches.
If neither of those directories exists, no patches are added.
If both directories exist, then only the patches in the /var/cluster/patches/ directory are added.
If you use Custom mode to install the cluster, you specify the path to the patch directory yourself, so you are not limited to the directories that scinstall checks in Typical mode.
You can include a patch-list file in the patch directory. The default patch-list file name is patchlist. For information about creating a patch-list file, refer to the patchadd(1M) man page.
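A patch directory and its patch-list file can be set up as in the following sketch. The directory path and patch IDs here are placeholders for illustration, not real patches; see the patchadd(1M) man page for the full patch-list file format.

```shell
# Sketch: a patch directory with an explicit patch-list file.
# /tmp/var-cluster-patches stands in for /var/cluster/patches/, and the
# patch IDs below are placeholders.  List one patch ID per line.
mkdir -p /tmp/var-cluster-patches
cat > /tmp/var-cluster-patches/patchlist <<'EOF'
111111-01
222222-02
EOF
```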
Become superuser on the cluster node from which you intend to configure the cluster.
Start the scinstall utility.
# /usr/cluster/bin/scinstall
From the Main Menu, choose the menu item, Install a cluster or cluster node.
  *** Main Menu ***

    Please select from one of the following (*) options:

      * 1) Install a cluster or cluster node
        2) Configure a cluster to be JumpStarted from this install server
        3) Add support for new data services to this cluster node
        4) Upgrade this cluster node
      * 5) Print release information for this cluster node

      * ?) Help with menu options
      * q) Quit

    Option:  1
From the Install Menu, choose the menu item, Install all nodes of a new cluster.
From the Type of Installation menu, choose either Typical or Custom.
Follow the menu prompts to supply your answers from the configuration planning worksheet.
The scinstall utility installs and configures all cluster nodes and reboots the cluster. The cluster is established when all nodes have successfully booted into the cluster. Sun Cluster installation output is logged in a /var/cluster/logs/install/scinstall.log.N file.
For the Solaris 10 OS, verify on each node that multi-user services for the Service Management Facility (SMF) are online.
If services are not yet online for a node, wait until the state becomes online before you proceed to the next step.
# svcs multi-user-server
STATE          STIME    FMRI
online         17:52:55 svc:/milestone/multi-user-server:default
From one node, verify that all nodes have joined the cluster.
Run the scstat(1M) command to display a list of the cluster nodes. You do not need to be logged in as superuser to run this command.
% scstat -n
Output resembles the following.
-- Cluster Nodes --
                    Node name      Status
                    ---------      ------
  Cluster node:     phys-schost-1  Online
  Cluster node:     phys-schost-2  Online
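If you want to confirm the node states programmatically, the status column can be checked as in the following sketch. The sample text is hard-coded for illustration; on a live cluster you would pipe the output of `scstat -n` instead.

```shell
# Sketch: confirm that every node reported by scstat is Online.
# The sample output below is hard-coded; on a cluster, replace the
# variable with the actual output of `scstat -n`.
scstat_output='  Cluster node:     phys-schost-1  Online
  Cluster node:     phys-schost-2  Online'

# Collect the names of any nodes whose last field is not "Online".
not_online=$(printf '%s\n' "$scstat_output" |
    awk '/Cluster node:/ && $NF != "Online" {print $3}')

if [ -z "$not_online" ]; then
    echo "all nodes online"
else
    echo "not online: $not_online"
fi
```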
Install any necessary patches to support Sun Cluster software, if you have not already done so.
To re-enable the loopback file system (LOFS), delete the following entry from the /etc/system file on each node of the cluster.
exclude:lofs
The re-enabling of LOFS becomes effective after the next system reboot.
You cannot have LOFS enabled if you use Sun Cluster HA for NFS on a highly available local file system and have automountd running. LOFS can cause switchover problems for Sun Cluster HA for NFS. If you enable LOFS and later choose to add Sun Cluster HA for NFS on a highly available local file system, you must do one of the following:
Restore the exclude:lofs entry to the /etc/system file on each node of the cluster and reboot each node. This change disables LOFS.
Disable the automountd daemon.
Exclude from the automounter map all files that are part of the highly available local file system that is exported by Sun Cluster HA for NFS. This choice enables you to keep both LOFS and the automountd daemon enabled.
See Types of File Systems in System Administration Guide, Volume 1 (Solaris 8) or The Loopback File System in System Administration Guide: Devices and File Systems (Solaris 9 or Solaris 10) for more information about loopback file systems.
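For the automounter-map option, the automounter's special `-null` map can cancel automounting of a directory. The following fragment is a sketch only: the path /export/hanfs is a placeholder for the highly available local file system that HA for NFS exports in your configuration.

```shell
# Hypothetical /etc/auto_master addition.  The -null special map cancels
# automounting for the indicated directory, keeping the HA local file
# system exported by Sun Cluster HA for NFS (placeholder path
# /export/hanfs) out of the automounter while LOFS remains enabled.
#
# mount-point      map-name
/export/hanfs      -null
```

See the automount(1M) man page for details of the `-null` map before relying on this approach.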
The following example shows the scinstall progress messages that are logged as scinstall completes configuration tasks on the two-node cluster, schost. The cluster is installed from phys-schost-1 by using the scinstall Typical mode. The other cluster node is phys-schost-2. The adapter names are qfe2 and qfe3. The automatic selection of a quorum device is enabled.
  Installation and Configuration

  Log file - /var/cluster/logs/install/scinstall.log.24747

  Testing for "/globaldevices" on "phys-schost-1" … done
  Testing for "/globaldevices" on "phys-schost-2" … done

  Checking installation status … done

  The Sun Cluster software is already installed on "phys-schost-1".
  The Sun Cluster software is already installed on "phys-schost-2".

  Starting discovery of the cluster transport configuration.

  The following connections were discovered:

    phys-schost-1:qfe2  switch1  phys-schost-2:qfe2
    phys-schost-1:qfe3  switch2  phys-schost-2:qfe3

  Completed discovery of the cluster transport configuration.

  Started sccheck on "phys-schost-1".
  Started sccheck on "phys-schost-2".

  sccheck completed with no errors or warnings for "phys-schost-1".
  sccheck completed with no errors or warnings for "phys-schost-2".

  Removing the downloaded files … done

  Configuring "phys-schost-2" … done
  Rebooting "phys-schost-2" … done

  Configuring "phys-schost-1" … done
  Rebooting "phys-schost-1" …

  Log file - /var/cluster/logs/install/scinstall.log.24747

  Rebooting …
If you intend to install data services, go to the appropriate procedure for the data service that you want to install and for your version of the Solaris OS:
| Procedure | Sun Cluster 2 of 2 CD-ROM (Sun Java System data services): Solaris 8 or 9 | Sun Cluster 2 of 2 CD-ROM: Solaris 10 | Sun Cluster Agents CD (all other data services): Solaris 8 or 9 | Sun Cluster Agents CD: Solaris 10 |
|---|---|---|---|---|
| How to Install Sun Cluster Framework and Data-Service Software Packages (Java ES installer) | X | | | |
| How to Install Data-Service Software Packages (pkgadd) | | X | | |
| How to Install Data-Service Software Packages (scinstall) | | | X | X |
| How to Install Data-Service Software Packages (Web Start installer) | | | X | |
Otherwise, go to the next appropriate procedure:
If you installed a single-node cluster, cluster establishment is complete. Go to Configuring the Cluster to install volume management software and configure the cluster.
If you installed a multiple-node cluster and chose automatic quorum configuration, postinstallation setup is complete. Go to How to Verify the Quorum Configuration and Installation Mode.
If you installed a multiple-node cluster and declined automatic quorum configuration, perform postinstallation setup. Go to How to Configure Quorum Devices.
You cannot change the private-network address and netmask after scinstall processing is finished. If you need to use a different private-network address or netmask and the node is still in installation mode, follow the procedures in How to Uninstall Sun Cluster Software to Correct Installation Problems. Then perform the procedures in How to Install Sun Cluster Framework and Data-Service Software Packages (Java ES installer) and then perform this procedure to reinstall the software and configure the node with the correct information.
This procedure describes how to set up and use the scinstall(1M) custom JumpStart installation method. This method installs both Solaris OS and Sun Cluster software on all cluster nodes in the same operation and establishes the cluster. You can also use this procedure to add new nodes to an existing cluster.
Perform the following tasks:
Ensure that the hardware setup is complete and connections are verified before you install Solaris software. See the Sun Cluster Hardware Administration Collection and your server and storage device documentation for details on how to set up the hardware.
Determine the Ethernet address of each cluster node.
If you use a naming service, ensure that the following information is added to any naming services that clients use to access cluster services. See IP Addresses for planning guidelines. See your Solaris system-administrator documentation for information about using Solaris naming services.
Address-to-name mappings for all public hostnames and logical addresses
The IP address and hostname of the JumpStart server
Ensure that your cluster configuration planning is complete. See How to Prepare for Cluster Software Installation for requirements and guidelines.
On the server from which you will create the flash archive, ensure that all Solaris OS software, patches, and firmware that is necessary to support Sun Cluster software is installed.
If Solaris software is already installed on the server, you must ensure that the Solaris installation meets the requirements for Sun Cluster software and any other software that you intend to install on the cluster. See How to Install Solaris Software for more information about installing Solaris software to meet Sun Cluster software requirements.
Ensure that Sun Cluster software packages and patches are installed on the server from which you will create the flash archive. See How to Install Sun Cluster Framework and Data-Service Software Packages (Java ES installer).
Determine which mode of the scinstall utility you will use: Typical or Custom. For the Typical installation of Sun Cluster software, scinstall automatically specifies the following configuration defaults.
| Component | Default Value |
|---|---|
| Private-network address | 172.16.0.0 |
| Private-network netmask | 255.255.0.0 |
| Cluster-transport junctions | switch1 and switch2 |
| Global-devices file-system name | /globaldevices |
| Installation security (DES) | Limited |
| Solaris and Sun Cluster patch directory | /var/cluster/patches |
Complete the appropriate planning worksheet. See Planning the Sun Cluster Environment for planning guidelines.
Typical Mode - If you will use Typical mode and accept all defaults, complete the following worksheet.
| Component | Description/Example | Answer |
|---|---|---|
| JumpStart Directory | What is the name of the JumpStart directory to use? | |
| Cluster Name | What is the name of the cluster that you want to establish? | |
| Cluster Nodes | What are the names of the cluster nodes that are planned for the initial cluster configuration? | |
| Cluster-Transport Adapters and Cables | First node name: | |
| | Transport adapters: | First:  Second: |
| | Will this be a dedicated cluster transport adapter? | First: Yes / No  Second: Yes / No |
| | If no, what is the VLAN ID for this adapter? | |
| Specify for each additional node | Node name: | |
| | Transport adapters: | First:  Second: |
| Quorum Configuration (two-node cluster only) | Do you want to disable automatic quorum device selection? (Answer Yes if any shared storage is not qualified to be a quorum device or if you want to configure a Network Appliance NAS device as a quorum device.) | Yes / No |
Custom Mode - If you will use Custom mode and customize the configuration data, complete the following worksheet.
Follow these guidelines to use the interactive scinstall utility in this procedure:
Interactive scinstall enables you to type ahead. Therefore, do not press the Return key more than once if the next menu screen does not appear immediately.
Unless otherwise noted, you can press Control-D to return to either the start of a series of related questions or to the Main Menu.
Default answers or answers to previous sessions are displayed in brackets ([ ]) at the end of a question. Press Return to enter the response that is in brackets without typing it.
Set up your JumpStart installation server.
Follow the appropriate instructions for your software platform.
| Solaris OS Platform | Instructions |
|---|---|
| SPARC | See one of the following manuals for instructions about how to set up a JumpStart installation server. See also the setup_install_server(1M) and add_install_client(1M) man pages. |
| x86 | See Solaris 9 Software Installation From a PXE Server in Sun Fire V60x and Sun Fire V65x Server Solaris Operating Environment Installation Guide for instructions about how to set up a JumpStart Dynamic Host Configuration Protocol (DHCP) server and a Solaris network for Preboot Execution Environment (PXE) installations. |
Ensure that the JumpStart installation server meets the following requirements.
The installation server is on the same subnet as the cluster nodes, or on the Solaris boot server for the subnet that the cluster nodes use.
The installation server is not itself a cluster node.
The installation server installs a release of the Solaris OS that is supported by the Sun Cluster software.
A custom JumpStart directory exists for JumpStart installation of Sun Cluster software. This jumpstart-dir directory must contain a copy of the check(1M) utility. The directory must also be NFS exported for reading by the JumpStart installation server.
Each new cluster node is configured as a custom JumpStart installation client that uses the custom JumpStart directory that you set up for Sun Cluster installation.
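The NFS-export requirements above can be met with dfstab entries such as the following sketch. The path /export/jumpstart is a placeholder for your jumpstart-dir; the share options shown are typical for read-only JumpStart exports, but verify them against your site's NFS policy.

```shell
# Hypothetical /etc/dfs/dfstab entry on the JumpStart installation
# server: export the custom JumpStart directory read-only so that
# cluster nodes can read it during installation.
share -F nfs -o ro,anon=0 /export/jumpstart
```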
If you are installing a new node to an existing cluster, add the node to the list of authorized cluster nodes.
Switch to another cluster node that is active and start the scsetup(1M) utility.
Use the scsetup utility to add the new node's name to the list of authorized cluster nodes.
For more information, see How to Add a Node to the Authorized Node List in Sun Cluster System Administration Guide for Solaris OS.
On a cluster node or another machine of the same server platform, install the Solaris OS, if you have not already done so.
Follow procedures in How to Install Solaris Software.
On the installed system, install Sun Cluster software, if you have not done so already.
Follow procedures in How to Install Sun Cluster Framework and Data-Service Software Packages (Java ES installer).
Enable the common agent container daemon to start automatically during system boots.
# cacaoadm enable
On the installed system, install any necessary patches to support Sun Cluster software.
On the installed system, update the /etc/inet/hosts file with all IP addresses that are used in the cluster.
Perform this step regardless of whether you are using a naming service. See IP Addresses for a listing of Sun Cluster components whose IP addresses you must add.
For Solaris 10, on the installed system, update the /etc/inet/ipnodes file with all IP addresses that are used in the cluster.
Perform this step regardless of whether you are using a naming service.
Create the flash archive of the installed system.
# flarcreate -n name archive
Name to give the flash archive.
File name to give the flash archive, with the full path. By convention, the file name ends in .flar.
Follow procedures in one of the following manuals:
Ensure that the flash archive is NFS exported for reading by the JumpStart installation server.
See Solaris NFS Environment in System Administration Guide, Volume 3 (Solaris 8) or Managing Network File Systems (Overview), in System Administration Guide: Network Services (Solaris 9 or Solaris 10) for more information about automatic file sharing.
From the JumpStart installation server, start the scinstall(1M) utility.
The path /export/suncluster/sc31/ is used here as an example of the installation directory that you created. In the CD-ROM path, replace arch with sparc or x86 and replace ver with 8 for Solaris 8, 9 for Solaris 9, or 10 for Solaris 10.
# cd /export/suncluster/sc31/Solaris_arch/Product/sun_cluster/ \
Solaris_ver/Tools/
# ./scinstall
From the Main Menu, choose the menu item, Configure a cluster to be JumpStarted from this installation server.
This option is used to configure custom JumpStart finish scripts. JumpStart uses these finish scripts to install the Sun Cluster software.
  *** Main Menu ***

    Please select from one of the following (*) options:

      * 1) Install a cluster or cluster node
      * 2) Configure a cluster to be JumpStarted from this install server
        3) Add support for new data services to this cluster node
        4) Upgrade this cluster node
      * 5) Print release information for this cluster node

      * ?) Help with menu options
      * q) Quit

    Option:  2
Follow the menu prompts to supply your answers from the configuration planning worksheet.
The scinstall command stores your configuration information and copies the autoscinstall.class default class file in the jumpstart-dir/autoscinstall.d/3.1/ directory. This file is similar to the following example.
install_type    initial_install
system_type     standalone
partitioning    explicit
filesys         rootdisk.s0 free /
filesys         rootdisk.s1 750 swap
filesys         rootdisk.s3 512 /globaldevices
filesys         rootdisk.s7 20
cluster         SUNWCuser add
package         SUNWman add
Make adjustments to the autoscinstall.class file to configure JumpStart to install the flash archive.
Modify entries as necessary to match configuration choices you made when you installed the Solaris OS on the flash archive machine or when you ran the scinstall utility.
For example, if you assigned slice 4 for the global-devices file system and specified to scinstall that the file-system name is /gdevs, you would change the /globaldevices entry of the autoscinstall.class file to the following:
filesys rootdisk.s4 512 /gdevs
Change the following entries in the autoscinstall.class file.
| Existing Entry to Replace | New Entry to Add |
|---|---|
| install_type initial_install | install_type flash_install |
| system_type standalone | archive_location retrieval_type location |
See archive_location Keyword in Solaris 8 Advanced Installation Guide, Solaris 9 9/04 Installation Guide, or Solaris 10 Installation Guide: Custom JumpStart and Advanced Installations for information about valid values for retrieval_type and location when used with the archive_location keyword.
Remove all entries that would install a specific package, such as the following entries.
cluster SUNWCuser add
package SUNWman add
Set up Solaris patch directories, if you did not already install the patches on the flash-archived system.
If you specified a patch directory to the scinstall utility, patches that are located in Solaris patch directories are not installed.
Create jumpstart-dir/autoscinstall.d/nodes/node/patches/ directories that are NFS exported for reading by the JumpStart installation server.
Create one directory for each node in the cluster, where node is the name of a cluster node. Alternately, use this naming convention to create symbolic links to a shared patch directory.
# mkdir jumpstart-dir/autoscinstall.d/nodes/node/patches/
Place copies of any Solaris patches into each of these directories.
Place copies of any hardware-related patches that you must install after Solaris software is installed into each of these directories.
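The per-node directories, linked to one shared patch area as the text suggests, can be set up as in the following sketch. The directory /tmp/jumpstart-demo and the node names are placeholders for your real jumpstart-dir and cluster nodes.

```shell
# Sketch: create a patches/ directory for each node under a stand-in
# JumpStart directory, each one a symbolic link to a shared patch area
# so patches need to be copied only once.
JSDIR=/tmp/jumpstart-demo            # placeholder for jumpstart-dir
mkdir -p "$JSDIR/shared-patches"

for node in phys-schost-1 phys-schost-2; do
    mkdir -p "$JSDIR/autoscinstall.d/nodes/$node"
    # Replace any stale link, then link patches/ to the shared area.
    rm -f "$JSDIR/autoscinstall.d/nodes/$node/patches"
    ln -s "$JSDIR/shared-patches" \
        "$JSDIR/autoscinstall.d/nodes/$node/patches"
done
```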
If you are using a cluster administrative console, display a console screen for each node in the cluster.
If Cluster Control Panel (CCP) software is installed and configured on your administrative console, use the cconsole(1M) utility to display the individual console screens.
Use the following command to start the cconsole utility:
# /opt/SUNWcluster/bin/cconsole clustername &
The cconsole utility also opens a master window from which you can send your input to all individual console windows at the same time.
If you do not use the cconsole utility, connect to the consoles of each node individually.
Shut down each node.
# shutdown -g0 -y -i0
Boot each node to start the JumpStart installation.
On SPARC based systems, do the following:
ok boot net - install
Surround the dash (-) in the command with a space on each side.
On x86 based systems, do the following:
When the BIOS information screen appears, press the Esc key.
The Select Boot Device screen appears.
On the Select Boot Device screen, choose the listed IBA that is connected to the same network as the JumpStart PXE installation server.
The lowest number to the right of the IBA boot choices corresponds to the lower Ethernet port number. The higher number to the right of the IBA boot choices corresponds to the higher Ethernet port number.
The node reboots and the Device Configuration Assistant appears.
On the Boot Solaris screen, choose Net.
At the following prompt, choose Custom JumpStart and press Enter:
Select the type of installation you want to perform:

        1 Solaris Interactive
        2 Custom JumpStart

Enter the number of your choice followed by the <ENTER> key.

If you enter anything else, or if you wait for 30 seconds,
an interactive installation will be started.
When prompted, answer the questions and follow the instructions on the screen.
JumpStart installs the Solaris OS and Sun Cluster software on each node. When the installation is successfully completed, each node is fully installed as a new cluster node. Sun Cluster installation output is logged in a /var/cluster/logs/install/scinstall.log.N file.
For the Solaris 10 OS, verify on each node that multi-user services for the Service Management Facility (SMF) are online.
If services are not yet online for a node, wait until the state becomes online before you proceed to the next step.
# svcs multi-user-server
STATE          STIME    FMRI
online         17:52:55 svc:/milestone/multi-user-server:default
If you are installing a new node to an existing cluster, create mount points on the new node for all existing cluster file systems.
From another cluster node that is active, display the names of all cluster file systems.
% mount | grep global | egrep -v node@ | awk '{print $1}'
On the node that you added to the cluster, create a mount point for each cluster file system in the cluster.
% mkdir -p mountpoint
For example, if a file-system name that is returned by the mount command is /global/dg-schost-1, run mkdir -p /global/dg-schost-1 on the node that is being added to the cluster.
The mount points become active after you reboot the cluster in Step 24.
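Combining the two commands above, the mount points can be created in one pass, as in the following sketch. The file-system names are hard-coded for illustration; on a live cluster you would generate the list on an active node with the `mount | grep global …` pipeline shown above. DEMO_ROOT is used only so this sketch does not write to /global.

```shell
# Sketch: create a local mount point for each cluster file-system name.
# The list is hard-coded for illustration; generate it on an active node
# with:  mount | grep global | egrep -v node@ | awk '{print $1}'
DEMO_ROOT=/tmp/newnode-demo   # drop this prefix on a real cluster node
for fs in /global/dg-schost-1 /global/dg-schost-2; do
    mkdir -p "${DEMO_ROOT}${fs}"
done
```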
If VERITAS Volume Manager (VxVM) is installed on any nodes that are already in the cluster, view the vxio number on each VxVM–installed node.
# grep vxio /etc/name_to_major
vxio NNN
Ensure that the same vxio number is used on each of the VxVM-installed nodes.
Ensure that the vxio number is available for use on each of the nodes that do not have VxVM installed.
If the vxio number is already in use on a node that does not have VxVM installed, free the number on that node. Change the /etc/name_to_major entry to use a different number.
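The consistency check across nodes can be scripted as in the following sketch. The local file copies and the major number 270 are fabricated for illustration; on a real cluster you would gather each node's /etc/name_to_major (for example, over rsh or ssh).

```shell
# Sketch: verify that the vxio major number matches across copies of
# /etc/name_to_major gathered from two nodes.  The files and the
# value 270 are placeholders for illustration.
printf 'vxio 270\n' > /tmp/n2m.phys-schost-1
printf 'vxio 270\n' > /tmp/n2m.phys-schost-2

# Extract the major number assigned to vxio on each node.
n1=$(awk '$1 == "vxio" {print $2}' /tmp/n2m.phys-schost-1)
n2=$(awk '$1 == "vxio" {print $2}' /tmp/n2m.phys-schost-2)

if [ "$n1" = "$n2" ]; then
    echo "vxio major number consistent: $n1"
else
    echo "mismatch: $n1 vs $n2 - edit /etc/name_to_major on one node"
fi
```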
(Optional) To use dynamic reconfiguration on Sun Enterprise 10000 servers, add the following entry to the /etc/system file. Add this entry on each node in the cluster.
set kernel_cage_enable=1
This entry becomes effective after the next system reboot. See the Sun Cluster System Administration Guide for Solaris OS for procedures to perform dynamic reconfiguration tasks in a Sun Cluster configuration. See your server documentation for more information about dynamic reconfiguration.
To re-enable the loopback file system (LOFS), delete the following entry from the /etc/system file on each node of the cluster.
exclude:lofs |
The re-enabling of LOFS becomes effective after the next system reboot.
You cannot have LOFS enabled if you use Sun Cluster HA for NFS on a highly available local file system and have automountd running. LOFS can cause switchover problems for Sun Cluster HA for NFS. If you enable LOFS and later choose to add Sun Cluster HA for NFS on a highly available local file system, you must do one of the following:
Restore the exclude:lofs entry to the /etc/system file on each node of the cluster and reboot each node. This change disables LOFS.
Disable the automountd daemon.
Exclude from the automounter map all files that are part of the highly available local file system that is exported by Sun Cluster HA for NFS. This choice enables you to keep both LOFS and the automountd daemon enabled.
See Types of File Systems in System Administration Guide, Volume 1 (Solaris 8) or The Loopback File System in System Administration Guide: Devices and File Systems (Solaris 9 or Solaris 10) for more information about loopback file systems.
x86: Set the default boot file to kadb.
# eeprom boot-file=kadb
The setting of this value enables you to reboot the node if you are unable to access a login prompt.
If you performed a task that requires a cluster reboot, follow these steps to reboot the cluster.
The following are some of the tasks that require a reboot:
Adding a new node to an existing cluster
Installing patches that require a node or cluster reboot
Making configuration changes that require a reboot to become active
From one node, shut down the cluster.
# scshutdown
Do not reboot the first-installed node of the cluster until after the cluster is shut down. Until cluster installation mode is disabled, only the first-installed node, which established the cluster, has a quorum vote. In an established cluster that is still in installation mode, if the cluster is not shut down before the first-installed node is rebooted, the remaining cluster nodes cannot obtain quorum. The entire cluster then shuts down.
Cluster nodes remain in installation mode until the first time that you run the scsetup(1M) command. You run this command during the procedure How to Configure Quorum Devices.
Reboot each node in the cluster.
On SPARC based systems, do the following:
ok boot
On x86 based systems, do the following:
                     <<< Current Boot Parameters >>>
Boot path: /pci@0,0/pci-ide@7,1/ata@1/cmdk@0,0:b
Boot args:

Type    b [file-name] [boot-flags] <ENTER>   to boot with options
or      i <ENTER>                            to enter boot interpreter
or      <ENTER>                              to boot with defaults

                 <<< timeout in 5 seconds >>>

Select (b)oot or (i)nterpreter: b
The scinstall utility installs and configures all cluster nodes and reboots the cluster. The cluster is established when all nodes have successfully booted into the cluster. Sun Cluster installation output is logged in a /var/cluster/logs/install/scinstall.log.N file.
From one node, verify that all nodes have joined the cluster.
Run the scstat(1M) command to display a list of the cluster nodes. You do not need to be logged in as superuser to run this command.
% scstat -n
Output resembles the following.
-- Cluster Nodes --
                    Node name      Status
                    ---------      ------
  Cluster node:     phys-schost-1  Online
  Cluster node:     phys-schost-2  Online
If you added a node to a two-node cluster, go to How to Update SCSI Reservations After Adding a Node.
If you intend to install data services, go to the appropriate procedure for the data service that you want to install and for your version of the Solaris OS:
| Procedure | Sun Cluster 2 of 2 CD-ROM (Sun Java System data services): Solaris 8 or 9 | Sun Cluster 2 of 2 CD-ROM: Solaris 10 | Sun Cluster Agents CD (all other data services): Solaris 8 or 9 | Sun Cluster Agents CD: Solaris 10 |
|---|---|---|---|---|
| How to Install Sun Cluster Framework and Data-Service Software Packages (Java ES installer) | X | | | |
| How to Install Data-Service Software Packages (pkgadd) | | X | | |
| How to Install Data-Service Software Packages (scinstall) | | | X | X |
| How to Install Data-Service Software Packages (Web Start installer) | | | X | |
Otherwise, go to the next appropriate procedure:
If you installed a single-node cluster, cluster establishment is complete. Go to Configuring the Cluster to install volume management software and configure the cluster.
If you added a new node to an existing cluster, verify the state of the cluster. Go to How to Verify the Quorum Configuration and Installation Mode.
If you installed a multiple-node cluster and chose automatic quorum configuration, postinstallation setup is complete. Go to How to Verify the Quorum Configuration and Installation Mode.
If you installed a multiple-node cluster and declined automatic quorum configuration, perform postinstallation setup. Go to How to Configure Quorum Devices.
If you added a node to a cluster that has more or fewer than two nodes, go to How to Verify the Quorum Configuration and Installation Mode.
Disabled scinstall option – If the JumpStart option of the scinstall command does not have an asterisk in front, the option is disabled. This condition indicates that JumpStart setup is not complete or that the setup has an error. To correct this condition, first quit the scinstall utility. Repeat Step 1 through Step 10 to correct JumpStart setup, then restart the scinstall utility.
Error messages about nonexistent nodes – Unless you have installed your own /etc/inet/ntp.conf file, the scinstall command installs a default ntp.conf file for you. The default file is shipped with references to the maximum number of nodes. Therefore, the xntpd(1M) daemon might issue error messages regarding some of these references at boot time. You can safely ignore these messages. See How to Configure Network Time Protocol (NTP) for information about how to suppress these messages under otherwise normal cluster conditions.
Changing the private-network address – You cannot change the private-network address and netmask after scinstall processing has finished. If you need to use a different private-network address or netmask and the node is still in installation mode, follow the procedures in How to Uninstall Sun Cluster Software to Correct Installation Problems. Then repeat this procedure to reinstall and configure the node with the correct information.
Do not use this configuration method in the following circumstances:
To configure a single-node cluster. Instead, follow procedures in How to Configure Sun Cluster Software on All Nodes (scinstall).
To use a different private-network IP address or netmask than the defaults. SunPlex Installer automatically specifies the default private-network address (172.16.0.0) and netmask (255.255.0.0). Instead, follow procedures in How to Configure Sun Cluster Software on All Nodes (scinstall).
To configure tagged-VLAN capable adapters or SCI-PCI adapters for the cluster transport. Instead, follow procedures in How to Configure Sun Cluster Software on All Nodes (scinstall).
To add a new node to an existing cluster. Instead, follow procedures in How to Configure Sun Cluster Software on Additional Cluster Nodes (scinstall) or How to Install Solaris and Sun Cluster Software (JumpStart).
This section describes how to use SunPlex Installer, the installation module of SunPlex Manager, to establish a new cluster. You can also use SunPlex Installer to install or configure one or more of the following additional software products:
(On Solaris 8 only) Solstice DiskSuite software – After it installs Solstice DiskSuite software, SunPlex Installer configures up to three metasets and associated metadevices. SunPlex Installer also creates and mounts cluster file systems for each metaset.
(On Solaris 9 or Solaris 10 only) Solaris Volume Manager software – SunPlex Installer configures up to three Solaris Volume Manager volumes. SunPlex Installer also creates and mounts cluster file systems for each volume. Solaris Volume Manager software is already installed as part of Solaris software installation.
Sun Cluster HA for NFS data service.
Sun Cluster HA for Apache scalable data service.
The following table lists SunPlex Installer installation requirements for these additional software products.
Table 2–3 Requirements to Use SunPlex Installer to Install Software
The test IP addresses that you supply must meet the following requirements:
Test IP addresses for all adapters in the same multipathing group must belong to a single IP subnet.
Test IP addresses must not be used by normal applications because the test IP addresses are not highly available.
The following table lists each metaset name and cluster-file-system mount point that is created by SunPlex Installer. The number of metasets and mount points that SunPlex Installer creates depends on the number of shared disks that are connected to the node. For example, if a node is connected to four shared disks, SunPlex Installer creates the mirror-1 and mirror-2 metasets. However, SunPlex Installer does not create the mirror-3 metaset, because the node does not have enough shared disks to create a third metaset.
Table 2–4 Metasets Created by SunPlex Installer
Shared Disks | Metaset Name | Cluster File System Mount Point | Purpose
---|---|---|---
First pair | mirror-1 | /global/mirror-1 | Sun Cluster HA for NFS or Sun Cluster HA for Apache scalable data service, or both
Second pair | mirror-2 | /global/mirror-2 | Unused
Third pair | mirror-3 | /global/mirror-3 | Unused
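The pairing rule that governs this table can be sketched as a quick calculation. This is an illustrative example only; the variable names are not part of any Sun Cluster tool.

```shell
# Each metaset requires one pair of shared disks, and SunPlex Installer
# creates at most three metasets (mirror-1 through mirror-3).
shared_disks=4                      # shared disks connected to the node
metasets=$((shared_disks / 2))      # one metaset per pair
if [ "$metasets" -gt 3 ]; then metasets=3; fi   # capped at three
echo "metasets created: $metasets"
```

With four shared disks this prints `metasets created: 2`, matching the mirror-1 and mirror-2 example in the text.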
If the cluster does not meet the minimum shared-disk requirement, SunPlex Installer still installs the Solstice DiskSuite packages. However, without sufficient shared disks, SunPlex Installer cannot configure the metasets, metadevices, or volumes. SunPlex Installer then cannot configure the cluster file systems that are needed to create instances of the data service.
SunPlex Installer recognizes a limited character set to increase security. Characters that are not a part of the set are silently filtered out when HTML forms are submitted to the SunPlex Installer server. The following characters are accepted by SunPlex Installer:
()+,-./0-9:=@A-Z^_a-z{|}~ |
This filter can cause problems in the following two areas:
Password entry for Sun Java System services – If the password contains unusual characters, these characters are stripped out, resulting in one of the following problems:
The resulting password fails because it has fewer than eight characters.
The application is configured with a different password than the user expects.
Localization – Alternative character sets, such as accented characters or Asian characters, do not work for input.
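As a precaution, a password or form value can be checked against the accepted character set before it is submitted. The following is a hedged sketch; the function name is illustrative and the eight-character minimum follows the password problem described above.

```shell
# Return 0 if every character of $1 is in SunPlex Installer's accepted set
# ()+,-./0-9:=@A-Z^_a-z{|}~ and the value is at least 8 characters long,
# so nothing would be silently stripped on form submission.
spx_input_ok() {
    # The bracket expression lists the accepted set; the hyphen is placed
    # last so that it is taken literally.
    if printf '%s' "$1" | LC_ALL=C grep -q '[^()+,./0-9:=@A-Z^_a-z{|}~-]'; then
        return 1    # contains a character that the filter would strip
    fi
    [ "${#1}" -ge 8 ]
}
```

For example, `spx_input_ok 'Adm1n-pass_01'` succeeds, while a value containing a space or an exclamation point fails the check.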
Perform this procedure to use SunPlex Installer to configure Sun Cluster software and install patches on all nodes in the cluster in a single operation. In addition, you can use this procedure to install Solstice DiskSuite software and patches (Solaris 8) and to configure Solstice DiskSuite or Solaris Volume Manager mirrored disk sets.
Do not use this configuration method in the following circumstances:
To configure a single-node cluster. Instead, follow procedures in How to Configure Sun Cluster Software on All Nodes (scinstall).
To use a different private-network IP address or netmask than the defaults. SunPlex Installer automatically specifies the default private-network address (172.16.0.0) and netmask (255.255.0.0). Instead, follow procedures in How to Configure Sun Cluster Software on All Nodes (scinstall).
To configure tagged-VLAN capable adapters or SCI-PCI adapters for the cluster transport. Instead, follow procedures in How to Configure Sun Cluster Software on All Nodes (scinstall).
To add a new node to an existing cluster. Instead, follow procedures in How to Configure Sun Cluster Software on Additional Cluster Nodes (scinstall) or How to Install Solaris and Sun Cluster Software (JumpStart).
The installation process might take from 30 minutes to two or more hours. The actual length of time depends on the number of nodes that are in the cluster, your choice of data services to install, and the number of disks that are in your cluster configuration.
Perform the following tasks:
Ensure that the cluster configuration meets the requirements to use SunPlex Installer to install software. See Using SunPlex Installer to Configure Sun Cluster Software for installation requirements and restrictions.
Ensure that the Solaris OS is installed to support Sun Cluster software.
If Solaris software is already installed on the node, you must ensure that the Solaris installation meets the requirements for Sun Cluster software and any other software that you intend to install on the cluster. See How to Install Solaris Software for more information about installing Solaris software to meet Sun Cluster software requirements.
Ensure that Apache software packages and Apache software patches are installed on the node.
# pkginfo SUNWapchr SUNWapchu SUNWapchd |
If necessary, install any missing Apache software packages from the Solaris Software 2 of 2 CD-ROM.
Ensure that Sun Cluster software packages are installed on the node. See How to Install Sun Cluster Framework and Data-Service Software Packages (Java ES installer).
If you intend to use the root password to access SunPlex Installer or SunPlex Manager, ensure that the root password is the same on every node of the cluster. If necessary, also use the chkey command to update the RPC key pair. See the chkey(1) man page.
If you intend to install Sun Cluster HA for NFS or Sun Cluster HA for Apache, ensure that the cluster configuration meets all applicable requirements. See Using SunPlex Installer to Configure Sun Cluster Software.
Complete the following configuration planning worksheet. See Planning the Solaris OS and Planning the Sun Cluster Environment for planning guidelines. See the Sun Cluster Data Services Planning and Administration Guide for Solaris OS for data-service planning guidelines.
Prepare file-system paths to a CD-ROM image of each software product that you intend to install.
Follow these guidelines to prepare the file-system paths:
Provide each CD-ROM image in a location that is available to each node.
Ensure that the CD-ROM images are accessible to all nodes of the cluster from the same file-system path. These paths can be one or more of the following locations:
CD-ROM drives that are exported to the network from machines outside the cluster.
Exported file systems on machines outside the cluster.
CD-ROM images that are copied to local file systems on each node of the cluster. The local file system must use the same name on each node.
x86: Determine whether you are using the Netscape Navigator browser or the Microsoft Internet Explorer browser on your administrative console.
x86: Ensure that the Java plug-in is installed and working on your administrative console.
Start the Netscape Navigator browser on the administrative console that you use to connect to the cluster.
From the Help menu, choose About Plug-ins.
Determine whether the Java plug-in is listed.
Download the latest Java plug-in from http://java.sun.com/products/plugin.
Install the plug-in on your administrative console.
Create a symbolic link to the plug-in.
% cd ~/.netscape/plugins/
% ln -s /usr/j2se/plugin/i386/ns4/javaplugin.so . |
Skip to Step 5.
x86: Ensure that Java 2 Platform, Standard Edition (J2SE) for Windows is installed and working on your administrative console.
On your Microsoft Windows desktop, click Start, point to Settings, and then select Control Panel.
The Control Panel window appears.
Determine whether the Java Plug-in is listed.
Download the latest version of J2SE for Windows from http://java.sun.com/j2se/downloads.html.
Install the J2SE for Windows software on your administrative console.
Restart the system on which your administrative console runs.
The J2SE for Windows control panel is activated.
If patches exist that are required to support Sun Cluster or Solstice DiskSuite software, determine how to install those patches.
To manually install patches, use the patchadd command to install all patches before you use SunPlex Installer.
To use SunPlex Installer to install patches, copy patches into a single directory.
Ensure that the patch directory meets the following requirements:
The patch directory resides on a file system that is available to each node.
Only one version of each patch is present in this patch directory. If the patch directory contains multiple versions of the same patch, SunPlex Installer cannot determine the correct patch dependency order.
The patches are uncompressed.
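One way to check the single-version requirement is to look for patch IDs that occur with more than one revision in the directory. This is a hedged sketch; it assumes the standard Solaris patch naming scheme patch-id-revision (for example, 110648-05), and the function name is illustrative.

```shell
# Print any patch IDs that appear with more than one revision in the
# given patch directory; no output means the directory is clean.
dup_patch_ids() {
    ls "$1" | sed -n 's/^\([0-9]\{6\}\)-[0-9][0-9].*/\1/p' | sort | uniq -d
}
```

For example, a directory that contains both 110648-04 and 110648-05 would print `110648`, indicating that one of the two revisions must be removed before SunPlex Installer is run.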
From the administrative console or any other machine outside the cluster, launch a browser.
Disable the browser's Web proxy.
SunPlex Installer installation functionality is incompatible with Web proxies.
Ensure that disk caching and memory caching are enabled.
The disk cache and memory cache size must be greater than 0.
From the browser, connect to port 3000 on a node of the cluster.
https://node:3000 |
The Sun Cluster Installation screen is displayed in the browser window.
If SunPlex Installer displays the data services installation screen instead of the Sun Cluster Installation screen, Sun Cluster framework software is already installed and configured on that node. Check that the name of the node in the URL is the correct name of the cluster node to install.
If the browser displays a New Site Certification window, follow the onscreen instructions to accept the certificate.
Log in as superuser.
In the Sun Cluster Installation screen, verify that the cluster meets the listed requirements for using SunPlex Installer.
If you meet all listed requirements, click Next to continue to the next screen.
Follow the menu prompts to supply your answers from the configuration planning worksheet.
Click Begin Installation to start the installation process.
Follow these guidelines to use SunPlex Installer:
Do not close the browser window or change the URL during the installation process.
If the browser displays a New Site Certification window, follow the onscreen instructions to accept the certificate.
If the browser prompts for login information, type the appropriate superuser ID and password for the node that you connect to.
SunPlex Installer installs and configures all cluster nodes and reboots the cluster. The cluster is established when all nodes have successfully booted into the cluster. Sun Cluster installation output is logged in a /var/cluster/logs/install/scinstall.log.N file.
During installation, the screen displays brief messages about the status of the cluster installation process. When installation and configuration is complete, the browser displays the cluster monitoring and administration GUI.
SunPlex Installer installation output is logged in the /var/cluster/spm/messages file.
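After installation completes, these logs can be scanned for failure messages. The following is a hedged sketch; exact message wording varies by release, and scan_install_log is an illustrative helper name, not a Sun Cluster command.

```shell
# Print lines from an installation log that look like failures;
# if none are found, say so explicitly.
scan_install_log() {
    grep -i -e 'error' -e 'failed' "$1" || echo "no errors found in $1"
}
```

Typical usage would be `scan_install_log /var/cluster/logs/install/scinstall.log.1` on each node after the cluster reboots.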
From one node, verify that all nodes have joined the cluster.
Run the scstat(1M) command to display a list of the cluster nodes. You do not need to be logged in as superuser to run this command.
% scstat -n |
Output resembles the following.
-- Cluster Nodes --

Node name         Status
---------         ------
Cluster node:     phys-schost-1     Online
Cluster node:     phys-schost-2     Online |
Verify the quorum assignments and modify those assignments, if necessary.
For clusters with three or more nodes, the use of shared quorum devices is optional. SunPlex Installer might or might not have assigned quorum votes to any quorum devices, depending on whether appropriate shared disks were available. You can use SunPlex Manager to designate quorum devices and to reassign quorum votes in the cluster. See Chapter 5, Administering Quorum, in Sun Cluster System Administration Guide for Solaris OS for more information.
To re-enable the loopback file system (LOFS), delete the following entry from the /etc/system file on each node of the cluster.
exclude:lofs |
The re-enabling of LOFS becomes effective after the next system reboot.
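The deletion described above can be scripted, for example as follows. This is a hedged sketch: it keeps a backup copy, and the SYSFILE variable is parameterized only so the edit can be previewed on a scratch copy before touching the real /etc/system.

```shell
# Remove the exclude:lofs line from /etc/system (or a copy of it);
# the change takes effect at the next reboot.
SYSFILE=${SYSFILE:-/etc/system}
cp "$SYSFILE" "$SYSFILE.bak" &&
sed '/^exclude:lofs/d' "$SYSFILE.bak" > "$SYSFILE"
```

Run this on each node of the cluster, then reboot for LOFS to be re-enabled.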
You cannot have LOFS enabled if you use Sun Cluster HA for NFS on a highly available local file system and have automountd running. LOFS can cause switchover problems for Sun Cluster HA for NFS. If you enable LOFS and later choose to add Sun Cluster HA for NFS on a highly available local file system, you must do one of the following:
Restore the exclude:lofs entry to the /etc/system file on each node of the cluster and reboot each node. This change disables LOFS.
Disable the automountd daemon.
Exclude from the automounter map all files that are part of the highly available local file system that is exported by Sun Cluster HA for NFS. This choice enables you to keep both LOFS and the automountd daemon enabled.
See Types of File Systems in System Administration Guide, Volume 1 (Solaris 8) or The Loopback File System in System Administration Guide: Devices and File Systems (Solaris 9 or Solaris 10) for more information about loopback file systems.
If you intend to install data services, go to the appropriate procedure for the data service that you want to install and for your version of the Solaris OS:
Procedure | Sun Cluster 2 of 2 CD-ROM (Sun Java System data services): Solaris 8 or 9 | Sun Cluster 2 of 2 CD-ROM: Solaris 10 | Sun Cluster Agents CD (All other data services): Solaris 8 or 9 | Sun Cluster Agents CD: Solaris 10
---|---|---|---|---
How to Install Sun Cluster Framework and Data-Service Software Packages (Java ES installer) | X | | |
How to Install Data-Service Software Packages (pkgadd) | | X | |
How to Install Data-Service Software Packages (scinstall) | | | X | X
How to Install Data-Service Software Packages (Web Start installer) | | | X |
Otherwise, go to How to Verify the Quorum Configuration and Installation Mode.
You cannot change the private-network address and netmask after scinstall processing has finished. If you need to use a different private-network address or netmask and the node is still in installation mode, follow the procedures in How to Uninstall Sun Cluster Software to Correct Installation Problems. Then repeat this procedure to reinstall and configure the node with the correct information.
Perform this procedure to add a new node to an existing cluster. To use JumpStart to add a new node, instead follow procedures in How to Install Solaris and Sun Cluster Software (JumpStart).
Perform the following tasks:
Ensure that all necessary hardware is installed.
Ensure that the host adapter is installed on the new node. See the Sun Cluster 3.0-3.1 Hardware Administration Manual for Solaris OS.
Verify that any existing cluster interconnects can support the new node. See the Sun Cluster 3.0-3.1 Hardware Administration Manual for Solaris OS.
Ensure that any additional storage is installed. See the appropriate manual from the Sun Cluster 3.x Hardware Administration Collection.
Ensure that the Solaris OS is installed to support Sun Cluster software.
If Solaris software is already installed on the node, you must ensure that the Solaris installation meets the requirements for Sun Cluster software and any other software that you intend to install on the cluster. See How to Install Solaris Software for more information about installing Solaris software to meet Sun Cluster software requirements.
Ensure that Sun Cluster software packages are installed on the node. See How to Install Sun Cluster Framework and Data-Service Software Packages (Java ES installer).
Determine which mode of the scinstall utility you will use, Typical or Custom. For the Typical installation of Sun Cluster software, scinstall automatically specifies the following configuration defaults.
Component | Default Value
---|---
Cluster-transport junctions | switch1 and switch2
Global-devices file-system name | /globaldevices
Solaris and Sun Cluster patch directory | /var/cluster/patches
Complete one of the following configuration planning worksheets. See Planning the Solaris OS and Planning the Sun Cluster Environment for planning guidelines.
Typical Mode - If you will use Typical mode and accept all defaults, complete the following worksheet.
Custom Mode - If you will use Custom mode and customize the configuration data, complete the following worksheet.
Follow these guidelines to use the interactive scinstall utility in this procedure:
Interactive scinstall enables you to type ahead. Therefore, do not press the Return key more than once if the next menu screen does not appear immediately.
Unless otherwise noted, you can press Control-D to return to either the start of a series of related questions or to the Main Menu.
Default answers or answers to previous sessions are displayed in brackets ([ ]) at the end of a question. Press Return to enter the response that is in brackets without typing it.
If you are adding this node to a single-node cluster, ensure that two cluster interconnects already exist by displaying the interconnect configuration.
# scconf -p | grep cable
# scconf -p | grep adapter |
You must have at least two cables or two adapters configured before you can add a node.
If the output shows configuration information for two cables or for two adapters, proceed to Step 2.
If the output shows no configuration information for either cables or adapters, or shows configuration information for only one cable or adapter, configure new cluster interconnects.
On the existing cluster node, start the scsetup(1M) utility.
# scsetup |
Choose the menu item, Cluster interconnect.
Choose the menu item, Add a transport cable.
Follow the instructions to specify the name of the node to add to the cluster, the name of a transport adapter, and whether to use a transport junction.
If necessary, repeat Step c to configure a second cluster interconnect.
When finished, quit the scsetup utility.
Verify that the cluster now has two cluster interconnects configured.
# scconf -p | grep cable
# scconf -p | grep adapter |
The command output should show configuration information for at least two cluster interconnects.
If you are adding this node to an existing cluster, add the new node to the cluster authorized-nodes list.
On any active cluster member, start the scsetup(1M) utility.
# scsetup |
The Main Menu is displayed.
Choose the menu item, New nodes.
Choose the menu item, Specify the name of a machine which may add itself.
Follow the prompts to add the node's name to the list of recognized machines.
The scsetup utility prints the message Command completed successfully if the task is completed without error.
Quit the scsetup utility.
Become superuser on the cluster node to configure.
Start the scinstall utility.
# /usr/cluster/bin/scinstall |
From the Main Menu, choose the menu item, Install a cluster or cluster node.
  *** Main Menu ***

    Please select from one of the following (*) options:

      * 1) Install a cluster or cluster node
        2) Configure a cluster to be JumpStarted from this install server
        3) Add support for new data services to this cluster node
        4) Upgrade this cluster node
      * 5) Print release information for this cluster node

      * ?) Help with menu options
      * q) Quit

    Option:  1 |
From the Install Menu, choose the menu item, Add this machine as a node in an existing cluster.
Follow the menu prompts to supply your answers from the configuration planning worksheet.
The scinstall utility configures the node and boots the node into the cluster.
Change to a directory that does not reside on the CD-ROM and eject the CD-ROM.
# eject cdrom |
Install any necessary patches to support Sun Cluster software, if you have not already done so.
Repeat this procedure on any other node to add to the cluster until all additional nodes are fully configured.
For the Solaris 10 OS, verify on each node that multi-user services for the Service Management Facility (SMF) are online.
If services are not yet online for a node, wait until the state becomes online before you proceed to the next step.
# svcs multi-user-server
STATE          STIME    FMRI
online         17:52:55 svc:/milestone/multi-user-server:default |
From an active cluster member, prevent any other nodes from joining the cluster.
# /usr/cluster/bin/scconf -a -T node=. |
-a
  Specifies the add form of the command
-T
  Specifies authentication options
node=.
  Specifies the node name of dot (.) to add to the authentication list, to prevent any other node from adding itself to the cluster
Alternately, you can use the scsetup(1M) utility. See How to Add a Node to the Authorized Node List in Sun Cluster System Administration Guide for Solaris OS for procedures.
From one node, verify that all nodes have joined the cluster.
Run the scstat(1M) command to display a list of the cluster nodes. You do not need to be logged in as superuser to run this command.
% scstat -n |
Output resembles the following.
-- Cluster Nodes --

Node name         Status
---------         ------
Cluster node:     phys-schost-1     Online
Cluster node:     phys-schost-2     Online |
To re-enable the loopback file system (LOFS), delete the following entry from the /etc/system file on each node of the cluster.
exclude:lofs |
The re-enabling of LOFS becomes effective after the next system reboot.
You cannot have LOFS enabled if you use Sun Cluster HA for NFS on a highly available local file system and have automountd running. LOFS can cause switchover problems for Sun Cluster HA for NFS. If you enable LOFS and later choose to add Sun Cluster HA for NFS on a highly available local file system, you must do one of the following:
Restore the exclude:lofs entry to the /etc/system file on each node of the cluster and reboot each node. This change disables LOFS.
Disable the automountd daemon.
Exclude from the automounter map all files that are part of the highly available local file system that is exported by Sun Cluster HA for NFS. This choice enables you to keep both LOFS and the automountd daemon enabled.
See Types of File Systems in System Administration Guide, Volume 1 (Solaris 8) or The Loopback File System in System Administration Guide: Devices and File Systems (Solaris 9 or Solaris 10) for more information about loopback file systems.
The following example shows the node phys-schost-3 added to the cluster schost. The sponsoring node is phys-schost-1.
*** Adding a Node to an Existing Cluster ***
Fri Feb  4 10:17:53 PST 2005

scinstall -ik -C schost -N phys-schost-1 -A trtype=dlpi,name=qfe2
-A trtype=dlpi,name=qfe3 -m endpoint=:qfe2,endpoint=switch1
-m endpoint=:qfe3,endpoint=switch2

Checking device to use for global devices file system ... done
Adding node "phys-schost-3" to the cluster configuration ... done
Adding adapter "qfe2" to the cluster configuration ... done
Adding adapter "qfe3" to the cluster configuration ... done
Adding cable to the cluster configuration ... done
Adding cable to the cluster configuration ... done
Copying the config from "phys-schost-1" ... done
Copying the postconfig file from "phys-schost-1" if it exists ... done
Copying the Common Agent Container keys from "phys-schost-1" ... done
Setting the node ID for "phys-schost-3" ... done (id=1)
Setting the major number for the "did" driver ...
Obtaining the major number for the "did" driver from "phys-schost-1" ... done
"did" driver major number set to 300
Checking for global devices global file system ... done
Updating vfstab ... done
Verifying that NTP is configured ... done
Initializing NTP configuration ... done
Updating nsswitch.conf ... done
Adding clusternode entries to /etc/inet/hosts ... done
Configuring IP Multipathing groups in "/etc/hostname.<adapter>" files
Updating "/etc/hostname.hme0".
Verifying that power management is NOT configured ... done
Ensure that the EEPROM parameter "local-mac-address?" is set to "true" ... done
The "local-mac-address?" parameter setting has been changed to "true".
Ensure network routing is disabled ... done
Updating file ("ntp.conf.cluster") on node phys-schost-1 ... done
Updating file ("hosts") on node phys-schost-1 ... done

Rebooting ... |
Determine your next step:
If you added a node to a two-node cluster, go to How to Update SCSI Reservations After Adding a Node.
If you intend to install data services, go to the appropriate procedure for the data service that you want to install and for your version of the Solaris OS:
Procedure | Sun Cluster 2 of 2 CD-ROM (Sun Java System data services): Solaris 8 or 9 | Sun Cluster 2 of 2 CD-ROM: Solaris 10 | Sun Cluster Agents CD (All other data services): Solaris 8 or 9 | Sun Cluster Agents CD: Solaris 10
---|---|---|---|---
How to Install Sun Cluster Framework and Data-Service Software Packages (Java ES installer) | X | | |
How to Install Data-Service Software Packages (pkgadd) | | X | |
How to Install Data-Service Software Packages (scinstall) | | | X | X
How to Install Data-Service Software Packages (Web Start installer) | | | X |
Otherwise, go to How to Verify the Quorum Configuration and Installation Mode.
When you increase or decrease the number of node attachments to a quorum device, the cluster does not automatically recalculate the quorum vote count. To reestablish the correct quorum vote, use the scsetup utility to remove each quorum device and then add it back into the configuration. Do this on one quorum device at a time.
If the cluster has only one quorum device, configure a second quorum device before you remove and readd the original quorum device. Then remove the second quorum device to return the cluster to its original configuration.
If you added a node to a two-node cluster that uses one or more shared SCSI disks as quorum devices, you must update the SCSI Persistent Group Reservations (PGRs). To do this, you remove the quorum devices, which have SCSI-2 reservations. Any quorum devices that you subsequently add back are configured with SCSI-3 reservations.
Ensure that you have completed installation of Sun Cluster software on the added node.
Become superuser on any node of the cluster.
View the current quorum configuration.
The following example output shows the status of quorum device d3.
# scstat -q |
Note the name of each quorum device that is listed.
Remove the original quorum device.
Perform this step for each quorum device that is configured.
# scconf -r -q globaldev=devicename |
-r
  Removes the quorum device
-q globaldev=devicename
  Specifies the name of the quorum device to remove
Verify that all original quorum devices are removed.
# scstat -q |
(Optional) Add a SCSI quorum device.
You can configure the same device that was originally configured as the quorum device or choose a new shared device to configure.
(Optional) If you want to choose a new shared device to configure as a quorum device, display all devices that the system checks.
Otherwise, skip to Step c.
# scdidadm -L |
Output resembles the following:
1        phys-schost-1:/dev/rdsk/c0t0d0    /dev/did/rdsk/d1
2        phys-schost-1:/dev/rdsk/c1t1d0    /dev/did/rdsk/d2
2        phys-schost-2:/dev/rdsk/c1t1d0    /dev/did/rdsk/d2
3        phys-schost-1:/dev/rdsk/c1t2d0    /dev/did/rdsk/d3
3        phys-schost-2:/dev/rdsk/c1t2d0    /dev/did/rdsk/d3
… |
From the output, choose a shared device to configure as a quorum device.
Configure the shared device as a quorum device.
# scconf -a -q globaldev=devicename |
-a
  Adds the specified device as a quorum device
Repeat for each quorum device that you want to configure.
If you added any quorum devices, verify the new quorum configuration.
# scstat -q |
Each new quorum device should be Online and have an assigned vote.
The following example identifies the original quorum device d2, removes that quorum device, lists the available shared devices, and configures d3 as a new quorum device.
(List quorum devices)
# scstat -q
…
-- Quorum Votes by Device --

                   Device Name         Present Possible Status
                   -----------         ------- -------- ------
   Device votes:   /dev/did/rdsk/d2s2  1       1        Online

(Remove the original quorum device)
# scconf -r -q globaldev=d2

(Verify the removal of the original quorum device)
# scstat -q
…
-- Quorum Votes by Device --

                   Device Name         Present Possible Status
                   -----------         ------- -------- ------

(List available devices)
# scdidadm -L
…
3        phys-schost-1:/dev/rdsk/c1t2d0    /dev/did/rdsk/d3
3        phys-schost-2:/dev/rdsk/c1t2d0    /dev/did/rdsk/d3
…

(Add a quorum device)
# scconf -a -q globaldev=d3

(Verify the addition of the new quorum device)
# scstat -q
…
-- Quorum Votes by Device --

                   Device Name         Present Possible Status
                   -----------         ------- -------- ------
   Device votes:   /dev/did/rdsk/d3s2  2       2        Online |
If you intend to install data services, go to the appropriate procedure for the data service that you want to install and for your version of the Solaris OS:
Procedure | Sun Cluster 2 of 2 CD-ROM (Sun Java System data services): Solaris 8 or 9 | Sun Cluster 2 of 2 CD-ROM: Solaris 10 | Sun Cluster Agents CD (All other data services): Solaris 8 or 9 | Sun Cluster Agents CD: Solaris 10
---|---|---|---|---
How to Install Sun Cluster Framework and Data-Service Software Packages (Java ES installer) | X | | |
How to Install Data-Service Software Packages (pkgadd) | | X | |
How to Install Data-Service Software Packages (scinstall) | | | X | X
How to Install Data-Service Software Packages (Web Start installer) | | | X |
Otherwise, go to How to Verify the Quorum Configuration and Installation Mode.
Perform this procedure to install data services for the Solaris 10 OS from the Sun Cluster 2 of 2 CD-ROM. The Sun Cluster 2 of 2 CD-ROM contains the data services for Sun Java System applications. This procedure uses the pkgadd(1M) program to install the packages. Perform this procedure on each node in the cluster on which you want to run a chosen data service.
Do not use this procedure for the following kinds of data-service packages:
Data services for the Solaris 8 or Solaris 9 OS from the Sun Cluster 2 of 2 CD-ROM - Instead, follow installation procedures in How to Install Sun Cluster Framework and Data-Service Software Packages (Java ES installer).
Data services for the Solaris 10 OS from the Sun Cluster Agents CD - Instead, follow installation procedures in How to Install Data-Service Software Packages (scinstall). The Web Start installer program on the Sun Cluster Agents CD is not compatible with the Solaris 10 OS.
Become superuser on the cluster node.
Insert the Sun Cluster 2 of 2 CD-ROM in the CD-ROM drive.
If the volume management daemon vold(1M) is running and is configured to manage CD-ROM devices, the daemon automatically mounts the CD-ROM on the /cdrom/cdrom0/ directory.
Change to the Solaris_arch/Product/sun_cluster_agents/Solaris_10/Packages/ directory, where arch is sparc or x86.
# cd /cdrom/cdrom0/Solaris_arch/Product/sun_cluster_agents/Solaris_10/Packages/
Install the data service packages on the global zone.
# pkgadd -G -d . [packages]

-G

Adds packages to the current zone only. You must add Sun Cluster packages only to the global zone. This option also specifies that the packages are not propagated to any existing non-global zone or to any non-global zone that is created later.

-d .

Specifies the location of the packages to install.

packages

Optional. Specifies the name of one or more packages to install. If no package name is specified, the pkgadd program displays a pick list of all packages that are available to install.
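For example, to install one specific package instead of choosing from the pick list, name the package on the command line. The package name SUNWscxyz here is a placeholder, not a real package name; substitute the packages that you want to install:

```
# pkgadd -G -d . SUNWscxyz
```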
Change to a directory that does not reside on the CD-ROM and eject the CD-ROM.
# eject cdrom
Install any patches for the data services that you installed.
See Patches and Required Firmware Levels in Sun Cluster 3.1 8/05 Release Notes for Solaris OS for the location of patches and installation instructions.
You do not have to reboot after you install Sun Cluster data-service patches unless a reboot is specified by the patch special instructions. If a patch instruction requires that you reboot, perform the following steps:
From one node, shut down the cluster by using the scshutdown(1M) command.
Reboot each node in the cluster.
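On SPARC based systems, the shutdown-and-reboot sequence resembles the following transcript, where ok is the OpenBoot PROM prompt:

```
# scshutdown -g0 -y
…
ok boot
```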
Until cluster installation mode is disabled, only the first-installed node, which established the cluster, has a quorum vote. In an established multiple-node cluster which is still in installation mode, if the cluster is not shut down before the first-installed node is rebooted, the remaining cluster nodes cannot obtain quorum. The entire cluster then shuts down.
If you chose automatic quorum configuration during Sun Cluster installation or used SunPlex Installer to install Sun Cluster software, the installation utility automatically assigns quorum votes and removes the cluster from installation mode during installation reboot. However, if you did not choose one of these methods, cluster nodes remain in installation mode until you run the scsetup(1M) command, during the procedure How to Configure Quorum Devices.
If you installed a single-node cluster, cluster establishment is complete. Go to Configuring the Cluster to install volume management software and configure the cluster.
If you added a new node to an existing cluster, verify the state of the cluster. Go to How to Verify the Quorum Configuration and Installation Mode.
If you declined automatic quorum configuration during Sun Cluster software installation of a multiple-node cluster, perform postinstallation setup. Go to How to Configure Quorum Devices.
If you chose automatic quorum configuration during Sun Cluster software installation of a multiple-node cluster, postinstallation setup is complete. Go to How to Verify the Quorum Configuration and Installation Mode.
If you used SunPlex Installer to install a multiple-node cluster, postinstallation setup is complete. Go to How to Verify the Quorum Configuration and Installation Mode.
Perform this procedure to install data services from the Sun Cluster Agents CD of the Sun Cluster 3.1 8/05 release. This procedure uses the interactive scinstall utility to install the packages. Perform this procedure on each node in the cluster on which you want to run a chosen data service.
Do not use this procedure for the following kinds of data-service packages:
Data services for the Solaris 10 OS from the Sun Cluster 2 of 2 CD-ROM - Instead, follow installation procedures in How to Install Data-Service Software Packages (pkgadd).
Data services for the Solaris 8 or Solaris 9 OS from the Sun Cluster 2 of 2 CD-ROM - Instead, follow installation procedures in How to Install Sun Cluster Framework and Data-Service Software Packages (Java ES installer).
You do not need to perform this procedure if you used SunPlex Installer to install Sun Cluster HA for NFS or Sun Cluster HA for Apache or both and you do not intend to install any other data services. Instead, go to How to Configure Quorum Devices.
To install data services from the Sun Cluster 3.1 10/03 release or earlier, you can alternatively use the Web Start installer program to install the packages. See How to Install Data-Service Software Packages (Web Start installer).
Follow these guidelines to use the interactive scinstall utility in this procedure:
Interactive scinstall enables you to type ahead. Therefore, do not press the Return key more than once if the next menu screen does not appear immediately.
Unless otherwise noted, you can press Control-D to return to either the start of a series of related questions or to the Main Menu.
Default answers or answers to previous sessions are displayed in brackets ([ ]) at the end of a question. Press Return to enter the response that is in brackets without typing it.
Become superuser on the cluster node.
Insert the Sun Cluster Agents CD in the CD-ROM drive on the node.
If the volume management daemon vold(1M) is running and is configured to manage CD-ROM devices, the daemon automatically mounts the CD-ROM on the /cdrom/cdrom0/ directory.
Change to the directory where the CD-ROM is mounted.
# cd /cdrom/cdrom0/
Start the scinstall(1M) utility.
# scinstall
From the Main Menu, choose the menu item, Add support for new data services to this cluster node.
Follow the prompts to select the data services to install.
You must install the same set of data-service packages on each node. This requirement applies even if a node is not expected to host resources for an installed data service.
After the data services are installed, quit the scinstall utility.
Change to a directory that does not reside on the CD-ROM and eject the CD-ROM.
# eject cdrom
Install any Sun Cluster data-service patches.
See Patches and Required Firmware Levels in Sun Cluster 3.1 8/05 Release Notes for Solaris OS for the location of patches and installation instructions.
You do not have to reboot after you install Sun Cluster data-service patches unless a reboot is specified by the patch special instructions. If a patch instruction requires that you reboot, perform the following steps:
From one node, shut down the cluster by using the scshutdown(1M) command.
Reboot each node in the cluster.
Until cluster installation mode is disabled, only the first-installed node, which established the cluster, has a quorum vote. In an established multiple-node cluster which is still in installation mode, if the cluster is not shut down before the first-installed node is rebooted, the remaining cluster nodes cannot obtain quorum. This inability to obtain quorum causes the entire cluster to shut down.
If you chose automatic quorum configuration during Sun Cluster installation or used SunPlex Installer to install Sun Cluster software, the installation utility automatically assigns quorum votes and removes the cluster from installation mode during installation reboot. However, if you did not choose one of these methods, cluster nodes remain in installation mode until you run the scsetup(1M) command, during the procedure How to Configure Quorum Devices.
If you installed a single-node cluster, cluster establishment is complete. Go to Configuring the Cluster to install volume management software and configure the cluster.
If you added a new node to an existing cluster, verify the state of the cluster. Go to How to Verify the Quorum Configuration and Installation Mode.
If you declined automatic quorum configuration during Sun Cluster software installation of a multiple-node cluster, perform postinstallation setup. Go to How to Configure Quorum Devices.
If you chose automatic quorum configuration during Sun Cluster software installation of a multiple-node cluster, postinstallation setup is complete. Go to How to Verify the Quorum Configuration and Installation Mode.
If you used SunPlex Installer to install a multiple-node cluster, postinstallation setup is complete. Go to How to Verify the Quorum Configuration and Installation Mode.
Perform this procedure to install data services for the Solaris 8 or Solaris 9 OS from the Sun Cluster Agents CD. This procedure uses the Web Start installer program on the CD-ROM to install the packages. Perform this procedure on each node in the cluster on which you want to run a chosen data service.
Do not use this procedure for the following kinds of data-service packages:
Data services for the Solaris 10 OS from the Sun Cluster Agents CD - Instead, follow installation procedures in How to Install Data-Service Software Packages (scinstall). The Web Start installer program on the Sun Cluster Agents CD is not compatible with the Solaris 10 OS.
Data services for the Solaris 10 OS from the Sun Cluster 2 of 2 CD-ROM - Instead, follow installation procedures in How to Install Data-Service Software Packages (pkgadd).
Data services for the Solaris 8 or Solaris 9 OS from the Sun Cluster 2 of 2 CD-ROM - Instead, follow installation procedures in How to Install Sun Cluster Framework and Data-Service Software Packages (Java ES installer).
You do not need to perform this procedure if you used SunPlex Installer to install Sun Cluster HA for NFS or Sun Cluster HA for Apache or both and you do not intend to install any other data services. Instead, go to How to Configure Quorum Devices.
To install data services from the Sun Cluster 3.1 10/03 release or earlier, you can alternatively follow the procedures in How to Install Data-Service Software Packages (scinstall).
You can run the installer program with a command-line interface (CLI) or with a graphical user interface (GUI). The content and sequence of instructions in the CLI and the GUI are similar. For more information about the installer program, see the installer(1M) man page.
If you intend to use the installer program with a GUI, ensure that the DISPLAY
environment variable is set.
Become superuser on the cluster node.
Insert the Sun Cluster Agents CD in the CD-ROM drive.
If the volume management daemon vold(1M) is running and is configured to manage CD-ROM devices, the daemon automatically mounts the CD-ROM on the /cdrom/cdrom0/ directory.
Change to the directory of the CD-ROM where the installer program resides.
# cd /cdrom/cdrom0/Solaris_arch/
In the Solaris_arch/ directory, arch is sparc or x86.
Start the Web Start installer program.
# ./installer
When you are prompted, select the type of installation.
See the Sun Cluster Release Notes for a listing of the locales that are available for each data service.
When you are prompted, select the locale to install.
Follow instructions on the screen to install the data-service packages on the node.
After the installation is finished, the installer program provides an installation summary. This summary enables you to view logs that the program created during the installation. These logs are located in the /var/sadm/install/logs/ directory.
Quit the installer program.
Change to a directory that does not reside on the CD-ROM and eject the CD-ROM.
# eject cdrom
Install any Sun Cluster data-service patches.
See Patches and Required Firmware Levels in Sun Cluster 3.1 8/05 Release Notes for Solaris OS for the location of patches and installation instructions.
You do not have to reboot after you install Sun Cluster data-service patches unless a reboot is specified by the patch special instructions. If a patch instruction requires that you reboot, perform the following steps:
From one node, shut down the cluster by using the scshutdown(1M) command.
Reboot each node in the cluster.
Until cluster installation mode is disabled, only the first-installed node, which established the cluster, has a quorum vote. In an established multiple-node cluster which is still in installation mode, if the cluster is not shut down before the first-installed node is rebooted, the remaining cluster nodes cannot obtain quorum. The entire cluster then shuts down.
If you chose automatic quorum configuration during Sun Cluster installation or used SunPlex Installer to install Sun Cluster software, the installation utility automatically assigns quorum votes and removes the cluster from installation mode during installation reboot. However, if you did not choose one of these methods, cluster nodes remain in installation mode until you run the scsetup(1M) command, during the procedure How to Configure Quorum Devices.
If you installed a single-node cluster, cluster establishment is complete. Go to Configuring the Cluster to install volume management software and configure the cluster.
If you added a new node to an existing cluster, verify the state of the cluster. Go to How to Verify the Quorum Configuration and Installation Mode.
If you declined automatic quorum configuration during Sun Cluster software installation of a multiple-node cluster, perform postinstallation setup. Go to How to Configure Quorum Devices.
If you chose automatic quorum configuration during Sun Cluster software installation of a multiple-node cluster, postinstallation setup is complete. Go to How to Verify the Quorum Configuration and Installation Mode.
If you used SunPlex Installer to install a multiple-node cluster, postinstallation setup is complete. Go to How to Verify the Quorum Configuration and Installation Mode.
You do not need to configure quorum devices in the following circumstances:
You chose automatic quorum configuration during Sun Cluster software configuration.
You used SunPlex Installer to install the cluster. SunPlex Installer assigns quorum votes and removes the cluster from installation mode for you.
You installed a single-node cluster.
You added a node to an existing cluster and already have sufficient quorum votes assigned.
Instead, proceed to How to Verify the Quorum Configuration and Installation Mode.
Perform this procedure one time only, after the cluster is fully formed. Use this procedure to assign quorum votes and then to remove the cluster from installation mode.
If you intend to configure a Network Appliance network-attached storage (NAS) device as a quorum device, do the following:
Install the NAS device hardware and software. See Chapter 1, Installing and Maintaining Network Appliance Network-Attached Storage Devices in a Sun Cluster Environment, in Sun Cluster 3.1 With Network-Attached Storage Devices Manual for Solaris OS and your device documentation for requirements and installation procedures for NAS hardware and software.
Have available the following information:
The name of the NAS device
The LUN ID of the NAS device
See the following Network Appliance NAS documentation for information about creating and setting up a Network Appliance NAS device and LUN. You can access the following documents at http://now.netapp.com.
Setting up a NAS device
System Administration File Access Management Guide
Setting up a LUN
Host Cluster Tool for Unix Installation Guide
Installing ONTAP software
Software Setup Guide, Upgrade Guide
Exporting volumes for the cluster
Data ONTAP Storage Management Guide
Installing NAS support software packages on cluster nodes
Log in to http://now.netapp.com. From the Software Download page, download the Host Cluster Tool for Unix Installation Guide.
If you want to use a shared SCSI disk as a quorum device, verify device connectivity to the cluster nodes and choose the device to configure.
From one node of the cluster, display a list of all the devices that the system checks.
You do not need to be logged in as superuser to run this command.
% scdidadm -L
Output resembles the following:
```
1        phys-schost-1:/dev/rdsk/c0t0d0 /dev/did/rdsk/d1
2        phys-schost-1:/dev/rdsk/c1t1d0 /dev/did/rdsk/d2
2        phys-schost-2:/dev/rdsk/c1t1d0 /dev/did/rdsk/d2
3        phys-schost-1:/dev/rdsk/c1t2d0 /dev/did/rdsk/d3
3        phys-schost-2:/dev/rdsk/c1t2d0 /dev/did/rdsk/d3
…
```
Ensure that the output shows all connections between cluster nodes and storage devices.
Determine the global device-ID name of each shared disk that you are configuring as a quorum device.
Any shared disk that you choose must be qualified for use as a quorum device. See Quorum Devices for further information about choosing quorum devices.
Use the scdidadm output from Step a to identify the device–ID name of each shared disk that you are configuring as a quorum device. For example, the output in Step a shows that global device d2 is shared by phys-schost-1 and phys-schost-2.
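As a quick way to pick out quorum-device candidates, you can filter the listing for DID instances that are reported by more than one node. The following sketch embeds the sample scdidadm -L output shown above; on a live cluster you would pipe the real command output instead:

```shell
# Identify shared disks: DID devices that appear on more than one node.
# Sample scdidadm -L output (from the example above) stands in for the
# live command here.
scdidadm_output='1        phys-schost-1:/dev/rdsk/c0t0d0 /dev/did/rdsk/d1
2        phys-schost-1:/dev/rdsk/c1t1d0 /dev/did/rdsk/d2
2        phys-schost-2:/dev/rdsk/c1t1d0 /dev/did/rdsk/d2
3        phys-schost-1:/dev/rdsk/c1t2d0 /dev/did/rdsk/d3
3        phys-schost-2:/dev/rdsk/c1t2d0 /dev/did/rdsk/d3'

# Count the node paths per DID device name (third field) and print the
# devices that are multihosted: these are the quorum-device candidates.
printf '%s\n' "$scdidadm_output" |
awk '{count[$3]++} END {for (d in count) if (count[d] > 1) print d}' |
sort
```

In this sample the pipeline prints /dev/did/rdsk/d2 and /dev/did/rdsk/d3; d1 is omitted because only one node is connected to it.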
Become superuser on one node of the cluster.
Start the scsetup(1M) utility.
# scsetup
The Initial Cluster Setup screen is displayed.
If the Main Menu is displayed instead, initial cluster setup was already successfully performed. Skip to Step 8.
Answer the prompt Do you want to add any quorum disks?
Specify what type of device you want to configure as a quorum device.
Specify the name of the device to configure as a quorum device.
For a Network Appliance NAS device, also specify the following information:
The name of the NAS device
The LUN ID of the NAS device
At the prompt Is it okay to reset "installmode"?, type Yes.
After the scsetup utility sets the quorum configurations and vote counts for the cluster, the message Cluster initialization is complete is displayed. The utility returns you to the Main Menu.
Quit the scsetup utility.
Verify the quorum configuration and that installation mode is disabled. Go to How to Verify the Quorum Configuration and Installation Mode.
Interrupted scsetup processing — If the quorum setup process is interrupted or fails to be completed successfully, rerun scsetup.
Changes to quorum vote count — If you later increase or decrease the number of node attachments to a quorum device, the quorum vote count is not automatically recalculated. You can reestablish the correct quorum vote by removing each quorum device and then adding it back into the configuration, one quorum device at a time. For a two-node cluster, temporarily add a new quorum device before you remove and add back the original quorum device. Then remove the temporary quorum device. See the procedure “How to Modify a Quorum Device Node List” in Chapter 5, Administering Quorum, in Sun Cluster System Administration Guide for Solaris OS.
Perform this procedure to verify that quorum configuration was completed successfully and that cluster installation mode is disabled.
From any node, verify the device and node quorum configurations.
% scstat -q
From any node, verify that cluster installation mode is disabled.
You do not need to be superuser to run this command.
```
% scconf -p | grep "install mode"
Cluster install mode:                                  disabled
```
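If you want to script this verification, a minimal sketch follows. The scconf output line is embedded here as sample text; on a cluster you would capture the output of scconf -p piped through grep instead:

```shell
# Check that cluster installation mode is disabled.  The sample line
# below stands in for:  install_mode_line=$(scconf -p | grep "install mode")
install_mode_line='Cluster install mode:                                  disabled'

case $install_mode_line in
  *disabled*) echo "installation mode is disabled" ;;
  *)          echo "cluster is still in installation mode" >&2; exit 1 ;;
esac
```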
Cluster installation is complete.
Go to Configuring the Cluster to install volume management software and perform other configuration tasks on the cluster or new cluster node.
If you added a new node to a cluster that uses VxVM, you must perform steps in SPARC: How to Install VERITAS Volume Manager Software to do one of the following tasks:
Install VxVM on that node.
Modify that node's /etc/name_to_major file, to support coexistence with VxVM.
This section provides information and procedures to configure the software that you installed on the cluster or new cluster node. Before you start to perform these tasks, ensure that you completed the following tasks:
Installed software packages for the Solaris OS, Sun Cluster framework, and other products as described in Installing the Software
Established the new cluster or cluster node as described in Establishing the Cluster
The following table lists the tasks to perform to configure your cluster. Complete the procedures in the order that is indicated.
Task | Instructions |
---|---|
1. Install and configure volume management software: | |
| Chapter 3, Installing and Configuring Solstice DiskSuite or Solaris Volume Manager Software; Solstice DiskSuite or Solaris Volume Manager documentation |
| Chapter 4, SPARC: Installing and Configuring VERITAS Volume Manager; VERITAS Volume Manager documentation |
2. Create and mount cluster file systems. | |
3. (Solaris 8 or SunPlex Installer installations) Create Internet Protocol (IP) Network Multipathing groups for each public-network adapter that is not already configured in an IP Network Multipathing group. | How to Configure Internet Protocol (IP) Network Multipathing Groups |
4. (Optional) Change a node's private hostname. | How to Change Private Hostnames |
5. Create or modify the NTP configuration file. | How to Configure Network Time Protocol (NTP) |
6. (Optional) SPARC: Install the Sun Cluster module to Sun Management Center software. | SPARC: Installing the Sun Cluster Module for Sun Management Center; Sun Management Center documentation |
7. Install third-party applications and configure the applications, data services, and resource groups. | Sun Cluster Data Services Planning and Administration Guide for Solaris OS; third-party application documentation |
Perform this procedure for each cluster file system that you want to create. Unlike a local file system, a cluster file system is accessible from any node in the cluster. If you used SunPlex Installer to install data services, SunPlex Installer might have already created one or more cluster file systems.
Any data on the disks is destroyed when you create a file system. Be sure that you specify the correct disk device name. If you specify the wrong device name, you might erase data that you did not intend to delete.
Perform the following tasks:
Ensure that volume-manager software is installed and configured. For volume-manager installation procedures, see Installing and Configuring Solstice DiskSuite or Solaris Volume Manager Software or SPARC: Installing and Configuring VxVM Software.
Determine the mount options to use for each cluster file system that you want to create. Observe the Sun Cluster mount-option requirements and restrictions that are described in the following tables:
Mount Options for UFS Cluster File Systems
See the mount_ufs(1M) man page for more information about UFS mount options.
Mount Parameters for Sun StorEdge QFS Shared File Systems
Mount Parameter | Description |
---|---|
shared | Required. This option specifies that this is a shared file system that is globally visible to all nodes in the cluster. |
Ensure that settings in the /etc/vfstab file do not conflict with settings in the /etc/opt/SUNWsamfs/samfs.cmd file. Settings in the /etc/vfstab file override settings in the /etc/opt/SUNWsamfs/samfs.cmd file.
See the mount_samfs(1M) man page for more information about QFS mount parameters.
Certain data services such as Sun Cluster Support for Oracle Real Application Clusters have additional requirements and guidelines for QFS mount parameters. See your data service manual for any additional requirements.
Logging is not enabled by an /etc/vfstab mount parameter, nor does Sun Cluster software require logging for QFS shared file systems.
Mount Options for VxFS Cluster File Systems
Mount Option | Description |
---|---|
global | Required. This option makes the file system globally visible to all nodes in the cluster. |
log | Required. This option enables logging. |
See the VxFS mount_vxfs man page and Administering Cluster File Systems Overview in Sun Cluster System Administration Guide for Solaris OS for more information about VxFS mount options.
Become superuser on any node in the cluster.
For faster file-system creation, become superuser on the current primary of the global device for which you create a file system.
Create a file system.
For a UFS file system, use the newfs(1M) command.
# newfs raw-disk-device
The following table shows examples of names for the raw-disk-device argument. Note that naming conventions differ for each volume manager.
Volume Manager | Sample Disk Device Name | Description |
---|---|---|
Solstice DiskSuite or Solaris Volume Manager | /dev/md/nfs/rdsk/d1 | Raw disk device d1 within the nfs disk set |
SPARC: VERITAS Volume Manager | /dev/vx/rdsk/oradg/vol01 | Raw disk device vol01 within the oradg disk group |
None | /dev/global/rdsk/d1s3 | Raw disk device d1s3 |
For a Sun StorEdge QFS file system, follow the procedures for defining the configuration in the Sun StorEdge QFS and Sun StorEdge SAM-FS Software Installation and Configuration Guide.
SPARC: For a VERITAS File System (VxFS) file system, follow the procedures that are provided in your VxFS documentation.
On each node in the cluster, create a mount-point directory for the cluster file system.
A mount point is required on each node, even if the cluster file system is not accessed on that node.
For ease of administration, create the mount point in the /global/device-group/ directory. This location enables you to easily distinguish cluster file systems, which are globally available, from local file systems.
# mkdir -p /global/device-group/mountpoint/

device-group

Name of the directory that corresponds to the name of the device group that contains the device.

mountpoint

Name of the directory on which to mount the cluster file system.
On each node in the cluster, add an entry to the /etc/vfstab file for the mount point.
See the vfstab(4) man page for details.
In each entry, specify the required mount options for the type of file system that you use.
Do not use the logging mount option for Solstice DiskSuite trans metadevices or Solaris Volume Manager transactional volumes. Trans metadevices and transactional volumes provide their own logging.
In addition, Solaris Volume Manager transactional-volume logging (formerly Solstice DiskSuite trans-metadevice logging) is scheduled to be removed from the Solaris OS in an upcoming Solaris release. Solaris UFS logging provides the same capabilities but superior performance, as well as lower system administration requirements and overhead.
To automatically mount the cluster file system, set the mount at boot field to yes.
Ensure that, for each cluster file system, the information in its /etc/vfstab entry is identical on each node.
Ensure that the entries in each node's /etc/vfstab file list devices in the same order.
Check the boot order dependencies of the file systems.
For example, consider the scenario where phys-schost-1 mounts disk device d0 on /global/oracle/, and phys-schost-2 mounts disk device d1 on /global/oracle/logs/. With this configuration, phys-schost-2 can boot and mount /global/oracle/logs/ only after phys-schost-1 boots and mounts /global/oracle/.
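For example, that dependency corresponds to /etc/vfstab entries like the following. The device names, device groups, and mount points shown here are illustrative only:

```
#device                device                 mount                FS   fsck  mount    mount
#to mount              to fsck                point                type pass  at boot  options
#
/dev/md/oracle/dsk/d0  /dev/md/oracle/rdsk/d0 /global/oracle       ufs  2     yes      global,logging
/dev/md/logs/dsk/d1    /dev/md/logs/rdsk/d1   /global/oracle/logs  ufs  2     yes      global,logging
```

Because /global/oracle/logs/ is nested under /global/oracle/, the entry for d1 can be mounted only after the entry for d0 is mounted.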
On any node in the cluster, run the sccheck(1M) utility.
The sccheck utility verifies that the mount points exist. The utility also verifies that /etc/vfstab file entries are correct on all nodes of the cluster.
# sccheck
If no errors occur, nothing is returned.
Mount the cluster file system.
# mount /global/device-group/mountpoint/
For UFS and QFS, mount the cluster file system from any node in the cluster.
SPARC: For VxFS, mount the cluster file system from the current master of device-group to ensure that the file system mounts successfully. In addition, unmount a VxFS file system from the current master of device-group to ensure that the file system unmounts successfully.
To manage a VxFS cluster file system in a Sun Cluster environment, run administrative commands only from the primary node on which the VxFS cluster file system is mounted.
On each node of the cluster, verify that the cluster file system is mounted.
You can use either the df(1M) or mount(1M) command to list mounted file systems.
The following example creates a UFS cluster file system on the Solstice DiskSuite metadevice /dev/md/oracle/rdsk/d1.
```
# newfs /dev/md/oracle/rdsk/d1
…

(on each node)
# mkdir -p /global/oracle/d1
# vi /etc/vfstab
#device                device                 mount             FS   fsck  mount    mount
#to mount              to fsck                point             type pass  at boot  options
#
/dev/md/oracle/dsk/d1  /dev/md/oracle/rdsk/d1 /global/oracle/d1 ufs  2     yes      global,logging
(save and exit)

(on one node)
# sccheck
# mount /global/oracle/d1
# mount
…
/global/oracle/d1 on /dev/md/oracle/dsk/d1 read/write/setuid/global/logging/largefiles
on Sun Oct 3 08:56:16 2000
```
If you installed Sun Cluster software on the Solaris 8 OS or you used SunPlex Installer to install the cluster, go to How to Configure Internet Protocol (IP) Network Multipathing Groups.
If you want to change any private hostnames, go to How to Change Private Hostnames.
If you did not install your own /etc/inet/ntp.conf file before you installed Sun Cluster software, install or create the NTP configuration file. Go to How to Configure Network Time Protocol (NTP).
SPARC: If you want to configure Sun Management Center to monitor the cluster, go to SPARC: Installing the Sun Cluster Module for Sun Management Center.
Otherwise, install third-party applications, register resource types, set up resource groups, and configure data services. Follow procedures in the Sun Cluster Data Services Planning and Administration Guide for Solaris OS and in the documentation that is supplied with your application software.
Perform this task on each node of the cluster. If you used SunPlex Installer to install Sun Cluster HA for Apache or Sun Cluster HA for NFS, SunPlex Installer configured IP Network Multipathing groups for the public-network adapters those data services use. You must configure IP Network Multipathing groups for the remaining public-network adapters.
All public-network adapters must belong to an IP Network Multipathing group.
Have available your completed Public Networks Worksheet.
Configure IP Network Multipathing groups.
Perform procedures in Deploying Network Multipathing in IP Network Multipathing Administration Guide (Solaris 8), Configuring Multipathing Interface Groups in System Administration Guide: IP Services (Solaris 9), or Configuring IPMP Groups in System Administration Guide: IP Services (Solaris 10).
Follow these additional requirements to configure IP Network Multipathing groups in a Sun Cluster configuration:
Each public network adapter must belong to a multipathing group.
In the following kinds of multipathing groups, you must configure a test IP address for each adapter in the group:
On the Solaris 8 OS, all multipathing groups require a test IP address for each adapter.
On the Solaris 9 or Solaris 10 OS, multipathing groups that contain two or more adapters require test IP addresses. If a multipathing group contains only one adapter, you do not need to configure a test IP address.
Test IP addresses for all adapters in the same multipathing group must belong to a single IP subnet.
Test IP addresses must not be used by normal applications because the test IP addresses are not highly available.
In the /etc/default/mpathd file, the value of TRACK_INTERFACES_ONLY_WITH_GROUPS must be yes.
The name of a multipathing group has no requirements or restrictions.
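As a sketch of such a configuration on the Solaris 9 OS, the following /etc/hostname.qfe0 file places adapter qfe0 in a multipathing group and assigns a nonfailover test address. The adapter name qfe0, the group name sc_ipmp0, and the hostnames are illustrative assumptions, not required values:

```
phys-schost-1 netmask + broadcast + group sc_ipmp0 up \
    addif phys-schost-1-test deprecated -failover netmask + broadcast + up
```

The deprecated and -failover flags mark phys-schost-1-test as a test address that normal applications should not use, consistent with the test-address requirements above.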
If you want to change any private hostnames, go to How to Change Private Hostnames.
If you did not install your own /etc/inet/ntp.conf file before you installed Sun Cluster software, install or create the NTP configuration file. Go to How to Configure Network Time Protocol (NTP).
If you are using Sun Cluster on a SPARC based system and you want to use Sun Management Center to monitor the cluster, install the Sun Cluster module for Sun Management Center. Go to SPARC: Installing the Sun Cluster Module for Sun Management Center.
Otherwise, install third-party applications, register resource types, set up resource groups, and configure data services. Follow procedures in the Sun Cluster Data Services Planning and Administration Guide for Solaris OS and in the documentation that is supplied with your application software.
Perform this task if you do not want to use the default private hostnames, clusternodenodeid-priv (where nodeid is the numeric node ID), that are assigned during Sun Cluster software installation.
Do not perform this procedure after applications and data services have been configured and have been started. Otherwise, an application or data service might continue to use the old private hostname after the hostname is renamed, which would cause hostname conflicts. If any applications or data services are running, stop them before you perform this procedure.
Perform this procedure on one active node of the cluster.
Become superuser on a node in the cluster.
Start the scsetup(1M) utility.
# scsetup |
From the Main Menu, choose the menu item, Private hostnames.
From the Private Hostname Menu, choose the menu item, Change a private hostname.
Follow the prompts to change the private hostname.
Repeat for each private hostname to change.
Verify the new private hostnames.
# scconf -pv | grep "private hostname"
(phys-schost-1) Node private hostname:      phys-schost-1-priv
(phys-schost-3) Node private hostname:      phys-schost-3-priv
(phys-schost-2) Node private hostname:      phys-schost-2-priv
If you did not install your own /etc/inet/ntp.conf file before you installed Sun Cluster software, install or create the NTP configuration file. Go to How to Configure Network Time Protocol (NTP).
SPARC: If you want to configure Sun Management Center to monitor the cluster, go to SPARC: Installing the Sun Cluster Module for Sun Management Center.
Otherwise, install third-party applications, register resource types, set up resource groups, and configure data services. See the documentation that is supplied with the application software and the Sun Cluster Data Services Planning and Administration Guide for Solaris OS.
If you installed your own /etc/inet/ntp.conf file before you installed Sun Cluster software, you do not need to perform this procedure. Determine your next step:
SPARC: If you want to configure Sun Management Center to monitor the cluster, go to SPARC: Installing the Sun Cluster Module for Sun Management Center.
Otherwise, install third-party applications, register resource types, set up resource groups, and configure data services. See the documentation that is supplied with the application software and the Sun Cluster Data Services Planning and Administration Guide for Solaris OS.
Perform this task to create or modify the NTP configuration file after you perform any of the following tasks:
Install Sun Cluster software
Add a node to an existing cluster
Change the private hostname of a node in the cluster
If you added a node to a single-node cluster, you must ensure that the NTP configuration file that you use is copied to the original cluster node as well as to the new node.
The primary requirement when you configure NTP, or any time synchronization facility within the cluster, is that all cluster nodes must be synchronized to the same time. Consider accuracy of time on individual nodes to be of secondary importance to the synchronization of time among nodes. You are free to configure NTP as best meets your individual needs if this basic requirement for synchronization is met.
See the Sun Cluster Concepts Guide for Solaris OS for further information about cluster time. See the /etc/inet/ntp.cluster template file for additional guidelines on how to configure NTP for a Sun Cluster configuration.
Become superuser on a cluster node.
If you have your own /etc/inet/ntp.conf file, copy it to each node of the cluster.
If you do not have your own /etc/inet/ntp.conf file to install, use the /etc/inet/ntp.conf.cluster file as your NTP configuration file.
Do not rename the ntp.conf.cluster file to ntp.conf.
If the /etc/inet/ntp.conf.cluster file does not exist on the node, you might have an /etc/inet/ntp.conf file from an earlier installation of Sun Cluster software. Sun Cluster software creates the /etc/inet/ntp.conf.cluster file as the NTP configuration file only if no /etc/inet/ntp.conf file is already present on the node. In that case, perform the following edits on the ntp.conf file instead.
Use your preferred text editor to open the /etc/inet/ntp.conf.cluster file on one node of the cluster for editing.
Ensure that an entry exists for the private hostname of each cluster node.
If you changed any node's private hostname, ensure that the NTP configuration file contains the new private hostname.
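For example, with the default private hostnames on a three-node cluster, the private-hostname entries in the NTP configuration file might look like the following sketch. The node names shown are the defaults; adjust them to match any private hostnames that you changed, and see the /etc/inet/ntp.cluster template for the full set of entries.

```
peer clusternode1-priv
peer clusternode2-priv
peer clusternode3-priv
```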
If necessary, make other modifications to meet your NTP requirements.
Copy the NTP configuration file to all nodes in the cluster.
The contents of the NTP configuration file must be identical on all cluster nodes.
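The identical-contents requirement can be checked mechanically by comparing each node's copy of the file. The sketch below stages two hypothetical copies in a temporary directory and compares them with cmp; in a real cluster you would first gather each node's /etc/inet/ntp.conf (for example, with rcp) before comparing.

```shell
# Sketch: verify that copies of an NTP configuration file are identical,
# as required across all cluster nodes. Hypothetical copies are staged in
# a temporary directory for illustration.
tmp=$(mktemp -d)
printf 'peer clusternode1-priv\npeer clusternode2-priv\n' > "$tmp/ntp.conf.node1"
printf 'peer clusternode1-priv\npeer clusternode2-priv\n' > "$tmp/ntp.conf.node2"

status=identical
for copy in "$tmp"/ntp.conf.node*; do
  # cmp -s exits nonzero if the files differ
  cmp -s "$tmp/ntp.conf.node1" "$copy" || status=different
done
echo "NTP configuration files are $status"
rm -rf "$tmp"
```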
Stop the NTP daemon on each node.
Wait for the command to complete successfully on each node before you proceed to Step 6.
For the Solaris 8 or Solaris 9 OS, use the following command:
# /etc/init.d/xntpd stop |
For the Solaris 10 OS, use the following command:
# svcadm disable ntp |
Restart the NTP daemon on each node.
If you use the ntp.conf.cluster file, run the following command:
# /etc/init.d/xntpd.cluster start |
The xntpd.cluster startup script first looks for the /etc/inet/ntp.conf file.
If the ntp.conf file exists, the script exits immediately without starting the NTP daemon.
If the ntp.conf file does not exist but the ntp.conf.cluster file does exist, the script starts the NTP daemon. In this case, the script uses the ntp.conf.cluster file as the NTP configuration file.
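The decision that the xntpd.cluster startup script makes can be sketched as the following shell logic. Hypothetical files in a temporary directory stand in for /etc/inet; the real script operates on the actual configuration files and starts the daemon itself.

```shell
# Sketch of the xntpd.cluster startup decision: prefer ntp.conf if present,
# otherwise fall back to ntp.conf.cluster.
tmp=$(mktemp -d)
touch "$tmp/ntp.conf.cluster"   # only the cluster configuration file exists

if [ -f "$tmp/ntp.conf" ]; then
  action="exit without starting the NTP daemon"
elif [ -f "$tmp/ntp.conf.cluster" ]; then
  action="start the NTP daemon with ntp.conf.cluster"
else
  action="no NTP configuration file found"
fi
echo "$action"
rm -rf "$tmp"
```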
If you use the ntp.conf file, run one of the following commands:
For the Solaris 8 or Solaris 9 OS, use the following command:
# /etc/init.d/xntpd start |
For the Solaris 10 OS, use the following command:
# svcadm enable ntp |
SPARC: To configure Sun Management Center to monitor the cluster, go to SPARC: Installing the Sun Cluster Module for Sun Management Center.
Otherwise, install third-party applications, register resource types, set up resource groups, and configure data services. See the documentation that is supplied with the application software and the Sun Cluster Data Services Planning and Administration Guide for Solaris OS.
This section provides information and procedures to install software for the Sun Cluster module to Sun Management Center.
The Sun Cluster module for Sun Management Center enables you to use Sun Management Center to monitor the cluster. The following table lists the tasks to perform to install the Sun Cluster–module software for Sun Management Center.
Table 2–6 Task Map: Installing the Sun Cluster Module for Sun Management Center
Task |
Instructions |
---|---|
1. Install Sun Management Center server, help-server, agent, and console packages. |
Sun Management Center documentation |
2. Install Sun Cluster–module packages. |
SPARC: How to Install the Sun Cluster Module for Sun Management Center |
3. Start Sun Management Center server, console, and agent processes. |
SPARC: How to Start Sun Management Center
4. Add each cluster node as a Sun Management Center agent host object. |
SPARC: How to Add a Cluster Node as a Sun Management Center Agent Host Object |
5. Load the Sun Cluster module to begin to monitor the cluster. |
SPARC: How to Load the Sun Cluster Module
The Sun Cluster module for Sun Management Center is used to monitor a Sun Cluster configuration. Perform the following tasks before you install the Sun Cluster module packages.
Space requirements - Ensure that 25 Mbytes of space is available on each cluster node for Sun Cluster–module packages.
Sun Management Center installation - Follow procedures in your Sun Management Center installation documentation to install Sun Management Center software.
The following are additional requirements for a Sun Cluster configuration:
Install the Sun Management Center agent package on each cluster node.
When you install Sun Management Center on an agent machine (cluster node), choose whether to use the default of 161 for the agent (SNMP) communication port or another number. This port number enables the server to communicate with this agent. Record the port number that you choose for reference later when you configure the cluster nodes for monitoring.
See your Sun Management Center installation documentation for information about choosing an SNMP port number.
Install the Sun Management Center server, help–server, and console packages on noncluster nodes.
If you have an administrative console or other dedicated machine, you can run the console process on the administrative console and the server process on a separate machine. This installation approach improves Sun Management Center performance.
Web browser - Ensure that the web browser that you use to connect to Sun Management Center is supported by Sun Management Center. Certain features, such as online help, might not be available on unsupported web browsers. See your Sun Management Center documentation for information about supported web browsers and any configuration requirements.
Perform this procedure to install the Sun Cluster–module server and help–server packages.
The Sun Cluster–module agent packages, SUNWscsal and SUNWscsam, are already added to cluster nodes during Sun Cluster software installation.
Ensure that all Sun Management Center core packages are installed on the appropriate machines. This task includes installing Sun Management Center agent packages on each cluster node. See your Sun Management Center documentation for installation instructions.
On the server machine, install the Sun Cluster–module server package SUNWscssv.
Become superuser.
Insert the Sun Cluster 2 of 2 CD-ROM for the SPARC platform in the CD-ROM drive.
If the volume management daemon vold(1M) is running and is configured to manage CD-ROM devices, the daemon automatically mounts the CD-ROM on the /cdrom/cdrom0/ directory.
Change to the Solaris_sparc/Product/sun_cluster/Solaris_ver/Packages/ directory, where ver is 8 for Solaris 8, 9 for Solaris 9, or 10 for Solaris 10.
# cd /cdrom/cdrom0/Solaris_sparc/Product/sun_cluster/Solaris_ver/Packages/ |
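The Solaris_ver placeholder in this path is replaced by the release number, which can be sketched as a simple substitution:

```shell
# Sketch: derive the Packages directory for a given Solaris release.
# ver is 8 for Solaris 8, 9 for Solaris 9, or 10 for Solaris 10.
ver=9   # Solaris 9 in this example
pkgdir="/cdrom/cdrom0/Solaris_sparc/Product/sun_cluster/Solaris_${ver}/Packages/"
echo "$pkgdir"
```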
Install the Sun Cluster–module server package.
# pkgadd -d . SUNWscssv |
Change to a directory that does not reside on the CD-ROM and eject the CD-ROM.
# eject cdrom |
On the Sun Management Center 3.0 help-server machine or the Sun Management Center 3.5 server machine, install the Sun Cluster–module help–server package SUNWscshl.
Use the same procedure as in the previous step.
Install any Sun Cluster–module patches.
See Patches and Required Firmware Levels in Sun Cluster 3.1 8/05 Release Notes for Solaris OS for the location of patches and installation instructions.
Start Sun Management Center. Go to SPARC: How to Start Sun Management Center.
Perform this procedure to start the Sun Management Center server, agent, and console processes.
As superuser, on the Sun Management Center server machine, start the Sun Management Center server process.
The install-dir is the directory in which you installed the Sun Management Center software. The default directory is /opt.
# /install-dir/SUNWsymon/sbin/es-start -S |
As superuser, on each Sun Management Center agent machine (cluster node), start the Sun Management Center agent process.
# /install-dir/SUNWsymon/sbin/es-start -a |
On each Sun Management Center agent machine (cluster node), ensure that the scsymon_srv daemon is running.
# ps -ef | grep scsymon_srv |
If any cluster node is not already running the scsymon_srv daemon, start the daemon on that node.
# /usr/cluster/lib/scsymon/scsymon_srv |
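The two steps above follow a check-then-start pattern that can be sketched as follows. pgrep stands in for the ps -ef | grep pipeline; the daemon name is the one given in the text, and on a cluster node the start command would be /usr/cluster/lib/scsymon/scsymon_srv.

```shell
# Sketch: start a daemon only if it is not already running.
daemon=scsymon_srv
if pgrep -x "$daemon" > /dev/null 2>&1; then
  state=running
else
  state=stopped   # on a cluster node, the daemon would be started here
fi
echo "$daemon is $state"
```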
On the Sun Management Center console machine (administrative console), start the Sun Management Center console.
You do not need to be superuser to start the console process.
% /install-dir/SUNWsymon/sbin/es-start -c |
Add a cluster node as a monitored host object. Go to SPARC: How to Add a Cluster Node as a Sun Management Center Agent Host Object.
Perform this procedure to create a Sun Management Center agent host object for a cluster node.
Log in to Sun Management Center.
See your Sun Management Center documentation.
From the Sun Management Center main window, select a domain from the Sun Management Center Administrative Domains pull-down list.
This domain contains the Sun Management Center agent host object that you create. During Sun Management Center software installation, a Default Domain was automatically created for you. You can use this domain, select another existing domain, or create a new domain.
See your Sun Management Center documentation for information about how to create Sun Management Center domains.
Choose Edit⇒Create an Object from the pull-down menu.
Click the Node tab.
From the Monitor Via pull-down list, select Sun Management Center Agent - Host.
Fill in the name of the cluster node, for example, phys-schost-1, in the Node Label and Hostname text fields.
Leave the IP text field blank. The Description text field is optional.
In the Port text field, type the port number that you chose when you installed Sun Management Center on the agent machine.
Click OK.
A Sun Management Center agent host object is created in the domain.
Load the Sun Cluster module. Go to SPARC: How to Load the Sun Cluster Module.
You need only one cluster node host object to use Sun Cluster–module monitoring and configuration functions for the entire cluster. However, if that cluster node becomes unavailable, connection to the cluster through that host object also becomes unavailable. Then you need another cluster-node host object to reconnect to the cluster.
Perform this procedure to start cluster monitoring.
In the Sun Management Center main window, right-click the icon of a cluster node.
The pull-down menu is displayed.
Choose Load Module.
The Load Module window lists each available Sun Management Center module and whether the module is currently loaded.
Choose Sun Cluster: Not Loaded and click OK.
The Module Loader window shows the current parameter information for the selected module.
Click OK.
After a few moments, the module is loaded. A Sun Cluster icon is then displayed in the Details window.
Verify that the Sun Cluster module is loaded.
Under the Operating System category, expand the Sun Cluster subtree.
See the Sun Cluster module online help for information about how to use Sun Cluster module features.
To view online help for a specific Sun Cluster module item, place the cursor over the item. Then click the right mouse button and select Help from the pop-up menu.
To access the home page for the Sun Cluster module online help, place the cursor over the Cluster Info icon. Then click the right mouse button and select Help from the pop-up menu.
To directly access the home page for the Sun Cluster module online help, click the Sun Management Center Help button to launch the help browser. Then go to the following URL, where install-dir is the directory in which you installed the Sun Management Center software:
file:/install-dir/SUNWsymon/lib/locale/C/help/main.top.html
The Help button in the Sun Management Center browser accesses online help for Sun Management Center, not the topics specific to the Sun Cluster module.
See Sun Management Center online help and your Sun Management Center documentation for information about how to use Sun Management Center.
Install third-party applications, register resource types, set up resource groups, and configure data services. See the documentation that is supplied with the application software and the Sun Cluster Data Services Planning and Administration Guide for Solaris OS.
This section provides the following procedures to uninstall or remove Sun Cluster software:
Perform this procedure if the installed node cannot join the cluster or if you need to correct configuration information. For example, perform this procedure to reconfigure the transport adapters or the private-network address.
If the node has already joined the cluster and is no longer in installation mode, as described in Step 2 of How to Verify the Quorum Configuration and Installation Mode, do not perform this procedure. Instead, go to “How to Uninstall Sun Cluster Software From a Cluster Node” in Adding and Removing a Cluster Node in Sun Cluster System Administration Guide for Solaris OS.
Attempt to reinstall the node. You can correct certain failed installations by repeating Sun Cluster software installation on the node.
Add to the cluster's node-authentication list the node that you intend to uninstall.
If you are uninstalling a single-node cluster, skip to Step 2.
Become superuser on an active cluster member other than the node that you are uninstalling.
Specify the name of the node to add to the authentication list.
# /usr/cluster/bin/scconf -a -T node=nodename |
-a
    Specifies the add form of the command.
-T
    Specifies authentication options.
node=nodename
    Specifies the name of the node to add to the authentication list.
You can also use the scsetup(1M) utility to perform this task. See How to Add a Node to the Authorized Node List in Sun Cluster System Administration Guide for Solaris OS for procedures.
Become superuser on the node that you intend to uninstall.
Shut down the node that you intend to uninstall.
# shutdown -g0 -y -i0 |
Reboot the node into noncluster mode.
On SPARC based systems, do the following:
ok boot -x |
On x86 based systems, do the following:
<<< Current Boot Parameters >>>
Boot path: /pci@0,0/pci-ide@7,1/ata@1/cmdk@0,0:b
Boot args:

Type    b [file-name] [boot-flags] <ENTER>   to boot with options
or      i <ENTER>                            to enter boot interpreter
or      <ENTER>                              to boot with defaults

<<< timeout in 5 seconds >>>

Select (b)oot or (i)nterpreter: b -x
Change to a directory, such as the root (/) directory, that does not contain any files that are delivered by the Sun Cluster packages.
# cd / |
Uninstall Sun Cluster software from the node.
# /usr/cluster/bin/scinstall -r |
See the scinstall(1M) man page for more information.
Reinstall and reconfigure Sun Cluster software on the node.
Refer to Table 2–1 for the list of all installation tasks and the order in which to perform the tasks.
Perform this procedure on each node in the cluster.
Verify that no applications are using the RSMRDT driver before performing this procedure.
Become superuser on the node from which you want to uninstall the SUNWscrdt package.
Uninstall the SUNWscrdt package.
# pkgrm SUNWscrdt |
If the driver remains loaded in memory after you complete How to Uninstall the SUNWscrdt Package, perform this procedure to unload the driver manually.
Start the adb utility.
# adb -kw |
Set the kernel variable clifrsmrdt_modunload_ok to 1.
physmem NNNN
clifrsmrdt_modunload_ok/W 1
Exit the adb utility by pressing Control-D.
Find the clif_rsmrdt and rsmrdt module IDs.
# modinfo | grep rdt |
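The module IDs appear in the first column of the modinfo output. Because modinfo is Solaris-only, the sketch below extracts the IDs from sample lines that match the example later in this section rather than from the live command.

```shell
# Sketch: pick module IDs out of modinfo-style output by matching the
# module name in field 6 and printing the ID in field 1.
sample=' 93 f08e07d4    b95   -   1  clif_rsmrdt (CLUSTER-RSMRDT Interface module)
 94 f0d3d000  13db0 194   1  rsmrdt (Reliable Datagram Transport dri)'
clif_rsmrdt_id=$(printf '%s\n' "$sample" | awk '$6 == "clif_rsmrdt" { print $1 }')
rsmrdt_id=$(printf '%s\n' "$sample" | awk '$6 == "rsmrdt" { print $1 }')
echo "clif_rsmrdt=$clif_rsmrdt_id rsmrdt=$rsmrdt_id"
```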
Unload the clif_rsmrdt module.
You must unload the clif_rsmrdt module before you unload the rsmrdt module.
# modunload -i clif_rsmrdt_id |
clif_rsmrdt_id
    Specifies the numeric ID for the module being unloaded.
Unload the rsmrdt module.
# modunload -i rsmrdt_id |
rsmrdt_id
    Specifies the numeric ID for the module being unloaded.
Verify that the module was successfully unloaded.
# modinfo | grep rdt |
The following example shows the console output after the RSMRDT driver is manually unloaded.
# adb -kw
physmem fc54
clifrsmrdt_modunload_ok/W 1
clifrsmrdt_modunload_ok: 0x0 = 0x1
^D
# modinfo | grep rsm
 88 f064a5cb    974   -   1  rsmops (RSMOPS module 1.1)
 93 f08e07d4    b95   -   1  clif_rsmrdt (CLUSTER-RSMRDT Interface module)
 94 f0d3d000  13db0 194   1  rsmrdt (Reliable Datagram Transport dri)
# modunload -i 93
# modunload -i 94
# modinfo | grep rsm
 88 f064a5cb    974   -   1  rsmops (RSMOPS module 1.1)
#
If the modunload command fails, applications are probably still using the driver. Terminate the applications before you run modunload again.