Sun Cluster 2.2 Release Notes

Chapter 1 Sun Cluster 2.2 Release Notes

This document provides information on the following topics.

New Features

Additional information on Sun Cluster 2.2 can be found on the following web site:

http://www.sun.com/clusters

Supported Products

This section describes the software products supported with Sun Cluster 2.2.


Note -

For information about supported server platforms and storage devices, contact your Sun representative.


Volume Managers

Sun Cluster 2.2 supports the following volume managers.

Volume Manager                   Release   Solaris Version
Cluster Volume Manager           2.2.1     2.6
Solstice DiskSuite               4.2       2.6 and 7
Sun StorEdge Volume Manager      2.6       2.6


Note -

Use only one volume manager per cluster; you cannot run more than one volume manager simultaneously within a cluster.


Data Services

Sun Cluster 2.2 supports the following data services.

Data Service                   Release                       Solaris Version
Informix-Online XPS            8.21                          2.6
Oracle Parallel Server         7.3.4, 8.0.5                  2.6
Sun Cluster HA for DNS         N/A                           2.6 and 7
Sun Cluster HA for Informix    7.23, 7.30                    7.23 on 2.6; 7.30 on 2.6 and 7
Sun Cluster HA for Lotus       4.6, 4.6.1                    2.6
Sun Cluster HA for Netscape    3.5                           2.6
Sun Cluster HA for NFS         N/A                           2.6 and 7
Sun Cluster HA for Oracle      7.3.3, 7.3.4, 8.0.4, 8.0.5    2.6 and 7
Sun Cluster HA for SAP [1]     3.1h, 3.1i, 4.0b              2.6
Sun Cluster HA for Sybase      11.5                          2.6 and 7
Sun Cluster HA for Tivoli      3.2                           2.6

[1] These versions of SAP with the Oracle database have been qualified with Sun Cluster HA for SAP on Sun Cluster 2.2. SAP with the Informix database has not yet been fully qualified with Sun Cluster HA for SAP on Sun Cluster 2.2. For the most current support information, see your service provider.

Restrictions

The following restrictions apply to Sun Cluster 2.2.

Changes From Previous Releases

This section describes functionality and command changes from previous releases. See the associated man pages for more information.

Command Changes

Sun Cluster 2.2 - The following Solstice HA 1.3 commands have been replaced in or removed from Sun Cluster 2.2:

 

Solstice HA 1.3    Sun Cluster 2.2

Replaced:
hainstall          scinstall
hainetconfig       hadsconfig
haremove           scinstall
hasetup            scconf / scinstall
hastart            scadmin startcluster (first node); scadmin startnode (remaining nodes)
hastop             scadmin stopnode

Removed:
hacheck
hafstab
halicense
haload

The Sun Cluster 2.1 command scinstall(1M) has been changed for Sun Cluster 2.2. Refer to the scinstall(1M) man page for current syntax and usage.

Sun StorEdge Volume Manager (SSVM) - The following (1M) commands and options are supported only with SSVM. See the associated man pages for more information.

Solstice DiskSuite - The following changes apply to Solstice DiskSuite.

Change to the Oracle GMS Daemon

In Sun Cluster 2.2, Oracle's Group Membership Services (GMS) daemon is no longer started as part of the Sun Cluster framework with scogms(1M). For Oracle Parallel Server databases, the GMS daemon must therefore be started manually with the ogmsctl binary, which is provided by the Oracle Parallel Server Option installation on a cluster node. The GMS daemon must be up and running even when the instance on the node is started in exclusive mode; therefore, the daemon should be running when the database is created.

The GMS binaries ogms and ogmsctl are located in the /bin directory under $ORACLE_HOME. The default home directory for the GMS daemon is /tmp/.ogms. This directory contains trace files and the gms0000.dat file.

The GMS daemon must be started from the oracle user login. The following commands start the GMS daemon.

# su - oracle
$ ogmsctl start

For the daemon to start successfully, the node upon which it is run must be a cluster member.

The following command stops the GMS daemon.

$ ogmsctl stop

The ogmsctl command has the following options.

Option                       Description
start                        Starts GMS
stop                         Stops GMS
abort                        Kills GMS
trace=x                      Sets trace level to x
status                       Determines whether GMS is running
interactive                  Enters GMS debugger mode
ogms_home=x                  Sets GMS home directory to x
global-status                Gets a list of active GMS nodes
group-status domain group    Gets a list of group member information
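
As a quick sanity check after starting the daemon, you can query it with the status options listed above. The following is a minimal sketch; the exact output depends on your configuration:

$ ogmsctl status
$ ogmsctl global-status

The first command confirms whether GMS is running on the local node; the second lists the active GMS nodes in the cluster.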

Licensing

You will receive paper licenses for the Sun Cluster 2.2 framework for each hardware platform on which Sun Cluster 2.2 will run. You will also receive paper licenses for each Sun Cluster data service, per node.

No licenses are required for Solstice DiskSuite or CVM. The SSVM product is bundled with its own license.

The Sun Cluster 2.2 framework does not enforce these licenses, but you should retain the paper licenses as proof of ownership when you need technical support or other support services. See http://www.sun.com/licensing/ for more licensing information.

Contact your third party service provider for third party product licenses.

Installation and Upgrade Information

The Sun Cluster 2.2 release consists of four CD-ROMs:

For more information about installing Solstice DiskSuite 4.2 software and documentation, see "Installing and Upgrading Solstice DiskSuite 4.2".

For details about installing and configuring CVM, see the Sun Cluster 2.2 Cluster Volume Manager Guide.

For instructions on installing and configuring SSVM, see your Sun StorEdge Volume Manager documentation and the Sun Cluster 2.2 Software Installation Guide.


Note -

CVM 2.2.1 will be replaced by CVM 3.0. At that time, CVM 2.2.1 users will be required to upgrade to CVM 3.0 to retain support for their configurations.


Changes to Installation Procedures

The Sun Cluster installation procedures have changed significantly from Solstice HA 1.3 and Sun Cluster 2.1. In Sun Cluster 2.2, the interactive command scinstall(1M) is used to install the software and to set up cluster components such as logical hosts and network interfaces.

The steps to install and configure Sun Cluster are grouped into three procedures:

  1. Preparing the administrative workstation and installing the client software.

    This entails installing the Solaris operating environment and Sun Cluster 2.2 client software on the administrative workstation.

  2. Installing the server software.

    This includes using the Cluster Console to install the Solaris operating environment and Sun Cluster 2.2 software on all cluster nodes; using scinstall(1M) to set up network interfaces, logical hosts, and quorum devices; and selecting data services and volume manager support packages.

  3. Configuring and bringing up the cluster.

    This includes setting up paths; installing patches; installing and configuring your volume manager, SCI, PNM backup groups, logical hosts, and data services; and bringing up the cluster.

    See Chapter 3, "Installing and Configuring Sun Cluster Software," in the Sun Cluster 2.2 Software Installation Guide for the detailed procedures.

Installing and Upgrading Solstice DiskSuite 4.2

Existing Solstice HA 1.3 customers must upgrade to Solstice DiskSuite 4.2 to use Sun Cluster 2.2. The Solstice DiskSuite 4.2 software and documentation are provided on a separate CD-ROM containing the following:


Note -

The Solstice DiskSuite 4.2 documentation refers to Solaris Easy Access Server documentation, Solaris Web Start, and i386. This special edition of Solstice DiskSuite 4.2 for Sun Cluster 2.2 is completely standalone; disregard these references.


Accessing Solstice DiskSuite 4.2 Installation Instructions

To access Solstice DiskSuite 4.2 installation procedures, do these steps:

  1. Open the README file on the Solstice DiskSuite 4.2 CD-ROM, using a browser that can display HTML files. For example, in Netscape, do the following:

    1. From the Netscape browser menu bar, choose File>Open Page>Choose File. This opens the File Browser dialog box.

    2. Choose the file /cdrom/cdrom0/README.html. The browser brings up the README.html file.

  2. Install the AnswerBook2 server and the Solstice DiskSuite 4.2 AnswerBook using the README file instructions.

  3. Access the Solstice DiskSuite 4.2 AnswerBook and follow the online instructions found in the Solstice DiskSuite 4.2 Installation and Product Notes to install Solstice DiskSuite.


Note -

The latest version of Patch 106627 is required for Solstice DiskSuite 4.2 running on either Solaris 2.6 or Solaris 7. The patch is available from all Sun service providers and from the SunSolve web site (http://sunsolve.sun.com/).


Upgrading to Solstice DiskSuite 4.2

As part of the upgrade from earlier versions of Solstice DiskSuite, you are asked to add the SUNWmd package. In the following example, note that several files are shown with an asterisk, indicating that they are in conflict. When you answer y at each prompt, the new commands are installed, but the conflicting files are not overwritten.


Caution -

Do not remove the old SUNWmd package before adding the new one. Doing so will make all data inaccessible.


# pkgadd -d . SUNWmd
 Processing package instance <SUNWmd> from 
 </net/sag/export/unbundled/Solstice/disksuite/disksuite_4_2_seas/sparc>
 Solstice DiskSuite
 (sparc) 4.2,REV=1998.02.09.12.47.28
 Copyright 1998 Sun Microsystems, Inc. All rights reserved.
 ## Executing checkinstall script.
 This is an upgrade. Conflict approval questions may be displayed. 
 The listed files are the ones that will be upgraded. Please answer "y" 
 to these questions if they are presented.
 Using </> as the package base directory.
 ## Processing package information.
 ## Processing system information.
    26 package pathnames are already properly installed.
 ## Verifying package dependencies.
 ## Verifying disk space requirements.
 ## Checking for conflicts with packages already installed.
 The following files are already installed on the system and are 
 being used by another package:
 /etc/init.d/SUNWmd.init
 /etc/init.d/SUNWmd.sync
 /etc/opt/SUNWmd/lock
 /etc/opt/SUNWmd/md.cf
 /etc/opt/SUNWmd/md.ctlrmap
 * /etc/opt/SUNWmd/md.tab
 /etc/opt/SUNWmd/mddb.cf
 /kernel/drv/md
 * /kernel/drv/md.conf
 /kernel/misc/md_hotspares
 /usr/opt/SUNWmd/man/man7/md.7
 /usr/opt/SUNWmd/sbin/growfs
 /usr/opt/SUNWmd/sbin/metaclear
 /usr/opt/SUNWmd/sbin/metadb
 /usr/opt/SUNWmd/sbin/metadetach
 /usr/opt/SUNWmd/sbin/metahs
 /usr/opt/SUNWmd/sbin/metainit
 /usr/opt/SUNWmd/sbin/metaoffline
 /usr/opt/SUNWmd/sbin/metaonline
 /usr/opt/SUNWmd/sbin/metaparam
 /usr/opt/SUNWmd/sbin/metarename
 /usr/opt/SUNWmd/sbin/metareplace
 /usr/opt/SUNWmd/sbin/metaroot
 /usr/opt/SUNWmd/sbin/metaset
 /usr/opt/SUNWmd/sbin/metastat
 /usr/opt/SUNWmd/sbin/metasync
 /usr/opt/SUNWmd/sbin/metattach
 /usr/opt/SUNWmd/sbin/rpc.metad
 /usr/opt/SUNWmd/sbin/rpc.metamhd

 * - conflicts with a file which does not belong to any package.
 Do you want to install these conflicting files [y,n,?,q] y

Configuring Mediators When Migrating From Solstice HA 1.3 to Sun Cluster 2.2

This section is only relevant to clusters that were originally set up under Solstice HA 1.3 using Solstice DiskSuite mediators (two-string configurations). It describes changes that are automatically made to a mediator configuration when you upgrade from Solstice HA 1.3 to Sun Cluster 2.2. There is no direct user impact, but you should note the changes in any configuration information you keep on the cluster.

The Solstice HA 1.3-to-Sun Cluster 2.2 upgrade procedure documented in the Sun Cluster 2.2 Software Installation Guide changes the Solstice HA 1.3 mediator configuration. The original Solstice HA 1.3 mediator configuration resembles the following:

Mediator Host(s)    Aliases
ha-red              ha-red-priv1, ha-red-priv2
ha-green            ha-green-priv1, ha-green-priv2

After running the Sun Cluster 2.2 upgrade procedure, this configuration is converted to one similar to the following:

Mediator Host(s)    Aliases
ha-red              204.152.65.34
ha-green            204.152.65.33


Note -

In Solstice HA 1.3, the hosts referred to the private links by physical names, whereas in Sun Cluster 2.2, the private link IP addresses are used.


For more information about configuring mediators for Sun Cluster 2.2, see Chapter 9, "Using Dual-String Mediators," in the Sun Cluster 2.2 System Administration Guide.
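
If you want to verify the converted mediator configuration on your own cluster, one way (assuming the standard Solstice DiskSuite metaset(1M) command; the diskset name below is only an example) is to list the diskset, whose output includes the mediator hosts and their aliases:

# metaset -s example-set

The Aliases column in the output should now show the private link IP addresses rather than the physical private link names.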

Sun Cluster HA for SAP Upgrade Issues

Before performing an upgrade to Sun Cluster 2.2 from Solstice HA 1.3 or Sun Cluster 2.1, note these SAP-related issues.

Patches

All patches are available through SunSolve. Always install the latest version of each patch. For the most current patch information, access the SunSolve web site at http://sunsolve.sun.com/.
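
To check which revision of a given patch is already installed on a node, you can use the standard Solaris showrev(1M) command; patch 106627 is used here only as an illustration:

# showrev -p | grep 106627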

Patches for Sun Cluster 2.2 on Solaris 7

Sun Cluster 2.2 in the Solaris 7 operating environment requires the following patches.

Table 1-1 Solaris 7 Patches for Sun Cluster 2.2

Required/Recommended              Patch Number    Minimum Level    Description
Required for Solstice DiskSuite   106627          01               Mediators
Required                          107388          01               Sun Cluster Manager

Patches for Sun Cluster 2.2 on Solaris 2.6

The following patches have been tested successfully with Sun Cluster 2.2 in the Solaris 2.6 operating environment (SunOS 5.6).

Table 1-2 Solaris Patches for Sun Cluster 2.2

Required/Recommended              Patch Number    Minimum Level    Description
Required                          105181          04               SunOS 5.6: Kernel update patch
Recommended                       105210          05               SunOS 5.6: libc & watchmalloc patch
Recommended                       105216          03               SunOS 5.6: /usr/sbin/rpcbind patch
Recommended                       105284          05               Motif 1.2.7 Runtime library patch
Required for A5000                105356          04               SunOS 5.6: /kernel/drv/ssd patch
Required for A5000                105357          01               SunOS 5.6: /kernel/drv/ses patch
Required for A5000                105375          05               SunOS 5.6: sf & socal driver patch
Recommended                       105379          03               SunOS 5.6: /kernel/misc/nfssrv patch
Recommended                       105393          01               SunOS 5.6: /usr/bin/at patch
                                  105395          02               SunOS 5.6: /usr/lib/sendmail patch
                                  105401          08               SunOS 5.6: libnsl and NIS+ commands patch
                                  105407          01               SunOS 5.6: /usr/bin/volrmmount patch
                                  105464          01               OpenWindows 3.6: multiple xterm fixes
Required                          105490          02               SunOS 5.6: linker patch
Required for QFE 2.1              105541          05               Sun QFE 2.1: QFE driver
                                  105552          02               SunOS 5.6: /usr/sbin/rpc.nisd_resolv patch
                                  105558          01               CDE 1.2: dtpad patch
                                  105562          01               SunOS 5.6: chkey and keylogin patch
                                  105566          05               CDE 1.2: Calendar Manager patch
Required for E450                 105580          06               SunOS 5.6: /kernel/drv/glm patch
Required for D1000                105600          05               SunOS 5.6: /kernel/drv/isp patch
                                  105615          03               SunOS 5.6: /usr/lib/nfs/mountd patch
                                  105621          02               SunOS 5.6: libbsm patch
                                  105642          03               SunOS 5.6: prtdiag patch
                                  105665          01               SunOS 5.6: /usr/bin/login patch
                                  105669          02               CDE 1.2: libDtSvc patch
Required for E10000               105684          04               SSP 3.1: OBP to support PCI probing & DR PCI attach/detach
                                  105720          03               SunOS 5.6: /kernel/fs/nfs patch
                                  105741          02               SunOS 5.6: /kernel/drv/ecpp patch
                                  105755          03               SunOS 5.6: in.named & libresolv patch
Required                          105786          04               SunOS 5.6: /kernel/drv/ip patch
Required                          105795          05               SunOS 5.6: /kernel/drv/hme patch
Required                          105797          02               SunOS 5.6: /kernel/drv/sd patch
                                  105800          03               SunOS 5.6: /usr/bin/admintool Year 2000 patch
                                  105837          02               CDE 1.2: dtappgather patch
                                  105926          01               SunOS 5.6: /usr/sbin/static/tar patch
                                  106040          03               SunOS 5.6: X Input & Output Method patch
                                  106049          01               SunOS 5.6: security in.telnetd BANNER
                                  106125          02               SunOS 5.6: Patch for patchadd and patchrm
Required for A5000                106129          02               Hardware, 9GB Disks: Downloads
                                  106172          02               SunOS 5.6: /kernel/drv/fas patch
                                  106193          03               SunOS 5.6: Year 2000 sysid unzip patch
                                  106222          01               OpenWindows 3.6: filemgr (ff.core) fixes
                                  106226          01               SunOS 5.6: /usr/sbin/format patch
                                  106235          01               SunOS 5.6: lp patch
                                  106242          01               CDE 1.2: libDtHelp.so.1 fixes
                                  106257          04               SunOS 5.6: /usr/lib/libpam.so.1 patch
                                  106271          04               SunOS 5.6: /usr/lib/security/pam_unix.so.1 patch
                                  106301          01               SunOS 5.6: /usr/sbin/in.ftpd patch
                                  106448          01               SunOS 5.6: /usr/sbin/ping patch
Required for QFE 2.2              106532          01               Sun QFE 2.2: Patch for Solaris 2.6 QFE driver
Required for Solstice DiskSuite   106627          01               Mediators
Required for A1000 or A3x00       106707          01               RM 6.1.1, A1000/A3x00 Support for Sun Cluster 2.1 FCS
Required                          107388          01               Sun Cluster Manager

Sun Cluster Manager

This section describes how to use Sun Cluster Manager (SCM).

Monitoring Sun Cluster Servers With SCM

SCM provides a single interface to many of Sun Cluster's command-line monitoring features. SCM consists of two parts: the SCM server software and the SCM graphical user interface (GUI). The SCM server runs on each node in the cluster. The SCM GUI runs in a Java Development Kit (JDK) 1.1-compliant browser such as HotJava. The HotJava browser can run on any machine, including the cluster nodes. The SCM GUI reports information on:


Note -

Refer to the Patch 107388-01 README for complete information on SCM.


Running the SCM GUI in a HotJava Browser

The following procedures outline what you need to do to run SCM in a HotJava browser with your system configuration.


Note -

If you choose to use the HotJava browser shipped with your Solaris 2.6 or Solaris 7 operating environment, you might encounter problems; see "Running SCM With the HotJava Browser" for more information. If you choose to use a later version of the HotJava browser, refer to the appropriate procedure depending on your software needs.


You may need to determine if you have the correct version of the following:


Note -

The Solaris 2.6 operating environment requires installation of JDK 1.1.6 (or later) and HotJava 1.1.4 (or later). The Solaris 7 operating environment requires installation of HotJava 1.1.4 (or later).


You need to determine whether you want to:

Depending on what you decide, refer to the appropriate procedure.

How to Download the JDK

Type the following from the console prompt on the server in your cluster:

# java -version
java version "1.1.6"

If the system displays a lower version of Java, follow the instructions to download the JDK version 1.1.6 (or later) software from the following URL:

http://www.sun.com/solaris/java

How to Download HotJava

From the machine running the HotJava browser, select About HotJava from the Help menu.

If the browser displays a version lower than 1.1.4, or if you do not have a HotJava browser, follow the instructions to download the HotJava version 1.1.4 (or later) software from the following URL:

http://java.sun.com/products/hotjava/index.html

How to Run the SCM Applet in a HotJava Browser From a Cluster Node

  1. Run your HotJava browser on a node in the cluster.

  2. Display it remotely on an X Window System workstation.

  3. Set the applet security preferences in your HotJava browser:

    1. Choose Applet Security from Preferences on the Edit menu.

    2. Click Medium Security as the Default setting for Unsigned applets.

  4. When you are ready to begin monitoring the cluster with SCM, type the appropriate URL.

    file:/opt/SUNWcluster/scmgr/index.html
    
  5. Click OK on dialog boxes that ask for permission to access certain files, ports, and so forth from the remote display workstation to the cluster node where the browser is started.


    Note -

    It might take HotJava some time to download and run the applet. No status information will appear during this time.


    Refer to the online help for complete information on menu navigation, tasks, and reference.

How to Set Up a Web Server to Run With SCM

If you choose, you can install a web server on the cluster nodes to run with SCM.

  1. Install a web server on all nodes in the cluster.


    Note -

    If you are running the Sun Cluster HA for Netscape HTTP service and an HTTP server for SCM, you must configure the HTTP servers to listen on different ports. Otherwise, there will be a port conflict between the two.


  2. Follow the web server's configuration procedure to make sure that SCM's index.html file is accessible to the clients.

    The client applet for SCM is in the index.html file in the /opt/SUNWcluster/scmgr directory. For example, go to your HTTP server's document root and create a link to the /opt/SUNWcluster/scmgr directory (a command sketch follows this procedure).

  3. Run your HotJava browser from your workstation.

  4. Set the applet security preferences in your HotJava browser:

    1. Choose Applet Security from Preferences on the Edit menu.

    2. Click Medium Security as the Default setting for Unsigned applets.

  5. When you are ready to begin monitoring the cluster with SCM, type the appropriate URL.

    For example, if you had created a link from the web server's document_root directory to the /opt/SUNWcluster/scmgr directory, you would type the following URL:

    http://clusternode/scmgr/index.html
    
  6. Click OK on dialog boxes that ask for permission to access certain files, ports, and so forth to the cluster node where the browser is started.


    Note -

    It might take HotJava some time to download and run the applet. No status information will appear during this time.


    Refer to the online help for complete information on menu navigation, tasks, and reference.
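
The following is a minimal sketch of the link described in Step 2 of the procedure above; the document root path is hypothetical and depends on how your web server is configured:

# cd /path/to/document_root
# ln -s /opt/SUNWcluster/scmgr scmgr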

SCM Online Help System

SCM provides online help information on menu navigation, tasks, and reference.

How to Display SCM Online Help

To display the Help window from SCM, select Help Contents from the Help menu.

Alternatively, click on the Help icon (question mark) in the tool bar above the folder.

If necessary, you can run online help in a separate browser by typing the following URL:

file:/opt/SUNWcluster/scmgr/help/locale/en/main.howtotopics.html

For example, if you had created a link from the web server's document_root directory to the /opt/SUNWcluster/scmgr directory, you would type the following URL:

http://clusternode/scmgr/help/locale/en/main.howtotopics.html

Online Help Browser Display

When you finish viewing the online help, close its HotJava browser. Selecting online help again brings up a new browser and loads the help.

Known Problems

The following known problems affect the operation of Sun Cluster 2.2.

Framework Bugs

4185966 - A bad trap following loss of heartbeat might result in the SCI module causing a node panic.

4202413 - The cluster aborts when a majority of nodes halt simultaneously. If the volume manager is CVM or SSVM, this can be avoided by selecting a single direct-attached disk as a quorum disk when configuring the cluster.

4202418 - An SCI heartbeat-alive check failure might cause node failure.

4213128 - In Solstice DiskSuite configurations in which a logical host has multiple disksets, takeover of the logical host fails because the hactl(1M) utility does not parse the diskset names correctly. This bug compromises fault monitoring in certain scenarios. The workaround involves replacing the file /opt/SUNWcluster/ha/nfs/have_maj_util with a modified file. The modified file is available through your service provider.

Administrative Command Bugs

4209264 - The scconf -F command does not always mirror the administrative file system across controllers. Use vxprint to display the volumes; if the administrative file system is not mirrored across controllers, manually create a mirror of that volume on a different controller.
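
For example, the following is a hedged sketch using standard SSVM commands; the disk group name, volume name, and target disk media name are hypothetical and must be replaced with values from your own configuration:

# vxprint -g hahost1-dg -ht
# vxassist -g hahost1-dg mirror admin-vol disk02

The first command shows the volumes and the disks backing their plexes; if the administrative volume is not mirrored across controllers, the second command adds a mirror on a disk attached to a different controller.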

4210684 - Installing and configuring a cluster by using scinstall(1M) command-line options in conjunction with its configuration menus does not work. In addition, when using scinstall(1M) command-line options to remove the server software, the cluster network packages are not removed. To perform these tasks, run the scinstall(1M) command interactively (without options).

4210191 - When all public network connections fail on a node with Solstice DiskSuite, the node aborts from the cluster and panics with the following panic string:

Failfast timeout - unit "abort_thread"

4213927 - The pnmset(1M) command fails in some cases due to ping(1M) timeout after an ifconfig(1M) operation on some gigabit Ethernet cards. Work around the problem by configuring the /etc/pnmconfig file manually. See the pnmconfig(4M) man page for more information.

Data Service Bugs

4210065 - In Solstice DiskSuite configurations in which a logical host has multiple disksets, the Sun Cluster HA for NFS shell script /opt/SUNWcluster/ha/nfs/fdl_enum_probe_disks reports an error. This causes fault monitoring of the disksets to fail. The workaround involves replacing the file /opt/SUNWcluster/ha/nfs/fdl_enum_probe_disks with a modified file. The modified file is available through your service provider.

4210646 - The Sun Cluster HA for Oracle fault monitor does not restart Oracle correctly if the character set is non-USASCII. This is commonly the case when Oracle is installed during SAP installation. To correct this, you must establish the following link so that NLS data files specified by the fault monitor's ORA_NLS33 environment variable will be found by Oracle during startup. Create this link on all cluster nodes:

# ln -s /opt/SUNWcluster /SUNWcluster

SCM Bugs

4207695 - In SCM, the Previous button on the syslog page is enabled even when the syslog is empty. Using the Previous button in this case will have no effect.

4207726 - SCM does not detect the loss of a public network until after the network connection is reestablished.

4208089 - SCM does not display the correct current status for the Sun Cluster HA for Oracle data service. When an Oracle instance is stopped with the command haoracle stop, the instance is put into maintenance mode, and no message is posted to syslog. While an instance is in maintenance mode, it is not monitored by Sun Cluster. SCM interprets this state as unknown.

4211950 - If a logical host is put into maintenance mode, SCM displays the node as waiting to be given up. Manually refresh the screen to show the correct state.

4212030 - When the NFS service is off, the NFS service on some logical hosts may be displayed as OK.

4212623 - When a cluster node leaves the cluster, the private and public network status displayed by SCM no longer reflects the correct state and should be ignored.

4212691 - In some cases, none of the nodes that can master a logical host is part of the cluster; the logical host is then down, but SCM displays such logical hosts as up.

Other Known Issues

The following issues apply to Sun Cluster 2.2.

Running SCM With the HotJava Browser

If you choose to use the HotJava browser shipped with your Solaris 2.6 or Solaris 7 operating environment to run SCM, there may be problems such as:

Timeout Values

After configuring each logical host with the scinstall(1M) or scconf(1M) commands, you might need to use the scconf clustername -l command to set the timeout values for the logical host. The timeout value is site-dependent; it is tied to the number of logical hosts, spindles, and file systems.

Refer to the scconf(1M) man page for details. For procedures for setting timeout values, refer to Section 3.14, "Configuring Timeouts for Cluster Transition Steps," in the Sun Cluster 2.2 System Administration Guide.

Encapsulated Root Disks

If you are running SSVM with an encapsulated root disk, you must unencapsulate the root disk before installing Sun Cluster 2.2. After you install Sun Cluster 2.2, encapsulate the disk again. You also must unencapsulate the root disk before changing the major numbers.

Refer to your SSVM documentation for the procedures to encapsulate and unencapsulate the root disk.

SNMP Default Port

As part of the client software installation, the SUNWcsnmp package is installed to provide Simple Network Management Protocol (SNMP) support for Sun Cluster. The default port used by Sun Cluster SNMP is the same as the default port number used by Solaris SNMP; both use port 161. Once the SUNWcsnmp package is installed, you must change the Sun Cluster SNMP port number using the procedure described in Section D.6, "Configuring the Cluster SNMP Agent Port," in the Sun Cluster 2.2 System Administration Guide.

Installation Directory for Sun Cluster HA for Informix

The INFORMIX_ESQL Embedded Language Runtime Facility product must be installed in the /var/opt/informix directory on Sun Cluster servers. This is required even if Informix server binaries are installed on the physical host.

Lotus and Netscape Message Servers

You can set up Lotus Domino servers as HTTP, POP3, IMAP, NNTP, or LDAP servers. Lotus Domino will start server tasks for all of these types. However, do not set up instances of any Netscape message servers on a logical host that is potentially mastered by the node on which Lotus Domino is installed.

Lotus and Netscape Port Numbers

Within a cluster, do not configure Netscape services with the same port number as the one used by the Lotus Domino server. The following port numbers are used by default by the Lotus Domino server:

HTTP    Port 80
POP3    Port 110
IMAP    Port 143
LDAP    Port 389
NNTP    Port 119
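
Before assigning port numbers to Netscape services, you can check which ports are already in the listening state on a node. This is a generic Solaris check, not a Sun Cluster-specific procedure; look for ports 80, 110, 119, 143, or 389 in the local-address column of the output:

# netstat -an | grep LISTEN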

Failover/Switchover When Logical Host File System Is Busy

If a failover or switchover occurs while a logical host's file system is busy, the logical host fails over only partially; some of the disk group remains on the original physical host. Do not attempt a switchover while a logical host's file system is busy. Also, do not access any logical host's file system locally, because file locking does not work correctly when both NFS locks and local locks are present.
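
To see whether any local processes are keeping a logical host's file system busy before you attempt a switchover, you can use the standard fuser(1M) command; the mount point shown is hypothetical:

# fuser -c /hahost1/nfs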

SSP Password Must Be Correct

If an incorrect password is used for the System Service Processor (SSP) on an Ultra Enterprise 10000, the system will behave unpredictably and might crash.

Harmless Error When Stopping a Node

When you stop a node, the following error message might be displayed:

in.rdiscd[517]: setsockopt (IP_DROP_MEMBERSHIP): Cannot assign requested address

The error is caused by a timing issue between the in.rdiscd daemon and the IP module. It is harmless and can be ignored safely.

Harmless Error by NFS lockd Daemon

For Sun Cluster HA for NFS running on Solaris 7, if the lockd daemon is killed before the statd daemon is fully running, the following error message is displayed:

WARNING: lockd: cannot contact statd (error 4), continuing.

This error message can be ignored safely.

Directory Permissions and Ownership of $ORACLE_HOME

If the Sun Cluster HA for Oracle fault monitor displays errors like those shown below, make sure that the $ORACLE_HOME directory permissions are set to 755 and that the directory is owned by the Oracle administrative user with group ID dba.

Feb 16 17:13:13 ID[SUNWcluster.ha.haoracle_fmon.2520]: hahost1:HA1: 
 DBMS Error: connecting to database: ORA-12546: TNS:permission denied
 Feb 16 17:12:13 ID[SUNWcluster.ha.haoracle_fmon.2050]: hahost1:HA1: 
 RDBMS error, but HA-RDBMS Oracle will take no action for this error code 
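
A minimal sketch of correcting the ownership and permissions follows, assuming the Oracle administrative user is named oracle and its group is dba; adjust the names to match your installation:

# chown oracle $ORACLE_HOME
# chgrp dba $ORACLE_HOME
# chmod 755 $ORACLE_HOME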

Displaying LOG_DB_WARNING Messages for the SAP Probe

The Sun Cluster HA for SAP parameter LOG_DB_WARNING determines whether warning messages should be displayed if the Sun Cluster HA for SAP probe cannot connect to the database. When LOG_DB_WARNING is set to -y and the probe cannot connect to the database, a message is logged at the warning level in the local0 facility. By default, the syslogd(1M) daemon does not display these messages to /dev/console or to /var/adm/messages. To see these warnings, you must modify the /etc/syslog.conf file to display messages of local0.warning priority. For example:

...
 *.err;kern.notice;auth.notice;local0.warning /dev/console
 *.err;kern.debug;daemon.notice;mail.crit;local0.warning /var/adm/messages
 ...

After modifying the file, you must restart syslogd(1M). See the syslog.conf(1M) and syslogd(1M) man pages for more information.
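
One way to restart syslogd(1M) on Solaris 2.6 or Solaris 7 is through its init script, for example:

# /etc/init.d/syslog stop
# /etc/init.d/syslog start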

Nodelock Freeze After Cluster Panic

In a cluster with more than two nodes and with direct-attached storage, a problem occurs if the last node in the cluster panics or leaves the cluster abnormally (without performing the stopnode transition). In such a case, all nodes have been removed from the cluster and the cluster no longer exists, but because the last node left the cluster abnormally, it still holds the nodelock. A subsequent invocation of the scadmin startcluster command will fail to acquire the nodelock.

To work around this problem, manually clear the nodelock before restarting the cluster.

Use the following procedure to manually clear the nodelock and restart the cluster, after the cluster has aborted completely.

  1. As root, display the cluster configuration.

    # scconf clustername -p
    

    Look for this line in the output:

    clustername Locking TC/SSP, port  : A.B.C.D, E
    
    • If E is a positive number, the nodelock is on Terminal Concentrator A.B.C.D and Port E. Proceed to Step 2.

    • If E is -1, the lock is on an SSP. Proceed to Step 3.

  2. For a nodelock on a Terminal Concentrator (TC), perform the following steps (otherwise, proceed to Step 3).

    1. Start a telnet connection to Terminal Concentrator tc-name.

      $ telnet tc-name
       Trying 192.9.75.51...
       Connected to tc-name.
       Escape character is `^]'.

      Press Return to continue.

    2. Specify cli (command-line interface).

      Enter Annex port name or number: cli
      
    3. Log in as root.

    4. Run the admin command.

      annex# admin
      
    5. Reset Port E.

      admin : reset E
      
    6. Close the telnet connection.

      annex# hangup
      
    7. Proceed to Step 4.

  3. For a nodelock on a System Service Processor (SSP), perform the following steps.

    1. Connect to the SSP.

      $ telnet ssp-name
      
    2. Log in as user ssp.

    3. Display information on the clustername.lock file by using the following command (this file is a symbolic link to /proc/csh.pid).

      $ ls -l /var/tmp/clustername.lock
      
    4. Search for the process csh.pid.

      $ ps -ef | grep csh.pid
      
    5. If the csh.pid process exists in the ps -ef output, kill the process by using the following command.

      $ kill -9 csh.pid 
      
    6. Delete the clustername.lock file.

      $ rm -f /var/tmp/clustername.lock
      
    7. Log out of the SSP.

  4. Restart the cluster.

    $ scadmin startcluster
    

Setting Up the /etc/nsswitch.conf Files With DBMS Data Services

The following applies to configurations using Sun Cluster HA for Oracle, Sun Cluster HA for Informix, or Sun Cluster HA for Sybase.

The Sun Cluster 2.2 Software Installation Guide contains erroneous information about how to set up the /etc/nsswitch.conf files for these DBMS data services. In order for the data services to start and stop correctly in case of switchovers or failovers, the /etc/nsswitch.conf files must be set up as follows.

On each node that can master the logical host running the DBMS data service, the /etc/nsswitch.conf file must have one of the following entries for group.

group: files
group: files [NOTFOUND=return] nis
group: files [NOTFOUND=return] nisplus

The DBMS data services use the su user command when starting and stopping the database. The above settings ensure that the su command does not consult NIS or NIS+ when the network information name service is unavailable due to failure of the public network on the cluster node.
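
A quick way to confirm the entry on each node is to inspect the file directly, for example:

# grep '^group' /etc/nsswitch.conf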

Future Changes

This section describes Sun Cluster features that might be changed or discontinued after Sun Cluster 2.2.

Commands to Be Replaced or Made Obsolete

The following commands might be changed or discontinued after Sun Cluster 2.2.

Commands with options or interfaces to be changed:

Commands to be renamed:

Commands to be removed:

Future Changes to API Features

This section describes elements in the data service API for Sun Cluster 2.2 that might change or might not be available in releases following Sun Cluster 2.2.

Changes to API Commands or Command Options

The following API commands and command options might change in future Sun Cluster releases.

API Commands or Command Options That Might Become Obsolete

The following commands and command options might not be available in future Sun Cluster releases.

Internal Programs to Be Retired in Future Releases

The Sun Cluster implementation contains many programs that are used internally by the implementation and are not intended for use by customers. Any program that does not have a man page in the Sun Cluster 2.2 release falls into this category. These programs will not exist in their current form in subsequent releases of the product. Some examples include clustm, scccd, and ccdmatch.