Sun Cluster 3.1 8/05 Release Notes for Solaris OS

This document provides the following information for Sun™ Cluster 3.1 8/05 software.

What's New in Sun Cluster 3.1 8/05 Software

This section provides information related to new features, functionality, and supported products in Sun Cluster 3.1 8/05 software. This section also provides information on any restrictions introduced in this release.

New Features and Functionality

This section describes each of the new features provided in Sun Cluster 3.1 8/05.

Improved Cluster Installation and Upgrade Functionality

This release introduces several enhancements to the installation and configuration of Sun Cluster software.

This functionality is reflected in the installation and upgrade procedures in Chapter 2, Installing and Configuring Sun Cluster Software, in Sun Cluster Software Installation Guide for Solaris OS and Chapter 5, Upgrading Sun Cluster Software, in Sun Cluster Software Installation Guide for Solaris OS.

Support for Network Appliance Network-Attached Storage (NAS) Devices

Sun Cluster software supports Network Appliance NAS devices, for shared storage only, beginning with Sun Cluster 3.1 9/04 software. It supports NAS devices as quorum devices beginning with Sun Cluster 3.1 8/05 software. A Network Appliance NAS device can therefore now be deployed with Sun Cluster in either of these roles.

For information about installing and maintaining a NAS device in a Sun Cluster environment, see Sun Cluster 3.1 With Network-Attached Storage Devices Manual for Solaris OS. For information about using a Network Appliance NAS device as a quorum device, see How to Add a Network Appliance Network-Attached Storage (NAS) Quorum Device in Sun Cluster System Administration Guide for Solaris OS.

Simplified SunPlex Manager Interface

The information displayed in the initial data screen of SunPlex Manager has been simplified. The initial screen now shows only the Nodes and Resource Groups tables. You can access the other tables by clicking the appropriate item in the Navigation Tree on the left side of the browser window.

Support for Tagged VLAN to Share Network Adapters

Sun Cluster software supports tagged Virtual Local Area Networks (VLANs) to share an adapter between the private interconnect and the public network. For information about configuring a tagged VLAN adapter for the private interconnect, see Cluster Interconnect in Sun Cluster Software Installation Guide for Solaris OS.

Sun Cluster HA for Solaris Containers

The Sun Cluster HA for Solaris Containers data service enables applications to run in non-global zones under the control of Sun Cluster.

To enable applications to run in non-global zones under the control of Sun Cluster, Sun Cluster HA for Solaris Containers performs the following operations:

If you plan to use this data service, you must write your own scripts or SMF manifests for applications that are to run in non-global zones.

The following restrictions apply to the Sun Cluster HA for Solaris Containers data service:

For more information about this data service, see Sun Cluster Data Service for Solaris Containers Guide.

Support for Solaris SMF Services

Sun Cluster software enables you to make highly available an application that is integrated with the Solaris Service Management Facility (SMF). If you use Sun Cluster to make an SMF service highly available, restrictions apply to the use of the Solaris SMF. For more information, see Enabling Solaris SMF Services to Run Under the Control of Sun Cluster in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.

Support for the AMD 64–Bit Platform

Sun Cluster software runs on the 64-bit family of microprocessors made by AMD and on compatible microprocessors.

Support for Kerberos

Sun Cluster software supports the use of Kerberos with NFS. For more information, see Securing Sun Cluster HA for NFS With Kerberos V5 in Sun Cluster Data Service for NFS Guide for Solaris OS.

Support for Oracle 10g and Oracle 10g Real Application Clusters on the SPARC Platform and x86 Platform

Sun Cluster software supports version 10g of Oracle and Oracle Real Application Clusters on the SPARC platform. Qualification of version 10g of Oracle and Oracle Real Application Clusters on the x86 platform is pending; these versions are not yet supported on x86. For more information, see the following documentation:

Restrictions

The following restrictions apply to the Sun Cluster 3.1 8/05 release:

For other known problems or restrictions, see Known Issues and Bugs.

Compatibility Issues

This section contains information on Sun Cluster compatibility issues such as features nearing end of life.

Additional Sun Cluster framework compatibility issues are documented in Chapter 1, Planning the Sun Cluster Configuration, in Sun Cluster Software Installation Guide for Solaris OS.

Additional Sun Cluster upgrade compatibility issues are documented in Overview of Upgrading a Sun Cluster Configuration in Sun Cluster Software Installation Guide for Solaris OS.

For other known problems or restrictions, see Known Issues and Bugs.

Features Nearing End of Life

Solstice DiskSuite

Solstice DiskSuite software might not be supported in a future release of Sun Cluster software. If you use Solstice DiskSuite software, upgrade to the Solaris 9 or Solaris 10 OS, which will upgrade you automatically to Solaris Volume Manager software. For upgrade information, see Solaris 9 9/04 Installation Guide or Solaris 10 Installation Guide: Solaris Live Upgrade and Upgrade Planning.

Sun Fire Link

Sun Fire Link might not be supported in a future release of Sun Cluster software. If you use Sun Fire Link, use another interconnect technology that Sun Cluster software supports. For information about interconnect hardware that Sun Cluster software supports, see Chapter 3, Installing Cluster Interconnect Hardware and Configuring VLANs, in Sun Cluster 3.0-3.1 Hardware Administration Manual for Solaris OS.

SunPlex Installer

SunPlex Installer might not be supported in a future release of Sun Cluster software. To establish a new Sun Cluster configuration, use the scinstall utility instead. The scinstall utility supports, through the command line interface, all functionality that SunPlex Installer provides.
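For example, a minimal sketch of starting the interactive scinstall utility (the path shown assumes the default installation location):

# /usr/cluster/bin/scinstall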

SunPlex Manager/Configuration of IPMP Groups

The ability to configure IPMP groups (add or remove) in SunPlex Manager might not be included in a future release. This function is available through the Solaris ifconfig(1M) command. See the ifconfig(1M) man page for information about specific options.
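As a hedged sketch (the adapter and group names are hypothetical), an adapter can be added to or removed from an IPMP group with ifconfig. To add the adapter qfe0 to a group named ipmp0:

# ifconfig qfe0 group ipmp0

To remove the adapter from its group:

# ifconfig qfe0 group ""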

SUNW.RGOffload

The SUNW.RGOffload resource type might not be available in future Sun Cluster releases. All functions provided by this resource type are available through the RG_affinities resource group property and its "negative affinity" option.

If you currently have a SUNW.RGOffload resource configured, perform the following procedure to use the "negative affinity" option of the RG_affinities resource group property.

Procedure: How to Use the Negative Affinity Option of RG_affinities

Steps
  1. Remove the dependency of the critical resource on the SUNW.RGOffload resource.


    # scrgadm -cj critical-rs -y Resource_dependencies=""
    
  2. Remove the SUNW.RGOffload resource and resource type.


    # scrgadm -nj rgofl
    # scrgadm -rj rgofl
    # scrgadm -rt SUNW.RGOffload
    
  2. Change the RG_affinities property of the non-critical resource group to set a negative affinity toward the critical resource group (the group that contains critical-rs).


    # scrgadm -c -g non-critical-rg -y RG_affinities=--critical-rg
    

    Note –

    This example shows only a strong negative affinity. You can also set a weak negative affinity, as well as other types of dependencies across online resource groups. For details on configuring dependencies among online resource groups, refer to Distributing Online Resource Groups Among Cluster Nodes in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.


Solstice DiskSuite/Solaris Volume Manager GUI

DiskSuite Tool (Solstice DiskSuite metatool) and the Enhanced Storage module of Solaris Management Console (Solaris Volume Manager) are not compatible with Sun Cluster software. Use the command-line interface or Sun Cluster utilities to configure Solstice DiskSuite or Solaris Volume Manager software.

Non-Global Zones

Sun Cluster 3.1 8/05 software does not support non-global zones. All Sun Cluster software and software that is managed by the cluster must be installed only in the global zone of the node. Do not install cluster-related software in a non-global zone. In addition, all cluster-related software must be installed in a way that prevents propagation to a non-global zone that is later created on a cluster node. For more information, see Adding a Package to the Global Zone Only in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.
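For example, a minimal sketch (the package name and directory are illustrative) of installing a package on Solaris 10 so that it is added to the global zone only and is not propagated to non-global zones:

# pkgadd -G -d . package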

However, Sun Cluster 3.1 8/05 software does support applications that run in a non-global zone and that are managed by the Sun Cluster HA for Solaris Containers data service. See Sun Cluster HA for Solaris Containers for more information.

Loopback File System (LOFS)

Sun Cluster 3.1 8/05 software does not support the use of LOFS under certain conditions. If you must enable LOFS on a cluster node, such as when you configure non-global zones under Sun Cluster HA for Solaris Containers, first determine whether the LOFS restrictions apply to your configuration. See Cluster File Systems in Sun Cluster Software Installation Guide for Solaris OS for more information about the restrictions and workarounds that permit the use of LOFS when the restricting conditions exist.

Upgrade to Solaris 10

Sun Cluster 3.1 8/05 software does not support upgrade to the original release of the Solaris 10 OS, which was distributed in March 2005. You must upgrade to a compatible version of Solaris 10. Contact your Sun service representative for more information.

Change to VxVM Installation Procedures

The scvxinstall command and Sun Cluster procedures have changed for installing VxVM software in a Sun Cluster configuration. See Installing and Configuring VxVM Software in Sun Cluster Software Installation Guide for Solaris OS.

Accessibility Features for People With Disabilities

To obtain accessibility features that have been released since the publishing of this media, consult Section 508 product assessments available from Sun upon request to determine which versions are best suited for deploying accessible solutions. Updated versions of applications can be found at: http://sun.com/software/javaenterprisesystem/get.html. For information on Sun's commitment to accessibility, visit http://sun.com/access.

Commands Modified in This Release

This section describes changes to the Sun Cluster command interfaces that might cause user scripts to fail.

scconf Command

The -q option of the scconf command has been modified to distinguish between shared local quorum devices (scsi) and other types of quorum devices (including NetApp NAS devices). Use the name suboption to specify the name of the attached shared storage device when adding or removing a shared quorum device to or from the cluster. This suboption can also be used with the change form of the command to change the state of a quorum device. The globaldev suboption can still be used for scsi shared storage devices, but the name suboption must be used for all other types of shared storage devices. For more information about this change to scconf and about working with quorum devices, see scconf(1M), scconf_quorum_dev_netapp_nas(1M), and scconf_quorum_dev_scsi(1M).
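For example, a hedged sketch of adding quorum devices with each suboption (the device name d20, quorum device name qd1, filer name, and LUN ID are hypothetical):

# scconf -a -q globaldev=d20
# scconf -a -q name=qd1,type=netapp_nas,filer=nas-filer1,lun_id=0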

Product Name Changes

This section provides information on product name changes for applications that Sun Cluster software supports. Depending on the Sun Cluster software release that you are running, your Sun Cluster documentation might not reflect the following product name changes.

Current Product Name                            Former Product Name
Sun Java System Application Server              Sun ONE Application Server
Sun Java System Application Server EE (HADB)    Sun Java System HADB
Sun Java System Message Queue                   Sun ONE Message Queue
Sun Java System Web Server                      Sun ONE Web Server, iPlanet Web Server, Netscape™ HTTP

Supported Products

This section describes the supported software and memory requirements for Sun Cluster 3.1 8/05 software.

Sun Cluster Security Hardening

Sun Cluster Security Hardening uses the Solaris operating system hardening techniques recommended by the Sun BluePrints™ program to achieve basic security hardening for clusters. The Solaris Security Toolkit automates the implementation of Sun Cluster Security Hardening.

The Sun Cluster Security Hardening documentation is available at http://www.sun.com/blueprints/0203/817-1079.pdf. You can also access the article from http://www.sun.com/software/security/blueprints. From this URL, scroll down to the Architecture heading to locate the article “Securing the Sun Cluster 3.x Software.” The documentation describes how to secure Sun Cluster 3.1 deployments in a Solaris 8 and Solaris 9 environment. The description includes the use of the Solaris Security Toolkit and other best-practice security techniques recommended by Sun security experts.

Table 1 Data Services Supported by Sun Cluster Security Hardening

Each entry lists the data service agent, followed by the application version supported for failover, the application version supported for scalable configurations, and the supported Solaris versions.

Sun Cluster HA for Apache: Failover 1.3.9; Scalable 1.3.9; Solaris 8, Solaris 9 (version 1.3.9)
Sun Cluster HA for Apache Tomcat: Failover 3.3, 4.0, 4.1; Scalable 3.3, 4.0, 4.1; Solaris 8, Solaris 9
Sun Cluster HA for DHCP: Failover S8U7+; Scalable N/A; Solaris 8, Solaris 9
Sun Cluster HA for DNS: Failover with OS; Scalable N/A; Solaris 8, Solaris 9
Sun Cluster HA for Sun Java System Messaging Server: Failover 6.0; Scalable 4.1; Solaris 8
Sun Cluster HA for MySQL: Failover 3.23.54a - 4.0.15; Scalable N/A; Solaris 8, Solaris 9
Sun Cluster HA for NetBackup: Failover 3.4; Scalable N/A; Solaris 8
Sun Cluster HA for NFS: Failover with OS; Scalable N/A; Solaris 8, Solaris 9
Sun Cluster HA for Oracle E-Business Suite: Failover 11.5.8; Scalable N/A; Solaris 8, Solaris 9
Sun Cluster HA for Oracle: Failover 8.1.7 and 9i (32 and 64 bit); Scalable N/A; Solaris 8, Solaris 9 (HA Oracle 9iR2)
Sun Cluster Support for Oracle Real Application Clusters: Failover 8.1.7 and 9i (32 and 64 bit); Scalable N/A; Solaris 8, Solaris 9
Sun Cluster HA for SAP: Failover 4.6D (32 and 64 bit) and 6.20; Scalable 4.6D (32 and 64 bit) and 6.20; Solaris 8, Solaris 9
Sun Cluster HA for SWIFTAlliance Access: Failover 4.1, 5.0; Scalable N/A; Solaris 8
Sun Cluster HA for Samba: Failover 2.2.2, 2.2.7, 2.2.7a, 2.2.8, 2.2.8a; Scalable N/A; Solaris 8, Solaris 9
Sun Cluster HA for Siebel: Failover 7.5; Scalable N/A; Solaris 8
Sun Cluster HA for Solaris Containers: Failover with OS; Scalable N/A; Solaris 10
Sun Cluster HA for Sun Java System Application Server: Failover 7.0, 7.0 update 1; Scalable N/A; Solaris 8, Solaris 9
Sun Cluster HA for Sun Java System Directory Server: Failover 4.12; Scalable N/A; Solaris 8, Solaris 9 (version 5.1)
Sun Cluster HA for Sun Java System Message Queue: Failover 3.0.1; Scalable N/A; Solaris 8, Solaris 9
Sun Cluster HA for Sun Java System Web Server: Failover 6.0; Scalable 4.1; Solaris 8, Solaris 9 (version 4.1)
Sun Cluster HA for Sybase ASE: Failover 12.0 (32 bit); Scalable N/A; Solaris 8
Sun Cluster HA for BEA WebLogic Server: Failover 7.0; Scalable N/A; Solaris 8, Solaris 9
Sun Cluster HA for WebSphere MQ: Failover 5.2, 5.3; Scalable N/A; Solaris 8, Solaris 9
Sun Cluster HA for WebSphere MQ Integrator: Failover 2.0.2, 2.1; Scalable N/A; Solaris 8, Solaris 9

Known Issues and Bugs

The following known issues and bugs affect the operation of the Sun Cluster 3.1 8/05 release.

scvxinstall Creates Incorrect vfstab Entries When Boot Device is Multipathed (4639243)

Problem Summary: scvxinstall creates incorrect /etc/vfstab entries when the boot device is multipathed.

Workaround: Run scvxinstall and choose to encapsulate. When the following message appears, type Ctrl-C to abort the reboot:


This node will be re-booted in 20 seconds. Type Ctrl-C to abort.

Edit the vfstab entry so /global/.devices uses the /dev/{r}dsk/cXtXdX name instead of the /dev/did/{r}dsk name. This revised entry enables VxVM to recognize it as the root disk. Rerun scvxinstall and choose to encapsulate. The vfstab file has the necessary updates. Allow the reboot to occur. The encapsulation proceeds as normal.

Procedure: How to Correct /etc/vfstab Errors for a Multipathed Boot Device

Steps
  1. Run scvxinstall and choose to encapsulate.

    The system displays the following message:


    This node will be re-booted in 20 seconds.  Type Ctrl-C to abort.
  2. Abort the reboot.


    Ctrl-C
  3. Edit the /etc/vfstab entry so /global/.devices uses the /dev/{r}dsk/cXtXdX name instead of the /dev/did/{r}dsk name.

    This revised entry enables VxVM to recognize it as the root disk.
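    As a hypothetical illustration (the device names will differ on your system), the /global/.devices entry might change from:

    /dev/did/dsk/d3s3 /dev/did/rdsk/d3s3 /global/.devices/node@1 ufs 2 no global

    to:

    /dev/dsk/c0t0d0s3 /dev/rdsk/c0t0d0s3 /global/.devices/node@1 ufs 2 no global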

  4. Rerun scvxinstall and choose to encapsulate.

    The /etc/vfstab file has the necessary updates. Allow the reboot to occur. The encapsulation proceeds as normal.

SAP liveCache Stop Method Times Out (4836272)

Problem Summary: The Sun Cluster HA for SAP liveCache data service uses the dbmcli command to start and stop liveCache. If you are running Solaris 9, the naming service might become unavailable when a cluster node's public network fails.

Workaround: Include one of the following entries for the publickey database in the /etc/nsswitch.conf files on each node that can be the primary for liveCache resources:

publickey: 
publickey:  files
publickey:  files [NOTFOUND=return] nis 
publickey:  files [NOTFOUND=return] nisplus

Adding one of the above entries, in addition to updates documented in Sun Cluster Data Service for SAP liveCache Guide for Solaris OS, ensures that the su command and the dbmcli command do not refer to the NIS/NIS+ name services. Bypassing the NIS/NIS+ name services ensures that the data service starts and stops correctly during a network failure.

nsswitch.conf Requirement Should Not Apply to passwd Database (4904975)

Problem Summary: The requirement for the nsswitch.conf file in Preparing the Nodes and Disks in Sun Cluster Data Service for SAP liveCache Guide for Solaris OS does not apply to the entry for the passwd database. If this requirement is also applied to the passwd database entry, the su command might hang on each node that can master the liveCache resource when the public network is down.

Workaround: On each node that can master the liveCache resource, ensure that the entry in the /etc/nsswitch.conf file for the passwd database is as follows:

passwd: files nis [TRYAGAIN=0]

sccheck Hangs (4944192)

Problem Summary: sccheck might hang if launched simultaneously from multiple nodes.

Workaround: Do not launch sccheck from any multi-console that passes commands to multiple nodes. sccheck runs can overlap, but should not be launched simultaneously.

Java Binaries Linked to Incorrect Java Version Cause HADB Agent to Malfunction (4968899)

Problem Summary: Currently, the HADB data service does not use the JAVA_HOME environment variable. Therefore, when HADB is invoked from the HADB data service, it takes its Java binaries from /usr/bin/. The Java binaries in /usr/bin/ must be linked to an appropriate version of Java (1.4 or later) for the HADB data service to work properly.

Workaround: If you do not object to changing the default Java version, perform the following procedure. As an example, this workaround assumes that the /usr/j2se directory contains the latest version of Java (1.4 or later).

  1. If you have a directory called java/ in the /usr/ directory, move it to a temporary location.

  2. From the /usr/ directory, link /usr/bin/java and all other Java-related binaries to the appropriate version of Java.


    # ln -s j2se java
    

If you do not want to change the default version available, assign the JAVA_HOME environment variable with the appropriate version of Java (J2SE 1.4 and above) in the /opt/SUNWappserver7/SUNWhadb/4/bin/hadbm script.
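A minimal sketch of that alternative, assuming that /usr/j2se contains J2SE 1.4 or later (the exact placement of these lines near the top of the hadbm script may vary):

JAVA_HOME=/usr/j2se
export JAVA_HOME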

Adding a New Cluster Node Requires Cluster Reboot (4971299)

Problem Summary: When a node is added to the cluster that runs Sun Cluster Support for Oracle Real Application Clusters and uses the VxVM cluster feature, the cluster feature running on other nodes does not recognize the new node.

Workaround: A fix for this problem is expected to be made available by VERITAS in VxVM 3.5 MP4 and VxVM 4.0 MP2. The fix for VxVM 4.1 is currently available.

To correct the problem if a code fix is not yet available, restart the Oracle database and reboot the cluster nodes. This step synchronizes the Oracle UDLM and updates the VxVM cluster feature to recognize the added node.


Note –

Do not install and configure Sun Cluster Support for Oracle Real Application Clusters on the new node until after you perform this step.


  1. From a cluster node other than the node that you just added, shut down the Oracle Real Application Clusters database.

  2. Reboot the same node on which you shut down the Oracle database.


    # scswitch -S -h thisnode
    # shutdown -g0 -y -i6
    

    Wait until the node is fully rebooted back into the cluster before you proceed to the next step.

  3. Restart the Oracle database.

  4. Repeat Step 1 through Step 3 on each remaining node that runs Sun Cluster Support for Oracle Real Application Clusters.

    • If a single node is capable of handling the Oracle database workload, you can perform these steps on multiple nodes simultaneously.

    • If more than one node is required to support the database workload, perform these steps on one node at a time.

HA-DB Reinitializes Without Spares (4973982)

Problem Summary: Due to bug 4974875, whenever autorecovery is performed, the database reinitializes itself without any spares. That bug has been fixed and integrated into HA-DB release 4.3. For HA-DB releases 4.2 and earlier, follow one of the procedures below to change the roles of the HA-DB nodes.

Workaround: Complete one of the following procedures to change the roles of the HA-DB nodes:

  1. Identify the HA-DB nodes that have their roles changed after autorecovery is successful.

  2. On all the nodes that you identified in Step 1, and one node at a time, disable the fault monitor for the HA-DB resource in question and restore the node's previous role by running the following command:


    # cladm noderole -db dbname -node nodeno -setrole role-before-auto_recovery
    
  3. Enable the fault monitor for the HA-DB resource in question.

    or

  1. Identify the HA-DB nodes that have their roles changed after autorecovery is successful.

  2. On all nodes that host the database, disable the fault monitor for the HA-DB resource in question.

  3. On any one of the nodes, execute the command for each HA-DB node that needs its role changed.


    # cladm noderole -db dbname -node nodeno -setrole role-before-auto_recovery
    

pnmd Not Accessible by the Other Node During Rolling Upgrade (4997693)

Problem Summary: If a rolling upgrade is not completed on all the nodes, the nodes that have not yet been upgraded cannot see the IPMP groups on the upgraded nodes.

Workaround: Finish upgrading all nodes on the cluster.

Date Field on Advanced Filter Panel Accepts Only mm/dd/yyyy Format (5075018)

Problem Summary: The date field on the Advanced Filter panel of SunPlex Manager accepts only the mm/dd/yyyy format. However, in non-English locale environments the expected date format differs from mm/dd/yyyy, and the date returned from the Calendar panel is not in mm/dd/yyyy format.

Workaround: Type the date range in the Advanced Filter panel in mm/dd/yyyy format. Do not use the Set... button to display the calendar and choose the date.

In the Japanese Locale, Error Messages From scrgadm Contain Junk Characters (5083147)

Problem Summary: In the Japanese locale, the error messages from scrgadm are not displayed correctly. The messages contain junk characters.

Workaround: Run the system locale in English to display the error messages in English.
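As a hedged alternative sketch, rather than changing the system default locale you can run an individual command in the C locale (the command shown is only illustrative):

# env LC_ALL=C scrgadm -pv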

The /usr/cluster/lib/cmass/ipmpgroupmanager.sh Script Unplumbs the IPv6 Interface (6174170)

Problem Summary: SunPlex Manager uses the /usr/cluster/lib/cmass/ipmpgroupmanager.sh script to delete IPMP groups and to delete adapters from IPMP groups. The script correctly updates the /etc/hostname6.adaptername file to remove just the group name, but it then runs the following ifconfig command to unplumb the IPv6 interface:


ifconfig adaptername inet6 unplumb

Workaround: Reboot the node to plumb up the interface. Alternatively, run the following ifconfig command on the node. This alternative workaround does not require the node to be rebooted.


ifconfig adaptername inet6 plumb up

The IPMP Group Page Should Populate the Adapter List Based on the IP Version Chosen by the User (6174805)

Problem Summary: The list of adapters displayed in the IPMP group pages does not depend on the IP version chosen by the user. The page displays a list of all adapters that do not have groups configured. The list should be updated when the IP Version radio button is selected, as follows:

Workaround: After selecting the IP version, choose from the list only an adapter that is enabled for the selected IP version.

When Moving an Adapter from IPv4 and IPv6 to IPv4 Only, the IPv4 Version Is Not Removed (6179721)

Problem Summary: The adapter list that is displayed in the IPMP group pages should depend on the IP version that the user chooses. However, the current SunPlex Manager has a bug that always displays the complete list of adapters, regardless of the IP version. SunPlex Manager should not let the user move an adapter that is enabled for both IPv4 and IPv6 to IPv4 only.

Workaround: The user should not attempt to move an adapter configured for both IPv4 and IPv6 to IPv4 only.

Configuration of Sun Java System Administration Server Fails if SUNWasvr Package is Not Installed (6196005)

Problem Summary: An attempt to configure the data service for Sun Java System Administration Server fails if the Sun Java System Administration Server is not installed. The attempt fails because the SUNW.mps resource type requires that the /etc/mps/admin/v5.2/cluster/SUNW.mps directory exists. This directory exists only if the SUNWasvr package is installed.

Workaround: To correct this problem, complete the following procedure.

Procedure: How to Install the SUNWasvr Package

Steps
  1. Log in as root or assume an equivalent role on a cluster node.

  2. Determine whether the SUNWasvr package is installed.


    # pkginfo SUNWasvr
    
  3. If the SUNWasvr package is not installed, install the package from the Sun Cluster CD-ROM by completing the following steps:

    1. Insert the Sun Cluster 2 of 2 CD-ROM into the appropriate drive.

    2. Go to the directory that contains the SUNWasvr package.


      # cd /cdrom/cdrom0/Solaris_sparc/Product/administration_svr/Packages
      
    3. Type the command to install the package.


      # pkgadd -d . SUNWasvr
      
    4. Remove the CD-ROM from the drive.

Change to startd/duration Does Not Become Effective Immediately (6196325)

Problem Summary: As of Solaris 10, the Sun Cluster HA for NFS data service sets the startd/duration property to transient for the Service Management Facility (SMF) services /network/nfs/server, /network/nfs/status, and /network/nfs/nlockmgr. The intention of this property setting is to prevent SMF from restarting these services in the event of any failure. A bug in SMF causes SMF to restart /network/nfs/status and /network/nfs/nlockmgr after the first failure despite this property setting.

Workaround: For Sun Cluster HA for NFS to run correctly, run the following command on all nodes after creating the first Sun Cluster HA for NFS resource and before bringing the Sun Cluster HA for NFS resource online.


# pkill -9 -x 'startd|lockd'

If you are booting Sun Cluster for the first time, run the above command on all the potential primary nodes, after creating the first Sun Cluster HA for NFS resource and before bringing the Sun Cluster HA for NFS resource online.
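To confirm the property setting on one of the affected services, a query like the following can be used (a sketch, shown for the nfs/status service; the reported value should be transient after the data service sets it):

# svcprop -p startd/duration svc:/network/nfs/status:default
transient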

scinstall Does Not Copy All Common Agent Container Security Files (6203133)

Problem Summary: When a node is added to a cluster, the scinstall utility checks for the presence of Network Security Services (NSS) files on the node that you are adding. These files and security keys are required by the common agent container. If the NSS files exist, the utility copies the common agent container security files from the sponsoring node to the added node. But if the sponsoring node does not have the NSS security keys installed, the copy fails and scinstall processing quits.

Workaround: Perform the following procedure to install NSS software, recreate the security keys, and restart the common agent container on the existing cluster nodes.

Procedure: How to Install NSS Software When Adding a Node to a Cluster

Perform the following procedure on all existing cluster nodes as superuser or a role that permits the appropriate access.

Before You Begin

Have available the Sun Cluster 1 of 2 CD-ROM. The NSS packages are located at /cdrom/cdrom0/Solaris_arch/Product/shared_components/Packages/, where arch is sparc or x86.

Steps
  1. On each node, stop the Sun Web Console agent.


    # /usr/sbin/smcwebserver stop
    
  2. On each node, stop the security file agent.


    # /opt/SUNWcacao/bin/cacaoadm stop
    
  3. On each node, determine whether NSS packages are installed and, if so, what version.


    # cat /var/sadm/pkg/SUNWtls/pkginfo | grep SUNW_PRODVERS
    SUNW_PRODVERS=3.9.4
  4. If a version earlier than 3.9.4 is installed, remove the existing NSS packages.


    # pkgrm packages
    

    The following table lists the applicable packages for each hardware platform.

    Hardware Platform     NSS Package Names
    SPARC                 SUNWtls SUNWtlsu SUNWtlsx
    x86                   SUNWtls SUNWtlsu

  5. On each node, if you removed NSS packages or none were installed, install the latest NSS packages from the Sun Cluster 1 of 2 CD-ROM.

    • For the Solaris 8 or Solaris 9 OS, use the following command:


      # pkgadd -d . packages
      
    • For the Solaris 10 OS, use the following command:


      # pkgadd -G -d . packages
      
  6. Change to a directory that does not reside on the CD-ROM and eject the CD-ROM.


    # eject cdrom
    
  7. On each node, create the NSS security keys.


    # /opt/SUNWcacao/bin/cacaoadm create-keys
    
  8. On each node, start the security file agent.


    # /opt/SUNWcacao/bin/cacaoadm start
    
  9. On each node, start the Sun Web Console agent.


    # /usr/sbin/smcwebserver start
    
  10. On the node that you are adding to the cluster, restart the scinstall utility and follow procedures to install the new node.

Deleting a Public Interface Group Which has IPv4 and IPv6 Adapters Sometimes Fails From SunPlex Manager (6209229)

Problem Summary: Deleting a public interface group that has both IPv4 and IPv6 enabled adapters sometimes fails when the IPv6 adapter is deleted from the group. The following error message is displayed:


ifparse: Operation netmask not supported for inet6
/sbin/ifparse
/usr/cluster/lib/cmass/ipmpgroupmanager.sh[8]:
/etc/hostname.adaptname.tmpnumber: cannot open

Workaround: Edit the /etc/hostname6.adaptername file to include the following lines:


plumb
up
-standby

Run the following command on the cluster node:


ifconfig adaptername inet6 plumb up -standby

Memory Leak During Rebooting Patch (Node) Procedure (6210440)

Problem Summary: Sun Cluster software hangs when attempting to perform a rolling upgrade from Sun Cluster 3.1 9/04 software to Sun Cluster 3.1 8/05 software due to a memory problem triggered when the first upgraded node is rebooted in cluster mode.

Workaround: If you are running Sun Cluster 3.1 9/04 software or the patch equivalent (revision 09 or higher) and want to perform a Rebooting Patch procedure to upgrade to Sun Cluster 3.1 8/05 software or the patch equivalent (revision 12), you must complete the following steps before you upgrade your cluster or apply this core patch.

Procedure: How to Prepare for an Upgrade to Sun Cluster 3.1 8/05 Software

Steps
  1. Choose the type of patch installation procedure that is appropriate to your availability requirements:

    • Rebooting Patch (Node)

    • Rebooting Patch (Cluster and Firmware)

    These patch installation procedures are provided in Chapter 8, Patching Sun Cluster Software and Firmware, in Sun Cluster System Administration Guide for Solaris OS.

  2. Apply one of the following patches depending on the operating system you are using:

    • 117909-11 Sun Cluster 3.1 Core Patch for SunOS 5.9 X86

    • 117950-11 Sun Cluster 3.1 Core Patch for Solaris 8

    • 117949-11 Sun Cluster 3.1 Core Patch for Solaris 9

    You must complete the entire patch installation procedure before upgrading to Sun Cluster 3.1 8/05 software or the patch equivalent (revision 12).

Zone Install and Zone Boot Does Not Work After Sun Cluster Install (6211453)

Problem Summary: Sun Cluster software installation adds exclude: lofs to /etc/system. Because lofs is critical to the function of zones, both zone install and zone boot fail.

Workaround: Before attempting to create any zones, perform the following procedure.

Procedure: How to Run Zone Install and Zone Boot After a Sun Cluster Installation

Steps
  1. If you are running Sun Cluster HA for NFS, exclude from the automounter map all files that are part of the highly available local file system that is exported by the NFS server.

  2. On each cluster node, edit the /etc/system file to remove any exclude: lofs lines.

  3. Reboot the cluster.

Solaris 10 Requires Additional Steps to Recover From the Failure of a Cluster File System to Mount at Boot Time (6211485)

Problem Summary: The Solaris 10 OS requires different recovery procedures than previous versions of the Solaris OS when a cluster file system fails to mount at boot time. Rather than presenting a login prompt, the mountgfsys service might fail and put the node into the maintenance state. The output messages are similar to the following:


WARNING - Unable to globally mount all filesystems.
Check logs for error messages and correct the problems.
 
May 18 14:06:58 pkaffa1 svc.startd[8]: system/cluster/mountgfsys:default misconfigured
 
May 18 14:06:59 pkaffa1 Cluster.CCR: /usr/cluster/bin/scgdevs: 
Filesystem /global/.devices/node@1 is not available in /etc/mnttab.

Workaround: After you repair the mount problem for the cluster file system, you must manually bring the mountgfsys service back online. Run the following commands to bring the mountgfsys service online and to synchronize the global devices namespace:


# svcadm clear svc:/system/cluster/mountgfsys:default
# svcadm clear svc:/system/cluster/gdevsync:default

Boot processing will now continue.

Unsupported Upgrade to the Solaris 10 OS Corrupts the /etc/path_to_inst File (6216447)

Problem Summary: Sun Cluster 3.1 8/05 software does not support upgrade to the March 2005 release of the Solaris 10 OS. An attempt to upgrade to that release might corrupt the /etc/path_to_inst file. This file corruption would prevent the node from booting successfully. The corrupted file would appear similar to the following, in that it contains duplicate entries for some of the same device names except that the physical device name contains the prefix /node@nodeid:


…
"/node@nodeid/physical_device_name" instance_number "driver_binding_name"
…
"/physical_device_name" instance_number "driver_binding_name"

In addition, some key Solaris services might fail to start, including networking and file-system mounting, and messages stating that the service is misconfigured might be displayed on the console.

Workaround: Use the following procedure.

Procedure: How to Recover From a Corrupted /etc/path_to_inst File

The following procedure describes how to recover from an upgrade to Solaris 10 software that results in a corrupted /etc/path_to_inst file.


Note –

This procedure does not attempt to correct any other problem that can be associated with upgrading a Sun Cluster configuration to the March 2005 release of the Solaris 10 OS.


Perform this procedure on each node that was upgraded to the March 2005 release of the Solaris 10 OS.

Before You Begin

If a node cannot boot, boot the node from the network or from a CD-ROM. When the node is up, run the fsck command and mount the local file system on a mount point such as /a. In Step 2, use the name of the local-file-system mount in the path to the /etc directory.

Steps
  1. Become superuser or an equivalent role on the node.

  2. Change to the /etc directory.


    # cd /etc
    
  3. Determine whether the path_to_inst file is corrupted.

    The following characteristics are present if the path_to_inst file is corrupted:

    • The file includes a block of entries that contain /node@nodeid at the beginning of physical device names.

    • Some of the same entries are listed again but without the /node@nodeid prefix.

    If the file is not of this format, then some other problem exists. Do not continue this procedure. Contact your Sun service representative if you need assistance.

  4. If the path_to_inst file is corrupted as described in Step 3, run the following commands.


    # cp path_to_inst path_to_inst.bak
    # sed -n -e "/^#/p" -e "s,node@./,,p" path_to_inst.bak > path_to_inst
    
  5. Inspect the path_to_inst file to ensure that the file is repaired.

    A repaired file will reflect the following changes:

    • The /node@nodeid prefix is removed from all physical device names.

    • There are no duplicate entries for any physical device name.

  6. Ensure that the permissions of the path_to_inst file are read only.


    # ls -l /etc/path_to_inst
    -r--r--r--   1 root     root        2946 Aug  8  2005 path_to_inst
  7. Perform a reconfiguration reboot into non-cluster mode.


    # reboot -- -rx
    
  8. After you repair all affected cluster nodes, go to How to Upgrade Dependency Software Before a Nonrolling Upgrade in Sun Cluster Software Installation Guide for Solaris OS to continue the upgrade process.

CMM Reconfiguration Callback Timed Out; Node Aborting (6217017)

Problem Summary: On x86 clusters that use ce adapters for the cluster transport, a node under heavy load can be halted by the CMM as a result of a split brain.

Workaround: For x86 clusters using the PCI Gigaswift Ethernet card on the private network, add the following to /etc/system:


set ce:ce_tx_ring_size=8192

Nodes Might Panic When a Node Joins or Leaves a Cluster With More Than Two Nodes, Running Solaris 10, and Using Hitachi Storage (6227074)

Problem Summary: On clusters with more than two nodes, running Solaris 10, and using Hitachi storage, all of the cluster nodes might panic when a node joins or leaves the cluster.

Workaround: No current workaround exists. If you encounter this problem, contact your Sun Service provider about acquiring a patch.

Java ES 2005Q1 installer Does Not Install Application Server 8.1 EE Completely (6229510)

Problem Summary: Application Server Enterprise Edition 8.1 cannot be installed by the Java ES 2005Q1 installer if the Configure Later option is selected. Selecting the Configure Later option installs the Platform Edition and not the Enterprise Edition.

Workaround: While installing the Application Server Enterprise Edition 8.1 using the Java ES installer, use the Configure Now option to install. Selecting the Configure Later option installs the Platform Edition only.

scvxinstall Causes rpcbind to Restart (6237044)

Problem Summary: A restart of the rpcbind SMF service can affect Solaris Volume Manager operation. Installation of the VERITAS VxVM 4.1 packages causes the rpcbind SMF service to be restarted.

Workaround: Restart the Solaris Volume Manager service after either restarting the rpcbind SMF service or installing VxVM 4.1 on a Solaris 10 host:


svcadm restart svc:/network/rpc/scadmd:default

On a System Using Solaris 10, Sun Cluster Data Services Cannot be Installed After Sun Cluster is Installed Using the Java ES installer (6237159)

Problem Summary: This problem occurs only on systems using Solaris 10. If you use the Java ES installer on the Sun Cluster Agents CD-ROM to install Sun Cluster data services after the Sun Cluster core has been installed, the installer fails with the following messages:


The installer has determined that you must manually remove incompatible versions 
of the following components before proceeding: 

[Sun Cluster 3.1 8/05, Sun Cluster 3.1 8/05, Sun Cluster 3.1 8/05]

After you remove these components, go back. 
Component                       Required By ...

1. Sun Cluster 3.1 8/05     HA Sun Java System Message Queue : HA Sun Java 
                            System Message Queue 
2. Sun Cluster 3.1 8/05     HA Sun Java System Application Server : HA Sun Java 
									System Application Server 
3. Sun Cluster 3.1 8/05     HA/Scalable Sun Java System Web Server : HA/Scalable 
									Sun Java System Web Server 
4. Select this option to go back to the component list. This process might take
									a few moments while the installer rechecks your
									system for installed components.

Select a component to see the details. Press 4 to go back the product list
[4] {"<" goes back, "!" exits}

Workaround: On a system using Solaris 10, install the Sun Cluster data service manually by using pkgadd or scinstall. If the Sun Cluster data service has a dependency on shared components, install the shared components manually by using pkgadd. The following link lists the shared components for each product:

http://docs.sun.com/source/819-0062/preparing.html#wp28178
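As a hedged sketch of the manual pkgadd approach (the CD-ROM path and the SUNWscnfs package name for Sun Cluster HA for NFS are assumptions; check the layout of your media):

# cd /cdrom/cdrom0/Solaris_sparc/Product/sun_cluster_agents/Solaris_10/Packages
# pkgadd -d . SUNWscnfs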

/usr/sbin/smcwebserver: ... j2se/opt/javahelp/lib: does not exist Error Message (6238302)

Problem Summary: During startup of Sun Web Console, the following message might be displayed.


/usr/sbin/smcwebserver:../../../../j2se/opt/javahelp/lib: does not exist

Workaround: The message is safe to ignore. You can manually add a link in /usr/j2se/opt to point to the correct Java Help 2.0 by entering the following:


# ln -s /usr/jdk/packages/javax.help-2.0 /usr/j2se/opt/javahelp

Node Panic After OS Upgrade to Solaris 10 From Sun Cluster 3.1 4/04 on Solaris 9 (6245238)

Problem Summary: After upgrading from the Solaris 9 OS to the Solaris 10 OS on a cluster that runs Sun Cluster 3.1 4/04 software or earlier, booting the node into noncluster mode results in the node panicking.

Workaround: Install one of the following patches before you upgrade from Solaris 9 to Solaris 10 software.

SunPlex Installer is Not Creating Resources in Resource Groups (6250327)

Problem Summary: When using SunPlex Installer to configure Sun Cluster HA for Apache and Sun Cluster HA for NFS data services as part of Sun Cluster installation, SunPlex Installer does not create the necessary device groups and resources in the resource groups.

Workaround: Do not use SunPlex Installer to install and configure data services. Instead, follow procedures in the Sun Cluster Software Installation Guide for Solaris OS and the Sun Cluster Data Service for Apache Guide for Solaris OS or Sun Cluster Data Service for NFS Guide for Solaris OS manuals to install and configure these data services.

HA-NFS Changes to Support NFSv4 Fix for 6244819 (6251676)

Problem Summary: NFSv4 is not supported in Sun Cluster 3.1 8/05.

Workaround: Solaris 10 introduces a new version of the NFS protocol, NFSv4, which is the default protocol for Solaris 10 clients and servers. The Sun Cluster 3.1 8/05 release supports Solaris 10; however, it does not support the use of the NFSv4 protocol with the Sun Cluster HA for NFS service on the cluster to achieve high availability for the NFS server. To ensure that no NFS client can use the NFSv4 protocol to communicate with the NFS server on Sun Cluster software, edit the /etc/default/nfs file to change the line NFS_SERVER_VERSMAX=4 to NFS_SERVER_VERSMAX=3. This change ensures that only the NFSv3 protocol is used by the clients of the Sun Cluster HA for NFS service on the cluster.

Note: Use of Solaris 10 cluster nodes as NFSv4 clients is not affected by this restriction or by the above workaround. The cluster nodes can act as NFSv4 clients.
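As a quick check after editing the file (a sketch; commented default lines might also appear in the output), you can confirm the setting:

# grep NFS_SERVER_VERSMAX /etc/default/nfs
NFS_SERVER_VERSMAX=3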

metaset Command Fails After the rpcbind Service is Restarted (6252216)

Problem Summary: The metaset command fails after the rpcbind service is restarted.

Workaround: Ensure that you are not performing any configuration operations on your Sun Cluster system, then kill the rpc.metad process using the following command:


# pkill -9 rpc.metad

Node Panic Due to metaclust Return Step Error: RPC: Program not Registered (6256220)

Problem Summary: When the cluster is shut down, some of the nodes might panic because of the order in which services are stopped on the nodes. If the RPC service is stopped before the RAC framework is stopped, errors can result when the SVM resource attempts to reconfigure. The error is reported back to the RAC framework, which causes a node panic. This problem has been observed with Sun Cluster running the RAC framework with the SVM storage option. There should be no impact to Sun Cluster functionality.

Workaround: The panic is by design and can safely be ignored, although clean-up of the saved core files should be performed to reclaim filesystem space.

NIS Address Resolution Hangs and Causes Failure to Fail Over (6257112)

Problem Summary: In the Solaris 10 OS, the /etc/nsswitch.conf file has been modified to include NIS in the ipnodes entry.


ipnodes:    files nis [NOTFOUND=return]

This causes the address resolution to hang if NIS becomes inaccessible, either due to a NIS problem or due to failure of all public network adapters. This problem eventually causes failover resources or shared address resources to fail to fail over.

Workaround: Complete the following before you create logical host or shared address resources:

  1. Change the ipnodes entry in the /etc/nsswitch.conf file from [NOTFOUND=return] to [TRYAGAIN=0].


    ipnodes:    files nis [TRYAGAIN=0]
  2. Ensure that all IP addresses for logical hosts and shared addresses are added to the /etc/inet/ipnodes file, in addition to the /etc/inet/hosts file.
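    For example (the address and hostname are hypothetical), the same entry would appear in both /etc/inet/hosts and /etc/inet/ipnodes:

    192.168.10.50   lhost-oracle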

scinstall Fails to Upgrade the Sun Cluster Data Service for Sun Java System Application Server EE (6263451)

Problem Summary: While attempting to update the Sun Cluster Data Service for Sun Java System Application Server EE from 3.1 9/04 to 3.1 8/05, scinstall does not remove the package for j2ee and displays the following message:


Skipping "SUNWscswa" - already installed

Sun Cluster Data Service for Sun Java System Application Server EE is not upgraded.

Workaround: Manually remove and re-add the SUNWscswa package by using the following commands:


# pkgrm SUNWscswa
# pkgadd [-d device] SUNWscswa

scnas: NAS Filesystem did not get Mounted During Bootup (6268260)

Problem Summary: The NFS file system cannot be checked for viability before a failover or before scswitch is used to relocate the data service to a node. If a node does not have the NFS file system mounted, a switchover or failover to that node results in a failure of the data service that requires manual intervention. A mechanism like HAStoragePlus is needed to check the viability of the file system before attempting the failover or switchover to that node.

Workaround: File systems that use NAS filers (with entries in /etc/vfstab) are mounted outside of Sun Cluster software control, which means that Sun Cluster software is unaware of any problems. If the file system becomes unavailable, some data services, such as Sun Cluster HA for Oracle, fail when data service methods, such as START or STOP, are executed.

Failure of these methods may lead to several possibilities:

Perform one of the following procedures to avoid the above problems:

HADB Fault Monitor Will Not Restart the ma Process (6269813)

Problem Summary: The Sun Cluster HADB fault monitor does not restart the ma process when the process is killed or exits abruptly.

Workaround: This is the expected behavior, and the data service is not affected.

rgmd Dumps Core During Rolling Upgrade (6271037)

Problem Summary: Attempting to delete a resource during a rolling upgrade before all nodes are running the new software might cause one of the nodes to panic. Do not delete a resource until all nodes have the new software installed.

Workaround: During a rolling upgrade, do not delete an RGM resource until all nodes have the new software installed.

HADB Database Fails to Restart After Shut Down and Boot of Cluster (6276868)

Problem Summary: The HADB database fails to restart after the cluster nodes are rebooted. The user will not be able to access the database.

Workaround: Restart one of your management data services by completing the following procedure. If the following procedure does not resolve the problem, delete the database and recreate it.

Procedure: Restarting a Management Data Service

Steps
  1. On the node to be shut down, type the following command. In the -h option, do not include the name of the node on which you want the management agent to be stopped.


    scswitch -z -g hadb-resource-grp -h node1,node2...
    
  2. Switch the resource group back to the original node.


    scswitch -Z -g hadb-resource-grp
    
  3. Check the status of the database. Wait until the database comes to the “stopped” state.


    hadbm status -n database
    
  4. Start the database.


    hadbm start database
    

SUNW.iim Has Size 0 After Adding SUNWiimsc Package (6277593)

Problem Summary: The SUNWiimsc package in sun_cluster_agents is invalid. After adding this package, SUNW.iim in /opt/SUNWiim/cluster has size 0.

Workaround: Replace the SUNW.iim file with the correct version and register the data service again by completing the following steps.

Procedure: How to Install the Correct SUNW.iim Package

Steps
  1. Copy the correct SUNW.iim file from the CD-ROM.


    # cp 2of2_CD/Solaris_arch/Product/sun_cluster_agents/Solaris_os
    /Packages/SUNWiimsc/reloc/SUNWiim/cluster/SUNW.iim /opt/SUNWiim/cluster/SUNW.iim
    
  2. Remove any existing SUNW.iim registration.


    # rm /usr/cluster/lib/rgm/rtreg/SUNW.iim
    
  3. Register the data service with Sun Cluster.


    sh 2of2_CD/Solaris_arch/Product/sun_cluster_agents/
    Solaris_os/Packages/SUNWiimsc/install/postinstall

Adding a New IPMP Group Through SunPlex Manager Sometimes Fails (6278059)

Problem Summary: Trying to add a new IPMP group by using SunPlex Manager sometimes fails with the following message.


An error was encountered by the system. If you were performing an action 
when this occurred, review the current system state prior to proceeding.

Workaround: Perform one of the following procedures depending on the version of IP you are running.

Procedure: Adding a New IPMP Group Through SunPlex Manager When You Are Using IPv4

Steps
  1. Enter the following command:


    ifconfig interface inet plumb group groupname [addif address deprecated] 
    netmask + broadcast + up -failover
    
  2. If a test address has been provided, update the /etc/hostname.interface file to add the following:


    group groupname addif address netmask + broadcast + deprecated -failover up
  3. If a test address has not been provided, update the /etc/hostname.interface file to add the following:


    group groupname netmask + broadcast + -failover up

Procedure: Adding a New IPMP Group Through SunPlex Manager When You Are Using IPv6

Steps
  1. Enter the following command:


    ifconfig interface inet6 plumb up group groupname
    
  2. Update the /etc/hostname6.interface file to add the following entries:


    group groupname plumb up
  3. If the /etc/hostname6.interface file does not already exist, create the file and add the entries mentioned above.

HADB Resource Keeps Restarting After Panicking One of the Cluster Nodes (6278435)

Problem Summary: After the resource is brought online and one of the nodes in the cluster is panicked (for example, by a shutdown or uadmin), the resource keeps restarting on the other nodes. The user will not be able to issue any management commands.

Workaround: To prevent this problem, log onto a single node as root or a role with equivalent access privileges and increase the probe_timeout of the resource to a value of 600 seconds, using the following command:


scrgadm -c -j hadb-resource -x Probe_timeout=600

To verify your change, shut down one of the cluster nodes and check to make sure that the resource does not go into the degraded state.

On Solaris 10, Scalable Services do not Work When Both the Public Networks and Sun Cluster Transports use bge(7D)-driven Adapters (6278520)

Problem Summary: The load balancing feature of Sun Cluster scalable services does not work on Solaris 10 systems when both the public networks and Sun Cluster transports use bge-driven adapters. Platforms with built-in NICs that use bge include Sun Fire V210, V240, and V250.

Failover data services are not affected by this bug.

Workaround: Do not configure public networking and cluster transports to both use bge-driven adapters.

Cannot See the System Log from SunPlex Manager When the Default Locale is set to Multibyte Locale (6281445)

Problem Summary: When the SunPlex Manager default locale is set to a multibyte locale, you cannot see the system log.

Workaround: Set the default locale to C, or view the system log (/var/adm/messages) manually through a command-line shell.

Cannot Bring Node Agent Online Using scswitch on Node1 (6283646)

Problem Summary: The instances and node agents must be configured to listen on the failover IP address or hostname. When the node agents and Sun Java System Application Server instances are created, the physical node hostname is set by default. The HTTP IP address and the client-hostname are changed in domain.xml, but the Domain Admin Server is not restarted, so the changes do not take effect. Therefore, the node agents come up only on the physical node where they were configured, but not on the other node.

Workaround: Change the client-hostname property in the Node Agent section of domain.xml to listen on the failover IP and restart the Domain Admin Server for the changes to take effect.

SunPlex Manager and Cacao 1.1 Only Support JDK 1.5.0_03 (6288183)

Problem Summary: When using SunPlex Manager in Sun Cluster 3.1 8/05 with Cacao 1.1, only JDK 1.5.0_03 is supported.

Workaround: Manually install JDK 1.5 by completing the following procedure.

Procedure: How to Manually Install JDK 1.5

Steps
  1. Add JDK 1.5 from the Java ES 4 shared components directory (see the Java ES 4 Release Notes for instructions).

  2. Stop cacao.


    # /opt/SUNWcacao/bin/cacaoadm stop
    
  3. Start cacao.


    # /opt/SUNWcacao/bin/cacaoadm start
    

After Installing SC3.1 (8/05) Patch 117949–14 on Solaris 9 and Patch 117950–14 on Solaris 8 Java VM Errors Occur During Boot (6291206)

Problem Summary: This bug is seen on a Sun Cluster system that runs Sun Cluster 3.1 9/04 plus patches and that is upgraded to Sun Cluster 3.1 8/05 by applying patch 117949-14 on a system running Solaris 9 or patch 117950-14 on a system running Solaris 8. The following error message is displayed after the machine boots:


# An unexpected error has been detected by HotSpot Virtual Machine:
#
#  SIGSEGV (0xb) at pc=0xfaa90a88, pid=3102, tid=1
#
# Java VM: Java HotSpot(TM) Client VM (1.5.0_01-b07 mixed mode, sharing)
# Problematic frame:
# C  [libcmas_common.so+0xa88]  newStringArray+0x70
#
# An error report file with more information is saved as /tmp/hs_err_pid3102.log
#
# If you would like to submit a bug report, please visit:
#   http://java.sun.com/webapps/bugreport/crash.jsp
#

Workaround: When you upgrade from Sun Cluster 3.1 9/04 to Sun Cluster 3.1 8/05, install the SPM patch in addition to the core patch by entering the appropriate command below.

On a system running Solaris 8, run the following command after applying core patch 117950-14:


 patchadd patchdir/118626-04

On a system running Solaris 9, run the following command after patch 117949-14 has been applied:


patchadd patchdir/118627-04 

Directory Server and Administration Server Resource Registration Sometimes Fails (6298187)

Problem Summary: The resource registration sometimes fails for Directory Server and Administration Server. The system will display the following message:


Registration file not found for "SUNW.mps" in /usr/cluster/lib/rgm/rtreg

Workaround: Register the missing file directly from the package location by entering one of the following commands:

Solaris 10 Cluster Nodes May Fail to Communicate With Machines That Have Both IPv4 and IPv6 Address Mappings (6306113)

Problem Summary: If a Sun Cluster node running Solaris 10 does not have IPv6 interfaces configured for public networking (IPv6 is configured only on the cluster interconnects), it cannot access machines that have both an IPv4 and an IPv6 address mapping in a name service, such as NIS. Applications such as telnet and traceroute that choose the IPv6 address over IPv4 will see their packets sent to the cluster transport adapters and dropped.

Workaround: Use one of the following workarounds, depending on the configuration of your cluster.

Patches and Required Firmware Levels

This section provides information about patches for Sun Cluster configurations. If you are upgrading to Sun Cluster 3.1 8/05, see How to Prepare for an Upgrade to Sun Cluster 3.1 8/05 Software.


Note –

You must be a registered SunSolveTM user to view and download the required patches for the Sun Cluster product. If you do not have a SunSolve account, contact your Sun service representative or sales engineer, or register online at http://sunsolve.sun.com.


PatchPro

PatchPro is a patch-management tool designed to ease the selection and download of patches required for installation or maintenance of Sun Cluster software. PatchPro provides a Sun Cluster-specific Interactive Mode tool to make the installation of patches easier and an Expert Mode tool to maintain your configuration with the latest set of patches. Expert Mode is especially useful for those who want to get all of the latest patches, not just the high availability and security patches.

To access the PatchPro tool for Sun Cluster software, go to http://www.sun.com/PatchPro/, click on “Sun Cluster,” then choose either Interactive Mode or Expert Mode. Follow the instructions in the PatchPro tool to describe your cluster configuration and download the patches.

SunSolve Online

The SunSolveTM Online Web site provides 24-hour access to the most up-to-date information regarding patches, software, and firmware for Sun products. Access the SunSolve Online site at http://sunsolve.sun.com for the most current matrixes of supported software, firmware, and patch revisions.

Sun Cluster 3.1 8/05 third-party patch information is provided through SunSolve Info Docs. This Info Doc page provides third-party patch information for specific hardware that you intend to use in a Sun Cluster 3.1 environment. To locate this Info Doc, log on to SunSolve and access the Simple Search selection from the top of the main page. From the Simple Search page, click on the Info Docs box and type Sun Cluster 3.x Third-Party Patches in the search criteria box.

Before you install Sun Cluster 3.1 8/05 software and apply patches to a cluster component (Solaris OS, Sun Cluster software, volume manager software, data services software, or disk hardware), review each README file that accompanies the patches that you retrieved. All cluster nodes must have the same patch level for proper cluster operation.

For specific patch procedures and tips on administering patches, see Chapter 8, Patching Sun Cluster Software and Firmware, in Sun Cluster System Administration Guide for Solaris OS.

Sun Cluster 3.1 8/05 Documentation

The Sun Cluster 3.1 8/05 user documentation set consists of the following collections:

The Sun Cluster 3.1 8/05 user documentation is available in PDF and HTML format on the SPARC and x86 versions of the Sun Cluster 3.1 8/05 CD-ROM. For more information, see the Solaris_arch/Product/sun_cluster/index.html file on the SPARC or x86 versions of the Sun Cluster 3.1 8/05 CD-ROM, where arch is sparc or x86. This index.html file enables you to read the PDF and HTML manuals directly from the CD-ROM and to access instructions to install the documentation packages.


Note –

The SUNWsdocs package must be installed before you install any Sun Cluster documentation packages. You can use pkgadd to install the SUNWsdocs package. The SUNWsdocs package is located in the Solaris_arch/Product/sun_cluster/Solaris_ver/Packages/ directory of the Sun Cluster 3.1 8/05 CD-ROM, where arch is sparc or x86, and ver is either 8 for Solaris 8, 9 for Solaris 9, or 10 for Solaris 10. The SUNWsdocs package is also automatically installed when you run the installer program from the Solaris 10 Documentation CD-ROM.
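
For example, assuming that the CD-ROM is mounted at /cdrom/cdrom0 and that the cluster node is a SPARC system running Solaris 9 (substitute the arch and ver values for your platform), you might install SUNWsdocs as follows:


# cd /cdrom/cdrom0/Solaris_sparc/Product/sun_cluster/Solaris_9/Packages
# pkgadd -d . SUNWsdocs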


In addition, the docs.sun.comSM web site enables you to access Sun Cluster documentation on the Web. You can browse the docs.sun.com archive or search for a specific book title or subject at the following Web site:

http://docs.sun.com

Sun Cluster 3.1 8/05 Software Collection for Solaris OS (SPARC Platform Edition)

Table 2 Sun Cluster 3.1 8/05 Software Collection for Solaris OS (SPARC Platform Edition): Software Manuals

Part Number     Book Title
819-0421        Sun Cluster Concepts Guide for Solaris OS
819-0579        Sun Cluster Overview for Solaris OS
819-0420        Sun Cluster Software Installation Guide for Solaris OS
819-0580        Sun Cluster System Administration Guide for Solaris OS
819-0581        Sun Cluster Data Services Developer’s Guide for Solaris OS
819-0427        Sun Cluster Error Messages Guide for Solaris OS
819-0582        Sun Cluster Reference Manual for Solaris OS
819-0703        Sun Cluster Data Services Planning and Administration Guide for Solaris OS

Table 3 Sun Cluster 3.1 8/05 Software Collection for Solaris OS (SPARC Platform Edition): Individual Data Service Manuals

Part Number     Book Title
819-1250        Sun Cluster Data Service for Agfa IMPAX Guide for Solaris OS
817-6998        Sun Cluster Data Service for Apache Guide for Solaris OS
819-1085        Sun Cluster Data Service for Apache Tomcat Guide for Solaris OS
819-0691        Sun Cluster Data Service for BroadVision One-To-One Enterprise Guide for Solaris OS
819-1082        Sun Cluster Data Service for DHCP Guide for Solaris OS
819-0692        Sun Cluster Data Service for DNS Guide for Solaris OS
819-1088        Sun Cluster Data Service for MySQL Guide for Solaris OS
819-1247        Sun Cluster Data Service for N1 Grid Service Provisioning System for Solaris OS
819-0693        Sun Cluster Data Service for NetBackup Guide for Solaris OS
817-6999        Sun Cluster Data Service for NFS Guide for Solaris OS
819-1248        Sun Cluster Data Service for Oracle Application Server Guide for Solaris OS
819-1087        Sun Cluster Data Service for Oracle E-Business Suite Guide for Solaris OS
819-0694        Sun Cluster Data Service for Oracle Guide for Solaris OS
819-0583        Sun Cluster Data Service for Oracle Real Application Clusters Guide for Solaris OS
819-1081        Sun Cluster Data Service for Samba Guide for Solaris OS
819-0695        Sun Cluster Data Service for SAP DB Guide for Solaris OS
819-0696        Sun Cluster Data Service for SAP Guide for Solaris OS
819-0697        Sun Cluster Data Service for SAP liveCache Guide for Solaris OS
819-0698        Sun Cluster Data Service for SAP Web Application Server Guide for Solaris OS
819-0699        Sun Cluster Data Service for Siebel Guide for Solaris OS
819-2664        Sun Cluster Data Service for Solaris Containers Guide
819-1089        Sun Cluster Data Service for Sun Grid Engine Guide for Solaris OS
817-7000        Sun Cluster Data Service for Sun Java System Application Server Guide for Solaris OS
819-0700        Sun Cluster Data Service for Sun Java System Application Server EE (HADB) Guide for Solaris OS
817-7002        Sun Cluster Data Service for Sun Java System Message Queue Guide for Solaris OS
817-7003        Sun Cluster Data Service for Sun Java System Web Server Guide for Solaris OS
819-1086        Sun Cluster Data Service for SWIFTAlliance Access Guide for Solaris OS
819-1249        Sun Cluster Data Service for SWIFTAlliance Gateway Guide for Solaris OS
819-0701        Sun Cluster Data Service for Sybase ASE Guide for Solaris OS
819-0702        Sun Cluster Data Service for WebLogic Server Guide for Solaris OS
819-1084        Sun Cluster Data Service for WebSphere MQ Integrator Guide for Solaris OS
819-1083        Sun Cluster Data Service for WebSphere MQ Guide for Solaris OS

Sun Cluster 3.1 8/05 Software Collection for Solaris OS (x86 Platform Edition)

Table 4 Sun Cluster 3.1 8/05 Software Collection for Solaris OS (x86 Platform Edition): Software Manuals

Part Number     Book Title
819-0421        Sun Cluster Concepts Guide for Solaris OS
819-0579        Sun Cluster Overview for Solaris OS
819-0420        Sun Cluster Software Installation Guide for Solaris OS
819-0580        Sun Cluster System Administration Guide for Solaris OS
819-0581        Sun Cluster Data Services Developer’s Guide for Solaris OS
819-0427        Sun Cluster Error Messages Guide for Solaris OS
819-0582        Sun Cluster Reference Manual for Solaris OS
819-0703        Sun Cluster Data Services Planning and Administration Guide for Solaris OS

Table 5 Sun Cluster 3.1 8/05 Software Collection for Solaris OS (x86 Platform Edition): Individual Data Service Manuals

Part Number     Book Title
817-6998        Sun Cluster Data Service for Apache Tomcat Guide for Solaris OS
819-1082        Sun Cluster Data Service for DHCP Guide for Solaris OS
819-0692        Sun Cluster Data Service for DNS Guide for Solaris OS
819-1088        Sun Cluster Data Service for MySQL Guide for Solaris OS
817-6999        Sun Cluster Data Service for NFS Guide for Solaris OS
819-1081        Sun Cluster Data Service for Samba Guide for Solaris OS
819-2664        Sun Cluster Data Service for Solaris Containers Guide
817-7000        Sun Cluster Data Service for Sun Java System Application Server Guide for Solaris OS
817-7002        Sun Cluster Data Service for Sun Java System Message Queue Guide for Solaris OS
817-7003        Sun Cluster Data Service for Sun Java System Web Server Guide for Solaris OS

Sun Cluster 3.x Hardware Collection for Solaris OS (SPARC Platform Edition)

Table 6 Sun Cluster 3.x Hardware Collection for Solaris OS (SPARC Platform Edition)

Part Number     Book Title
817-0168        Sun Cluster 3.0-3.1 Hardware Administration Manual for Solaris OS
817-0180        Sun Cluster 3.0-3.1 With Sun StorEdge 3310 SCSI RAID Array Manual for Solaris OS
817-1673        Sun Cluster 3.0-3.1 With Sun StorEdge 3510 or 3511 FC RAID Array Manual for Solaris OS
817-0179        Sun Cluster 3.0-3.1 With Sun StorEdge 3900 Series or Sun StorEdge 6900 Series System Manual
817-1701        Sun Cluster 3.0-3.1 With Sun StorEdge 6120 Array Manual for Solaris OS
817-1702        Sun Cluster 3.0-3.1 With Sun StorEdge 6320 System Manual for Solaris OS
817-6747        Sun Cluster 3.x With Sun StorEdge 6920 System Manual for Solaris OS
817-0177        Sun Cluster 3.0-3.1 With Sun StorEdge 9900 Series Storage Device Manual
817-5682        Sun Cluster 3.0-3.1 With StorEdge A1000 Array, Netra st A1000 Array, or StorEdge A3500 System Manual
817-0174        Sun Cluster 3.0-3.1 With Sun StorEdge A3500FC System Manual for Solaris OS
817-5683        Sun Cluster 3.0-3.1 With Fibre Channel JBOD Storage Device Manual
817-5681        Sun Cluster 3.0-3.1 With SCSI JBOD Storage Device Manual for Solaris OS
817-0176        Sun Cluster 3.0-3.1 With Sun StorEdge T3 or T3+ Array Manual for Solaris OS
817-7899        Sun Cluster 3.0-3.1 With Sun StorEdge 6130 Array Manual for Solaris OS
817-7957        Sun Cluster 3.1 With Network-Attached Storage Devices Manual for Solaris OS

Sun Cluster 3.x Hardware Collection for Solaris OS (x86 Platform Edition)

Table 7 Sun Cluster 3.x Hardware Collection for Solaris OS (x86 Platform Edition)

Part Number     Book Title
817-0168        Sun Cluster 3.0-3.1 Hardware Administration Manual for Solaris OS
817-0180        Sun Cluster 3.0-3.1 With Sun StorEdge 3310 SCSI RAID Array Manual for Solaris OS
817-7957        Sun Cluster 3.1 With Network-Attached Storage Devices Manual for Solaris OS

Localization Issues

Documentation Issues

This section discusses known errors or omissions for documentation, online help, or man pages and steps to correct these problems.

All Sun Cluster 3.1 8/05 Books

The Preface of each Sun Cluster 3.1 8/05 book lists a website for Support and Training. That website has been replaced by the following websites:

Software Installation Guide

This section discusses errors and omissions from the Sun Cluster Software Installation Guide for Solaris OS.

Implied Support of Java ES Applications on Non-Global Zones

How to Install Data-Service Software Packages (pkgadd) in Sun Cluster Software Installation Guide for Solaris OS describes how to install Sun Java System data services on a cluster that runs the Solaris 10 OS. The procedure uses the pkgadd -G command to install these data services only in the global zone. The -G option ensures that the packages are not propagated to any existing non-global zone or to a non-global zone that is created later.
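
For example, assuming that the agents CD-ROM is mounted at /cdrom/cdrom0, the following sketch installs a data service package only in the global zone. The directory path and the SUNWscnfs package name are shown for illustration only; substitute the actual location and package for the data service that you are installing.


# cd /cdrom/cdrom0/Solaris_sparc/Product/sun_cluster_agents/Solaris_10/Packages
# pkgadd -G -d . SUNWscnfs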

If the system contains a non-global zone, certain Sun Java Enterprise System (Java ES) applications and other Java ES components might not be supported. This restriction would apply if the non-global zone exists at the time of installation or if the zone is configured afterwards. The use of the pkgadd -G command to install data services for such applications does not override this restriction. If the Java ES application cannot coexist with non-global zones, you cannot use a data service for that application on a cluster that has non-global zones.

See Solaris 10 Zones in Sun Java Enterprise System 2005Q5 Installation Guide for information about Java ES support of Solaris zones.

Resetting Quorum Devices From SCSI-2 to SCSI-3 Brings the Node Down

Performing the procedure How to Update SCSI Reservations After Adding a Node in Sun Cluster Software Installation Guide for Solaris OS as documented might cause the node to panic. To prevent a node panic during this procedure, run the scgdevs command after you remove all quorum devices but before you configure new quorum devices.
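
As a minimal sketch of this sequence (the device name d12 is hypothetical, and the scconf quorum options shown assume the standard quorum administration syntax), remove each existing quorum device, run scgdevs, and then configure the new quorum devices:


# scconf -r -q globaldev=d12
# scgdevs
# scconf -a -q globaldev=d12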

Incorrect Release Date for the First Update of the Solaris 10 OS

In Chapter 5, Upgrading Sun Cluster Software, in Sun Cluster Software Installation Guide for Solaris OS, upgrade guidelines and procedures refer to the first update release of the Solaris 10 OS as Solaris 10 10/05. The date of this release is incorrect. At publication time of this document, the expected release date of the first update of the Solaris 10 OS is unknown. Additionally, support of upgrade to this future release is not yet determined. Contact your Sun service representative concerning support of upgrade to future releases of Solaris 10 software.

Manually Install Shared Components When Java ES Applications Are Installed on a Cluster File System (6270408)

Java ES application binaries can be installed on a cluster file system instead of on each cluster node. For Solaris 10 cluster configurations, when you install the data service (agent) by using pkgadd, you must also use pkgadd to manually install the Java ES shared components that the application requires.

See the Sun Java Enterprise System 2005Q5 Installation Guide for the list of shared components that each Java ES application requires and the package list for each shared component product.

Incorrect Commands to Check Product Versions (6288988)

In How to Upgrade Dependency Software Before a Nonrolling Upgrade in Sun Cluster Software Installation Guide for Solaris OS and How to Upgrade Dependency Software Before a Rolling Upgrade in Sun Cluster Software Installation Guide for Solaris OS, the instructions to check the version level of two of the shared components contain an error.

Step 2b, Apache Tomcat

Incorrect:


# patchadd -p | grep 114016

Correct:


# showrev -p | grep 114016

Step 5a, Explorer

Incorrect:


# pkginfo -l SUNWexplo | grep SUNW_PRODVERS

Correct:


# pkginfo -l SUNWexplo | grep VERSION

Rolling Upgrade

Rolling upgrade might not be supported in a future release of Sun Cluster software. In that case, other procedures that are designed to limit Sun Cluster outages during software upgrade will be provided.

SunPlex Manager Online Help

This section discusses errors and omissions in SunPlex Manager online help.

Sun Cluster HA for Oracle

In the online help file that is titled “Sun Cluster HA for Oracle,” in the section titled “Before Starting,” a note is incorrect.

Incorrect:

If no entries exist for shmsys and semsys in /etc/system, default values for these variables are automatically inserted in /etc/system. The system must then be rebooted. Check Oracle installation documentation to verify that these values are correct for your database.

Correct:

If no entries exist for the shmsys and semsys variables in the /etc/system file when you install the Oracle data service, you can open /etc/system and insert default values for these variables. You must then reboot the system. Check Oracle installation documentation to verify that the values that you insert are correct for your database.
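
As an illustration only, entries for these variables in the /etc/system file take the following form. The values shown here are placeholders; take the actual values from the Oracle installation documentation for your database.


set shmsys:shminfo_shmmax=4294967295
set semsys:seminfo_semmni=100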

SunPlex Manager Icons and Conventions

In the online help file that is titled “SPM Icons and Conventions”, two descriptions given in the “Other labels” section are incorrect.

Incorrect:

Table 8 Other labels

Label                                           Meaning
Primary resource group of the failover type     Primary resource group of the failover type
Secondary resource group of the failover type   Secondary resource group of the failover type

Correct:

Table 9 Other labels

Label                             Meaning
Primary node for the resource     Primary node for the resource
Secondary node for the resource   Secondary node for the resource

Sun Cluster Concepts Guide

This section discusses errors and omissions from the Sun Cluster Concepts Guide for Solaris OS.

In Chapter 3, the section on “Using the Cluster Interconnect for Data Service Traffic” should read as follows:

A cluster must have multiple network connections between nodes, forming the cluster interconnect. The clustering software uses multiple interconnects both for high availability and to improve performance. For both internal and external traffic (for example, file system data or scalable services data), messages are striped across all available interconnects.

The cluster interconnect is also available to applications, for highly available communication between nodes. For example, a distributed application might have components running on different nodes that need to communicate. By using the cluster interconnect rather than the public transport, these connections can withstand the failure of an individual link.

To use the cluster interconnect for communication between nodes, an application must use the private hostnames configured when the cluster was installed. For example, if the private hostname for node 1 is clusternode1-priv, use that name to communicate over the cluster interconnect to node 1. TCP sockets opened using this name are routed over the cluster interconnect and can be transparently rerouted in the event of network failure. Application communication between any two nodes is striped over all interconnects. The traffic for a given TCP connection flows on one interconnect at any point. Different TCP connections are striped across all interconnects. Additionally, UDP traffic is always striped across all interconnects.

Note that because the private hostnames can be configured during installation, the cluster interconnect can use any name chosen at that time. The actual name can be obtained from scha_cluster_get(3HA) with the scha_privatelink_hostname_node argument.
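
For example, the following sketch assumes the scha_cluster_get(1HA) command-line interface and the default clusternodeN-priv naming. It retrieves the private hostname of node 1 and verifies that the node can be reached over the cluster interconnect.


% scha_cluster_get -O PRIVATELINK_HOSTNAME_NODE 1
clusternode1-priv
% ping clusternode1-priv
clusternode1-priv is alive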

System Administration Guide

This section describes errors and omissions in the Sun Cluster System Administration Guide for Solaris OS.

How to Remove a Sun Cluster Patch

The procedure How to Apply a Rebooting Patch (Node) in Sun Cluster System Administration Guide for Solaris OS is not reversible on a node-by-node basis. Similarly, rolling downgrade of Sun Cluster releases is not supported. To remove a Sun Cluster patch or update release, you must re-apply the previous patch or update release by following How to Apply a Rebooting Patch (Cluster and Firmware) in Sun Cluster System Administration Guide for Solaris OS.

Sun Cluster Data Service for NFS Guide for Solaris OS

Sun Cluster Data Service for NFS Guide for Solaris OS omits some restrictions that apply to the use of Sun Cluster HA for NFS with NFS v3.

If you are using Sun Cluster HA for NFS, do not use the cluster nodes as NFS v3 clients of external NFS servers. This restriction applies even when the external NFS server is a network-attached storage (NAS) device. If you configure your cluster nodes this way, locks that the cluster nodes might have set on the external NFS servers are lost.

This restriction does not apply to NFS v4 clients. You can use NFS v4 to mount external NFS servers.
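
For example, an NFS v4 mount of an external server can be requested explicitly with the vers mount option. The server name and paths in the following sketch are hypothetical.


# mount -o vers=4 external-nfs-server:/export/data /mnt/data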

Sun Cluster Data Service for SAP Web Application Server Guide for Solaris OS

This section describes omissions in Sun Cluster Data Service for SAP Web Application Server Guide for Solaris OS.

Referring to SAP Notes for Changing Host Names

When changing any reference to the host name of the system, refer to the corresponding SAP notes. The SAP notes contain the most recent information about changing host names. Sun Cluster Data Service for SAP Web Application Server Guide for Solaris OS omits specific references to these SAP notes.

The following sections explain how to change the host name.

Installing the SAP J2EE Engine as a Scalable Resource

The section How to Install and Configure the SAP Web Application Server and the SAP J2EE Engine in Sun Cluster Data Service for SAP Web Application Server Guide for Solaris OS omits instructions for installing the SAP J2EE engine when you plan to configure it as a scalable resource. Step 2 and Step 7 of the procedure in this section are incomplete.

Correct Step 2:

If you are using the SAP J2EE engine, install the SAP J2EE engine software.

Refer to the SAP installation documentation.

Correct Step 7:

If you are using the SAP J2EE engine, modify the loghost script to return host names for the SAP J2EE engine.

Modify the script loghost, which was created in Step 6, to return either the logical host names or the physical host names for each instance of the SAP J2EE engine.

Sun Cluster Data Service for Solaris Containers Guide

This section describes errors and omissions in Sun Cluster Data Service for Solaris Containers Guide.

Information Missing From Configuration Restrictions

Configuration Restrictions in Sun Cluster Data Service for Solaris Containers Guide omits the restriction that applies to the autoboot property of a failover zone or a multiple-masters zone.

When creating a failover zone or a multiple-masters zone, ensure that the zone's autoboot property is set to false. Setting a zone's autoboot property to false prevents the zone from being booted when the global zone is booted. The Sun Cluster HA for Solaris Containers data service can manage a zone only if the zone is booted under the control of the data service.
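
For example, the autoboot property can be set and verified with the zonecfg command. The zone name myzone in the following sketch is hypothetical.


# zonecfg -z myzone set autoboot=false
# zonecfg -z myzone info autoboot
autoboot: false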

Information Missing From Configuration Requirements

Configuration Requirements in Sun Cluster Data Service for Solaris Containers Guide omits the requirement that applies to the loopback file system (LOFS).

Ensure that the loopback file system (LOFS) is enabled.

The Sun Cluster installation tools disable the LOFS. If you are using Sun Cluster HA for Solaris Containers to manage a zone, enable the LOFS after installing and configuring the Sun Cluster framework. To enable the LOFS, delete the following line from the /etc/system file:

exclude: lofs

Errors in the Procedure for Installing and Configuring a Zone

The procedure How to Install and Configure a Zone in Sun Cluster Data Service for Solaris Containers Guide contains the following errors:

Erroneous Code Samples

The sample code in the following sections is incorrect:

The correct code for both sections is as follows:

# cat /var/tmp/probe-apache2
#!/usr/bin/ksh
# Probe the Apache 2 instance by sending a simple request to port 80 with mconnect.
if echo "GET; exit" | mconnect -p 80 > /dev/null 2>&1
then
    # The server responded: report success.
    exit 0
else
    # The server did not respond: report complete failure of the probe.
    exit 100
fi

Sun Cluster 3.1 With Network-Attached Storage Devices Manual for Solaris OS

This section discusses errors and omissions from the Sun Cluster 3.1 With Network-Attached Storage Devices Manual for Solaris OS.

Installing a Network Appliance NAS Device in a Sun Cluster Environment

The NetApp NAS unit must be connected directly to a network that has direct connections to all the cluster nodes.

When you set up a NetApp NAS filer, you must complete the following steps in addition to those found in Installing a Network Appliance NAS Device in a Sun Cluster Environment in Sun Cluster 3.1 With Network-Attached Storage Devices Manual for Solaris OS.

Procedure: How to Install a Network Appliance NAS Device in a Sun Cluster Environment

Steps
  1. Add the NetApp NAS filer name to /etc/inet/hosts.

    Add a hostname-to-address mapping for the filer in the /etc/inet/hosts file on all cluster nodes. For example:


    netapp-123 192.168.11.123
  2. Add the netmask for the filer's subnet to /etc/inet/netmasks.

    Add an entry to the /etc/inet/netmasks file on all cluster nodes for the subnet the filer is on. For example:


    192.168.11.0 255.255.255.0
  3. Verify that, in the hosts and netmasks entries of the /etc/nsswitch.conf file on all cluster nodes, files appears before nis and dns. If it does not, edit the corresponding line in /etc/nsswitch.conf to move files before nis and dns. An example is shown after this procedure.
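
For example, hosts and netmasks entries that satisfy this requirement might look like the following. The cluster keyword on the hosts line is added during Sun Cluster installation, and the exact set of trailing sources depends on the name services that you use.


hosts:      cluster files nis dns
netmasks:   files nis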

Man Pages

This section discusses errors and omissions from the Sun Cluster man pages.

Sun Cluster 3.0 Data Service Man Pages

To display Sun Cluster 3.0 data service man pages, install the latest patches for the Sun Cluster 3.0 data services that you installed on Sun Cluster 3.1 8/05 software. See Patches and Required Firmware Levels for more information.

After you have applied the patch, access the Sun Cluster 3.0 data service man pages by issuing the man -M command with the full man page path as the argument. The following example opens the Apache man page.


% man -M /opt/SUNWscapc/man SUNW.apache

Consider modifying your MANPATH to enable access to Sun Cluster 3.0 data service man pages without specifying the full path. The following example adds the Apache man page directory to your MANPATH and then displays the Apache man page.


% MANPATH=/opt/SUNWscapc/man:$MANPATH; export MANPATH
% man SUNW.apache