Sun Cluster 3.1 8/05 Release Notes for Solaris OS

Documentation Issues

This section discusses known errors or omissions for documentation, online help, or man pages and steps to correct these problems.

All Sun Cluster 3.1 8/05 Books

The Preface of each of the Sun Cluster 3.1 8/05 books provides a website for Support and Training. This website has been changed to the following websites:

Software Installation Guide

This section discusses errors and omissions from the Sun Cluster Software Installation Guide for Solaris OS.

Implied Support of Java ES Applications on Non-Global Zones

How to Install Data-Service Software Packages (pkgadd) in Sun Cluster Software Installation Guide for Solaris OS describes how to install Sun Java System data services on a cluster that runs the Solaris 10 OS. The procedure uses the pkgadd -G command to install these data services only in the global zone. The -G option ensures that the packages are not propagated to any existing non-global zone or to a non-global zone that is created later.

If the system contains a non-global zone, certain Sun Java Enterprise System (Java ES) applications and other Java ES components might not be supported. This restriction would apply if the non-global zone exists at the time of installation or if the zone is configured afterwards. The use of the pkgadd -G command to install data services for such applications does not override this restriction. If the Java ES application cannot coexist with non-global zones, you cannot use a data service for that application on a cluster that has non-global zones.

See Solaris 10 Zones in Sun Java Enterprise System 2005Q5 Installation Guide for information about Java ES support of Solaris zones.
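As an illustration of the global-zone-only installation that the procedure describes, the following transcript is a sketch only: the directory path is a placeholder, and SUNWscapc (the Sun Cluster HA for Apache package) stands in for whichever data-service package you are installing.


# cd /path/to/data-service-packages
# pkgadd -G -d . SUNWscapc

The -G option restricts the installation to the global zone, so the package is not propagated to existing non-global zones or to zones that are created later.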

Resetting Quorum Devices From SCSI-2 to SCSI-3 Brings the Node Down

Performing the procedure How to Update SCSI Reservations After Adding a Node in Sun Cluster Software Installation Guide for Solaris OS as documented might cause the node to panic. To prevent a node panic during this procedure, run the scgdevs command after you remove all quorum devices but before you configure new quorum devices.
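A sketch of the recommended ordering follows. The quorum device name d4 is a placeholder; substitute your own device names, and remove every configured quorum device before you run scgdevs.


# scconf -r -q globaldev=d4
# scgdevs
# scconf -a -q globaldev=d4
# scstat -q

The scstat -q command verifies the resulting quorum configuration.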

Incorrect Release Date for the First Update of the Solaris 10 OS

In Chapter 5, Upgrading Sun Cluster Software, in Sun Cluster Software Installation Guide for Solaris OS, upgrade guidelines and procedures refer to the first update release of the Solaris 10 OS as Solaris 10 10/05. The date of this release is incorrect. At publication time of this document, the expected release date of the first update of the Solaris 10 OS is unknown. Additionally, support of upgrade to this future release is not yet determined. Contact your Sun service representative concerning support of upgrade to future releases of Solaris 10 software.

Manually Install Shared Components When Java ES Applications Are Installed on a Cluster File System (6270408)

Java ES application binaries can be installed on a cluster file system instead of on each cluster node. For Solaris 10 cluster configurations, when you install the data service (agent) by using pkgadd, you must also use pkgadd to manually install the Java ES shared components that the application requires.

See the Sun Java Enterprise System 2005Q5 Installation Guide for the list of shared components that each Java ES application requires and the package list for each shared component product.

Incorrect Commands to Check Product Versions (6288988)

In How to Upgrade Dependency Software Before a Nonrolling Upgrade in Sun Cluster Software Installation Guide for Solaris OS and How to Upgrade Dependency Software Before a Rolling Upgrade in Sun Cluster Software Installation Guide for Solaris OS, the instructions to check the version level of two of the shared components contain an error.

Step 2b, Apache Tomcat

Incorrect:


# patchadd -p | grep 114016

Correct:


# showrev -p | grep 114016

Step 5a, Explorer

Incorrect:


# pkginfo -l SUNWexplo | grep SUNW_PRODVERS

Correct:


# pkginfo -l SUNWexplo | grep VERSION

Rolling Upgrade

Rolling upgrade might not be supported in a future release of Sun Cluster software. In that case, other procedures that are designed to limit Sun Cluster outages during software upgrades will be provided.

SunPlex Manager Online Help

This section discusses errors and omissions in SunPlex Manager online help.

Sun Cluster HA for Oracle

In the online help file that is titled “Sun Cluster HA for Oracle,” in the section titled “Before Starting,” a note is incorrect.

Incorrect:

If no entries exist for shmsys and semsys in /etc/system, default values for these variables are automatically inserted in /etc/system. The system must then be rebooted. Check Oracle installation documentation to verify that these values are correct for your database.

Correct:

If no entries exist for the shmsys and semsys variables in the /etc/system file when you install the Oracle data service, you can open /etc/system and insert default values for these variables. You must then reboot the system. Check Oracle installation documentation to verify that the values that you insert are correct for your database.
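For illustration only, entries for these variables in /etc/system take the following form. The values shown are examples, not Oracle's recommendations; confirm the correct values for your database against the Oracle installation documentation.


set shmsys:shminfo_shmmax=4294967295
set semsys:seminfo_semmni=100
set semsys:seminfo_semmsl=256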

SunPlex Manager Icons and Conventions

In the online help file that is titled “SPM Icons and Conventions”, two descriptions given in the “Other labels” section are incorrect.

Incorrect:

Table 8 Other labels

Label     Meaning

(icon)    Primary resource group of the failover type

(icon)    Secondary resource group of the failover type

Correct:

Table 9 Other labels

Label     Meaning

(icon)    Primary node for the resource

(icon)    Secondary node for the resource

Sun Cluster Concepts Guide

This section discusses errors and omissions from the Sun Cluster Concepts Guide for Solaris OS.

In Chapter 3, the section on “Using the Cluster Interconnect for Data Service Traffic” should read as follows:

A cluster must have multiple network connections between nodes, forming the cluster interconnect. The clustering software uses multiple interconnects both for high availability and to improve performance. For both internal and external traffic (for example, file system data or scalable services data), messages are striped across all available interconnects.

The cluster interconnect is also available to applications, for highly available communication between nodes. For example, a distributed application might have components running on different nodes that need to communicate. By using the cluster interconnect rather than the public transport, these connections can withstand the failure of an individual link.

To use the cluster interconnect for communication between nodes, an application must use the private hostnames configured when the cluster was installed. For example, if the private hostname for node 1 is clusternode1-priv, use that name to communicate over the cluster interconnect to node 1. TCP sockets opened using this name are routed over the cluster interconnect and can be transparently rerouted in the event of network failure. Application communication between any two nodes is striped over all interconnects. The traffic for a given TCP connection flows on one interconnect at any point. Different TCP connections are striped across all interconnects. Additionally, UDP traffic is always striped across all interconnects.

Note that because the private hostnames can be configured during installation, the cluster interconnect can use any name chosen at that time. The actual name can be obtained from scha_cluster_get(3HA) with the scha_privatelink_hostname_node argument.
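From the command line, the same information is available through the scha_cluster_get(1HA) command. In the following sketch, node1 is a placeholder node name:


% scha_cluster_get -O PRIVATELINK_HOSTNAME_NODE node1

The command prints the private hostname that is configured for the specified node.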

System Administration Guide

This section describes errors and omissions in the Sun Cluster System Administration Guide for Solaris OS.

How to Remove a Sun Cluster Patch

The procedure How to Apply a Rebooting Patch (Node) in Sun Cluster System Administration Guide for Solaris OS cannot be reversed on a per-node basis. Similarly, rolling downgrade of Sun Cluster releases is not supported. To remove a Sun Cluster patch or update release, you must re-apply the previous patch or update release by following the procedure How to Apply a Rebooting Patch (Cluster and Firmware) in Sun Cluster System Administration Guide for Solaris OS.

Sun Cluster Data Service for NFS Guide for Solaris OS

Sun Cluster Data Service for NFS Guide for Solaris OS omits some restrictions that apply to the use of Sun Cluster HA for NFS with NFS v3.

If you are using Sun Cluster HA for NFS, do not use the cluster nodes as NFS v3 clients of external NFS servers. This restriction applies even when the external NFS server is a network-attached storage (NAS) device. If you configure your cluster nodes this way, locks that the cluster nodes might have set on the external NFS servers are lost.

This restriction does not apply to NFS v4 clients. You can use NFS v4 to mount external NFS servers.
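For example, a cluster node can mount an external NFS server with NFS v4 as follows. The server name and paths are placeholders:


# mount -F nfs -o vers=4 nas-filer:/export/data /mnt

The vers=4 option ensures that the mount uses NFS v4 rather than NFS v3.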

Sun Cluster Data Service for SAP Web Application Server Guide for Solaris OS

This section describes omissions in Sun Cluster Data Service for SAP Web Application Server Guide for Solaris OS.

Referring to SAP Notes for Changing Host Names

When changing any reference to the host name of the system, refer to the corresponding SAP notes. The SAP notes contain the most recent information about changing host names. Sun Cluster Data Service for SAP Web Application Server Guide for Solaris OS omits specific references to these SAP notes.

The following sections explain how to change the host name.

Installing the SAP J2EE Engine as a Scalable Resource

The section How to Install and Configure the SAP Web Application Server and the SAP J2EE Engine in Sun Cluster Data Service for SAP Web Application Server Guide for Solaris OS omits instructions for installing the SAP J2EE engine when you plan to configure it as a scalable resource. Step 2 and Step 7 of the procedure in this section are incomplete.

Correct Step 2:

If you are using the SAP J2EE engine, install the SAP J2EE engine software.

Refer to the SAP installation documentation.

Correct Step 7:

If you are using the SAP J2EE engine, modify the loghost script to return host names for the SAP J2EE engine.

Modify the script loghost, which was created in Step 6, to return either the logical host names or the physical host names for each instance of the SAP J2EE engine.
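As an illustration of what Step 7 asks for, the following sketch shows a loghost-style shell function that maps an SAP instance name to a host name. The instance name JC00 and the logical host name lh-j2ee-1 are hypothetical, not taken from an actual SAP configuration; your loghost script must return the names that apply to your own SAP J2EE engine instances.

```shell
# Sketch of a loghost mapping: print the logical host name for a
# known SAP J2EE instance, or fall back to the physical host name.
# The instance and host names below are examples only.
loghost() {
    case "$1" in
        JC00) echo lh-j2ee-1 ;;   # hypothetical logical host for instance JC00
        *)    hostname ;;         # fall back to the physical host name
    esac
}
loghost JC00
```

The real script would contain one case branch per SAP J2EE engine instance.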

Sun Cluster Data Service for Solaris Containers Guide

This section describes errors and omissions in Sun Cluster Data Service for Solaris Containers Guide.

Information Missing From Configuration Restrictions

Configuration Restrictions in Sun Cluster Data Service for Solaris Containers Guide omits the restriction that applies to the autoboot property of a failover zone or a multiple-masters zone.

When creating a failover zone or a multiple-masters zone, ensure that the zone's autoboot property is set to false. Setting a zone's autoboot property to false prevents the zone from being booted when the global zone is booted. The Sun Cluster HA for Solaris Containers data service can manage a zone only if the zone is booted under the control of the data service.
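A zonecfg transcript for setting this property follows; the zone name myzone is a placeholder:


# zonecfg -z myzone
zonecfg:myzone> set autoboot=false
zonecfg:myzone> commit
zonecfg:myzone> exit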

Information Missing From Configuration Requirements

Configuration Requirements in Sun Cluster Data Service for Solaris Containers Guide omits the requirement that applies to the loopback file system (LOFS).

Ensure that the loopback file system (LOFS) is enabled.

The Sun Cluster installation tools disable the LOFS. If you are using Sun Cluster HA for Solaris Containers to manage a zone, enable the LOFS after installing and configuring the Sun Cluster framework. To enable the LOFS, delete the following line from the /etc/system file:

exclude: lofs
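The edit can be checked mechanically. The following sketch works on a scratch copy rather than the live /etc/system; the file contents are illustrative, and on a real node you would edit /etc/system itself.

```shell
# Simulate an /etc/system file that disables LOFS, then remove the line.
copy=$(mktemp)
printf 'set maxusers=64\nexclude: lofs\n' > "$copy"
sed '/^exclude: lofs$/d' "$copy" > "$copy.new"
# After the edit, the exclusion must be gone so that LOFS is enabled.
if ! grep -q '^exclude: lofs' "$copy.new"; then
    echo "LOFS exclusion removed"
fi
```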

Errors in the Procedure for Installing and Configuring a Zone

The procedure How to Install and Configure a Zone in Sun Cluster Data Service for Solaris Containers Guide contains the following errors:

Erroneous Code Samples

The sample code in the following sections is incorrect:

The correct code for both sections is as follows:

# cat /var/tmp/probe-apache2
#!/usr/bin/ksh
if echo "GET; exit" | mconnect -p 80 > /dev/null 2>&1
then
    exit 0
else
    exit 100
fi

Sun Cluster 3.1 With Network-Attached Storage Devices Manual for Solaris OS

This section discusses errors and omissions from the Sun Cluster 3.1 With Network-Attached Storage Devices Manual for Solaris OS.

Installing a Network Appliance NAS Device in a Sun Cluster Environment

The NetApp NAS unit must be connected directly to a network that has direct connections to all the cluster nodes.

When you set up a NetApp NAS filer, you must complete the following steps in addition to those found in Installing a Network Appliance NAS Device in a Sun Cluster Environment in Sun Cluster 3.1 With Network-Attached Storage Devices Manual for Solaris OS.

How to Install a Network Appliance NAS Device in a Sun Cluster Environment

Steps
  1. Add the NetApp NAS filer name to /etc/inet/hosts.

    Add a hostname-to-address mapping for the filer in the /etc/inet/hosts file on all cluster nodes. For example:


    netapp-123 192.168.11.123
  2. Add the filer (NAS subset) netmasks to /etc/inet/netmasks.

    Add an entry to the /etc/inet/netmasks file on all cluster nodes for the subnet the filer is on. For example:


    192.168.11.0 255.255.255.0
  3. Verify that in the hosts and netmasks entries of the /etc/nsswitch.conf file on all cluster nodes, files appears before nis and dns. If it does not, edit the corresponding line in /etc/nsswitch.conf to move files before nis and dns.
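The ordering check in Step 3 can be scripted. This sketch tests a sample hosts line rather than the live /etc/nsswitch.conf; point it at your own file when you use it.

```shell
# Verify that "files" comes before "nis" and "dns" in a sample entry.
# Field 1 is the database name ("hosts:"); sources start at field 2.
echo 'hosts: files nis dns' | awk '{
    for (i = 2; i <= NF; i++) pos[$i] = i
    if (pos["files"] < pos["nis"] && pos["files"] < pos["dns"])
        print "ok: files precedes nis and dns"
}'
```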

Man Pages

This section discusses errors and omissions from the Sun Cluster man pages.

Sun Cluster 3.0 Data Service Man Pages

To display Sun Cluster 3.0 data service man pages, install the latest patches for the Sun Cluster 3.0 data services that you installed on Sun Cluster 3.1 8/05 software. See Patches and Required Firmware Levels for more information.

After you have applied the patch, access the Sun Cluster 3.0 data service man pages by issuing the man -M command with the full man page path as the argument. The following example opens the Apache man page.


% man -M /opt/SUNWscapc/man SUNW.apache

Consider modifying your MANPATH to enable access to Sun Cluster 3.0 data service man pages without specifying the full path. The following example describes command input for adding the Apache man page path to your MANPATH and displaying the Apache man page.


% MANPATH=/opt/SUNWscapc/man:$MANPATH; export MANPATH
% man SUNW.apache