Sun Cluster 3.1 Data Services 10/03 Release Notes

This document provides the following information for Sun™ Cluster 3.1 Data Services 10/03 software.

What's New in Sun Cluster 3.1 Data Services 10/03

This section describes new features and functionality. Contact your Sun sales representative for the complete list of supported hardware and software.

Enhancements to the Sun Cluster HA for Oracle Data Service

The Sun Cluster HA for Oracle server fault monitor has been enhanced to enable you to customize its behavior as follows:

For more information, see Sun Cluster 3.1 Data Service for Oracle Guide.

Enhancements to the Sun Cluster Support for Oracle Parallel Server/Real Application Clusters Data Service

The Sun Cluster Support for Oracle Parallel Server/Real Application Clusters data service has been enhanced so that you can manage this data service by using Sun Cluster commands.

For more information, see Sun Cluster 3.1 Data Service for Oracle Parallel Server/Real Application Clusters Guide.

Enhancements to Resource Types

The following resource types are enhanced in Sun Cluster 3.1 Data Services 10/03:

For general information about upgrading a resource type, see “Upgrading a Resource Type” in Sun Cluster 3.1 Data Service Planning and Administration Guide.

New Supported Data Services

Sun Cluster 3.1 Data Services 10/03 supports the following data services:

Supported Products

This section describes the supported software and memory requirements for Sun Cluster 3.1 software.

Sun Cluster Security Hardening

Sun Cluster Security Hardening uses the Solaris Operating Environment hardening techniques recommended by the Sun BluePrints™ program to achieve basic security hardening for clusters. The Solaris Security Toolkit automates the implementation of Sun Cluster Security Hardening.

The Sun Cluster Security Hardening documentation is available at http://www.sun.com/blueprints/0203/817-1079.pdf. You can also access the article from http://wwws.sun.com/software/security/blueprints. From this URL, scroll down to the Architecture heading to locate the article “Securing the Sun Cluster 3.x Software.” The documentation describes how to secure Sun Cluster 3.1 deployments in a Solaris 8 and Solaris 9 environment. The description includes the use of the Solaris Security Toolkit and other best-practice security techniques recommended by Sun security experts.

Table 1–2 Data Services Supported by Sun Cluster Security Hardening

Data Service Agent | Application Version: Failover | Application Version: Scalable | Solaris Version
Sun Cluster HA for Apache | 1.3.9 | 1.3.9 | Solaris 8, Solaris 9 (version 1.3.9)
Sun Cluster HA for Apache Tomcat | 3.3, 4.0, 4.1 | 3.3, 4.0, 4.1 | Solaris 8, Solaris 9
Sun Cluster HA for DHCP | S8U7+ | N/A | Solaris 8, Solaris 9
Sun Cluster HA for DNS | with OS | N/A | Solaris 8, Solaris 9
Sun Cluster HA for iPlanet Messaging Server | 6.0 | 4.1 | Solaris 8
Sun Cluster HA for MySQL | 3.23.54a - 4.0.15 | N/A | Solaris 8, Solaris 9
Sun Cluster HA for NetBackup | 3.4 | N/A | Solaris 8
Sun Cluster HA for NFS | with OS | N/A | Solaris 8, Solaris 9
Sun Cluster HA for Oracle E-Business Suite | 11.5.8 | N/A | Solaris 8, Solaris 9
Sun Cluster HA for Oracle | 8.1.7 and 9i (32 and 64 bit) | N/A | Solaris 8, Solaris 9 (HA Oracle 9iR2)
Sun Cluster Support for Oracle Parallel Server/Real Application Clusters | 8.1.7 and 9i (32 and 64 bit) | N/A | Solaris 8, Solaris 9
Sun Cluster HA for SAP | 4.6D (32 and 64 bit) and 6.20 | 4.6D (32 and 64 bit) and 6.20 | Solaris 8, Solaris 9
Sun Cluster HA for SWIFTAlliance Access | 4.1, 5.0 | N/A | Solaris 8
Sun Cluster HA for Samba | 2.2.2, 2.2.7, 2.2.7a, 2.2.8, 2.2.8a | N/A | Solaris 8, Solaris 9
Sun Cluster HA for Siebel | 7.5 | N/A | Solaris 8
Sun Cluster HA for Sun ONE Application Server | 7.0, 7.0 update 1 | N/A | Solaris 8, Solaris 9
Sun Cluster HA for Sun ONE Directory Server | 4.12 | N/A | Solaris 8, Solaris 9 (version 5.1)
Sun Cluster HA for Sun ONE Message Queue | 3.0.1 | N/A | Solaris 8, Solaris 9
Sun Cluster HA for Sun ONE Web Server | 6.0 | 4.1 | Solaris 8, Solaris 9 (version 4.1)
Sun Cluster HA for Sybase ASE | 12.0 (32 bit) | N/A | Solaris 8
Sun Cluster HA for BEA WebLogic Server | 7.0 | N/A | Solaris 8, Solaris 9
Sun Cluster HA for WebSphere MQ | 5.2, 5.3 | N/A | Solaris 8, Solaris 9
Sun Cluster HA for WebSphere MQ Integrator | 2.0.2, 2.1 | N/A | Solaris 8, Solaris 9

Restrictions

Running Sun Cluster HA for Oracle 3.0 on Sun Cluster 3.1

The Sun Cluster HA for Oracle 3.0 data service can run on Sun Cluster 3.1 only when used with the following versions of the Solaris operating environment:


Note –

The Sun Cluster HA for Oracle 3.0 data service cannot run on Sun Cluster 3.1 when used with the 64-bit version of Solaris 9.


Sun Cluster HA for Oracle Parallel Server/Real Application Clusters

Because you cannot change hostnames after you install Sun Cluster software, adhere to the documentation for the Oracle Parallel Fail Safe/Real Application Clusters Guard option of Oracle Parallel Server/Real Application Clusters.

For more information on this restriction on hostnames and node names, see the Oracle Parallel Fail Safe/Real Application Clusters Guard documentation.

Sun Cluster HA for NetBackup

Sun Cluster HA for NFS

Sun Cluster HA for SAP liveCache

Do not use NIS for naming services in a cluster that runs Sun Cluster HA for SAP liveCache, because the NIS entry is used only if files are not available.

For more procedural information about the nsswitch.conf requirements for passwd related to this restriction, see “Preparing the Nodes and Disks” in Sun Cluster 3.1 Data Service for SAP liveCache Guide.

Installation Issues and Bugs

Installation Guidelines

Identify requirements for all data services before you begin Solaris and Sun Cluster installation. If you do not determine these requirements, you might perform the installation process incorrectly and thereby need to completely reinstall the Solaris and Sun Cluster software.

For example, the Oracle Parallel Fail Safe/Real Application Clusters Guard option of Oracle Parallel Server/Real Application Clusters has special requirements for the hostnames/node names that you use in the cluster. You must accommodate these requirements before you install Sun Cluster software because you cannot change hostnames after you install Sun Cluster software. For more information on the special requirements for the hostnames/node names, see the Oracle Parallel Fail Safe/Real Application Clusters Guard documentation.

Sun Cluster HA for liveCache nsswitch.conf requirements for passwd make NIS unusable (4904975)

NIS cannot be used in a cluster running liveCache, because the NIS entry is only used if files are not available. For more information, see Sun Cluster HA for SAP liveCache.

Administration Runtime Issues and Bugs

HA Oracle Instances Will Not Start If SCI Interconnect Disabled (4823212)

Oracle instances will not start if an SCI cluster interconnect on one cluster node is disabled using the scconf -c -A command.

HA Oracle Stop Method Times Out (4644289)

If you are running Solaris 9, include the following entries in the /etc/nsswitch.conf configuration file on each node that can be the primary for the oracle_server or oracle_listener resource, so that the data service starts and stops correctly during a network failure:

passwd:    files
group:     files
publickey: files
project:   files

The Sun Cluster HA for Oracle data service uses the super user command, su(1M), to start and stop the database. The network service might become unavailable when a cluster node's public network fails. Adding the above entries ensures that the su command does not refer to the NIS/NIS+ name services.
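As a quick sanity check, the required entries can be verified with a short shell loop. This sketch runs against a throwaway sample file rather than a live node's /etc/nsswitch.conf; the file contents and the OK/MISSING wording are illustrative assumptions, not part of the release notes.

```shell
# Verify that the four name-service databases resolve from local files only.
# The sample file below stands in for a node's /etc/nsswitch.conf.
conf=$(mktemp)
cat > "$conf" <<'EOF'
passwd: files
group: files
publickey: files
project: files
EOF

summary=""
for db in passwd group publickey project; do
  # Each database line must name "files" and nothing else.
  if grep -q "^${db}:[[:space:]]*files[[:space:]]*$" "$conf"; then
    summary="$summary $db=OK"
  else
    summary="$summary $db=MISSING"
  fi
done
echo "$summary"
rm -f "$conf"
```

On a real node, point the check at /etc/nsswitch.conf instead of the temporary sample file.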

SAP liveCache Stop Method Times Out (4836272)

If you are running Solaris 9, include one of the following entries for the publickey database in the /etc/nsswitch.conf configuration files on each node that can be the primary for liveCache resources so that the data service starts and stops correctly during a network failure:

publickey:
publickey: files
publickey: files [NOTFOUND=return] nis
publickey: files [NOTFOUND=return] nisplus

The Sun Cluster HA for SAP liveCache data service uses the dbmcli command to start and stop the liveCache. The network service might become unavailable when a cluster node's public network fails. Adding one of the above entries, in addition to the updates documented in Sun Cluster 3.1 Data Service for SAP liveCache Guide, ensures that the su command and the dbmcli command do not refer to the NIS/NIS+ name services.

HA Oracle Listener Probe Timeouts (4900140)

On a heavily loaded system, the Oracle listener probe might time out. The time-out value of the Oracle listener probe depends on the value of the Thorough_probe_interval extension property and cannot be set independently. To prevent the Oracle listener probe from timing out, increase the value of the Thorough_probe_interval extension property.

HA-Siebel Does Not Automatically Restart Failed Siebel Components (4722288)

The Sun Cluster HA for Siebel agent does not monitor individual Siebel components. If the failure of a Siebel component is detected, only a warning message is logged in syslog.

To work around this problem, restart the Siebel server resource group in which components are offline by using the command scswitch -R -h node -g resource_group.

xserver_svc_start Reports xserver Unavailable During Start-up (4738554)

The message “SAP xserver is not available” is printed during the startup of SAP xserver because the xserver is not considered available until it is fully up and running.

Ignore this message during the startup of the SAP xserver.

xserver Resource Cannot be Configured as a Failover Resource (4836248)

Do not configure the xserver resource as a failover resource. The Sun Cluster HA for SAP liveCache data service does not fail over properly when the xserver is configured as a failover resource.

To Utilize Monitor_Uri_List, Type_version Must Be Set to 4 (4924147)

To utilize the Monitor_Uri_List extension property of Sun Cluster HA for Apache and Sun Cluster HA for Sun ONE Web Server, you must set the Type_version property to 4.

You can also upgrade the Type_version property of a resource to 4 at any time. For information on how to upgrade a resource type, see “Upgrading a Resource Type” in Sun Cluster 3.1 Data Service Planning and Administration Guide.

The su Command Resets the Project Identifier (4868654)

Some data services run the su command to set the user identifier (ID) to a specific user. For the Solaris 9 operating environment, the su command resets the project identifier to the default. This behavior overrides the setting of the project identifier by the RG_project_name system property or the Resource_project_name system property.

To ensure that the appropriate project name is used at all times, set the project name in the environment file of the user. One method to set the project name in the user's environment file is to add the following line to the .cshrc file of the user:

/usr/bin/newtask -p project-name -c $$

project-name is the project name that is to be used.
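A minimal sketch of adding that line idempotently is shown below. The project name ha-dbadmin and the temporary home directory are assumptions for illustration; on a real node you would edit the actual user's .cshrc and use the project name configured for the resource group.

```shell
# Append the newtask line to .cshrc only if it is not already present.
# The home directory and project name are stand-ins for illustration.
home=$(mktemp -d)
cshrc="$home/.cshrc"
touch "$cshrc"
line='/usr/bin/newtask -p ha-dbadmin -c $$'

# grep -qxF matches the whole line literally; append only when absent.
grep -qxF "$line" "$cshrc" || printf '%s\n' "$line" >> "$cshrc"
grep -qxF "$line" "$cshrc" || printf '%s\n' "$line" >> "$cshrc"  # second run is a no-op

count=$(grep -cF 'newtask' "$cshrc")
echo "newtask lines in .cshrc: $count"
rm -rf "$home"
```

Guarding the append this way keeps repeated runs of a setup script from stacking duplicate newtask invocations in the user's environment file.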

Patches and Required Firmware Levels

This section provides information about patches for Sun Cluster configuration.


Note –

You must be a registered SunSolve™ user to view and download the required patches for the Sun Cluster product. If you do not have a SunSolve account, contact your Sun service representative or sales engineer, or register online at http://sunsolve.sun.com.


PatchPro

PatchPro is a patch-management tool designed to ease the selection and download of patches required for installation or maintenance of Sun Cluster software. PatchPro provides a Sun Cluster-specific Interactive Mode tool to make the installation of patches easier and an Expert Mode tool to maintain your configuration with the latest set of patches. Expert Mode is especially useful for those who want to get all of the latest patches, not just the high availability and security patches.

To access the PatchPro tool for Sun Cluster software, go to http://www.sun.com/PatchPro/, click Sun Cluster, then choose either Interactive Mode or Expert Mode. Follow the instructions in the PatchPro tool to describe your cluster configuration and download the patches.

SunSolve Online

The SunSolveTM Online Web site provides 24-hour access to the most up-to-date information regarding patches, software, and firmware for Sun products. Access the SunSolve Online site at http://sunsolve.sun.com for the most current matrixes of supported software, firmware, and patch revisions.

You can find Sun Cluster 3.1 patch information by using the Info Docs. To view Info Docs, log on to SunSolve and access the Simple Search selection from the top of the main page. From the Simple Search page, click on Info Docs and type Sun Cluster 3.1 in the search criteria box. This will bring up the Info Docs page for Sun Cluster 3.1 software.

Before you install Sun Cluster 3.1 software and apply patches to a cluster component (Solaris operating environment, Sun Cluster software, volume manager or data services software, or disk hardware), review the Info Docs and any README files that accompany the patches. All cluster nodes must have the same patch level for proper cluster operation.

For specific patch procedures and tips on administering patches, see “Patching Sun Cluster Software and Firmware” in Sun Cluster 3.1 10/03 System Administration Guide.

End-of-Feature-Support Statements

HAStorage

HAStorage might not be supported in a future release of Sun Cluster software. Near-equivalent functionality is supported by HAStoragePlus. To upgrade from HAStorage to HAStoragePlus when you use cluster file systems or device groups, see “Upgrading from HAStorage to HAStoragePlus” in Sun Cluster 3.1 Data Service Planning and Administration Guide.

Sun Cluster 3.1 Data Services 10/03 Software Localization

The following localization packages are available on the Data Services CD-ROM. When you install or upgrade to Sun Cluster 3.1, the localization packages will be automatically installed for the data services you have selected.

Language | Package Name | Package Description
French | SUNWfscapc | French Sun Cluster Apache Web Server Component
French | SUNWfscbv | French Sun Cluster BV Server Component
French | SUNWfscdns | French Sun Cluster Domain Name Server Component
French | SUNWfschtt | French Sun Cluster Sun ONE Web Server Component
French | SUNWfsclc | French Sun Cluster resource type for SAP liveCache
French | SUNWfscnb | French Sun Cluster resource type for netbackup_master server
French | SUNWfscnfs | French Sun Cluster NFS Server Component
French | SUNWfscnsl | French Sun Cluster Sun ONE Directory Server Component
French | SUNWfscor | French Sun Cluster HA Oracle data service
French | SUNWfscs1as | French Sun Cluster HA Sun ONE Application Server data service
French | SUNWfscs1mq | French Sun Cluster HA Sun ONE Message Queue data service
French | SUNWfscsap | French Sun Cluster SAP R/3 Component
French | SUNWfscsbl | French Sun Cluster resource types for Siebel gateway and Siebel server
French | SUNWfscsyb | French Sun Cluster HA Sybase data service
French | SUNWfscwls | French Sun Cluster BEA WebLogic Server Component
Japanese | SUNWjscapc | Japanese Sun Cluster Apache Web Server Component
Japanese | SUNWjscbv | Japanese Sun Cluster BV Server Component
Japanese | SUNWjscdns | Japanese Sun Cluster Domain Name Server Component
Japanese | SUNWjschtt | Japanese Sun Cluster Sun ONE Web Server Component
Japanese | SUNWjsclc | Japanese Sun Cluster resource type for SAP liveCache
Japanese | SUNWjscnb | Japanese Sun Cluster resource type for netbackup_master server
Japanese | SUNWjscnfs | Japanese Sun Cluster NFS Server Component
Japanese | SUNWjscnsl | Japanese Sun Cluster Sun ONE Directory Server Component
Japanese | SUNWjscor | Japanese Sun Cluster HA Oracle data service
Japanese | SUNWjscs1as | Japanese Sun Cluster HA Sun ONE Application Server data service
Japanese | SUNWjscs1mq | Japanese Sun Cluster HA Sun ONE Message Queue data service
Japanese | SUNWjscsap | Japanese Sun Cluster SAP R/3 Component
Japanese | SUNWjscsbl | Japanese Sun Cluster resource types for Siebel gateway and Siebel server
Japanese | SUNWjscsyb | Japanese Sun Cluster HA Sybase data service
Japanese | SUNWjscwls | Japanese Sun Cluster BEA WebLogic Server Component

Sun Cluster 3.1 Data Services 10/03 Documentation

The complete Sun Cluster 3.1 Data Services 10/03 user documentation set is available in PDF and HTML format on the Sun Cluster Agents CD-ROM. AnswerBook2™ server software is not needed to read Sun Cluster 3.1 documentation. See the index.html file at the top level of either CD-ROM for more information. This index.html file enables you to read the PDF and HTML manuals directly from the disc and to access instructions to install the documentation packages.


Note –

The SUNWsdocs package must be installed before you install any Sun Cluster documentation packages. You can use pkgadd to install the SUNWsdocs package from either the SunCluster_3.1/Sol_N/Packages/ directory of the Sun Cluster CD-ROM or from the components/SunCluster_Docs_3.1/Sol_N/Packages/ directory of the Sun Cluster Agents CD-ROM, where N is either 8 for Solaris 8 or 9 for Solaris 9. The SUNWsdocs package is also automatically installed when you run the installer from the Solaris 9 Documentation CD.


The Sun Cluster 3.1 documentation set consists of the following collections:

In addition, the docs.sun.com℠ Web site enables you to access Sun Cluster documentation on the Web. You can browse the docs.sun.com archive or search for a specific book title or subject at the following Web site:

http://docs.sun.com

Documentation Issues

This section discusses known errors or omissions for documentation, online help, or man pages and steps to correct these problems.

Sun Cluster 3.1 Data Service for Oracle Parallel Server/Real Application Clusters Guide

This section discusses errors and omissions from Sun Cluster 3.1 Data Service for Oracle Parallel Server/Real Application Clusters Guide.

Requirements for Using the Cluster File System

The section “Requirements for Using the Cluster File System” erroneously states that you can store data files on the cluster file system. You must not store data files on the cluster file system. Therefore, ignore all references to data files in this section.

Creating Node-Specific Files and Directories for Use With Oracle Parallel Server/Real Application Clusters Software on the Cluster File System

When Oracle software is installed on the cluster file system, all the files in the directory that the ORACLE_HOME environment variable specifies are accessible by all cluster nodes.

An installation might require that some Oracle files or directories maintain node-specific information. You can satisfy this requirement by using a symbolic link whose target is a file or a directory on a file system that is local to a node. Such a file system is not part of the cluster file system.

To use a symbolic link for this purpose, you must allocate an area on a local file system. To enable Oracle applications to create symbolic links to files in this area, the applications must be able to access files in this area. Because the symbolic links reside on the cluster file system, all references to the links from all nodes are the same. Therefore, all nodes must have the same namespace for the area on the local file system.
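The copy-then-link pattern that the procedure below formalizes can be sketched with throwaway directories. The paths here stand in for the cluster file system under $ORACLE_HOME and the node-local area; they are illustrative assumptions, not the actual layout.

```shell
# Demonstrate the pattern: copy a "global" directory to a local area,
# remove the global copy, then replace it with a symbolic link.
work=$(mktemp -d)
global_dir="$work/oracle_home/network/log"   # stands in for the cluster file system
local_parent="$work/local/network"           # stands in for the node-local area

mkdir -p "$global_dir"
echo "node-specific data" > "$global_dir/listener.log"
mkdir -p "$local_parent"

cp -pr "$global_dir" "$local_parent"         # Step 2: make the local copy
rm -r "$global_dir"                          # Step 3a: remove the global directory
ln -s "$local_parent/log" "$global_dir"      # Step 3b: link the global path to the local copy

# References through the original (global) path now resolve to the local copy.
result=$(cat "$global_dir/listener.log")
echo "$result"
rm -rf "$work"
```

Because every node creates the local area at the same path, the single symbolic link on the cluster file system resolves correctly from each node.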

How to Create a Node-Specific Directory for Use With Oracle Parallel Server/Real Application Clusters Software on the Cluster File System

Perform this procedure for each directory that is to maintain node-specific information. The following directories are typically required to maintain node-specific information:

For information about other directories that might be required to maintain node-specific information, see your Oracle documentation.

  1. On each cluster node, create the local directory that is to maintain node-specific information.


    # mkdir -p local-dir
    
    -p

    Specifies that all nonexistent parent directories are created first

    local-dir

    Specifies the full path name of the directory that you are creating

  2. On each cluster node, make a local copy of the global directory that is to maintain node-specific information.


    # cp -pr global-dir local-dir-parent
    
    -p

    Specifies that the owner, group, permissions modes, modification time, access time, and access control lists are preserved.

    -r

    Specifies that the directory and all its files, including any subdirectories and their files, are copied.

    global-dir

    Specifies the full path of the global directory that you are copying. This directory resides on the cluster file system under the directory that the ORACLE_HOME environment variable specifies.

    local-dir-parent

    Specifies the directory on the local node that is to contain the local copy. This directory is the parent directory of the directory that you created in Step 1.

  3. Replace the global directory that you copied in Step 2 with a symbolic link to the local copy of the global directory.

    1. From any cluster node, remove the global directory that you copied in Step 2.


      # rm -r global-dir
      
      -r

      Specifies that the directory and all its files, including any subdirectories and their files, are removed.

      global-dir

      Specifies the file name and full path of the global directory that you are removing. This directory is the global directory that you copied in Step 2.

    2. From any cluster node, create a symbolic link from the local copy of the directory to the global directory that you removed in Step a.


      # ln -s local-dir global-dir
      
      -s

      Specifies that the link is a symbolic link

      local-dir

      Specifies that the local directory that you created in Step 1 is the source of the link

      global-dir

      Specifies that the global directory that you removed in Step a is the target of the link


Example 1–1 Creating Node-Specific Directories

This example shows the sequence of operations that is required to create node-specific directories on a two-node cluster. This cluster is configured as follows:

The following operations are performed on each node:

  1. To create the required directories on the local file system, the following commands are run:


    # mkdir -p /local/oracle/network/agent

    # mkdir -p /local/oracle/network/log

    # mkdir -p /local/oracle/network/trace

    # mkdir -p /local/oracle/srvm/log

    # mkdir -p /local/oracle/apache
  2. To make local copies of the global directories that are to maintain node-specific information, the following commands are run:


    # cp -pr $ORACLE_HOME/network/agent /local/oracle/network/.

    # cp -pr $ORACLE_HOME/network/log /local/oracle/network/.

    # cp -pr $ORACLE_HOME/network/trace /local/oracle/network/.

    # cp -pr $ORACLE_HOME/srvm/log /local/oracle/srvm/.

    # cp -pr $ORACLE_HOME/apache /local/oracle/.

The following operations are performed on only one node:

  1. To remove the global directories, the following commands are run:


    # rm -r $ORACLE_HOME/network/agent

    # rm -r $ORACLE_HOME/network/log

    # rm -r $ORACLE_HOME/network/trace

    # rm -r $ORACLE_HOME/srvm/log

    # rm -r $ORACLE_HOME/apache
  2. To create symbolic links from the local directories to their corresponding global directories, the following commands are run:


    # ln -s /local/oracle/network/agent $ORACLE_HOME/network/agent 

    # ln -s /local/oracle/network/log $ORACLE_HOME/network/log

    # ln -s /local/oracle/network/trace $ORACLE_HOME/network/trace

    # ln -s /local/oracle/srvm/log $ORACLE_HOME/srvm/log

    # ln -s /local/oracle/apache $ORACLE_HOME/apache

How to Create a Node-Specific File for Use With Oracle Parallel Server/Real Application Clusters Software on the Cluster File System

Perform this procedure for each file that is to maintain node-specific information. The following files are typically required to maintain node-specific information:

For information about other files that might be required to maintain node-specific information, see your Oracle documentation.

  1. On each cluster node, create the local directory that will contain the file that is to maintain node-specific information.


    # mkdir -p local-dir
    
    -p

    Specifies that all nonexistent parent directories are created first

    local-dir

    Specifies the full path name of the directory that you are creating

  2. On each cluster node, make a local copy of the global file that is to maintain node-specific information.


    # cp -p global-file local-dir
    
    -p

    Specifies that the owner, group, permissions modes, modification time, access time, and access control lists are preserved.

    global-file

    Specifies the file name and full path of the global file that you are copying. This file was installed on the cluster file system under the directory that the ORACLE_HOME environment variable specifies.

    local-dir

    Specifies the directory that is to contain the local copy of the file. This directory is the directory that you created in Step 1.

  3. Replace the global file that you copied in Step 2 with a symbolic link to the local copy of the file.

    1. From any cluster node, remove the global file that you copied in Step 2.


      # rm global-file
      
      global-file

      Specifies the file name and full path of the global file that you are removing. This file is the global file that you copied in Step 2.

    2. From any cluster node, create a symbolic link from the local copy of the file to the directory from which you removed the global file in Step a.


      # ln -s local-file global-dir
      
      -s

      Specifies that the link is a symbolic link

      local-file

      Specifies that the file that you copied in Step 2 is the source of the link

      global-dir

      Specifies that the directory from which you removed the global version of the file in Step a is the target of the link


Example 1–2 Creating Node-Specific Files

This example shows the sequence of operations that is required to create node-specific files on a two-node cluster. This cluster is configured as follows:

The following operations are performed on each node:

  1. To create the local directory that will contain the files that are to maintain node-specific information, the following command is run:


    # mkdir -p /local/oracle/network/admin
  2. To make a local copy of the global files that are to maintain node-specific information, the following commands are run:


    # cp -p $ORACLE_HOME/network/admin/snmp_ro.ora \
      /local/oracle/network/admin/.

    # cp -p $ORACLE_HOME/network/admin/snmp_rw.ora \
      /local/oracle/network/admin/.

The following operations are performed on only one node:

  1. To remove the global files, the following commands are run:


    # rm $ORACLE_HOME/network/admin/snmp_ro.ora

    # rm $ORACLE_HOME/network/admin/snmp_rw.ora
  2. To create symbolic links from the local copies of the files to their corresponding global files, the following commands are run:


    # ln -s /local/oracle/network/admin/snmp_ro.ora \
      $ORACLE_HOME/network/admin/snmp_ro.ora

    # ln -s /local/oracle/network/admin/snmp_rw.ora \
      $ORACLE_HOME/network/admin/snmp_rw.ora

Sun Cluster 3.1 Data Service for Oracle E-Business Suite Guide

This section discusses errors and omissions from Sun Cluster 3.1 Data Service for Oracle E-Business Suite Guide.

How to Register and Configure Sun Cluster HA for Oracle E-Business Suite as a Failover Service

Step 13 of the procedure “How to Register and Configure Sun Cluster HA for Oracle E-Business Suite as a Failover Service” is incorrect. The correct text is as follows:

The example that follows this step is also incorrect. The correct example is as follows:

RS=ebs-cmg-res
RG=ebs-rg
HAS_RS=ebs-has-res
LSR_RS=ebs-cmglsr-res
CON_HOST=lhost1
CON_COMNTOP=/global/mnt10/d01/oracle/prodcomn
CON_APPSUSER=ebs
APP_SID=PROD
APPS_PASSWD=apps
ORACLE_HOME=/global/mnt10/d01/oracle/prodora/8.0.6
CON_LIMIT=70
MODE=32/Y

Sun Cluster 3.1 Data Service 10/03 for Sun ONE Directory Server and Sun ONE Web Server

This section discusses errors and omissions from Sun Cluster 3.1 Data Service for Sun ONE Directory Server Guide and Sun Cluster 3.1 Data Service for Sun ONE Web Server Guide.

Name Change for iPlanet Web Server and for iPlanet Directory Server

The names for iPlanet Web Server and iPlanet Directory Server have been changed. The new names are Sun ONE Web Server and Sun ONE Directory Server. The data service names are now Sun Cluster HA for Sun ONE Web Server and Sun Cluster HA for Sun ONE Directory Server.

The application name on the Sun Cluster Agents CD-ROM might still be iPlanet Web Server and iPlanet Directory Server.

Sun Cluster 3.1 Data Service 10/03 for SAP liveCache

This section discusses errors and omissions from the Sun Cluster 3.1 Data Service for SAP liveCache.

The “Registering and Configuring Sun Cluster HA for SAP liveCache” section should state that the SAP xserver can be configured only as a scalable resource. Configuring the SAP xserver as a failover resource prevents the SAP liveCache resource from failing over. Ignore all references to configuring the SAP xserver resource as a failover resource in Sun Cluster 3.1 Data Service for SAP liveCache.

The “Registering and Configuring Sun Cluster HA for SAP liveCache” section should also contain an extra step. After step 10, “Enable the scalable resource group that now includes the SAP xserver resource,” you must register the liveCache resource by entering the following text.


# scrgadm -a -j livecache-resource -g livecache-resource-group \
-t SUNW.sap_livecache -x livecache_name=LC-NAME \
-y resource_dependencies=livecache-storage-resource

After you register the liveCache resource, proceed to the next step, “Set up a resource group dependency between SAP xserver and liveCache.”

Sun Cluster 3.1 Data Service 10/03 for WebLogic Server

This section discusses errors and omissions from the Sun Cluster 3.1 Data Service for WebLogic Server.

The “Protection of BEA WebLogic Server Component” table should state that the BEA WebLogic Server database is protected by any database that is supported by BEA WebLogic Server and supported on Sun Cluster. The table should also state that the HTTP servers are protected by any HTTP server that is supported by BEA WebLogic Server and supported on Sun Cluster.

Sun Cluster 3.1 Data Service 10/03 for Apache

This section discusses errors and omissions from the Sun Cluster 3.1 Data Service for Apache Guide.

Disregard the note in the “Planning the Installation and Configuration” section about using a scalable proxy to serve a scalable web resource. Use of a scalable proxy is not supported.

If you use the Monitor_Uri_List extension property for the Sun Cluster HA for Apache data service, the required value of the Type_version property is 4. You can perform a Resource Type upgrade to Type_version 4.

Sun Cluster 3.1 Data Service 10/03 for Sun ONE Web Server

If you use the Monitor_Uri_List extension property for the Sun Cluster HA for Sun ONE Web Server data service, the required value of the Type_version property is 4. You can perform a Resource Type upgrade to Type_version 4.

Man Pages

SUNW.wls(5)

There is an error in the See Also section of this man page. Instead of referencing the Sun Cluster 3.1 Data Services Installation and Configuration Guide, you should reference the Sun Cluster 3.1 Data Service for WebLogic Server Guide.