Sun Cluster 3.1 Data Service 5/03 Release Notes

Documentation Issues

This section discusses known errors or omissions for documentation, online help, or man pages and steps to correct these problems.

Sun Cluster 3.1 Data Service 5/03 for Oracle

This section discusses errors and omissions from Sun Cluster 3.1 Data Service for Oracle.

Sun Cluster HA for Oracle Packages

The introductory paragraph to “Installing Sun Cluster HA for Oracle Packages” in the Sun Cluster 3.1 Data Service Planning and Administration Guide does not discuss the additional package that is needed for clusters running Sun Cluster HA for Oracle with 64-bit Oracle. The following section provides the corrected introductory paragraph to “Installing Sun Cluster HA for Oracle Packages” in Sun Cluster 3.1 Data Service for Oracle.

Installing Sun Cluster HA for Oracle Packages

Depending on your configuration, use the scinstall(1M) utility to install one or both of the following packages on your cluster. Do not use the -s option with non-interactive scinstall to install all of the data service packages.


Note –

SUNWscor is the prerequisite package for SUNWscorx.


If you installed the SUNWscor data service package as part of your initial Sun Cluster installation, proceed to “Registering and Configuring Sun Cluster HA for Oracle” on page 30. Otherwise, use the procedure documented in Sun Cluster 3.1 Data Service Planning and Administration Guide.
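Before proceeding, you can verify which of the two packages are already installed on a node by using the standard Solaris pkginfo command. This check is a sketch and is not part of the original procedure:

      # pkginfo SUNWscor SUNWscorx

If pkginfo reports that a package was not found, install that package by using the referenced procedure.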

Sun Cluster 3.1 Data Service for Oracle Parallel Server/Real Application Clusters

This section discusses errors and omissions from Sun Cluster 3.1 Data Service for Oracle Parallel Server/Real Application Clusters.

Pre-Installation Considerations

Pre-installation considerations for using Oracle Parallel Server/Real Application Clusters with the cluster file system are missing from “Overview” in Sun Cluster 3.1 Data Service for Oracle Parallel Server/Real Application Clusters.

Oracle Parallel Server/Real Application Clusters is a scalable application that can run on more than one node concurrently. You can store all of the files that are associated with this application on the cluster file system, namely:

For optimum I/O performance during the writing of redo logs, ensure that the following items are located on the same node:

For other pre-installation considerations that apply to Sun Cluster Support for Oracle Parallel Server/Real Application Clusters, see “Overview” in Sun Cluster 3.1 Data Service for Oracle Parallel Server/Real Application Clusters.

How to Use the Cluster File System

Information on how to use the cluster file system with Oracle Parallel Server/Real Application Clusters is missing from “Installing Volume Management Software With Sun Cluster Support for Oracle Parallel Server/Real Application Clusters” in Sun Cluster 3.1 Data Service for Oracle Parallel Server/Real Application Clusters.

To use the cluster file system with Oracle Parallel Server/Real Application Clusters, create and mount the cluster file system as explained in “Configuring the Cluster” in Sun Cluster 3.1 5/03 Software Installation Guide. When you add an entry to the /etc/vfstab file for the mount point, set the UNIX file system (UFS) specific options for the various types of Oracle files as shown in the following table.

Table 1–3 UFS File System Specific Options for Oracle Files

File Type 

Options 

RDBMS data files, log files, control files 

global, logging, forcedirectio

Oracle binary files, configuration files 

global, logging
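As an illustration only, /etc/vfstab entries that use these options might look like the following. The device paths and mount points are hypothetical examples, not values from this document:

      /dev/md/oradg/dsk/d100 /dev/md/oradg/rdsk/d100 /global/oracle/data ufs 2 yes global,logging,forcedirectio
      /dev/md/oradg/dsk/d200 /dev/md/oradg/rdsk/d200 /global/oracle/app  ufs 2 yes global,logging

The first entry would hold RDBMS data files, log files, and control files; the second would hold Oracle binary files and configuration files.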

How to Install Sun Cluster Support for Oracle Parallel Server/Real Application Clusters Packages With the Cluster File System

Information on how to install Sun Cluster Support for Oracle Parallel Server/Real Application Clusters packages with the cluster file system is missing from “Installing Volume Management Software With Sun Cluster Support for Oracle Parallel Server/Real Application Clusters” in Sun Cluster 3.1 Data Service for Oracle Parallel Server/Real Application Clusters.

To complete this procedure, you need the Sun Cluster CD-ROM. Perform this procedure on all of the cluster nodes that can run Sun Cluster Support for Oracle Parallel Server/Real Application Clusters.


Note –

Due to the preparation that is required prior to installation, the scinstall(1M) utility does not support automatic installation of the data service packages.


  1. Load the Sun Cluster CD-ROM into the CD-ROM drive.

  2. Become superuser.

  3. Change the current working directory to the directory that contains the packages for the version of the Solaris operating environment that you are using.

    • If you are using Solaris 8, run the following command:


      # cd /cdrom/suncluster_3_1/SunCluster_3.1/Sol_8/Packages
      
    • If you are using Solaris 9, run the following command:


      # cd /cdrom/suncluster_3_1/SunCluster_3.1/Sol_9/Packages
      
  4. On each node of the cluster, transfer the contents of the required software packages from the CD-ROM to the node.


    # pkgadd -d . SUNWscucm SUNWudlm SUNWudlmr
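
After the pkgadd command completes on each node, you can optionally verify the installed packages with the standard Solaris pkgchk utility. This check is a sketch and is not part of the original procedure:

      # pkgchk -v SUNWscucm SUNWudlm SUNWudlmr

pkgchk lists the installed files and confirms their integrity.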
    

Caution –

Before you reboot the nodes, you must ensure that you have correctly installed and configured the Oracle UDLM software. For more information, see “Installing the Oracle Software” in Sun Cluster 3.1 Data Service for Oracle Parallel Server/Real Application Clusters.


Where to Go From Here

Go to “Installing the Oracle Software” in Sun Cluster 3.1 Data Service for Oracle Parallel Server/Real Application Clusters to install the Oracle UDLM and Oracle RDBMS software.

Using the Sun Cluster LogicalHostname Resource With Oracle Parallel Server/Real Application Clusters

Information on using the Sun Cluster LogicalHostname resource with Oracle Parallel Server/Real Application Clusters is missing from Sun Cluster 3.1 Data Service for Oracle Parallel Server/Real Application Clusters.

If a cluster node that is running an instance of Oracle Parallel Server/Real Application Clusters fails, an operation that a client application attempted might have to time out before the operation is retried on another instance. If the TCP/IP network timeout is high, the client application might take a long time to detect the failure. Typically, client applications take between three and nine minutes to detect such failures.

In such situations, client applications may use the Sun Cluster LogicalHostname resource for connecting to an Oracle Parallel Server/Real Application Clusters database that is running on Sun Cluster. You can configure the LogicalHostname resource in a separate resource group that is mastered on the nodes on which Oracle Parallel Server/Real Application Clusters is running. If a node fails, the LogicalHostname resource fails over to another surviving node on which Oracle Parallel Server/Real Application Clusters is running. The failover of the LogicalHostname resource enables new connections to be directed to the other instance of Oracle Parallel Server/Real Application Clusters.
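As a sketch only, the following Sun Cluster 3.1 commands create such a resource group. The resource group name ops-lh-rg, the node list, and the logical hostname ops-lhost are hypothetical examples; substitute names that are valid for your cluster:

      # scrgadm -a -g ops-lh-rg -h node1,node2
      # scrgadm -a -L -g ops-lh-rg -l ops-lhost
      # scswitch -Z -g ops-lh-rg

The -h option restricts the resource group to the nodes on which Oracle Parallel Server/Real Application Clusters runs, the -L form adds the LogicalHostname resource, and scswitch -Z brings the resource group online.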


Caution –

Before using the LogicalHostname resource for this purpose, consider the effect on existing user connections of failover or failback of the LogicalHostname resource.


Sun Cluster 3.1 Data Service 5/03 for Sun ONE Directory Server and Sun ONE Web Server

This section discusses errors and omissions from Sun Cluster 3.1 Data Service for Sun ONE Directory Server and Sun Cluster 3.1 Data Service for Sun ONE Web Server.

Name Change for iPlanet Web Server and for iPlanet Directory Server

The names for iPlanet Web Server and iPlanet Directory Server have been changed. The new names are Sun ONE Web Server and Sun ONE Directory Server. The data service names are now Sun Cluster HA for Sun ONE Web Server and Sun Cluster HA for Sun ONE Directory Server.

The application names on the Sun Cluster Agents CD-ROM might still appear as iPlanet Web Server and iPlanet Directory Server.

Sun Cluster 3.1 Data Service 5/03 for Siebel

This section discusses errors and omissions from the Sun Cluster 3.1 Data Service for Siebel.

Scalable Sun ONE Web Server Is Not Supported with HA-Siebel

In the "Planning the Sun Cluster HA for Siebel Installation and Configuration" section, the configuration restrictions should state that scalable Sun ONE Web Server (iWS) cannot be used with HA-Siebel. You must configure iWS as a failover data service.

Sun Cluster 3.1 Data Service 5/03 for SAP liveCache

This section discusses errors and omissions from the Sun Cluster 3.1 Data Service for SAP liveCache.

The “Registering and Configuring Sun Cluster HA for SAP liveCache” section should state that the SAP xserver can be configured only as a scalable resource. Configuring the SAP xserver as a failover resource causes the SAP liveCache resource not to fail over. Ignore all references to configuring the SAP xserver resource as a failover resource in Sun Cluster 3.1 Data Service for SAP liveCache.
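As an illustration of the scalable configuration, a resource group for the SAP xserver might be created with Maximum_primaries and Desired_primaries set to the number of nodes that run liveCache. The group name xserver-rg and the value 4 are hypothetical examples, not values from this document:

      # scrgadm -a -g xserver-rg -y Maximum_primaries=4 -y Desired_primaries=4

The SAP xserver resource is then added to this resource group rather than to a failover (single-primary) resource group.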

Sun Cluster 3.1 Data Service 5/03 for WebLogic Server

This section discusses errors and omissions from the Sun Cluster 3.1 Data Service for WebLogic Server.

The “Protection of BEA WebLogic Server Component” table should state that the BEA WebLogic Server database is protected by all databases that are supported by BEA WebLogic Server and supported on Sun Cluster. The table should also state that the HTTP servers are protected by all HTTP servers that are supported by BEA WebLogic Server and supported on Sun Cluster.

Man Pages

SUNW.sap_ci(5)

SUNW.sap_as(5)

rg_properties(5)

The following new resource group property should be added to the rg_properties(5) man page.

Auto_start_on_new_cluster

This property controls whether the Resource Group Manager starts the resource group automatically when a new cluster is forming.

The default is TRUE. If set to TRUE, the Resource Group Manager attempts to start the resource group automatically to achieve Desired_primaries when all nodes of the cluster are simultaneously rebooted. If set to FALSE, the resource group does not start automatically when the cluster is rebooted.
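For example, to prevent a resource group from starting automatically when a new cluster forms, you might set the property with the Sun Cluster 3.1 scrgadm command. The resource group name rg-name is a placeholder:

      # scrgadm -c -g rg-name -y Auto_start_on_new_cluster=FALSE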

SUNW.wls(5)

There is an error in the See Also section of this man page. Instead of referencing the Sun Cluster 3.1 Data Services Installation and Configuration Guide, you should reference the Sun Cluster 3.1 Data Service for WebLogic Server.