This document provides the following information for Sun™ Cluster 3.1 Data Services 5/03 software.
This section describes new features and functionality. Contact your Sun sales representative for the complete list of supported hardware and software.
For error messages that were not included on the Sun Cluster CD-ROM, see Sun Cluster 3.1 5/03 Release Notes.
This new Sun Cluster HA for Oracle feature recognizes when a database fails to start because of files left in hot backup mode. This feature takes the necessary action to reopen the database for use. You can turn this feature on and off. The default state is OFF.
For information on the Auto_End_Bkp extension property that enables this feature, see Sun Cluster 3.1 Data Service for Oracle.
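As a sketch of how the property is enabled (the resource name is hypothetical, and the exact value syntax may differ from your agent version), you can set the extension property with scrgadm:

```shell
# Enable automatic recovery from files left in hot backup mode.
# oracle-server-rs is a hypothetical HA Oracle server resource name.
scrgadm -c -j oracle-server-rs -x Auto_End_Bkp=TRUE
```

The feature remains off until you explicitly set the property, matching the default state of OFF described above.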
As newer versions of resource types are released, you will want to install and register the upgraded resource type. You may also want to upgrade your existing resources to the newer resource type versions. The Resource Type Upgrade feature enables you to upgrade an existing resource to a new resource type version. For documentation on this new feature, see “Upgrading a Resource Type” in Sun Cluster 3.1 Data Service Planning and Administration Guide.
Sun Cluster HA for SAP liveCache is a data service that makes liveCache highly available. Sun Cluster HA for SAP liveCache provides fault monitoring and automatic failover for liveCache and fault monitoring and automatic restart for SAP xserver, eliminating a single point of failure in an SAP Advanced Planner & Optimizer (APO) System. With a combination of Sun Cluster HA for SAP liveCache and other Sun Cluster data services, Sun Cluster software provides a complete solution to protect SAP components in a Sun Cluster environment.
For documentation on Sun Cluster HA for SAP liveCache, see Sun Cluster 3.1 Data Service for SAP liveCache.
Sun Cluster HA for Siebel provides fault monitoring and automatic failover for the Siebel application. High availability is provided for the Siebel gateway and the Siebel server. In a Siebel implementation, a physical node that runs the Sun Cluster agent cannot also run the Resonate agent. Resonate and Sun Cluster can coexist within the same Siebel enterprise, but not on the same physical server.
For documentation on Sun Cluster HA for Siebel, see Sun Cluster 3.1 Data Service for Siebel.
Sun Cluster HA for Sun ONE Web Server now supports Sun ONE Proxy Server. For information about the Sun ONE Proxy Server product, see http://docs.sun.com/db/prod/s1.webproxys. For Sun ONE Proxy Server installation and configuration information, see http://docs.sun.com/db/coll/S1_ipwebproxysrvr36.
Sun Cluster 3.1 Data Services 5/03 supports the following data services:
Sun Cluster HA for WebLogic Server – BEA WebLogic Server running on Sun Cluster systems delivers a highly available platform for developing and deploying mission-critical e-commerce applications across distributed, heterogeneous application environments.
The Sun Cluster HA for BEA WebLogic Server provides fault monitoring and high availability for the BEA WebLogic Server application. High availability is provided for the WebLogic Administration Server and WebLogic Managed Servers.
Sun Cluster HA for DHCP – Solaris DHCP provides dynamic TCP/IP configuration to a DHCP client. The Sun Cluster HA for DHCP data service uses the DHCP software that is bundled with Solaris 8 and Solaris 9.
The Sun Cluster HA for DHCP data service provides a mechanism for orderly startup and shutdown, fault monitoring and automatic failover of the DHCP service.
Sun Cluster HA for Samba – Samba is an Open Source/Freeware suite that provides seamless file and print services to SMB/CIFS clients.
The Sun Cluster HA for Samba data service provides a mechanism for orderly startup and shutdown, fault monitoring and automatic failover of the Samba service.
Sun Cluster HA for WebSphere MQ Integrator – WebSphere MQ Integrator works with WebSphere MQ messaging, extending its basic connectivity and transport capabilities to provide a powerful message broker solution driven by business rules.
The Sun Cluster HA for WebSphere MQ Integrator data service provides a mechanism for orderly startup and shutdown, fault monitoring and automatic failover for the WebSphere MQ Integrator service.
Sun Cluster HA for WebSphere MQ – WebSphere MQ messaging software enables business applications to exchange information across different operating platforms in a way that is easy and straightforward for programmers to implement. Programs communicate by using the WebSphere MQ API, which assures once-only delivery and time-independent communications.
The Sun Cluster HA for WebSphere MQ data service provides a mechanism for orderly startup and shutdown, fault monitoring and automatic failover of the WebSphere MQ service.
This section describes the supported software and memory requirements for Sun Cluster 3.1 software.
Operating environment and patches – Supported Solaris versions and patches are available at the following URL:
For more details, see Patches and Required Firmware Levels.
Volume managers
On Solaris 8 – Solstice DiskSuite™ 4.2.1 and VERITAS Volume Manager 3.2 and 3.5.
On Solaris 9 – Solaris Volume Manager and VERITAS Volume Manager 3.5.
If you are upgrading from VERITAS Volume Manager (VxVM) 3.2 to 3.5, the Cluster Volume Manager (CVM) feature will not be available until you install the CVM license key for version 3.5. In VxVM 3.5, the version 3.2 CVM license key does not enable CVM; you must upgrade it to the CVM license key for version 3.5.
File systems
On Solaris 8 – Solaris UFS and VERITAS File System 3.4 and 3.5.
On Solaris 9 – Solaris UFS and VERITAS File System 3.5.
Data services (agents) – Contact your Sun sales representative for the complete list of supported data services and application versions. Specify the resource type names when you install the data services by using the scinstall(1M) utility. You should also specify the resource type names when you register the resource types associated with the data service using the scsetup(1M) utility.
Procedures for Sun Cluster HA for Sun ONE Directory Server using iPlanet Directory Server 5.0 and 5.1 (plus Netscape HTTP, versions 4.11, 4.12, 4.13, and 4.16) are located in the Sun Cluster 3.1 Data Service for Sun ONE Directory Server. For later versions of iPlanet Directory Server (now known as Sun ONE Directory Server), see the Sun ONE documentation included with the data service.
| Data Service | Sun Cluster Resource Type |
|---|---|
| Sun Cluster HA for Apache | SUNW.apache |
| Sun Cluster HA for BroadVision One-To-One Enterprise | SUNW.bv |
| Sun Cluster HA for DHCP | SUNW.gds |
| Sun Cluster HA for DNS | SUNW.dns |
| Sun Cluster HA for Sun ONE Web Server (formerly known as Sun Cluster HA for iPlanet Web Server) | SUNW.iws |
| Sun Cluster HA for NetBackup | SUNW.netbackup |
| Sun Cluster HA for NFS | SUNW.nfs |
| Sun Cluster HA for Sun ONE Directory Server (formerly known as Sun Cluster HA for iPlanet Directory Server) | SUNW.nsldap |
| Sun Cluster HA for Oracle | SUNW.oracle_server, SUNW.oracle_listener |
| Sun Cluster HA for SAP | SUNW.sap_ci, SUNW.sap_ci_v2, SUNW.sap_as, SUNW.sap_as_v2 |
| Sun Cluster HA for Sun ONE Application Server | SUNW.s1as |
| Sun Cluster HA for Sun ONE Message Queue | SUNW.s1mq |
| Sun Cluster HA for Sybase ASE | SUNW.sybase |
| Sun Cluster Support for Oracle Parallel Server/Real Application Clusters | N/A |
| Sun Cluster HA for SAP liveCache | SUNW.sap_livecache, SUNW.sap_xserver |
| Sun Cluster HA for Samba | SUNW.gds |
| Sun Cluster HA for Siebel | SUNW.sblgtwy, SUNW.sblsrvr |
| Sun Cluster HA for WebLogic Server | SUNW.wls |
| Sun Cluster HA for WebSphere MQ | SUNW.gds |
| Sun Cluster HA for WebSphere MQ Integrator | SUNW.gds |
Memory Requirements – Sun Cluster 3.1 software requires extra memory beyond what is configured for a node under a normal workload. The extra memory equals 128 Mbytes plus ten percent. For example, if a standalone node normally requires 1 Gbyte of memory, you need an extra 230 Mbytes to meet memory requirements.
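As a quick planning aid, the memory rule can be expressed as a small calculation (a sketch only; round up in practice):

```python
def extra_memory_mbytes(base_mbytes):
    """Extra memory required by Sun Cluster 3.1: 128 Mbytes plus
    ten percent of the node's normal workload memory."""
    return 128 + 0.10 * base_mbytes

# A standalone node that normally requires 1 Gbyte (1024 Mbytes):
print(round(extra_memory_mbytes(1024)))  # about 230 extra Mbytes
```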
RSMAPI – Sun Cluster 3.1 software supports the Remote Shared Memory Application Programming Interface (RSMAPI) on RSM-capable interconnects, such as PCI-SCI.
Sun Cluster Security Hardening uses the Solaris Operating Environment hardening techniques recommended by the Sun BluePrints™ program to achieve basic security hardening for clusters. The Solaris Security Toolkit automates the implementation of Sun Cluster Security Hardening.
The Sun Cluster Security Hardening documentation is available at http://www.sun.com/blueprints/0203/817–1079.pdf. You can also access the article from http://wwws.sun.com/software/security/blueprints. From this URL, scroll down to the Architecture heading to locate the article “Securing the Sun Cluster 3.x Software.” The documentation describes how to secure Sun Cluster 3.1 deployments in a Solaris 8 and Solaris 9 environment. The description includes the use of the Solaris Security Toolkit and other best-practice security techniques recommended by Sun security experts.
Table 1–2 Data Services Supported by Sun Cluster Security Hardening
| Data Service Agent | Application Version: Failover | Application Version: Scalable | Solaris Version |
|---|---|---|---|
| Sun Cluster HA for BEA WebLogic Server | 7.0 | N/A | Solaris 8, Solaris 9 |
| Sun Cluster HA for iPlanet Messaging Server | 6.0 | 4.1 | Solaris 8 |
| Sun Cluster HA for Sun ONE Web Server | 6.0 | 4.1 | Solaris 8, Solaris 9 (version 4.1) |
| Sun Cluster HA for Apache | 1.3.9 | 1.3.9 | Solaris 8, Solaris 9 (version 1.3.9) |
| Sun Cluster HA for SAP | 4.6D (32 and 64 bit) and 6.20 | 4.6D (32 and 64 bit) and 6.20 | Solaris 8, Solaris 9 |
| Sun Cluster HA for Sun ONE Directory Server | 4.12 | N/A | Solaris 8, Solaris 9 (version 5.1) |
| Sun Cluster HA for NetBackup | 3.4 | N/A | Solaris 8 |
| Sun Cluster HA for Oracle | 8.1.7 and 9i (32 and 64 bit) | N/A | Solaris 8, Solaris 9 (HA Oracle 9iR2) |
| Sun Cluster HA for Siebel | 7.5 | N/A | Solaris 8 |
| Sun Cluster HA for Sybase ASE | 12.0 (32 bit) | N/A | Solaris 8 |
| Sun Cluster Support for Oracle Parallel Server/Real Application Clusters | 8.1.7 and 9i (32 and 64 bit) | N/A | Solaris 8, Solaris 9 |
| Sun Cluster HA for DNS | with OS | N/A | Solaris 8, Solaris 9 |
| Sun Cluster HA for NFS | with OS | N/A | Solaris 8, Solaris 9 |
The Sun Cluster HA for Oracle 3.0 data service can run on Sun Cluster 3.1 only when used with the following versions of the Solaris operating environment:
Solaris 8, 32-bit version
Solaris 8, 64-bit version
Solaris 9, 32-bit version
The Sun Cluster HA for Oracle 3.0 data service cannot run on Sun Cluster 3.1 when used with the 64-bit version of Solaris 9.
Adhere to the documentation for the Oracle Parallel Fail Safe/Real Application Clusters Guard option of Oracle Parallel Server/Real Application Clusters, because you cannot change hostnames after you install Sun Cluster software.
For more information on this restriction on hostnames and node names, see the Oracle Parallel Fail Safe/Real Application Clusters Guard documentation.
If the VERITAS NetBackup client is a cluster, only one logical host can be configured as the client because there is only one bp.conf file.
If the NetBackup client is a cluster and if one of the logical hosts on the cluster is configured as the NetBackup client, NetBackup cannot back up the physical hosts.
On the cluster running the master server, the master server is the only logical host that can be backed up.
Backup media cannot be attached to the master server, so one or more media servers are required.
No Sun Cluster node may be an NFS client of a Sun Cluster HA for NFS-exported file system being mastered on a node in the same cluster. Such cross-mounting of Sun Cluster HA for NFS is prohibited. Use the cluster file system to share files among cluster nodes.
Applications running locally on the cluster must not lock files on a file system exported via NFS. Otherwise, local blocking (for example, flock(3UCB) or fcntl(2)) might interfere with the ability to restart the lock manager (lockd). During restart, a blocked local process may be granted a lock which may be intended to be reclaimed by a remote client. This would cause unpredictable behavior.
Sun Cluster HA for NFS requires that all NFS client mounts be “hard” mounts.
For Sun Cluster HA for NFS, do not use hostname aliases for network resources. NFS clients mounting cluster file systems using hostname aliases might experience statd lock recovery problems.
Sun Cluster 3.1 software does not support Secure NFS or the use of Kerberos with NFS; in particular, it does not support the secure and kerberos options of the share_nfs(1M) command. However, Sun Cluster 3.1 software does support the use of secure ports for NFS, which you enable by adding the entry set nfssrv:nfs_portmon=1 to the /etc/system file on cluster nodes.
Identify requirements for all data services before you begin Solaris and Sun Cluster installation. If you do not determine these requirements, you might perform the installation process incorrectly and thereby need to completely reinstall the Solaris and Sun Cluster software.
For example, the Oracle Parallel Fail Safe/Real Application Clusters Guard option of Oracle Parallel Server/Real Application Clusters has special requirements for the hostnames/node names that you use in the cluster. You must accommodate these requirements before you install Sun Cluster software because you cannot change hostnames after you install Sun Cluster software. For more information on the special requirements for the hostnames/node names, see the Oracle Parallel Fail Safe/Real Application Clusters Guard documentation.
Setting broker_user to NULL still creates resources (4803317)

When you create a Sun ONE Message Queue resource with smooth_shutdown set to true, the broker_user extension property is required. However, the validate method does not check whether broker_user is set, so validation succeeds even if broker_user is not set.
When setting smooth_shutdown to true be sure that broker_user is also set.
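As a sketch (the resource name and user name are hypothetical, and the property names follow the text above), set both extension properties in the same scrgadm invocation so that neither can be left unset:

```shell
# s1mq-rs is a hypothetical Sun ONE Message Queue resource;
# mqadmin is a hypothetical broker administration user.
scrgadm -c -j s1mq-rs -x smooth_shutdown=TRUE -x broker_user=mqadmin
```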
The scinstall(1M) command incorrectly displays that the following data services are not supported on Solaris 9:
Sun Cluster HA for SAP
Sun Cluster HA for SAP liveCache
Both Solaris 8 and Solaris 9 support Sun Cluster HA for SAP and Sun Cluster HA for SAP liveCache.
When using I/O-intensive data services with a large number of disks configured in the cluster, the application may experience delays due to retries within the I/O subsystem during disk failures. An I/O subsystem may take several minutes to retry and recover from a disk failure. This delay can result in Sun Cluster failing over the application to another node, even though the disk may have eventually recovered on its own.
To avoid failover during these instances, consider increasing the default probe timeout of the data service. If you need more information or help with increasing data service timeouts, contact your local support engineer.
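As a sketch of such a change (the resource name and timeout value are hypothetical; confirm the property name and a suitable value for your data service with your support engineer), the probe timeout can be raised with scrgadm:

```shell
# oracle-server-rs is a hypothetical HA Oracle server resource.
# Raise the probe timeout (in seconds) to ride out I/O retries.
scrgadm -c -j oracle-server-rs -x Probe_timeout=300
```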
If you are running Solaris 9, include the following entries in the /etc/nsswitch.conf configuration file on each node that can be the primary for an oracle_server or oracle_listener resource, so that the data service starts and stops correctly during a network failure:

passwd: files
group: files
publickey: files
project: files
The Sun Cluster HA for Oracle data service uses the super user command, su(1M), to start and stop the database. The network service might become unavailable when a cluster node's public network fails. Adding the above entries ensures that the su command does not refer to the NIS/NIS+ name services.
The Sun Cluster HA for Siebel agent does not monitor individual Siebel components. If the failure of a Siebel component is detected, only a warning message is logged in syslog.

To work around this, restart the Siebel server resource group in which components are offline by using the command scswitch -R -h node -g resource_group.
The message “SAP xserver is not available” is printed during the startup of SAP xserver because the xserver is not considered available until it is fully up and running.
Ignore this message during the startup of the SAP xserver.
When the node that runs the Siebel gateway has a path beginning with /home that depends on network resources such as NFS and NIS, and the public network fails, the Siebel gateway probe times out and causes the Siebel gateway resource to go offline. Without the public network, the Siebel gateway probe hangs while trying to open a file on /home, causing the probe to time out.

To prevent the Siebel gateway probe from timing out while trying to open a file on /home, ensure the following on all cluster nodes that can host the Siebel gateway:
Ensure that the following entries in the /etc/nsswitch.conf file are set to files:

passwd: files
group: files
publickey: files
project: files
Eliminate all NFS or NIS dependencies for any path starting with /home. You can either have a locally mounted /home path or rename the /home mount point to /export/home or another name that does not start with /home.
Comment out the line containing +auto_master in the /etc/auto_master file, and change any /home entries to auto_home.
Comment out the line containing +auto_home in the /etc/auto_home file.
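For example, the automounter files might be edited as follows (a sketch; the map entries shown are hypothetical and your maps may differ):

```
# /etc/auto_master -- comment out the +auto_master line and
# change any /home entries to use /export/home instead:
# +auto_master
/export/home    auto_home    -nobrowse

# /etc/auto_home -- comment out the +auto_home line:
# +auto_home
```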
If a hostname in a URI in monitor_uri_list is an unknown host, the agent logs a message stating that the connection attempt has timed out. Normally, a connection that times out triggers a restart or failover of the application server. However, when the hostname is unknown, the connection does not initiate a restart or failover.
If the agent logs a message saying that a connection timed out but does not take any action, check to ensure that the hostnames in monitor_uri_list are correct.
If you are running Solaris 9, include one of the following entries for the publickey database in the /etc/nsswitch.conf configuration file on each node that can be the primary for liveCache resources, so that the data service starts and stops correctly during a network failure:

publickey: files
publickey: files [NOTFOUND=return] nis
publickey: files [NOTFOUND=return] nisplus
The Sun Cluster HA for SAP liveCache data service uses the dbmcli command to start and stop the liveCache. The network service might become unavailable when a cluster node's public network fails. Adding one of the above entries, in addition to the updates documented in Sun Cluster 3.1 Data Service for SAP liveCache, ensures that the su command and the dbmcli command do not refer to the NIS/NIS+ name services.
Do not configure the xserver resource as a failover resource. The Sun Cluster HA for SAP liveCache data service does not failover properly when xserver is configured as a failover resource.
The localized message catalogs for the following agents are not included in Sun Cluster 3.1 Data Services 5/03:
Sun ONE Application Server
Sun ONE Message Queue
BEA WebLogic
This section provides information about patches for Sun Cluster configuration.
You must be a registered SunSolve™ user to view and download the required patches for the Sun Cluster product. If you do not have a SunSolve account, contact your Sun service representative or sales engineer, or register online at http://sunsolve.sun.com.
PatchPro is a patch-management tool designed to ease the selection and download of patches required for installation or maintenance of Sun Cluster software. PatchPro provides a Sun Cluster-specific Interactive Mode tool to make the installation of patches easier and an Expert Mode tool to maintain your configuration with the latest set of patches. Expert Mode is especially useful for those who want to get all of the latest patches, not just the high availability and security patches.
To access the PatchPro tool for Sun Cluster software, go to http://www.sun.com/PatchPro/, click Sun Cluster, then choose either Interactive Mode or Expert Mode. Follow the instructions in the PatchPro tool to describe your cluster configuration and download the patches.
The SunSolve™ Online Web site provides 24-hour access to the most up-to-date information regarding patches, software, and firmware for Sun products. Access the SunSolve Online site at http://sunsolve.sun.com for the most current matrixes of supported software, firmware, and patch revisions.
You can find Sun Cluster 3.1 patch information by using Info Docs. To view Info Docs, log on to SunSolve and access the Simple Search selection from the top of the main page. On the Simple Search page, click Info Docs and type Sun Cluster 3.1 in the search criteria box. This search displays the Info Docs page for Sun Cluster 3.1 software.
Before you install Sun Cluster 3.1 software and apply patches to a cluster component (Solaris operating environment, Sun Cluster software, volume manager or data services software, or disk hardware), review the Info Docs and any README files that accompany the patches. All cluster nodes must have the same patch level for proper cluster operation.
For specific patch procedures and tips on administering patches, see the Sun Cluster 3.1 System Administration Guide.
HAStorage might not be supported in a future release of Sun Cluster software. Near-equivalent functionality is supported by HAStoragePlus. To upgrade from HAStorage to HAStoragePlus when you use cluster file systems or device groups, see “Upgrading from HAStorage to HAStoragePlus” in Sun Cluster 3.1 Data Service Planning and Administration Guide.
The following localization packages are available on the Data Services CD-ROM. When you install or upgrade to Sun Cluster 3.1, the localization packages will be automatically installed for the data services you have selected.
| Language | Package Name | Package Description |
|---|---|---|
| French | SUNWfscapc | French Sun Cluster Apache Web Server Component |
| French | SUNWfscbv | French Sun Cluster BV Server Component |
| French | SUNWfscdns | French Sun Cluster Domain Name Server Component |
| French | SUNWfschtt | French Sun Cluster iPlanet Web Server Component |
| French | SUNWfsclc | French Sun Cluster resource type for SAP liveCache |
| French | SUNWfscnb | French Sun Cluster resource type for netbackup_master server |
| French | SUNWfscnfs | French Sun Cluster NFS Server Component |
| French | SUNWfscnsl | French Sun Cluster Netscape Directory Server Component |
| French | SUNWfscor | French Sun Cluster HA Oracle data service |
| French | SUNWfscsap | French Sun Cluster SAP R/3 Component |
| Japanese | SUNWjscapc | Japanese Sun Cluster Apache Web Server Component |
| Japanese | SUNWjscbv | Japanese Sun Cluster BV Server Component |
| Japanese | SUNWjscdns | Japanese Sun Cluster Domain Name Server Component |
| Japanese | SUNWjschtt | Japanese Sun Cluster iPlanet Web Server Component |
| Japanese | SUNWjsclc | Japanese Sun Cluster resource type for SAP liveCache |
| Japanese | SUNWjscnb | Japanese Sun Cluster resource type for netbackup_master server |
| Japanese | SUNWjscnfs | Japanese Sun Cluster NFS Server Component |
| Japanese | SUNWjscnsl | Japanese Sun Cluster Netscape Directory Server Component |
| Japanese | SUNWjscor | Japanese Sun Cluster HA Oracle data service |
| Japanese | SUNWjscsap | Japanese Sun Cluster SAP R/3 Component |
| Japanese | SUNWjscsbl | Japanese Sun Cluster resource types for Siebel gateway and Siebel server |
The complete Sun Cluster 3.1 Data Services 5/03 user documentation set is available in PDF and HTML format on the Sun Cluster Agents CD-ROM. AnswerBook2™ server software is not needed to read Sun Cluster 3.1 documentation. See the index.html file at the top level of either CD-ROM for more information. This index.html file enables you to read the PDF and HTML manuals directly from the disc and to access instructions to install the documentation packages.
The SUNWsdocs package must be installed before you install any Sun Cluster documentation packages. You can use pkgadd to install the SUNWsdocs package from either the SunCluster_3.1/Sol_N/Packages/ directory of the Sun Cluster CD-ROM or from the components/SunCluster_Docs_3.1/Sol_N/Packages/ directory of the Sun Cluster Agents CD-ROM, where N is either 8 for Solaris 8 or 9 for Solaris 9. The SUNWsdocs package is also automatically installed when you run the installer from the Solaris 9 Documentation CD.
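For example, to install SUNWsdocs from the Sun Cluster CD-ROM on Solaris 8 (a sketch; use Sol_9 on Solaris 9, and note that the CD-ROM mount point may differ on your system):

```shell
cd /cdrom/suncluster_3_1/SunCluster_3.1/Sol_8/Packages
pkgadd -d . SUNWsdocs
```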
The Sun Cluster 3.1 documentation set consists of the following collections:
The Sun Cluster 3.1 Software Collection, which includes the following manuals:
Sun Cluster 3.1 Concepts Guide
Sun Cluster 3.1 Data Services Developer's Guide
Sun Cluster 3.1 Error Messages Guide
Sun Cluster 3.1 Software Installation Guide
The Sun Cluster 3.1 Hardware Administration Collection, which includes the following manuals:
Sun Cluster 3.1 Hardware Administration Manual
Sun Cluster 3.1 With Sun StorEdge 3310 Array Manual
Sun Cluster 3.1 With Sun StorEdge 3900 or 6900 Series System Manual
Sun Cluster 3.1 With Sun StorEdge 9900 Series Storage Device Manual
Sun Cluster 3.1 With Sun StorEdge A1000 or Netra st A1000 Array Manual
Sun Cluster 3.1 With Sun StorEdge A3500/A3500FC System Manual
Sun Cluster 3.1 With Sun StorEdge A5x00 Array Manual
Sun Cluster 3.1 With Sun StorEdge D1000 or Netra st D1000 Disk Array Manual
Sun Cluster 3.1 With Sun StorEdge D2 Array Manual
Sun Cluster 3.1 With Sun StorEdge MultiPack Enclosure Manual
Sun Cluster 3.1 With Sun StorEdge Netra D130 or StorEdge S1 Enclosure Manual
Sun Cluster 3.1 With Sun StorEdge T3 or T3+ Array Partner-Group Configuration Manual
Sun Cluster 3.1 With Sun StorEdge T3 or T3+ Array Single-Controller Configuration Manual
The Sun Cluster 3.1 Data Services Collection, which contains the following manuals:
Sun Cluster 3.1 Data Service Planning and Administration Guide
Sun Cluster 3.1 Data Service for Apache
Sun Cluster 3.1 Data Service for BroadVision One-To-One Enterprise
Sun Cluster 3.1 Data Service for DHCP
Sun Cluster 3.1 Data Service for Domain Name Service (DNS)
Sun Cluster 3.1 Data Service for NetBackup
Sun Cluster 3.1 Data Service for Network File System (NFS)
Sun Cluster 3.1 Data Service for Oracle
Sun Cluster 3.1 Data Service for Oracle Parallel Server/Real Application Clusters
Sun Cluster 3.1 Data Service for SAP
Sun Cluster 3.1 Data Service for SAP liveCache
Sun Cluster 3.1 Data Service for Samba
Sun Cluster 3.1 Data Service for Siebel
Sun Cluster 3.1 Data Service for Sun ONE Application Server
Sun Cluster 3.1 Data Service for Sun ONE Directory Server
Sun Cluster 3.1 Data Service for Sun ONE Message Queue
Sun Cluster 3.1 Data Service for Sun ONE Web Server
Sun Cluster 3.1 Data Service for Sybase ASE
Sun Cluster 3.1 Data Service for WebLogic Server
In addition, the docs.sun.com℠ Web site enables you to access Sun Cluster documentation on the Web. You can browse the docs.sun.com archive or search for a specific book title or subject at the following Web site:
This section discusses known errors or omissions for documentation, online help, or man pages and steps to correct these problems.
This section discusses errors and omissions from Sun Cluster 3.1 Data Service for Oracle.
The introductory paragraph to “Installing Sun Cluster HA for Oracle Packages” in the Sun Cluster 3.1 Data Service Planning and Administration Guide does not discuss the additional package needed for users with clusters running Sun Cluster HA for Oracle with 64-bit Oracle. The following section corrects the introductory paragraph to “Installing Sun Cluster HA for Oracle Packages” in the Sun Cluster 3.1 Data Service for Oracle.
Depending on your configuration, use the scinstall(1M) utility to install one or both of the following packages on your cluster. Do not use the -s option to non-interactive scinstall to install all of the data service packages.
SUNWscor: Cluster running Sun Cluster HA for Oracle with 32-bit or 64-bit Oracle
SUNWscorx: Cluster running Sun Cluster HA for Oracle with 64-bit Oracle
SUNWscor is the prerequisite package for SUNWscorx.
If you installed the SUNWscor data service package as part of your initial Sun Cluster installation, proceed to “Registering and Configuring Sun Cluster HA for Oracle” on page 30. Otherwise, use the procedure documented in Sun Cluster 3.1 Data Service Planning and Administration Guide.
This section discusses errors and omissions from Sun Cluster 3.1 Data Service for Oracle Parallel Server/Real Application Clusters.
Pre-installation considerations for using Oracle Parallel Server/Real Application Clusters with the cluster file system are missing from “Overview” in Sun Cluster 3.1 Data Service for Oracle Parallel Server/Real Application Clusters.
Oracle Parallel Server/Real Application Clusters is a scalable application that can run on more than one node concurrently. You can store all of the files that are associated with this application on the cluster file system, namely:
Binary files
Control files
Data files
Log files
Configuration files
For optimum I/O performance during the writing of redo logs, ensure that the following items are located on the same node:
The Oracle Parallel Server/Real Application Clusters database instance
The primary of the device group that contains the cluster file system that holds the following logs of the database instance:
Online redo logs
Archived redo logs
For other pre-installation considerations that apply to Sun Cluster Support for Oracle Parallel Server/Real Application Clusters, see “Overview” in Sun Cluster 3.1 Data Service for Oracle Parallel Server/Real Application Clusters.
Information on how to use the cluster file system with Oracle Parallel Server/Real Application Clusters is missing from “Installing Volume Management Software With Sun Cluster Support for Oracle Parallel Server/Real Application Clusters” in Sun Cluster 3.1 Data Service for Oracle Parallel Server/Real Application Clusters.
To use the cluster file system with Oracle Parallel Server/Real Application Clusters, create and mount the cluster file system as explained in “Configuring the Cluster” in Sun Cluster 3.1 5/03 Software Installation Guide. When you add an entry to the /etc/vfstab file for the mount point, set UNIX file system (UFS) specific options for the various types of Oracle files as shown in the following table.
Table 1–3 UFS File System Specific Options for Oracle Files
| File Type | Options |
|---|---|
| RDBMS data files, log files, control files | global, logging, forcedirectio |
| Oracle binary files, configuration files | global, logging |
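For example, an /etc/vfstab entry for a cluster file system that holds RDBMS data files might look like the following (the device paths and mount point are hypothetical; substitute your own volume and mount point):

```
/dev/md/oracle/dsk/d100  /dev/md/oracle/rdsk/d100  /global/oracle/data  ufs  2  yes  global,logging,forcedirectio
```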
Information on how to install Sun Cluster Support for Oracle Parallel Server/Real Application Clusters packages with the cluster file system is missing from “Installing Volume Management Software With Sun Cluster Support for Oracle Parallel Server/Real Application Clusters” in Sun Cluster 3.1 Data Service for Oracle Parallel Server/Real Application Clusters.
To complete this procedure, you need the Sun Cluster CD-ROM. Perform this procedure on all of the cluster nodes that can run Sun Cluster Support for Oracle Parallel Server/Real Application Clusters.
Due to the preparation that is required prior to installation, the scinstall(1M) utility does not support automatic installation of the data service packages.
Load the Sun Cluster CD-ROM into the CD-ROM drive.
Become superuser.
Change the current working directory to the directory that contains the packages for the version of the Solaris operating environment that you are using.
If you are using Solaris 8, run the following command:
# cd /cdrom/suncluster_3_1/SunCluster_3.1/Sol_8/Packages
If you are using Solaris 9, run the following command:
# cd /cdrom/suncluster_3_1/SunCluster_3.1/Sol_9/Packages
On each node of the cluster, transfer the contents of the required software packages from the CD-ROM to the node.
# pkgadd -d . SUNWscucm SUNWudlm SUNWudlmr
Before you reboot the nodes, you must ensure that you have correctly installed and configured the Oracle UDLM software. For more information, see “Installing the Oracle Software” in Sun Cluster 3.1 Data Service for Oracle Parallel Server/Real Application Clusters.
Go to “Installing the Oracle Software” in Sun Cluster 3.1 Data Service for Oracle Parallel Server/Real Application Clusters to install the Oracle UDLM and Oracle RDBMS software.
Information on using the Sun Cluster LogicalHostname resource with Oracle Parallel Server/Real Application Clusters is missing from Sun Cluster 3.1 Data Service for Oracle Parallel Server/Real Application Clusters.
If a cluster node that is running an instance of Oracle Parallel Server/Real Application Clusters fails, an operation that a client application attempted might have to time out before the operation is attempted again on another instance. If the TCP/IP network timeout value is high, the client application might take a long time to detect the failure. Typically, client applications take between three and nine minutes to detect such failures.
In such situations, client applications may use the Sun Cluster LogicalHostname resource for connecting to an Oracle Parallel Server/Real Application Clusters database that is running on Sun Cluster. You can configure the LogicalHostname resource in a separate resource group that is mastered on the nodes on which Oracle Parallel Server/Real Application Clusters is running. If a node fails, the LogicalHostname resource fails over to another surviving node on which Oracle Parallel Server/Real Application Clusters is running. The failover of the LogicalHostname resource enables new connections to be directed to the other instance of Oracle Parallel Server/Real Application Clusters.
Before using the LogicalHostname resource for this purpose, consider the effect on existing user connections of failover or failback of the LogicalHostname resource.
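For example, you might configure the LogicalHostname resource in its own failover resource group with commands similar to the following. The resource group, node, and hostname names are placeholders; adapt them to your configuration.

# scrgadm -a -g lhost-rg -h phys-schost-1,phys-schost-2
# scrgadm -a -L -g lhost-rg -l oracle-lh
# scswitch -Z -g lhost-rg

Client applications can then connect to the database through the oracle-lh logical hostname, which fails over to a surviving node if its current master fails.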
This section discusses errors and omissions from Sun Cluster 3.1 Data Service for Sun ONE Directory Server and Sun Cluster 3.1 Data Service for Sun ONE Web Server.
The names for iPlanet Web Server and iPlanet Directory Server have been changed. The new names are Sun ONE Web Server and Sun ONE Directory Server. The data service names are now Sun Cluster HA for Sun ONE Web Server and Sun Cluster HA for Sun ONE Directory Server.
The application name on the Sun Cluster Agents CD-ROM might still be iPlanet Web Server and iPlanet Directory Server.
This section discusses errors and omissions from the Sun Cluster 3.1 Data Service for Siebel.
In the "Planning the Sun Cluster HA for Siebel Installation and Configuration" section, the configuration restrictions should state that scalable Sun ONE Web Server (iWS) cannot be used with HA Siebel. You must configure iWS as a failover data service.
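A failover configuration of iWS might be created with commands similar to the following. The resource type registration and resource creation shown here are a sketch; the group, node, resource, and hostname names are placeholders, and additional extension properties (such as the configuration directory) are required in a real installation.

# scrgadm -a -t SUNW.iws
# scrgadm -a -g web-rg -h phys-schost-1,phys-schost-2
# scrgadm -a -L -g web-rg -l web-lh
# scrgadm -a -j web-rs -g web-rg -t SUNW.iws -y Network_resources_used=web-lh
# scswitch -Z -g web-rg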
This section discusses errors and omissions from the Sun Cluster 3.1 Data Service for SAP liveCache.
The “Registering and Configuring Sun Cluster HA for SAP liveCache” section should state that the SAP xserver can only be configured as a scalable resource. Configuring the SAP xserver as a failover resource prevents the SAP liveCache resource from failing over. Ignore all references to configuring the SAP xserver resource as a failover resource in Sun Cluster 3.1 Data Service for SAP liveCache.
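For example, the SAP xserver might be registered as a scalable resource with commands similar to the following. The resource group and resource names and the primaries values are placeholders; adapt them to the number of nodes on which the xserver should run.

# scrgadm -a -t SUNW.sap_xserver
# scrgadm -a -g xserver-rg -y Maximum_primaries=4 -y Desired_primaries=4
# scrgadm -a -j xserver-rs -g xserver-rg -t SUNW.sap_xserver
# scswitch -Z -g xserver-rg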
This section discusses errors and omissions from the Sun Cluster 3.1 Data Service for WebLogic Server.
The “Protection of BEA WebLogic Server Component” table should state that the BEA WebLogic Server database is protected by all databases supported by BEA WebLogic Server and supported on Sun Cluster. The table should also state that the HTTP servers are protected by all HTTP servers supported by BEA WebLogic Server and supported on Sun Cluster.
There is an error in the Name section. The Name section should read as follows:
sap_ci, SUNW.sap_ci, sap_ci_v2, and SUNW.sap_ci_v2 - Resource type implementations for Sun Cluster HA for SAP central instance.
There is an error in the Description section. The Description section should read as follows:
The Resource Group Manager (RGM) manages the SAP data service for Sun Cluster software. Configure the Sun Cluster HA for SAP central instance as a logical-hostname resource and an SAP central instance resource.
There is an error in the Name section. The Name section should read as follows:
sap_as, SUNW.sap_as - Resource type implementation for Sun Cluster HA for SAP as a failover data service.
sap_as_v2, SUNW.sap_as_v2 - Resource type implementation for Sun Cluster HA for SAP as a failover data service or a scalable data service.
There is an error in the Description section. The Description section should read as follows:
The Resource Group Manager (RGM) manages the SAP data service for Sun Cluster software. If you are setting up the Sun Cluster HA for SAP application server as a failover data service, configure it as a logical-hostname resource and an SAP application-server resource. If you are setting up the Sun Cluster HA for SAP application server as a scalable data service, configure it as a scalable SAP application-server resource.
The following new resource group property should be added to the rg_properties(5) man page.
Auto_start_on_new_cluster
This property controls whether the Resource Group Manager starts the resource group automatically when a new cluster is forming.
The default is TRUE. If set to TRUE, the Resource Group Manager attempts to start the resource group automatically to achieve Desired_primaries when all nodes of the cluster are simultaneously rebooted. If set to FALSE, the resource group does not start automatically when the cluster is rebooted.
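For example, to prevent a resource group from starting automatically when the cluster reboots, you might set the property with a command similar to the following (the resource group name rg1 is a placeholder):

# scrgadm -c -g rg1 -y Auto_start_on_new_cluster=FALSE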
There is an error in the See Also section of this man page. Instead of referencing the Sun Cluster 3.1 Data Services Installation and Configuration Guide, you should reference the Sun Cluster 3.1 Data Service for WebLogic Server.