This document provides the following information for Sun™ Cluster 3.1 Data Services 10/03 software.
This section describes new features and functionality. Contact your Sun sales representative for the complete list of supported hardware and software.
The Sun Cluster HA for Oracle server fault monitor has been enhanced to enable you to customize the behavior of the server fault monitor as follows:
Overriding the preset action for an error
Specifying an action for an error for which no action is preset
For more information, see Sun Cluster 3.1 Data Service for Oracle Guide.
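A minimal sketch of how such a customization might be applied, assuming the custom actions are supplied in a file that is named by the Custom_action_file extension property of the SUNW.oracle_server resource type (the resource name and file path here are hypothetical; see the guide for the actual procedure):

# scrgadm -c -j oracle-server-rs -x Custom_action_file=/global/oracle/custom-actions.caf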
The Sun Cluster Support for Oracle Parallel Server/Real Application Clusters data service has been enhanced to enable this data service to be managed by using Sun Cluster commands.
For more information, see Sun Cluster 3.1 Data Service for Oracle Parallel Server/Real Application Clusters Guide.
The following resource types are enhanced in Sun Cluster 3.1 Data Services 10/03:
SUNW.oracle_server (see Sun Cluster 3.1 Data Service for Oracle Guide)
SUNW.apache (see Sun Cluster 3.1 Data Service for Apache Guide)
SUNW.iws (see Sun Cluster 3.1 Data Service for Sun ONE Web Server Guide)
For general information about upgrading a resource type, see “Upgrading a Resource Type” in Sun Cluster 3.1 Data Service Planning and Administration Guide.
Sun Cluster 3.1 Data Services 10/03 supports the following data services:
Sun Cluster HA for Apache Tomcat – The Sun Cluster HA for Apache Tomcat data service enables orderly startup, orderly shutdown, fault monitoring, and automatic failover of the Apache Tomcat service. Apache Tomcat acts as a servlet engine behind an Apache web server, or it can be configured as a standalone web server including the servlet engine.
Sun Cluster HA for MySQL – The Sun Cluster HA for MySQL data service enables orderly startup, orderly shutdown, fault monitoring, and automatic failover of the MySQL service. The MySQL software delivers a very fast, multithreaded, multiuser, and robust Structured Query Language (SQL) database server. MySQL Server is intended for mission-critical, heavy-load production systems as well as for embedding into mass-deployed software.
Sun Cluster HA for Oracle E-Business Suite – The Sun Cluster HA for Oracle E-Business Suite data service enables orderly startup, orderly shutdown, fault monitoring, and automatic failover of the Oracle E-Business Suite service. Oracle E-Business Suite is a complete set of business applications that enables customers to efficiently manage business processes, using a unified open architecture. This architecture is a framework for multitiered, distributed computing that supports Oracle products.
Sun Cluster HA for SWIFTAlliance Access – The Sun Cluster HA for SWIFTAlliance Access data service enables orderly startup, orderly shutdown, fault monitoring, and automatic failover of the SWIFTAlliance Access service.
This section describes the supported software and memory requirements for Sun Cluster 3.1 software.
Operating environment and patches – Supported Solaris versions and required patches are listed in Patches and Required Firmware Levels.
Volume managers
On Solaris 8 – Solstice DiskSuite™ 4.2.1 and VERITAS Volume Manager 3.2 and 3.5.
On Solaris 9 – Solaris Volume Manager and VERITAS Volume Manager 3.5.
If you upgrade from VERITAS Volume Manager (VxVM) 3.2 to 3.5, the Cluster Volume Manager (CVM) feature is not available until you install the CVM license key for version 3.5. Under VxVM 3.5, the version 3.2 CVM license key does not enable CVM; you must upgrade to the version 3.5 CVM license key.
File systems
On Solaris 8 – Solaris UFS and VERITAS File System 3.4 and 3.5.
On Solaris 9 – Solaris UFS and VERITAS File System 3.5.
Data services (agents) – Contact your Sun sales representative for the complete list of supported data services and application versions. Specify the resource type names when you install the data services by using the scinstall(1M) utility, and when you register the associated resource types by using the scsetup(1M) utility.
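As a sketch, a resource type can also be registered directly with the scrgadm(1M) command rather than through scsetup (the Apache resource type is shown only as an example):

# scrgadm -a -t SUNW.apache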
Procedures for the version of Sun Cluster HA for Sun ONE Directory Server that uses iPlanet Directory Server 5.0 and 5.1 (plus Netscape HTTP, versions 4.11, 4.12, 4.13, and 4.16) are located in Sun Cluster 3.1 Data Service for Sun ONE Directory Server Guide. For later versions of iPlanet Directory Server (now known as Sun ONE Directory Server), see the Sun ONE Directory Server product documentation.
Data Service | Sun Cluster Resource Type
---|---
Sun Cluster HA for Apache | SUNW.apache
Sun Cluster HA for Apache Tomcat | SUNW.sctomcat
Sun Cluster HA for BroadVision One-To-One Enterprise | SUNW.bv
Sun Cluster HA for DHCP | SUNW.gds
Sun Cluster HA for DNS | SUNW.dns
Sun Cluster HA for MySQL | SUNW.scmys
Sun Cluster HA for NetBackup | SUNW.netbackup_master
Sun Cluster HA for NFS | SUNW.nfs
Sun Cluster HA for Oracle E-Business Suite | SUNW.scebs
Sun Cluster HA for Oracle | SUNW.oracle_server, SUNW.oracle_listener
Sun Cluster Support for Oracle Parallel Server/Real Application Clusters | SUNW.rac_framework, SUNW.rac_udlm, SUNW.rac_cvm, SUNW.rac_hwraid
Sun Cluster HA for SAP | SUNW.sap_ci, SUNW.sap_ci_v2, SUNW.sap_as, SUNW.sap_as_v2
Sun Cluster HA for SAP liveCache | SUNW.sap_livecache, SUNW.sap_xserver
Sun Cluster HA for SWIFTAlliance Access | SUNW.scsaa
Sun Cluster HA for Samba | SUNW.gds
Sun Cluster HA for Siebel | SUNW.sblgtwy, SUNW.sblsrvr
Sun Cluster HA for Sun ONE Application Server | SUNW.s1as
Sun Cluster HA for Sun ONE Directory Server (formerly Sun Cluster HA for iPlanet Directory Server) | SUNW.nsldap
Sun Cluster HA for Sun ONE Message Queue | SUNW.s1mq
Sun Cluster HA for Sun ONE Web Server (formerly Sun Cluster HA for iPlanet Web Server) | SUNW.iws
Sun Cluster HA for Sybase ASE | SUNW.sybase
Sun Cluster HA for WebLogic Server | SUNW.wls
Sun Cluster HA for WebSphere MQ | SUNW.gds
Sun Cluster HA for WebSphere MQ Integrator | SUNW.gds
Memory Requirements – Sun Cluster 3.1 software requires extra memory beyond what is configured for a node under a normal workload. The extra memory equals 128 Mbytes plus ten percent of the node's total configured memory. For example, if a standalone node normally requires 1 Gbyte (1024 Mbytes) of memory, you need an extra 256 Mbytes: the extra amount x must satisfy x = 128 + 0.1 × (1024 + x), which gives x = 256.
RSMAPI – Sun Cluster 3.1 software supports the Remote Shared Memory Application Programming Interface (RSMAPI) on RSM-capable interconnects, such as PCI-SCI.
Sun Cluster Security Hardening uses the Solaris Operating Environment hardening techniques recommended by the Sun BluePrints™ program to achieve basic security hardening for clusters. The Solaris Security Toolkit automates the implementation of Sun Cluster Security Hardening.
The Sun Cluster Security Hardening documentation is available at http://www.sun.com/blueprints/0203/817-1079.pdf. You can also access the article from http://wwws.sun.com/software/security/blueprints. From this URL, scroll down to the Architecture heading to locate the article “Securing the Sun Cluster 3.x Software.” The documentation describes how to secure Sun Cluster 3.1 deployments in a Solaris 8 and Solaris 9 environment. The description includes the use of the Solaris Security Toolkit and other best-practice security techniques recommended by Sun security experts.
Table 1–2 Data Services Supported by Sun Cluster Security Hardening
Data Service Agent | Application Version: Failover | Application Version: Scalable | Solaris Version
---|---|---|---
Sun Cluster HA for Apache | 1.3.9 | 1.3.9 | Solaris 8, Solaris 9 (version 1.3.9)
Sun Cluster HA for Apache Tomcat | 3.3, 4.0, 4.1 | 3.3, 4.0, 4.1 | Solaris 8, Solaris 9
Sun Cluster HA for DHCP | S8U7+ | N/A | Solaris 8, Solaris 9
Sun Cluster HA for DNS | with OS | N/A | Solaris 8, Solaris 9
Sun Cluster HA for iPlanet Messaging Server | 6.0 | 4.1 | Solaris 8
Sun Cluster HA for MySQL | 3.23.54a - 4.0.15 | N/A | Solaris 8, Solaris 9
Sun Cluster HA for NetBackup | 3.4 | N/A | Solaris 8
Sun Cluster HA for NFS | with OS | N/A | Solaris 8, Solaris 9
Sun Cluster HA for Oracle E-Business Suite | 11.5.8 | N/A | Solaris 8, Solaris 9
Sun Cluster HA for Oracle | 8.1.7 and 9i (32 and 64 bit) | N/A | Solaris 8, Solaris 9 (HA Oracle 9iR2)
Sun Cluster Support for Oracle Parallel Server/Real Application Clusters | 8.1.7 and 9i (32 and 64 bit) | N/A | Solaris 8, Solaris 9
Sun Cluster HA for SAP | 4.6D (32 and 64 bit) and 6.20 | 4.6D (32 and 64 bit) and 6.20 | Solaris 8, Solaris 9
Sun Cluster HA for SWIFTAlliance Access | 4.1, 5.0 | N/A | Solaris 8
Sun Cluster HA for Samba | 2.2.2, 2.2.7, 2.2.7a, 2.2.8, 2.2.8a | N/A | Solaris 8, Solaris 9
Sun Cluster HA for Siebel | 7.5 | N/A | Solaris 8
Sun Cluster HA for Sun ONE Application Server | 7.0, 7.0 update 1 | N/A | Solaris 8, Solaris 9
Sun Cluster HA for Sun ONE Directory Server | 4.12 | N/A | Solaris 8, Solaris 9 (version 5.1)
Sun Cluster HA for Sun ONE Message Queue | 3.0.1 | N/A | Solaris 8, Solaris 9
Sun Cluster HA for Sun ONE Web Server | 6.0 | 4.1 | Solaris 8, Solaris 9 (version 4.1)
Sun Cluster HA for Sybase ASE | 12.0 (32 bit) | N/A | Solaris 8
Sun Cluster HA for BEA WebLogic Server | 7.0 | N/A | Solaris 8, Solaris 9
Sun Cluster HA for WebSphere MQ | 5.2, 5.3 | N/A | Solaris 8, Solaris 9
Sun Cluster HA for WebSphere MQ Integrator | 2.0.2, 2.1 | N/A | Solaris 8, Solaris 9
The Sun Cluster HA for Oracle 3.0 data service can run on Sun Cluster 3.1 only when used with the following versions of the Solaris operating environment:
Solaris 8, 32-bit version
Solaris 8, 64-bit version
Solaris 9, 32-bit version
The Sun Cluster HA for Oracle 3.0 data service cannot run on Sun Cluster 3.1 when used with the 64-bit version of Solaris 9.
If you use the Oracle Parallel Fail Safe/Real Application Clusters Guard option of Oracle Parallel Server/Real Application Clusters, adhere to its documented hostname requirements before you install Sun Cluster software, because you cannot change hostnames after you install Sun Cluster software.
For more information about this restriction on hostnames and node names, see the Oracle Parallel Fail Safe/Real Application Clusters Guard documentation.
If the VERITAS NetBackup client is a cluster, only one logical host can be configured as the client because there is only one bp.conf file.
If the NetBackup client is a cluster and if one of the logical hosts on the cluster is configured as the NetBackup client, NetBackup cannot back up the physical hosts.
On the cluster running the master server, the master server is the only logical host that can be backed up.
Backup media cannot be attached to the master server, so one or more media servers are required.
In a Sun Cluster environment, robotic control is only supported on media servers and not on the NetBackup master server running on Sun Cluster.
No Sun Cluster node may be an NFS client of a Sun Cluster HA for NFS-exported file system being mastered on a node in the same cluster. Such cross-mounting of Sun Cluster HA for NFS is prohibited. Use the cluster file system to share files among cluster nodes.
Applications running locally on the cluster must not lock files on a file system exported through NFS. Otherwise, local blocking (for example, flock(3UCB) or fcntl(2)) might interfere with the ability to restart the lock manager (lockd). During restart, a blocked local process might be granted a lock that a remote client intends to reclaim, which would cause unpredictable behavior.
Sun Cluster HA for NFS requires that all NFS client mounts be “hard” mounts.
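As a sketch of a hard mount from an NFS client (the logical hostname nfs-lhost and the paths are hypothetical):

# mount -F nfs -o hard nfs-lhost:/global/nfs /mnt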
For Sun Cluster HA for NFS, do not use hostname aliases for network resources. NFS clients mounting cluster file systems using hostname aliases might experience statd lock recovery problems.
Sun Cluster 3.1 software does not support Secure NFS or the use of Kerberos with NFS. In particular, Sun Cluster 3.1 software does not support the secure and kerberos options to the share_nfs(1M) command. However, Sun Cluster 3.1 software does support the use of secure ports for NFS, which you enable by adding the entry set nfssrv:nfs_portmon=1 to the /etc/system file on cluster nodes.
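For reference, enabling secure ports amounts to adding this single line to the /etc/system file on each cluster node and rebooting the node:

set nfssrv:nfs_portmon=1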
Do not use NIS for naming services in a cluster that runs Sun Cluster HA for SAP liveCache, because the NIS entry is used only if files are not available.
For more procedural information about the nsswitch.conf password requirements related to this restriction, see “Preparing the Nodes and Disks” in Sun Cluster 3.1 Data Service for SAP liveCache Guide.
Identify requirements for all data services before you begin Solaris and Sun Cluster installation. If you do not determine these requirements, you might perform the installation process incorrectly and thereby need to completely reinstall the Solaris and Sun Cluster software.
For example, the Oracle Parallel Fail Safe/Real Application Clusters Guard option of Oracle Parallel Server/Real Application Clusters has special requirements for the hostnames/node names that you use in the cluster. You must accommodate these requirements before you install Sun Cluster software because you cannot change hostnames after you install Sun Cluster software. For more information on the special requirements for the hostnames/node names, see the Oracle Parallel Fail Safe/Real Application Clusters Guard documentation.
NIS cannot be used in a cluster running liveCache, because the NIS entry is only used if files are not available. For more information, see Sun Cluster HA for SAP liveCache.
Oracle instances will not start if an SCI cluster interconnect on one cluster node is disabled using the scconf -c -A command.
If you are running Solaris 9, include the following entries in the /etc/nsswitch.conf configuration file on each node that can be the primary for the oracle_server or oracle_listener resource, so that the data service starts and stops correctly during a network failure:
passwd: files
group: files
publickey: files
project: files
The Sun Cluster HA for Oracle data service uses the superuser command su(1M) to start and stop the database. The network service might become unavailable when a cluster node's public network fails. Adding the above entries ensures that the su command does not refer to the NIS/NIS+ name services.
If you are running Solaris 9, include one of the following entries for the publickey database in the /etc/nsswitch.conf configuration files on each node that can be the primary for liveCache resources so that the data service starts and stops correctly during a network failure:
publickey:
publickey: files
publickey: files [NOTFOUND=return] nis
publickey: files [NOTFOUND=return] nisplus
The Sun Cluster HA for SAP liveCache data service uses the dbmcli command to start and stop the liveCache. The network service might become unavailable when a cluster node's public network fails. Adding one of the above entries, in addition to the updates documented in Sun Cluster 3.1 Data Service for SAP liveCache Guide, ensures that the su command and the dbmcli command do not refer to the NIS/NIS+ name services.
On a heavily loaded system, the Oracle listener probe might time out. The time-out value of the Oracle listener probe depends on the value of the Thorough_probe_interval property and cannot be set independently. To prevent the Oracle listener probe from timing out, increase the value of the Thorough_probe_interval property.
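For example, to lengthen the probe interval, and with it the listener probe time-out, on a hypothetical listener resource:

# scrgadm -c -j oracle-listener-rs -y Thorough_probe_interval=120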
The Sun Cluster HA for Siebel agent does not monitor individual Siebel components. If the failure of a Siebel component is detected, only a warning message is logged in syslog.
To work around this, restart the Siebel server resource group in which components are offline:
# scswitch -R -h node -g resource_group
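For example, with a hypothetical node name and resource group name:

# scswitch -R -h phys-schost-1 -g siebel-rg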
The message “SAP xserver is not available” might be printed while the SAP xserver starts, because the xserver is not considered available until it is fully up and running. Ignore this message during SAP xserver startup.
Do not configure the xserver resource as a failover resource. The Sun Cluster HA for SAP liveCache data service does not fail over properly when the xserver is configured as a failover resource.
To utilize the Monitor_Uri_List extension property of Sun Cluster HA for Apache and Sun Cluster HA for Sun ONE Web Server, you must set the Type_version property to 4.
You can also upgrade the Type_version property of a resource to 4 at any time. For information on how to upgrade a resource type, see “Upgrading a Resource Type” in Sun Cluster 3.1 Data Service Planning and Administration Guide.
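For example, after the version 4 resource type has been registered, a hypothetical Apache resource could be upgraded as follows:

# scrgadm -c -j apache-rs -y Type_version=4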
Some data services run the su command to set the user identifier (ID) to a specific user. For the Solaris 9 operating environment, the su command resets the project identifier to default. This behavior overrides the setting of the project identifier by the RG_project_name system property or the Resource_project_name system property.
To ensure that the appropriate project name is used at all times, set the project name in the environment file of the user. One method to set the project name in the user's environment file is to add the following line to the .cshrc file of the user:
/usr/bin/newtask -p project-name -c $$
project-name is the project name that is to be used.
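For example, assuming a project named user.oracle has already been defined in the project database, the .cshrc entry would be:

/usr/bin/newtask -p user.oracle -c $$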
This section provides information about patches for Sun Cluster configuration.
You must be a registered SunSolve™ user to view and download the required patches for the Sun Cluster product. If you do not have a SunSolve account, contact your Sun service representative or sales engineer, or register online at http://sunsolve.sun.com.
PatchPro is a patch-management tool designed to ease the selection and download of patches required for installation or maintenance of Sun Cluster software. PatchPro provides a Sun Cluster-specific Interactive Mode tool to make the installation of patches easier and an Expert Mode tool to maintain your configuration with the latest set of patches. Expert Mode is especially useful for those who want to get all of the latest patches, not just the high availability and security patches.
To access the PatchPro tool for Sun Cluster software, go to http://www.sun.com/PatchPro/, click Sun Cluster, then choose either Interactive Mode or Expert Mode. Follow the instructions in the PatchPro tool to describe your cluster configuration and download the patches.
The SunSolve™ Online Web site provides 24-hour access to the most up-to-date information regarding patches, software, and firmware for Sun products. Access the SunSolve Online site at http://sunsolve.sun.com for the most current matrixes of supported software, firmware, and patch revisions.
Sun Cluster 3.1 patch information is available in Info Docs. To view Info Docs, log on to SunSolve and access the Simple Search selection from the top of the main page. From the Simple Search page, click Info Docs and type Sun Cluster 3.1 in the search criteria box. This search brings up the Info Docs page for Sun Cluster 3.1 software.
Before you install Sun Cluster 3.1 software and apply patches to a cluster component (Solaris operating environment, Sun Cluster software, volume manager or data services software, or disk hardware), review the Info Docs and any README files that accompany the patches. All cluster nodes must have the same patch level for proper cluster operation.
For specific patch procedures and tips on administering patches, see “Patching Sun Cluster Software and Firmware” in Sun Cluster 3.1 10/03 System Administration Guide.
HAStorage might not be supported in a future release of Sun Cluster software. Near-equivalent functionality is supported by HAStoragePlus. To upgrade from HAStorage to HAStoragePlus when you use cluster file systems or device groups, see “Upgrading from HAStorage to HAStoragePlus” in Sun Cluster 3.1 Data Service Planning and Administration Guide.
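As a sketch of the replacement resource, a new HAStoragePlus resource for a cluster file system might be created as follows (the resource, resource group, and mount point names are hypothetical; see the guide for the complete upgrade procedure):

# scrgadm -a -t SUNW.HAStoragePlus
# scrgadm -a -j hasp-rs -g oracle-rg -t SUNW.HAStoragePlus -x FilesystemMountPoints=/global/oracle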
The following localization packages are available on the Data Services CD-ROM. When you install or upgrade to Sun Cluster 3.1, the localization packages are automatically installed for the data services that you have selected.
Language | Package Name | Package Description
---|---|---
French | SUNWfscapc | French Sun Cluster Apache Web Server Component
French | SUNWfscbv | French Sun Cluster BV Server Component
French | SUNWfscdns | French Sun Cluster Domain Name Server Component
French | SUNWfschtt | French Sun Cluster Sun ONE Web Server Component
French | SUNWfsclc | French Sun Cluster resource type for SAP liveCache
French | SUNWfscnb | French Sun Cluster resource type for netbackup_master server
French | SUNWfscnfs | French Sun Cluster NFS Server Component
French | SUNWfscnsl | French Sun Cluster Sun ONE Directory Server Component
French | SUNWfscor | French Sun Cluster HA Oracle data service
French | SUNWfscs1as | French Sun Cluster HA Sun ONE Application Server data service
French | SUNWfscs1mq | French Sun Cluster HA Sun ONE Message Queue data service
French | SUNWfscsap | French Sun Cluster SAP R/3 Component
French | SUNWfscsbl | French Sun Cluster resource types for Siebel gateway and Siebel server
French | SUNWfscsyb | French Sun Cluster HA Sybase data service
French | SUNWfscwls | French Sun Cluster BEA WebLogic Server Component
Japanese | SUNWjscapc | Japanese Sun Cluster Apache Web Server Component
Japanese | SUNWjscbv | Japanese Sun Cluster BV Server Component
Japanese | SUNWjscdns | Japanese Sun Cluster Domain Name Server Component
Japanese | SUNWjschtt | Japanese Sun Cluster Sun ONE Web Server Component
Japanese | SUNWjsclc | Japanese Sun Cluster resource type for SAP liveCache
Japanese | SUNWjscnb | Japanese Sun Cluster resource type for netbackup_master server
Japanese | SUNWjscnfs | Japanese Sun Cluster NFS Server Component
Japanese | SUNWjscnsl | Japanese Sun Cluster Sun ONE Directory Server Component
Japanese | SUNWjscor | Japanese Sun Cluster HA Oracle data service
Japanese | SUNWjscs1as | Japanese Sun Cluster HA Sun ONE Application Server data service
Japanese | SUNWjscs1mq | Japanese Sun Cluster HA Sun ONE Message Queue data service
Japanese | SUNWjscsap | Japanese Sun Cluster SAP R/3 Component
Japanese | SUNWjscsbl | Japanese Sun Cluster resource types for Siebel gateway and Siebel server
Japanese | SUNWjscsyb | Japanese Sun Cluster HA Sybase data service
Japanese | SUNWjscwls | Japanese Sun Cluster BEA WebLogic Server Component
The complete Sun Cluster 3.1 Data Services 10/03 user documentation set is available in PDF and HTML format on the Sun Cluster Agents CD-ROM. AnswerBook2™ server software is not needed to read Sun Cluster 3.1 documentation. See the index.html file at the top level of either CD-ROM for more information. This index.html file enables you to read the PDF and HTML manuals directly from the disc and to access instructions to install the documentation packages.
The SUNWsdocs package must be installed before you install any Sun Cluster documentation packages. You can use pkgadd to install the SUNWsdocs package from either the SunCluster_3.1/Sol_N/Packages/ directory of the Sun Cluster CD-ROM or from the components/SunCluster_Docs_3.1/Sol_N/Packages/ directory of the Sun Cluster Agents CD-ROM, where N is either 8 for Solaris 8 or 9 for Solaris 9. The SUNWsdocs package is also automatically installed when you run the installer from the Solaris 9 Documentation CD.
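For example, assuming the Sun Cluster Agents CD-ROM is mounted at /cdrom/cdrom0 on a Solaris 9 node:

# pkgadd -d /cdrom/cdrom0/components/SunCluster_Docs_3.1/Sol_9/Packages SUNWsdocs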
The Sun Cluster 3.1 documentation set consists of the following collections:
The Sun Cluster 3.1 Software Collection, which includes the following manuals:
Sun Cluster 3.1 10/03 Concepts Guide
Sun Cluster 3.1 10/03 Data Services Developer's Guide
Sun Cluster 3.1 10/03 Error Messages Guide
Sun Cluster 3.1 10/03 Software Installation Guide
The Sun Cluster 3.x Hardware Administration Collection, which includes the following manuals:
Sun Cluster 3.x Hardware Administration Manual
Sun Cluster 3.x With Sun StorEdge 3310 Array Manual
Sun Cluster 3.x With Sun StorEdge 3510 FC Array Manual
Sun Cluster 3.x With Sun StorEdge 3900 or 6900 Series System Manual
Sun Cluster 3.x With Sun StorEdge 6120 Array Manual
Sun Cluster 3.x With Sun StorEdge 6320 System Manual
Sun Cluster 3.x With Sun StorEdge 9900 Series Storage Device Manual
Sun Cluster 3.x With Sun StorEdge A1000 or Netra st A1000 Array Manual
Sun Cluster 3.x With Sun StorEdge A3500/A3500FC System Manual
Sun Cluster 3.x With Sun StorEdge A5x00 Array Manual
Sun Cluster 3.x With Sun StorEdge D1000 or Netra st D1000 Disk Array Manual
Sun Cluster 3.x With Sun StorEdge D2 Array Manual
Sun Cluster 3.x With Sun StorEdge MultiPack Enclosure Manual
Sun Cluster 3.x With Sun StorEdge Netra D130 or StorEdge S1 Enclosure Manual
Sun Cluster 3.x With Sun StorEdge T3 or T3+ Array Partner-Group Configuration Manual
Sun Cluster 3.x With Sun StorEdge T3 or T3+ Array Single-Controller Configuration Manual
The Sun Cluster 3.1 Data Services Collection, which contains the following manuals:
Sun Cluster 3.1 Data Service Planning and Administration Guide
Sun Cluster 3.1 Data Service for Apache Guide
Sun Cluster 3.1 Data Service for Apache Tomcat Guide
Sun Cluster 3.1 Data Service for BroadVision One-To-One Enterprise Guide
Sun Cluster 3.1 Data Service for DHCP Guide
Sun Cluster 3.1 Data Service for Domain Name Service (DNS) Guide
Sun Cluster 3.1 Data Service for MySQL Guide
Sun Cluster 3.1 Data Service for Netbackup Guide
Sun Cluster 3.1 Data Service for Network File System (NFS) Guide
Sun Cluster 3.1 Data Service for Oracle E-Business Suite Guide
Sun Cluster 3.1 Data Service for Oracle Guide
Sun Cluster 3.1 Data Service for Oracle Parallel Server/Real Application Clusters Guide
Sun Cluster 3.1 Data Service for SAP Guide
Sun Cluster 3.1 Data Service for SAP liveCache Guide
Sun Cluster 3.1 Data Service for SWIFTAlliance Access Guide
Sun Cluster 3.1 Data Service for Samba Guide
Sun Cluster 3.1 Data Service for Siebel Guide
Sun Cluster 3.1 Data Service for Sun ONE Application Server Guide
Sun Cluster 3.1 Data Service for Sun ONE Directory Server Guide
Sun Cluster 3.1 Data Service for Sun ONE Message Queue Guide
Sun Cluster 3.1 Data Service for Sun ONE Web Server Guide
Sun Cluster 3.1 Data Service for Sybase ASE Guide
Sun Cluster 3.1 Data Service for WebLogic Server Guide
Sun Cluster 3.1 Data Service for WebSphere MQ Guide
Sun Cluster 3.1 Data Service for WebSphere MQ Integrator Guide
In addition, the docs.sun.com℠ Web site enables you to access Sun Cluster documentation on the Web. You can browse the docs.sun.com archive or search for a specific book title or subject at http://docs.sun.com.
This section discusses known errors or omissions for documentation, online help, or man pages and steps to correct these problems.
This section discusses errors and omissions from Sun Cluster 3.1 Data Service for Oracle Parallel Server/Real Application Clusters Guide.
The section “Requirements for Using the Cluster File System” erroneously states that you can store data files on the cluster file system. You must not store data files on the cluster file system. Therefore, ignore all references to data files in this section.
When Oracle software is installed on the cluster file system, all the files in the directory that the ORACLE_HOME environment variable specifies are accessible by all cluster nodes.
An installation might require that some Oracle files or directories maintain node-specific information. You can satisfy this requirement by using a symbolic link whose target is a file or a directory on a file system that is local to a node. Such a file system is not part of the cluster file system.
To use a symbolic link for this purpose, you must allocate an area on a local file system. To enable Oracle applications to create symbolic links to files in this area, the applications must be able to access files in this area. Because the symbolic links reside on the cluster file system, all references to the links from all nodes are the same. Therefore, all nodes must have the same namespace for the area on the local file system.
Perform this procedure for each directory that is to maintain node-specific information. The following directories are typically required to maintain node-specific information:
$ORACLE_HOME/network/agent
$ORACLE_HOME/network/log
$ORACLE_HOME/network/trace
$ORACLE_HOME/srvm/log
$ORACLE_HOME/apache
For information about other directories that might be required to maintain node-specific information, see your Oracle documentation.
On each cluster node, create the local directory that is to maintain node-specific information.
# mkdir -p local-dir

-p – Specifies that all nonexistent parent directories are created first.
local-dir – Specifies the full path name of the directory that you are creating.
On each cluster node, make a local copy of the global directory that is to maintain node-specific information.
# cp -pr global-dir local-dir-parent

-p – Specifies that the owner, group, permissions modes, modification time, access time, and access control lists are preserved.
-r – Specifies that the directory and all its files, including any subdirectories and their files, are copied.
global-dir – Specifies the full path of the global directory that you are copying. This directory resides on the cluster file system under the directory that the ORACLE_HOME environment variable specifies.
local-dir-parent – Specifies the directory on the local node that is to contain the local copy. This directory is the parent directory of the directory that you created in Step 1.
Replace the global directory that you copied in Step 2 with a symbolic link to the local copy of the global directory.
From any cluster node, remove the global directory that you copied in Step 2.
# rm -r global-dir

-r – Specifies that the directory and all its files, including any subdirectories and their files, are removed.
global-dir – Specifies the file name and full path of the global directory that you are removing. This directory is the global directory that you copied in Step 2.
From any cluster node, create a symbolic link from the local copy of the directory to the global directory that you removed in Step a.
# ln -s local-dir global-dir
This example shows the sequence of operations that is required to create node-specific directories on a two-node cluster. This cluster is configured as follows:
The ORACLE_HOME environment variable specifies the /global/oracle directory.
The local file system on each node is located under the /local directory.
The following operations are performed on each node:
To create the required directories on the local file system, the following commands are run:
# mkdir -p /local/oracle/network/agent
# mkdir -p /local/oracle/network/log
# mkdir -p /local/oracle/network/trace
# mkdir -p /local/oracle/srvm/log
# mkdir -p /local/oracle/apache
To make local copies of the global directories that are to maintain node-specific information, the following commands are run:
# cp -pr $ORACLE_HOME/network/agent /local/oracle/network/.
# cp -pr $ORACLE_HOME/network/log /local/oracle/network/.
# cp -pr $ORACLE_HOME/network/trace /local/oracle/network/.
# cp -pr $ORACLE_HOME/srvm/log /local/oracle/srvm/.
# cp -pr $ORACLE_HOME/apache /local/oracle/.
The following operations are performed on only one node:
To remove the global directories, the following commands are run:
# rm -r $ORACLE_HOME/network/agent
# rm -r $ORACLE_HOME/network/log
# rm -r $ORACLE_HOME/network/trace
# rm -r $ORACLE_HOME/srvm/log
# rm -r $ORACLE_HOME/apache
To create symbolic links from the local directories to their corresponding global directories, the following commands are run:
# ln -s /local/oracle/network/agent $ORACLE_HOME/network/agent
# ln -s /local/oracle/network/log $ORACLE_HOME/network/log
# ln -s /local/oracle/network/trace $ORACLE_HOME/network/trace
# ln -s /local/oracle/srvm/log $ORACLE_HOME/srvm/log
# ln -s /local/oracle/apache $ORACLE_HOME/apache
Perform this procedure for each file that is to maintain node-specific information. The following files are typically required to maintain node-specific information:
$ORACLE_HOME/network/admin/snmp_ro.ora
$ORACLE_HOME/network/admin/snmp_rw.ora
For information about other files that might be required to maintain node-specific information, see your Oracle documentation.
On each cluster node, create the local directory that will contain the file that is to maintain node-specific information.
# mkdir -p local-dir

-p – Specifies that all nonexistent parent directories are created first.
local-dir – Specifies the full path name of the directory that you are creating.
On each cluster node, make a local copy of the global file that is to maintain node-specific information.
# cp -p global-file local-dir

-p – Specifies that the owner, group, permissions modes, modification time, access time, and access control lists are preserved.
global-file – Specifies the file name and full path of the global file that you are copying. This file was installed on the cluster file system under the directory that the ORACLE_HOME environment variable specifies.
local-dir – Specifies the directory that is to contain the local copy of the file. This directory is the directory that you created in Step 1.
Replace the global file that you copied in Step 2 with a symbolic link to the local copy of the file.
From any cluster node, remove the global file that you copied in Step 2.
# rm global-file

global-file – Specifies the file name and full path of the global file that you are removing. This file is the global file that you copied in Step 2.
From any cluster node, create a symbolic link from the local copy of the file to the directory from which you removed the global file in Step a.
# ln -s local-file global-dir
This example shows the sequence of operations that is required to create node-specific files on a two-node cluster. This cluster is configured as follows:
The ORACLE_HOME environment variable specifies the /global/oracle directory.
The local file system on each node is located under the /local directory.
The following operations are performed on each node:
To create the local directory that will contain the files that are to maintain node-specific information, the following command is run:
# mkdir -p /local/oracle/network/admin
To make a local copy of the global files that are to maintain node-specific information, the following commands are run:
# cp -p $ORACLE_HOME/network/admin/snmp_ro.ora \
/local/oracle/network/admin/.
# cp -p $ORACLE_HOME/network/admin/snmp_rw.ora \
/local/oracle/network/admin/.
The following operations are performed on only one node:
To remove the global files, the following commands are run:
# rm $ORACLE_HOME/network/admin/snmp_ro.ora
# rm $ORACLE_HOME/network/admin/snmp_rw.ora
To create symbolic links from the local copies of the files to their corresponding global files, the following commands are run:
# ln -s /local/oracle/network/admin/snmp_ro.ora \
$ORACLE_HOME/network/admin/snmp_ro.ora
# ln -s /local/oracle/network/admin/snmp_rw.ora \
$ORACLE_HOME/network/admin/snmp_rw.ora
This section discusses errors and omissions from Sun Cluster 3.1 Data Service for Oracle E-Business Suite Guide.
Step 13 of the procedure “How to Register and Configure Sun Cluster HA for Oracle E-Business Suite as a Failover Service” is incorrect. The correct text is as follows:
13. Create a resource for the Oracle E-Business Suite Concurrent Manager Server.
# grep PROD.CON_COMNTOP /var/tmp/config.txt
PROD.CON_COMNTOP=/global/mnt10/d01/oracle/prodcomn   <- CON_COMNTOP
#
# grep PROD.DBS_ORA806= /var/tmp/config.txt
PROD.DBS_ORA806=/global/mnt10/d01/oracle/prodora/8.0.6   <- ORACLE_HOME
The example that follows this step is also incorrect. The correct example is as follows:
RS=ebs-cmg-res
RG=ebs-rg
HAS_RS=ebs-has-res
LSR_RS=ebs-cmglsr-res
CON_HOST=lhost1
CON_COMNTOP=/global/mnt10/d01/oracle/prodcomn
CON_APPSUSER=ebs
APP_SID=PROD
APPS_PASSWD=apps
ORACLE_HOME=/global/mnt10/d01/oracle/prodora/8.0.6
CON_LIMIT=70
MODE=32/Y
This section discusses errors and omissions from Sun Cluster 3.1 Data Service for Sun ONE Directory Server Guide and Sun Cluster 3.1 Data Service for Sun ONE Web Server Guide.
The names for iPlanet Web Server and iPlanet Directory Server have been changed. The new names are Sun ONE Web Server and Sun ONE Directory Server. The data service names are now Sun Cluster HA for Sun ONE Web Server and Sun Cluster HA for Sun ONE Directory Server.
The application name on the Sun Cluster Agents CD-ROM might still be iPlanet Web Server and iPlanet Directory Server.
This section discusses errors and omissions from Sun Cluster 3.1 Data Service for SAP liveCache Guide.
The “Registering and Configuring Sun Cluster HA for SAP liveCache” section should state that the SAP xserver can be configured only as a scalable resource. Configuring the SAP xserver as a failover resource prevents the SAP liveCache resource from failing over. Ignore all references to configuring the SAP xserver resource as a failover resource in Sun Cluster 3.1 Data Service for SAP liveCache Guide.
The “Registering and Configuring Sun Cluster HA for SAP liveCache” section should also contain an extra step. After step 10, “Enable the scalable resource group that now includes the SAP xserver resource,” you must register the liveCache resource by entering the following command:
# scrgadm -a -j livecache-resource -g livecache-resource-group \
-t SUNW.sap_livecache -x livecache_name=LC-NAME \
-y resource_dependencies=livecache-storage-resource
After you register the liveCache resource, proceed to the next step, “Set up a resource group dependency between SAP xserver and liveCache.”
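That next step establishes a dependency so that the liveCache resource group depends on the SAP xserver resource group; a sketch using the RG_dependencies property with the same hypothetical group names follows:

# scrgadm -c -g livecache-resource-group -y RG_dependencies=xserver-resource-group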
This section discusses errors and omissions from Sun Cluster 3.1 Data Service for WebLogic Server Guide.
The “Protection of BEA WebLogic Server Component” table should state that the BEA WebLogic Server database can be any database that is supported by BEA WebLogic Server and supported on Sun Cluster. The table should also state that the HTTP servers can be any HTTP servers that are supported by BEA WebLogic Server and supported on Sun Cluster.
This section discusses errors and omissions from the Sun Cluster 3.1 Data Service for Apache Guide.
The “Planning the Installation and Configuration” section contains a note about using a scalable proxy to serve a scalable web resource. Disregard that note: use of a scalable proxy is not supported.
If you use the Monitor_Uri_List extension property for the Sun Cluster HA for Apache data service, the required value of the Type_version property is 4. You can perform a Resource Type upgrade to Type_version 4.
If you use the Monitor_Uri_List extension property for the Sun Cluster HA for Sun ONE Web Server data service, the required value of the Type_version property is 4. You can perform a Resource Type upgrade to Type_version 4.
There is an error in the See Also section of the SUNW.wls(5) man page. Instead of referencing the Sun Cluster 3.1 Data Services Installation and Configuration Guide, you should reference the Sun Cluster 3.1 Data Service for WebLogic Server Guide.