Sun Cluster 3.1 Data Service 5/03 Release Notes

This document provides the following information for Sun™ Cluster 3.1 Data Services 5/03 software.

What's New in Sun Cluster 3.1 Data Services 5/03

This section describes new features and functionality. Contact your Sun sales representative for the complete list of supported hardware and software.

New Error Messages

For error messages that were not included on the Sun Cluster CD-ROM, see Sun Cluster 3.1 5/03 Release Notes.

Automatic End Backup Facility

This new Sun Cluster HA for Oracle feature recognizes when a database fails to start because of files left in hot backup mode. This feature takes the necessary action to reopen the database for use. You can turn this feature on and off. The default state is OFF.

For information on the Auto_End_Bkp extension property that enables this feature, see Sun Cluster 3.1 Data Service for Oracle.

Resource Type Upgrade

As newer versions of resource types are released, you will want to install and register the upgraded resource type. You may also want to upgrade your existing resources to the newer resource type versions. The Resource Type Upgrade feature enables you to upgrade an existing resource to a new resource type version. For documentation on this new feature, see “Upgrading a Resource Type” in Sun Cluster 3.1 Data Service Planning and Administration Guide.

Sun Cluster HA for SAP liveCache

Sun Cluster HA for SAP liveCache is a data service that makes liveCache highly available. Sun Cluster HA for SAP liveCache provides fault monitoring and automatic failover for liveCache and fault monitoring and automatic restart for SAP xserver, eliminating a single point of failure in an SAP Advanced Planner & Optimizer (APO) System. With a combination of Sun Cluster HA for SAP liveCache and other Sun Cluster data services, Sun Cluster software provides a complete solution to protect SAP components in a Sun Cluster environment.

For documentation on Sun Cluster HA for SAP liveCache, see Sun Cluster 3.1 Data Service for SAP liveCache.

Sun Cluster HA for Siebel

Sun Cluster HA for Siebel provides fault monitoring and automatic failover for the Siebel application. High availability is provided for the Siebel gateway and Siebel server. With a Siebel implementation, a physical node that runs the Sun Cluster agent cannot also run the Resonate agent. Resonate and Sun Cluster can coexist within the same Siebel enterprise, but not on the same physical server.

For documentation on Sun Cluster HA for Siebel, see Sun Cluster 3.1 Data Service for Siebel.

Support for Sun ONE Proxy Server

Sun Cluster HA for Sun ONE Web Server now supports Sun ONE Proxy Server. For information about the Sun ONE Proxy Server product, see http://docs.sun.com/db/prod/s1.webproxys. For Sun ONE Proxy Server installation and configuration information, see http://docs.sun.com/db/coll/S1_ipwebproxysrvr36.

New Supported Data Services

Sun Cluster 3.1 Data Services 5/03 supports the following data services:

Supported Products

This section describes the supported software and memory requirements for Sun Cluster 3.1 software.

Sun Cluster Security Hardening

Sun Cluster Security Hardening uses the Solaris Operating Environment hardening techniques recommended by the Sun BluePrints™ program to achieve basic security hardening for clusters. The Solaris Security Toolkit automates the implementation of Sun Cluster Security Hardening.

The Sun Cluster Security Hardening documentation is available at http://www.sun.com/blueprints/0203/817-1079.pdf. You can also access the article from http://wwws.sun.com/software/security/blueprints. From this URL, scroll down to the Architecture heading to locate the article “Securing the Sun Cluster 3.x Software.” The documentation describes how to secure Sun Cluster 3.1 deployments in a Solaris 8 and Solaris 9 environment. The description includes the use of the Solaris Security Toolkit and other best-practice security techniques recommended by Sun security experts.

Table 1–2 Data Services Supported by Sun Cluster Security Hardening

| Data Service Agent | Application Version: Failover | Application Version: Scalable | Solaris Version |
|---|---|---|---|
| Sun Cluster HA for BEA WebLogic Server | 7.0 | N/A | Solaris 8, Solaris 9 |
| Sun Cluster HA for iPlanet Messaging Server | 6.0 | 4.1 | Solaris 8 |
| Sun Cluster HA for Sun ONE Web Server | 6.0 | 4.1 | Solaris 8, Solaris 9 (version 4.1) |
| Sun Cluster HA for Apache | 1.3.9 | 1.3.9 | Solaris 8, Solaris 9 (version 1.3.9) |
| Sun Cluster HA for SAP | 4.6D (32 and 64 bit) and 6.20 | 4.6D (32 and 64 bit) and 6.20 | Solaris 8, Solaris 9 |
| Sun Cluster HA for Sun ONE Directory Server | 4.12 | N/A | Solaris 8, Solaris 9 (version 5.1) |
| Sun Cluster HA for NetBackup | 3.4 | N/A | Solaris 8 |
| Sun Cluster HA for Oracle | 8.1.7 and 9i (32 and 64 bit) | N/A | Solaris 8, Solaris 9 (HA Oracle 9iR2) |
| Sun Cluster HA for Siebel | 7.5 | N/A | Solaris 8 |
| Sun Cluster HA for Sybase ASE | 12.0 (32 bit) | N/A | Solaris 8 |
| Sun Cluster Support for Oracle Parallel Server/Real Application Clusters | 8.1.7 and 9i (32 and 64 bit) | N/A | Solaris 8, Solaris 9 |
| Sun Cluster HA for DNS | with OS | N/A | Solaris 8, Solaris 9 |
| Sun Cluster HA for NFS | with OS | N/A | Solaris 8, Solaris 9 |

Restrictions

Running Sun Cluster HA for Oracle 3.0 on Sun Cluster 3.1

The Sun Cluster HA for Oracle 3.0 data service can run on Sun Cluster 3.1 only when used with the following versions of the Solaris operating environment:


Note –

The Sun Cluster HA for Oracle 3.0 data service cannot run on Sun Cluster 3.1 when used with the 64-bit version of Solaris 9.


Sun Cluster HA for Oracle Parallel Server/Real Application Cluster

Because you cannot change hostnames after you install Sun Cluster software, adhere to the hostname requirements in the documentation for the Oracle Parallel Fail Safe/Real Application Clusters Guard option of Oracle Parallel Server/Real Application Clusters.

For more information on this restriction on hostnames and node names, see the Oracle Parallel Fail Safe/Real Application Clusters Guard documentation.

Sun Cluster HA for NetBackup

Sun Cluster HA for NFS

Installation Issues and Bugs

Installation Guidelines

Identify requirements for all data services before you begin Solaris and Sun Cluster installation. If you do not determine these requirements, you might perform the installation process incorrectly and thereby need to completely reinstall the Solaris and Sun Cluster software.

For example, the Oracle Parallel Fail Safe/Real Application Clusters Guard option of Oracle Parallel Server/Real Application Clusters has special requirements for the hostnames/node names that you use in the cluster. You must accommodate these requirements before you install Sun Cluster software because you cannot change hostnames after you install Sun Cluster software. For more information on the special requirements for the hostnames/node names, see the Oracle Parallel Fail Safe/Real Application Clusters Guard documentation.

Setting broker_user to NULL still creates resources (4803317)

When you create a Sun ONE Message Queue resource with smooth_shutdown set to true, the broker_user extension property is required. However, the validate method does not check whether broker_user is set, so validation succeeds even when it is not.

When you set smooth_shutdown to true, be sure that broker_user is also set.
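Because validation does not catch the missing property, it is safest to set both properties together when you create the resource. The following command is a hypothetical sketch: the resource type name SUNW.s1mq, the resource name mq-rs, and the resource group mq-rg are placeholders for your configuration.

```
# scrgadm -a -j mq-rs -g mq-rg -t SUNW.s1mq \
-x smooth_shutdown=true -x broker_user=admin
```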

scinstall Supports Sun Cluster HA for SAP and Sun Cluster HA for SAP liveCache (4776411)

The scinstall(1M) command incorrectly reports that the Sun Cluster HA for SAP and Sun Cluster HA for SAP liveCache data services are not supported on Solaris 9.

Both data services are supported on Solaris 8 and Solaris 9.

Administration Runtime Issues and Bugs

Timeout-Period Guideline (4499573)

When using I/O-intensive data services with a large number of disks configured in the cluster, the application may experience delays due to retries within the I/O subsystem during disk failures. An I/O subsystem may take several minutes to retry and recover from a disk failure. This delay can result in Sun Cluster failing over the application to another node, even though the disk may have eventually recovered on its own.

To avoid failover during these instances, consider increasing the default probe timeout of the data service. If you need more information or help with increasing data service timeouts, contact your local support engineer.
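If you decide to raise a probe timeout, the change is typically a single resource-property update. The following sketch is hypothetical: Probe_timeout is an extension property of many, but not all, data service resource types, and oracle-rs is a placeholder resource name. Confirm the property name for your data service in its documentation before use.

```
# scrgadm -c -j oracle-rs -x Probe_timeout=300
```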

HA Oracle Stop Method Times Out (4644289)

If you are running Solaris 9, include the following entries in the /etc/nsswitch.conf configuration file on each node that can be the primary for the oracle_server or oracle_listener resource, so that the data service starts and stops correctly during a network failure:

passwd: files
group: files
publickey: files
project: files

The Sun Cluster HA for Oracle data service uses the superuser command su(1M) to start and stop the database. The network service might become unavailable when a cluster node's public network fails. Adding the above entries ensures that the su command does not refer to the NIS/NIS+ name services.
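A quick way to confirm these entries is a small script along the following lines. This is an illustrative sketch that inspects a self-contained sample file rather than the live /etc/nsswitch.conf; to check a real node, point conf at /etc/nsswitch.conf instead.

```shell
#!/bin/sh
# Sketch: verify that the passwd, group, publickey, and project
# databases consult "files" first in an nsswitch.conf-style file.
# A sample file is written so the check is self-contained.
conf=/tmp/nsswitch.sample.$$
cat > "$conf" <<'EOF'
passwd: files
group: files
publickey: files
project: files
hosts: files nis
EOF

status=0
for db in passwd group publickey project; do
  if grep -E "^$db:[[:space:]]*files" "$conf" > /dev/null; then
    echo "$db: ok"
  else
    echo "$db: does not list files first"
    status=1
  fi
done
rm -f "$conf"
exit $status
```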

HA-Siebel Does Not Automatically Restart Failed Siebel Components (4722288)

The Sun Cluster HA for Siebel agent does not monitor individual Siebel components. If the failure of a Siebel component is detected, only a warning message is logged in syslog.

To work around this problem, restart the Siebel server resource group in which components are offline:

    # scswitch -R -h node -g resource_group

xserver_svc_start Reports xserver Unavailable During Start-up (4738554)

The message “SAP xserver is not available” is printed during the startup of the SAP xserver because the xserver is not considered available until it is fully up and running.

Ignore this message during the startup of the SAP xserver.

Public Network Failure Might Cause Siebel Gateway Probe to Timeout (4764204)

When the node that runs the Siebel gateway has a path that begins with /home and that depends on network resources such as NFS and NIS, and the public network fails, the Siebel gateway probe times out and causes the Siebel gateway resource to go offline. Without the public network, the Siebel gateway probe hangs while trying to open a file under /home, causing the probe to time out.

To prevent the Siebel gateway probe from timing out while trying to open a file under /home, ensure the following on all cluster nodes that can host the Siebel gateway:

The s1as Resource Does Not Restart When the Second URI is Down (4803242)

If a hostname in a URI in monitor_uri_list is an unknown host, the agent logs a message stating that the connection attempt has timed out. Normally, a connection that times out triggers a restart or failover of the application server. However, when the hostname is unknown, the connection does not initiate a restart or failover.

If the agent logs a message saying that a connection timed out but does not take any action, check to ensure that the hostnames in monitor_uri_list are correct.

SAP liveCache Stop Method Times Out (4836272)

If you are running Solaris 9, include one of the following entries for the publickey database in the /etc/nsswitch.conf configuration files on each node that can be the primary for liveCache resources so that the data service starts and stops correctly during a network failure:

publickey: 
publickey:  files
publickey:  files [NOTFOUND=return] nis 
publickey:  files [NOTFOUND=return] nisplus

The Sun Cluster HA for SAP liveCache data service uses the dbmcli command to start and stop the liveCache. The network service might become unavailable when a cluster node's public network fails. Adding one of the above entries, in addition to the updates documented in Sun Cluster 3.1 Data Service for SAP liveCache, ensures that the su command and the dbmcli command do not refer to the NIS/NIS+ name services.
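The accepted forms can be checked mechanically. The following sketch tests a sample publickey line against the forms listed above (it assumes single spaces between fields) rather than reading the live /etc/nsswitch.conf; adapt it as needed for a real node.

```shell
#!/bin/sh
# Sketch: validate a publickey line against the accepted forms
# for SAP liveCache, using a self-contained sample value.
line='publickey: files [NOTFOUND=return] nis'
case "$line" in
  'publickey:' | \
  'publickey: files' | \
  'publickey: files [NOTFOUND=return] nis' | \
  'publickey: files [NOTFOUND=return] nisplus')
    echo "publickey entry: ok"
    ;;
  *)
    echo "publickey entry: may allow NIS/NIS+ lookups to block"
    ;;
esac
```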

xserver Resource Cannot be Configured as a Failover Resource (4836248)

Do not configure the xserver resource as a failover resource. The Sun Cluster HA for SAP liveCache data service does not fail over properly when the xserver is configured as a failover resource.

Missing Localized Message Catalogs

The localized message catalogs for the following agents are not included in Sun Cluster 3.1 Data Services 5/03:

Patches and Required Firmware Levels

This section provides information about patches for Sun Cluster configuration.


Note –

You must be a registered SunSolve™ user to view and download the required patches for the Sun Cluster product. If you do not have a SunSolve account, contact your Sun service representative or sales engineer, or register online at http://sunsolve.sun.com.


PatchPro

PatchPro is a patch-management tool designed to ease the selection and download of patches required for installation or maintenance of Sun Cluster software. PatchPro provides a Sun Cluster-specific Interactive Mode tool to make the installation of patches easier and an Expert Mode tool to maintain your configuration with the latest set of patches. Expert Mode is especially useful for those who want to get all of the latest patches, not just the high availability and security patches.

To access the PatchPro tool for Sun Cluster software, go to http://www.sun.com/PatchPro/, click Sun Cluster, then choose either Interactive Mode or Expert Mode. Follow the instructions in the PatchPro tool to describe your cluster configuration and download the patches.

SunSolve Online

The SunSolve™ Online Web site provides 24-hour access to the most up-to-date information regarding patches, software, and firmware for Sun products. Access the SunSolve Online site at http://sunsolve.sun.com for the most current matrixes of supported software, firmware, and patch revisions.

You can find Sun Cluster 3.1 patch information by using the Info Docs. To view Info Docs, log on to SunSolve and access the Simple Search selection from the top of the main page. From the Simple Search page, click on Info Docs and type Sun Cluster 3.1 in the search criteria box. This will bring up the Info Docs page for Sun Cluster 3.1 software.

Before you install Sun Cluster 3.1 software and apply patches to a cluster component (Solaris operating environment, Sun Cluster software, volume manager or data services software, or disk hardware), review the Info Docs and any README files that accompany the patches. All cluster nodes must have the same patch level for proper cluster operation.

For specific patch procedures and tips on administering patches, see the Sun Cluster 3.1 System Administration Guide.

End-of-Feature-Support Statements

HAStorage

HAStorage might not be supported in a future release of Sun Cluster software. Near-equivalent functionality is supported by HAStoragePlus. To upgrade from HAStorage to HAStoragePlus when you use cluster file systems or device groups, see “Upgrading from HAStorage to HAStoragePlus” in Sun Cluster 3.1 Data Service Planning and Administration Guide.

Sun Cluster 3.1 Data Services 5/03 Software Localization

The following localization packages are available on the Data Services CD-ROM. When you install or upgrade to Sun Cluster 3.1, the localization packages will be automatically installed for the data services you have selected.

| Language | Package Name | Package Description |
|---|---|---|
| French | SUNWfscapc | French Sun Cluster Apache Web Server Component |
| French | SUNWfscbv | French Sun Cluster BV Server Component |
| French | SUNWfscdns | French Sun Cluster Domain Name Server Component |
| French | SUNWfschtt | French Sun Cluster iPlanet Web Server Component |
| French | SUNWfsclc | French Sun Cluster resource type for SAP liveCache |
| French | SUNWfscnb | French Sun Cluster resource type for netbackup_master server |
| French | SUNWfscnfs | French Sun Cluster NFS Server Component |
| French | SUNWfscnsl | French Sun Cluster Netscape Directory Server Component |
| French | SUNWfscor | French Sun Cluster HA Oracle data service |
| French | SUNWfscsap | French Sun Cluster SAP R/3 Component |
| Japanese | SUNWjscapc | Japanese Sun Cluster Apache Web Server Component |
| Japanese | SUNWjscbv | Japanese Sun Cluster BV Server Component |
| Japanese | SUNWjscdns | Japanese Sun Cluster Domain Name Server Component |
| Japanese | SUNWjschtt | Japanese Sun Cluster iPlanet Web Server Component |
| Japanese | SUNWjsclc | Japanese Sun Cluster resource type for SAP liveCache |
| Japanese | SUNWjscnb | Japanese Sun Cluster resource type for netbackup_master server |
| Japanese | SUNWjscnfs | Japanese Sun Cluster NFS Server Component |
| Japanese | SUNWjscnsl | Japanese Sun Cluster Netscape Directory Server Component |
| Japanese | SUNWjscor | Japanese Sun Cluster HA Oracle data service |
| Japanese | SUNWjscsap | Japanese Sun Cluster SAP R/3 Component |
| Japanese | SUNWjscsbl | Japanese Sun Cluster resource types for Siebel gateway and Siebel server |

Sun Cluster 3.1 Data Services 5/03 Documentation

The complete Sun Cluster 3.1 Data Services 5/03 user documentation set is available in PDF and HTML format on the Sun Cluster Agents CD-ROM. AnswerBook2™ server software is not needed to read Sun Cluster 3.1 documentation. See the index.html file at the top level of either CD-ROM for more information. This index.html file enables you to read the PDF and HTML manuals directly from the disc and to access instructions to install the documentation packages.


Note –

The SUNWsdocs package must be installed before you install any Sun Cluster documentation packages. You can use pkgadd to install the SUNWsdocs package from either the SunCluster_3.1/Sol_N/Packages/ directory of the Sun Cluster CD-ROM or from the components/SunCluster_Docs_3.1/Sol_N/Packages/ directory of the Sun Cluster Agents CD-ROM, where N is either 8 for Solaris 8 or 9 for Solaris 9. The SUNWsdocs package is also automatically installed when you run the installer from the Solaris 9 Documentation CD.
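For example, the pkgadd step might look like the following on Solaris 9. The CD-ROM mount point shown is illustrative; substitute the actual mount point of the Sun Cluster Agents CD-ROM on your system.

```
# cd /cdrom/cdrom0/components/SunCluster_Docs_3.1/Sol_9/Packages
# pkgadd -d . SUNWsdocs
```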


The Sun Cluster 3.1 documentation set consists of the following collections:

In addition, the docs.sun.com℠ Web site enables you to access Sun Cluster documentation on the Web. You can browse the docs.sun.com archive or search for a specific book title or subject at the following Web site:

http://docs.sun.com

Documentation Issues

This section discusses known errors or omissions for documentation, online help, or man pages and steps to correct these problems.

Sun Cluster 3.1 Data Service 5/03 for Oracle

This section discusses errors and omissions from Sun Cluster 3.1 Data Service for Oracle.

Sun Cluster HA for Oracle Packages

The introductory paragraph to “Installing Sun Cluster HA for Oracle Packages” in Sun Cluster 3.1 Data Service for Oracle does not discuss the additional package that is needed for clusters that run Sun Cluster HA for Oracle with 64-bit Oracle. The following section corrects that introductory paragraph.

Installing Sun Cluster HA for Oracle Packages

Depending on your configuration, use the scinstall(1M) utility to install one or both of the following packages on your cluster. Do not use the -s option to non-interactive scinstall to install all of the data service packages.


Note –

SUNWscor is the prerequisite package for SUNWscorx.


If you installed the SUNWscor data service package as part of your initial Sun Cluster installation, proceed to “Registering and Configuring Sun Cluster HA for Oracle” on page 30. Otherwise, use the procedure documented in Sun Cluster 3.1 Data Service Planning and Administration Guide.

Sun Cluster 3.1 Data Service for Oracle Parallel Server/Real Application Clusters

This section discusses errors and omissions from Sun Cluster 3.1 Data Service for Oracle Parallel Server/Real Application Clusters.

Pre-Installation Considerations

Pre-installation considerations for using Oracle Parallel Server/Real Application Clusters with the cluster file system are missing from “Overview” in Sun Cluster 3.1 Data Service for Oracle Parallel Server/Real Application Clusters.

Oracle Parallel Server/Real Application Clusters is a scalable application that can run on more than one node concurrently. You can store all of the files that are associated with this application on the cluster file system, namely:

For optimum I/O performance during the writing of redo logs, ensure that the following items are located on the same node:

For other pre-installation considerations that apply to Sun Cluster Support for Oracle Parallel Server/Real Application Clusters, see “Overview” in Sun Cluster 3.1 Data Service for Oracle Parallel Server/Real Application Clusters.

How to Use the Cluster File System

Information on how to use the cluster file system with Oracle Parallel Server/Real Application Clusters is missing from “Installing Volume Management Software With Sun Cluster Support for Oracle Parallel Server/Real Application Clusters” in Sun Cluster 3.1 Data Service for Oracle Parallel Server/Real Application Clusters.

To use the cluster file system with Oracle Parallel Server/Real Application Clusters, create and mount the cluster file system as explained in “Configuring the Cluster” in Sun Cluster 3.1 5/03 Software Installation Guide. When you add an entry to the /etc/vfstab file for the mount point, set the UNIX file system (UFS) mount options for the various types of Oracle files as shown in the following table.

Table 1–3 UFS File System Specific Options for Oracle Files

| File Type | Options |
|---|---|
| RDBMS data files, log files, control files | global, logging, forcedirectio |
| Oracle binary files, configuration files | global, logging |
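For example, /etc/vfstab entries that use these options might look like the following. The device paths and mount points are placeholders for your configuration.

```
/dev/md/oradg/dsk/d10 /dev/md/oradg/rdsk/d10 /global/oracle/data ufs 2 no global,logging,forcedirectio
/dev/md/oradg/dsk/d20 /dev/md/oradg/rdsk/d20 /global/oracle/bin ufs 2 no global,logging
```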

How to Install Sun Cluster Support for Oracle Parallel Server/Real Application Clusters Packages With the Cluster File System

Information on how to install Sun Cluster Support for Oracle Parallel Server/Real Application Clusters packages with the cluster file system is missing from “Installing Volume Management Software With Sun Cluster Support for Oracle Parallel Server/Real Application Clusters” in Sun Cluster 3.1 Data Service for Oracle Parallel Server/Real Application Clusters.

To complete this procedure, you need the Sun Cluster CD-ROM. Perform this procedure on all of the cluster nodes that can run Sun Cluster Support for Oracle Parallel Server/Real Application Clusters.


Note –

Due to the preparation that is required prior to installation, the scinstall(1M) utility does not support automatic installation of the data service packages.


  1. Load the Sun Cluster CD-ROM into the CD-ROM drive.

  2. Become superuser.

  3. Change the current working directory to the directory that contains the packages for the version of the Solaris operating environment that you are using.

    • If you are using Solaris 8, run the following command:


      # cd /cdrom/suncluster_3_1/SunCluster_3.1/Sol_8/Packages
      
    • If you are using Solaris 9, run the following command:


      # cd /cdrom/suncluster_3_1/SunCluster_3.1/Sol_9/Packages
      
  4. On each node of the cluster, transfer the contents of the required software packages from the CD-ROM to the node.


    # pkgadd -d . SUNWscucm SUNWudlm SUNWudlmr
    

Caution –

Before you reboot the nodes, you must ensure that you have correctly installed and configured the Oracle UDLM software. For more information, see “Installing the Oracle Software” in Sun Cluster 3.1 Data Service for Oracle Parallel Server/Real Application Clusters.


Where to Go From Here

Go to “Installing the Oracle Software” in Sun Cluster 3.1 Data Service for Oracle Parallel Server/Real Application Clusters to install the Oracle UDLM and Oracle RDBMS software.

Using the Sun Cluster LogicalHostname Resource With Oracle Parallel Server/Real Application Clusters

Information on using the Sun Cluster LogicalHostname resource with Oracle Parallel Server/Real Application Clusters is missing from Sun Cluster 3.1 Data Service for Oracle Parallel Server/Real Application Clusters.

If a cluster node that is running an instance of Oracle Parallel Server/Real Application Clusters fails, an operation that a client application attempted might have to time out before the operation is retried on another instance. If the TCP/IP network timeout is high, the client application might take a long time to detect the failure. Typically, client applications take between three and nine minutes to detect such failures.

In such situations, client applications may use the Sun Cluster LogicalHostname resource for connecting to an Oracle Parallel Server/Real Application Clusters database that is running on Sun Cluster. You can configure the LogicalHostname resource in a separate resource group that is mastered on the nodes on which Oracle Parallel Server/Real Application Clusters is running. If a node fails, the LogicalHostname resource fails over to another surviving node on which Oracle Parallel Server/Real Application Clusters is running. The failover of the LogicalHostname resource enables new connections to be directed to the other instance of Oracle Parallel Server/Real Application Clusters.


Caution –

Before using the LogicalHostname resource for this purpose, consider the effect on existing user connections of failover or failback of the LogicalHostname resource.
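Such a configuration might be created along the following lines. This is a hedged sketch: lh-rg, node1,node2, and oracle-lh are placeholder names, and you should confirm the options against the scrgadm(1M) and scswitch(1M) man pages on your system.

```
# scrgadm -a -g lh-rg -h node1,node2
# scrgadm -a -L -g lh-rg -l oracle-lh
# scswitch -Z -g lh-rg
```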


Sun Cluster 3.1 Data Service 5/03 for Sun ONE Directory Server and Sun ONE Web Server

This section discusses errors and omissions from Sun Cluster 3.1 Data Service for Sun ONE Directory Server and Sun Cluster 3.1 Data Service for Sun ONE Web Server.

Name Change for iPlanet Web Server and for iPlanet Directory Server

The names for iPlanet Web Server and iPlanet Directory Server have been changed. The new names are Sun ONE Web Server and Sun ONE Directory Server. The data service names are now Sun Cluster HA for Sun ONE Web Server and Sun Cluster HA for Sun ONE Directory Server.

The application name on the Sun Cluster Agents CD-ROM might still be iPlanet Web Server and iPlanet Directory Server.

Sun Cluster 3.1 Data Service 5/03 for Siebel

This section discusses errors and omissions from the Sun Cluster 3.1 Data Service for Siebel.

Scalable Sun ONE Web Server Is Not Supported with HA-Siebel

In the “Planning the Sun Cluster HA for Siebel Installation and Configuration” section, the configuration restrictions should state that the scalable Sun ONE Web Server (iWS) cannot be used with Sun Cluster HA for Siebel. You must configure iWS as a failover data service.

Sun Cluster 3.1 Data Service 5/03 for SAP liveCache

This section discusses errors and omissions from the Sun Cluster 3.1 Data Service for SAP liveCache.

The “Registering and Configuring Sun Cluster HA for SAP liveCache” section should state that the SAP xserver can be configured only as a scalable resource. Configuring the SAP xserver as a failover resource prevents the SAP liveCache resource from failing over. Ignore all references to configuring the SAP xserver resource as a failover resource in Sun Cluster 3.1 Data Service for SAP liveCache.

Sun Cluster 3.1 Data Service 5/03 for WebLogic Server

This section discusses errors and omissions from the Sun Cluster 3.1 Data Service for WebLogic Server.

The “Protection of BEA WebLogic Server Component” table should state that the BEA WebLogic Server database can be any database that is supported by BEA WebLogic Server and supported on Sun Cluster. The table should also state that the HTTP servers can be any HTTP server that is supported by BEA WebLogic Server and supported on Sun Cluster.

Man Pages

SUNW.sap_ci(5)

SUNW.sap_as(5)

rg_properties(5)

The following new resource group property should be added to the rg_properties(5) man page.

Auto_start_on_new_cluster

This property controls whether the Resource Group Manager starts the resource group automatically when a new cluster is forming.

The default is TRUE. If this property is set to TRUE, the Resource Group Manager attempts to start the resource group automatically to achieve Desired_primaries when all nodes of the cluster are rebooted simultaneously. If this property is set to FALSE, the resource group does not start automatically when the cluster is rebooted.
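Because Auto_start_on_new_cluster is a resource group property, it can be set with the standard -y option of scrgadm. In this sketch, my-rg is a placeholder resource group name.

```
# scrgadm -c -g my-rg -y Auto_start_on_new_cluster=FALSE
```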

SUNW.wls(5)

There is an error in the See Also section of this man page. Instead of referencing the Sun Cluster 3.1 Data Services Installation and Configuration Guide, you should reference the Sun Cluster 3.1 Data Service for WebLogic Server.