Oracle Solaris Cluster Geographic Edition 3.3 5/11 Release Notes

Known Issues and Bugs

The following known issues and bugs affect the operation of the Oracle Solaris Cluster Geographic Edition 3.3 5/11 release.

Depending on the Order in Which Zone Cluster Setup and Geographic Edition Software Installation Is Done, Some Files Might Not Get Copied to the Zone (6968002)

Problem Summary: If you install the Geographic Edition software after a zone cluster has been created, and while the cluster is booted in noncluster mode, not all of the files that Geographic Edition needs in order to work in zone clusters are copied to the zone-cluster nodes.

Workaround: Perform the following task on each node of the zone cluster:

  1. Copy the following files from the /etc/cluster/geocmass/ directory to the /etc/cacao/instances/default/modules/ directory.

    com.sun.cluster.agent.cluster.xml 
    com.sun.cluster.agent.config_access.xml 
    com.sun.cluster.agent.event.xml 
    com.sun.cluster.agent.failovercontrol.xml 
    com.sun.cluster.agent.logquery.xml 
    com.sun.cluster.agent.node.xml 
    com.sun.cluster.agent.rgm.xml 
    com.sun.cluster.agent.devicegroup.xml 
  2. Restart the common agent container.

    # /usr/sbin/cacaoadm restart
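The copy step above can be sketched as a small POSIX shell helper. This is a sketch only: the copy_geo_modules function name and its argument handling are illustrative conveniences, not part of the product tooling, and the file list and directories are taken from the workaround as written.

```shell
#!/bin/sh
# Illustrative helper for the 6968002 workaround: copy the Geographic
# Edition module descriptor files into the common agent container's
# modules directory. Source and destination are passed as arguments.
copy_geo_modules() {
    src=$1
    dst=$2
    for f in com.sun.cluster.agent.cluster.xml \
             com.sun.cluster.agent.config_access.xml \
             com.sun.cluster.agent.event.xml \
             com.sun.cluster.agent.failovercontrol.xml \
             com.sun.cluster.agent.logquery.xml \
             com.sun.cluster.agent.node.xml \
             com.sun.cluster.agent.rgm.xml \
             com.sun.cluster.agent.devicegroup.xml
    do
        # Stop and report failure if any expected file is missing.
        cp "$src/$f" "$dst/" || { echo "failed to copy $src/$f" >&2; return 1; }
    done
}

# On a real zone-cluster node, run as root:
#   copy_geo_modules /etc/cluster/geocmass /etc/cacao/instances/default/modules
#   /usr/sbin/cacaoadm restart
```

Remember to run the helper, followed by the cacaoadm restart, on each node of the zone cluster.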

geo-failovercontrol Resource Fails to Start on Second Node if Connection Lost to Primary Node (6932775)

Problem Summary: When many protection groups are configured, failure of the primary node of a cluster might cause the startup of the Geographic Edition infrastructure to time out while starting on another node.

Workaround: Increase the START_TIMEOUT property of the geo-failovercontrol resource from its default of 600 seconds. A larger timeout enables the geo-infrastructure resource group to fail over successfully. The value that is required depends on the number of protection groups that are configured and might need to be calculated accordingly.
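As a sketch of the workaround, the property can be changed with the standard Oracle Solaris Cluster clresource command; the value 1200 below is only an example starting point, not a recommendation, and should be sized to the number of configured protection groups.

```shell
# Example only: double the start timeout on the geo-failovercontrol resource.
# 1200 is an illustrative value; choose one that fits your configuration.
# /usr/cluster/bin/clresource set -p START_TIMEOUT=1200 geo-failovercontrol
```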