
Documentation Issues

This section discusses errors and omissions in the documentation for the Oracle Solaris Cluster and Geographic Edition 4.1 release.

Upgrade Guide

In multiple chapters, the syntax for the scinstall -u update command is missing the option to specify license information, when needed. The full command syntax is the following:

# scinstall -u update [-b bename] [-L accept,licenses]

For more information about the -L option, see the scinstall(1M) man page.
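For example, the following invocation (with a hypothetical boot-environment name sc41be) upgrades into a new boot environment and accepts the licenses of the packages being installed:

# scinstall -u update -b sc41be -L accept,licenses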

HA for Oracle Guide

HA for Oracle RAC Guide

The procedure How to Set the Necessary Privileges for Oracle RAC Software in a Zone Cluster in Chapter 1 contains incorrect information. The correct procedure is as follows:

  1. Become superuser on the global cluster node that hosts the zone cluster.

  2. Configure the limitpriv property by using the clzonecluster command (a verification sketch follows this procedure).

    # clzonecluster configure zcname
    clzonecluster:zcname> set limitpriv="default,proc_priocntl,proc_clock_highres"
    clzonecluster:zcname> commit
  3. Beginning with Oracle RAC version 11g release 2, prevent Oracle Clusterware time synchronization from running in active mode.

    1. Log in to the zone-cluster node as root.

    2. Create an empty /etc/inet/ntp.conf file.

      # touch /etc/inet/ntp.conf
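To confirm the resulting privilege set, a check along the following lines can be used. This is a sketch: zcname again stands for the zone-cluster name, and the info subcommand is part of the zonecfg-style command set that clzonecluster configure uses.

# clzonecluster configure zcname
clzonecluster:zcname> info limitpriv
limitpriv: default,proc_priocntl,proc_clock_highres
clzonecluster:zcname> exit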

HA for Oracle Solaris Zones Guide

The following instruction is missing from How to Install a Zone and Perform the Initial Internal Zone Configuration in Oracle Solaris Cluster Data Service for Oracle Solaris Zones Guide. Perform this step immediately after Step 6b:

For example:

phys-schost-2# zoneadm list -cv
…
   1 myzone1       running    /zones/myzone1   solaris  shared
…

phys-schost-2# zlogin myzone1 beadm list -H
solaris;4391e8aa-b8d2-6da9-a5aa-d8b3e6ed6d9b;NR;/;606941184;static;1342165571

phys-schost-2# zfs set org.opensolaris.libbe:parentbe=8fe53702-16c3-eb21-ed85-d19af92c6bbd \
rpool/zones/myzone1/rpool/ROOT/solaris
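To verify that the property was applied, a zfs get query against the same dataset can be used; this sketch reuses the dataset name from the example above:

phys-schost-2# zfs get org.opensolaris.libbe:parentbe rpool/zones/myzone1/rpool/ROOT/solaris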

Solaris Volume Manager

Oracle Solaris Cluster 4.1 software supports Solaris Volume Manager software. The Oracle Solaris 11 documentation set does not include a manual for Solaris Volume Manager software. However, you can still use the Solaris Volume Manager Administration Guide from the Oracle Solaris 10 9/10 release, which is valid with the Oracle Solaris Cluster 4.1 release.

Geographic Edition Data Replication Guide for Oracle Solaris Availability Suite

The following instruction is missing from the procedure How to Add an Application Resource Group to an Availability Suite Protection Group:

If the application resource group that you add is configured with a raw-disk device group, that device group must be specified in the resource group configuration by its data volume, rather than by its device group name. This ensures that the resource remains monitored after the application resource group is added to a protection group.

For example, if the device group rawdg has a corresponding data volume of /dev/global/rdsk/d1s0, you must set the GlobalDevicePaths property of the application resource group (apprg1 in this example) to the data volume, as follows:

# clresourcegroup set -p GlobalDevicePaths=/dev/global/rdsk/d1s0 apprg1
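To confirm the new value afterward, a query such as the following can be used (apprg1 is the example resource-group name from above):

# clresourcegroup show -p GlobalDevicePaths apprg1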

Man Pages

This section discusses errors, omissions, and additions in the following Oracle Solaris Cluster man pages:

clzonecluster(1CL)

ORCL.sapcenter(5)

ORCL.saprepenq(5)

In the Description section, the seventh bullet point must read as follows:

The resource group weak positive affinities must ensure that the SAP central service resource group fails over to the node where the SAP replicated enqueue resource group is online. If an ORCL.saprepenq_preempt resource is not configured, it must be implemented by strong negative affinities such that the replicated enqueue server resource group is off-loaded from the failover target node before the SAP central service resource group is started.

ORCL.saprepenq_preempt(5)

In the Description section, the eighth bullet point must read as follows:

The resource group weak positive affinities must ensure that the SAP central service resource group fails over to the node where the SAP replicated enqueue resource group is online. If an ORCL.saprepenq_preempt resource is not configured, it must be implemented by strong negative affinities such that the replicated enqueue server resource group is off-loaded from the failover target node before the SAP central service resource group is started. If the replicated enqueue preempter resource is configured, it is the task of this resource to off-load the replicated enqueue server resource group to a spare node after the enqueue tables are copied.
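As a sketch of how such affinities are declared (the resource-group names central-svc-rg and repenq-rg are hypothetical), a weak positive affinity uses the + operator and a strong negative affinity uses the -- operator of the RG_affinities property:

# clresourcegroup set -p RG_affinities=+repenq-rg central-svc-rg
# clresourcegroup set -p RG_affinities=--central-svc-rg repenq-rg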

ORCL.sapstartsrv(5)

scdpmd.conf(4)

The minimum value for the Ping_interval property is incorrect. The correct minimum value is 20, not 60.
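For example, a corresponding entry in the daemon's configuration file would look as follows; this is a sketch that assumes the property=value entry format described in the scdpmd.conf(4) man page:

Ping_interval=20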

scha_check_app_user(1HA)

The use of “effective user ID” in this man page is incorrect. The correct term in all places is “real user ID”. For information about the distinction between a real user ID and an effective user ID, see the setuid(2) man page.
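The distinction can be observed from a shell: id -u reports the effective user ID, while id -ru reports the real user ID. The two values differ, for example, while a set-user-ID program is running.

# id -u
# id -ru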

SUNW.HAStoragePlus(5)

In the description for the RebootOnFailure property, the second paragraph is incorrect. The correct paragraph is the following:

If RebootOnFailure is set to TRUE and at least one device is found available for each entity specified in the GlobalDevicePaths, FileSystemMountPoints, or Zpools property, the local system is rebooted. The local system refers to the global-cluster node or the zone-cluster node where the resource is online.
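For reference, the property can be set on an existing resource with the clresource command; hasp-rs is a hypothetical HAStoragePlus resource name:

# clresource set -p RebootOnFailure=TRUE hasp-rs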

SUNW.ScalDeviceGroup(5)

In the description for the RebootOnFailure property, the second paragraph is incorrect. The correct paragraph is the following:

If RebootOnFailure is set to TRUE and at least one device is found available for each entity specified in the GlobalDevicePaths, FileSystemMountPoints, or Zpools property, the local system is rebooted. The local system refers to the global-cluster node or the zone-cluster node where the resource is online.

SUNW.ScalMountPoint(5)

In the description for the RebootOnFailure property, the second paragraph is incorrect. The correct paragraph is the following:

If RebootOnFailure is set to TRUE and at least one device is found available for each entity specified in the GlobalDevicePaths, FileSystemMountPoints, or Zpools property, the local system is rebooted. The local system refers to the global-cluster node or the zone-cluster node where the resource is online.