Oracle Solaris Cluster 4.0 Release Notes

Compatibility Issues

This section describes compatibility issues between the Oracle Solaris Cluster software and other products, as of the initial release. Contact Oracle support services to learn whether a code fix has become available.

Oracle Clusterware Fails to Create All SIDs for ora.asm Resource (12680224)

Problem Summary: When you create an Oracle Solaris Cluster resource for an Oracle ASM instance, the clsetup utility reports one of the following error messages: ORACLE_SID (+ASM2) does not match the Oracle ASM configuration ORACLE_SID () within CRS or ERROR: Oracle ASM is either not installed or the installation is invalid! This occurs because, after Oracle Grid Infrastructure is installed, the GEN_USR_ORA_INST_NAME@SERVERNAME value of the ora.asm resource does not contain all the Oracle ASM SIDs that are running on the cluster.

Workaround: Use the crsctl command to add the missing SIDs to the ora.asm resource.

# crsctl modify res ora.asm \
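After modifying the resource, you can confirm that the attribute now lists every Oracle ASM SID on the cluster. This is a verification sketch; the attribute name comes from the problem summary above, and the exact output format depends on your Oracle Clusterware version.

```shell
# Print the ora.asm resource attributes and check the per-server
# instance-name entries (the SID values in your output will differ).
crsctl status resource ora.asm -p | grep GEN_USR_ORA_INST_NAME
```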

IP Addresses on a Failed IP Interface Can No Longer Be Used Locally (7099852)

Problem Summary: This problem affects data services that use the connect() call to probe the health of the application through its logical hostname IP address. The behavior of the connect() call changed between the Oracle Solaris 10 and Oracle Solaris 11 releases: in a cluster-wide network outage, the connect() call now fails if the IPMP interface on which the logical hostname IP is plumbed goes down. This causes the agent probe to fail if the network outage lasts longer than the probe_timeout, which eventually brings the resource and its associated resource group to the offline state.

Workaround: Configure the application to listen on localhost:port to ensure that the monitoring program does not fail the resource in a public-network outage scenario.

Zone Does Not Boot if pkg:/system/resource-mgmt/resource-cap Is Not Installed and capped-memory Is Configured (7087700)

Problem Summary: If the pkg:/system/resource-mgmt/resource-cap package is not installed and a zone is configured with the capped-memory resource control, the zone fails to boot. Output is similar to the following:

zone 'zone-1': enabling system/rcap service failed: entity not found 
zoneadm: zone 'zone-1': call to zoneadmd failed

Workaround: Install pkg:/system/resource-mgmt/resource-cap into the global zone. Once the resource-cap package is installed, the zone can boot.
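The workaround amounts to one package installation followed by a zone boot. A minimal sketch, using the zone-1 name from the example error output above:

```shell
# Install the resource-cap package into the global zone.
pkg install pkg:/system/resource-mgmt/resource-cap

# The zone configured with capped-memory can now boot.
zoneadm -z zone-1 boot
```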

DID Disk Add to Solaris Zone Is Not Accepting Wild Card for *dsk (7081090)

Problem Summary: In the zonecfg utility, if you add a DID disk to a non-global zone by using a wildcard (*) match instead of specifying the device paths, the addition fails.

Workaround: Specify the raw device paths and block device paths explicitly. The following example adds the d5 DID device:

root@phys-cluster-1:~# zonecfg -z foo 
zonecfg:foo> add device 
zonecfg:foo:device> set match=/dev/did/dsk/d5s* 
zonecfg:foo:device> end 
zonecfg:foo> add device 
zonecfg:foo:device> set match=/dev/did/rdsk/d5s* 
zonecfg:foo:device> end 
zonecfg:foo> exit
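After committing the configuration, you can verify that the device nodes are visible inside the zone. A brief verification sketch, reusing the foo zone and d5 device from the example above:

```shell
# Boot the zone with the newly matched devices.
zoneadm -z foo boot

# List the DID device nodes from inside the zone.
zlogin foo ls /dev/did/dsk /dev/did/rdsk
```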