Oracle Solaris Cluster 3.3 5/11 Release Notes
Compatibility Issues

This section contains the following information about Oracle Solaris Cluster compatibility issues with other products:
Node Fails To Start Oracle Clusterware After a Panic (uadmin 5 1) Fault Injection (11828322)
Need Support for Clusterized fcntl by Oracle ACFS (11814449)
Unable to Start Oracle ACFS in Presence of Oracle ASM in a Non-Global Zone (11707611)
SAP startsap Fails to Start the Application Instance if startsrv Is Not Running (7028069)
Problem Using Sun ZFS Storage Appliance as Quorum Device Through Fibre Channel or iSCSI (6966970)
See also the following information:
Additional Oracle Solaris Cluster framework compatibility issues are documented in Chapter 1, Planning the Oracle Solaris Cluster Configuration, in Oracle Solaris Cluster Software Installation Guide.
Additional Oracle Solaris Cluster upgrade compatibility issues are documented in Upgrade Requirements and Software Support Guidelines in Oracle Solaris Cluster Upgrade Guide.
For other known problems or restrictions, see Known Issues and Bugs.
Problem Summary: This problem occurs when calling rename(2) to rename a subdirectory in an Oracle ACFS file system to its parent directory, where the parent directory is a subdirectory under the Oracle ACFS file-system mount point. An example would be an Oracle ACFS file system mounted at /xxx, with a directory called /xxx/dir1 and a child directory called /xxx/dir1/dir2. Calling rename(2) with /xxx/dir1/dir2 and /xxx/dir1 as the arguments produces the error.
Workaround: None. Do not rename an Oracle ACFS directory to the name of its parent directory.
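For illustration, the failing call can be reproduced from a shell by invoking rename(2) directly. This sketch assumes an Oracle ACFS file system mounted at /xxx; perl is used because the mv command does not issue this exact rename(2) call.

phys-schost# mkdir -p /xxx/dir1/dir2
phys-schost# perl -e 'rename "/xxx/dir1/dir2", "/xxx/dir1" or die "rename: $!\n"'

The second command fails with an error on Oracle ACFS.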
Node Fails To Start Oracle Clusterware After a Panic (uadmin 5 1) Fault Injection (11828322)

Problem Summary: This problem occurs on a two-node Oracle Solaris Cluster configuration that runs a single-instance Oracle Database on clustered Oracle ASM with DB_HOME on Oracle ACFS. After a panic fault on one of the nodes, the node boots up but the CRS start fails.
# crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
# crsctl start crs
CRS-4640: Oracle High Availability Services is already active
CRS-4000: Command Start failed, or completed with errors.
Workaround: Reboot the node a second time.
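After the second reboot, you can verify that CRS started. On a healthy node, the output is similar to the following; the exact message codes can vary by Grid Infrastructure version.

phys-schost# /u01/app/11.2.0/grid/bin/crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online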
Need Support for Clusterized fcntl by Oracle ACFS (11814449)

Problem Summary: Oracle ACFS in Oracle 11g release 2 Grid Infrastructure provides only node-local fcntl locking. In an Oracle Solaris Cluster configuration, an application that is configured as a scalable application can be active on more than one cluster node and might issue write requests to the underlying file system from multiple nodes at the same time. Depending on its implementation, an application that depends on clusterized fcntl() therefore cannot be configured as a scalable resource. To support scalable applications on Oracle ACFS in an Oracle Solaris Cluster configuration, Oracle ACFS must support clusterized fcntl.
Workaround: There is no workaround at this time. Do not configure scalable applications on Oracle ACFS in an Oracle Solaris Cluster configuration.
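If the application must still run in the cluster, one option is to configure it as a failover service, which runs at most one active instance at a time and therefore avoids concurrent fcntl() writers on Oracle ACFS. A minimal sketch, using a hypothetical resource group name app-rg:

phys-schost# clresourcegroup create -p Maximum_primaries=1 -p Desired_primaries=1 app-rg
phys-schost# clresourcegroup online -M app-rg

Whether a failover configuration is appropriate depends on the application's availability and throughput requirements.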
Unable to Start Oracle ACFS in Presence of Oracle ASM in a Non-Global Zone (11707611)

Problem Summary: This problem occurs when a configuration with Oracle 11g release 2 Grid Infrastructure runs in the global zone and Oracle 10g release 2 ASM runs in a non-global zone. A general-purpose Oracle ACFS file system is created in the global zone with the mount path set to a path under the zone root path of the non-global zone. The Oracle ASM admin user in the global zone is different from the Oracle ASM admin user in the non-global zone, and the user ID of the Oracle ASM admin user in the non-global zone does not exist in the global zone.

After a reboot of the global-cluster node, the attempt to start the Oracle ACFS file system fails with messages similar to the following:
phys-schost# /u01/app/11.2.0/grid/bin/srvctl start filesystem -d /dev/asm/dummy-27 -n phys-schost
PRCR-1013 : Failed to start resource ora.dbhome.dummy.acfs
PRCR-1064 : Failed to start resource ora.dbhome.dummy.acfs on node phys-schost
CRS-5016: Process "/u01/app/11.2.0/grid/bin/acfssinglefsmount" spawned by agent "/u01/app/11.2.0/grid/bin/orarootagent.bin" for action "start" failed: details at "(:CLSN00010:)" in "/u01/app/11.2.0/grid/log/phys-schost/agent/crsd/orarootagent_root/orarootagent_root.log"
CRS-2674: Start of 'ora.dbhome.dummy.acfs' on 'phys-schost' failed
The orarootagent_root.log file has messages similar to the following:
2011-02-01 16:15:53.417: [ora.dbhome.dummy.acfs][8] {2:53487:190} [start] (:CLSN00010:)su: Unknown id: 303
The user ID 303 that is identified as Unknown is the ID for the Oracle ASM admin user in the non-global zone.
Workaround: Use the same user ID for the Oracle ASM admin user in both the global zone and the non-global zone.
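To compare the IDs, run the id command in both zones. The user name oraasm and the zone name zczone are hypothetical examples; the numeric ID 303 is taken from the log message above.

phys-schost# id -a oraasm               # in the global zone
phys-schost# zlogin zczone id -a oraasm # in the non-global zone

If the IDs differ, create or modify the Oracle ASM admin user in the global zone so that it uses the same numeric ID, for example:

phys-schost# useradd -u 303 oraasm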
Problem Summary: Configuring a ScalMountPoint resource for a file system on a Sun ZFS Storage Appliance fails if the file system is not set to inherit its NFS properties from its parent project.

Workaround: Ensure that Inherit from project is selected for the file system when you set up the ScalMountPoint resource. To check this setting, edit the file system in the appliance GUI and navigate to the Protocols tab.
After you configure the ScalMountPoint resource, you can optionally deselect Inherit from project to turn fencing off.
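For reference, a ScalMountPoint resource for an NFS file system from the appliance is created along the following lines. The resource group, resource, appliance host, and path names are hypothetical.

phys-schost# clresourcetype register SUNW.ScalMountPoint
phys-schost# clresource create -g scal-rg -t SUNW.ScalMountPoint \
-p FileSystemType=nas \
-p TargetFileSystem=zfssa-nas:/export/app \
-p MountPointDir=/global/app scal-mp-rs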
SAP startsap Fails to Start the Application Instance if startsrv Is Not Running (7028069)

Problem Summary: In SAP 7.11, the startsap program fails to start the application instance if the startsrv program is not running.

Workaround: Use the following entries in the wrapper script to start the application instance, adapting them to your system information, such as the instance number and SID.
# Start sapstartsrv if it is not already running for this instance.
ps -e -o args | grep sapstartsrv | grep DVEB
if (( ${?} != 0 ))
then
    /usr/sap/FIT/DVEBMGS03/exe/sapstartsrv \
        pf=/usr/sap/FIT/SYS/profile/FIT_DVEBMGS03_lzkosi2c -D
fi
Problem Using Sun ZFS Storage Appliance as Quorum Device Through Fibre Channel or iSCSI (6966970)

Problem Summary: When Oracle's Sun ZFS Storage Appliance (formerly Sun Storage 7000 Unified Storage Systems) is used over Fibre Channel or iSCSI as a quorum device with fencing enabled, Oracle Solaris Cluster uses it as a SCSI quorum device. In such a configuration, certain SCSI actions that the Oracle Solaris Cluster software requests might not be handled correctly. In addition, the cluster reconfiguration's default timeout of 25 seconds for the completion of quorum operations might not be adequate for such a quorum configuration.
If you see messages on the cluster nodes saying that such a Sun ZFS Storage Appliance quorum device is unreachable, or if you see failures of cluster nodes with the message CMM: Unable to acquire the quorum device, there might be a problem with the quorum device or the path to it.
Workaround: Check that both the quorum device and the path to it are functional. If the problem persists, apply Sun ZFS Storage Appliance Firmware release 2010Q3.3 to correct the problem.
If you cannot install this firmware, or you need an interim mitigation of the issue, use one of the following alternatives:
Use a different quorum device.
Remove the quorum device from the configuration, disable fencing for the device, and then configure the device again as a quorum device. The device then uses software quorum; see the command sketch after these steps.
Note - A software-quorum device does not guarantee the same level of protection that SCSI fencing provides. Avoid configuring a data disk as a software-quorum device.
Increase the quorum timeout to a high value, as shown in the following steps.
Note - For Oracle Real Application Clusters (Oracle RAC), do not change the default quorum timeout of 25 seconds. In certain split-brain scenarios, a longer timeout period might lead to the failure of Oracle RAC VIP failover, due to the VIP resource timing out. If the quorum device being used does not conform to the default 25-second timeout, use a different quorum device.
Become superuser.
On each cluster node, edit the /etc/system file to set the timeout to a high value.
The following example sets the timeout to 700 seconds.
phys-schost# vi /etc/system
...
set cl_haci:qd_acquisition_timer=700
From one node, shut down the cluster.
phys-schost-1# cluster shutdown -g0 -y
Boot each node back into the cluster.
Changes to the /etc/system file take effect after the reboot.
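For the software-quorum alternative described earlier in this list, the command sequence is similar to the following sketch, where d3 is a hypothetical shared device:

phys-schost# clquorum remove d3
phys-schost# cldevice set -p default_fencing=nofencing d3
phys-schost# clquorum add d3

After the device is added again, clquorum show should report its quorum type as software quorum rather than SCSI reservations.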
Cluster Zone Won't Boot Up After Live Upgrade on ZFS Root (6955669)

Problem Summary: For a global cluster that uses ZFS for the root file system and that has zone clusters configured, the upgraded boot environment does not boot after you use Live Upgrade to upgrade to Solaris 10 8/10.

Workaround: Contact your Oracle support representative to learn whether a patch or workaround is available.
Oracle Solaris Operating System

The Enhanced Storage module of Solaris Management Console (Solaris Volume Manager) is not compatible with Oracle Solaris Cluster software. Use the command-line interface or Oracle Solaris Cluster utilities to configure Solaris Volume Manager software.
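For example, a Solaris Volume Manager disk set for cluster use is typically created with the command-line utilities; the disk set name dg1 and the DID device d4 here are hypothetical:

phys-schost# metaset -s dg1 -a -h phys-schost-1 phys-schost-2
phys-schost# metaset -s dg1 -a /dev/did/rdsk/d4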