This section discusses errors and omissions in the documentation for the Oracle Solaris Cluster and Geographic Edition 4.1 release.
In multiple chapters, the syntax shown for the scinstall -u update command omits the option that specifies license information, when needed. The full command syntax is as follows:
# scinstall -u update [-b bename] [-L accept,licenses]
For more information about the -L option, see the scinstall(1M) man page.
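For example, a hypothetical invocation that accepts the licenses and names a new boot environment sc41-be (the boot-environment name shown is a placeholder) might look like the following:
# scinstall -u update -b sc41-be -L accept,licenses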
In Setting HA for Oracle Extension Properties in Oracle Solaris Cluster Data Service for Oracle Guide, the list of required extension properties for the Oracle server resource is valid only if Oracle Grid Infrastructure is used. If you are not using Oracle Grid Infrastructure, the following extension properties are also required for the Oracle server resource:
Connect_string
Alert_log_file
This information is also missing from Step 9 of How to Register and Configure HA for Oracle Without Oracle ASM (CLI) in Oracle Solaris Cluster Data Service for Oracle Guide.
For information about the Connect_string and Alert_log_file extension properties, see the SUNW.oracle_server(5) man page.
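As an illustration only, using hypothetical resource, resource group, and path names, these properties might be supplied when the Oracle server resource is created:
# clresource create -g oracle-rg -t SUNW.oracle_server \
-p ORACLE_HOME=/u01/app/oracle/product/dbhome \
-p ORACLE_SID=orcl \
-p Connect_string=hamon/hamonpw \
-p Alert_log_file=/u01/app/oracle/diag/rdbms/orcl/orcl/trace/alert_orcl.log \
oracle-server-rs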
In How to Prepare the Nodes in Oracle Solaris Cluster Data Service for Oracle Guide, Step 7 is corrected and Step 8 is added as follows:
7. If you are using a zone cluster, configure the limitpriv property by using the clzonecluster command.
# clzonecluster configure zcname
clzonecluster:zcname> set limitpriv="default,proc_priocntl,proc_clock_highres"
clzonecluster:zcname> commit
8. On each zone-cluster node, prevent Oracle Clusterware time synchronization from running in active mode.
Log in to the zone-cluster node as root.
Create an empty /etc/inet/ntp.conf file.
# touch /etc/inet/ntp.conf
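After completing these steps, you can, for example, confirm the limitpriv setting from an interactive clzonecluster session (zcname is a placeholder for your zone-cluster name):
# clzonecluster configure zcname
clzonecluster:zcname> info limitpriv
clzonecluster:zcname> exit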
The procedure How to Set the Necessary Privileges for Oracle RAC Software in a Zone Cluster in Chapter 1 contains incorrect information. The correct procedure is as follows:
Become superuser on the global cluster node that hosts the zone cluster.
Configure the limitpriv property by using the clzonecluster command.
# clzonecluster configure zcname
clzonecluster:zcname> set limitpriv="default,proc_priocntl,proc_clock_highres"
clzonecluster:zcname> commit
Beginning with Oracle RAC version 11g release 2, prevent Oracle Clusterware time synchronization from running in active mode.
Log in to the zone-cluster node as root.
Create an empty /etc/inet/ntp.conf file.
# touch /etc/inet/ntp.conf
The following instruction is missing from How to Install a Zone and Perform the Initial Internal Zone Configuration in Oracle Solaris Cluster Data Service for Oracle Solaris Zones Guide. Perform this step immediately after Step 6b:
c. On the node where you updated the new UUID in the boot environment, if other non-global zones of brand type solaris are configured, set the same UUID on the active boot environment for each of those zones.
phys-schost-2# zfs set org.opensolaris.libbe:parentbe=uuid poolname/zonepath/rpool/ROOT/bename
For example:
phys-schost-2# zoneadm list -cv
…
1 myzone1 running /zones/myzone1 solaris shared
…
phys-schost-2# zlogin myzone1 beadm list -H
solaris;4391e8aa-b8d2-6da9-a5aa-d8b3e6ed6d9b;NR;/;606941184;static;1342165571
phys-schost-2# zfs set org.opensolaris.libbe:parentbe=8fe53702-16c3-eb21-ed85-d19af92c6bbd \
rpool/zones/myzone1/rpool/ROOT/solaris
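If you only need the UUID field from the machine-parsable beadm output, a one-line sketch follows (the zone name is hypothetical, and each configured boot environment produces one line of output):
phys-schost-2# zlogin myzone1 beadm list -H | cut -d';' -f2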
Oracle Solaris Cluster 4.1 software supports Solaris Volume Manager software. The Oracle Solaris 11 documentation set does not include a manual for Solaris Volume Manager software. However, you can still use the Solaris Volume Manager Administration Guide from the Oracle Solaris 10 9/10 release, which remains valid for the Oracle Solaris Cluster 4.1 release.
The following instruction is missing from the procedure How to Add an Application Resource Group to an Availability Suite Protection Group:
If the application resource group to add is configured with a raw-disk device group, that device group must be specified in the resource group configuration by its data volume, rather than by its device group name. This ensures that the resource will remain monitored after the application resource group is added to a protection group.
For example, if the device group rawdg has a corresponding data volume of /dev/global/rdsk/d1s0, you must set the GlobalDevicePaths property of the application resource group with the data volume, as follows:
# clresourcegroup set -p GlobalDevicePaths=/dev/global/rdsk/d1s0 rawdg
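You can then, for example, confirm the property value, using the same names as in the example above:
# clresourcegroup show -p GlobalDevicePaths rawdg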
This section discusses errors, omissions, and additions in the following Oracle Solaris Cluster man pages:
The description of the (cluster) ip-type property incorrectly states that the only supported value is shared. Both shared and exclusive ip-type values are supported.
The privnet resource name incorrectly contains a hyphen (priv-net). The correct resource name is privnet.
In the Description section, the seventh bullet point must read as follows:
The resource group weak positive affinities must ensure that the SAP central service resource group fails over to the node where the SAP replicated enqueue resource group is online. If an ORCL.saprepenq_preempt resource is not configured, it must be implemented by strong negative affinities such that the replicated enqueue server resource group is off-loaded from the failover target node before the SAP central service resource group is started.
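As an illustrative sketch only, with hypothetical resource group names scs-rg (SAP central services) and repl-rg (replicated enqueue server), such affinities might be declared as follows, where the + prefix declares a weak positive affinity and the -- prefix a strong negative affinity:
# clresourcegroup set -p RG_affinities=+repl-rg scs-rg
# clresourcegroup set -p RG_affinities=--scs-rg repl-rg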
In Example 1, make the following change:
Change:
-p resource_dependencies=bono-1,db-rs,scs-strt-rs
To:
-p resource_dependencies=db-rs,scs-strt-rs
In the Description section, the seventh bullet point must read as follows:
The resource group weak positive affinities must ensure that the SAP central service resource group fails over to the node where the SAP replicated enqueue resource group is online. If an ORCL.saprepenq_preempt resource is not configured, it must be implemented by strong negative affinities such that the replicated enqueue server resource group is off-loaded from the failover target node before the SAP central service resource group is started.
In the Description section, the eighth bullet point must read as follows:
The resource group weak positive affinities must ensure that the SAP central service resource group fails over to the node where the SAP replicated enqueue resource group is online. If an ORCL.saprepenq_preempt resource is not configured, it must be implemented by strong negative affinities such that the replicated enqueue server resource group is off-loaded from the failover target node before the SAP central service resource group is started. If the replicated enqueue preempter resource is configured, it is the task of this resource to off-load the replicated enqueue server resource group to a spare node after the enqueue tables are copied.
In the Name section, the sentence describing the resource type must read as follows:
resource type implementation for processing sapstartsrv of Oracle Solaris Cluster HA for SAP NetWeaver
In Example 1, make the following change:
Change:
/usr/cluster/bin/clrs create -d -g pas-rg -t sapstartsrv
To:
/usr/cluster/bin/clrs create -d -g scs-rg -t sapstartsrv
The minimum value for the Ping_interval property is incorrect. The value should be 20, not 60.
The use of “effective user ID” in this man page is incorrect. The correct term in all places is “real user ID”. For information about the distinction between a real user ID and an effective user ID, see the setuid(2) man page.
In the description for the RebootOnFailure property, the second paragraph is incorrect. The correct paragraph is the following:
If RebootOnFailure is set to TRUE and at least one device is found available for each entity specified in the GlobalDevicePaths, FileSystemMountPoints, or Zpools property, the local system is rebooted. The local system refers to the global-cluster node or the zone-cluster node where the resource is online.
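For illustration, assuming an existing storage resource named hasp-rs whose resource type supports this property, the behavior might be enabled as follows:
# clresource set -p RebootOnFailure=TRUE hasp-rs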