Oracle Solaris Cluster 4.1 Release Notes
This section contains information about Oracle Solaris Cluster compatibility issues with other products, as of initial release. Contact your Oracle support representative to see whether a fix becomes available.
Problem Summary: IPMP groups in exclusive-IP zone clusters fail to detect link failures. As a result, dependent logical-hostname resources remain online even when the link of the underlying network interface is broken.
Workaround: Enable transitive probing for the IPMP network service or create probe-based IPMP groups in exclusive-IP zone clusters.
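As a sketch of the first workaround, transitive probing is controlled through a property of the Oracle Solaris 11 IPMP service (the property name here follows standard Oracle Solaris 11.1 IPMP administration; verify it against your release before use):

```shell
# Sketch, assuming Oracle Solaris 11.1 IPMP service properties:
# enable transitive probing, then refresh the service so the
# setting takes effect.
svccfg -s svc:/network/ipmp setprop config/transitive-probing=true
svcadm refresh svc:/network/ipmp:default
```

Alternatively, configure probe-based IPMP groups by assigning test addresses to the underlying interfaces.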
Problem Summary: If your Oracle Solaris Cluster HA for Oracle Database or Support for Oracle RAC configuration requires using Oracle ASM with Solaris Volume Manager mirrored logical volumes, you might experience failures of the SUNW.ScalDeviceGroup probe. These failures result in a loss of availability of any service that is dependent on the SUNW.ScalDeviceGroup resource.
Workaround: You can mitigate the failures by increasing the IOTimeout property setting for the SUNW.ScalDeviceGroup resource type. See Article 603825.1 at My Oracle Support for additional information.
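As a hedged sketch, the IOTimeout property can be raised with the clresource command. The resource name and timeout value below are illustrative, not taken from the article; choose a value suited to the latency of your mirrored volumes:

```shell
# Illustrative only: raise IOTimeout on a SUNW.ScalDeviceGroup
# resource. "scaldg-rs" is a hypothetical resource name and 300
# seconds is an example value, not a recommendation.
clresource set -p IOTimeout=300 scaldg-rs
```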
Problem Summary: This problem involves Oracle RAC 11g release 2 configured in a solaris10 brand zone cluster. When the Grid Infrastructure root.sh script is run or when Cluster Ready Services (CRS) is started, the osysmond process might dump core one or more times.
Workaround: Contact Oracle Support to learn whether a patch or workaround is available.
Problem Summary: When creating an Oracle Solaris Cluster resource for an Oracle ASM instance, one of the following error messages might be reported by the clsetup utility:
ORACLE_SID (+ASM2) does not match the Oracle ASM configuration ORACLE_SID () within CRS
ERROR: Oracle ASM is either not installed or the installation is invalid!
This situation occurs because, after Oracle Grid Infrastructure 11g release 2 is installed, the value for GEN_USR_ORA_INST_NAME@SERVERNAME of the ora.asm resource does not contain all the Oracle ASM SIDs that are running on the cluster.
Workaround: Use the crsctl command to add the missing SIDs to the ora.asm resource.
# crsctl modify res ora.asm \
  -attr "GEN_USR_ORA_INST_NAME@SERVERNAME(hostname)"=ASM_SID
Problem Summary: When you install an Oracle Solaris 11 SRU on your cluster before upgrading to Oracle Solaris 11.1, you might receive an error message similar to the following:
WARNING: pkg(5) appears to be out of date, and should be updated before running update. Please update pkg(5) by executing 'pkg install pkg:/package/pkg' as a privileged user and then retry the update.
Workaround: Follow the instructions in the error message.
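The sequence the message describes amounts to the following, run as a privileged user:

```shell
# Update the packaging system itself first, as the warning
# instructs, then retry the update.
pkg install pkg:/package/pkg
pkg update
```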
Problem Summary: The clzonecluster install-cluster command might fail to install a patch on a solaris10 brand zone if Oracle Solaris Cluster patch 145333-15 (SPARC) or 145334-15 (x86) is installed in the zone. For example:
# clzonecluster install-cluster -p patchdir=/var/tmp/patchdir,patchlistfile=plist S10ZC
Installing the patches ...
clzc: (C287410) Failed to execute command on node "zcnode1":
scpatchadm: Logging reports to "/var/cluster/logs/install/scpatchadm.log.123"

The scpatchadm.log.123 file would show the following messages:

scpatchadm: Failed to install the following patches: 123456-01
clzc: (C287410) Failed to execute command on node "zcnode1"
Workaround: Log in to the zone and install the patch by using the patchadd command.
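A sketch of the workaround, assuming the patch directory from the example above is visible from within the zone (the zone name and patch ID are the illustrative values used in the example output, not real identifiers):

```shell
# Sketch: run patchadd directly inside the zone-cluster node via
# zlogin. Assumes /var/tmp/patchdir is accessible from within the
# zone; "zcnode1" and "123456-01" are illustrative.
zlogin zcnode1 patchadd /var/tmp/patchdir/123456-01
```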
Contact your Oracle support representative to learn whether an Oracle Solaris Cluster 3.3 patch becomes available.
Problem Summary: A problem occurs if you delete the IP interface of a network adapter and then re-create it for an IPMP group, as in the following example commands:
# ipadm delete-ip adapter
# ipadm create-ip adapter
# ipadm create-ipmp -i adapter sc_ipmp0
# ipadm create-addr -T static -a local=hostname/24 sc_ipmp0/v4
Soon after the IPMP address is created, the /etc/resolv.conf file disappears and the LDAP service becomes disabled. Even after the service is re-enabled, it remains in the offline state.
Workaround: Before you delete the network adapter with the ipadm delete-ip command, run the svcadm refresh network/location:default command.
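A sketch of the corrected order, with the refresh issued before the interface is deleted (the adapter name net0 is illustrative):

```shell
# Sketch: refresh the location service first, then delete and
# re-create the IP interface. "net0" is a hypothetical adapter name.
svcadm refresh network/location:default
ipadm delete-ip net0
ipadm create-ip net0
```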
The SAP Java stack has a severe problem that affects the failover of dialog instances in an HA for SAP NetWeaver configuration. After an unplanned node outage, such as a panic or power outage, the SAP message server does not accept connections from a dialog instance on a different node until a timeout expires. This leads to the following behavior:
When a node that hosts a failover dialog instance panics or experiences an outage, the dialog instance does not start on the target node on the first try. The dialog instance then does one of the following:
Come online after one or more retries.
Fail back to the original node if that node comes back up early enough.
This behavior occurs only after unplanned outages; an orderly shutdown of a node does not trigger the problem. ABAP and dual-stack configurations are not affected.
Problem Summary: If the package pkg:/system/resource-mgmt/resource-cap is not installed and a zone is configured with capped-memory resource control as part of the configuration, the zone boot fails. Output is similar to the following:
zone 'zone-1': enabling system/rcap service failed: entity not found
zoneadm: zone 'zone-1': call to zoneadmd failed
Workaround: Install pkg:/system/resource-mgmt/resource-cap into the global zone. Once the resource-cap package is installed, the zone can boot.
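The installation step amounts to a single pkg command, run in the global zone:

```shell
# Install the memory-capping package into the global zone so that
# zones configured with the capped-memory resource control can boot.
pkg install pkg:/system/resource-mgmt/resource-cap
```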
At the initial release of Oracle Solaris Cluster 4.1 software, active-active remote replication in a clustered configuration, where both heads replicate data, is not supported with the Sun ZFS Storage Appliance. Contact your Oracle support representative to learn whether a patch or workaround is available.
However, active-passive replication configurations are supported in a clustered configuration.