Oracle Solaris Cluster 4.0 Release Notes
This section describes compatibility issues between Oracle Solaris Cluster and other products, as of the initial release. Contact Oracle support services to find out whether a code fix has become available.
Problem Summary: When you create an Oracle Solaris Cluster resource for an Oracle ASM instance, the clsetup utility reports the error message "ORACLE_SID (+ASM2) does not match the Oracle ASM configuration ORACLE_SID () within CRS" or "ERROR: Oracle ASM is either not installed or the installation is invalid!". This situation occurs because, after Oracle Grid Infrastructure 184.108.40.206 is installed, the GEN_USR_ORA_INST_NAME@SERVERNAME value of the ora.asm resource does not contain all the Oracle ASM SIDs that are running on the cluster.
Workaround: Use the crsctl command to add the missing SIDs to the ora.asm resource.
# crsctl modify res ora.asm \
  -attr "GEN_USR_ORA_INST_NAME@SERVERNAME(hostname)"=ASM_SID
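To confirm that the missing SIDs were added, you can print the resource's attributes. The following example is illustrative and assumes the Oracle Grid Infrastructure binaries are in root's PATH:

```
# crsctl stat res ora.asm -p | grep GEN_USR_ORA_INST_NAME
```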
Problem Summary: This problem affects data services that use the connect() call to probe the health of the application through its logical hostname IP address. In a cluster-wide network outage, the connect() call behaves differently in Oracle Solaris 11 than it did in Oracle Solaris 10: it fails if the IPMP interface on which the logical hostname IP is plumbed goes down. As a result, the agent probe fails if the network outage lasts longer than the probe_timeout, which eventually brings the resource and the associated resource group to the offline state.
Workaround: Configure the application to listen on localhost:port to ensure that the monitoring program does not fail the resource in a public-network outage scenario.
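As a hedged illustration only (not the agent's actual probe code), the sketch below shows a connect()-style check aimed at the loopback address, which is unaffected by the state of the IPMP group hosting the logical hostname IP. The port number is an arbitrary assumption:

```shell
#!/bin/bash
# Illustrative sketch: probe the application on localhost rather than on the
# logical hostname IP. The port (59999) is an example value, not a real default.
PROBE_PORT=59999
# bash's /dev/tcp pseudo-device performs a connect() to the given host/port.
if (exec 3<>"/dev/tcp/localhost/${PROBE_PORT}") 2>/dev/null; then
    RESULT="probe ok"
else
    RESULT="probe failed"
fi
echo "$RESULT"
```

When the application listens on localhost:port, this probe succeeds even while the public network is down, so the monitoring program does not take the resource offline.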
Problem Summary: If the pkg:/system/resource-mgmt/resource-cap package is not installed and a zone is configured with the capped-memory resource control, the zone fails to boot. Output is similar to the following:
zone 'zone-1': enabling system/rcap service failed: entity not found
zoneadm: zone 'zone-1': call to zoneadmd failed
Workaround: Install pkg:/system/resource-mgmt/resource-cap into the global zone. Once the resource-cap package is installed, the zone can boot.
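For example, assuming the default package publisher is reachable, the package can be installed from the global zone with the pkg command (the prompt is illustrative):

```
root@phys-cluster-1:~# pkg install system/resource-mgmt/resource-cap
```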
Problem Summary: When you use the zonecfg utility to add a DID disk to a non-global zone by using a wildcard (*) rather than specifying the device paths explicitly, the addition fails.
Workaround: Specify the raw device paths and block device paths explicitly. The following example adds the d5 DID device:
root@phys-cluster-1:~# zonecfg -z foo
zonecfg:foo> add device
zonecfg:foo:device> set match=/dev/did/dsk/d5s*
zonecfg:foo:device> end
zonecfg:foo> add device
zonecfg:foo:device> set match=/dev/did/rdsk/d5s*
zonecfg:foo:device> end
zonecfg:foo> exit