Problem Summary: When using a ZFS Storage Appliance, a power failure test (powering off all the cluster nodes and then powering them back on) might leave the database offline and the entire application unavailable. After any power cycle, the application might remain unavailable until you manually clear the NFS locks on the ZFS Storage Appliance.
Workaround: For ZFS Storage Appliance storage (NFS file systems), in the ZFS Storage Appliance GUI, go to Maintenance, select Workflows, and then run the Clear Locks workflow, supplying the hostname and IP address of the affected node.
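If you prefer the appliance CLI to the GUI, the same workflow can typically be run over ssh. The following is only a sketch; the workflow number and the clear-locks workflow name vary by appliance, so check the output of list before selecting:

```
# On the ZFS Storage Appliance CLI (ssh as an administrative user):
zfssa:> maintenance workflows
zfssa:maintenance workflows> list           (locate the clear-locks workflow)
zfssa:maintenance workflows> select workflow-000
zfssa:maintenance workflow-000> execute     (supply hostname and IP when prompted)
```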
Problem Summary: This issue might occur on a system configured with a SUNW.HAStoragePlus (HASP) resource managing a ZFS storage pool.
When a large zfs send and zfs receive operation replicates a snapshot from another system into a separate dataset on the same zpool managed by HASP, the HASP resources might fail in Oracle Solaris Cluster 4.3 running on Oracle Solaris 11.2 or Oracle Solaris 11.3.
Workaround: Before starting data replication of the file system that is actively managed under the Oracle Solaris Cluster resource, do either of the following:
Execute the following command to disable the HASP resource:
# clresource disable hasp-resource-name
Execute the following command to disable monitoring of the HASP resource:
# clresource unmonitor hasp-resource-name
Once data replication is successfully completed, bring the HASP resource to a monitored and online state.
Note that even with the workaround, if the HASP resource fails over during the zfs receive operation, the snapshot replication does not complete. You must manually resume the replication on the node to which the HASP resource failed over.
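The workaround can be sketched as the following command sequence. The resource name hasp-rs, the host source-host, and the dataset names tank/data and tank/replica are placeholders for your configuration; substitute clresource unmonitor for clresource disable if you only want to suspend monitoring:

```
# clresource disable hasp-rs
# ssh source-host zfs send tank/data@snap1 | zfs receive -F tank/replica
# clresource enable hasp-rs
# clresource monitor hasp-rs
```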
Problem Summary: The existing data service configuration wizards do not support configuring scalable HAStoragePlus resources and resource groups, nor can they detect existing scalable HAStoragePlus resources and resource groups.
For example, while configuring HA for WebLogic Server in multi-instance mode, the wizard displays the message "No highly available storage resources are available for selection" even when scalable HAStoragePlus resources and resource groups exist on the cluster.
Workaround: Configure data services that use scalable HAStoragePlus resources and resource groups as follows:
1. Use the clresourcegroup and clresource commands to configure HAStoragePlus resource groups and resources in scalable mode.
2. Use the clsetup wizard to configure data services as if they were on local file systems, that is, as if no storage resources were involved.
3. Use the CLI to create an offline-restart dependency on the scalable HAStoragePlus resources configured in Step 1, and a strong positive affinity on the scalable HAStoragePlus resource groups.
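As a sketch of the dependency and affinity settings, assume the scalable HAStoragePlus resource and resource group created in Step 1 are named scal-hasp-rs and scal-hasp-rg, and the wizard-created application resource and resource group are app-rs and app-rg (all placeholder names):

```
# clresource set -p Resource_dependencies_offline_restart=scal-hasp-rs app-rs
# clresourcegroup set -p RG_affinities=++scal-hasp-rg app-rg
```

The ++ prefix in RG_affinities declares a strong positive affinity, so app-rg is brought online only on a node where scal-hasp-rg is online.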
Problem Summary: If scalable applications configured to run in different zone clusters bind to INADDR_ANY and use the same port, then scalable services cannot distinguish between the instances of these applications that run in different zone clusters.
Workaround: Do not configure the scalable applications to bind to INADDR_ANY as the local IP address; instead, bind each application to a specific IP address, or ensure that applications in different zone clusters use ports that do not conflict.
Problem Summary: When you reboot or shut down a cluster node where the Oracle Solaris Cluster HA for NFS resource is online, and an NFS client has a file or directory open with a write operation in progress, the NFS client might see a "Stale NFS file handle" error.
Workaround: Before you reboot or shut down the cluster node where the Oracle Solaris Cluster HA for NFS resource is online, execute a resource group switchover to a different target cluster node.
# clrg switch -n target_host nfs-rg
where target_host is the target cluster node for the switchover of resource group nfs-rg.
Problem Summary: Oracle Grid startup might hang indefinitely when using Oracle Solaris 11.3 and Oracle Grid 188.8.131.52.0.
Workaround: You can use Oracle Grid 184.108.40.206.0 or Oracle Solaris 11.2 to avoid this problem. Contact your Oracle support representative to learn whether a workaround or fix is available.
Problem Summary: When using Oracle Solaris Cluster HA for Oracle with Solaris Volume Manager (SVM) or UFS file system devices in an x64 cluster environment, Oracle Database log corruption might occur.
Workaround: To avoid data corruption when using SVM or UFS based file systems with HA for Oracle, place the Oracle binaries and the Oracle data on separate file systems, and set the forcedirectio mount option in /etc/vfstab for the Oracle data file system. Because forcedirectio must be used only for the Oracle data file system, separate file systems for Oracle binaries and Oracle data are required.
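For illustration, a /etc/vfstab entry for an Oracle data file system on an SVM metadevice might look like the following; the disk set name oradg, metadevice d100, and mount point /oradata are placeholders, and mount at boot is set to no because the file system is managed by the cluster:

```
#device to mount        device to fsck           mount point  FS type  fsck pass  mount at boot  mount options
/dev/md/oradg/dsk/d100  /dev/md/oradg/rdsk/d100  /oradata     ufs      2          no             forcedirectio
```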