Oracle® Solaris Cluster 4.3 Release Notes

Updated: June 2021
 
 

Data Services Issues

Oracle Database/WLS Resource Fails to Come Online Due to Locking Issue (15713853)

Problem Summary: On a cluster that uses a ZFS Storage Appliance, during a power-failure test in which all cluster nodes are powered off and then powered back on, the database might not come back online and the whole application might fail. After any such power cycle, the application might remain unavailable until you manually clear the NFS locks on the ZFS Storage Appliance.

Workaround: For ZFS Storage Appliance storage (NFS file systems), from the ZFS Storage Appliance GUI go to Maintenance, select Workflows, and then click Clear Locks, supplying the host name and IP address of the affected node.

HASP Resources Fail in Oracle Solaris Cluster 4.3 on Oracle Solaris 11.2 and Oracle Solaris 11.3 With zfs recv (17365301)

Problem Summary: This issue might occur on a system configured with a SUNW.HAStoragePlus (HASP) resource managing a ZFS storage pool.

When a snapshot from another system is replicated with a large zfs send and zfs recv into a separate ZFS sub-volume on the same zpool that HASP manages, the HASP resources might fail in Oracle Solaris Cluster 4.3 running on Oracle Solaris 11.2 or Oracle Solaris 11.3.

Workaround: Before starting data replication of the file system that is actively managed under the Oracle Solaris Cluster resource, do either of the following:

  • Execute the following command to disable the HASP resource:

    # clresource disable hasp-resource-name
  • Execute the following command to disable monitoring of the HASP resource:

    # clresource unmonitor hasp-resource-name

Once data replication has completed successfully, bring the HASP resource back to a monitored and online state.
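
For example, assuming the same hasp-resource-name placeholder used above, the following commands bring the resource back online and re-enable monitoring:

# clresource enable hasp-resource-name
# clresource monitor hasp-resource-name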

Note that even with the workaround, if HASP fails over during the zfs receive, the snapshot replication will not complete. You must manually resume the replication on the node to which HASP fails over.
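
For instance, a minimal sketch of rerunning an interrupted incremental replication against the new node; the pool, file system, snapshot, and host names are hypothetical placeholders:

# zfs send -i srcpool/fs@snap1 srcpool/fs@snap2 | \
    ssh new-node zfs receive -F hasp-pool/sub-volume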

Data Service Configuration Wizards Do Not Support Storage Resources and Resource Groups for Scalable HAStoragePlus (15820415)

Problem Summary: The existing data service configuration wizards do not support configuring scalable HAStoragePlus resources and resource groups. The wizards also cannot detect existing scalable HAStoragePlus resources and resource groups.

For example, while configuring HA for WebLogic Server in multi-instance mode, the wizard displays the message "No highly available storage resources are available for selection" even when scalable HAStoragePlus resources and resource groups already exist on the cluster.

Workaround: Configure data services that use scalable HAStoragePlus resources and resource groups as follows:

  1. Use the clresourcegroup and clresource commands to configure HAStoragePlus resource groups and resources in scalable mode.

  2. Use the clsetup wizard to configure data services as if they were on local file systems, that is, as if no storage resources are involved.

  3. Use the CLI to create an offline-restart dependency on the scalable HAStoragePlus resources configured in Step 1, and a strong positive affinity on the scalable HAStoragePlus resource groups, as shown in the sketch after this list.
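
The following is a minimal sketch of Steps 1 and 3; the names hasp-rg, hasp-rs, app-rg, app-rs, and the mount point /global/appdata are hypothetical placeholders:

# clresourcegroup create -S hasp-rg
# clresource create -t SUNW.HAStoragePlus -g hasp-rg \
    -p FilesystemMountPoints=/global/appdata hasp-rs
# clresource set -p Resource_dependencies_offline_restart=hasp-rs app-rs
# clresourcegroup set -p RG_affinities=++hasp-rg app-rg

Here app-rs and app-rg stand for the data service resource and resource group configured by the wizard in Step 2.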

Scalable Applications Are Not Isolated Between Zone Clusters (15611122)

Problem Summary: If scalable applications configured to run in different zone clusters bind to INADDR_ANY and use the same port, then scalable services cannot distinguish between the instances of these applications that run in different zone clusters.

Workaround: Either do not configure the scalable applications to bind to INADDR_ANY as the local IP address, or bind them to a port that does not conflict with another scalable application.

NFS Server Failover Triggers Stale NFS File Handle (21459179)

Problem Summary: When you reboot or shut down a cluster node where the Oracle Solaris Cluster HA for NFS resource is online, an NFS client that has a file or directory open for writing might see a "Stale NFS file handle" error.

Workaround: Before you reboot or shut down the cluster node where the Oracle Solaris Cluster HA for NFS resource is online, execute a resource group switchover to a different target cluster node.

# clrg switch -n target_host nfs-rg

where target_host is the target cluster node for the switchover of resource group nfs-rg.
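
To confirm that the switchover completed before shutting down the node, you can check the resource group state with the clresourcegroup status command (clrg is its short form):

# clrg status nfs-rg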

Upgrading From Oracle Solaris 11.2 to Oracle Solaris 11.3 Results in Oracle Grid 12.1.0.1.0 Startup Hang (21511528)

Problem Summary: Oracle Grid startup might hang indefinitely when using Oracle Solaris 11.3 and Oracle Grid 12.1.0.1.0.

Workaround: Use Oracle Grid 12.1.0.2.0 or Oracle Solaris 11.2 to avoid this problem. Contact your Oracle support representative to learn whether a workaround or fix is available.

ORA-00742: Log Read Detects Lost Write (21186724)

Problem Summary: When using Oracle Solaris Cluster HA for Oracle with Solaris Volume Manager (SVM) or UFS file system devices in an x64 cluster environment, Oracle Database log corruption might occur.

Workaround: To avoid data corruption when using SVM or UFS based file systems with HA for Oracle Database, place the Oracle binaries and Oracle data on separate file systems, and set the forcedirectio mount option in /etc/vfstab for the Oracle data file systems. Use forcedirectio only for the Oracle data file systems; this is why the Oracle binaries and Oracle data must reside on separate file systems.
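
For example, an /etc/vfstab entry for an Oracle data file system might look like the following, where the SVM metadevice d100 in the oradg diskset and the mount point /oradata are hypothetical placeholders:

/dev/md/oradg/dsk/d100  /dev/md/oradg/rdsk/d100  /oradata  ufs  2  no  forcedirectio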