Changes in 12cR1.2.1.24

The following changes were made in Oracle NoSQL Database 12cR1.2.1.24.

Bug and Performance Fixes

  1. Under certain circumstances, a replication node that was on the verge of shutting down, or in the midst of transitioning from master to replica state, could experience the failure shown below while cleaning up outstanding requests. Because the node would automatically restart and the operation would be retried, the failure was transparent to the application, but it could cause an unnecessary node failover. This has been fixed. [#22152]

    java.lang.IllegalStateException: Transaction 30 detected open cursors while aborting
        at com.sleepycat.je.txn.Txn.abortInternal(Txn.java:1190)
        at com.sleepycat.je.txn.Txn.abort(Txn.java:1100)
        at com.sleepycat.je.txn.Txn.abort(Txn.java:1073)
        at com.sleepycat.je.Transaction.abort(Transaction.java:207)
        at oracle.kv.impl.util.TxnUtil.abort(TxnUtil.java:80)
        at oracle.kv.impl.api.RequestHandlerImpl.executeInternal(RequestHandlerImpl.java:469)
        at oracle.kv.impl.api.RequestHandlerImpl.access$300(RequestHandlerImpl.java:122)
        at oracle.kv.impl.api.RequestHandlerImpl$2.execute(RequestHandlerImpl.java:301)
        at oracle.kv.impl.api.RequestHandlerImpl$2.execute(RequestHandlerImpl.java:290)
        at oracle.kv.impl.fault.ProcessFaultHandler.execute(ProcessFaultHandler.java:135)
  2. In past releases of NoSQL DB, a replication node that transitioned from master to replica state had to close and reopen its database environment as part of the change in status. This transition has been streamlined so that, in the majority of cases, the database environment is not perturbed, the transition requires fewer resources, and the node is more available. [#22627]

  3. The plan deploy-topology command has additional safeguards that increase the reliability of the topology rebalance and redistribute plans. When moving a replication node from one Storage Node to another, the command now checks that the Storage Nodes involved in the operation are up and running before any action is taken; the sketch below shows a typical rebalance sequence. [#22850]
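
     For illustration, a rebalance that benefits from this check uses the
     administrative CLI sequence sketched below. The topology name and
     Storage Node pool name are hypothetical placeholders; substitute the
     names used in your deployment.

        kv-> topology clone -current -name newTopo
        kv-> topology rebalance -name newTopo -pool snpool
        kv-> plan deploy-topology -name newTopo -wait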

  4. Under certain circumstances it was possible for a replication node to use out of date master identity information when joining a shard. This could cause a delay if the targeted node was unavailable. This has been fixed. [#22851]

  5. Under certain circumstances, operations would end prematurely with oracle.kv.impl.fault.TTLFaultException. This exception is now handled internally by the server and the client library, and the operation is retried. If the fault condition persists, the operation eventually fails with an oracle.kv.RequestTimeoutException, which the application can handle as sketched below. [#22860]
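
     As a hedged sketch of possible client-side handling (not taken from
     the product documentation), the fragment below catches
     oracle.kv.RequestTimeoutException around a read. The store name,
     helper host, and key are hypothetical placeholders.

        import oracle.kv.KVStore;
        import oracle.kv.KVStoreConfig;
        import oracle.kv.KVStoreFactory;
        import oracle.kv.Key;
        import oracle.kv.RequestTimeoutException;

        ...
        final KVStore store = KVStoreFactory.getStore(
            new KVStoreConfig("mystore", "node01:5000"));
        try {
            // Transient fault conditions are retried internally before
            // this call returns; a timeout here means the condition
            // persisted past the request timeout.
            store.get(Key.createKey("exampleKey"));
        } catch (RequestTimeoutException rte) {
            // The application decides whether to retry, back off, or
            // surface the error.
            System.err.println("Request timed out: " + rte);
        } finally {
            store.close();
        }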

  6. Previously, there were cases where a replication node would require the transfer of a copy of the shard data in order to come up and join the shard, even though it was unnecessary. This has been fixed. [#22782]

  7. When new storage nodes are added to an Oracle NoSQL DB deployment and a new topology is deployed, the store takes that opportunity to redistribute master roles for optimal performance. In some cases, the store might not notice the new storage nodes until other events, such as failovers or mastership changes, had occurred, which delayed master balancing. This has been fixed. [#22888]

  8. The setting of the JE configuration parameter je.evictor.criticalPercentage used by the store has been corrected: it was previously set to 105 and is now set to 20. The new setting provides better cache management behavior in cases where the data set size exceeds the optimal memory settings; a brief illustration follows. [#22899]
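
     For context, je.evictor.criticalPercentage is a standard JE
     environment parameter, and the store now supplies the value 20
     internally, so no user action is required. As a sketch only, a
     standalone JE application would set it as follows; the environment
     directory is a placeholder.

        import java.io.File;
        import com.sleepycat.je.Environment;
        import com.sleepycat.je.EnvironmentConfig;

        ...
        final EnvironmentConfig envConfig = new EnvironmentConfig();
        envConfig.setAllowCreate(true);
        // Resort to in-thread ("critical") eviction once the cache
        // exceeds its budget by the given percentage.
        envConfig.setConfigParam("je.evictor.criticalPercentage", "20");
        final Environment env =
            new Environment(new File("/tmp/je-env"), envConfig);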

Utility and Documentation Changes

  1. A timestamp has been added to the output of the CLI "ping" command; an example invocation appears below. [#22859]
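
     For reference, "ping" is issued from the administrative CLI, and the
     timestamp now appears in its output; the host and port below are
     placeholders for an actual deployment.

        java -jar KVHOME/lib/kvstore.jar runadmin -port 5000 -host node01
        kv-> ping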