Oracle NoSQL Database Change Log

Release 11gR2.2.0.26 Enterprise Edition

Oracle NoSQL Database 2.0 has moved to Admin version 2. This is an on-disk format change which affects internal data stored by the Admin services. The change is forward compatible in that Admin services deployed using 2.0 can read data created by older releases. The change is not backwards compatible in that Admin services which have been deployed with the new release cannot be restarted using older NoSQL releases.

See the section on Updating an Existing Oracle NoSQL Database Deployment in the Admin Guide.


Changes in 11gR2.2.0.26 Enterprise Edition

New Features:

  1. This release adds the capability to remove an Admin service replica. If you have deployed more than one Admin, you can remove one of them using the following command:
    plan remove-admin -admin <adminId>
    

    You cannot remove the Admin if it is the only Admin instance configured.

    For availability and durability, it is highly recommended that you maintain at least three Admin instances at all times. Accordingly, if removing an Admin would leave fewer than three in place, the command will fail unless you give the -force flag.

    If you try to remove the Admin that is currently the master, mastership will transfer to another Admin. The plan will be interrupted, and subsequently can be re-executed on the new master Admin. To re-execute the interrupted plan, you would use this command:

    plan execute -id <planId>
    

  2. The Admin CLI verify command now checks that the Replication Nodes hosted on a single Storage Node have memory settings that fit within the Storage Node's memory budget. This guards against mistakes that may occur if the system administrator overrides the defaults and manually sets Replication Node heap sizes. [#21727]

  3. The Admin CLI verify command now labels any verification issues as violations or notes. Violations are of greater importance, and the system administrator should determine how to adjust the system to address the problem. Notes are warnings, and are of lesser importance. [#21950]

Bug fixes:

  1. Several corrections were made to latency statistics. These corrections apply to the service-side statistics in the Admin console, CLI show perf command, .perf files and .csv files, as well as the client-side statistics returned by KVStore.getStats. However, corrections to the 95% and 99% values do not apply to the client-side statistics, since these values do not appear in the client-side API. [#21763]

  2. Modified the Administration Process to allocate ports from within a port range if one is specified by the -servicerange argument to the makebootconfig utility. If the argument is not specified, the Administration Process will use any available port. Please see the Admin Guide for details regarding the configuration of ports used by Oracle NoSQL Database. [#21962]

  3. Modified the replication node to handle the unlikely case that the locally stored topology is missing. A missing topology results in a java.lang.NullPointerException being thrown in the TopologyManager and will prevent the replication node from starting. [#22015]

  4. Replication Node memory calculations are more robust for Storage Nodes that host multiple Replication Nodes. In previous releases, using the plan change-params command to reduce the capacity parameter for a Storage Node which hosts multiple Replication Nodes could result in an overly aggressive increase in RN heap, which would make the Replication Nodes fail at startup. The problem would be fixed when a topology was rebalanced, but until that time, the Replication Nodes were unavailable. The default memory sizing calculation now factors in the number of RNs resident on a Storage Node, and adjusts RN heap sizes as Replication Nodes are relocated by the deploy-topology command. [#21942]

  5. Fixed a bug that could cause a NullPointerException, such as the one below, during RN start-up. The exception would appear in the RN log and the RN would fail to start. The conditions under which this problem occurred include partition migration between shards along with multiple abnormal RN shutdowns. If this bug is encountered, it can be corrected by upgrading to the current release, and no data loss will occur.
    Exception in thread "main" com.sleepycat.je.EnvironmentFailureException: (JE
    5.0.XX) ...  last LSN=.../... LOG_INTEGRITY: Log information is incorrect,
    problem is likely persistent. Environment is invalid and must be closed.
        at com.sleepycat.je.recovery.RecoveryManager.traceAndThrowException(RecoveryManager.java:2793)
        at com.sleepycat.je.recovery.RecoveryManager.undoLNs(RecoveryManager.java:1097)
        at com.sleepycat.je.recovery.RecoveryManager.buildTree(RecoveryManager.java:587)
        at com.sleepycat.je.recovery.RecoveryManager.recover(RecoveryManager.java:198)
        at com.sleepycat.je.dbi.EnvironmentImpl.finishInit(EnvironmentImpl.java:610)
        at com.sleepycat.je.dbi.DbEnvPool.getEnvironment(DbEnvPool.java:208)
        at com.sleepycat.je.Environment.makeEnvironmentImpl(Environment.java:246)
        at com.sleepycat.je.Environment.<init>(Environment.java:227)
        at com.sleepycat.je.Environment.<init>(Environment.java:170)
        ...
    Caused by: java.lang.NullPointerException
        at com.sleepycat.je.log.entry.LNLogEntry.postFetchInit(LNLogEntry.java:406)
        at com.sleepycat.je.txn.TxnChain.<init>(TxnChain.java:133)
        at com.sleepycat.je.txn.TxnChain.<init>(TxnChain.java:84)
        at com.sleepycat.je.recovery.RollbackTracker$RollbackPeriod.getChain(RollbackTracker.java:1004)
        at com.sleepycat.je.recovery.RollbackTracker$Scanner.rollback(RollbackTracker.java:477)
        at com.sleepycat.je.recovery.RecoveryManager.undoLNs(RecoveryManager.java:1026)
        ... 10 more
    
    [#22052]

  6. Fixed a bug that caused excess memory to be used in the storage engine cache on an RN, which could result in poor performance due to cache eviction and additional I/O. The problem occurred only when the KVStore.storeIterator or KVStore.storeKeysIterator method was used; a brief sketch of that access pattern follows this list. [#21973]
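
The kind of full-store scan that exercised this code path is sketched below. This is a minimal illustration, not code from the release; the store name and helper host:port are assumptions.

    import java.util.Iterator;

    import oracle.kv.Direction;
    import oracle.kv.KVStore;
    import oracle.kv.KVStoreConfig;
    import oracle.kv.KVStoreFactory;
    import oracle.kv.KeyValueVersion;

    public class ScanSketch {
        public static void main(String[] args) {
            // Store name and helper host:port are illustrative.
            KVStore store = KVStoreFactory.getStore(
                new KVStoreConfig("mystore", "node01:5000"));

            // Unordered full-store scan; a batch size of 0 selects the default.
            Iterator<KeyValueVersion> iter =
                store.storeIterator(Direction.UNORDERED, 0);
            long count = 0;
            while (iter.hasNext()) {
                iter.next();
                count++;
            }
            System.out.println("Scanned " + count + " records");

            store.close();
        }
    }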

Performance and other General Changes:

  1. The replicas in a shard now dynamically configure the JE property RepParams.REPLAY_MAX_OPEN_DB_HANDLES which controls the size of the cache used to hold database handles during replication. The cache size is determined dynamically based upon the number of partitions currently hosted by the shard. This improved cache sizing can result in better write performance for shards hosting large numbers of partitions. [#21967]

  2. The names of the client and server JAR files no longer include release version numbers. The files are now called:
    lib/kvstore.jar
    lib/kvclient.jar
    

    This change should reduce the amount of work needed to switch to a new release because the names of JAR files will no longer change between releases. Note that the name of the installation directory continues to include the release version number. [#22034]


  3. A SEVERE level message is now logged and an admin alert is fired when the storage engine's average log cleaner (disk reclamation) backlog increases over time. An example of the message text is below.
    121215 13:48:57:480 SEVERE [...] Average cleaner backlog has grown from 0.0 to
    6.4. If the cleaner continues to be unable to make progress, the JE cache size
    and/or number of cleaner threads are probably too small. If this is not
    corrected, eventually all available disk space will be used.
    
    For more information on setting the cache size appropriately to avoid such problems, see "Determining the Per-Node Cache Size" in the Administrator's Guide. [#21111]

  4. The storage engine's log cleaner will now delete files in the latter portion of the log, even when the application is not performing any write operations. Previously, files were prohibited from being deleted in the portion of the log after the last application write. When a log cleaner backlog was present (for example, when the cache had been configured too small, relative to the data set size and write rate), this could cause the cleaner to operate continuously without being able to delete files or make forward progress. [#21069]

  5. NoSQL DB 2.0.23 introduced a performance regression relative to R1.2.23: the kvstore client library and Replication Node consumed a greater percentage of system CPU time. This regression has been fixed. [#22096]


Changes in 11gR2.2.0.23

New Features:

  1. This release provides the ability to add storage nodes to the system after it has been deployed. The system will rebalance and redistribute the data onto the new nodes without stopping operations. See Chapter 6 of the Admin Guide, Determining your Store's Configuration, for more details.
  2. A new oracle.kv.lob package provides operations that can be used to read and write Large Objects (LOBs) such as audio and video files. As a general rule, any object larger than 1 MB is a good candidate for representation as a LOB. The LOB API permits access to large values without having to materialize the value in its entirety by providing streaming APIs for reading and writing these objects. A brief usage sketch follows this list.
  3. A C API has been added. The implementation uses Java JNI and requires a Java virtual machine to run on the client. It is available as a separate download.
  4. Added a new remove-storagenode plan. This command removes from the system's topology a storage node that is not hosting any NoSQL Database components. Two examples of when this might be useful are:
    A storage node was incorrectly configured, and cannot be deployed.
    A storage node was once part of a NoSQL Database, but all components have been migrated from it using the migrate-storagenode command, and the storage node should be decommissioned.
    [#20530]
  5. Added the ability to specify additional physical configuration information about storage nodes. This information is used by the system to make more intelligent choices about resource allocation and consumption. The administration documentation discusses how these parameters are set and used. [#20951]
  6. Added Avro support. The value of a kv pair can now be stored in Avro binary format. An Avro schema is defined for each type of data stored. The Avro schema is used to efficiently and compactly serialize the data, to guarantee that the data conforms to the schema, and to perform automatic evolution of the data as the schema changes over time. Bindings are supplied that allow representing Avro data as a POJO (Plain Old Java Object), a JSON object, or a generic Map-like data structure. For more information, see Chapter 7 - Avro Schemas and Chapter 8 - Avro Bindings in the Getting Started Guide. The oracle.kv.avro package is described in the Javadoc. The use of the Avro format is strongly recommended; NoSQL DB will leverage Avro in the future to provide additional features and capabilities. A minimal binding sketch follows this list. [#21213]
  7. Added Avro support for the Hadoop KVInputFormat classes. A new oracle.kv.hadoop.KVAvroInputFormat class returns Avro IndexedRecords to the caller. When this class is used in conjunction with Oracle Loader for Hadoop, it is possible to read data directly from NoSQL Database with OLH, without an interim Map-Reduce job to store data in HDFS. [#21157]
  8. Added a feature which allows Oracle Database External Tables to be used to access Oracle NoSQL Database records. There is more information in the javadoc for the oracle.kv.exttab package and a "cookbook" example in the examples/externaltables directory. [#20981]
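
As a companion to item 2 above, the following minimal sketch writes and then reads a Large Object through the oracle.kv.lob streaming API. The store name, key, file path, durability, consistency, and timeout values are illustrative assumptions; check the oracle.kv.lob javadoc for the exact signatures in your release.

    import java.io.FileInputStream;
    import java.io.InputStream;
    import java.util.Arrays;
    import java.util.concurrent.TimeUnit;

    import oracle.kv.Consistency;
    import oracle.kv.Durability;
    import oracle.kv.KVStore;
    import oracle.kv.KVStoreConfig;
    import oracle.kv.KVStoreFactory;
    import oracle.kv.Key;
    import oracle.kv.lob.InputStreamVersion;

    public class LobSketch {
        public static void main(String[] args) throws Exception {
            // Store name and helper host:port are illustrative.
            KVStore store = KVStoreFactory.getStore(
                new KVStoreConfig("mystore", "node01:5000"));

            // LOB keys carry a reserved suffix (".lob" by default).
            Key lobKey = Key.createKey(Arrays.asList("video", "intro.mpg.lob"));

            // Stream the file into the store without materializing it in memory.
            InputStream in = new FileInputStream("intro.mpg");   // illustrative path
            try {
                store.putLOB(lobKey, in, Durability.COMMIT_WRITE_NO_SYNC,
                             5, TimeUnit.SECONDS);
            } finally {
                in.close();
            }

            // Read it back as a stream.
            InputStreamVersion isv =
                store.getLOB(lobKey, Consistency.NONE_REQUIRED, 5, TimeUnit.SECONDS);
            InputStream lobStream = isv.getInputStream();
            // ... consume lobStream, then close it ...
            lobStream.close();

            store.close();
        }
    }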

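As a companion to item 6 above, here is a minimal sketch of storing and retrieving a value with a generic Avro binding. It assumes the schema shown has already been registered with the store as described in the Getting Started Guide; the store name, schema, and key are illustrative.

    import java.util.Arrays;

    import org.apache.avro.Schema;
    import org.apache.avro.generic.GenericData;
    import org.apache.avro.generic.GenericRecord;

    import oracle.kv.KVStore;
    import oracle.kv.KVStoreConfig;
    import oracle.kv.KVStoreFactory;
    import oracle.kv.Key;
    import oracle.kv.ValueVersion;
    import oracle.kv.avro.AvroCatalog;
    import oracle.kv.avro.GenericAvroBinding;

    public class AvroSketch {
        // Illustrative schema; it must already be registered with the store.
        private static final String SCHEMA_JSON =
            "{\"type\":\"record\",\"name\":\"MemberInfo\",\"namespace\":\"example\","
          + "\"fields\":[{\"name\":\"name\",\"type\":\"string\",\"default\":\"\"}]}";

        public static void main(String[] args) {
            // Store name and helper host:port are illustrative.
            KVStore store = KVStoreFactory.getStore(
                new KVStoreConfig("mystore", "node01:5000"));

            Schema schema = new Schema.Parser().parse(SCHEMA_JSON);
            AvroCatalog catalog = store.getAvroCatalog();
            GenericAvroBinding binding = catalog.getGenericBinding(schema);

            // Serialize a record to Avro binary and store it.
            GenericRecord rec = new GenericData.Record(schema);
            rec.put("name", "Ada");
            Key key = Key.createKey(Arrays.asList("member", "ada"));
            store.put(key, binding.toValue(rec));

            // Read it back and deserialize.
            ValueVersion vv = store.get(key);
            GenericRecord roundTrip = binding.toObject(vv.getValue());
            System.out.println(roundTrip.get("name"));

            store.close();
        }
    }
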
API Changes:

Performance and other General Changes:

  1. New methods have been added to allow clients to configure the socket timeouts used to make client requests. Please review the javadoc for details; a configuration sketch follows this list.

    R1 installations must ensure that the software on the storage nodes has been upgraded as described in the upgrade documentation accompanying this release before using the above APIs on the client. [#20997]

  2. New service parameters have been added to control the backlog associated with sockets created by NoSQL Database. These are controllable for the Rep Node and Storage Nodes' Monitor, Admin, and Registry Handler interfaces. The parameters are rnRHSOBacklog (default 1024), rnMonitorSOBacklog (default 0), rnAdminSOBacklog (default 0), snAdminSOBacklog (default 0), snMonitorSOBacklog (default 0), and snRegistrySOBacklog (default 1024). [#21322]
  3. Previously, calling Key.isPrefix with an argument containing a smaller major or minor path than the target Key object caused an IndexOutOfBoundsException in certain cases. This has been fixed.
  4. The KeyRange() constructor now checks that the start Key is less than the end Key when both are specified, and throws an IllegalArgumentException if it is not. KeyRange also has toString() and fromString() methods for encoding and decoding KeyRange instances, similar to the same methods in Key; a short sketch follows this list. [#21470]
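
As an illustration of item 1 above, the sketch below configures client-side socket timeouts on KVStoreConfig before opening the store. The setSocketOpenTimeout and setSocketReadTimeout method names, and the values used, are assumptions; confirm them against the javadoc shipped with this release.

    import java.util.concurrent.TimeUnit;

    import oracle.kv.KVStore;
    import oracle.kv.KVStoreConfig;
    import oracle.kv.KVStoreFactory;

    public class TimeoutSketch {
        public static void main(String[] args) {
            // Store name and helper host:port are illustrative.
            KVStoreConfig config = new KVStoreConfig("mystore", "node01:5000");

            // Assumed accessor names; check the KVStoreConfig javadoc.
            // Time allowed to establish a connection to a node.
            config.setSocketOpenTimeout(3, TimeUnit.SECONDS);
            // Time allowed for a request to produce data on an open socket;
            // it should be at least as large as the request timeout.
            config.setSocketReadTimeout(30, TimeUnit.SECONDS);

            KVStore store = KVStoreFactory.getStore(config);
            // ... use the store ...
            store.close();
        }
    }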

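The sketch below illustrates the KeyRange validation and string encoding described in item 4; the key strings are illustrative.

    import oracle.kv.KeyRange;

    public class KeyRangeSketch {
        public static void main(String[] args) {
            // The start key must be less than the end key when both are given.
            KeyRange range = new KeyRange("alpha", true, "omega", false);

            // Round-trip the range through its string encoding,
            // analogous to Key.toString() and Key.fromString().
            String encoded = range.toString();
            KeyRange decoded = KeyRange.fromString(encoded);
            System.out.println(encoded + " -> " + decoded);

            // An inverted range now fails fast with IllegalArgumentException.
            try {
                new KeyRange("omega", true, "alpha", false);
            } catch (IllegalArgumentException expected) {
                System.out.println("Rejected inverted range: " + expected.getMessage());
            }
        }
    }
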
Utility Changes:

  1. Many new commands have been added to the CLI. See Appendix A - Command Line Interface (CLI) Command Reference of the Administrator's Guide for details.
  2. The Admin Console is now for monitoring only.
  3. Administration CLI commands have been changed so that component ids match the ids used in the topology display. Previously, Datacenters, Storage Nodes, Admin instances, and Replication Nodes were identified only by number. For example, the syntax to add Storage Node 17 to a Storage Node pool, or to show the parameters for a given Replication Node, was:
    joinPool myStorageNodePool 17
    show repnode-params 5,3
    
    Datacenters can now be expressed as # or dc#
    Admin instances can now be expressed as # or admin#
    Storage Nodes can now be expressed as # or sn#
    Replication Nodes can now be expressed as groupNum,nodeNum, or rgX-rnY

    The commands shown above are still valid, but can also be expressed as:

    joinPool myStorageNodePool sn17
    show repnode-params rg5-rn3
    
    [#21099]

Documentation, Installation and Integration:

  1. The javadoc for the Key.createKey methods has been improved to warn that List instances passed as parameters are owned by the Key object after calling the method. To avoid unpredictable results, they must not be modified; a short illustration follows. [#20530]
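
A minimal illustration of the ownership rule described above; the path components are illustrative.

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    import oracle.kv.Key;

    public class KeyOwnershipSketch {
        public static void main(String[] args) {
            List<String> majorPath = new ArrayList<String>(Arrays.asList("user", "1234"));
            Key key = Key.createKey(majorPath);

            // The Key now owns majorPath; modifying the list here could
            // silently change the key. Build a fresh list for each Key instead.
            List<String> otherPath = new ArrayList<String>(Arrays.asList("user", "5678"));
            Key other = Key.createKey(otherPath);

            System.out.println(key + " / " + other);
        }
    }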


Changes in 11gR2.1.2.123

Bug fixes:

  1. Previously, executing a change-repnode-params plan in order to change Replication Node parameters for a node other than the one running the Admin service would fail. This operation will now succeed. [#20901]

  2. A deploy-storage-node plan which ran into problems when attempting to deploy a new storage node would leave the problematic SN in the store. This would require that the user either take manual action to remove the bad SN, or fix the problem and retry the plan. For convenience, the deploy-storage-node plan will now clean up if it runs into errors, and will not leave the failed SN behind. [#20530]

Performance and other General Changes:

  1. The command line interface's snapshot create command has been made significantly faster. Previously, it could take minutes if executed on a store with a large amount of data. This should be reduced to seconds. [#20772]

Utility Changes:

  1. The two scripts for starting kvlite and executing control commands, bin/run-kvlite.sh and bin/kvctl, have been replaced by a java -jar lib/kvstore-M.N.P.jar command. This provides portability to all Java platforms, including Windows. The two scripts are deprecated, but will be supported for at least one release cycle.

    The translation from the old script commands to the new -jar commands is as follows:

    Old script command           New -jar command
    bin/run-kvlite.sh args...    java -jar lib/kvstore-M.N.P.jar kvlite args...
    bin/kvctl command args...    java -jar lib/kvstore-M.N.P.jar command args...

    There are a few differences to be aware of between the old and new commands.