Oracle NoSQL Database Change Log

Release 12cR1.3.3.4 Enterprise Edition

Upgrade Requirements

Upgrading an existing store to release 3.3 requires that the store be running with a 2.0 release or later. If you want to use release 3.3 with a store created prior to the 2.0 release, be sure to upgrade the store to a 2.0 release before upgrading it to the 3.3 release. Once a store has been upgraded to release 3.3, it cannot be downgraded to an earlier release.

See the section on Updating an Existing Oracle NoSQL Database Deployment in the Admin Guide.

Release 3.3 is compatible with Java SE 7 and later, and has been tested and certified against Oracle JDK 7u67. We encourage you to upgrade to the latest Java releases to take advantage of the latest bug fixes and performance improvements.

Attempting to use this release with a version of Java earlier than Java 7 will produce an error message similar to:

Exception in thread "main" java.lang.UnsupportedClassVersionError:
  oracle/kv/impl/util/KVStoreMain : Unsupported major.minor version 51.0

Changes in 12cR1.3.3.4 Enterprise Edition

New Features

  1. A number of new security features have been added.

    Users are now able to enforce table-level access checks through both the API and the administrative command line interface (Admin CLI) by using the following new set of table-specific privileges:

    Privilege          Description
    READ_ANY_TABLE     Read from any table in kvstore
    DELETE_ANY_TABLE   Delete data from any table in kvstore
    INSERT_ANY_TABLE   Insert and update data in any table in kvstore
    READ_TABLE         Read from a specific table in kvstore
    DELETE_TABLE       Delete data from a specific table in kvstore
    INSERT_TABLE       Insert and update data in a specific table in kvstore
    CREATE_ANY_TABLE   Create any table in kvstore
    DROP_ANY_TABLE     Drop any table in kvstore
    EVOLVE_ANY_TABLE   Evolve any table in kvstore
    CREATE_ANY_INDEX   Create any index on any table in kvstore
    DROP_ANY_INDEX     Drop any index on any table in kvstore
    EVOLVE_TABLE       Evolve a specific table
    CREATE_INDEX       Create an index on a specific table
    DROP_INDEX         Drop an index on a specific table

    Users are now able to create new roles that group together privileges or other roles, providing a way to grant a set of desired privileges to a user. New role management commands have been added to support creating and dropping roles, and granting and revoking privileges or roles to and from other roles.

    The data definition language (DDL) has been extended to provide a declarative interface to all security operations. The language reference is in the Security Guide. The language is accessible via the API as well as via the "execute" command in the Admin CLI.

    Passwords now have lifetimes and will expire when they have been in use beyond the specified lifetime. It is also possible now to explicitly expire a password when adding a new user or by altering the profile of an existing user. Users are required to renew the expired password before they can log in to the store successfully. [#23951]
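
    For illustration, the sketch below drives these security operations through the new KVStore.executeSync API (see API Changes below). The role, user, and table names are hypothetical, and the authoritative DDL grammar is in the Security Guide; "store" is assumed to be a KVStore handle opened by a user with sufficient privileges.

      // Hypothetical names; statements sketch the security DDL described above.
      StatementResult res = store.executeSync("CREATE ROLE report_readers");
      res = store.executeSync("GRANT READ_TABLE ON users TO report_readers");
      res = store.executeSync("GRANT report_readers TO USER alice");
      System.out.println(res.isSuccessful() + " " + res.getInfo());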

  2. Admin service configuration has been enhanced to better match the use of primary and secondary zones. It is now possible for an Admin service to be a primary or a secondary Admin. This Admin type is analogous to zone type. A primary Admin can serve as a master or replica of the Admin shard, and vote in master elections. Secondary Admins can only serve as replicas and do not vote in master elections. All Admins created in earlier releases are primary Admins. With this release new Admins are created with the same type as their containing zone.

    Admin services created in a secondary zone by a previous release will have the wrong type (primary). This mismatch will be reported as a violation by the verify configuration command. The condition can be remedied through the plan repair-topology command.

    In addition to the changes in Admin service type behavior, new rules have been put in place regarding Admin service deployment. In general, it is recommended that Admin services follow the same rules as data nodes; specifically, the number of Admin services in a zone should match the zone's replication factor. [#23985][#24182]

  3. The Command Line Interface (Admin CLI) now supports a read-only mode if the Admin shard does not have quorum or if the master Admin is unreachable. In this read-only mode, any commands that require the Admin to update persistent state, such as plan creation and execution, are disabled; however, most commands that provide status or configuration information will function. In addition to a notification in the CLI informing the user that the CLI is in this mode, the show admins command will also indicate that the CLI is connected read-only.

    Additional re-connect capabilities have been added to the Admin CLI to improve robustness of the CLI in the face of Admin node failures. [#23943]

API Changes

  1. The methods oracle.kv.table.TableAPI.execute and oracle.kv.table.TableAPI.executeSync have been deprecated in favor of the new APIs, oracle.kv.KVStore.execute and oracle.kv.KVStore.executeSync. The motivation for the change is the introduction of new DDL statements which manage objects that are beyond the scope of a single table, such as users and roles. Likewise, the classes oracle.kv.table.ExecutionFuture and oracle.kv.table.StatementResult are deprecated in favor of oracle.kv.ExecutionFuture and oracle.kv.StatementResult. [#23937]
  2. New methods have been added to oracle.kv.table.RecordValue to provide the ability to use JSON values for complex fields in a table. In the past, JSON could be used to specify the entire row, but not portions of a row. The following 6 JSON input methods for complex types in RecordValue are new: [#24069]
    RecordValue putRecordAsJson(String fieldName, String jsonInput, boolean exact);
    RecordValue putRecordAsJson(String fieldName, InputStream jsonInput, boolean exact);
    RecordValue putArrayAsJson(String fieldName, String jsonInput, boolean exact);
    RecordValue putArrayAsJson(String fieldName, InputStream jsonInput, boolean exact);
    RecordValue putMapAsJson(String fieldName, String jsonInput, boolean exact);
    RecordValue putMapAsJson(String fieldName, InputStream jsonInput, boolean exact);
    
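    As a sketch of the new methods, a single complex column can now be filled in from JSON. The "users" table, its "address" RECORD field, and the JSON content are hypothetical; "tableAPI" is assumed to be a TableAPI handle.

      // Hypothetical table with an "address" RECORD field.
      Table table = tableAPI.getTable("users");
      Row row = table.createRow();
      row.put("name", "alice");
      // exact=false: the JSON need not mention every field of the record.
      row.putRecordAsJson("address",
          "{\"street\": \"100 Main St\", \"city\": \"Springfield\"}",
          false);
      tableAPI.put(row, null, null);
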
  3. New methods have been added to better support DDL use by stateless applications. oracle.kv.ExecutionFuture.toByteArray() returns a serialized version of the future that can later be passed to oracle.kv.KVStore.getFuture(byte[]) to recreate a Future instance. [#24228]
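
    A sketch of the intended stateless usage follows; the DDL statement is hypothetical and "store" is assumed to be an open KVStore handle.

      // Hypothetical statement; start the operation and serialize its future.
      ExecutionFuture f =
          store.execute("CREATE INDEX cityIdx ON users (city)");
      byte[] handle = f.toByteArray();   // persist the handle externally

      // Later, possibly from a different process connected to the same store:
      ExecutionFuture f2 = store.getFuture(handle);
      StatementResult status = f2.updateStatus();
      System.out.println("done: " + status.isDone());
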
  4. There are changes to the way the results and status of DDL operations are reported in the API. Information about the execution of a DDL operation continues to be returned via oracle.kv.StatementResult. In past releases, both return values and execution status were returned via StatementResult.getInfo() and StatementResult.getInfoAsJson(). Now, results are returned via a new method, StatementResult.getResult(), while status is returned via StatementResult.getInfo() and getInfoAsJson(). [#24277]

Utility Changes

  1. plan deploy-admin now deploys an Admin with the same type (primary or secondary) as the zone in which it resides. Previously the Admin was always created as a primary.
  2. plan repair-topology will change an Admin's type so that the type matches the zone in which it resides.
  3. verify configuration will report a violation when an Admin's type (primary or secondary) does not match the type of the zone in which it resides.
  4. The put command in the Admin CLI now has a new -exact flag. When true, the input json string or file specified to the command must contain values for all columns in the table, and cannot contain extraneous fields.
  5. The Admin CLI now supports a new "show versions" command, which displays client and server version information. [#24005]
  6. The Admin CLI now provides the ability to use the semicolon character as a command terminator to enter multiple commands, or to enter commands on multiple lines.

    For example, the following command would have had to be typed in a single line:

      kv-> execute "create table users(name string, address string, primary key (name))"
    
    or
      kv-> execute "create table users \
        ->                 (name string, \
        ->                  address string, \
        ->                  primary key (name))"
    
    but can now be entered this way:
      kv-> execute "create table users
        ->                 (name string,
        ->                  address string,
        ->                  primary key (name))";
    
    In addition, multiple commands can be entered as below, using a semicolon as a terminator.
      kv-> show table -name users; get table -name users;
    
    [#24119]
  7. A new command line utility has been added that disables the services associated with a storage node. You can use it when starting a storage node whose services had configuration changes while the node was offline; disabling the services allows the configuration to be updated so that the services can then be started with the proper configuration.

    The new utility is invoked this way:

      java -jar kvstore.jar disable-services -root ROOT_DIRECTORY [-config CONFIG_FILE]
    
    [#23988]
  8. Improvements have been made to the output of the Admin CLI verify configuration command, and to both the Admin CLI and top-level versions of the ping command.

    [#23981]

  9. A new "verify" option has been added to the diagnostics command line utility. This option does a variety of health and configuration checks, such as:
    java -jar kvstore.jar diagnostics verify -help
    
    Usage: verify -checkLocal |
                  -checkMulti
    
    

Bug and Performance Fixes

  1. Changes have been made to transfer RN mastership on SN shutdown. When an SNA shuts down, it checks whether any of the RNs it manages is a master for its replication group. If an RN is a master, the SNA causes a master transfer before shutting down the RN. The implementation performs the transfers one at a time so that each transfer can take into account the results of the previous one, to avoid overloading the target SNs. [#22426]
  2. Modified the replication node and admin services to permit them to start up even if other members of a service's replication group were managed by storage nodes whose hostnames cannot be resolved via DNS. This change allows the store to continue to function, and in particular to support restarting services, if failures of storage nodes cause the nodes to have unresolvable DNS names. [#23120]
  3. Modified the 'topology validate' CLI command. In addition to the 'violations' and 'notes' that are currently displayed by that command, it now also notes when the topology contains any zones that are empty; that is, the zones that contain no SNs. [#23222]
  4. The 'verify configuration' CLI command performs a component-by-component comparison of the store's current state or configuration against what is reflected in the store's Admin database, and displays any inconsistencies. The command has been modified to also note when any empty zones exist in the store. With respect to the flags taken by the 'verify configuration' command, the changes related to empty zones apply to all arguments except the optional -sn argument, in which case the new behavior for identifying empty zones is not performed. [#23223]
  5. The CLI 'snapshot' commands currently allow one to collect a snapshot from each Admin and RN of every reachable SN in the store, or to remove one or all instances of previously collected snapshot data. These commands have been modified to take a '-zn' or '-znname' flag so that the command applies to all the SNs executing in the zone with the specified id or name. [#23224]
  6. A bug has been fixed that could cause the Admin service to hang if the Admin web console is enabled and displaying log file output in the logtail pane while an administrative command that updates the Admin service configuration, such as plan deploy-admin, is running. [#23907]
  7. Clarified an error message that would occur when a field name was used instead of elementof() in a CHECK expression involving a map or array. [#24055]
  8. Relaxed the DDL PRIMARY KEY expression, allowing a statement to redundantly specify primary key fields as SHARD keys as well. In a top-level (non-child) table the primary key is equivalent to the shard key unless otherwise specified. This change makes statements like the following legal even though the use of "SHARD" is redundant: CREATE TABLE mytable(id INTEGER, PRIMARY KEY(SHARD(id))). [#24105]
  9. Changes have been made to the request dispatcher to favor more rapid request failover on a node failure, reducing the possibility of a RequestTimeoutException. As part of this change, the default socket open timeouts have been reduced from 5 seconds to 3 seconds to permit a redispatch of requests within the request timeout period. [#24152]
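
    If the shorter default does not suit a particular deployment, an application can adjust the client-side socket open timeout through KVStoreConfig. A sketch follows; the store name and helper host are hypothetical, and the setSocketOpenTimeout setter is assumed to be available in this release.

      // Hypothetical store name and helper host:port.
      KVStoreConfig config = new KVStoreConfig("mystore", "host1:5000");
      config.setSocketOpenTimeout(5, TimeUnit.SECONDS);  // e.g. restore the old 5-second value
      KVStore store = KVStoreFactory.getStore(config);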

Changes in 12cR1.3.2.15

New Features

  1. A data definition language (DDL) has been defined that allows declarative creation and management of tables and indexes. The language reference is in Data Definition Language for Tables. The language is accessible via APIs in Java as well as the new C driver. It can also be executed using the new "execute" command in the administrative command line interface (Admin CLI).
  2. The Command Line Interface (Admin CLI) has a new "execute" command that can be used to execute data definition language (DDL) statements. The Java or C API is the preferred mechanism for defining tables, but if you wish to define and manage tables via the Admin CLI, the "execute" command is preferred over the previous "table" command.
  3. A new C driver has been created to operate on tables and indexes in Oracle NoSQL Database. It is a separately downloadable product on the same page as this distribution. It is source code and must be compiled, along with its dependent libraries. Complete instructions are in its distribution.
  4. The implementation of indexes on maps has been enhanced so that it is now possible to create an index on the key strings of a map as well as an index on the values of a map, making these indices much more useful for a variety of data models. These indexes result in multiple index entries for a given row, up to the number of map entries. Additional APIs were added to help use these indexes.
  5. When using the Admin CLI interactively, you can now use backslash to enter a command on multiple lines. For example, you can now type:
    kv-> show events -type stat
     or
    kv-> show events \
    > -type stat
    
  6. A new diagnostics command line utility is available to provide support for troubleshooting an Oracle NoSQL DB cluster. Currently, the "collect" functionality lets a user easily retrieve and package log files from each node in the cluster for further analysis. Additional functionality will be rolled out over time. The new utility is invoked this way:
    java -jar kvstore.jar diagnostics
    
    diagnostics-> help
    Oracle NoSQL Database Diagnostic Utility Commands:
    	setup
    	collect
    	exit
    	help
    
    diagnostics->
    

API Changes

  1. oracle.kv.table.TableAPI.execute(String) and oracle.kv.table.TableAPI.executeSync(String) were added to execute Data Definition Language (DDL) statements.
  2. New interfaces, oracle.kv.table.StatementResult and oracle.kv.table.ExecutionFuture were added to handle results of the new statement execution methods on TableAPI.
  3. oracle.kv.table.MapValue.putNull(String) and the constant oracle.kv.table.MapValue.ANONYMOUS were added in order to handle the new map indexes.
  4. oracle.kv.table.Index.createMapKeyFieldRange(String) and oracle.kv.table.Index.createMapValueFieldRange(String) were added to create FieldRange instances for the new map indexes.
  5. The method oracle.kv.table.TableAPI.execute(List<TableOperation>, WriteOptions) used to throw oracle.kv.OperationExecutionException. However, OperationExecutionException provides information that applies to the key/value API in the oracle.kv package rather than the Table API in oracle.kv.table. The method has been changed to throw a new exception, oracle.kv.table.TableOpExecutionException, which provides information suitable for the Table API.

    Note that this is an incompatible API change. Applications which invoked TableAPI.execute(List<TableOperation>, WriteOptions) must be modified to handle TableOpExecutionException instead of OperationExecutionException.
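
    A sketch of the updated pattern follows; the table, its fields, and the error handling are hypothetical, and "tableAPI" is assumed to be a TableAPI handle.

      // Hypothetical "users" table with "name" and "address" fields.
      Table table = tableAPI.getTable("users");
      TableOperationFactory factory = tableAPI.getTableOperationFactory();
      List<TableOperation> ops = new ArrayList<TableOperation>();
      Row row = table.createRow();
      row.put("name", "alice");
      ops.add(factory.createPut(row, null, true));
      try {
          tableAPI.execute(ops, null);
      } catch (TableOpExecutionException e) {
          // Previously this was oracle.kv.OperationExecutionException.
          System.err.println("operation failed: " + e.getMessage());
      }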

Bug and Performance Fixes

  1. The CLI verbose and hidden commands have been enhanced so they can be specified using on/off parameters in addition to the previous toggle type functionality. [#23245]
  2. Modified the Admin CLI help to not display deprecated commands by default. A new -includeDeprecated flag was added for the CLI help command. If this flag is used, the deprecated commands are listed along with other commands. [#23365]
  3. The java -jar kvstore.jar load utility did not provide proper feedback when there were errors in operation, and incorrectly returned "Load succeeded". Now it returns both a non-zero status code when an error has been found and an accurate status message. [#23681]
  4. Fixed a security weakness that let arbitrary authenticated clients modify the store's topology by specifying an altered topology when propagating topology changes from one RN to another. A signature-based topology integrity check is employed to prevent it. [#23709]
  5. A window existed where a secondary index could incorrectly lose entries if the store was expanded while updates to the underlying table were happening. This window has been closed. [#23724]
  6. Fixed an issue that required users to reauthenticate to the store in order for changes made by granting or revoking roles for the current user to take effect. A real-time session update mechanism was introduced that allows existing login sessions to reflect role changes immediately without reauthentication. [#23839]
  7. Fixed a bug that sometimes prevented re-execution of interrupted change-parameters plans, when Admin parameters were being changed. [#23880]
  8. Fixed a problem where a client performing a parallel scan against a secure store without having permission to read the store would get a timeout exception rather than having the scan return no values. This problem could be encountered when using the get kv -all command with the administrative CLI:
    kv-> get kv -all
    Error handling command get kv -all: Failed to iterate records :
    Parallel storeIterator Request Queue take timed out.
    (12.1.3.1.3) Timeout: 5000ms
    
    If the scan was performed via the API using the KVStore.storeIterator or storeKeysIterator method overloadings that supply a StoreIteratorConfig, the problem could cause the method call to throw a RequestTimeoutException. In all of these cases, the iteration now returns no elements. [#23881]
  9. Fixed an issue that would result in duplicate values returned from the Hadoop APIs when a complete primary key is specified. [#23958]
  10. Fixed a bug where iteration over array indexes when the result set was larger than the batch size (default 100) could result in incorrect results or a hang. [#23977].
  11. Fixed a slow memory leak in the handling of socket timeouts: The socket object associated with a failed socket connect operation continued to be referenced and could not be garbage collected. [#24039]

Changes in 12cR1.3.1.7

New Features

  1. Added a convenience method on the Table interface to construct a MultiRowOptions instance from a single list of table names and optional FieldRange. [#23807]
  2. Added FieldRange.getField() to return the FieldDef instance that was used to construct the FieldRange object. [#23908]

Bug and Performance Fixes

  1. Fixed a problem that caused index creation on an empty table to take as much as 10 minutes to complete. [#23826]
  2. Fixed a performance regression in the CLI delete command. This fix may result in some delete commands being slower because the problem was an optimization for some delete cases, but in the general case delete will have the same performance it had in release 3.0.5. [#23918]
  3. Removed unnecessary non-topology metadata broadcasts to replica rep nodes. Since only a master rep node can update the metadata for the shard, there is no point in attempting to update a replica. With this change, two new admin parameters have been added to control the metadata broadcast.
    Note that this fix only affects the broadcast of table and security metadata. The topology broadcast parameters and behavior remain unchanged. [#23362]

Changes in 12cR1.3.1.5

New Features

  1. Added support for role-based authorization for both the API and the administrative CLI. Authorization is built on the authentication mechanism introduced in release 3.0. Each authenticated user of the KVStore can be granted roles which determine which APIs and CLI commands the user can access. This release only supports built-in roles, including the "read" and "write" roles for data access control, and the "sysadmin" and "dbadmin" roles for system and database operation control, respectively. More details of this feature are described in the Oracle NoSQL Database Security Guide.

    The KVStore logging system has also been enhanced to log security-sensitive events. Two new logging levels named SEC_WARNING and SEC_INFO have been introduced. Messages logged at the SEC_WARNING level will produce critical events in the CLI. High-level security events like failed login attempts and unauthorized operations will be recorded as SEC_WARNING KVStore events. Execution of CLI commands which require the sysadmin or dbadmin roles will be recorded as SEC_INFO events. Log messages for all security-sensitive events have "timestamp KVAuditInfo" as the prefix for ease of grepping and filtering. [#23423]

  2. The makebootconfig command has been enhanced to do validity checking of all parameters. The -force flag has been added to let the user manually override the default validity checks if desired. [#23422]

Bug and Performance Fixes

  1. Fixed a bug that could cause the plan change-parameters command to fail in some cases, reporting a NullPointerException in the plan history. This could only come about in some situations where the parameter change requires restarting all nodes, but not all nodes are available. [#22673]
  2. Modified heap size calculations for replication nodes to reduce heap sizes in some cases to permit using compressed object references.

    The supported Java environments can use compressed object references when the heap size is smaller than a certain size, currently between 25 and 32 GB depending on the implementation. Because the space savings provided by compressed references are substantial, using a larger heap size typically results in less usable space unless the heap size is at least 50% larger than the largest size that supports compressed references.

    The system now automatically reduces the replication node heap size if the value of the memoryMB storage node parameter produces a heap size that is too large to support compressed references but not large enough to provide more usable space. The heap size is not reduced if the application specifies a javaMiscParams replication node parameter that explicitly specifies either non-compressed references or the maximum heap size. Note that, if the memoryMB parameter is not specified explicitly, or is set to zero, the system sets it to the amount of physical memory available on the host, and will reduce the heap size in the same way as when memoryMB is set explicitly to a non-zero value. [#22695]

  3. The Version.getVersion method has been deprecated and removed from the public documentation. It will probably be removed entirely in a future release. The getVersion method returns an internal version number which is only meaningful relative to a particular shard. For that reason, it is not safe to compare the method results for arbitrary versions, and so it has been removed. Applications can continue to use the equals method to compare versions for identity, but there is currently no official way to compare versions to determine which is newer. We could add that capability in the future if applications have a need for it. [#23526]
  4. The plan add-index command has been changed to check for index creation status more promptly, which lets the command finish sooner. [#23568]
  5. Added additional type validation when JSON input is used to construct Row and PrimaryKey instances. Previously it was possible to provide incorrect types that were silently cast to the correct type based on the table schema, possibly resulting in loss of information. [#23765].
  6. Fixed a problem that could be encountered during the configuration of a NoSQL Database cluster hosted within an IBM J9 JVM environment. If the Storage Nodes were deployed without providing an explicit value for the optional memory_mb parameter, the available memory calculation for the Replication Node hosted on that SN was incorrect, which might cause an "OutOfMemoryError" for that replication node when the plan deploy-topology command was run. This could only happen on the J9 JVM, and has been fixed. [#23737].
  7. Both Hadoop and Oracle External Tables employ independent processes to read data from a store in parallel. A change has been made to the way work is distributed among these processes. The new algorithm attempts to distribute work in a way that minimizes contention on a single replication node, while maximizing the amount of parallelism across the store. The number of shards, the replication factor, and the requested consistency, are input into the calculation. This change may also affect the number of Hadoop processes, or splits which are generated. In addition to the change in work distribution, both Hadoop and external table processes may use the Parallel Scan APIs, allowing multiple threads to operate in a single process. [#23749]
    Note that the public method oracle.kv.hadoop.KVInputFormatBase.setDirection(Direction) has been deprecated since only Direction.UNORDERED is supported.
  8. Fixed a problem where, if an index was created on an array, or field in an array, and the array itself is null in a Row, a ClassCastException could be thrown during index key extraction on a server node. [#23757].
  9. Modified the Ant build script so that the default target, which builds the JAR files, works for the Community Edition. You can now use ant to build new versions of the JAR files after making modifications to the source code. [#23764]
  10. If a client program made an unexpected, non-secure call to the trusted login service of a Storage Node, a flaw in the service implementation caused the service to hang, causing subsequent user logins to fail. This bug has been fixed. [#23786]
  11. Fixed an issue with a runaway number of client threads when multiple index iterators are opened and closed in quick succession. Specifically, the iterator's close() method will now wait for its threads to exit before returning. This is the same behavior as the parallel scan iterators, even though the documentation stated that close() does not wait; the documentation has been corrected. [#23797]
  12. Fixed a bug which could result in a deadlock situation when an index iterator is closed. As part of this fix the Javadoc for the oracle.kv.KVStore and oracle.kv.table.TableAPI interfaces have been updated to make it clear that the iterators returned by those interfaces can only be used safely by one thread at a time unless synchronized externally. [#23799]
  13. In the course of upgrading a NoSQL cluster from R2.x to R3.x, when the first Storage Node is upgraded, the Replication Node it hosts will transition into "STARTING" status and will wait until more nodes are upgraded. If the user ran the "verify upgrade" command at this point, they would see the following misleading error:
       kv-> verify upgrade
       Unknown Exception: class oracle.kv.impl.rep.admin.RepNodeAdminFaultException
       RepNode is not RUNNING, current status is STARTING (12.1.3.1.2)
       oracle.kv.impl.rep.admin.IllegalRepNodeServiceStateException:
       RepNode is not RUNNING, current status is STARTING
      
    This has been fixed. [#23859]
  14. The following parameters of the Berkeley DB JE storage engine used within NoSQL Database have been changed to reflect the updated version of JE in this release:
  15. Fixed a performance issue where the stop utility command resulted in an incomplete checkpoint if the checkpoint took more than 10 seconds. The incomplete checkpoint could result in the replication node taking longer to restart than it would with a complete checkpoint. The stop command has also been modified to initiate checkpoints in parallel across all the replication nodes hosted by the storage node being shutdown, thus making the command run faster.
  16. Fixed a problem that caused upgrading from earlier releases to a 3.0 release to fail in some cases, with the following stack trace in the Admin log. [#23879]
    com.sleepycat.persist.evolve.IncompatibleClassException: (JE 6.2.5) Changes to the fields or superclass were detected when evolving class:
    oracle.kv.impl.admin.plan.task.StopAdmin version: 1 to class: oracle.kv.impl.admin.plan.task.StopAdmin version:
    1 Error: A new higher version number must be assigned
    ---
    (Note that when upgrading an application in a replicated environment, this exception may indicate that the Master was mistakenly upgraded
    before this Replica could be upgraded, and the solution is to upgrade this Replica.)
        at com.sleepycat.persist.impl.PersistCatalog.init(PersistCatalog.java:512)
        at com.sleepycat.persist.impl.PersistCatalog.initAndRetry(PersistCatalog.java:268)
        at com.sleepycat.persist.impl.PersistCatalog.<init>(PersistCatalog.java:228)
        at com.sleepycat.persist.impl.Store.<init>(Store.java:202)
        at com.sleepycat.persist.EntityStore.<init>(EntityStore.java:190)
        at oracle.kv.impl.admin.Admin.initEstore(Admin.java:2126)
    

Changes in 12cR1.3.0.14

New Features

  1. When using the Table API, it is now possible to create indices on fields in records, maps, and arrays. [#23091]
  2. The integration of Oracle NoSQL Database with Oracle Coherence has been updated to support Coherence 12c (12.1.2). As of Coherence 12.1.2, cache configuration parameters are specified within a custom XML namespace and are processed by the NoSQL Database namespace handler at runtime. Though it's possible to use this updated module with Coherence version 3.7.1, we highly recommend that you upgrade Coherence to the latest version. Please see the javadoc for oracle.kv.coherence package for information on how to configure a NoSQL Database backed cache with Coherence 12.1.2, or the earlier Coherence 3.7.1. [#23350]
  3. It is now possible to use Oracle External Tables to access Oracle NoSQL Database tables created with the Table API. In addition to the usual required properties, users need to specify the table name in the external table configuration file. Please see KVHOME/examples/externaltables/cookbook.html for details. [#23605]
  4. Added a new "size" option to the Admin CLI table command, to estimate the in-memory size of the given table. The results of the size command can be used as inputs when planning resource requirements for a store. [#23444]
  5. Oracle Enterprise Manager can now monitor instances of Oracle NoSQL Database. For more information, please see Integrating Oracle Enterprise Manager with Oracle NoSQL Database in the Admin Guide.
  6. It is now possible to access data written to an Oracle NoSQL Database via the Table API from within a Hadoop MapReduce job. In addition to the usual required properties, users now need to specify the table name in the command line used to initiate the MapReduce job. Please refer to the javadoc of the class, KVHOME/examples/hadoop/table/CountTableRows.java for additional details. [#23714]
  7. It is now possible to execute Hive queries against data written to an Oracle NoSQL Database via the Table API. To employ this feature, one must create a Hive external table that is 'STORED BY' the new oracle.kv.hadoop.hive.table.TableStorageHandler class, with fields similar to those with which the Oracle NoSQL Database table was created; that is, the same number and types. Additionally, the Hive TBLPROPERTIES must specify the store name, the helper host and ports, and the table name (which does not have to be equal to the Hive table name). [#23714]

Bug and Performance Fixes

  1. Added more sanity checking and improved error messages for the securityconfig add/remove-security commands. [#23311]
  2. Fixed a bug where specifying an invalid value for the Storage Node parameter "mgmtClass" could cause a crash in the Admin service. [#23227]
  3. Modified the oracle.kv.RequestLimitConfig constructor to improve bounds checking and correct problems with integer overflow. [#23244]
  4. Fixed a bug that sometimes caused Admin parameters not to take effect until the Admin's hosting SNA was rebooted. [#23429]
  5. Added a new attribute (String replicationState) in the RepNode MBean presented via JMX, to indicate the state of the RepNode's membership in its replication group. Typically the value will show "MASTER" or "REPLICA", but it can also report "DETACHED" or "UNKNOWN". This same value is reported via SNMP in the repNodeReplicationState object, as defined in nosql.mib. [#23459]
  6. Modified the Durability, RequestLimitConfig, and Version classes to implement Serializable to permit serializing instances of oracle.kv.KVStoreConfig, which was already serializable. [#23474]
  7. Fixed a bug where an operation using the Table API might see a SecondaryIntegrityException if there is a replication node failover while a secondary index is being populated. [#23520]
  8. Prior to this release, it was not possible to use the Load utility to create a new store from snapshot files that had been taken against a store with security enabled, or a store that was using the Table API. This has been fixed. The -security, -username, -load-admin, and -force flags were added to the load utility for use in this case. See the Administrator's Guide for more information. [#23528]
  9. Fixed a bug where invoking the "history" command in the Admin CLI with the "-last" option and a value that is greater than the total number of commands executed in the store could result in a java.lang.ArrayIndexOutOfBoundsException. [#23579]
  10. Fixed a bug where an Admin service might exit with the following exception. Before the fix, administrative functionality would seamlessly fail over to another admin service, but the process exit was unnecessary and would show up as an alertable event.
    com.sleepycat.je.rep.UnknownMasterException:
    Transaction -XXX cannot execute write operations because this node is no longer a master
    
    [#23580]
  11. Fixed a bug in the Table API so that enum fields may have names that begin with an underscore.
  12. In rare cases, when a store has been deployed with Storage Nodes with capacity > 1 and there are concurrent delete operations, table iteration operations, and a transfer of mastership roles in a shard, it could be possible for the iteration operation to incorrectly skip a value that should have been returned by the iterator. This has been fixed. [#23608]
  13. Fixed a GC configuration issue that could cause the CMS phase of the Java GC to run repeatedly, consuming CPU resources on an otherwise idle RepNode. The fix changed the default JVM CMSInitiatingOccupancyFraction from 77 to 80. Our testing indicates that this is a better configuration under a broad range of application access patterns. However, if you need to override this new configuration in some unusual circumstance, you can use the Admin's change-policy command and, for an existing store, the plan change-parameters command, as shown below:
    change-policy -params "javaMiscParams=-XX:CMSInitiatingOccupancyFraction=77"
    plan change-parameters -all-rns -params "javaMiscParams=-XX:CMSInitiatingOccupancyFraction=77"
    
    [#23652]
  14. Fixed the following bugs which could occur in a store with security enabled:
  15. Fixed a bug where application requests might unnecessarily time out for a brief period directly after an elasticity change. [#23705]

Packaging and Documentation Changes:

  1. The version of the Oracle Coherence library bundled with Oracle NoSQL Database has been upgraded to the more recent Coherence 12.1.2. This requires a change in the way cache configuration parameters are specified for the NoSQL Database backed cache.
  2. New documentation has been added on how to use the Large Object API. See the index page, and "Oracle NoSQL Database Large Object API".

Changes in 12cR1.3.0.9

New Features

  1. Modified the administrative CLI to save its command line history to a file so that it is available after restart. If you want to disable this feature, the following Java property should be set while running runadmin:
    java -Doracle.kv.shell.jline.disable=true -jar KVHOME/kvstore.jar runadmin -host <hostname> -port <portname>
    

    The CLI attempts to save the history in a KVHOME/.jlineoracle.kv.impl.admin.client.CommandShell.history file, which is created and opened automatically. The default history saved is 500 lines. If the history file cannot be opened, it will fail silently and the CLI will run without saved history.

    The default history file path can be overridden by setting the oracle.kv.shell.history.file=path Java property.

    The default number of lines to save to the file can be modified by setting the oracle.kv.shell.history.size=int_value Java property. [#22690]

  2. Modified the admin CLI aggregate command to provide subcommands for tables and key/value entries. The aggregate table subcommand performs simple data aggregation operations on numeric fields of a table, while the aggregate kv subcommand performs aggregation operations on keys. [#23258]

Bug and Performance Fixes

  1. Modified the implementation of index iterators to use weak references so that the garbage collector can remove the resources associated with unused index iterators. [#23306]
  2. Improved the handling of metadata propagation and other internal operations. [#23355] [#23368] [#23385]
  3. Modified the external tables integration to distribute the concurrent processing load more evenly across processes. [#23363]
  4. Fixed a problem where a failure during a partition migration performed during a topology redistribution for a store that has indexes resulted in SecondaryIntegrityExceptions being thrown when the migration was restarted. [#23392]
  5. Removed the FieldRange.setEndDate method, in favor of the existing setEnd method. [#23399]
  6. Modified schema evolution to prevent changes that could resurrect an old field using a different type. Such a change would cause old data to become unreadable by the current table. This fix prevents resurrection of a field name unless it exactly matches the previous definition. [#23403]
  7. Fixed a problem with handling network timeouts that could result in FaultException being thrown from KVStore operations instead of RequestTimeoutException when a timeout occurs. Here is a sample stack trace:
    Caused by: oracle.kv.FaultException: Problem during unmarshalling (12.1.2.1.24)
    Fault class name: java.rmi.UnmarshalException
        at oracle.kv.impl.api.RequestDispatcherImpl.faultIfWrite(RequestDispatcherImpl.java:968)
        at oracle.kv.impl.api.RequestDispatcherImpl.handleRemoteException(RequestDispatcherImpl.java:883)
        at oracle.kv.impl.api.RequestDispatcherImpl.handleDispatchException(RequestDispatcherImpl.java:736)
        at oracle.kv.impl.api.RequestDispatcherImpl.execute(RequestDispatcherImpl.java:572)
        at oracle.kv.impl.api.RequestDispatcherImpl.execute(RequestDispatcherImpl.java:1031)
        at oracle.kv.impl.api.KVStoreImpl.executeRequest(KVStoreImpl.java:1251)
        at oracle.kv.impl.api.KVStoreImpl.putIfVersion(KVStoreImpl.java:990)
        at oracle.kv.impl.api.KVStoreImpl.putIfVersion(KVStoreImpl.java:968)
        [...]
        ... 41 more
    Caused by: java.rmi.UnmarshalException: Error unmarshaling return header; nested exception is:
        java.net.SocketException: Socket closed
        at sun.rmi.transport.StreamRemoteCall.executeCall(StreamRemoteCall.java:228)
        at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:161)
        at java.rmi.server.RemoteObjectInvocationHandler.invokeRemoteMethod(RemoteObjectInvocationHandler.java:194)
        at java.rmi.server.RemoteObjectInvocationHandler.invoke(RemoteObjectInvocationHandler.java:148)
        at com.sun.proxy.$Proxy21.execute(Unknown Source)
        at oracle.kv.impl.api.RequestHandlerAPI.execute(RequestHandlerAPI.java:94)
        at oracle.kv.impl.api.RequestDispatcherImpl.execute(RequestDispatcherImpl.java:560)
        ... 46 more
    Caused by: java.net.SocketException: Socket closed
        at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:121)
        at java.net.SocketOutputStream.write(SocketOutputStream.java:159)
        at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
        at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
        at java.io.ObjectOutputStream$BlockDataOutputStream.flush(ObjectOutputStream.java:1822)
        at java.io.ObjectOutputStream.flush(ObjectOutputStream.java:718)
        at sun.rmi.transport.StreamRemoteCall.releaseOutputStream(StreamRemoteCall.java:114)
        at sun.rmi.transport.StreamRemoteCall.executeCall(StreamRemoteCall.java:212)
        ... 52 more
    
    [#23411]
  8. Modified table iteration to optimize performance when all matching entries fall within a single shard. [#23412]
  9. Fixed an issue where the index scan iterator would fail to return records from a shard if there was a record that compared equal to a record from another shard. This situation occurred when there was more than one shard in a store and there were equivalent index entries for a given index in both shards. The symptom was index iteration returning fewer rows than expected. [#23421]
  10. Added several interfaces to the table package. The related javadoc was also updated to indicate that the lists and maps returned from these and similar interfaces are immutable. [#23433]
  11. The data Command Line Interface (CLI) has a method to input table rows from a file containing a JSON representation. This input method had an issue where a blank line could cause an infinite loop in the input path. This has been fixed; blank lines, as well as comment lines (those whose first non-whitespace character is "#"), are now silently skipped. [#23449]
  12. Fixed handling of null values in indexed fields and in IndexKey. Previously, a null value in an indexed field could cause a server side exception. During a put, null values in indexed fields will result in no index entries for indexes in which that field participates. Further, null values are not allowed in IndexKey instances. IllegalArgumentException is thrown if an attempt is made to set a null value in an IndexKey. [#23588]
  13. Modified the Admin Service to listen on all interfaces on a host. This change permits deployment of KVStore in heterogeneous network environments, where a hostname may be resolved to different IP addresses to make the best possible use of the available network hardware. [#23524]

Changes in 12cR1.3.0.5

New Features

  1. A new client interface has been added that includes a set of datatypes and a tabular data model using those types. The tabular data model is used to provide support for secondary indexes, which are defined on fields in a table. The model is discussed in the Getting Started Guide for Tables.

    Tables and indexes are defined using the administrative CLI and accessed via programmatic API. The data CLI has been enhanced to perform operations on tables and indexes as well. The API is documented in the Oracle NoSQL Database javadoc, and is primarily in the oracle.kv.table package.

    It is possible to define tables that overlay data created with NoSQL DB Release 2 if that data was created using a conforming Avro schema. This overlay is required in order to create secondary indexes on conforming Release 2 data.

    The existing key/value interface remains available.

  2. It is now possible to define secondary indexes for records. See the previous changelog entry about tables. Index entries for a given record have transactional consistency with their corresponding primary records. Index iteration operations are part of the new table API. Index scan operations allow applications to iterate over raw indexes in 3 ways -- forward order, reverse order, and unordered. It is possible to define exact match and range scans in indexes. Indexes can be on single fields or defined as composite indexes on multiple fields in a table.
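
    A sketch of index iteration through the new API follows; the table, index, and field names are hypothetical, and "tableAPI" is assumed to be a TableAPI handle.

      // Hypothetical "users" table with an index "cityIdx" on its "city" field.
      Table table = tableAPI.getTable("users");
      Index index = table.getIndex("cityIdx");
      IndexKey ikey = index.createIndexKey();
      ikey.put("city", "Springfield");
      TableIterator<Row> iter = tableAPI.tableIterator(ikey, null, null);
      try {
          while (iter.hasNext()) {
              System.out.println(iter.next().toJsonString(true));
          }
      } finally {
          iter.close();
      }
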
  3. Support for username/password authentication and secure network communications has been added. Existing applications that do not require this feature are not impacted except for a change to makebootconfig, which adds a new required argument (-store-security). Users that wish to use the new capabilities should be aware of the following areas of change:

    Users should also familiarize themselves with security property files, which are required when using a KVStore command-line utility program against a secure store, and which may also be useful when running an application against a secure store.

    This feature is described in much greater depth in the Oracle NoSQL Database Security Guide, as well as in the Administrators Guide and product Javadoc.

  4. The administrative CLI has been modified to use new terminology to refer to data centers. Data centers are now called zones. The new terminology is meant to clarify that these node groupings may not always coincide with physical data centers. A zone is a collection of nodes that have good network connectivity with each other and have some level of physical separation from nodes in other zones. That physical separation may mean that different zones are located in different physical data center buildings, but could also represent different floors, rooms, pods, or racks, depending on the particular deployment.

    Commands that contained the word "datacenter" have been deprecated, and are replaced with commands using the word "zone". The previous commands will continue to work in this release. New commands are:

    Command flags that specify a zone have been changed to -zn, for a zone ID, and -znname, for a zone name. The earlier -dc and -dcname flags have been deprecated but will continue to work in this release. In addition, zone IDs can now be specified using the "zn" prefix, with the earlier "dc" prefix still currently supported.

    The administrative GUI has also been modified to use the new Zone terminology. [#22878]

  5. There are now two types of zones. Primary zones contain electable nodes, which can serve as masters or replicas, and vote in master elections. All zones (or data centers) created in earlier releases are primary zones, and new zones are created as primary zones by default. Secondary zones contain nodes of the new secondary node type, which can only serve as replicas and do not vote in master elections. Secondary zones can be used to make a copy of the data available at a distant location, or to maintain an extra copy of the data to increase redundancy or read capacity. [#22483]
  6. The show plan command now provides an estimated migration completion time. For example:
    Plan Deploy Topo (12)
    State:                 RUNNING
    Attempt number:        1
    Started:               2014-01-14 17:35:09 UTC
    Ended:                 2014-01-14 17:35:27 UTC
    Total tasks:           27
     Successful:           12
     Incomplete:           15
    Incomplete tasks
       3 partition migrations queued
       1 partition migrations running
       11 partition migrations succeeded, avg migration time = 550164 ms.
    Estimated completion:  2014-01-14 19:57:37 UTC
    
    [#22183]
  7. A new read consistency option has been added for this release. oracle.kv.Consistency.NONE_REQUIRED_NO_MASTER can now be used to specify that read operations must always be serviced by a replica, never the master. For read-heavy applications (e.g., analytics), it may be desirable to isolate read requests so that they are performed only on replicas, never the master, thus reducing the load on the master. The preferred mechanism for achieving this sort of read isolation is the new secondary zone feature, which users are encouraged to employ for this purpose. But for cases where the use of secondary zones is not desired or is impractical, oracle.kv.Consistency.NONE_REQUIRED_NO_MASTER can be used to achieve a similar effect without the additional resources that secondary zones may require. [#22338]
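
    A sketch of a read directed away from the master follows; the key layout is hypothetical, and "store" is assumed to be an open KVStore handle.

      // Hypothetical key.
      Key key = Key.createKey("user", "alice");
      ValueVersion vv = store.get(key, Consistency.NONE_REQUIRED_NO_MASTER,
                                  0, null);   // default request timeout
      if (vv != null) {
          System.out.println(vv.getValue());
      }
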
  8. The following methods have been added:

    These methods make it possible to require that read operations only be performed on nodes located in the specified zones.

  9. The show plans command has been changed so that a range of plan history can be specified. With no arguments, show plans now displays only the ten most recently created plans, but new arguments can be used to select ranges by creation time and by plan id. Issue the command "show plans -help" to see the complete set of options.
  10. The makebootconfig utility has a new optional -runadmin command-line argument, which allows the SNA to force the start of a bootstrap admin even if the value of -admin is set to 0.

    The -port option of plan deploy-admin within the admin CLI has been changed so that it controls the start of the admin web service. No admin web service is started if -port is set to 0 when deploying.

    Users can also change an admin's HTTP port after deployment via the admin CLI plan change-param command, and thereby change whether the admin runs a web server. [#22344]

  11. The plan change-param command has been changed to allow changing the parameters for a single admin service. [#22244]
  12. NoSQL topology information is stored both in the Admin services and on Storage Nodes, and can become inconsistent if topology changing plans such as deploy-topology and migrate-sn are canceled before completion. Inconsistencies can be repaired by redeploying the target topology. In this release, a "plan repair-topology" command is also provided as an additional way of repairing topology inconsistencies. The verify configuration command now generates recommendations for when it may be beneficial to use repair-topology. [#22753]

Bug and Performance Fixes

  1. The makebootconfig command now prints a message when it declines to overwrite existing configuration files. [#23012]
  2. The "plan remove-admin" now permits removal of an Admin that is hosted by Storage Node that is not running. [#23061]
  3. Fixed a bug that sometimes caused a duplication of the admin section in a Storage Node's config.xml file. As a result, the "plan change-parameters" command, when applied to an Admin service with this configuration irregularity, could unexpectedly have no effect. The bug could be provoked by attempting to deploy an Admin that is already deployed; but it could also happen when re-executing a failed "plan migrate-storagenode" command. [#23152]
  4. Fixed a problem that caused the topology redistribute command to ignore storage directory settings when creating new replication nodes. [#23161]
  5. Previously, when there was no activity during a RepNode's metrics-gathering period (the statsInterval), the previous period's metric values would be reported via JMX and SNMP. This behavior has changed so that the metrics are updated at every interval. [#22842][#22537]
  6. NoSQL DB automatically adjusts mastership identity so that master nodes are distributed across a store for optimal performance. Fixed a problem that prevented Master Balancing from being performed across multiple zones. [#22857]
  7. Modified the LOB implementation to repeat calls to InputStream.skip as needed to position the input stream to the start location, so long as the calls return non-zero values. An IllegalArgumentException will be thrown if the calls do not advance the stream to the required start location.

Utility Changes:

  1. The administrative and data command line interfaces (CLI) have been merged into a single program. The usage of the merged CLI is compatible with most old usage but has additional options that allow it to work for administrative operations, data operations, or both. This change requires the use of kvstore.jar for data operations where in previous releases, the data CLI only required kvcli.jar, which depended on kvclient.jar.
  2. The CLI has been enhanced with commands necessary to manage tables, indexes, security information, and zones.

Documentation Changes:

  1. With the introduction of the tabular data model and secondary indexes, a new Getting Started with the Table API guide has been added.
  2. With the introduction of the new security features, a new Security Guide has been added.

Packaging Changes:

  1. The versions of the Avro and Jackson libraries bundled with Oracle NoSQL database have been upgraded to the more recent Avro 1.7.6 and Jackson 1.9.3. These versions are compatible with the previous API versions.

Changes in 12cR1.2.1.57

New Features

  1. The new method KVStore.appendLOB() now permits appending to an existing LOB (Large Object). As part of this change, the method PartialLOBException.isPartiallyDeleted() has been deprecated in favor of the new method: PartialLOBException.getPartialState(). Please consult the javadoc associated with these new methods, as well as the updated doc for the interface KVLargeObject, for a detailed description of this new functionality.

    This release is backwards compatible with LOBs created in previous releases, with one exception: Only LOBs created in this, or a later, release support the append operation. Attempts to use the append operation on LOBs created in previous releases will result in the method throwing an UnsupportedOperationException.

    LOBs created in this release cannot be read or deleted by clients using earlier releases. Such operations will typically fail with a ConcurrentModificationException. Please ensure that all clients are updated to this release before creating new LOBs. [#22876]
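
    A sketch of the new append operation follows; the LOB key and the input source are hypothetical, "store" is assumed to be an open KVStore handle, and the LOB is assumed to already exist.

      // Hypothetical key (the minor path must end with the configured LOB suffix).
      Key lobKey = Key.createKey("docs", "report.lob");
      InputStream in = new FileInputStream("extra-data.bin");
      try {
          store.appendLOB(lobKey, in, Durability.COMMIT_WRITE_NO_SYNC,
                          5, TimeUnit.SECONDS);
      } finally {
          in.close();
      }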

  2. GC log files for the Admin and RepNode services are now generated by default and placed in the KVROOT/<storename>/log directory (the standard location for all NoSQL related logging information). This default behavior only applies when using JDK release 1.7 or later, since GC log rotation is only supported in the more recent JDKs. The logging has minimal resource overhead. Having these log files readily available conforms to deployment best practices for production Java applications and makes it simpler to diagnose GC issues should the need arise. [#22858]

Bug and Performance Fixes

  1. The heap requirement of the Admin service, when operating on a store that has undergone numerous changes, has been reduced. [#21143]
  2. Fixed a bug in the Admin CLI "show plan -id <id>" command, which resulted in the omission of information about partition migration tasks from the plan history report. The command now correctly includes information about partition migrations. [#22611]
  3. Reduced internal timeout values associated with the network connection between a master and a replica, to permit faster master failover upon encountering a network hardware failure. [#22861]
  4. An attempt to resume a failed put operation on a LOB larger than 3968K bytes could result in an incorrect ConcurrentModificationException in some circumstances. The bug has been fixed in this release. [#22876]
  5. Changed the way plans are represented in the Admin's memory. Previously, there was no limit on the potential size of the in-memory representation of currently active and historical plans. With this fix, only active plans are kept in memory. [#22963]
  6. Eliminated deadlocks in plan management in the Admin. [#22992]
  7. A bug in the argument checking for the StoreIteratorConfig setter methods has been fixed. [#23010]
  8. The makebootconfig command now prints a message when it declines to overwrite existing configuration files. [#23012]
  9. The Replication Node configuration has been tuned to reduce CPU utilization when the Replication Node's cache is smaller than required, and cache eviction is taking place. [#23026]
  10. The remove-admin command now permits removal of an Admin that is hosted by a Storage Node that is not running. [#23061]
  11. The show plans command could sometimes cause a crash in the Admin CLI because it would consume too much memory. This has been fixed. [#23105]

Changes in 12cR1.2.1.54

New Features

  1. Oracle NoSQL Database now offers a client-only package. The Oracle NoSQL Database Client Software Library is licensed pursuant to the Apache 2.0 License (Apache 2.0). The Apache License and third party notices for the NoSQL DB Client Software Library may be viewed at this location or in the downloaded software.
  2. New overloadings of the KVStore.storeIterator() and KVStore.storeKeysIterator() methods implement Parallel Scans. The existing storeIterator() methods scan all shards and Replication Nodes in serial order; the new Parallel Scan methods allow the programmer to specify a number of client-side threads that are used to scan Replication Nodes in parallel. A usage sketch appears below. [#22146]
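
The following is a minimal Parallel Scan sketch. The store name and helper host are placeholders, and the particular nine-argument storeIterator() overload and the StoreIteratorConfig.setMaxConcurrentRequests() setter shown here should be checked against the javadoc for this release.

    import java.util.Iterator;

    import oracle.kv.Consistency;
    import oracle.kv.Direction;
    import oracle.kv.KVStore;
    import oracle.kv.KVStoreConfig;
    import oracle.kv.KVStoreFactory;
    import oracle.kv.KeyValueVersion;
    import oracle.kv.StoreIteratorConfig;

    public class ParallelScanSketch {
        public static void main(String[] args) {
            /* The store name and helper host are placeholders. */
            final KVStore store = KVStoreFactory.getStore(
                new KVStoreConfig("kvstore", "localhost:5000"));

            /* Allow up to 8 concurrent client-side requests for the scan. */
            final StoreIteratorConfig config = new StoreIteratorConfig();
            config.setMaxConcurrentRequests(8);

            /* Zero/null arguments request the defaults for batch size,
             * parent key, sub-range, depth, and timeout. */
            final Iterator<KeyValueVersion> it = store.storeIterator(
                Direction.UNORDERED, 0, null, null, null,
                Consistency.NONE_REQUIRED, 0, null, config);

            long count = 0;
            while (it.hasNext()) {
                it.next();
                count++;
            }
            System.out.println("Scanned " + count + " records");
            store.close();
        }
    }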

Bug and Performance Fixes

  1. Improved error messages in the Data Command Line Interface (kvshell). For example, a put command with invalid inputs might have returned this error message in the past:
    kvshell-> put -key /test -value ./emp.insert -file -json Employee
      Could not create JSON from input:
      Unable to serialize JsonNode
    
    but will now produce this more useful response:
    kvshell-> put -key /test -value ./emp.insert -file -json Employee
    Exception handling command put -key /test -value ./emp.insert -file -json Employee:
      Could not create JSON from input:
      Expected Avro type STRING but got JSON value: null in field Address of Employee
    
    [#22791]
  2. Fixed a bug when using the plan deploy-admin command. In some cases, if an Admin service encountered an error at start up, the process would become unresponsive. The correct behavior is for the process to shut down and be restarted by its owning Storage Node. [#22908]

Changes in 12cR1.2.1.25

Bug Fixes

  1. If a Storage Node Agent process received a master balancing related remote request while shutting down, it could in rare instances throw an exception that would disable the master balancing function in the Storage Node Agent that initiated the request. This problem can be identified via the following (or similar) output in the log of the Storage Node Agent that initiated the request:
    
    2014-03-28 12:13:34.544 UTC SEVERE [sn2] MasterRebalanceThread thread exiting due to exception.
    null (12.1.2.1.24) java.lang.NullPointerException
        at oracle.kv.impl.sna.StorageNodeAgentImpl$27.execute(StorageNodeAgentImpl.java:838)
        at oracle.kv.impl.sna.StorageNodeAgentImpl$27.execute(StorageNodeAgentImpl.java:831)
        at oracle.kv.impl.fault.ProcessFaultHandler.execute(ProcessFaultHandler.java:119)
        at oracle.kv.impl.sna.StorageNodeAgentImpl.getMDInfo(StorageNodeAgentImpl.java:829)
        at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        ...
        at com.sun.proxy.$Proxy1.getMDInfo(Unknown Source)
        at oracle.kv.impl.sna.StorageNodeAgentAPI.getMDInfo(StorageNodeAgentAPI.java:492)
        at oracle.kv.impl.sna.masterBalance.RebalanceThread.orderMSCNs(RebalanceThread.java:580)
        at oracle.kv.impl.sna.masterBalance.RebalanceThread.candidateTransfers(RebalanceThread.java:485)
        at oracle.kv.impl.sna.masterBalance.RebalanceThread.run(RebalanceThread.java:301)
    2014-03-28 12:13:34.546 UTC INFO [sn2] Master balance manager shutdown
    2014-03-28 12:13:34.546 UTC INFO [sn2] MasterRebalanceThread thread exited.
    
    
    [#23419]

Changes in 12cR1.2.1.24

Bug and Performance Fixes

  1. Under certain circumstances, a replication node which was on the verge of shutting down or in the midst of transitioning from master to replica state could experience this failure while cleaning up outstanding requests. Since the node would automatically restart, and the operation would be retried, the failure was transparent to the application, but could cause an unnecessary node failover. This has been fixed.
    java.lang.IllegalStateException: Transaction 30 detected open cursors while aborting
        at com.sleepycat.je.txn.Txn.abortInternal(Txn.java:1190)
        at com.sleepycat.je.txn.Txn.abort(Txn.java:1100)
        at com.sleepycat.je.txn.Txn.abort(Txn.java:1073)
        at com.sleepycat.je.Transaction.abort(Transaction.java:207)
        at oracle.kv.impl.util.TxnUtil.abort(TxnUtil.java:80)
        at oracle.kv.impl.api.RequestHandlerImpl.executeInternal(RequestHandlerImpl.java:469)
        at oracle.kv.impl.api.RequestHandlerImpl.access$300(RequestHandlerImpl.java:122)
        at oracle.kv.impl.api.RequestHandlerImpl$2.execute(RequestHandlerImpl.java:301)
        at oracle.kv.impl.api.RequestHandlerImpl$2.execute(RequestHandlerImpl.java:290)
        at oracle.kv.impl.fault.ProcessFaultHandler.execute(ProcessFaultHandler.java:135)
    
    [#22152]
  2. In past releases of NoSQL DB, a replication node which transitioned from master to replica state would have to close and reopen its database environment as part of the change in status. This transition has now been streamlined so that in the majority of cases, the database environment is not perturbed, the transition requires fewer resources, and the node is more available. [#22627]
  3. The plan deploy-topology command has additional safeguards to increase the reliability of the topology rebalance and redistribute plans. When moving a replication node from one Storage Node to another, the command will now check that the Storage Nodes involved in the operation are up and running before any action is taken. [#22850]
  4. Under certain circumstances it was possible for a replication node to use out of date master identity information when joining a shard. This could cause a delay if the targeted node was unavailable. This has been fixed. [#22851]
  5. Under certain circumstances operations would end prematurely with oracle.kv.impl.fault.TTLFaultException. This exception is now handled internally by the server and client library and the operation is retried. If the fault condition continues the operation will eventually fail with a oracle.kv.RequestTimeoutException. [#22860]
  6. Previously, there were cases where a replication node would require the transfer of a copy of the shard data in order to come up and join the shard, even though it was unnecessary. This has been fixed. [#22782]
  7. When new storage nodes are added to an Oracle NoSQL DB deployment and a new topology is deployed, the store takes that opportunity to redistribute master roles for optimal performance. In some cases, the store might not notice the new storage nodes until other events, such as failovers or mastership changes, had occurred, which caused a delay in master balancing. This has been fixed. [#22888]
  8. The setting of the JE configuration parameter je.evictor.criticalPercentage used by the store has been corrected: it was previously set to 105 and has been changed to 20. This new setting will provide better cache management behavior in cases where the data set size exceeds the optimal memory settings. [#22899]

Utility and Documentation Changes

  1. A timestamp has been added to the output of the CLI "ping" command. [#22859]

Changes in 12cR1.2.1.19

Documentation Changes

  1. This release includes a new document, Oracle NoSQL Database Availability and Failover. It explains the general concepts and issues surrounding data availability when using Oracle NoSQL Database. The intended audiences for this document are system architects and developers. The new information can be found under the "For the Developer" section in the documentation index page.
  2. Clarified the instructions for adding .avsc files to the classpath for the example on Avro bindings in <KVHOME>/examples/avro, and improved the error message when the .avsc files are not properly available.

Bug and Performance Fixes

  1. Increased an internal parameter for lock timeouts from 500ms to 10 seconds. Since NoSQL DB ensures that data access is deadlock free, the small timeout values were unnecessary and could cause spurious errors in the face of transient network failures. [#22583]
  2. Changing the store topology through the plan deploy-topology command could result in the following error if there was a transient network failure, or if the movement of the replication node took longer than a few seconds. Although the store state was still consistent, and the command could be manually retried, the command needed to be more resilient to communication glitches.
    ... [admin1] Task 2/RelocateRN ended in state ERROR with
    java.lang.RuntimeException: Time out while waiting for rg4-rn1 to come
    up on sn1 and become consistent with the master of the shard before
    deleting the RepNode from its old home on sn4 2/RelocateRN failed.
    
    java.lang.RuntimeException: Time out while waiting for rg4-rn1 to come
    up on sn1 and become consistent with the master of the shard before
    deleting the RepNode from its old home
    

    The command will now adjust waiting times and retry appropriately to ascertain whether the movement of a replication node has finished. [#22596]

  3. Fixed a bug where a replication node would not restart automatically if the directory containing its data files was removed, or its data files were corrupted, but were later repaired. [#22626]
  4. Added additional testing to reinforce the existing, correct behavior that a client directs write requests to the authoritative master in a segmented network split brain scenario. [#22636]
  5. In some cases, the java -jar kvstore.jar ping command could generate spurious messages about components that are no longer legitimately within the store.
    Failed to connect to service commandService
    
    Connection refused to host: 10.32.17.12; nested exception is:
      java.net.ConnectException: Connection refused
            SNA at hostname:localhost registry port: 6000 has no available
            Admins or RNs registered.
    
    In particular, these messages could happen for bootstrap Admins on Storage Nodes that do not host deployed Admin Services. While the store was consistent, the error messages were confusing and have been removed. [#22639]
  6. Fixed a small timing window in Replication Node master transfer that could incorrectly cause the transfer transaction catch-up point to regress when a master transfer occurs under heavy application load. As a result, shard mastership could take too long, or too little time, to transfer. If the transfer time is too short, the target master may not be optimally caught up, and a third member of the shard may detect this and throw an exception. [#22658]
  7. The replication node is now preemptively shut down and restarted when a node transitions from master to replica, to reduce the GC cost of refreshing the database environment. [#22658]
  8. Made changes to the NoSQL client library to adapt to replication node failures more rapidly, by retrying or forwarding data requests sooner when it detects that the original target is unavailable. [#22661]
  9. A NoSQL deployment could see this transient error when undergoing topology changes. Although the store remained consistent, the error messages were confusing and could incorrectly cause a plan deploy-topology command to fail. This has been corrected.
     ... INFO [rg1-rn1] Failed pushing entire topology push to rg1-rn3
    updating from topo seq#: 0 to 1001 Problem:Update to topology seq#
    1001 failed ... oracle.kv.impl.fault.OperationFaultException:
      Update to topology seq# 1001 failed
        at oracle.kv.impl.rep.admin.RepNodeAdminImpl$6.execute(RepNodeAdminImpl.java:261)
        at oracle.kv.impl.fault.ProcessFaultHandler.execute(ProcessFaultHandler.java:169)
        at oracle.kv.impl.rep.admin.RepNodeAdminFaultHandler.execute(RepNodeAdminFaultHandler.java:117)
    
     ...INFO [rg1-rn3] Topology update skipped. Current seq #: 1001 Update seq #: 1001
    
    [#22678]
  10. Fixed a bug where a replication node which experiences an out of memory error did not restart automatically. [#22679]
  11. Corrected the default calculation of available Storage Node memory when the Storage Node has been configured without a value for the bootstrap memory_mb parameter. In the past, the calculation was done using units of decimal megabytes (1,000,000 bytes) rather than MB (1,048,576 bytes), resulting in an overestimation of the appropriate replication node heap size. This default calculation is only used if the store has been configured without any bootstrap value for the memory_mb property, and the memory_mb storage node parameter has never been set. [#22687]
  12. The Storage Node is now updated more quickly about the replica/master status of the Replication Nodes it hosts. The fix applies when executing the plan deploy-topology command on a store that contains Storage Nodes that have capacity values greater than 1 and can host multiple Replication Nodes. A delay in notifying the Storage Node of its Replication Nodes' status can make the distribution of mastership responsibilities less optimal. [#22689]
  13. Fixed a bug where the Admin service became unresponsive when executing a plan deploy-topology command. During this time, the admin service process appeared idle, only burning a second or two of CPU time once in a while and would not respond to new attempts to connect with the Admin CLI. The problem would likely only occur in large clusters with hundreds of components. [#22694]
  14. Topology changes invoked by the plan deploy-topology command which result in the movement of a replication node from one storage node to another are now more resilient to transient network failures. There are now more advance checks to ensure that the shard and storage nodes involved in the movement are available and ready to accept the change. In the event of a network failure mid-move, the command is better at handling retries issued by the system administrator. [#22722]
  15. Fixed a bug where application requests failed to be processed while the store was executing topology changes that require partition migration under heavy load. [#22778]
  16. Adjusted the default replication node garbage collection parameters, reducing CPU utilization in some cases. [#22779]
  17. Reduced the time taken for a replica Replication Node to become up to date and available to handle application requests when it has fallen significantly behind due to downtime or network communication failures. Previously, the node exited and restarted its process before starting the catch-up stage; it now skips the restart. [#22783]
  18. Fixed a bug where an internal queue in the Storage Node could fill up if its Replication Node repeatedly and unsuccessfully attempts to restart, as might happen when a resource is unavailable. In that case, the Storage Node was no longer able to automatically restart the replication node, and would have to be rebooted. [#22786]
  19. Fixed a bug where a Replication Node that had been stopped due to repeated errors, perhaps due to a lack of resources, and then re-enabled with the "plan start-service" command, still did not restart. [#22828]
  20. Fixed a bug where the following null pointer exception could happen for a restarting Replication Node. The problem was transient.
       INFO [sn1] rg2-rn2: ProcessMonitor: startProcess
       INFO [sn1] rg2-rn2: ProcessMonitor: stopProcess
       SEVERE [sn1] rg2-rn2: ProcessMonitor: Unexpected exception in
    MonitorThread:
            java.lang.NullPointerExceptionjava.lang.NullPointerException
            at oracle.kv.impl.sna.ProcessMonitor$MonitorThread.run(ProcessMonitor.java:404)
    
    [#22830]
  21. Improved the client library's interpretation of UnknownHostException and ConnectIOException so that it more rapidly detects a network problem and updates its set of unavailable replication nodes. [#22841]

Changes in 12cR1.2.1.8

New Features

  1. This release includes support for upgrading the NoSQL DB software (client or server) without taking the store offline and without significant impact to ongoing operations. In addition, upgrades can be made incrementally; that is, it is not necessary to update the software on every component at the same time. This support includes client and server code changes and new command line interface (CLI) commands. [#22421]

    The new CLI commands provide the administrator tools to help with the upgrade process. Using these commands, the general upgrade procedure is:

    1. Install the new software on a Storage Node running an admin service [1].
    2. Install the new client and connect to the store.
    3. Use the verify prerequisite command to verify that the entire store is at the proper software version to be upgraded (All 2.0 versions of NoSQL DB will qualify as prerequisites).
    4. Use show upgrade-order to get an ordered list of nodes to upgrade.
    5. Install the new software on the Storage Nodes (individually or in groups based on the ordered list).
    6. Use the verify upgrade command to monitor progress and verify that the upgrade was successful.

    [1] In future releases this step will not be necessary.

    If the upgrade procedure is interrupted, steps 4-6 can be repeated as necessary to complete the upgrade.

Bug and Performance Fixes:

  1. Unless configured specifically by the application, NoSQL DB specifies the -XX:ParallelGCThreads JVM flag for each Replication Node process to indicate the number of garbage collector threads that the process should use. In the past, the algorithm in use generated a minimum value of 1 thread. After more testing, the minimum value has been raised to min(4, the number of cores on the node). [#22475]

API Changes

  1. The admin command line interface (CLI) provides the following new commands [#22422]:

    verify prerequisite [-silent] [-sn snX]*

    This command will verify that a set of storage nodes in the store meets the required prerequisites for upgrading to the current version and display the components which do not meet prerequisites or cannot be contacted. It will also check and report an illegal downgrade situation where the installed software is of a newer minor release than the current version. In this command the current version is the version of the software running the command line interface. If no storage nodes are specified, all of the nodes in the store will be checked.

    verify upgrade [-silent] [-sn snX]*

    This command will verify that a set of storage nodes in the store has been successfully upgraded to the current version and display the components which have not yet been upgraded or cannot be contacted. In this command the current version is the version of the software running the command line interface. If no storage nodes are specified, all of the nodes in the store will be checked.

    show upgrade-order

    This command will display the list of storage nodes in the order that they should be upgraded to maintain the store's availability. This command will display one or more storage nodes on a line. Multiple storage nodes on a line are separated by a space. If multiple storage nodes appear on a single line, then those nodes can be safely upgraded at the same time. When multiple nodes are upgraded at the same time, the upgrade must be completed on all nodes before the nodes next on the list can be upgraded.

    The verify [-silent] command has been deprecated and is replaced by verify configuration [-silent]. The verify [-silent] command will continue to work in this release.

Utility and Documentation Changes

  1. In this release, the sample code provided by the utility class WriteOperations (located in the examples directory) now includes methods that perform write operations for large objects (or LOB, see KVLargeObject). The new utility methods added in this release will properly retry the associated LOB operation when a FaultException is encountered. Prior to this release, the WriteOperations utility only provided retry methods for objects that are not large objects. [#21966]
  2. The number of JE lock tables used by Replication Nodes (controlled via the je.lock.nLockTables JE configuration parameter) has been increased from 1 to 97. This change helps improve performance of applications characterized by very high levels of concurrent updates, by reducing lock contention. [#22373]
  3. The Administration CLI now permits the creation of multiple Datacenters. By choosing Datacenter replication factors so that each Datacenter holds less than a quorum of replicas, this change makes it possible to create store layouts where the failure of a single Datacenter does not result in the loss of write availability for any shards in the store. In the current release, nodes in any Datacenter can participate in master elections and contribute to durability acknowledgments. As a consequence, master failover and durability acknowledgments will take longer if they involve datacenters that are separated by large distances. Future releases will provide greater flexibility in this area. [#20905]

Changes in 11gR2.2.0.39

New Features

  1. An integration with Oracle Coherence has been provided that allows Oracle NoSQL Database to be used as a cache for Oracle Coherence applications, also allowing applications to directly access cached data from Oracle NoSQL Database. This integration is a feature of the Enterprise Edition of the product and implemented as a new, independent jar file. It requires installation of the Oracle Coherence product as well. The feature is described in the Getting Started Guide as well as the javadoc. [#22291].
  2. The Enterprise Edition now has support for semantic technologies. Specifically, the Resource Description Framework (RDF), SPARQL query language, and a subset of the Web Ontology Language (OWL) are now supported. These capabilities are referred to as the RDF Graph feature of Oracle NoSQL Database. The RDF Graph feature provides a Java-based interface to store and query semantic data in Oracle NoSQL Database Enterprise Edition. The feature is described in the RDF Graph manual.

Bug Fixes

  1. The preferred approach for setting NoSQL DB memory resources is to specify the memory_mb parameter for each SN when running the makebootconfig utility, and to let the system calculate the ideal Replication Node heap and cache sizes. However, it is possible to override the standard memory configurations by explicitly setting heap and cache sizes using the Replication Node javaMiscParams and cacheSize parameters. In past releases, setting the explicit values worked correctly when using the plan change-parameters command, but did not work correctly when using the change-policy command. This has been fixed, so that if desired, one can use the change-policy command for the javaMiscParams and cacheSize parameters to override the default memory allocation heuristics. [#22097]
  2. A NoSQL DB deployment that executes on a node with no network available, as might happen when running a NoSQL DB demo or tutorial, would fail with this error:
    java.net.InetAddress.getLocalHost() returned loopback address:<hostname> and
      no suitable address associated with network interfaces.
    
    This has been fixed. [#22252]

API Changes

  1. Prior to this release, if a write operation encountered an exception from the underlying persistent store indicating that the write completed on the shard's master but not necessarily on the desired combination of replicas within the specified time interval, that exception was swallowed and never propagated to the client. Originally this behavior was considered desirable: the exception is rare (because of various preceding checks performed by the implementation), and swallowing it kept the API simple by avoiding the introduction of an additional exception and additional communication at the API level. On further consideration, the team concluded that clients should know when a write operation fails to complete because of such an exception. As a result, when such a condition occurs during a write operation, a RequestTimeoutException is now propagated to the client, wrapping the original exception from the underlying persistent store as its cause. For additional information, including strategies one might employ when this exception is encountered, refer to the RequestTimeoutException javadoc.

    This has been fixed. A minimal handling sketch appears after this list. [#21210]

  2. A new parameter has been added which controls the display of records in exception and error messages. When hideUserData is set to true, as it is by default, error messages which are printed to the server side logs or are displayed via the show CLI commands replace any key/values with the string "[hidden]". To see the actual record content in errors, set the parameter to false. [#22376]
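
The following is a minimal sketch of handling the newly propagated exception described in item 1 above. The store name, helper host, key, and value are placeholders.

    import java.util.Arrays;

    import oracle.kv.KVStore;
    import oracle.kv.KVStoreConfig;
    import oracle.kv.KVStoreFactory;
    import oracle.kv.Key;
    import oracle.kv.RequestTimeoutException;
    import oracle.kv.Value;

    public class WriteTimeoutSketch {
        public static void main(String[] args) {
            /* The store name and helper host are placeholders. */
            final KVStore store = KVStoreFactory.getStore(
                new KVStoreConfig("kvstore", "localhost:5000"));

            final Key key = Key.createKey(Arrays.asList("user", "42"));
            final Value value = Value.createValue("some bytes".getBytes());
            try {
                store.put(key, value);
            } catch (RequestTimeoutException rte) {
                /* The write reached the master but may not have been
                 * acknowledged by the desired replicas in time. The record
                 * may or may not be durable; decide whether to retry, read
                 * the record back, or surface the error, as suggested in
                 * the RequestTimeoutException javadoc. */
                System.err.println("Write not confirmed: " + rte.getMessage());
            } finally {
                store.close();
            }
        }
    }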

Utility and Documentation Changes

  1. In previous releases, information about errors that occurred during NoSQL DB component start up as a result of a plan deploy-topology command would often be visible only within the NoSQL DB logs, which made installation troubleshooting difficult. In this release, such start up errors can now be seen via the Admin CLI show plan -id <id> command. [#22101]
  2. The Storage Node Agent exposes MBeans on a non-default MBeanServer instance. In this release, the non-default MBeanServer now exposes the standard JVM platform MBeans as well as those relating only to Oracle NoSQL Database.
  3. In both SNMP and JMX interfaces, the new totalRequests metric is now available. This metric counts the number of multi-operation sequences that occurred during the sampling period.
  4. Prior to this release, the product was compiled and built against the 1.x version of Hadoop (CDH3). Thus, when employing a previous release, if one were to run the examples.hadoop.CountMinorKeys example against a cluster based on the 2.x version of Hadoop (CDH4), the MapReduce job initiated by that example would fail with an IncompatibleClassChangeError, caused by an incompatibility introduced in org.apache.hadoop.mapreduce.JobContext between Hadoop 1.x and Hadoop 2.x. This failure occurs whether the example is compiled and built against Hadoop 1.x or Hadoop 2.x. Because the product's customer base almost exclusively uses Hadoop 2.x, this release provides support for Hadoop 2.x instead of 1.x. Future releases may revisit support for both Hadoop version paths, but doing so will involve refactoring the codebase and its associated release artifacts, as well as substantial changes to the product's current build process.

    Support of Hadoop 2.x (CDH4) has been provided. [#22157]

  5. The java -jar kvstore.jar makebootconfig -mount flag has been changed to -storagedir. The "plan change-mountpoints -path <storage directory>" command is deprecated in favor of "plan change-storagedir -storagedir <storage directory>". [#21880]
  6. The concept of Storage Node capacity is better explained in the Administrator's Guide.
  7. The Administrator's Guide has a revamped section on how to calculate the resources needed for operating a NoSQL DB deployment.

Changes in 11gR2.2.0.26

New Features:

  1. This release adds the capability to remove an Admin service replica. If you have deployed more than one Admin, you can remove one of them using the following command:
    plan remove-admin -admin <adminId>
    

    You cannot remove the sole Admin if only one Admin instance is configured.

    For availability and durability reasons, it is highly recommended that you maintain at least three Admin instances at all times. For that reason, if you try to remove an Admin when the removal would result in there being fewer than three, the command will fail unless you give the -force flag.

    If you try to remove the Admin that is currently the master, mastership will transfer to another Admin. The plan will be interrupted, and subsequently can be re-executed on the new master Admin. To re-execute the interrupted plan, you would use this command:

    plan execute -id <planId>
    
  2. The Admin CLI verify command has an added check to verify that the Replication Nodes hosted on a single Storage Node have memory settings that fit within the Storage Node's memory budget. This guards against mistakes that may occur if the system administrator overrides defaults and manually sets Replication Node heap sizes. [#21727]
  3. The Admin CLI verify command now labels any verification issues as violations or notes. Violations are of greater importance, and the system administrator should determine how to adjust the system to address the problem. Notes are warnings, and are of lesser importance. [#21950]

Bug fixes:

  1. Several corrections were made to latency statistics. These corrections apply to the service-side statistics in the Admin console, CLI show perf command, .perf files and .csv files, as well as the client-side statistics returned by KVStore.getStats. However, corrections to the 95% and 99% values do not apply to the client-side statistics, since these values do not appear in the client-side API. [#21763]
  2. Modified the Administration Process to allocate ports from within a port range if one is specified by the -servicerange argument to the makebootconfig utility. If the argument is not specified the Administration Process will use any available port. Please see the Admin Guide for details regarding the configuration of ports used by Oracle NoSQL Database. [#21962]
  3. Modified the replication node to handle the unlikely case that the locally stored topology is missing. A missing topology results in a java.lang.NullPointerException being thrown in the TopologyManager and will prevent the replication node from starting. [#22015]
  4. Replication Node memory calculations are more robust for Storage Nodes that host multiple Replication Nodes. In previous releases, using the plan change-params command to reduce the capacity parameter for a Storage Node which hosts multiple Replication Nodes could result in an overly aggressive increase in RN heap, which would make the Replication Nodes fail at start up. The problem would be fixed when a topology was rebalanced, but until that time, the Replication Nodes were unavailable. The default memory sizing calculation now factors in the number of RNs resident on a Storage Node, and adjusts RN heap sizes as Replication Nodes are relocated by the deploy-topology command. [#21942]
  5. Fixed a bug that could cause a NullPointerException, such as the one below, during RN start-up. The exception would appear in the RN log and the RN would fail to start. The conditions under which this problem occurred include partition migration between shards along with multiple abnormal RN shutdowns. If this bug is encountered, it can be corrected by upgrading to the current release, and no data loss will occur.
    Exception in thread "main" com.sleepycat.je.EnvironmentFailureException: (JE
    5.0.XX) ...  last LSN=.../... LOG_INTEGRITY: Log information is incorrect,
    problem is likely persistent. Environment is invalid and must be closed.
        at com.sleepycat.je.recovery.RecoveryManager.traceAndThrowException(RecoveryManager.java:2793)
        at com.sleepycat.je.recovery.RecoveryManager.undoLNs(RecoveryManager.java:1097)
        at com.sleepycat.je.recovery.RecoveryManager.buildTree(RecoveryManager.java:587)
        at com.sleepycat.je.recovery.RecoveryManager.recover(RecoveryManager.java:198)
        at com.sleepycat.je.dbi.EnvironmentImpl.finishInit(EnvironmentImpl.java:610)
        at com.sleepycat.je.dbi.DbEnvPool.getEnvironment(DbEnvPool.java:208)
        at com.sleepycat.je.Environment.makeEnvironmentImpl(Environment.java:246)
        at com.sleepycat.je.Environment.<init>(Environment.java:227)
        at com.sleepycat.je.Environment.<init>(Environment.java:170)
        ...
    Caused by: java.lang.NullPointerException
        at com.sleepycat.je.log.entry.LNLogEntry.postFetchInit(LNLogEntry.java:406)
        at com.sleepycat.je.txn.TxnChain.<init>(TxnChain.java:133)
        at com.sleepycat.je.txn.TxnChain.<init>(TxnChain.java:84)
        at com.sleepycat.je.recovery.RollbackTracker$RollbackPeriod.getChain(RollbackTracker.java:1004)
        at com.sleepycat.je.recovery.RollbackTracker$Scanner.rollback(RollbackTracker.java:477)
        at com.sleepycat.je.recovery.RecoveryManager.undoLNs(RecoveryManager.java:1026)
        ... 10 more
    
    [#22052]
  6. Fixed a bug that causes excess memory to be used in the storage engine cache on an RN, which could result in poor performance as a result of cache eviction and additional I/O. The problem occurred only when the KVStore.storeIterator or KVStore.storeKeysIterator method was used. [#21973]

Performance and other General Changes:

  1. The replicas in a shard now dynamically configure the JE property RepParams.REPLAY_MAX_OPEN_DB_HANDLES which controls the size of the cache used to hold database handles during replication. The cache size is determined dynamically based upon the number of partitions currently hosted by the shard. This improved cache sizing can result in better write performance for shards hosting large numbers of partitions. [#21967]
  2. The names of the client and server JAR files no longer include release version numbers. The files are now called:
    lib/kvstore.jar
    lib/kvclient.jar
    

    This change should reduce the amount of work needed to switch to a new release because the names of JAR files will no longer change between releases. Note that the name of the installation directory continues to include the release version number. [#22034]

  3. A SEVERE level message is now logged and an admin alert is fired when the storage engine's average log cleaner (disk reclamation) backlog increases over time. An example of the message text is below.
    121215 13:48:57:480 SEVERE [...] Average cleaner backlog has grown from 0.0 to
    6.4. If the cleaner continues to be unable to make progress, the JE cache size
    and/or number of cleaner threads are probably too small. If this is not
    corrected, eventually all available disk space will be used.
    
    For more information on setting the cache size appropriately to avoid such problems, see "Determining the Per-Node Cache Size" in the Administrator's Guide. [#21111]
  4. The storage engine's log cleaner will now delete files in the latter portion of the log, even when the application is not performing any write operations. Previously, files were prohibited from being deleted in the portion of the log after the last application write. When a log cleaner backlog was present (for example, when the cache had been configured too small, relative to the data set size and write rate), this could cause the cleaner to operate continuously without being able to delete files or make forward progress. [#21069]
  5. NoSQL DB 2.0.23 introduced a performance regression over R1.2.23. The kvstore client library and Replication Node consumed a greater percentage of system CPU time. This regression has been fixed. [#22096]

Changes in 11gR2.2.0.23

New Features:

  1. This release provides the ability to add storage nodes to the system after it has been deployed. The system will rebalance and redistribute the data onto the new nodes without stopping operations. See Chapter 6 of the Admin Guide, Determining your Store's Configuration, for more details.
  2. A new oracle.kv.lob package provides operations that can be used to read and write Large Objects (LOBs) such as audio and video files. As a general rule, any object larger than 1 MB is a good candidate for representation as a LOB. The LOB API permits access to large values without having to materialize the value in its entirety by providing streaming APIs for reading and writing these objects.
  3. A C API has been added. The implementation uses Java JNI and requires a Java virtual machine to run on the client. It is available as a separate download.
  4. Added a new remove-storagenode plan. This command will remove a storage node which is not hosting any NoSQL Database components from the system's topology. Two examples of when this might be useful are:
    A storage node was incorrectly configured, and cannot be deployed.
    A storage node was once part of a NoSQL Database, but all components have been migrated from it using the migrate-storagenode command, and the storage node should be decommissioned.
    [#20530]
  5. Added the ability to specify additional physical configuration information about storage nodes; this information is used by the system to make more intelligent choices about resource allocation and consumption. The administration documentation discusses how these parameters are set and used. [#20951]
  6. Added Avro support. The value of a kv pair can now be stored in Avro binary format. An Avro schema is defined for each type of data stored. The Avro schema is used to efficiently and compactly serialize the data, to guarantee that the data conforms to the schema, and to perform automatic evolution of the data as the schema changes over time. Bindings are supplied that allow representing Avro data as a POJO (Plain Old Java Object), a JSON object, or a generic Map-like data structure. For more information, see Chapter 7 - Avro Schemas and Chapter 8 - Avro Bindings in the Getting Started Guide. The oracle.kv.avro package is described in the Javadoc. The use of the Avro format is strongly recommended. NoSQL DB will leverage Avro in the future to provide additional features and capabilities. A minimal binding sketch appears after this list. [#21213]
  7. Added Avro support for the Hadoop KVInputFormat classes. A new oracle.kv.hadoop.KVAvroInputFormat class returns Avro IndexedRecords to the caller. When this class is used in conjunction with Oracle Loader for Hadoop, it is possible to read data directly from NoSQL Database using OLH without using an interim Map-Reduce job to store data in HDFS. [#21157]
  8. Added a feature which allows Oracle Database External Tables to be used to access Oracle NoSQL Database records. There is more information in the javadoc for the oracle.kv.exttab package and a "cookbook" example in the examples/externaltables directory. [#20981]
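
The following sketch illustrates the generic Avro binding mentioned in item 6 above. The store name, helper host, schema, and key are placeholders, and the schema must first be registered with the store (for example, through the Admin CLI) before values that use it can be written.

    import java.util.Arrays;

    import org.apache.avro.Schema;
    import org.apache.avro.generic.GenericData;
    import org.apache.avro.generic.GenericRecord;

    import oracle.kv.KVStore;
    import oracle.kv.KVStoreConfig;
    import oracle.kv.KVStoreFactory;
    import oracle.kv.Key;
    import oracle.kv.ValueVersion;
    import oracle.kv.avro.GenericAvroBinding;

    public class AvroBindingSketch {
        /* A placeholder schema; it must also be registered with the store
         * (for example via the Admin CLI) before it can be used. */
        static final String SCHEMA_JSON =
            "{\"type\":\"record\",\"name\":\"Member\",\"namespace\":\"example\","
            + "\"fields\":[{\"name\":\"age\",\"type\":\"int\",\"default\":0}]}";

        public static void main(String[] args) {
            /* The store name and helper host are placeholders. */
            final KVStore store = KVStoreFactory.getStore(
                new KVStoreConfig("kvstore", "localhost:5000"));

            final Schema schema = new Schema.Parser().parse(SCHEMA_JSON);
            final GenericAvroBinding binding =
                store.getAvroCatalog().getGenericBinding(schema);

            /* Serialize a generic record to a Value and store it. */
            final GenericRecord member = new GenericData.Record(schema);
            member.put("age", 37);
            final Key key = Key.createKey(Arrays.asList("member", "37"));
            store.put(key, binding.toValue(member));

            /* Read it back and deserialize with the same binding. */
            final ValueVersion vv = store.get(key);
            final GenericRecord readBack = binding.toObject(vv.getValue());
            System.out.println("age = " + readBack.get("age"));
            store.close();
        }
    }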

Performance and other General Changes:

  1. New methods have been added to allow clients to configure the socket timeouts used to make client requests. Please review the javadoc for details. A configuration sketch appears after this list.

    R1 installations must ensure that the software on the storage nodes has been upgraded as described in the upgrade documentation accompanying this release before using the above APIs on the client. [#20997]

  2. New service parameters have been added to control the backlog associated with sockets created by NoSQL Database. These are controllable for the Rep Node and Storage Nodes' Monitor, Admin, and Registry Handler interfaces. The parameters are rnRHSOBacklog (default 1024), rnMonitorSOBacklog (default 0), rnAdminSOBacklog (default 0), snAdminSOBacklog (default 0), snMonitorSOBacklog (default 0), and snRegistrySOBacklog (default 1024). [#21322]
  3. Previously, calling Key.isPrefix with an argument containing a smaller major or minor path than the target Key object caused an IndexOutOfBoundsException in certain cases. This has been fixed.
  4. The KeyRange() constructor now checks that the start Key is less than the end Key if both are specified; otherwise, an IllegalArgumentException is thrown. KeyRange also has toString() and fromString() methods for encoding and decoding KeyRange instances, similar to the same methods in Key. [#21470]
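
The following sketch illustrates the socket timeout configuration described in item 1 above. Because the release note does not name the methods, the KVStoreConfig setters shown here (setSocketOpenTimeout and setSocketReadTimeout) are assumptions drawn from the KVStoreConfig javadoc and should be verified against this release's documentation. The store name and helper host are placeholders.

    import java.util.concurrent.TimeUnit;

    import oracle.kv.KVStore;
    import oracle.kv.KVStoreConfig;
    import oracle.kv.KVStoreFactory;

    public class SocketTimeoutSketch {
        public static void main(String[] args) {
            /* The store name and helper host are placeholders. */
            final KVStoreConfig config =
                new KVStoreConfig("kvstore", "localhost:5000");

            /* The method names below are assumed from the KVStoreConfig
             * javadoc; verify them before use. */
            config.setSocketOpenTimeout(3, TimeUnit.SECONDS);  /* connection setup */
            config.setSocketReadTimeout(30, TimeUnit.SECONDS); /* per-request reads */

            final KVStore store = KVStoreFactory.getStore(config);
            /* ... use the store ... */
            store.close();
        }
    }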

Utility Changes:

  1. Many new commands have been added to the CLI. See Appendix A - Command Line Interface (CLI) Command Reference of the Administrator's Guide for details.
  2. The Admin Console is now for monitoring only.
  3. Administration CLI commands have been changed so that component ids match the ids used in the topology display. Previously Datacenters, Storage Nodes, Admin instances and Replication Nodes were identified only by number. For example, the syntax to add Storage Node 17 to a Storage Node pool, or to show the parameters for a given Replication Node was:
    joinPool myStorageNodePool 17
    show repnode-params 5,3
    
    Datacenters can now be expressed as # or dc#
    Admin instances can now be expressed as # or admin#
    Storage Nodes can now be expressed as # or sn#
    Replication Nodes can now be expressed as groupNum,nodeNum, or rgX-rnY

    The commands shown above are still valid, but can also be expressed as:

    joinPool myStorageNodePool sn17
    show repnode-params rg5-rn3
    
    [#21099]

Documentation, Installation and Integration:

  1. The javadoc for the Key.createKey methods has been improved to warn that List instances passed as parameters are owned by the Key object after calling the method. To avoid unpredictable results, they must not be modified. [#20530]

Changes in 11gR2.1.2.123

Bug fixes:

  1. Previously, executing a change-repnode-params plan in order to change Replication Node parameters for a node other than the one running the Admin service would fail. This operation will now succeed. [#20901]
  2. A deploy-storage-node plan which ran into problems when attempting to deploy a new storage node would leave the problematic SN in the store. This would require that the user either take manual action to remove the bad SN, or fix the problem and retry the plan. For convenience, the deploy-storage-node plan will now clean up if it runs into errors, and will not leave the failed SN behind. [#20530]

Performance and other General Changes:

  1. The command line interface's snapshot create command has been made significantly faster. Previously, it could take minutes if executed on a store with a large amount of data. This should be reduced to seconds. [#20772]

Utility Changes:

  1. The two scripts for starting kvlite and executing control commands, bin/run-kvlite.sh and bin/kvctl, have been replaced by a java -jar lib/kvstore-M.N.P.jar command. This provides portability to all Java platforms, including Windows. The two scripts are deprecated, but will be supported for at least one release cycle.

    The translation from the old script commands to the new -jar commands is as follows:

    Old script command           New -jar command
    bin/run-kvlite.sh args...    java -jar lib/kvstore-M.N.P.jar kvlite args...
    bin/kvctl command args...    java -jar lib/kvstore-M.N.P.jar command args...

    There are a few differences to be aware of between the old and new commands.