Changes in 12cR1.4.2.10

The following changes were made in Oracle NoSQL Database 12cR1.4.2.10.

New Features

  1. The topology builder has been enhanced to take storage directory sizes into account when laying out shards and partitions. A RepNode with a larger storage directory will be assigned a larger portion of the store's data than a RepNode with a smaller directory. Size-aware layout happens automatically during elasticity operations once directory sizes have been configured. To enable this feature on an existing store, the storage directory size for each RepNode needs to be set, usually by the administrator. A subsequent rebalance, redistribute, or contract topology operation will then adjust the shards and/or partitions to account for differences in directory sizes.

    To set or change the directory size on existing stores there is a new -storagedirsize flag on the plan change-storagedir Admin CLI command. To set the directory size on a new configuration the flag -storagedirsize has been added to makebootconfig. Both flags can accept size values with units, for example: "1_TB" can be used to specify a 1 terabyte directory. [#24981], [#25166]
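The size-with-units format accepted by these flags can be illustrated with a small parser. This is a hedged sketch, not the product's implementation: the exact set of unit names, and whether units are binary (1024-based) or decimal, are assumptions here.

```java
import java.util.Map;

public class StorageSize {

    /* Assumed binary (1024-based) multipliers; the units actually
     * accepted by makebootconfig may differ. */
    private static final Map<String, Long> UNITS = Map.of(
            "KB", 1L << 10, "MB", 1L << 20,
            "GB", 1L << 30, "TB", 1L << 40);

    /** Parse a value such as "1_TB" or "500 GB" into bytes. */
    public static long parseBytes(String value) {
        String[] parts = value.trim().split("[_ ]+");
        long number = Long.parseLong(parts[0]);
        Long unit = UNITS.get(parts[1].toUpperCase());
        if (unit == null) {
            throw new IllegalArgumentException("Unknown unit: " + parts[1]);
        }
        return number * unit;
    }
}
```

Under these assumptions, "1_TB" parses to 1099511627776 bytes.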

  2. DDL operations are now logged with the standard audit logging prefix, "timestamp KVAuditInfo", to support searching and filtering. [#25460]

  3. Added Arbiter functionality. With this feature, KVStore DML operations that use Durability.ReplicaAckPolicy.SIMPLE_MAJORITY durability succeed even when one RepNode in a shard is unavailable. Data written with this "relaxed" durability is migrated to the other RepNode when it becomes available, and until that migration completes, only the node holding the "relaxed" durability data can become master. As a result, on a per-shard basis, writes remain available through a single node failure, though not through the failure of multiple nodes. [#20590]
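The quorum arithmetic behind this behavior can be sketched with a self-contained example. The class below is purely illustrative (it is not the product API): it assumes the arbiter counts as one electable member of the shard and contributes an acknowledgment while storing no data.

```java
public class ArbiterQuorum {

    /**
     * Illustrative check of whether a SIMPLE_MAJORITY write can be
     * acknowledged in a shard that has an arbiter. The majority is
     * computed over the RepNodes plus the arbiter.
     */
    public static boolean canCommit(int repFactor, int rnsUp, boolean arbiterUp) {
        int groupSize = repFactor + 1;        // RepNodes plus one arbiter
        int majority = groupSize / 2 + 1;     // e.g. 2 of 3 when repFactor is 2
        int acks = rnsUp + (arbiterUp ? 1 : 0);
        return acks >= majority;
    }
}
```

Under these assumptions, with replication factor 2 and one RepNode down, the surviving RepNode plus the arbiter still reach the majority of 2 and the write succeeds; without the arbiter it would not.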

  4. Added the new 'topology contract' command to the Administrative CLI. This command provides support for removing a specified set of storage nodes from the topology and shrinking the size of the store by removing shards. Also added the new 'pool leave' command to simplify the process of specifying which storage nodes should be removed. [#24425]

  5. Secondary indexes now contain entries for rows that have null values for index fields. This serves two purposes:

    1. It means that an index will have an entry for every row in the table, regardless of values, which was not true previously

    2. It makes composite indexes (those with multiple fields) more useful because an application can use a partial key and know that rows will not be skipped because of a null in another field

    This change required a modification of the format of the index databases themselves. All new indexes will have this new format and will support null values. Existing indexes cannot support null values and will continue to operate as they have in the past, without nulls. For this reason, it is recommended that indexes be dropped and re-added if null values are desired. [#24785]

  6. Added new policy parameters that permit administrators to enforce password complexity requirements when users create new passwords or change existing ones. [#24985]

  7. Added new position-based put and get methods on RecordValue that work more efficiently than the name-based methods if the application knows the position of a field within a RecordValue. [#25214]
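The benefit of position-based access can be seen in a self-contained sketch. The class below is a stand-in, not the oracle.kv RecordValue API: name-based access pays a hash lookup on every call, while position-based access is a plain array access, so an application that resolves a field's position once can reuse it across many records.

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Illustrative stand-in for a record with named, positional fields;
 * not the oracle.kv RecordValue API itself.
 */
public class PositionalRecord {
    private final Map<String, Integer> positions = new HashMap<>();
    private final Object[] values;

    public PositionalRecord(String... fieldNames) {
        for (int i = 0; i < fieldNames.length; i++) {
            positions.put(fieldNames[i], i);
        }
        values = new Object[fieldNames.length];
    }

    public int getFieldPos(String name) { return positions.get(name); }

    // Name-based access: a hash lookup on every call.
    public void put(String name, Object v) { values[getFieldPos(name)] = v; }
    public Object get(String name) { return values[getFieldPos(name)]; }

    // Position-based access: a plain array access.
    public void put(int pos, Object v) { values[pos] = v; }
    public Object get(int pos) { return values[pos]; }
}
```

An application processing many records of the same type can call getFieldPos once and then use only the position, avoiding the per-record name lookup.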

  8. Includes a preview release of Java API support for a field type of JSON. Use of this type is for preview only. Any stores that use this type will not be supported, and must be removed and not re-used when full support is provided. This preview includes: [#23589]

    • The ability to declare a field as type JSON. Doing so means that any type that can be interpreted as valid JSON can be used in this field, including the numeric types, boolean, string, map (JSON object), and array (JSON array).

    • Methods on FieldDef and FieldValue and related types to put and get JSON values as well as to navigate into JSON fields in a Row.

    The preview release has the following known limitations and issues, which can result in failures or unpredictable results:

    • Indexes into JSON are not yet supported. This is a feature that will be supported in a future release.

    • Queries involving JSON are not yet supported, and will fail to compile and/or execute. This is a feature that will be supported in a future release.

    • Not all numeric values in JSON are supported. Any number that cannot be represented as a long or double will fail to be handled. This feature will be supported in a future release.

    • Map keys are case-insensitive, which extends to JSON objects. This is a bug and will be fixed in a future release.

    • Support for declarations of MAP(JSON) and ARRAY(JSON) in fields is not complete. These declarations will work in DDL statements, but rows in such tables will not be usable and must be avoided. This is a bug and will be fixed in a future release.
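The numeric limitation above can be screened for up front. The helper below is an illustrative sketch, not a product API; it assumes a number is acceptable if it fits exactly in a long or parses to a finite double.

```java
import java.math.BigDecimal;

public class JsonNumberCheck {

    /**
     * Illustrative check: true if the literal fits exactly in a long,
     * or parses to a finite double. Numbers failing this check fall
     * outside the range supported by the JSON preview.
     */
    public static boolean fitsLongOrDouble(String literal) {
        BigDecimal bd = new BigDecimal(literal);
        try {
            bd.longValueExact();
            return true;
        } catch (ArithmeticException notALong) {
            double d = bd.doubleValue();
            return !Double.isInfinite(d) && !Double.isNaN(d);
        }
    }
}
```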

Bug and Performance Fixes

  1. Enhanced the Load utility to use a disk-ordered cursor to read entries from multiple databases of a snapshot. This change improves input throughput, thus improving the overall performance of loading. [#25294]

  2. Fixed a problem where specifying a large value for the rnHeapMaxMB storage node parameter could result in the replication nodes hosted by that storage node being given heap sizes that do not support compressed object references, even though a smaller heap size that supports compressed object references would be more efficient. [#25472]
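The issue stems from how HotSpot handles compressed object references, which are unavailable above roughly a 32 GB heap. The sketch below is illustrative only (not the product's sizing logic); both the approximate threshold and the clamping policy are assumptions.

```java
public class HeapSizing {

    // HotSpot disables compressed oops at roughly 32 GB; the exact
    // threshold is JVM-dependent, so a margin is left below it.
    private static final int COMPRESSED_OOPS_LIMIT_MB = 32 * 1024;
    private static final int MARGIN_MB = 1024;

    /** Illustrative clamp of a requested RN heap size, in MB. */
    public static int effectiveHeapMB(int requestedMB) {
        if (requestedMB >= COMPRESSED_OOPS_LIMIT_MB
                && requestedMB < 2 * COMPRESSED_OOPS_LIMIT_MB) {
            // A heap just over the limit loses compressed oops and can
            // hold fewer objects than a slightly smaller compressed
            // heap; clamp it back below the threshold.
            return COMPRESSED_OOPS_LIMIT_MB - MARGIN_MB;
        }
        return requestedMB;
    }
}
```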

  3. Fixed a bug where the TextIndexFeeder failed to respond correctly when the topology changed and partitions migrated between replication groups. The fix ensures that the TextIndexFeeder streams all writes to Elasticsearch when the store undergoes elasticity operations. [#25334]

  4. Fixed a problem where an unexpected InsufficientAcksException caused by a lack of shard quorum during an access control change could cause a replication node to be restarted. [#25439]

  5. Fixed a problem where a lack of shard quorum during an access control change could result in a temporary deadlock and an unexpected InsufficientAcksException. [#25442]

  6. Fixed a problem where an upgrade of a 3.x version store would fail if the store contained a table that previously had been schema evolved. [#25532]

  7. Disabled full text search (FTS) in secure stores. If a KVStore is configured as a secure store, FTS is disabled and users are not allowed to register an external Elasticsearch cluster. In addition, for a KVStore in which FTS is already enabled, users cannot reconfigure the store from non-secure to secure. Instead, they must drop all text indices, deregister the external Elasticsearch cluster, and then reconfigure the non-secure store as a secure store. [#25245], [#25246]

  8. RepNodes are now configured to use the Java option -XX:+AlwaysPreTouch by default on Linux platforms when using the Oracle Java virtual machine. This change slows Java startup slightly (roughly 10 sec for a 32GB heap using 4K pages and less than 1 sec if using 2MB Large pages), but reduces subsequent latency pauses as larger amounts of heap storage are used by the RepNode. [#25161]

  9. RepNodes now use the Garbage First Garbage Collector (G1 GC) by default. The G1 GC typically provides shorter GC pause times than the Concurrent Mark Sweep (CMS) collector, which was the previous default. As a result, the G1 GC should reduce both average and peak latency for store operations, and improve throughput.

    Applications can revert to the previous GC settings using the CMS GC by including -XX:+UseConcMarkSweepGC in the value of the RepNode javaMiscParams parameter. [#24695]

  10. The representation of IndexKey has been changed in an incompatible manner. The reason is a combination of ease of use, function, and performance. Prior to this release an IndexKey shared schema (the RecordDef) with the corresponding Row objects for the same table. The old representation was confusing and cumbersome in the face of indexes on fields in deeply nested records, maps, and arrays. The new representation is a flattened version of the fields indexed, with only a single level of structure, where the field name is a path to the indexed field.

    Consider this table and index:

    CREATE TABLE user (id INTEGER, PRIMARY KEY(id), address RECORD(city String, state String))
    CREATE INDEX City on user(address.city)

    Prior to this release the Java code required to create an IndexKey used to iterate the City index would look like this:

    /* assume userTable is a handle on the table */
    Index index = userTable.getIndex("City");
    IndexKey indexKey = index.createIndexKey();
    indexKey.createRecord("address").put("city", "Chicago");

    Note the need to create the structure of the record to use it. The new flattened representation is:

    /* assume userTable is a handle on the table */
    Index index = userTable.getIndex("City");
    IndexKey indexKey = index.createIndexKey();
    indexKey.put("address.city", "Chicago");

    The syntax for indexes involving maps and arrays is as follows:

    • Map keys: keys(path-to-map-field). For example keys(this_is_a_map)

    • Map values: path-to-map-field[]. For example, this_is_a_map[]. If the index is on a field within a map of records it would look like map_of_records[].path-to-field

    • Array values: these are similar to map values and require use of "[]": path-to-array[].

    The same syntax rules are used for these paths as are used in the statements that create the indexes themselves.

    This is an incompatible change and will require any applications that use indexes on complex types to be modified to use the new format and syntax. If unchanged, errors will be seen as IllegalArgumentException thrown from IndexKey.put*() calls as well as FieldRange construction for such fields. Applications that do not use indexes on records, maps, or arrays will continue to work without modification. See the documentation for details on how to represent paths to complex index fields. [#25090]

  11. Clients can now continue to access the store without reopening the KVStore handle while the SSL certificate of a secure store is being updated. [#25062]

  12. KVSecurityException previously extended FaultException, which was incorrect according to the contract for FaultException. KVSecurityException now extends RuntimeException.

    This is a minor API change but it affects the possible exceptions thrown from most of the methods on the KVStore and TableAPI interfaces. They can now throw KVSecurityException, which no longer falls under the umbrella of FaultException. Applications that need to catch one or more of the KVSecurityException instances — UnauthorizedException, AuthenticationRequiredException, and AuthenticationFailureException — need to do so explicitly. [#24967]
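The practical effect on application catch blocks can be illustrated with stand-in classes that mirror the new hierarchy (these are not the oracle.kv classes):

```java
public class CatchChange {

    // Stand-ins mirroring the new hierarchy: the security exception
    // is a RuntimeException, no longer a FaultException subclass.
    static class FaultException extends RuntimeException {}
    static class KVSecurityException extends RuntimeException {}
    static class AuthenticationRequiredException extends KVSecurityException {}

    static String handle(Runnable op) {
        try {
            op.run();
            return "ok";
        } catch (KVSecurityException e) {
            // Must now be caught explicitly; a catch (FaultException e)
            // alone would let this propagate to the caller.
            return "security";
        } catch (FaultException e) {
            return "fault";
        }
    }
}
```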

  13. To make text indexing more efficient, index population now uses Elasticsearch's bulk operations interface. [#25040]

  14. Duplicate results from map and array indexes have been eliminated. Previously, when performing a query involving map value or array indexes, it was possible to see duplicate results for rows that have multiple independent entries in the index. This could also occur when using the index iteration methods. These duplicates are now removed so that the user sees only one instance of each matching record. [#25023]

  15. To avoid a compatibility issue, NoSQL Database by default now connects to an Elasticsearch cluster via Elasticsearch's Transport Client rather than its Node Client. [#25146]

  16. Records that contain only empty strings in text-indexed fields are now correctly added to the text index. Previously, such records were omitted. [#25058]

  17. Two race conditions that could occur during text index creation have been fixed. [#25093], [#25182]

  18. The size of the service port range required for storage nodes has been reduced. Previously, when -servicerange was used when calling makebootconfig to configure a storage node that hosts an admin, the range required 8 ports for a non-secure deployment. That number has now been reduced to 3. For details, see the documentation for the servicePortRange parameter in the section on Storage Node Parameters in the Admin Guide. [#24708]

  19. Allow the anonymous user to use a DDL statement in the Admin CLI to create the first user for a secure store. [#25051]

  20. Fixed a problem where a client with an incorrect set of helper hosts could cause replication nodes to fail and not restart. The problem could occur if the client was configured with helper hosts from unrelated stores and attempted to forward topology updates obtained from one store to another store. Such an occurrence is now logged but no longer causes the replication node to fail. [#24693]

  21. Changed the maximum value of the memoryMB parameter from 500 GB to 128 TB, to accommodate machines with large amounts of memory. [#25017]

  22. Fixed a problem in the statistics gathering code that could cause a replication node to fail because of an unexpected InterruptedException: [#25046]

    2016-04-12 13:34:42.678 UTC INFO [rg1-rn3] Exception accessing the migration db {0}
    com.sleepycat.je.ThreadInterruptedException: (JE 7.0.5) Environment must be closed, caused by: com.sleepycat.je.ThreadInterruptedException: Environment invalid because of previous exception: (JE 7.0.5) rg1-rn3(3):/scratch/yfei/nftest/dm_mode/kv_isolate_rn/scratch/kvroot/mystore/sn3/rg1-rn3/env java.lang.InterruptedException THREAD_INTERRUPTED: InterruptedException may cause incorrect internal state, unable to continue. Environment is invalid and must be closed.
           at com.sleepycat.je.ThreadInterruptedException.wrapSelf(ThreadInterruptedException.java:135)
           at com.sleepycat.je.dbi.EnvironmentImpl.checkIfInvalid(EnvironmentImpl.java:1720)
           at com.sleepycat.je.Transaction.checkEnv(Transaction.java:886)
           at com.sleepycat.je.Transaction.commit(Transaction.java:350)
           at oracle.kv.impl.rep.migration.PartitionMigrations.fetch(PartitionMigrations.java:295)
           at oracle.kv.impl.rep.migration.MigrationManager$3.call(MigrationManager.java:1286)
           at oracle.kv.impl.rep.migration.MigrationManager$3.call(MigrationManager.java:1282)
           at oracle.kv.impl.rep.migration.MigrationManager.tryDBOperation(MigrationManager.java:1464)
           at oracle.kv.impl.rep.migration.MigrationManager.getMigrations(MigrationManager.java:1282)
           at oracle.kv.impl.rep.migration.MigrationService.pendingSources(MigrationService.java:180)
           at oracle.kv.impl.rep.migration.MigrationManager.awaitIdle(MigrationManager.java:445)
           at oracle.kv.impl.rep.table.MaintenanceThread.run(MaintenanceThread.java:149)
  23. Fixed an unhandled security exception in the master rebalancing code that could cause a replication node to fail unexpectedly when a storage node is being restarted. [#25134]

  24. Fixed a problem where a replication node in a secure store could exit with a NullPointerException if it received a request before it was fully initialized. [#25092]

    2016-05-03 18:08:42.191 UTC SEVERE [rg5-rn3] Process exiting
    java.lang.NullPointerException
    at oracle.kv.impl.rep.login.KVSessionManager.initializeKVStore(KVSessionManager.java:495)
    at oracle.kv.impl.rep.login.KVSessionManager.isReady(KVSessionManager.java:259)
    at oracle.kv.impl.rep.login.KVSessionManager.resolve(KVSessionManager.java:425)
    at oracle.kv.impl.security.login.TokenResolverImpl.resolvePersistentToken(TokenResolverImpl.java:216)
    at oracle.kv.impl.security.login.TokenResolverImpl.resolve(TokenResolverImpl.java:168)
    at oracle.kv.impl.security.login.TokenVerifier.verifyToken(TokenVerifier.java:97)
    at oracle.kv.impl.security.AccessCheckerImpl.identifyRequestor(AccessCheckerImpl.java:141)
    at oracle.kv.impl.security.ExecutionContext.create(ExecutionContext.java:181)
    at oracle.kv.impl.security.SecureProxy$CheckingHandler$1.execute(SecureProxy.java:609)
    at oracle.kv.impl.security.SecureProxy$CheckingHandler$1.execute(SecureProxy.java:600)
    at oracle.kv.impl.fault.ProcessFaultHandler.execute(ProcessFaultHandler.java:148)
    at oracle.kv.impl.rep.admin.RepNodeAdminFaultHandler.execute(RepNodeAdminFaultHandler.java:124)
    at oracle.kv.impl.security.SecureProxy$CheckingHandler.invoke(SecureProxy.java:598)
    at oracle.kv.impl.security.SecureProxy.invoke(SecureProxy.java:144)
    at com.sun.proxy.$Proxy9.getTopoSeqNum(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:323)
    at sun.rmi.transport.Transport$1.run(Transport.java:200)
    at sun.rmi.transport.Transport$1.run(Transport.java:197)
    at java.security.AccessController.doPrivileged(Native Method)
    at sun.rmi.transport.Transport.serviceCall(Transport.java:196)
    at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:568)
    at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:826)
    at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.lambda$run$0(TCPTransport.java:683)
    at java.security.AccessController.doPrivileged(Native Method)
    at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:682)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
  25. Added TableOperationResult.getPreviousExpirationTime() to return the expiration time of a previous row, if defined. Previously the expiration time was available from the previous Row, which was a partial solution. [#25229]

  26. Fixed a problem where a client could push a topology update without a signature to a secure store. Before this fix, the server audit log could contain warning entries like: [#25273]

    2016-06-13 06:53:32.476 UTC SEC_WARNING [rg2-rn1] Empty signature. Verification failed for topology seq# 38
    
  27. Fixed a bug where an application that used a KVStore handle from an old, defunct store, against a new, recreated store on the same hosts with the same ports could cause replication node restarts in the new store. Now the store is unaffected and the application will receive an oracle.kv.StaleStoreHandleException, letting it know that the handle should be closed and reopened. [#24693]

Utility Changes

  1. If users need to configure Kerberos manually, for example when using Active Directory, they can now specify "none" for the "-kadmin-path" argument in the makebootconfig and securityconfig commands. In this case, the keytab is not automatically generated and must be generated and copied manually. [#25445]

  2. The verify configuration Administrative CLI command now shows a warning if a failed deployment resulted in a store topology with shards that have no partitions. [#22098]

  3. Added security configuration utilities: [#24948]

    • security config update

      Update security parameters in a given security configuration.

    • security config verify

      Verify the consistency and correctness of a given security configuration.

    • security config show

      Print all information about a given security configuration.

  4. Enhanced the "oracle.kv.util.Load" utility to support loading data from multiple shard backup directories concurrently, and to use bulk put to improve loading performance. [#25085]

  5. Running the admin web service is no longer supported for secure stores, which results in the following changes to utilities: [#25249]

    • Using the makebootconfig command to configure a storage node for a secure store must specify -runadmin if the storage node should run an admin, and either specify the admin port as 0 or leave the admin port unspecified

    • An admin deployed on a secure store with plan deploy-admin must specify 0 for the value of the -port flag

    • The securityconfig add-security command fails when performed against a non-secure store that has an admin with an admin web service

    • The plan change-parameters command fails when attempting to specify a non-zero admin port on a secure store

    • The plan migrate-sn command fails when attempting to migrate a storage node with a non-zero admin port on a secure store

    Storage nodes in a secure store that run an admin with the admin web service enabled will not start successfully after upgrading to this release. Make sure to disable the admin web service before performing an upgrade.

    If you attempt to upgrade a storage node for a secure store that has the admin web service enabled, the node will fail to start with a message like:

    Failed to start SNA: Cannot start the storage node agent for a secure
    store when the storage node has an admin with the admin web service
    enabled.  Please start this storage node and reconfigure the storage
    node to disable the admin web service, or disable store security with
    12.1.4.0.9 before starting this storage node.

    If you see this message when starting a storage node, you can work around the problem by reverting to the previous release software and restarting the storage node. Then, use the plan change-parameters command to disable the admin web service. For example, to disable the web service on admin1, you could use the command:

    plan change-parameters -service admin1 -wait -params adminHttpPort=0
    

SQL Query Language and Shell Changes

  1. The Oracle NoSQL Database SQL query language has been enhanced with the following features:

    • Support for OFFSET and LIMIT [#25078]

    • ALTER TABLE, used for schema evolution, has been modified to allow modification of RECORD types nested within other records, maps, and arrays. [#24049]
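The semantics of OFFSET and LIMIT can be sketched over an ordered result list. The helper below illustrates the clause semantics only; it is not the query engine's implementation.

```java
import java.util.List;

public class OffsetLimit {

    /** Illustrative OFFSET/LIMIT semantics over an ordered result list:
     *  skip the first {@code offset} rows, then return at most
     *  {@code limit} rows. */
    public static <T> List<T> page(List<T> results, int offset, int limit) {
        int from = Math.min(offset, results.size());
        int to = Math.min(from + limit, results.size());
        return results.subList(from, to);
    }
}
```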

  2. Paths used in DDL statements to reference indexes involving arrays, map keys, and map values have been modified to be consistent with those used in queries. The statements affected include CREATE INDEX and DESCRIBE TABLE. [#25167]

    There are three types of paths:

    1. Paths to array values:

      Previous syntax: path-to-array or path-to-array[]
      New syntax: path-to-array[] ("[]" is required)
      For example, to create an index on the values in an array:
      	CREATE TABLE MyTable(id INTEGER, myArray ARRAY(INTEGER), PRIMARY KEY(id))
      	CREATE INDEX ArrayIndex on MyTable(myArray[])
    2. Paths to map values:

      Previous syntax: path-to-map or path-to-map[]
      New syntax: path-to-map[] ("[]" is required)
      For example, to describe the field used in a map:
      	CREATE TABLE MyTable(id INTEGER, myMap MAP(STRING), PRIMARY KEY(id))
      	DESCRIBE AS JSON MyTable(myMap[])
    3. Paths to map keys:

      Previous syntax: KEYOF(path-to-map)
      New syntax: KEYS(path-to-map)
      For example, to create an index on the keys of a map:
      	CREATE TABLE MyTable(id INTEGER, myMap MAP(STRING), PRIMARY KEY(id))
      	CREATE INDEX MapIndex on MyTable(KEYS(myMap))
  3. The CREATE FULLTEXT INDEX and DROP INDEX DDL statements have acquired new syntax to allow overriding of the new Elasticsearch cluster health constraints described in the next item. [#24809]

  4. New constraints are now enforced when text indexes are created and deleted. These operations are not allowed unless the health status of the Elasticsearch cluster is GREEN. [#25093]

  5. Fixed a bug where executing a query statement that contains != condition in the SQL shell could result in a java.io.IOException: Invoke method readLine of Jline.ConsoleReader failed: !=... : event not found. [#25065]

  6. Added a command "show query <statement>" to display the query plan for a query to the SQL shell. This is not part of the query language itself; it is a feature of the SQL shell. [#25170]

Storage Engine Changes (JE 7.2)

  1. Fixed a problem where checkpointing sometimes did not occur after log cleaning when application write operations stopped, which prevented reclamation of disk space. This was a common problem with tests that expect disk space to be reclaimed after write operations have stopped. In production systems it could also be a problem during repair of an out-of-disk situation. Note that an earlier fix [#23180] in JE 7.1 caused cleaning to occur in this situation, but a checkpoint is also needed to reclaim disk space, so the earlier fix was incomplete. [#25364]

  2. Unexpected JE data file (.jdb file) deletions are now automatically detected by a background task. Normally all JE data file deletions should be performed internally as a result of JE log cleaning. If an external file deletion is detected, JE assumes this was accidental. This will now cause the RN to fail very quickly, so that the problem is made visible as soon as possible. Previously, the problem could be undetected for a period of time if the deleted file was not frequently accessed. [#25201]

  3. Further reduced the possibility that multiple node failures could cause the loss of data (JE's RollbackProhibitedException). When NO_SYNC durability is used, JE flushes the log to disk periodically to avoid this. Previously, an fsync (flush to the storage device) was performed every 5 minutes. Now, a flush to the file system is performed every 5 seconds and an fsync is performed every 20 seconds. [#25417]

  4. JE's DbCacheSize utility has been improved for applications like NoSQL DB that use CacheMode.EVICT_LN and an off-heap cache. The -offheap argument should be specified when running DbCacheSize during NoSQL DB capacity planning. The documentation for running DbCacheSize has been updated in the Determine JE Cache Size section of the NoSQL DB Administrator's Guide, C. Initial Capacity Planning. This documentation was previously incorrect because it did not take into account use of the off-heap cache. [#25380]

Deprecated Features

  1. The SNMP agent (oracle.kv.impl.mgmt.snmp.SnmpAgent), which makes store metrics available for access via the SNMP protocol, is deprecated in this release. It will be removed from the product in a future release, circa July 2017.