1 Oracle Database Release 18c New Features

This chapter contains descriptions of all of the features that are new to Oracle Database Release 18c.

Application Development

APEX

Application Express 5.1: New and Updated Packaged Applications

New productivity apps in Oracle Application Express release 5.1 include:

  • Competitive Analysis - Create side-by-side comparisons that can be edited by many users simultaneously. You can score and display these comparisons in aggregated chart form, or in a more detailed text form.
  • Quick SQL - Provides a quick and intuitive way to generate a relational SQL data model based on text in a markdown-like format. Additionally, the app provides many options to generate SQL including generating triggers, APIs and history tables.
  • REST Client Assistant - Enables developers to access RESTful services defined in both Application Express workspaces and public services. The app provides metadata-driven mapping from service response data to SQL result set columns. You can use the generated SQL and PL/SQL code in Oracle Application Express applications.

Existing sample and productivity apps updated to include new 5.1 functionality include:

  • Sample Charts - Revamped to demonstrate new Oracle JET Charts and data visualization capabilities.
  • Sample Master Detail - Highlights the different ways related tables can be displayed using a marquee page or different combinations of interactive grids.
  • Sample Interactive Grids - Demonstrates interactive grid features and functionality. The sample pages highlight the following features: read only capabilities, pagination options, editing capabilities, and advanced techniques.
  • Sample Projects - Highlights the new features such as interactive grid and JET charts.
  • Sample REST Services - Demonstrates how to access external REST services from Oracle Application Express pages.

All productivity and sample apps have been updated to include release 5.1 enhancements.

Application Express 5.1: Interactive Grid

Interactive grids present end users with a set of data in a searchable, customizable report. From a functional perspective, an interactive grid includes the customization capabilities available in interactive reports plus the ability to rearrange the report interactively using the mouse. In an editable interactive grid, users can also add to, modify, and refresh the data set directly on the page.

Key features include:

  • Full Featured Grid - Provides powerful reporting capabilities, including fixed headers, frozen columns, scroll pagination, multiple filters, sorting, aggregates, and computations.
  • Extensible and Customizable - Edit text, numerical data, and lists of values. Interactive grids support all item types and item type plug-ins.
  • Master Detail - Create master-detail relationships that are any number of levels deep and across.

Interactive grid is a rich, client-side region type that supports rapid editing of multiple rows of data in a dynamic, JSON-enabled grid. Interactive grid combines the best features from both interactive reports and tabular forms.

Application Express 5.1: Font APEX Icon Library

Font APEX is a new icon library specifically designed for Oracle Application Express and the Universal Theme. Font APEX includes over 1,100 icons designed as 16 x 16 line icons. Font APEX is a superset of Font Awesome with additional business and database related icons. To learn more about Font APEX, install the Universal Theme Sample Application.

The Font APEX Icon Library is specifically designed to complement the development of Oracle Application Express business applications by incorporating beautiful, scalable icons.

Application Express 5.1: Page Designer Enhancements

Enhancements to Page Designer in release 5.1 include:

  • Drag and Drop Tab Reordering - Developers can now customize Page Designer by reordering tabs across panes.
  • Property Editor, Filter Properties Search - Search for a group or an attribute in the Property Editor by entering a keyword in the Filter Properties field.
  • Property Editor Change Indicator - Changed attributes display with a blue marker until the developer saves the page.
  • Component View Tab - The Component View tab assists developers with the transition to Page Designer. Similar in appearance to Legacy Component View, the Page Designer Component View tab presents user interface elements and application logic by component type. However, when you click a component, the Property Editor highlights the corresponding attributes in the right pane.
  • Two Pane Mode - Page Designer now enables developers to view just two panes instead of three, making it easier to focus on the task at hand.

Enhancements to Page Designer in this release are designed to improve developer productivity and assist developers in the transition from Legacy Component View.

Application Express 5.1: Calendar Enhancements

Calendar in release 5.1 offers many improvements:

  • End Date Displayed Inclusively - In previous releases, the calendar considered the end date of an all-day event as exclusive. In release 5.1, the end date is inclusive like all other Oracle Application Express components.
  • JavaScript Customization - Developers can add JavaScript code to support customization of the FullCalendar initialization using the new Initialization JavaScript Code attribute.
  • Dynamic Actions Events - Developers can capture events within the calendar and define dynamic actions against these events.
  • Keyboard Support - When the calendar grid has focus, users can use arrow keys to navigate within the calendar.

Calendar enhancements in Oracle Application Express release 5.1 improve developer productivity.

Application Express 5.1: Oracle JET Charts

Charts in Oracle Application Express release 5.1 are powered by the Oracle JavaScript Extension Toolkit (Oracle JET) Data Visualizations. These are a component of Oracle JET, which is an open source toolkit based on modern JavaScript, CSS3, and HTML5 design and development principles. Oracle JET data visualizations are fast, customizable, and versatile. Charts are fully HTML5 capable and responsive, meaning they work on any modern browser, regardless of platform, screen size, or features. Easily migrate existing AnyChart charts using the Upgrade Application Wizard.

By integrating Oracle JET Data Visualization into Oracle Application Express 5.1, developers can now deliver highly customizable, accessible, interactive, and responsive charts within their applications. They can now build charts that are beautiful, fast, and extremely versatile.

General

OCI Call Timeout Attribute

This feature introduces a new attribute allowing OCI applications to specify a millisecond timeout value for round trip calls to the database.

This gives applications fine-grained control, allowing them to avoid unexpectedly long waits on OCI calls such as statement execution.

Private Temporary Tables

Private temporary tables are temporary database objects that are automatically dropped at the end of a transaction or a session. A private temporary table is stored in memory and is visible only to the session that created it.

A private temporary table confines the scope of a temporary table to a session or a transaction, thus providing more flexibility in application coding, leading to easier code maintenance and better out-of-the-box functionality.
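
As a brief sketch (table and column names are illustrative), a transaction-scoped private temporary table can be created and used as follows; the table name must begin with the prefix set by the PRIVATE_TEMP_TABLE_PREFIX parameter, which defaults to ORA$PTT_:

    -- Create a private temporary table that disappears at the end of the transaction
    CREATE PRIVATE TEMPORARY TABLE ora$ptt_sales_tmp (
      order_id NUMBER,
      amount   NUMBER
    ) ON COMMIT DROP DEFINITION;

    INSERT INTO ora$ptt_sales_tmp VALUES (1, 150);

    SELECT SUM(amount) FROM ora$ptt_sales_tmp;

    COMMIT;  -- both the data and the table definition are dropped here

Use ON COMMIT PRESERVE DEFINITION instead to keep the table definition for the duration of the session.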

JDBC Support for Key Store Service (KSS)

JDBC Driver support for Key Store Service (KSS).

JDBC applications can now integrate with a Key Store Service configuration in a WebLogic server or other Java containers.

Wallet Support in JDBC for Database Cloud Services

The JDBC driver now provides self-configuring SSL capabilities, including default values and programmatic logic for resolving connection configuration values without manual input.

This makes SSL connections to databases in Oracle Cloud easier and simpler.

Graph

Support for Collaborative Filtering with SQL-based Property Graph Queries

A set of PL/SQL APIs (OPG_APIS.CF) implements a collaborative filtering algorithm using SQL and PL/SQL against Property Graph tables in Oracle Database.

Collaborative filtering, also referred to as social filtering, enriches information by using the recommendations of other people. Collaborative filtering is widely used in systems that recommend products based on purchases by others with similar preferences. These APIs implement the collaborative filtering algorithm in Oracle Database making it accessible to SQL applications.

Node.js and Zeppelin support for In-Memory Analyst (PGX)

The in-memory analyst (PGX) includes a Node.js client and an interpreter implementation for Apache Zeppelin version 0.7.0.  

This PGX client implementation in Node.js supports development of graph applications using Node.js. The Zeppelin interpreters let developers and data scientists use Zeppelin notebooks to develop applications that invoke PGX algorithms, include PGQL queries, and access Property Graph functions in Oracle Database.

Property Graph Analytics

New property graph analytics are available in Oracle Spatial and Graph. The new Approximate Pagerank analytic is a faster variant of Pagerank that can be used when less precision is acceptable. The new Weighted Pagerank analytic considers edge weights. The new Personalized SALSA analytic evaluates the relative importance of nodes with respect to a given set of hub nodes. The new K-Core analytic computes the k-core decomposition of a graph.

These new property graph analytics extend the library of built-in analytics to forty of the most popular analytics for social network analysis, enabling quicker development and time to value. They provide faster and richer ways to discover the most popular or important entities and groups in a network. For example, finding influencers and making recommendations are accomplished by considering the relative importance of an entity's relationships, and by evaluating how dense or sparse the relationships are, overall, in the graph.

Property Graph Query Language

A new SQL-like declarative language for querying property graph data has been added to the property graph feature of Oracle Spatial and Graph. Property Graph Query Language (PGQL) includes a rich set of graph pattern matching capabilities, WHERE, SELECT, ORDER BY, GROUP BY clauses with support for arithmetic expressions, and MIN, MAX, and COUNT aggregates.

Now with PGQL, developers have a choice of using a declarative language or Java APIs to formulate property graph pattern matching and path queries. The SQL-like nature of PGQL increases query expressivity, reduces developers' learning time, facilitates faster and easier query writing, and makes queries simpler for others to read.

Composite partitioning for RDF Graph networks

DBAs can now create RDF Graph networks with list-hash composite partitioning.  With this scheme, an RDF network is list partitioned by Model ID and the RDF quads within each partition are further divided into subpartitions based on a hash of the RDF predicate.

Using this new partitioning method can improve query performance by five to ten times through increased parallelism and more efficient query execution plans.

RDF Semantic Graph In-Memory Columnar Graph

Users can now create an RDF graph in-memory virtual model for faster graph queries using the Oracle Spatial and Graph and Oracle Database In-Memory options.

RDF graph queries can be up to one hundred times faster on an in-memory virtual model using Oracle Database In-Memory without increasing persistent storage requirements.

JSON

SQL Enhancements for JSON

Multiple SQL enhancements are provided for storing and querying JSON data in the database.

  • You can specify that a given SQL expression returns JSON data, using TREAT (... AS JSON).

  • SQL/JSON query and generation functions can return results as LOB data.

  • SQL/JSON generation functions can accept input in a variety of SQL data types, and they can optionally be required to return well-formed JSON data.

  • SQL/JSON path expressions accept additional item methods, to filter data that is not convertible to a given SQL data type. Item methods can be used in more SQL/JSON functions. Item method string() can return LOB data.

  • You can use data-dictionary views to extract JSON field path and type information that is recorded in a data guide-enabled JSON search index.

  • SQL/JSON function json_table supports simpler syntax: dot-notation to access fields and automatic naming of projected columns. Materialized views created using json_table can be automatically synchronized.

TREAT (... AS JSON) lets you specify that the return value from a given SQL expression is to be treated as JSON data. Such expressions can include PL/SQL function calls and columns specified by a SQL WITH clause. New data-guide views make it easy to access path and type information for JSON fields, which is recorded for index-backed data guides. Returning generated and queried JSON data in LOB instances widens the scope of the use of relational data.
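
The following hedged sketch illustrates two of these enhancements; the table names, column names, and document shapes are assumptions made for the example:

    -- Treat a VARCHAR2 expression as JSON data so SQL/JSON functions apply to it
    SELECT json_value(TREAT(e.payload AS JSON), '$.customer.id') AS cust_id
    FROM   app_events e;

    -- Simplified json_table syntax: the row path and column paths are defaulted,
    -- so column "Reference" projects the field $.Reference
    SELECT jt.*
    FROM   j_purchaseorder po,
           json_table(po.po_document COLUMNS ("Reference", "Requestor")) jt;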

SODA for C

Simple Oracle Document Access (SODA) for C lets C and C++ programs interact with SODA document collections stored in Oracle Database. This includes performing CRUD (create, read, update, delete) operations on JSON documents. Interaction with a collection is consistent and safe, regardless of which SODA implementation (PL/SQL, Java, C, or REST) is used to create or access its documents. SODA for C is part of Oracle Call Interface (OCI).

SODA lets you use a schemaless, NoSQL-style development model. An application built using SODA can persist data as JSON documents, which can facilitate changing the application data model as requirements evolve.

SODA for PL/SQL

Simple Oracle Document Access (SODA) for PL/SQL lets PL/SQL programs interact with SODA document collections stored in Oracle Database. This includes performing CRUD (create, read, update, delete) operations on JSON documents. Interaction with a collection is consistent and safe, regardless of which SODA implementation (PL/SQL, Java, C, or REST) is used to create or access its documents.

SODA lets you use a schemaless, NoSQL-style development model. An application built using SODA can persist data as JSON documents, which can facilitate changing the application data model as requirements evolve.
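
A minimal sketch of the PL/SQL flavor follows; the collection name and document content are illustrative, and the SODA_DOCUMENT_T constructor form shown is an assumption to verify against the SODA for PL/SQL reference:

    DECLARE
      coll   SODA_COLLECTION_T;
      doc    SODA_DOCUMENT_T;
      status NUMBER;
    BEGIN
      -- Opens the collection if it exists, or creates it with default metadata
      coll := DBMS_SODA.create_collection('myDocs');

      -- Insert a JSON document; SODA assigns the document key automatically
      doc    := SODA_DOCUMENT_T(v_content => '{"name":"Alice","dept":"Sales"}');
      status := coll.insert_one(doc);

      DBMS_OUTPUT.put_line('Insert status: ' || status);
      COMMIT;
    END;
    /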

Support Indexing of JSON Key Names Longer than 64 Characters

The upper limit on the length of JSON key names that can be indexed by a JSON search index is raised. In Oracle Database 18c, the limit is 255 bytes. In previous releases, JSON search indexes did not index key names longer than 64 bytes.

Key names longer than 64 bytes are quite common when object serializers, such as GSON and Jackson, are used to serialize Java hash maps to JSON. Operations on JSON path expressions containing key names up to 255 characters long can now be optimized by the JSON search index. Raising the limit on the size of the key name for indexing increases the efficiency of searching JSON documents generated from hash-map-like structures.
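
The longer key names are picked up by the same JSON search index DDL as before; a short sketch with assumed table and column names:

    -- A JSON search index now also indexes key names up to 255 bytes long
    CREATE SEARCH INDEX po_search_idx
      ON j_purchaseorder (po_document)
      FOR JSON;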

PL/SQL

PL/SQL Hierarchical Profiler (DBMS_HPROF) Enhancements

Starting with Oracle Database 18c, a new DBMS_HPROF procedure creates the necessary tables and structures to collect and analyze raw profiler output, as an alternative to the raw profiler data file.

A PL/SQL performance engineer with 'INSERT', 'CREATE', and 'SELECT' privileges can record profiler-run information into the database tables and generate reports with tools such as SQL*Plus or SQL Developer. The engineer need not have additional file permissions to access the script files located in operating system directories.
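
A hedged sketch of the table-based workflow: DBMS_HPROF.CREATE_TABLES sets up the analysis tables that previously required running a script from the operating system. The directory object, trace file name, and profiled procedure below are assumptions:

    DECLARE
      runid NUMBER;
    BEGIN
      DBMS_HPROF.create_tables;                           -- create the analysis tables (new in 18c)

      DBMS_HPROF.start_profiling('PROF_DIR', 'app.trc');  -- PROF_DIR is an assumed directory object
      my_app_procedure;                                   -- hypothetical workload to profile
      DBMS_HPROF.stop_profiling;

      -- Load the raw profile into the analysis tables and obtain a run identifier
      runid := DBMS_HPROF.analyze('PROF_DIR', 'app.trc');
      DBMS_OUTPUT.put_line('Profiler run id: ' || runid);
    END;
    /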

PL/SQL Qualified Expressions

Through Oracle Database 12c Release 2, it was possible to supply the value of a non-scalar datatype with an expression, for example by using the type constructor for a nested table. Starting with Oracle Database Release 18c, any PL/SQL value can be provided by an expression (for example, for a record or for an associative array), just as a constructor provides an abstract datatype value. In PL/SQL, we use the terms “qualified expression” and “aggregate” rather than the SQL term “type constructor”, but the functionality is the same.

Aggregates and their necessary adjunct, qualified expressions, improve program clarity and programmer productivity.
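
A short sketch of qualified expressions for a record and an associative array (the types and values are illustrative):

    DECLARE
      TYPE emp_rec IS RECORD (id NUMBER, name VARCHAR2(30));
      TYPE num_tab IS TABLE OF NUMBER INDEX BY PLS_INTEGER;

      -- Qualified expression for a record, analogous to a type constructor
      r emp_rec := emp_rec(1, 'Alice');

      -- Qualified expression for an associative array, with explicit index values
      t num_tab := num_tab(1 => 10, 2 => 20, 3 => 30);
    BEGIN
      DBMS_OUTPUT.put_line(r.name || ' / ' || t(2));
    END;
    /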

Spatial

Support for Sharded Databases with Spatial data types

Database tables with SDO_GEOMETRY columns, together with spatial indexes, operators, and functions, can now be used with Oracle Sharding.

This allows customers who desire the horizontal scalability and geographic distribution of data enabled by Oracle Sharding to include spatial data types. Applications that run on a sharded database architecture can achieve linear scalability, extreme data availability, and geographic data distribution.

Support for distributed and Oracle XA transactions

The use of R-tree spatial indexes is supported in distributed and Oracle XA transactions.

It is no longer necessary for applications to include specialized connection management code for distributed web-based DML operations.

Enhancements to Open Geospatial Consortium (OGC) Web Catalog Service (CS-W) support

The International Organization for Standardization (ISO) has defined new metadata profiles for the OGC Web Catalog Services. These catalogue services are used to locate, manage, and maintain distributed geospatial data, applications, and services. Oracle Spatial and Graph now supports this application profile for ISO 19115/ISO 19119 metadata, with support for XML encoding per ISO 19139, based on the CSW 2.0 specification.

These new profiles enable the easy exchange of spatial data and metadata among applications and organizations. Previously, developers of systems and services were required to define the information model that a specific catalog service exposed, based on CS-W standard interfaces and capabilities. By conforming to this ISO profile, Oracle Spatial and Graph CS-W services can interoperate with API requests, query languages, search terms, and so on that also implement the ISO profile.

Enhanced Spatial JSON support

JSON support adds capabilities beyond what is available in GeoJSON. It supports a larger range of geometries, including 2D and 3D, solid, surface, and LRS geometries. While the GeoJSON-specific APIs are still supported, the new APIs provide a more comprehensive set of spatial JSON features, including:

SDO_UTIL.FROM_JSON, which converts a JSON object to an SDO_GEOMETRY object, can also read and convert GeoJSON objects.

SDO_UTIL.TO_JSON and SDO_UTIL.TO_JSON_VARCHAR, which convert an SDO_GEOMETRY object to a JSON object (CLOB or VARCHAR2, respectively), can convert any geometry supported by Spatial and Graph.

JSON and REST APIs are the standard for cloud applications. By supporting JSON representation for all geometry data types, any application that invokes REST APIs can now use spatial data and operators in Oracle Database.

Text

Text: Automatic Background Index Maintenance

An automatic background task can now move index data from the update staging area to the main index, defragmenting as it goes.

Frequent inserts, updates and deletes will no longer cause fragmentation of the main index, improving query performance. For many applications there will be no need to run manual index optimization at all, even when the "sync on commit" option is used.

Text: Faceted Navigation Support

Faceted navigation support enables summary information to be presented alongside search results. This enables you to drill down into query results, restricting the search according to the summary information. For example, if you are shown a set of authors with a count of the number of books by each author, you can narrow the search by choosing one or more authors.

Faceted navigation support enables faster and more effective navigation of applications.

Text: Wildcard Search

A new wordlist preference, WILDCARD_INDEX, replaces the previous options SUBSTRING_INDEX, PREFIX_INDEX, and REVERSE_INDEX.

Wildcard search is now more efficient, faster, and simpler to understand than with the previous options, and it incurs lower indexing and storage overhead.

Text: Concurrent DML Support

The mechanism for handling DML requests (inserts, updates, and deletes) on a table with a text index has been updated. While generally these changes are internal and will not be visible to users, there will be some changes to the CTX_USER_PENDING view, since many updates will be handled automatically without ever having to be queued in that table.

This eliminates a major bottleneck that occurred when many processes updated the same text-indexed table simultaneously.

Text: New Options to Optimize Index

Optimize Index has two new options that can limit the number of tokens to be optimized and the types of section to be optimized.

This allows you to run a short optimize session which can optimize only the most critical tokens or section types, thus greatly improving query performance without a heavy optimization load.

Availability

Application Continuity

Server Draining ahead of relocating or stopping services or PDBs

Beginning with Oracle Database 18c, the database itself drains sessions before planned maintenance starts. This is done by marking the sessions for draining when the PDB or service is relocated. Once marked for draining, the drivers and the database look for opportunities to drain each session without interrupting the application. The database uses an extensible set of rules to detect when a database session can safely be taken away. When draining starts, a database session persists at the database until a rule is satisfied.

The benefit of this approach is that organizations do not need to experience the costly loss of access to their applications to support maintenance operations, including updates and repairs.

Oracle Database sees Request Boundaries

Request boundaries demarcate where applications and application servers borrow and return connections from their connection pools. Without request boundaries, the database has no insight into how the applications and application servers are managing their connections. Making request boundaries visible to the Oracle database allows functionality such as draining for planned maintenance, load balancing, and multiplexing to be handled at the database, so sessions can be re-established with no visible disruption to the application layers. Request boundaries are supported in AWR, system, session, and service statistics.

Using request boundaries, the database itself supports planned draining, load rebalance, and also provides AWR statistics for the rate of work completed and level of protection.

Transparent Application Continuity (TAC)

Transparent Application Continuity (TAC) transparently tracks and records session and transactional state so that recoverable outages are hidden from your users. This is done safely and with no reliance on application knowledge or code changes, allowing you to enable Application Continuity for all applications. Transparent Application Continuity is fully automated. Transparency is achieved by using a new state-tracking infrastructure that understands session state usage as the application issues user calls. You do not need to know how the application is implemented or be concerned about, or restrict, future changes. Transparent Application Continuity is available for applications that use the JDBC thin driver, OCI, or the ODP.NET unmanaged provider.

Transparency is achieved safely and with no reliance on application knowledge or application code changes, allowing Transparent Application Continuity to be enabled as a standard for applications.

Data Guard

Oracle Data Guard Multi-Instance Redo Apply Supports Use of Block Change Tracking Files for RMAN Backups

The RMAN block change tracking file can now be enabled on an Oracle Active Data Guard standby that is using multi-instance Redo Apply.

You can now use the fastest redo apply technology and incremental backup technology on the same Oracle Active Data Guard standby to gain the best of both features.

Automatic Correction of Non-logged Blocks at a Data Guard Standby Database

Database nologging functionality has been extended to provide better support for use in an Oracle Active Data Guard environment without significantly increasing the amount of redo generated. There are two new nologging modes:

  • Standby Nologging for Load Performance - ensures that standbys receive nonlogged data changes with minimum impact on loading speed at the primary but at the cost of allowing the standby to have transient nonlogged blocks. These nonlogged blocks are automatically resolved by managed standby recovery.
  • Standby Nologging for Data Availability - ensures all standbys have the data when the primary load commits but at the cost of throttling the speed of loading data at the primary, which means the standbys never have any nonlogged blocks.

You can use database nologging when loading data into your production databases without compromising the integrity of your Oracle Data Guard standby databases, choosing the level of synchronization between the primary and standby databases.

General

Shadow Lost Write Protection

Shadow lost write protection detects a lost write before it can result in a major data corruption. You can enable shadow lost write protection for a database, a tablespace, or a data file without requiring an Oracle Data Guard standby database.

Shadow lost write protection provides fast detection and immediate response to a lost write, thus minimizing the data loss that can occur in a database due to data corruption.
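
A hedged sketch of enabling the feature (the data file path and size are assumptions): one or more shadow tablespaces are created with the LOST WRITE PROTECTION clause, and protection is then switched on at the database or tablespace level:

    -- Create a shadow tablespace to hold the lost-write tracking data
    CREATE BIGFILE TABLESPACE shadow_lwp
      DATAFILE '/u01/app/oracle/oradata/shadow_lwp01.dbf' SIZE 128M
      LOST WRITE PROTECTION;

    -- Enable shadow lost write protection for the whole database ...
    ALTER DATABASE ENABLE LOST WRITE PROTECTION;

    -- ... or for an individual tablespace
    ALTER TABLESPACE users ENABLE LOST WRITE PROTECTION;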

Duplicate PDBs between encrypted and non-encrypted CDBs

RMAN can duplicate an unencrypted PDB or tablespaces within the PDB such that they are encrypted in the destination CDB. Use the AS ENCRYPTED clause of the DUPLICATE command to perform this duplication.

Similarly, use the AS DECRYPTED clause of the DUPLICATE command to duplicate an encrypted PDB to a destination CDB without using encryption.

Easily and securely leverage Oracle Cloud, whenever needed, by seamlessly duplicating databases between an on-premises database and Oracle Cloud.

RMAN recover standby simplification

Enhancements to the RECOVER command enable a standby database to catch up with changes made to the primary database with minimal manual intervention.

Simplify the process of rolling forward a standby database and minimize possible user errors in manual steps.

PDB backups usable after plugging in to a new CDB

You can relocate a PDB by unplugging it from a source CDB and plugging it into a target CDB. PDB backups that were created on the source CDB are called preplugin backups. RMAN can perform PDB restore and recovery operations on the target CDB by using preplugin backups. The PREPLUGIN clause of the RESTORE and RECOVER commands is used with preplugin backups.

RMAN supports use of the PREPLUGIN clause with the following commands: CATALOG, CHANGE, CROSSCHECK, DELETE, and LIST.

Maintain backup compliance after moving from one CDB to another CDB.

Backups from non-CDBs are usable after migration to CDB

A non-CDB can be migrated and plugged in as a PDB within an existing CDB. Backups created on the source non-CDB, before the migration, are referred to as preplugin backups. RMAN enables you to perform restore and recovery operations on the plugged in PDB by using preplugin backups. The PREPLUGIN clause of the RESTORE and RECOVER commands is used with preplugin backups.

You can also use the PREPLUGIN clause with the following RMAN commands on the destination CDB: CATALOG, CHANGE, CROSSCHECK, DELETE, and LIST.

Maintain backup compliance after moving to a CDB by being able to leverage previously created non-CDB backups to RESTORE, RECOVER, CATALOG, CHANGE, CROSSCHECK, DELETE, and LIST. 

RMAN duplicate PDB into existing CDB

RMAN enables you to use the DUPLICATE command to duplicate a PDB to an existing CDB.

Leverage the existing CDBs in your environment to duplicate PDBs, instead of having to create additional CDBs for duplication tasks.

Backup to archive storage

This feature enhances the Oracle Cloud SBT library to support archive storage.

RMAN can now send backups to archive storage.

Sharding

Centralized Diagnosability and Manageability for Sharded Databases

The SHARDS clause enables users to query Oracle supplied objects, such as V$, DBA/USER/ALL views and dictionary tables across all shards from a central shard catalog. A query with the SHARDS clause can only be executed on the shard catalog database.

This feature enhances centralized management by providing the ability to query performance, diagnostic, and audit data from Oracle supplied objects (V$, DBA/USER/ALL views, dictionary objects and tables) across all shards from a central shard catalog.
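
A hedged sketch, run on the shard catalog; the choice of view is illustrative, and the result set is assumed to carry an extra SHARD_ID column identifying the shard each row came from:

    -- Executed on the shard catalog: gather SQL statistics from every shard
    SELECT shard_id, sql_id, executions
    FROM   SHARDS(v$sql)
    ORDER  BY executions DESC
    FETCH FIRST 10 ROWS ONLY;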

Support for PDBs as Shards and Catalogs

In this release, Oracle Sharding supports a shard or shard catalog as a single PDB in a CDB. Using multiple PDBs from the same CDB as shards is not supported.

In support of using PDBs as shards, the GDSCTL commands ADD SHARD and CREATE SHARD are extended, and new commands ADD CDB, CREATE CDB, MODIFY CDB, CONFIG CDB, and REMOVE CDB are implemented.

Support for PDBs provides many manageability benefits for a sharded database, including consolidation of shards, management of many as one, and database upgrades.

User-Defined Sharding Method

The user-defined sharding method in Oracle Sharding allows you to define a LIST or RANGE based partitioning strategy and explicitly specify mapping of data to individual shards.

User-defined sharding is used when, because of performance, regulatory, or other reasons, certain data needs to be stored on a particular shard, and you need full control over moving data between shards. User-defined sharding also ensures that, in case of planned or unplanned downtime of a shard, you know exactly which data is unavailable. User-defined sharding does come with a cost: it requires that you monitor and maintain a balanced distribution of data and workload across shards.

Sharding Support for JSON, LOBs, and Spatial Objects

Oracle Binary Large Objects (BLOBs) and Character Large Objects (CLOBs) can now be used in sharded configurations. This enables Oracle Sharding for applications that work directly or indirectly with BLOBs and CLOBs, including those that leverage JSON and Spatial functionality.

Query and DML statements:

  • Cross-shard queries are supported. DML operations involving a single shard are supported. DML operations involving more than one shard are not supported.

APIs:

  • All current DBMS_LOB and OCILob functionality is supported, except for operations on BFILEs. Functionality that operates on multiple LOBs is restricted to LOBs that come from the same shard.
  • A new function, DBMS_LOB.ISREMOTE, and a new OCILob attribute, OCI_ATTR_LOB_REMOTE, return TRUE for a LOB obtained from a sharded table.

BLOBs and CLOBs are widely used in Oracle Database. This feature enables the use of BLOBs, CLOBs, JSON, and Spatial objects in an Oracle Sharding environment, which is useful for applications that use these data types where storage in sharded tables facilitates business requirements.

Consistency Levels for Multi-Shard Queries

You can specify different consistency levels for queries across multiple shards in a sharded database. For example, you might want some queries to avoid the cost of SCN synchronization across shards, and these shards could be globally distributed. Another use case is when you use standbys for replication and slightly stale data is acceptable for cross-shard queries, as the results could be fetched from the primary and its standbys.  You can use the initialization parameter MULTISHARD_QUERY_DATA_CONSISTENCY to set different consistency levels when executing multi-shard queries across shards.

This feature enables you to avoid the cost of SCN synchronization while executing multi-shard queries across shards and these shards potentially could be distributed globally.

For multi-shard queries, this feature allows slightly stale data from the standby databases.
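
A short sketch; the value names shown reflect the documented options (STRONG, SHARD_LOCAL, and DELAYED_STANDBY_ALLOWED) and should be verified against your release:

    -- Allow multi-shard queries in this session to skip SCN synchronization
    -- and to read slightly stale data from standby shards
    ALTER SESSION SET MULTISHARD_QUERY_DATA_CONSISTENCY = 'DELAYED_STANDBY_ALLOWED';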

Optimizer Enhancements for Sharded Database Queries

Sharding was introduced in Oracle Database 12c Release 2 with the aim of allowing data to be stored and managed across separate databases with a common schema. Various enhancements have been made to improve the robustness and fault tolerance of shard queries. The query explain plan has been enhanced to display information for all shards participating in the query.

These optimizer enhancements improve shard query robustness and diagnostics.

Automatic Deployment of Oracle GoldenGate

This feature automates the configuration of Oracle GoldenGate bi-directional replication between shards. Conflict detection and resolution is automatically enabled without any modification to the application.

Read-write workloads can be distributed across one or more synchronized copies of a partition to increase utilization and provide continuous access to data in the event that a copy becomes unavailable. Automatic conflict detection and resolution reduces administrative overhead and eliminates the effort that was previously required to enable applications for bi-directional replication.

Oracle GoldenGate Automatic CDR

This feature extends the Conflict Detection and Resolution (CDR) functionality in Oracle GoldenGate Oracle-to-Oracle replication to make large-scale active-active replication easier to use, by providing a built-in conflict detection and resolution mechanism that is simple to specify, easy to administer, and requires no user application changes. The CDR mechanism is extended to handle all delete conflicts and LOB columns.

This feature enables the deployment of applications in large active-active replication environments using Oracle Database and Oracle GoldenGate replication, by providing a built-in, comprehensive conflict detection and resolution mechanism that does not require user application changes and is easy to configure and use.

Oracle RAC Sharding

Oracle RAC Sharding affinitizes table partitions to Oracle RAC instances, and routes database requests that specify a partitioning key to the instance that logically holds the corresponding partition. This provides better cache utilization and dramatically reduces block pings across instances. The partitioning key need only be added to the most performance-critical requests; requests that don’t specify the key still work transparently and can be routed to any instance. No changes to the database schema are required to enable this feature.

Oracle RAC Sharding provides performance and scalability benefits with minimal application changes.

Automatic CDR Support of Tables with Unique Indexes/Constraints

Auto CDR was introduced in Oracle Database 12c Release 2 (and Oracle GoldenGate 12.3) to automate the conflict detection and resolution configuration in active-active GoldenGate replication setups. However, Auto CDR was allowed only on tables with primary keys. In Oracle Database 18c, Auto CDR is supported on tables that have unique keys or indexes but no primary keys.

Active-active Oracle GoldenGate replication customers can use the Automatic Conflict Detection and Resolution feature on more types of tables, simplifying their active-active replication setups.

Big Data and Data Warehousing

Analytic Views

Analytic View FILTER FACT and ADD MEASURE Keywords

Queries that SELECT from analytic views may now include the FILTER FACT keywords to filter the data accessed by the analytic view prior to aggregation. For example, a query that returns aggregate data for year-level data might use the FILTER FACT keywords to restrict the query to use only the months January and February. In that case, the aggregate value for a year is the aggregate of January and February values rather than those for all months of the year.

Using the new ADD MEASURES keywords, you can define calculated measures within a SELECT statement, so that an application can now define its own measures dynamically. Previously, you could only add calculated measures in the analytic view definition.

Both the FILTER FACT and the ADD MEASURES keywords provide application developers with additional calculation power and flexibility, and with new opportunities to simplify application development.
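
A hedged sketch against an assumed sales_av analytic view with a time_hier hierarchy (the attribute and measure names are also assumptions), showing both keywords in a transitory analytic view expression:

    SELECT time_hier.member_name AS year_name, sales, sales_k
    FROM   ANALYTIC VIEW (
             USING sales_av HIERARCHIES (time_hier)
             -- Aggregate year-level results from January and February data only
             FILTER FACT (time_hier TO level_name = 'MONTH'
                          AND month_of_year IN (1, 2))
             -- Define a calculated measure dynamically in the query
             ADD MEASURES (sales_k AS (sales / 1000))
           )
    WHERE  time_hier.level_name = 'YEAR';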

Analytic View Support for Denormalized Fact Tables and Snowflake Schema

In addition to using tables in a star schema, analytic views now can use tables in a snowflake-style schema and can use denormalized fact tables, in which dimension attributes and fact data are in the same table. The REFERENCES DISTINCT keywords in the CREATE ANALYTIC VIEW statement support the use of a denormalized fact table. The JOIN PATH keywords in the CREATE ATTRIBUTE DIMENSION statement support the use of snowflake-style dimension tables.

Both features provide data warehouse and application developers the ability to use analytic views with more data sets and offer additional opportunities to simplify application development and schemas.

New Analytic View Calculation Functions

Analytic views now support a variety of new functions that you can use in calculated measure expressions. The new functions include RANK_*, PERCENTILE_*, STATS_*, COVAR_*, HIER_DEPTH, HIER_LEVEL, HIER_MEMBER_NAME, HIER_MEMBER_UNIQUE_NAME, HIER_CAPTION, and HIER_DESCRIPTION. Also, you can now use hierarchical attributes in the definition of calculated measures. For example, you can use attributes in a CASE statement to specify different calculation expressions based on the value of an attribute.

These new functions extend the calculation capabilities of analytic views, allowing them to support a broader range of applications and offering additional opportunities to simplify application development.

Data Mining

Algorithm Meta Data Registration

Algorithm metadata registration simplifies and streamlines the integration of new algorithms in the R extensibility framework. This feature allows a uniform, consistent approach to registering new algorithm functions and their settings.

The integration of new algorithms in the extensibility framework is simplified, and the GUI can seamlessly pick up and support such new algorithms.

Decomposition-based attribute and row importance

The CUR algorithm allows users to find the columns and rows that best explain their data. The algorithm has gained popularity because it allows users to gain insight into their data in easily understandable terms. In contrast, decomposition methods like SVD derive implicit features that are hard to interpret. CUR uses the insights derived from SVD but translates them in terms of the original rows and columns.

CUR-based attribute and row importance can be used to provide data insight, and it can also act as a data filter followed by additional analytical processing. This is the first OAA algorithm that singles out not only important columns but also important rows.

Exponential Smoothing

A new mining function, Time Series, has been added, together with an algorithm, Exponential Smoothing, for performing time series analysis. Exponential Smoothing Methods (ESM) are widely used for forecasting from time series data. Originally thought to be less flexible and accurate than competitors such as ARIMA, ESM has more recently been shown to cover a broader class of models and has been extended to increase both its descriptive realism and accuracy. Oracle ESM includes many of these recent extensions, a total of 14 models, including the popular Holt (trend) and Holt-Winters (trend and seasonality) models, and the ability to handle irregular time series intervals.

Exponential smoothing techniques have been used successfully in time series analysis. They are especially useful when a large number of time series must be processed simultaneously and more complex approaches are impractical.

Random Forest

Random Forest is a powerful machine learning algorithm. It uses an ensemble method that combines multiple trees built with random feature selection. Effectively, individual trees are built in random subspaces and combined using the bagging ensemble method.

Random Forest is a very popular algorithm with excellent performance on a number of benchmarks. It is part of ORE, but that implementation is based on a public R package. Implementing it as kernel code brings significant performance and scalability benefits.

Neural Network

The Neural Network algorithm is a biologically inspired approach in which a collection of interconnected units (neurons) learns to approximate a function. Neural networks are appropriate for nonlinear approximation in both classification and regression problems.

Neural networks are powerful algorithms that can learn arbitrary nonlinear functions. They have been used successfully in a number of hard problems, including nonlinear regression and time series, computer vision, and speech recognition.

Explicit Semantic Analysis extension to classification

Explicit Semantic Analysis (ESA) is exposed in Oracle Database 12c Release 2 as a topic model only, under FEATURE_EXTRACTION. It typically uses hundreds of thousands of explicit features. The algorithm can be easily adapted to perform classification and can address use cases with hundreds of thousands of classes, an important but very challenging classification problem that is not appropriately addressed by the current OAA algorithms.

The task of large-scale text classification is very important in the context of big data. Extending ESA to classification significantly enhances our offering in the text classification domain and allows OAA to address use cases that are currently intractable for the product.

General

New parallel statement queue timeout and dequeue actions

The new resource manager directive PQ_TIMEOUT_ACTION enables the database administrator to specify the action to take on a parallel SQL statement that is timed out of the parallel SQL statement queue. The database administrator can also manually dequeue a parallel SQL statement.

This feature allows the database administrator to better manage the parallel SQL statement queue for critical SQL statements, achieving better SQL execution performance for critical users.
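
A hedged sketch of setting the directive with DBMS_RESOURCE_MANAGER; the plan and consumer group names are assumptions, and the CANCEL action value should be verified against the reference:

    BEGIN
      DBMS_RESOURCE_MANAGER.create_pending_area;
      DBMS_RESOURCE_MANAGER.create_plan_directive(
        plan                   => 'daytime_plan',
        group_or_subplan       => 'batch_group',
        parallel_queue_timeout => 600,        -- seconds a statement may wait in the queue
        pq_timeout_action      => 'CANCEL');  -- action taken when the timeout expires
      DBMS_RESOURCE_MANAGER.validate_pending_area;
      DBMS_RESOURCE_MANAGER.submit_pending_area;
    END;
    /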

Manual termination of run-away queries

A SQL query consuming excessive resources, including parallel servers, can be terminated using the ALTER SYSTEM CANCEL SQL statement.

This feature enables the database administrator to terminate a runaway SQL query using a SQL statement. This frees the system resources consumed by the runaway SQL query so that other SQL queries can use them.
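
The statement identifies the session by SID and serial number, optionally adding an instance ID and SQL ID; the values below are illustrative:

    -- Cancel the SQL statement currently running in session 448, serial# 98175,
    -- on instance 1; including the SQL_ID avoids cancelling a different statement
    ALTER SYSTEM CANCEL SQL '448, 98175, @1, 7ztv1t0hhqf48';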

Inline External Tables

Inline external tables enable the runtime definition of an external table to be part of a SQL statement, thus eliminating the need to create an external table as a persistent database object in the data dictionary.

Inline external tables remove the need to explicitly create external tables to access data outside an Oracle database. This simplifies accessing external data and enables the development of simpler, more efficient database applications.
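
A brief sketch (the directory object and file name are assumptions): the external table definition appears directly in the FROM clause:

    -- Query a CSV file directly, defining the external table inline
    SELECT *
    FROM   EXTERNAL (
             (ename VARCHAR2(30), sal NUMBER)
             TYPE ORACLE_LOADER
             DEFAULT DIRECTORY data_dir
             ACCESS PARAMETERS (
               RECORDS DELIMITED BY NEWLINE
               FIELDS TERMINATED BY ','
             )
             LOCATION ('emp.csv')
             REJECT LIMIT UNLIMITED
           ) emp_ext;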

Approximate Top-N Query Processing

Data analysis applications heavily use aggregate functions. Approximate query processing (available since Oracle Database 12c Release 1) aims to deliver faster results for these queries. The approximate results are not identical to the exact results but they are very close. New approximate SQL functions for rank, sum and count are now available for Top-N style queries.

By making use of approximate query processing, you can instantly improve the performance of existing analytic workloads and enable faster ad-hoc data exploration.
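
As a sketch against an assumed employees table, an approximate Top-N query combines APPROX_SUM with APPROX_RANK in the HAVING clause:

    -- The three departments with the (approximately) highest total salary;
    -- results are close to, but not guaranteed identical to, the exact answer
    SELECT department_id, APPROX_SUM(salary) AS approx_total
    FROM   employees
    GROUP  BY department_id
    HAVING APPROX_RANK(ORDER BY APPROX_SUM(salary) DESC) <= 3;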

Polymorphic Tables Enhancements

A table function is a function that returns a collection of rows and that can be called as part of the FROM clause within a SQL query block.

A polymorphic table function (PTF) is a new type of table function whose return type is determined by the arguments passed into the PTF. It provides a new framework that is simple to use and which provides a highly efficient and scalable implementation.

The new polymorphic table function (PTF) provides an efficient and scalable framework to extend the analytical capabilities of the Oracle Database. A SQL report writer can call these functions without knowing the details of the PTF implementation, and the PTF does not need to know how the function is being executed (for example, in serial or parallel) or whether the input rows are partitioned or ordered.

Therefore, a PTF is useful when SQL developers and DBAs want to provide generic extensions which work for arbitrary input tables or queries.
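
A minimal pass-through sketch: the DESCRIBE function accepts any input table and, by returning NULL, leaves its shape unchanged; the resulting PTF can then be applied to an arbitrary table in the FROM clause (names are illustrative):

    CREATE OR REPLACE PACKAGE noop_pkg AS
      FUNCTION describe (t IN OUT DBMS_TF.TABLE_T) RETURN DBMS_TF.DESCRIBE_T;
    END noop_pkg;
    /
    CREATE OR REPLACE PACKAGE BODY noop_pkg AS
      -- Invoked at compile time; returning NULL passes the input table
      -- through with its columns unchanged
      FUNCTION describe (t IN OUT DBMS_TF.TABLE_T) RETURN DBMS_TF.DESCRIBE_T IS
      BEGIN
        RETURN NULL;
      END describe;
    END noop_pkg;
    /
    CREATE OR REPLACE FUNCTION noop (t TABLE)
      RETURN TABLE PIPELINED ROW POLYMORPHIC USING noop_pkg;
    /
    -- The same PTF works for any input table or view
    SELECT * FROM noop(employees);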

LOB support with IMC, Big Data SQL

Starting with this release, Oracle Database allows LOBs and LOB-related functionality to be used with Oracle Database In-Memory and Big Data SQL. It also provides multiple enhancements to improve the performance of LOB operations, and of SQL statements and DML that use LOB columns.

Large Objects (Binary and Text) can now be used with Oracle Database In-Memory and Big Data SQL.

Users will experience a marginal boost in performance when performing LOB operations and when using LOB columns in their DML and SQL operations.

Database Overall

Connection Management

Oracle Connection Manager in Traffic Director Mode

This feature allows Oracle Connection Manager (CMAN) to be configured in Traffic Director Mode, furnishing applications that connect to different database services with high availability, security, performance, and scalability, without code changes.

The Oracle Connection Manager in Traffic Director mode furnishes the following benefits:

  • Transparent performance enhancements and connection multiplexing
    • With multiple CMAN in Traffic Director mode instances, applications get increased scalability through client-side connection-time load balancing or with a load balancer (BIG-IP, NGINX, and others)

  • Zero application downtime, including planned database maintenance, pluggable database (PDB) relocation, and unplanned database outages for read-mostly workloads.
  • High Availability of CMAN in Traffic Director mode to avoid a single point of failure.
  • Security and isolation: CMAN in Traffic Director mode furnishes:
    • Database Proxy supporting transmission control protocol/transmission control protocol secure (TCP/TCPS) and protocol conversion

    • Firewall based on the IP address, service name, and secure socket layer/transport layer security (SSL/TLS) wallets

    • Tenant isolation in a multi-tenant environment

    • Protection against denial-of-service and fuzzing attacks

    • Secure tunneling of database traffic across Oracle Database on-premises and Oracle Cloud

Container Database Architecture

Copying a PDB in an Oracle Data Guard Environment

You can now automatically maintain standby databases when performing PDB remote clones or plugins by using two new initialization parameters: STANDBY_PDB_SOURCE_FILE_DBLINK and STANDBY_PDB_SOURCE_FILE_DIRECTORY.

This feature provides increased high availability as well as more autonomy for PDB standby operations in an Oracle Data Guard environment.
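
A hedged sketch of setting the two parameters (the database link name and directory path are assumptions):

    -- On the standby: locate source files for a PDB remote clone via a database link
    ALTER SYSTEM SET standby_pdb_source_file_dblink = 'source_cdb_link';

    -- Or, for a plugin operation, point at a directory holding copies of the source files
    ALTER SYSTEM SET standby_pdb_source_file_directory =
      '/u01/app/oracle/oradata/source_pdb';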

DBCA PDB Clone

The Oracle Multitenant architecture, represented by the pluggable databases (PDBs) in a multitenant container database (CDB), is the standard for Oracle Database deployments. DBCA PDB Clone is a Database Configuration Assistant (DBCA)-based functionality that enables the GUI-based cloning of PDBs for the purpose of creating multiple, identical PDBs. 

PDB Clone helps with the mass creation of identical PDBs for test and development purposes. 

PDB Lockdown Profile Enhancements

This release introduces several enhancements for PDB lockdown profiles.

  • You now can create PDB lockdown profiles in the application root, as well as in the CDB root. In previous releases, you only could create the profile in the CDB root. The ability to create a PDB lockdown profile in an application container enables you to more finely control access to the applications that are associated with the application container.
  • You now can create a PDB lockdown profile that is based on another PDB lockdown profile, either a static base profile or a dynamic base profile. You can control whether subsequent changes to the base profile are reflected in the newly created profile that uses the base profile.
  • Three default PDB lockdown profiles have been added for this release: PRIVATE_DBAAS, SAAS, and PUBLIC_DBAAS. These profiles benefit Cloud environments.
  • A new dynamic data dictionary view, V$LOCKDOWN_RULES, is available. This view enables a local user to find the contents of a PDB lockdown profile.

This feature benefits environments that need enforced security and isolation in PDB provisioning.
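
A hedged sketch of base-profile creation; the profile names are assumptions, with FROM shown as the static form and INCLUDING as the dynamic form (verify both against the SQL language reference):

    -- Static base profile: later changes to base_prof are NOT inherited
    CREATE LOCKDOWN PROFILE app_prof FROM base_prof;

    -- Dynamic base profile: later changes to base_prof flow through
    CREATE LOCKDOWN PROFILE saas_prof INCLUDING base_prof;

    -- Tighten the new profile further
    ALTER LOCKDOWN PROFILE app_prof DISABLE STATEMENT = ('ALTER SYSTEM');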

Refreshable PDB Switchover

A refreshable clone PDB can switch roles with its master PDB.

A refreshable clone PDB is an incrementally updated copy of a master PDB. The new switchover feature enables a change of roles between master and clone PDBs, which is useful in the following situations:

1. Planned switchover. The former master becomes the new clone, while the former clone becomes the new master. Switching roles may be useful for load balancing between two CDBs.

2. Unplanned switchover. If the master PDB fails, then operations can resume on the refreshable clone PDB. This approach is useful when the failure is isolated to the master PDB, leaving other PDBs unaffected. In such cases, a full CDB-level Oracle Data Guard failover is unnecessary. Data Guard Broker can perform per-PDB failover when two pairs of CDBs reside on two computers, with Data Guard replication in opposite directions for each pair. However, this configuration requires four CDBs. The refreshable PDB approach enables unplanned switchover from a failed master PDB to a refreshable clone PDB between a pair of CDBs.

CDB Fleet Management

Oracle Database 18c introduces a SQL interface for cloud-scale database monitoring and management across a fleet of container databases (CDBs) and hosted pluggable databases (PDBs). A CDB fleet is a collection of CDBs registered with a lead CDB and managed as one logical CDB. The lead CDB is the central location for all CDB and PDB deployment, usage, and connection metadata. Registered CDBs synchronize with the lead CDB through automated proxy PDB connections. All PDBs in the fleet are accessible in the lead CDB, enabling you to manage PDBs as if they were hosted in the lead CDB.

Cloud-scale applications may support tens of thousands of tenants. A CDB can host up to 4096 tenant PDBs. By configuring a CDB fleet, an application can extend across tens of thousands of CDBs while maintaining a single master application. Reporting, monitoring, and management scale across disparate tenants through a single interface, which reduces capital and operational costs and provides greater efficiencies to the business.
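
A hedged configuration sketch (the link name and connect details are assumptions):

    -- On the designated lead CDB
    ALTER DATABASE SET lead_cdb = TRUE;

    -- On each member CDB: point at the lead CDB through a database link
    CREATE DATABASE LINK lead_link
      CONNECT TO c##fleet_admin IDENTIFIED BY fleet_pwd
      USING 'lead_cdb_service';

    ALTER DATABASE SET lead_cdb_uri = 'dblink:LEAD_LINK';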

PDB Snapshot Carousel

A PDB snapshot is a named copy of a PDB at a specific point in time. When a PDB is enabled for PDB snapshots, you can create up to eight snapshots of it. The set of snapshots is called a snapshot carousel.

A PDB snapshot carousel maintains a frequently refreshed library of recent copies of PDBs. You can perform either point-in-time cloning or point-in-time restore of any snapshot in the carousel.

A typical use case is to clone a PDB for testing. For example, while the production PDB is in use, you create a refreshable clone PDB named test_master. You configure test_master to create automatic PDB snapshots every day. When you need new PDBs for testing, create a full clone of any snapshot in the carousel, and then create sparse clones of this snapshot using CREATE PLUGGABLE DATABASE ... SNAPSHOT COPY.

Another use case is for point-in-time restore. For example, if you take a snapshot of a PDB before a data load, and if the data load accidentally corrupts the production PDB, then you can create a new PDB based on the snapshot. You can then drop the corrupted PDB and retry the data load.
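
A hedged sketch of the snapshot settings for the test_master clone described above (the snapshot name is an assumption, and the exact clause forms should be checked against the SQL reference):

    -- Take automatic snapshots of the clone every 24 hours
    ALTER PLUGGABLE DATABASE test_master SNAPSHOT MODE EVERY 24 HOURS;

    -- Or take a named snapshot manually before a risky operation
    ALTER PLUGGABLE DATABASE test_master SNAPSHOT before_data_load;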

Globalization

Collations for bind variables in OCI

It is now possible to pass a collation for a bind variable in a query or DML statement through the value of the attribute OCI_ATTR_COLLATION_ID.

By setting a collation for a bind variable, you can control the collation used to compare the value of the bind variable in comparison conditions and other SQL operations to which the bind variable is an argument. Controlling the collation with the OCI attribute is preferable to dynamically inserting the COLLATE operator into the SQL statement text; the latter increases the risk of introducing a SQL injection issue.

Partitioning on columns with any declared collations

Table partitioning key columns can now have any declared collation. In earlier Oracle Database releases, table partitioning key columns were restricted to the BINARY, USING_NLS_COMP, USING_NLS_SORT, and USING_NLS_SORT_CS collations.

You can now partition a table by character values in a case-insensitive way by declaring a case-insensitive collation for the table partitioning key column.
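
A short sketch (assuming MAX_STRING_SIZE = EXTENDED, which column-level collations require; names are illustrative):

    -- Case-insensitive list partitioning on a region column
    CREATE TABLE customers (
      cust_id NUMBER,
      region  VARCHAR2(20) COLLATE BINARY_CI
    )
    PARTITION BY LIST (region) (
      PARTITION p_east VALUES ('EAST'),   -- also matches 'east', 'East', ...
      PARTITION p_west VALUES ('WEST')
    );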

Additional Database Locale Support

Three new languages and thirty new territories are added to Oracle Database to improve the overall locale coverage.

This feature expands the database locale coverage to meet the requirements of local users' cultural conventions.

Unicode 9.0 Support

The National Language Support (NLS) data files for AL32UTF8 and AL16UTF16 character sets have been updated to match version 9.0 of the Unicode Standard character database.

This enhancement enables Oracle Database to conform to the latest version of the Unicode Standard.

UTL_I18N functions for character data validations

The new overloaded functions VALIDATE_CHARACTER_ENCODING are added to the PL/SQL package UTL_I18N to validate the character encoding of VARCHAR2, NVARCHAR2, CLOB, and NCLOB data.

This feature enables applications to detect invalid character data before further processing.
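
A heavily hedged usage sketch: the return semantics assumed here (zero for validly encoded data, otherwise the position of the first invalid character) are an assumption to check against the UTL_I18N reference:

    DECLARE
      v_data VARCHAR2(100) := 'text arriving from an external source';  -- illustrative
      v_pos  NUMBER;
    BEGIN
      -- Assumed semantics: 0 means valid; a nonzero value is the position
      -- of the first invalid character (verify in the documentation)
      v_pos := UTL_I18N.VALIDATE_CHARACTER_ENCODING(v_data);
      IF v_pos <> 0 THEN
        DBMS_OUTPUT.put_line('Invalid encoding near position ' || v_pos);
      END IF;
    END;
    /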

Install, config, and patch

Engineered Systems support

This release introduces support for Oracle Exadata infrastructure patching. In addition to the existing support for Oracle Database and Grid Infrastructure homes, the software for database nodes, storage cells, and InfiniBand switches can now be patched with RHP.

Integration of Oracle Exadata components support into Rapid Home Provisioning allows the management and tracking of maintenance for these components through the centralized Rapid Home Provisioning Service's inventory.

Zero-Downtime Database Upgrade

Zero-Downtime Database Upgrade automates all of the steps required for a database upgrade. It can minimize or even eliminate application downtime during the upgrade process, and it can minimize resource requirements. It also provides a fallback path to roll back upgrades if necessary.

By automating all of the steps involved in the upgrade process, Zero-Downtime Database Upgrade reduces the effort, risk and application impact of database upgrades. 

Gold Image Distribution among RHP Servers

In the Rapid Home Provisioning architecture, one central RHP Server operates on a set of targets within a given data center (or network segment of a data center).  Large enterprises typically host multiple data centers, and within each data center there may be separate network segments. Each will need a dedicated RHP Server. For these customers, this feature provides a simple and secure mechanism for sharing Gold Images among the RHP Servers.

While each data center may have some unique requirements in terms of the Gold Images that target machines will use, the goal of standardization points to using the same Gold Images across all data centers whenever possible. To that end, RHP supports peer-to-peer sharing of Gold Images, to easily propagate Gold Images among multiple RHP Servers.

Local Switch Home for Applying Updates

The Rapid Home Provisioning automatons for updating Oracle Database and Grid Infrastructure Homes can be executed in a local mode, with no RHP Server or Client in the architecture. 

These automatons feature the same user interface, outcome, and many of the command line options as the server and client modes.  This provides for a consistent, standardized maintenance approach across environments that are orchestrated with a central RHP Server and those environments that do not employ the RHP Server.

Authentication Plug-in

For authenticating communication between a Rapid Home Provisioning Server and target servers, Rapid Home Provisioning either accepts login credentials that you provide or, when communicating with a Rapid Home Provisioning Client, handles authentication internally for most operations. A new plug-in framework enables support for additional, user-defined authentication procedures.

Host-to-host authentication in customer environments, particularly in compliance-conscious industries such as financial services and e-commerce, typically leverages advanced technologies and products that are not natively supported by Rapid Home Provisioning. This feature enables you to integrate Rapid Home Provisioning's authentication with the mechanisms in use at your data center.

Configuration Drift Reporting and Resolution

Provisioned copies of gold images can potentially drift from their deployed configuration, for example if one-off patches are applied. Rapid Home Provisioning provides two capabilities for reporting and reconciling this drift:

  • Rapid Home Provisioning compares a specific deployed home to its parent gold image and lists any patches that are applied to the home but that are not in the gold image.
  • Rapid Home Provisioning compares a specific gold image to all deployed copies and lists the aggregation of all patches applied to those homes that are not in the gold image. This provides a build specification for a new gold image that can be applied to all copies of the original gold image, such that no patches are lost from any of those deployments when the new version is applied.

Operating system configurations are also at risk of drift. The Rapid Home Provisioning Server can collect and retain the operating system configuration and the root file system contents of specified Rapid Home Provisioning Clients and targets. This information can then be used to:

  • View the current configuration
  • Report any drift from the previous collection

These capabilities support the goal of maintaining standardized deployments across the database estate.

Command Scheduler and Bulk Operations

Rapid Home Provisioning commands can now be scheduled in advance, and tasks can be scheduled to run periodically. The command queue can be queried and modified. Also, commands can be applied to a list of clients.

The ability to schedule and bundle automated tasks is essential for maintenance of a large database estate.  RHP now supports scheduling key tasks such as provisioning software homes, switching to a new home, and scaling a cluster.  Also, a list of clients can now be added to a command, facilitating large-scale operations.

Dry Run Command Validation

The workflows encapsulated in Rapid Home Provisioning commands are composed of multiple smaller steps, of which some could potentially fail.  The new release includes a "dry run" command mode which allows finding and fixing many potential errors prior to command execution.

While RHP allows a failed command to be resumed after an error condition is corrected, it is often preferable to address as many potential issues as possible before the command is executed.  This minimizes problems and the distraction of corrective measures that could arise during a maintenance window.  The new "dry run" command mode will test the preconditions for a given command, without making any changes, and report on potential problems.  These can then be corrected before the command is actually executed.

New Default Location of Oracle Database Password File

The default location for the Oracle Database password file is now in ORACLE_BASE instead of ORACLE_HOME.

Having the Oracle Database password file in ORACLE_BASE instead of ORACLE_HOME allows Oracle home to be used as a static repository of database files. This enables Oracle home to be configured in read-only mode and used as a software image that can be shared across multiple database servers. This simplifies patching and mass rollout as only one Oracle home image needs to be updated to distribute a patch to multiple database servers.

Read-Only Oracle Home

Starting with Oracle Database 18c, if you choose a read-only Oracle home, then the database tools and processes write under the ORACLE_BASE path instead of under the Oracle home directory.

A read-only Oracle home separates the software from the database configuration information and log files. This separation enables you to easily share the software across different deployments. A read-only Oracle home also simplifies version control and standardization.

RPM-based Database Installation

RPM-based Database Installation (RDI) enables an RPM-based installation of the Oracle Database software. Using the rpm -ivh command, an RPM-based database installation performs the preinstallation validations, extracts the packaged software, reassigns the ownership of the extracted software to the preconfigured user and groups, maintains the Oracle inventory, and executes all of the root operations required to complete the Oracle Database software installation.

RPM-based Database Installation enables you to leverage an RPM framework to easily deploy Oracle Database.

Partitioning

Parallel Partition-Wise SQL Operations

Parallel partition-wise joins are commonly used for processing large joins efficiently. This feature introduces parallel partition-wise execution for the following SQL operations:

  • SELECT DISTINCT
  • Window functions

Parallel partition-wise SQL operations significantly improve query performance on partitioned tables, which leads to better response times for users.

Online Merging of Partitions and Subpartitions

You can use the ONLINE keyword with the ALTER TABLE ... MERGE PARTITIONS and ALTER TABLE ... MERGE SUBPARTITIONS SQL statements to enable online merge operations for regular (heap-organized) tables, allowing concurrent data manipulation language (DML) operations while the partition merge operation is ongoing.
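A minimal sketch (table and partition names hypothetical):

    -- merge two range partitions into one while concurrent DML continues
    ALTER TABLE sales
      MERGE PARTITIONS sales_q1_2018, sales_q2_2018
      INTO PARTITION sales_h1_2018 ONLINE;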

Performing partition maintenance operations online lets you schedule and execute them as needed, without having to plan around query-only windows. This capability both increases application availability and simplifies application development.

Modifying the Partitioning Strategy

You can change the partitioning strategy of a regular (heap-organized) table with the ALTER TABLE ... MODIFY PARTITION BY SQL statement. Modifying the partitioning strategy, such as from hash partitioning to range partitioning, can be performed offline or online. Indexes are maintained as part of the table modification. When performed in online mode, the conversion has no impact on ongoing DML operations.
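A minimal sketch (hypothetical table, originally hash partitioned):

    -- convert the table to range partitioning online; indexes are
    -- maintained as part of the operation
    ALTER TABLE orders
      MODIFY PARTITION BY RANGE (order_date) (
        PARTITION p2017 VALUES LESS THAN (DATE '2018-01-01'),
        PARTITION pmax  VALUES LESS THAN (MAXVALUE)
      ) ONLINE;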

This functionality enables partitioned tables to evolve without manually recreating the tables. Changing an existing partitioning strategy of a table online enables applications to adjust partitioning for new business requirements without application downtime.

Tuning

SQL Tuning Advisor Exadata Enhancements

SQL Tuning Advisor has additional algorithms to better tune SQL executing on Oracle Exadata Database Machine.

On Oracle Exadata Database Machine, the cost of smart scans depends on the system statistics for I/O seek time, multiblock read count, and I/O transfer speed. The values of these system statistics are usually different on Oracle Exadata Database Machine, so SQL Tuning Advisor runs an analysis to determine whether these system statistics are up to date. If gathering these statistics would improve the plan, then SQL Tuning Advisor recommends accepting a SQL profile.

New SQL Tuning Set API

DBMS_SQLSET enables you to manipulate SQL Tuning Sets. The subprograms in DBMS_SQLSET are equivalent to the SQL tuning set subprograms in DBMS_SQLTUNE.

SQL tuning sets are the inputs to several performance advisors and tools. Separation of the SQL tuning set API into a separate PL/SQL package makes it easier to manipulate SQL tuning sets programmatically.
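A minimal sketch (SQL tuning set name hypothetical); the subprogram mirrors its DBMS_SQLTUNE counterpart:

    BEGIN
      -- create an empty SQL tuning set using the new package
      DBMS_SQLSET.CREATE_SQLSET(
        sqlset_name => 'MY_STS',
        description => 'captured workload for tuning');
    END;
    /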

Concurrent SQL Execution with SQL Performance Analyzer

The default behavior of SQL Performance Analyzer is to execute SQL sequentially when testing to measure performance. In some cases, when very large SQL Tuning Sets are involved, it is beneficial to execute SQL concurrently to expedite the testing process.  This feature enables concurrent execution for SQL Performance Analyzer.

Enabling concurrent SQL execution with SQL Performance Analyzer enables the user to complete testing more quickly, and with added flexibility.

SQL Performance Analyzer Result Set Validation

SQL Performance Analyzer (SPA) Result Set Validation enables users to validate whether the result sets returned during the initial SPA test and during subsequent tests are similar. It adds assurance that the queries are executing exactly as expected.

SPA Result Set Validation gives users the assurance that the repeated SQL queries are correctly returning the exact same set of data. This validates that the queries are executing as expected and is required in certain regulatory environments.

Utilities

Data Pump Skip Continues Loading When Data Format Error is Encountered

A new option, CONTINUE_LOAD_ON_FORMAT_ERROR, is added to the Data Pump Import (impdp) DATA_OPTIONS parameter. This option directs Data Pump to skip forward to the start of the next granule if a stream format error is encountered while loading data. Most stream format errors are caused by corrupt dump files. CONTINUE_LOAD_ON_FORMAT_ERROR can be used if Data Pump encounters a stream format error and the original export database is not available to export the table data again. Note that when Data Pump skips over data, not all data from the source database is imported, potentially omitting hundreds or thousands of rows.

The DATA_OPTIONS parameter of the DBMS_DATAPUMP.SET_PARAMETER procedure has a new option, KU$_DATAOPT_CONT_LOAD_ON_FMT_ERR, that can also enable this behavior.

The current behavior prevents Data Pump from loading any table data if there is a corruption in any part of the data stream. With this new option, Data Pump is able to recover at least the data that is readable.
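A minimal PL/SQL sketch of the API route (file, directory, and job details hypothetical; the constant is assumed to be referenced through DBMS_DATAPUMP, like the other KU$_DATAOPT_* constants):

    DECLARE
      h NUMBER;
    BEGIN
      -- open a full-mode import job against an existing dump file
      h := DBMS_DATAPUMP.OPEN(operation => 'IMPORT', job_mode => 'FULL');
      DBMS_DATAPUMP.ADD_FILE(h, 'expdat.dmp', 'DATA_PUMP_DIR');
      -- skip to the next granule on stream format errors instead of aborting
      DBMS_DATAPUMP.SET_PARAMETER(h, 'DATA_OPTIONS',
          DBMS_DATAPUMP.KU$_DATAOPT_CONT_LOAD_ON_FMT_ERR);
      DBMS_DATAPUMP.START_JOB(h);
      DBMS_DATAPUMP.DETACH(h);
    END;
    /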

Diagnosability

General

Wallet Integration in ORAchk

The ORAchk daemon runs ORAchk client jobs at pre-defined intervals. Previously, the parameters configured for this daemon, including passwords, configuration parameters such as the autorun schedule, and the database credentials used to upload ORAchk collections, were stored in memory. These parameters are now stored in a client-side, password-protected Oracle wallet.

With the move toward larger consolidations comprising multiple clusters, it is becoming increasingly difficult to manage clusters on a one-by-one basis. In previous releases, ORAchk eliminated the need to manually monitor these clusters through its daemon-mode functionality: once scheduled and started on a cluster, the ORAchk daemon ran ORAchk client jobs at pre-defined intervals by itself. However, the daemon still had to be restarted manually in situations such as a machine restart. In this release, ORAchk adds wallet integration, which allows ORAchk to restart by itself and run on its previously scheduled intervals.

TFA service in Domain Services Cluster for anomaly detection

TFA collects relevant information about an issue across multiple nodes through the TFA Collector. TFA can now also be set up in receiver mode, which performs centralized log monitoring and analysis. In a Cluster Domain setup, TFA automatically runs in receiver mode in the Cluster Domain and in collector mode in the member clusters. The collector monitors and collects health and utilization metrics from all diagnostic destinations and streams that information to the receiver. The TFA receiver processes this data and passes it to the anomaly detection engine, which uses the data to diagnose issues and provide corrective actions. Users can then access this anomaly analysis through a browser interface.

A complex software system consists of components that generate diagnostic data continuously. Given the sheer volume of log files, finding anomalies in them is like finding a needle in a haystack. In addition, every problem (ticket or Service Request) is subject to a Service Level Agreement, so delays in solving it affect the business. Today, the TFA collector finds the relevant information pertaining to an issue; however, this information must still be given to Oracle Support for diagnosis and resolution, and Support, too, must work out the problem manually. The TFA service can proactively perform this issue analysis. Users can query the analysis and get a suggested fix for the issue from the TFA service. This saves money and time for both Oracle and the customer.

TFA collector to implement service request data collectors

Trace File Analyzer monitors diagnostic trace files and automatically collects relevant diagnostic data in a timely fashion, across multiple nodes, for various errors and events. Diagnostic collections are driven by components such as CRS, ASM, and the database, and currently all of the data relevant to those components is collected, with the default (no component specified) being to collect all data for all components valid for a given time period. Within Oracle Support, Service Request Data Collection documents list the diagnostics that customers should collect for different issues, such as an ORA-00600 error or an instance eviction. This feature models that Support data to provide an automated way to gather only the specific data that is pertinent to the issue.

Today, database systems have multiple components working together to fulfill business requirements. These components generate a large volume of diagnostic data that must be parsed to diagnose an issue when it occurs. The TFA collector automates this process to a certain extent by collecting only the logs relevant to the issue across multiple nodes. However, this collection is still quite broad and includes information from the logs of multiple components within the issue timeline, so Oracle Support must still manually find the specific information pertinent to the issue. TFA SRDCs (Service Request Data Collectors) collect the information more precisely, gathering only the information relevant to the issue based on models created from Service Request data. This reduces issue resolution time, which saves time and money for both Oracle and the customer.

Pre-configuration of ORAchk with Default Values

The ORAchk daemon runs ORAchk client jobs at pre-defined intervals. Previously, users had to set configuration parameters, such as the database configuration for uploading ORAchk collections, and start the daemon at least once on every cluster. ORAchk now comes pre-configured with default values and starts automatically once the grid infrastructure is installed.

In DBaaS and other highly consolidated environments with multiple clusters, it becomes difficult to manage the clusters individually. In previous releases, the ORAchk daemon would run on its own only after users manually configured it at least once on each cluster. The ORAchk daemon now comes with pre-configured default values, eliminating the need to manually start it on every cluster; it starts automatically once the grid infrastructure is installed.

Performance

Database In-Memory

Dynamic Capture Window for In-Memory Expressions

Users can define a time window in which to capture In-Memory Expressions.

In previous releases, users could only specify a capture interval for the past 24 hours or since database creation. The capture window is now flexible and dynamic, enabling users to adapt it to their workload cycle.
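A minimal sketch using the DBMS_INMEMORY_ADMIN capture-window procedures (procedure names as documented for this release; verify against your version):

    BEGIN
      DBMS_INMEMORY_ADMIN.IME_OPEN_CAPTURE_WINDOW;    -- start capturing expressions
    END;
    /
    -- ... run the representative workload ...
    BEGIN
      DBMS_INMEMORY_ADMIN.IME_CLOSE_CAPTURE_WINDOW;   -- end the capture window
    END;
    /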

Automatic In-Memory

Automatic In-Memory uses Heat Map data, column statistics, and other relevant statistics to manage objects in the IM column store. When under memory pressure, the IM column store evicts inactive segments if more frequently accessed segments would benefit from population.

In previous releases, it was difficult to know which segments would benefit the most from the IM column store. Although the In-Memory Advisor is helpful, actual usage is the best way to determine the optimum set of segments to populate. Automatic In-Memory Management maximizes the benefit of DRAM memory allocated to the IM column store and provides the best analytic response time for analytic workloads.
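A minimal sketch, assuming the INMEMORY_AUTOMATIC_LEVEL initialization parameter controls the feature (LOW and MEDIUM are the documented settings in this release):

    -- let the database manage IM column store contents automatically
    ALTER SYSTEM SET INMEMORY_AUTOMATIC_LEVEL = 'LOW' SCOPE = BOTH;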

Database In-Memory Support for External Tables

The In-Memory Column Store supports population of external tables, enabling users to perform advanced analytical queries on external data stores. This feature is particularly valuable for repeated queries of external data because it leverages the performance features of Oracle Database In-Memory and avoids unnecessary access of external storage.

Users benefit from being able to run advanced analytics on a much larger data domain than their Oracle databases. Data from external sources such as Hadoop Distributed File System (HDFS) or other Big Data sources can be summarized and populated into the IM column store. Users can run repeated ad hoc analytic queries that might be too expensive to run on source data.
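A minimal sketch (hypothetical external table; assumes population is requested explicitly with DBMS_INMEMORY.POPULATE, since external tables are not populated by queries):

    -- mark the external table for the IM column store, then populate it
    ALTER TABLE ext_sales INMEMORY;
    EXEC DBMS_INMEMORY.POPULATE(schema_name => 'SH', table_name => 'EXT_SALES');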

Flexible Parallelization Using In-Memory Dynamic Scans

In-Memory Dynamic Scans enable transparent and dynamic parallel execution of In-Memory table scans. In previous releases, the database used traditional Oracle parallel execution. Although the traditional form is still supported, IM dynamic scans are specific to the IM column store, and dynamically adjust according to CPU utilization. IM dynamic scans require the use of Oracle Database Resource Manager.

IM dynamic scans parallelize operations within an IMCU, which means that single-column operations run in parallel. Dynamic scans also parallelize across IMCUs, which means that scans of IMCUs run in parallel. In this way, In-Memory scans can run much faster by leveraging available CPU resources dynamically and transparently.

In-Memory Optimized Arithmetic

In-Memory Optimized Arithmetic encodes Oracle Numbers as fixed-width native integers scaled by a common exponent. The database can use the new SIMD-optimized format to perform calculations in hardware.

Because not all row sources support the new format, the IM column store must also store Oracle Numbers in their native format. The increased space overhead can be as high as 15% on some tables. Because the performance improvement may not benefit all customer environments, this feature is disabled by default. You can enable it by setting an initialization parameter and configuring specific compression levels for In-Memory objects.
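A minimal sketch (table name hypothetical), assuming the INMEMORY_OPTIMIZED_ARITHMETIC initialization parameter and the FOR QUERY LOW compression level gate the feature:

    -- opt in at the system level
    ALTER SYSTEM SET INMEMORY_OPTIMIZED_ARITHMETIC = 'ENABLE' SCOPE = BOTH;
    -- the optimized number format applies to FOR QUERY LOW tables
    ALTER TABLE sales INMEMORY MEMCOMPRESS FOR QUERY LOW;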

Both simple and GROUP BY aggregations can benefit significantly from In-Memory Optimized Arithmetic, for a relatively small increase in memory overhead. Performance engineers have observed consistent gains of 3X to 9X.

General

Scalable Sequences

A sequence can be made scalable by specifying the SCALE clause in the CREATE SEQUENCE or ALTER SEQUENCE statement. A scalable sequence is particularly efficient when used to generate unordered primary or unique keys for data ingestion workloads with a high level of concurrency. Scalable sequences significantly reduce sequence and index block contention and provide better data load scalability than configuring a very large sequence cache using the CACHE clause of the CREATE SEQUENCE or ALTER SEQUENCE statement.
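A minimal sketch (sequence name hypothetical):

    -- SCALE prefixes each value with an instance/session offset so that
    -- concurrent sessions generate non-clustered key values; EXTEND widens
    -- the generated value rather than constraining it to MAXVALUE's digits
    CREATE SEQUENCE order_id_seq
      START WITH 1
      INCREMENT BY 1
      SCALE EXTEND;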

Scalable sequences improve the performance of concurrent data load operations, especially when the sequence values are used for populating primary key columns of tables in single Oracle database instances as well as Oracle RAC databases.

Memoptimized Rowstore

The Memoptimized Rowstore enables fast lookup of data for the tables that are frequently queried based on primary key columns.

The Memoptimized Rowstore improves the data query performance of the applications, such as Internet of Things (IoT), which frequently query tables based on primary key columns.
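A minimal sketch (hypothetical table; assumes the memoptimize pool has been sized with the MEMOPTIMIZE_POOL_SIZE initialization parameter and that the table has a primary key):

    CREATE TABLE sensor_readings (
      sensor_id NUMBER PRIMARY KEY,   -- fast lookups are keyed on this column
      reading   NUMBER
    ) SEGMENT CREATION IMMEDIATE
      MEMOPTIMIZE FOR READ;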

RAC and Grid

ASM and ACFS

Storage Conversion for Member Clusters

You can use ASMCMD commands to administer the configuration of member clusters. For example, you can change the storage method from direct Oracle ASM to indirect Oracle ASM, or change from indirect Oracle ASM to direct Oracle ASM.

ASM Data Reliability Enhancements

This enhancement represents two changes. The first extends the default disk failure timeout interval (DISK_REPAIR_TIME) from 3.6 hours to 12 hours. In many environments, 12 hours is better suited for safeguarding against data loss from multiple disk failures while reducing the unnecessary overhead of prematurely dropping a disk during a transient failure. The second change provides a new disk group attribute, CONTENT_HARDCHECK.ENABLED, that optionally enables or disables Hardware Assisted Resilient Data (HARD) checking in Exadata environments.

These two enhancements give Exadata customers greater control over how ASM provides essential data protection, specifically Hardware Assisted Resilient Data checking and the automatic dropping of failed disks from an ASM disk group.

ASM Database Cloning

ASM database cloning provides cloning of Multitenant databases (PDBs). This feature works by leveraging ASM redundancy. Previously, as a protection against data loss during hardware failure, ASM provided up to two additional redundant copies of a file’s extents. Flex Disk Groups now can provide up to five redundant copies, in which one or more of the copies can be split off to provide a near instantaneous replica.

The advantage of ASM database cloning, when compared with storage array-based replication, is that ASM database clones replicate complete databases (PDBs) rather than files or blocks of physical storage. Storage array or file system-based replication, in a database environment, requires coordination between database objects being replicated with the underlying technology doing the replication. With ASM database clones, the administrator does not need to understand the physical storage layout. This is another aspect of database-oriented storage management provided with ASM Flex Disk Groups.

Dropping Oracle ASM File Groups With a Cascade Option

You can drop a file group and its associated files (drop including content) using the CASCADE keyword with the ALTER DISKGROUP ... DROP FILEGROUP SQL statement.
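A minimal sketch (disk group and file group names hypothetical):

    -- drop the file group and all files associated with it
    ALTER DISKGROUP data DROP FILEGROUP pdb1_fg CASCADE;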

Converting Normal or High Redundancy Disk Groups to Flex Disk Groups without Restricted Mount

You can convert a conventional disk group (a disk group created before Oracle Database 18c) to an Oracle ASM flex disk group without using the restricted mount (MOUNTED RESTRICTED) option.

ASM flex disk groups provide several new capabilities, such as quota management and database cloning. In Oracle 18c, customers migrating from normal or high redundancy disk group environments benefit from a seamless means of converting existing disk groups to flex disk groups. Before 18c, customers migrating disk groups had to mount them in a restricted mode that prevented any configuration change during the transition.

Oracle ACFS Remote Service for Member Clusters

In addition to support for Oracle member clusters with attached local storage, Oracle ACFS provides Oracle ACFS remote service for native Oracle ACFS functionality on member clusters with no attached local storage (indirect storage member clusters). Utilizing an Oracle ACFS deployment on the Oracle Domain Services Cluster (DSC), Oracle ACFS remote service can be used for both Oracle Application Clusters and database member clusters to enable a flexible and file system-based deployment of applications and databases. Unlike NFS-based exports, Oracle ACFS remote service fully supports advanced Oracle ACFS features, such as replication, snapshots, and tagging, on the destination member cluster.

Cluster Health Advisor

Cluster Health Advisor Cross Database Analysis Support

In consolidated and DBaaS private cloud deployments, multiple databases share the same physical server and its resources. In its previous release, Cluster Health Advisor analyzed each hosted database instance individually and could only detect whether the cause of a performance or availability issue was within that instance or external to it. With its new cross-database analysis support, external issues can be traced to a specific database, resulting in higher-confidence diagnoses and improved corrective actions.

Early warnings, targeted diagnosis, and corrective actions are critical capabilities for modern database deployments that are designed to be available and performant 24x7. Consolidated and DBaaS private clouds are particularly difficult to manage because of interactions between databases sharing the same physical resources and the one-to-many DBA-to-database staffing within these deployments. Oracle Cluster Health Advisor now supports analyzing these complex multi-database environments. Surfacing early warning notifications with a specific database cause and corrective action speeds triage and allows administrators to proactively maintain availability and performance, saving IT staffing and downtime costs.

Cluster Health Advisor Cross Cluster Analysis Support

In its previous release, Oracle Cluster Health Advisor analyzed each cluster node individually and could only detect whether the cause of a performance or availability issue was within that node or external to it. With the new cross-cluster analysis support, Oracle Cluster Health Advisor can trace external issues to a specific cluster node, resulting in higher-confidence diagnoses and improved corrective actions.

Oracle Cluster Health Advisor's support for tracing the cause of database or cluster performance degradation, or of impending problems, to a specific root cause on a specific node greatly improves the response time for applying corrective actions and helps prevent loss of database availability or violations of SLAs.

General

Shared Single Client Access Names

A shared single client access name (SCAN) enables one set of SCAN virtual IPs (VIPs) and listeners (referred to as the SCAN setup), hosted on one dedicated cluster in the data center, to be shared with other clusters. This avoids deploying one SCAN setup per cluster, which reduces not only the number of SCAN-related DNS entries but also the number of VIPs that must be deployed for a cluster configuration.

A shared SCAN simplifies the deployment and management of groups of clusters in the data center by providing a shared SCAN setup that can be used by multiple systems at the same time. 

NodeVIP-Less Cluster

NodeVIP-Less Cluster enables the configuration of a cluster without the need to explicitly configure node VIPs on the public network. While the VIP resources are still maintained at the Clusterware level, there is no need to provision additional IPs for each node in the cluster, which in larger cluster estates can save hundreds of IPs per subnet.

NodeVIP-Less Cluster simplifies cluster deployments and management by eliminating the need for additional IPs per node in the cluster.

Cluster Domain Proxies

Cluster domain proxies provide resource state change notifications from one cluster to another, and enable resources in one cluster to act on behalf of dependencies on resources in another cluster. You can use cluster domain proxies, for example, to ensure that an application in an Oracle Application Member Cluster only starts if its associated database hosted in an Oracle Database Member Cluster is available. Similarly, you can use cluster domain proxies to ensure that a database in an Oracle Database Member Cluster only starts if at least one Oracle Automatic Storage Management (Oracle ASM) instance on the Domain Services Cluster is available.

Cluster domain proxies simplify manageability and increase availability for applications running on distributed infrastructures spanning multiple clusters.

gridSetup-based Management

Gold image-based installation, using gridSetup.sh or gridSetup.bat, replaces the method of using Oracle Universal Installer for installing Oracle Grid Infrastructure. You can use gridSetup-based management to perform management tasks such as cloning, addNode operations, deleteNode operations, and downgrade using the gridSetup.sh or the gridSetup.bat command.

gridSetup-based management simplifies deployment and deployment-related management tasks with a unified and simple tool.

Reader Nodes Performance Isolation

In the Reader Nodes architecture, updates made on the read-write instances on the Hub nodes are immediately propagated to the read-only instances on the Leaf nodes, where they can be used for online reporting or instant queries. Reader Nodes Performance Isolation enables the OLTP workload on the Hub nodes to continue even if the associated database instances on the Leaf nodes fail to process the updates.

Horizontal scaling using Reader Nodes is further improved by Reader Nodes Performance Isolation, because slow Leaf node-based instances neither slow down the OLTP workload nor otherwise impact it.

UCP Support for RAC Affinity Sharding

RAC affinity sharding ensures that a group of related cache entries is contained within a single cache partition. When Data Affinity is enabled on the Oracle RAC database, data in the affinitized tables is partitioned in such a way that a particular partition, or subset of rows for a table, is affinitized to a particular Oracle RAC database instance.

The improved cache locality and reduced internode synchronization with Data Affinity leads to higher application performance and scalability.

Security

General

Ability to Create a User-Defined Master Encryption Key

This release introduces the ability to create a user-defined master encryption key, also known as “bring your own key.”

To create the user-defined key, you supply your own master key identification value when you create the master encryption key by using the ADMINISTER KEY MANAGEMENT SET [ENCRYPTION] KEY or ADMINISTER KEY MANAGEMENT CREATE [ENCRYPTION] KEY statements. This enhancement applies to master encryption keys that are used in software keystores only, not hardware keystores. It can be used in non-multitenant, standalone environments and in multitenant environments.
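A minimal sketch (all values hypothetical; the key ID and key are supplied as hexadecimal in the 'mkid:mk' format, assumed here to be a 16-byte ID and a 32-byte key):

    ADMINISTER KEY MANAGEMENT SET ENCRYPTION KEY
      '00112233445566778899AABBCCDDEEFF:<64-hex-character-key>'
      IDENTIFIED BY keystore_password WITH BACKUP;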

The primary benefit of this feature is that it enables you to generate keys outside Oracle Database, in your own trusted environment, and then use those keys as the master encryption keys for Oracle Advanced Security TDE.

For example, in cloud deployments, you can create these keys in your trusted environment that you control instead of relying on a cloud provider. If you want to insert a key that you trust for encryption but later on decide that you must replace the key, then you can insert another key without the intervention of the cloud provider.

To complement this feature, you can also configure the automatic removal of inactive master encryption keys from an Oracle Data Guard standby database.

Ability to Use Encrypted Passwords for Database Links with Oracle Data Pump

The behavior for handling database link passwords has changed in this release.

Passwords in database links are now encrypted. Oracle Data Pump handles the export and import of these passwords. Import operations from older releases and export operations to older releases are still supported.

The benefit of this feature is that it prevents an intruder from decrypting an encrypted database link password.

Ability to Create a Keystore for Each Pluggable Database

Each pluggable database (PDB) now can have its own keystore, instead of there being only one keystore for the entire container database (CDB).

Each PDB keystore can have its own password in addition to having its own master encryption key.

The advantage of this feature is that it enables independent key management operations to be performed by each tenant (PDB) in a multitenant environment rather than having to share a keystore at the CDB root level. This feature benefits both multitenant and non-multitenant environments because it provides parameters to facilitate the configuration of the keystore location and the keystore type, eliminating the need for editing the sqlnet.ora file.

This feature continues to support existing behavior where PDBs share a single keystore with the container database (CDB).

This feature further allows some PDBs to share the keystore with the container database (CDB) and some PDBs to have their own keystores.


Isolation of keystores among tenants is a desirable property in a multitenant environment. This feature offers greater isolation between PDBs for the following reasons:

  • Each PDB has its own keystore, which is not shared with the container or with other PDBs attached to the same CDB.
  • Each PDB keystore has its own password, which is not shared with the container or with other PDBs attached to the same CDB.
  • Each tenant (PDB) can perform key management operations independently, rather than sharing a keystore at the CDB root level.

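One plausible configuration sketch (paths hypothetical), assuming the WALLET_ROOT and TDE_CONFIGURATION initialization parameters introduced alongside this feature:

    -- at the CDB level: set the keystore root (takes effect after restart)
    ALTER SYSTEM SET WALLET_ROOT = '/u01/app/oracle/wallets' SCOPE = SPFILE;
    -- from within a PDB that should keep its own keystore (isolated mode)
    ALTER SYSTEM SET TDE_CONFIGURATION = 'KEYSTORE_CONFIGURATION=FILE';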

Ability to Use Oracle Data Pump to Export and Import the Unified Audit Trail

Starting with this release, you can export and import the unified audit trail as part of a full database export or import operation using Oracle Data Pump.

There is no change to the user interface. When you export or import a database, the unified audit trail is automatically included in the Data Pump dump files.

This feature benefits users who, as in previous releases, must create dump files of audit records.

Integration of Active Directory Services with Oracle Database

Starting with this release, you can authenticate and authorize users directly with Microsoft Active Directory.

With centrally managed users (CMU), Oracle Database users and roles can map directly to Active Directory users and groups without using Oracle Enterprise User Security (EUS) or another intermediate directory service. EUS is not being replaced or deprecated; this new feature is simply another, simpler option if you only want to authenticate and authorize users with Microsoft Active Directory.

The direct integration with Microsoft Active Directory supports better security through faster and easier configuration with the enterprise identity management architecture. In the past, users may have avoided the security practice of integrating the database with directory services because of the difficulty and complexity involved. With the direct integration, you can improve your security posture by more easily integrating the database with the enterprise directory service.

Ability to Create Schema Only Accounts

You now can create schema only accounts, for object ownership without allowing clients to log in to the schema.

A user (or other client) cannot log in to the database schema unless the account is modified to accept an authentication method. However, clients can connect to this type of schema user through single-session proxy authentication.
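A minimal sketch (user name hypothetical):

    -- schema-only account: owns objects, accepts no direct logins
    CREATE USER app_owner NO AUTHENTICATION;
    -- later, to permit logins, add an authentication method:
    ALTER USER app_owner IDENTIFIED BY "app_owner_pwd_1";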

Ability to Encrypt Sensitive Credential Data in the Data Dictionary

Starting with this release, you can encrypt sensitive credential data that is stored in the data dictionary SYS.LINK$ and SYS.SCHEDULER$_CREDENTIAL system tables.

In previous releases, and by default in this release, the data in these tables is obfuscated. However, because of the rise of de-obfuscation algorithms that are available on the Internet, it is important to use a more secure solution to protect this type of sensitive data. You can manually encrypt this data by using the ALTER DATABASE DICTIONARY SQL statement.
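A minimal sketch (assumes an open TDE keystore and the required administrative privilege):

    -- switch the credential data in SYS.LINK$ and SYS.SCHEDULER$_CREDENTIAL
    -- from obfuscation to encryption
    ALTER DATABASE DICTIONARY ENCRYPT CREDENTIALS;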

Encryption of Sensitive Data in Database Replay Files

Database replay captures database workloads and replays them on a test system. In previous versions, the capture files were not encrypted, possibly allowing unauthorized viewing of sensitive data. This enhancement encrypts all sensitive data in the database capture files.

The data captured during Database Replay is stored in capture files. This enhancement encrypts the data stored in these files, protecting it from unauthorized access and helping users comply with required security standards.

Oracle Database Vault Support for Oracle Database Replay

In this release, you can perform Oracle Database Replay operations in an Oracle Database Vault environment.

Enhancements to Oracle Database Vault Simulation Mode

Oracle Database Vault has had a number of changes to simulation mode for this release. 

  • Simulation mode now captures all mandatory realm violations from a SQL statement.
  • Simulation mode can capture the full call stack information.
  • The default trusted path context factors are now available as separate columns instead of being concatenated together.

Capturing all mandatory realm violations from a SQL statement enables you to see all of the changes that you may need to make. Otherwise, the first mandatory realm violation may mask other violations that would not be noticed until the original fix is completed and another regression test is run. This enhancement enables faster regression testing and application certification.

Seeing the full call stack helps you to identify the SQL statement that has the violation. In many cases, similar SQL statements are called by different parts of the application. This feature helps an application developer quickly identify exactly which application code triggered the violation.

Context factors are used to build trusted paths for realms and command rules. Some commonly used factors for trusted paths were extracted from the single-string representation used in the last release into their own columns. This enhancement makes it much easier to identify the factors to use in trusted path rule sets.