Develop Applications with Oracle Database API for MongoDB

Considerations when developing or migrating applications — a combination of (1) how-to information and (2) descriptions of differences and possible adjustments.

Indexing and Performance Tuning

Oracle AI Database offers multiple technologies to accelerate queries over JSON data, including indexes, materialized views, in-memory column storage, and Exadata storage-cell pushdown. Which performance-tuning approaches you take depends on the needs of your application.

If the value of your Oracle AI Database initialization parameter compatible is 23 or greater, then you can use MongoDB index operations createIndex and dropIndex to automatically create and drop the relevant Oracle indexes. If the value of parameter compatible is less than 23, then such MongoDB index operations are not supported; they are ignored.
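
For example, with compatible 23 or greater, standard MongoDB commands such as these create and then drop a corresponding Oracle index (a sketch; the collection, field, and index names are illustrative):

db.orders.createIndex({"PONumber" : 1}, {"name" : "poNumIdx", "unique" : true})
db.orders.dropIndex("poNumIdx")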

Regardless of your database release, you can create whatever Oracle Database indexes you need directly, using (1) the JSON page of Oracle AI Database Actions (see Creating Indexes for JSON Collections), (2) Simple Oracle Document Access (SODA), or (3) SQL (see Indexes for JSON Data in Oracle AI Database JSON Developer’s Guide). Using the JSON page is perhaps the easiest approach to indexing JSON data.

Note: MongoDB allows different collections in the same “database” to have indexes of the same name. This is not allowed in Oracle AI Database — the name of an index must be unique across all collections of a given database schema (“database”).
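
For example, the second of these commands succeeds in MongoDB but raises an error with Oracle AI Database, because index name idx1 is already used in the schema (a hypothetical sketch; the collection and index names are illustrative):

db.orders.createIndex({"PONumber" : 1}, {"name" : "idx1"})
db.sales.createIndex({"region" : 1}, {"name" : "idx1"})   // Error: name idx1 is already in use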

Consider, for example, indexing a collection, named orders, of purchase-order documents such as this one:

{ "PONumber" : 1600,
  "User" : "ABULL",
  "LineItems" : [{ "Part"     : { "Description" : "One Magic Christmas",
                                  "UnitPrice"   : 19.95,
                                  "UPCCode"     : 13131092899 },
                   "Quantity" : 9.0 },
                 { "Part"     : { "Description" : "Lethal Weapon",
                                  "UnitPrice"   : 19.95,
                                  "UPCCode"     : 85391628927
                                },
                   "Quantity" : 5.0 } ]}

Two important use cases are (1) indexing a singleton scalar field, that is, a field that occurs only once in a document, and (2) indexing a scalar field in objects within the elements of an array. Indexing the value of field PONumber is an example of the first case. Indexing the value of field UPCCode is an example of the second case.

Example 2-1, Example 2-2, and Example 2-3 illustrate the first case. Example 2-4 and Example 2-5 illustrate the second case.

You can also index GeoJSON (spatial) data, using a function-based SQL index that returns SDO_GEOMETRY data. And for all JSON data you can create a JSON search index, and then perform full-text queries using SQL/JSON condition json_textcontains.
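
For example, this sketch creates a JSON search index on collection orders and uses json_textcontains to find documents whose part descriptions contain a given word (the index name is illustrative; the table and column names match the orders collection used throughout this topic):

CREATE SEARCH INDEX orders_search_idx ON orders (data) FOR JSON;

SELECT o.data
  FROM orders o
 WHERE json_textcontains(o.data, '$.LineItems.Part.Description', 'Magic');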

Example 2-1 Indexing a Singleton Scalar Field Using the JSON Page of Database Actions

To create an index for field PONumber using the JSON Page, do the following.

  1. Right-click the collection name (orders) and select Indexes from the popup menu.


  2. In the New Index page:
    • Type * in the Properties search box.

      This populates the Properties list with paths to all scalar fields in your collection. These paths are provided by sampling the collection data using a JSON data guide — see JSON_DATAGUIDE in Oracle AI Database SQL Language Reference.

      If you turn on option Advanced, by pushing its slider to the right, then the types of the listed scalar fields are also shown. The types shown are those picked up by sampling the collection. But you can change the type of a field for indexing purposes.

    • Select the paths of the fields to be indexed. In this case we want only a single scalar field indexed, PONumber, so select that.

      Note: This dialog box lets you select multiple paths. If you select more than one path then a composite index is created for the data at those paths (Footnote 1). But if you want to index two different fields separately then create two indexes, not one composite index (which indexes both fields together).

      The index data type is determined automatically by the types of the data at the selected paths, but you can control this by turning off option Automatic and changing the data types. For example, JSON numbers in the collection data for a given field cause a type of number to be listed, but you can edit this to VARCHAR2 to force indexing as a string value.

    • The values of field PONumber are unique (the same numeric value is not used for the field more than once in the collection), so select Unique index.

    • Select Index Nulls also. This is needed for queries that use ORDER BY to sort the results. It causes every document to have an entry in the index.

    • The values in field PONumber are JSON numbers, which means the index can be used for numerical comparison.


Example 2-2 Indexing a Singleton Scalar Field Using SODA

Each SODA implementation (programming language or framework) that supports indexing provides a way to create an index. They all use a SODA index specification to define the index to be created. For example, with SODA for REST you use an HTTP POST request, passing URI argument action=index, and providing the index specification in the POST body.

This is a SODA index specification for a unique index named poNumIdx on field PONumber:

{ "name" : "**poNumIdx**",
  "unique" : true,
  "fields" : [ { "path"     : "**PONumber**",
                 "dataType" : "NUMBER",
                 "order"    : "ASC" } ] }

Example 2-3 Indexing a Singleton Scalar Field Using SQL

You can use Database Actions to create an index for field PONumber in column data of table orders with this SQL code. This uses SQL/JSON function json_value to extract values of field PONumber.

The code uses ERROR ON ERROR handling, to raise an error if a document has no PONumber field or it has more than one.

Item method numberOnly() is used in the path expression that identifies the field to index, to ensure that the field value is numeric.

Method numberOnly() is used instead of method number(), because number() allows also for conversion of non-numeric fields to numbers. For example, number() converts a PONumber string value of "42" to the number 42.

Other such “only” item methods, which similarly provide strict type checking, include stringOnly(), dateTimeOnly(), and binaryOnly(), for strings, dates, and binary values, respectively.
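
These two queries sketch the difference (json_value defaults to NULL ON ERROR, so the strict method simply returns no value for the string input):

SELECT json_value('{"PONumber" : "42"}', '$.PONumber.number()')     FROM dual;  -- returns 42
SELECT json_value('{"PONumber" : "42"}', '$.PONumber.numberOnly()') FROM dual;  -- returns NULL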

CREATE UNIQUE INDEX **"poNumIdx"** ON **orders**
  (json_value(**data**, '$.PONumber.**numberOnly()**' **ERROR ON ERROR**))

See Also: SQL/JSON Path Expression Item Methods in Oracle AI Database JSON Developer’s Guide

Example 2-4 Creating a Multivalue Index For Fields Within Elements of an Array

Starting with Oracle Database 21c you can create a multivalue index for the values of fields that can occur multiple times in a document because they are contained in objects within an array (objects as elements or at lower levels within elements).

This example creates a multivalue index on collection orders for values of field UPCCode. It uses item method numberOnly(), so it applies only to numeric UPCCode fields.

CREATE MULTIVALUE INDEX mvi_UPCCode ON orders o
  (o.data.LineItems.Part.UPCCode.numberOnly());
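
A query of the following form is the kind of query that such a multivalue index supports (a sketch; whether the index is actually picked up is decided by the optimizer):

SELECT o.data
  FROM orders o
 WHERE json_exists(o.data, '$.LineItems.Part?(@.UPCCode.numberOnly() == 85391628927)');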

See Also: Creating Multivalue Function-Based Indexes for JSON_EXISTS in Oracle AI Database JSON Developer’s Guide

Example 2-5 Creating a Materialized View And an Index For Fields Within Elements of an Array

Prior to Oracle Database 21c you cannot create a multivalue index for fields such as UPCCode, which can occur multiple times in a document because they are contained in objects within an array (objects as elements or at lower levels within elements).

You can instead, as in this example, create a materialized view that extracts the data you want to index, and then create a function-based index on that view data.

This example creates materialized view mv_UPCCode with column upccode, which is a projection of field UPCCode from within the Part object in array LineItems of column data of table orders. It then creates index mv_UPCCode_idx on column upccode of the materialized view (mv_UPCCode).

CREATE MATERIALIZED VIEW mv_UPCCode
  BUILD IMMEDIATE
  REFRESH FAST ON STATEMENT WITH PRIMARY KEY
  AS SELECT o.id, jt.upccode
       FROM orders o,
            json_table(data, '$.LineItems[*]'
                       ERROR ON ERROR NULL ON EMPTY
              COLUMNS (upccode NUMBER PATH '$.Part.UPCCode')) jt;

CREATE INDEX mv_UPCCode_idx ON mv_UPCCode(upccode);
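
You can then use the indexed view column to restrict documents in the collection, for example (a sketch; it assumes key column id of the orders backing table, as used in the view definition):

SELECT o.data
  FROM orders o
 WHERE o.id IN (SELECT mv.id FROM mv_UPCCode mv WHERE mv.upccode = 85391628927);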

The query optimizer is responsible for finding the most efficient method for a SQL statement to access requested data. In particular, it determines whether to use an index that applies to the queried data, and which index to use if more than one is relevant. In most cases the best guideline is to rely on the optimizer.

In some cases, however, you might prefer to specify that a particular index be picked up for a given query. You can do this with a MongoDB hint that names the index. (Oracle does not support the use of MongoDB index specifications — just provide the index name.)

For example, this query uses index poNumIdx on collection orders, created in Example 2-1.

db.orders.find({"PONumber":1600})**.hint("poNumIdx")**

Alternatively, you can specify an index to use by passing an Oracle SQL hint, using query-by-example (QBE) operator $native, which is an Oracle extension to the MongoDB hint syntax.

The argument for $native has the same syntax as a SQL hint string (that is, the actual hint text, without the enclosing SQL comment syntax /*+...*/). You can pass any SQL hint using $native. In particular, you can turn on monitoring for the current SQL statement using hint MONITOR. This code does that for a find() query:

db.orders.find().hint({"$native":"MONITOR"})


Users, Authentication, and Authorization

Oracle AI Database security differs significantly from that of MongoDB. This topic describes the security model of Oracle Database API for MongoDB: the creation of users, their authentication, and their authorization to perform different operations.

By default, MongoDB does not enable user authentication and authorization checks. Oracle AI Database always requires authentication, and it always verifies that a connected user is authorized to perform a requested operation. A valid username and password must be provided for authentication.

Oracle Database API for MongoDB supports only the following connection-option values for authentication: authMechanism=PLAIN (SASL) and authSource=$external. These are the values used in the connection strings shown in this documentation.

Oracle Database API for MongoDB relies on Oracle AI Database users, privileges, and roles. You cannot add or modify these users and roles using MongoDB clients or drivers. You can instead do this using SQL or Oracle Database Actions. The minimum Oracle AI Database roles required to use the API are CONNECT, RESOURCE, and SODA_APP.

A user (database schema) also needs to be enabled for use with Oracle REST Data Services (ORDS). This can be done by invoking PL/SQL procedure ORDS.enable_schema or using Oracle AI Database Actions.
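
For example, this SQL and PL/SQL sketch creates and provisions such a user (the username, password placeholder, tablespace USERS, and URL-mapping values are illustrative assumptions, not requirements):

CREATE USER appuser IDENTIFIED BY "<password>";
GRANT CONNECT, RESOURCE, SODA_APP TO appuser;
ALTER USER appuser QUOTA UNLIMITED ON USERS;  -- Tablespace USERS is an assumption.

BEGIN
  ORDS.enable_schema(p_enabled             => TRUE,
                     p_schema              => 'APPUSER',
                     p_url_mapping_type    => 'BASE_PATH',
                     p_url_mapping_pattern => 'appuser',
                     p_auto_rest_auth      => TRUE);
END;
/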

For MongoDB, a “database” is a set of collections. For Oracle Database API for MongoDB, this corresponds to an Oracle AI Database schema.

Note:

Using Oracle Database API for MongoDB to drop a “database” does not drop the underlying database schema. Instead, it drops all collections within the schema.

An administrative user can drop a schema using SQL (for example, using Database Actions with an Autonomous Oracle AI Database).

For the API, a username must be a database schema name. The name is case-insensitive, and it cannot start with a nonalphabetic character (including a numeral). The username must be provided together with a secure password.

Normally, a user of the API can only perform operations within its schema (the username is the schema name). Examples of such operations include creating new collections, reading and writing documents, and creating indexes.

When an administrative user tries to insert data into a database schema (user) that does not exist, that schema is created automatically as a schema-only account, which means that it does not have a password and it cannot be logged into. The new schema is granted these privileges: SODA_APP, CREATE SESSION, CREATE TABLE, CREATE VIEW, CREATE SEQUENCE, CREATE PROCEDURE, and CREATE JOB. The schema is also given an unlimited tablespace quota, and is enabled for using Oracle REST Data Services (ORDS).
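
For example, in mongosh, connected as an administrative user, something like this would create schema SCOTT2 automatically (a sketch; the schema and collection names are illustrative):

admin> use scott2
scott2> db.mycollection.insertOne({"hello" : "world"})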

For an ordinary user of the API, a MongoDB shell command (such as use <database>) that switches from the current MongoDB database to another one is typically not supported — switching to another database schema raises an error.

However, an administrative user, meaning one that has all of the privileges CREATE USER, ALTER USER, and DROP USER, can create new users (database schemas), and can access any schema as any user.

An administrative user can, for example, switch to another database (schema) with use <database>, create collections in that schema, and read and write documents there. And as described above, inserting data into a schema that does not exist creates that schema automatically.

Oracle recommends that you not allow production applications to use an administrative user. Applications should instead connect as ordinary users, with a minimum of privileges. In particular, connect an application to the database using a MongoClient that is specific to a particular schema (user).


Migrate Application Data from MongoDB to Oracle Database

This topic describes some ways to export your JSON data from MongoDB and then import it into Oracle AI Database, and it presents migration considerations.

You can migrate your application data in either of these ways: (1) export it from MongoDB with mongoexport and import it with mongoimport (Example 2-6), or (2) stage the exported data in a cloud object store and load it with PL/SQL procedure DBMS_CLOUD.copy_collection (Example 2-7).

Example 2-6 Migrate JSON Data to Oracle AI Database Using mongoexport and mongoimport

This example exports collection sales from MongoDB to file-system file sales.json. It then imports the data from that file to Oracle AI Database as collection sales. The user is connected to host <host> as database schema <user> with password <password>.

mongoexport --collection=sales --out sales.json

mongoimport 'mongodb://<user>:<password>@<host>:27017/<user>?authMechanism=PLAIN&authSource=$external&ssl=true' --collection=sales --file=sales.json

Note:

Use URI percent-encoding to replace any reserved characters in your connection-string URI — in particular, characters in your username and password. These are the reserved characters and their percent encodings:

!    #    $    %    &    '    (    )    *    +
%21  %23  %24  %25  %26  %27  %28  %29  %2A  %2B

,    /    :    ;    =    ?    @    [    ]
%2C  %2F  %3A  %3B  %3D  %3F  %40  %5B  %5D

For example, if your username is RUTH and your password is @least1/2#? then your MongoDB connection string to server <server> might look like this:

'mongodb://RUTH:%40least1%2F2%23%3F@<server>:27017/ruth/ ...'

Depending on the tools or drivers you use, you might be able to provide a username and password as separate parameters, instead of as part of a URI connection string. In that case you likely won’t need to encode any reserved characters they contain.


Example 2-7 Loading JSON Data Into a Collection Using DBMS_CLOUD.COPY_COLLECTION

This example loads data from the Oracle Cloud Object Store into a new collection, newCollection, using PL/SQL procedure DBMS_CLOUD.copy_collection. It assumes that the data was exported from MongoDB to your file system and then imported from there to the object-store location that’s passed as the value of parameter file_uri_list.

The value passed as copy_collection parameter format is a JSON object with fields recorddelimiter and type. Here, recorddelimiter value '\n' means that each line of the file is a separate document, and type value ejson means that the input is interpreted as MongoDB Extended JSON.

See DBMS_CLOUD Package Format Options in Using Oracle Autonomous AI Database Serverless for information about parameter format.

BEGIN
  DBMS_CLOUD.copy_collection(
    collection_name => 'newCollection',
    file_uri_list   => 'https://objectstorage.../data.json',
    format          => json_object(
                         'recorddelimiter' : '''\n''',
                         'type'            : 'ejson'));
END;
/


MongoDB Aggregation Pipeline Support

Oracle Database API for MongoDB supports MongoDB aggregation pipelines, that is, MongoDB command aggregate. It lets you use pipeline code to execute a query as a sequence of operations. You can also use SQL as a declarative alternative to this procedural approach.

MongoDB’s aggregation pipeline is essentially a weak emulation of SQL capabilities. With MongoDB you express operations such as sorting, grouping, and ordering as separate steps in a pipeline. This approach is procedural: you specify how to execute a query as a sequence of operations.

SQL on the other hand is declarative. You specify the query result you want, and the optimizer picks an optimal execution plan based on available indexes, data statistics, cost estimate, and so on. In other words, you specify what you want done, and the optimizer, not you, determines how it should be done.

Oracle AI Database SQL support of JSON data includes operating on documents and collections, as well as joining JSON and non-JSON data (relational, spatial, graph, …). As a user of Oracle Database API for MongoDB you can apply SQL directly to JSON data without worrying about manually specifying and sequencing any specific operations.

But if you do use MongoDB aggregation pipeline code then the MongoDB API automatically translates the pipeline stages and operations into equivalent SQL code, and the optimizer picks the best execution plan possible. The API supports a subset of the MongoDB aggregation pipeline stages and operations — see Aggregation Pipeline Operators for details.

Unlike MongoDB, Oracle AI Database does not limit the size of the data to be sorted, joined, or grouped. You can use it for reporting or analytical work that spans millions of documents across any number of collections.

You can use Oracle AI Database simplified dot notation for JSON data, or standard SQL/JSON functions json_value, json_query, and json_table, to extract values from your JSON data for reporting or analytic purposes. You can convert relational and other kinds of data (including spatial and graph data) to JSON data using the SQL/JSON generation functions. You can join JSON data from multiple tables and collections with a single SQL FROM clause.

A MongoDB aggregation pipeline performs operations on JSON documents from one or more collections. It’s composed of successive stages, each of which performs document operations and passes the resulting documents to the next stage for further processing. The operations for any stage can filter the documents passed from the previous stage, transform (update) them, or even create new documents, for the next stage. Transformation can involve the use of aggregate operators, also called accumulators, such as $avg (average), which can combine field values from multiple documents.

Each stage in a pipeline is represented by an aggregation expression, which is a JSON value. See the MongoDB Aggregation Pipeline documentation for more background.

You can use declarative SQL code to accomplish what you would otherwise use an aggregation pipeline for. This is particularly relevant if your Oracle Database parameter compatible is less than 23, in which case most MongoDB aggregation pipelines are not supported. Example 2-8 illustrates this.

Example 2-8 Using SQL Code Instead of MongoDB Aggregation Pipeline Code

This example calculates average revenues by zip code. It first shows a MongoDB aggregation pipeline expression to do this; then it shows equivalent SQL code.

MongoDB aggregation pipeline:

This code tells MongoDB how to calculate the result; it specifies the order of execution.

db.sales.aggregate(
  [{"$group" : {"_id"    : "$address.zip",
                "avgRev" : {"$avg" : "$revenue"}}},
   {"$sort"  : {"avgRev" : -1}}])

SQL:

This code specifies the grouping and order of the output presentation declaratively. It does not specify how the computation is to be carried out, including the order of execution. It simply says that the results are to be grouped by zipcode and presented in descending order of the average revenue figures. The query returns rows of two columns with scalar values for zipcode (a string) and average revenue (a number).

SELECT s.data.address.zip.string(),
       avg(s.data.revenue.number())
  FROM sales s
  GROUP BY s.data.address.zip.string()
  ORDER BY 2 DESC;

The following query is similar, but it provides the result as rows of JSON objects, each with a string field zip, for the zipcode, and a numeric field avgRev, for the average revenue. SQL/JSON generation function json_object constructs JSON objects from the results of evaluating its argument SQL expressions.

SELECT json_object('zip'    : s.data.address.zip.string(),
                   'avgRev' : avg(s.data.revenue.number()))
  FROM sales s
  GROUP BY s.data.address.zip.string()
  ORDER BY avg(s.data.revenue.number()) DESC;


MongoDB Documents and Oracle Database

This topic presents the relationship between a JSON document used by MongoDB and the same content as a JSON document stored in, and used by, Oracle AI Database.

Note: This topic applies to JSON documents that you migrate from MongoDB and store in Oracle AI Database. It does not apply to JSON documents that are generated and supported by JSON-relational duality views. For information about MongoDB-compatible duality views see Using the MongoDB API with JSON-Relational Duality Views.

You can migrate an existing application and its data from MongoDB to Oracle AI Database, or you can develop new applications on Oracle AI Database, which use the same or similar data as applications on MongoDB. JSON data in both cases is stored in documents.

It’s helpful to have a general understanding of the differences between the documents used by MongoDB and those used by Oracle AI Database. In particular, it helps to understand what happens to a MongoDB document that you import, to make it usable with Oracle AI Database.

Some of the information here presents details that you can ignore if you read this topic just to get a high-level view. But it’s good to be aware of what’s involved; you may want to revisit this at some point.

When you import a collection of MongoDB documents, the key and the content of each document are converted to forms appropriate for Oracle AI Database.

A MongoDB document has a native binary JSON format called BSON. An Oracle AI Database document has a native binary JSON format called OSON. So one change that’s made to your MongoDB document is to translate its binary format from BSON to OSON. This translation applies to both the key and the content of a document.

Note: For Oracle Database API for MongoDB, as for MongoDB itself, an aggregation-pipeline stage receives input, and produces output, in the form of BSON data, that is, binary JSON data in the MongoDB format.

Document Key: Differences and Conversion (Oracle Database Prior to 26ai)

This section applies only to Oracle Database releases prior to 26ai.

For MongoDB, the unique key of a document, which identifies it, is the value of mandatory field _id, in the document itself. For Oracle Database releases prior to 26ai, the unique key that identifies a document is separate from the document; the key is stored in a separate database column from the column that stores the document. The key column is named id, and it is the primary-key column of the table that stores your collection data.

When you import a collection into Oracle Database prior to 26ai, Oracle Database API for MongoDB creates id column values from the values of field _id in your MongoDB documents. MongoDB field _id can have values of several different data types. The Oracle Database id column that corresponds to that field is always of SQL data type VARCHAR2 (character data; in other words, a string).

The _id field in your imported documents is untouched during import and thereafter. Oracle Database doesn’t use it (it uses column id instead), but it also never changes it, so any use your application makes of that field is still valid. Not even applications can change (delete or update) field _id.

If you need to work with your documents using SQL or Simplified Oracle Document Access (SODA) then you can directly use column id. You can easily use that primary-key column to join JSON data with other database data, for instance. The documents that result from importing from MongoDB are SODA documents (with native binary OSON data).
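
For example, a query of this form could join collection data with relational data through the key column (a sketch; table shipping and its columns are hypothetical):

SELECT o.id, o.data.PONumber.number() po_number, s.ship_date
  FROM orders o
       JOIN shipping s ON s.order_id = o.id;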

Be aware of the following considerations, which result from the separation of the document key from the document.

BSON values of field _id are converted to VARCHAR2 column id values according to Table 2-1. If an _id field value is any type not listed in the table then it is replaced by a generated ObjectId value, which is then converted to the id column value.

Table 2-1 Conversion of BSON Field _id

_id Field Type            id Column VARCHAR2 Value
Double                    Canonical numeric format string
32-bit integer            Canonical numeric format string
64-bit integer            Canonical numeric format string
Decimal128                Canonical numeric format string
String                    No conversion, including no character escaping
ObjectId                  Lowercase hexadecimal string
Binary data (UUID)        Lowercase hexadecimal string
Binary data (non-UUID)    Uppercase hexadecimal string

The canonical numeric format means, in practice, that in most cases a numeric _id field value results in an obvious, or “pretty”, VARCHAR2 value for column id. A format that uses an exponent is used only when necessary, which generally means infrequently.

Document Content Conversion

Two general considerations:

Table 2-2 specifies the type mappings that are applied when converting scalar BSON data to scalar OSON data. The OSON scalar types used are SQL data types, except as noted. Any BSON types not listed are not converted; instead, an error is raised when they are encountered. This includes BSON types regex and JavaScript.

Table 2-2 JSON Scalar Type Conversions

BSON Type                 OSON Type (Footnote 2)      Notes
Double                    BINARY_DOUBLE               NA
32-bit integer            NUMBER (Oracle number)      Flagged as int.
64-bit integer            NUMBER (Oracle number)      Flagged as long.
Decimal128                NUMBER (Oracle number)      Flagged as decimal. Note: This conversion can be lossy.
Date                      TIMESTAMP WITH TIME ZONE    Always UTC time zone.
String                    VARCHAR2                    Always in character set AL32UTF8 (Unicode UTF-8).
Boolean                   BOOLEAN                     Supported only if initialization parameter compatible has value 23 or larger. (There is no Oracle SQL BOOLEAN type in releases prior to 26ai.)
ObjectId                  ID (RAW(12))                NA
Binary data (UUID)        ID (RAW(16))                NA
Binary data (non-UUID)    RAW                         NA
Null                      NULL                        Used for JSON null.

Footnote 2: These are SQL data types, except as noted.


Other Differences Between MongoDB and Oracle AI Database

Various differences between MongoDB and Oracle AI Database are described. These differences are generally not covered in other topics. Consider these differences when you migrate an application to Oracle AI Database or you develop a new application for Oracle AI Database that uses MongoDB commands.


See Also: Unicode Collation Algorithm, Unicode® Technical Standard #10

Accessing Collections Owned By Other Users (Database Schemas)

You can directly access a MongoDB API collection owned by another user (database schema) if you log into that schema. You can indirectly access a collection owned by another user, without logging into that schema, if that collection has been mapped to a collection in your schema.

A MongoDB API collection of JSON documents consists of (1) a collection backing table, which contains the JSON documents in the collection, and (2) some JSON-format collection metadata, which specifies various collection-configuration properties. The backing table belongs to a given database user (schema). The metadata is stored in the database data dictionary.

A mapped collection is a collection that is defined (mapped) on top of an existing table, which can belong to any database schema and which could also back one or more other collections.

You can control which operations on a collection — including a mapped collection — are allowed for various users (schemas), by granting those users different privileges or roles on the backing table. Example 2-9 illustrates this.

Example 2-9 Creating a Collection in One Schema and Mapping a Collection To It in Another Schema

In this example user john creates collection john_coll (in database schema john), and adds a document to it. User john then grants user janet some access privileges to the backing table of collection john_coll.

User janet then maps a new collection, janet_coll (in schema janet) to collection john_coll in schema john. (The original and mapped collections need not have different names, such as john_coll and janet_coll; they could both have the same name.)

User janet then lists the collections available to schema janet, and reads the content of mapped collection janet_coll, which is the same as the content of collection john_coll.

(The commands submitted to mongosh are each a single line (string), but they are shown here continued across multiple lines for clarity.)

Note: Examples in this documentation of input to, and output from, Oracle Database API for MongoDB use the syntax of shell mongosh.

1. When connected to the database as user john, run PL/SQL code to create collection john_coll backed by table john_coll. The second argument to create_collection is the metadata needed for a MongoDB-compatible collection. (The backing table name is derived from the collection name — see Default Naming of a Collection Table.)

DECLARE
  col SODA_COLLECTION_T;
BEGIN
  col := DBMS_SODA.create_collection(
           'john_coll',
           '{"contentColumn"      : {"name"       : "DATA",
                                     "sqlType"    : "BLOB",
                                     "jsonFormat" : "OSON"},
             "keyColumn"          : {"name"             : "ID",
                                     "assignmentMethod" : "EMBEDDED_OID",
                                     "sqlType"          : "VARCHAR2"},
             "versionColumn"      : {"name" : "VERSION", "method" : "UUID"},
             "lastModifiedColumn" : {"name" : "LAST_MODIFIED"},
             "creationTimeColumn" : {"name" : "CREATED_ON"}}');
END;
/

2. Connect to the database using shell mongosh as user john, list the collections in that schema (John’s collections), insert a document into collection john_coll, and show the result of the insertion.

mongosh 'mongodb://john:...
@MQSSYOWMQVGAC1Y-CTEST.adb.us-ashburn-1.oraclecloudapps.com:27017/john
?authMechanism=PLAIN&authSource=$external&ssl=true&retryWrites=false&loadBalanced=true'
john> show collections;

Output:

john_coll

john> db.john_coll.insert({"hello" : "world"});
john> db.john_coll.find()

Output:

[ { _id: ObjectId("6318b0060a51240e4bf3b001"), hello: 'world' } ]

3. In schema john, grant user janet access privileges to collection john_coll and its backing table of the same name, john_coll.

GRANT SELECT, INSERT, UPDATE, DELETE ON john.john_coll TO janet;

4. When connected to the database as user (schema) janet, create a new collection janet_coll in schema janet that’s mapped to collection john_coll in schema john.

The second argument to method create_collection() is the collection metadata. Among the things it specifies here are the schema and backing-table names of the collection to be mapped to. The last argument, CREATE_MODE_MAP, specifies that the new collection is to be mapped on top of the table that backs the original collection.

DECLARE
  col SODA_COLLECTION_T;
BEGIN
  col := DBMS_SODA.create_collection(
           'janet_coll',
           '{"schemaName"         : "JOHN",
             "tableName"          : "JOHN_COLL",
             "contentColumn"      : {"name"       : "DATA",
                                     "sqlType"    : "BLOB",
                                     "jsonFormat" : "OSON"},
             "keyColumn"          : {"name"             : "ID",
                                     "assignmentMethod" : "EMBEDDED_OID",
                                     "sqlType"          : "VARCHAR2"},
             "versionColumn"      : {"name" : "VERSION", "method" : "UUID"},
             "lastModifiedColumn" : {"name" : "LAST_MODIFIED"},
             "creationTimeColumn" : {"name" : "CREATED_ON"}}',
           DBMS_SODA.CREATE_MODE_MAP);
END;
/

Note:

The schema and table names used in the collection metadata argument must be as they appear in the data dictionary, which in this case means they must be uppercase. You can use these queries to obtain the correct schema and table names for collection <collection> (when connected as the owner of <collection>):

SELECT c.json_descriptor.schemaName FROM USER_SODA_COLLECTIONS c
 WHERE uri_name = '<collection>';

SELECT c.json_descriptor.tableName FROM USER_SODA_COLLECTIONS c
 WHERE uri_name = '<collection>';

5. Connect to the database using shell mongosh as user janet, list the available collections, and show the content of collection janet_coll (which is the same as the content of John’s collection john_coll).

mongosh 'mongodb://janet:...
@MQSSYOWMQVGAC1Y-CTEST.adb.us-ashburn-1.oraclecloudapps.com:27017/janet
?authMechanism=PLAIN&authSource=$external&ssl=true&retryWrites=false&loadBalanced=true'

janet> show collections;
janet_coll
janet> db.janet_coll.find()
[ { _id: ObjectId("6318b0060a51240e4bf3b001"), hello: 'world' } ]

Footnote Legend

Footnote 1: MongoDB calls a composite index a compound index. A composite index is also sometimes called a concatenated index.