Use Case Demonstrations for Oracle NoSQL Database Migrator

Learn how to perform data migration using the Oracle NoSQL Database Migrator for specific use cases. Each use case provides detailed step-by-step instructions with code examples.

This article has the following topics:

Migrate from Oracle NoSQL Database Cloud Service to a JSON file

This example shows how to use the Oracle NoSQL Database Migrator to copy data and the schema definition of a NoSQL table from Oracle NoSQL Database Cloud Service (NDCS) to a JSON file.

Use Case

An organization decides to train a model using the Oracle NoSQL Database Cloud Service (NDCS) data to predict future behaviors and provide personalized recommendations. They can take a periodic copy of the NDCS tables' data to a JSON file and apply it to the analytic engine to analyze and train the model. Doing this helps them separate the analytical queries from the low-latency critical paths.

Example

For the demonstration, let us look at how to migrate the data and schema definition of a NoSQL table called myTable from NDCS to a JSON file.
Prerequisites
  • Identify the source and sink for the migration.
    • Source: Oracle NoSQL Database Cloud Service
    • Sink: JSON file
  • Identify your OCI cloud credentials and capture them in the OCI config file. Save the config file in the default location, ~/.oci/config. See Acquiring Credentials in Using Oracle NoSQL Database Cloud Service.
    [DEFAULT]
    tenancy=ocid1.tenancy.oc1....
    user=ocid1.user.oc1....
    fingerprint=43:d1:....
    key_file=</fully/qualified/path/to/the/private/key/>
    pass_phrase=<passphrase>
  • Identify the region endpoint and compartment name for your Oracle NoSQL Database Cloud Service.
    • endpoint: us-phoenix-1
    • compartment: developers
Procedure
To migrate the data and schema definition of myTable from Oracle NoSQL Database Cloud Service to a JSON file:
  1. Open the command prompt and navigate to the directory where you extracted the NoSQL Database Migrator utility.
  2. To generate the configuration JSON file using the NoSQL Database Migrator, run the runMigrator command without any runtime parameters.
    [~/nosqlMigrator/nosql-migrator-1.0.0]$./runMigrator
  3. Since you did not provide the configuration file as a runtime parameter, the utility asks whether you want to generate the configuration now. Type y.
    configuration file is not provided. Do you want to generate configuration? (y/n) [n]: y
     
    This command provides a walkthrough of creating a valid config for
    Oracle NoSQL data migrator.
     
    The following link explain where to find the information required by this
    script:
     
    <link to doc>
  4. Based on the prompts from the utility, choose your options for the Source configuration.
    Enter a location for your config [./migrator-config.json]: /home/apothula/nosqlMigrator/NDCS2JSON
    Select the source:
    1) nosqldb
    2) nosqldb_cloud
    3) file
    #? 2
    Configuration for source type=nosqldb_cloud
    Enter endpoint URL or region of the Oracle NoSQL Database Cloud: us-phoenix-1
    Enter table name: myTable
    Enter compartment name or id of the source table []: developers
    Enter path to the file containing OCI credentials [/home/apothula/.oci/config]:
    Enter the profile name in OCI credentials file [DEFAULT]:
    Enter percentage of table read units to be used for migration operation. (1-100) [90]:
    Enter store operation timeout in milliseconds. (1-30000) [5000]:
  5. Based on the prompts from the utility, choose your options for the Sink configuration.
    Select the sink:
    1) nosqldb
    2) nosqldb_cloud
    3) file
    #? 3
    Configuration for sink type=file
    Enter path to a file to store JSON data: /home/apothula/nosqlMigrator/myTableJSON
    Would you like to store JSON in pretty format? (y/n) [n]: y
    Would you like to migrate the table schema also? (y/n) [y]: y
    Enter path to a file to store table schema: /home/apothula/nosqlMigrator/myTableSchema
  6. Based on the prompts from the utility, choose your options for the source data transformations. The default value is n.
    Would you like to add transformations to source data? (y/n) [n]:
  7. Enter your choice to determine whether to proceed with the migration in case any record fails to migrate.
    Would you like to continue migration in case of any record/row is failed to migrate?: (y/n) [n]:
    
  8. The utility displays the generated configuration on the screen.
    generated configuration is:
    {
      "source": {
        "type": "nosqldb_cloud",
        "endpoint": "us-phoenix-1",
        "table": "myTable",
        "compartment": "developers",
        "credentials": "/home/apothula/.oci/config",
        "credentialsProfile": "DEFAULT",
        "readUnitsPercent": 90,
        "requestTimeoutMs": 5000
      },
      "sink": {
        "type": "file",
        "format": "json",
        "schemaPath": "/home/apothula/nosqlMigrator/myTableSchema",
        "pretty": true,
        "dataPath": "/home/apothula/nosqlMigrator/myTableJSON"
      },
      "abortOnError": true,
      "migratorVersion": "1.0.0"
    }
  9. Finally, the utility prompts you to decide whether to proceed with the migration using the generated configuration file. The default option is y.

    Note:

    If you select n, you can run the migration later by supplying the generated configuration file with the ./runMigrator --config or ./runMigrator -c option.
    would you like to run the migration with above configuration?
    If you select no, you can use the generated configuration file to run the migration using
    ./runMigrator --config /home/apothula/nosqlMigrator/NDCS2JSON
    (y/n) [y]:
  10. The NoSQL Database Migrator migrates your data and schema from NDCS to the JSON file.
    Records provided by source=10, Records written to sink=10, Records failed=0.
    Elapsed time: 0min 1sec 277ms
    Migration completed.
Validation

To validate the migration, you can open the JSON sink files and view the schema and data.

-- Exported myTable Data
 
[~/nosqlMigrator]$cat myTableJSON
{
  "id" : 10,
  "document" : {
    "course" : "Computer Science",
    "name" : "Neena",
    "studentid" : 105
  }
}
{
  "id" : 3,
  "document" : {
    "course" : "Computer Science",
    "name" : "John",
    "studentid" : 107
  }
}
{
  "id" : 4,
  "document" : {
    "course" : "Computer Science",
    "name" : "Ruby",
    "studentid" : 100
  }
}
{
  "id" : 6,
  "document" : {
    "course" : "Bio-Technology",
    "name" : "Rekha",
    "studentid" : 104
  }
}
{
  "id" : 7,
  "document" : {
    "course" : "Computer Science",
    "name" : "Ruby",
    "studentid" : 100
  }
}
{
  "id" : 5,
  "document" : {
    "course" : "Journalism",
    "name" : "Rani",
    "studentid" : 106
  }
}
{
  "id" : 8,
  "document" : {
    "course" : "Computer Science",
    "name" : "Tom",
    "studentid" : 103
  }
}
{
  "id" : 9,
  "document" : {
    "course" : "Computer Science",
    "name" : "Peter",
    "studentid" : 109
  }
}
{
  "id" : 1,
  "document" : {
    "course" : "Journalism",
    "name" : "Tracy",
    "studentid" : 110
  }
}
{
  "id" : 2,
  "document" : {
    "course" : "Bio-Technology",
    "name" : "Raja",
    "studentid" : 108
  }
}
-- Exported myTable Schema
 
[~/nosqlMigrator]$cat myTableSchema
CREATE TABLE IF NOT EXISTS myTable (id INTEGER, document JSON, PRIMARY KEY(SHARD(id)))
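Note that the sink writes the data file as a stream of concatenated JSON documents (one per record), not as a single JSON array, so a plain json.load() on the whole file fails. A minimal Python sketch of a validation helper (hypothetical, not part of the Migrator) that walks such a stream:

```python
import json

def iter_json_records(text):
    """Yield each document from a string of concatenated JSON documents."""
    decoder = json.JSONDecoder()
    pos = 0
    while pos < len(text):
        # Skip the whitespace that separates two documents.
        while pos < len(text) and text[pos].isspace():
            pos += 1
        if pos >= len(text):
            break
        obj, pos = decoder.raw_decode(text, pos)
        yield obj

# Two records in the same shape as the exported myTable data above.
sample = """
{
  "id" : 10,
  "document" : { "course" : "Computer Science", "name" : "Neena", "studentid" : 105 }
}
{
  "id" : 3,
  "document" : { "course" : "Computer Science", "name" : "John", "studentid" : 107 }
}
"""
records = list(iter_json_records(sample))
print(len(records))   # 2
```

To validate a real export, pass the contents of the data file (for example, open("/home/apothula/nosqlMigrator/myTableJSON").read()) and compare the record count with the Records written to sink value reported by the utility.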

Migrate from Oracle NoSQL Database On-Premise to Oracle NoSQL Database Cloud Service

This example shows how to use the Oracle NoSQL Database Migrator to copy data and the schema definition of a NoSQL table from Oracle NoSQL Database to Oracle NoSQL Database Cloud Service (NDCS).

Use Case

As a developer, you are exploring options to avoid the overhead of managing the resources, clusters, and garbage collection for your existing NoSQL Database KVStore workloads. As a solution, you decide to migrate your existing on-premise KVStore workloads to Oracle NoSQL Database Cloud Service because NDCS manages them automatically.

Example

For the demonstration, let us look at how to migrate the data and schema definition of a NoSQL table called myTable from the NoSQL Database KVStore to NDCS. We will also use this use case to show how to run the runMigrator utility by passing a pre-created configuration JSON file.
Prerequisites
  • Identify the source and sink for the migration.
    • Source: Oracle NoSQL Database
    • Sink: Oracle NoSQL Database Cloud Service
  • Identify your OCI cloud credentials and capture them in the OCI config file. Save the config file in the default location, ~/.oci/config. See Acquiring Credentials in Using Oracle NoSQL Database Cloud Service.
    [DEFAULT]
    tenancy=ocid1.tenancy.oc1....
    user=ocid1.user.oc1....
    fingerprint=43:d1:....
    key_file=</fully/qualified/path/to/the/private/key/>
    pass_phrase=<passphrase>
  • Identify the region endpoint and compartment name for your Oracle NoSQL Database Cloud Service.
    • endpoint: us-phoenix-1
    • compartment: developers
  • Identify the following details for the on-premise KVStore:
    • storeName: kvstore
    • helperHosts: <hostname>:5000
    • table: myTable
Procedure
To migrate the data and schema definition of myTable from NoSQL Database KVStore to NDCS:
  1. Prepare the configuration JSON file with the identified Source and Sink details. See Source Configuration Templates and Sink Configuration Templates.
    {
      "source" : {
        "type" : "nosqldb",
        "storeName" : "kvstore",
        "helperHosts" : ["<hostname>:5000"],
        "table" : "myTable",
        "requestTimeoutMs" : 5000
      },
      "sink" : {
        "type" : "nosqldb_cloud",
        "endpoint" : "us-phoenix-1",
        "table" : "myTable",
        "compartment" : "developers",
        "schemaInfo" : {
          "schemaPath" : "<complete/path/to/the/JSON/file/with/DDL/commands/for/the/schema/definition>",
          "readUnits" : 100,
          "writeUnits" : 100,
          "storageSize" : 1
        },
        "credentials" : "<complete/path/to/oci/config/file>",
        "credentialsProfile" : "DEFAULT",
        "writeUnitsPercent" : 90,
        "requestTimeoutMs" : 5000
      },
      "abortOnError" : true,
      "migratorVersion" : "1.0.0"
    }
  2. Open the command prompt and navigate to the directory where you extracted the NoSQL Database Migrator utility.
  3. Run the runMigrator command by passing the configuration JSON file using the --config or -c option.
    [~/nosqlMigrator/nosql-migrator-1.0.0]$./runMigrator --config <complete/path/to/the/JSON/config/file>
    
  4. The utility proceeds with the data migration, as shown below.
    Records provided by source=10, Records written to sink=10, Records failed=0.
    Elapsed time: 0min 10sec 426ms
    Migration completed.
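The schemaInfo.schemaPath entry in step 1 points to a file containing the DDL command(s) the Migrator runs to create the table at the sink. Assuming the table has the same shape as myTable from the previous example, the file could contain a single statement:

```sql
CREATE TABLE IF NOT EXISTS myTable (id INTEGER, document JSON, PRIMARY KEY(SHARD(id)))
```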
Validation

To validate the migration, you can log in to your NDCS console and verify that myTable is created with the source data.

Migrate from MongoDB JSON file to Oracle NoSQL Database Cloud Service

This example shows how to use the Oracle NoSQL Database Migrator to copy MongoDB-formatted data to the Oracle NoSQL Database Cloud Service (NDCS).

Use Case

After evaluating multiple options, an organization selects Oracle NoSQL Database Cloud Service as its NoSQL database platform. Because its NoSQL tables and data are in MongoDB, the organization is looking for a way to migrate those tables and data to Oracle NDCS.

Example

For the demonstration, let us look at how to migrate a MongoDB-formatted JSON file to NDCS. We will use a manually created configuration JSON file for this example.
Prerequisites
  • Identify the source and sink for the migration.
    • Source: MongoDB-Formatted JSON File
    • Sink: Oracle NoSQL Database Cloud Service
  • Extract the data from MongoDB using the mongoexport utility. See mongoexport for more information.
  • Create a NoSQL table in the sink with a table schema that matches the data in the MongoDB-formatted JSON file. Alternatively, you can instruct the NoSQL Database Migrator to create a table with the default schema structure by setting the defaultSchema attribute to true.

    Note:

    For a MongoDB-formatted JSON source, the default schema for the table is:
    CREATE TABLE IF NOT EXISTS <tablename>(ID STRING, DOCUMENT JSON, PRIMARY KEY(SHARD(ID)))
    
    Where:
    • tablename = value of the table config.
    • ID = _id value from the MongoDB exported JSON source file.
    • DOCUMENT = the entire contents of the MongoDB exported JSON source file, excluding the _id field, aggregated into the DOCUMENT column.
  • Identify your OCI cloud credentials and capture them in the OCI config file. Save the config file in the default location, ~/.oci/config. See Acquiring Credentials in Using Oracle NoSQL Database Cloud Service.
    [DEFAULT]
    tenancy=ocid1.tenancy.oc1....
    user=ocid1.user.oc1....
    fingerprint=43:d1:....
    key_file=</fully/qualified/path/to/the/private/key/>
    pass_phrase=<passphrase>
  • Identify the region endpoint and compartment name for your Oracle NoSQL Database Cloud Service.
    • endpoint: us-phoenix-1
    • compartment: developers
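The default-schema behavior described in the note above can be sketched in a few lines of Python. This is a simplified illustration of the mapping, not the Migrator's actual code, and it assumes a plain string _id (mongoexport may also emit extended-JSON forms such as {"$oid": ...}):

```python
def mongodb_to_default_row(exported_doc):
    """Map one mongoexport document to the default-schema row:
    _id -> ID column, all remaining fields -> DOCUMENT column."""
    doc = dict(exported_doc)          # copy so the caller's dict is untouched
    row_id = doc.pop("_id")
    return {"ID": str(row_id), "DOCUMENT": doc}

# A mongoexport-style document (hypothetical sample data).
exported = {"_id": "5f2d8e1a", "name": "Neena", "course": "Computer Science"}
row = mongodb_to_default_row(exported)
print(row["ID"])        # 5f2d8e1a
print(row["DOCUMENT"])  # {'name': 'Neena', 'course': 'Computer Science'}
```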
Procedure

To migrate the MongoDB-formatted JSON data to the Oracle NoSQL Database Cloud Service:

  1. Prepare the configuration JSON file with the identified Source and Sink details. See Source Configuration Templates and Sink Configuration Templates.
    {
      "source" : {
        "type" : "file",
        "format" : "mongodb_json",
        "dataPath" : "<complete/path/to/the/MongoDB/Formatted/JSON/file>"
      },
      "sink" : {
        "type" : "nosqldb_cloud",
        "endpoint" : "us-phoenix-1",
        "table" : "mongoImport",
        "compartment" : "developers",
        "schemaInfo" : {
          "defaultSchema" : true,
          "readUnits" : 100,
          "writeUnits" : 60,
          "storageSize" : 1
        },
        "credentials" : "<complete/path/to/the/oci/config/file>",
        "credentialsProfile" : "DEFAULT",
        "writeUnitsPercent" : 90,
        "requestTimeoutMs" : 5000
      },
      "abortOnError" : true,
      "migratorVersion" : "1.0.0"
    }
  2. Open the command prompt and navigate to the directory where you extracted the NoSQL Database Migrator utility.
  3. Run the runMigrator command by passing the configuration JSON file using the --config or -c option.
    [~/nosqlMigrator/nosql-migrator-1.0.0]$./runMigrator --config <complete/path/to/the/JSON/config/file>
    
  4. The utility proceeds with the data migration, as shown below.
    Records provided by source=29,353, Records written to sink=29,353, Records failed=0.
    Elapsed time: 9min 9sec 630ms
    Migration completed.
Validation

To validate the migration, you can log in to your NDCS console and verify that the mongoImport table is created with the source data.

Migrate from DynamoDB JSON file in AWS S3 to Oracle NoSQL Database Cloud Service

This example shows how to use the Oracle NoSQL Database Migrator to copy a DynamoDB JSON file stored in an AWS S3 bucket to the Oracle NoSQL Database Cloud Service (NDCS).

Use Case

After evaluating multiple options, an organization selects Oracle NoSQL Database Cloud Service over DynamoDB. The organization wants to migrate its tables and data from DynamoDB to Oracle NoSQL Database Cloud Service.

See Mapping of DynamoDB table to Oracle NoSQL table for more details.

Example

For this demonstration, you will learn how to migrate a DynamoDB JSON file in an AWS S3 source to NDCS. You will use a manually created configuration JSON file for this example.

Prerequisites

  • Identify the source and sink for the migration.
    • Source: DynamoDB JSON File in AWS S3
    • Sink: Oracle NoSQL Database Cloud Service
  • Identify the table in AWS DynamoDB that needs to be migrated to NDCS. Log in to your AWS console using your credentials. Go to DynamoDB. Under Tables, choose the table to be migrated.
  • Create an object bucket and export the table to S3. From your AWS console, go to S3. Under Buckets, create a new object bucket. Go back to DynamoDB and click Exports to S3. Provide the source table and the destination S3 bucket, and click Export.
  • You need AWS credentials (including the access key ID and secret access key) and configuration files (credentials and, optionally, config) to access AWS S3 from the migrator. See Set and view configuration settings for more details on the configuration files. See Creating a key pair for more details on creating access keys.
  • Identify your OCI cloud credentials and capture them in the OCI config file. Save the config file in a directory .oci under your home directory (~/.oci/config). See Acquiring Credentials for more details.
    [DEFAULT]
    tenancy=ocid1.tenancy.oc1....
    user=ocid1.user.oc1....
    fingerprint=43:d1:....
    key_file=</fully/qualified/path/to/the/private/key/>
    pass_phrase=<passphrase>
  • Identify the region endpoint and compartment name for your Oracle NoSQL Database Cloud Service. For example,
    • endpoint: us-phoenix-1
    • compartment: developers

Procedure

To migrate the DynamoDB JSON data to the Oracle NoSQL Database Cloud Service:
  1. Prepare the configuration JSON file with the identified source and sink details. See Source Configuration Templates and Sink Configuration Templates.
    You can choose one of the following two options.
    • Option 1: Importing a DynamoDB table as a JSON document using the default schema configuration.
      Here defaultSchema is true, so the migrator creates the default schema at the sink. You must specify the DDBPartitionKey and the corresponding NoSQL column type; otherwise, an error is thrown.
      {
       "source" : {
         "type" : "aws_s3",
         "format" : "dynamodb_json",
         "s3URL" : "<https://<bucket-name>.<s3_endpoint>/export_path>",
         "credentials" : "</path/to/aws/credentials/file>",
         "credentialsProfile" : <"profile name in aws credentials file">
       },
       "sink" : {
         "type" : "nosqldb_cloud",
         "endpoint" : "<region_name>",
         "table" : "<table_name>",
         "compartment" : "<compartment_name>",
         "schemaInfo" : {
            "defaultSchema" : true,
            "readUnits" : 100,
            "writeUnits" : 60,
            "DDBPartitionKey" : "<PrimaryKey:Datatype>",
            "storageSize" : 1
         },
         "credentials" : "<complete/path/to/the/oci/config/file>",
         "credentialsProfile" : "DEFAULT",
         "writeUnitsPercent" : 90,
         "requestTimeoutMs" : 5000
       },
       "abortOnError" : true,
       "migratorVersion" : "1.0.0"
      }
      For a DynamoDB JSON source, the default schema for the table will be as shown below:
      CREATE TABLE IF NOT EXISTS <TABLE_NAME>(DDBPartitionKey_name DDBPartitionKey_type, 
      [DDBSortKey_name DDBSortKey_type], DOCUMENT JSON, 
      PRIMARY KEY(SHARD(DDBPartitionKey_name),[DDBSortKey_name]))

      Where

      TABLE_NAME = value provided for the sink 'table' in the configuration

      DDBPartitionKey_name = value provided for the partition key in the configuration

      DDBPartitionKey_type = value provided for the data type of the partition key in the configuration

      DDBSortKey_name = value provided for the sort key in the configuration, if any

      DDBSortKey_type = value provided for the data type of the sort key in the configuration, if any

      DOCUMENT = all attributes except the partition and sort key of a DynamoDB table item, aggregated into a NoSQL JSON column

    • Option 2: Importing a DynamoDB table as fixed columns using a user-supplied schema file.
      Here defaultSchema is false and you specify schemaPath as a file containing your DDL statement. See Mapping of DynamoDB types to Oracle NoSQL types for more details.

      Note:

      If the DynamoDB table has a data type that is not supported in Oracle NoSQL, the migration fails.
      A sample schema file is shown below.
      CREATE TABLE IF NOT EXISTS sampledynDBImp (AccountId INTEGER,document JSON, 
      PRIMARY KEY(SHARD(AccountId)));
      The schema file is used to create the table at the sink as part of the migration. As long as the primary key data is provided, the input JSON record is inserted; otherwise, an error is thrown.

      Note:

      If the input data does not contain a value for a particular column (other than the primary key), the column's default value is used. The default value must be part of the column definition when you create the table, for example, id INTEGER NOT NULL DEFAULT 0. If the column does not have a default definition, SQL NULL is inserted when no value is provided for that column.
      {
       "source" : {
         "type" : "aws_s3",
         "format" : "dynamodb_json",
         "s3URL" : "<https://<bucket-name>.<s3_endpoint>/export_path>",
         "credentials" : "</path/to/aws/credentials/file>",
         "credentialsProfile" : <"profile name in aws credentials file">
       },
       "sink" : {
         "type" : "nosqldb_cloud",
         "endpoint" : "<region_name>",
         "table" : "<table_name>",
         "compartment" : "<compartment_name>",
         "schemaInfo" : {
            "defaultSchema" : false,
            "readUnits" : 100,
            "writeUnits" : 60,
            "schemaPath" : "<full path of the schema file with the DDL statement>",
            "storageSize" : 1
         },
         "credentials" : "<complete/path/to/the/oci/config/file>",
         "credentialsProfile" : "DEFAULT",
         "writeUnitsPercent" : 90,
         "requestTimeoutMs" : 5000
       },
       "abortOnError" : true,
       "migratorVersion" : "1.0.0"
      }
  2. Open the command prompt and navigate to the directory where you extracted the NoSQL Database Migrator utility.
  3. Run the runMigrator command by passing the configuration JSON file using the --config or -c option.
    [~/nosqlMigrator/nosql-migrator-1.0.0]$./runMigrator 
    --config <complete/path/to/the/JSON/config/file>
  4. The utility proceeds with the data migration, as shown below.
    Records provided by source=7,
    Records written to sink=7,
    Records failed=0,
    Records skipped=0.
    Elapsed time: 0 min 2sec 50ms
    Migration completed.
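The Option 1 mapping above can be sketched in a few lines of Python. This is a simplified illustration, not the Migrator's actual code: it unwraps the DynamoDB-JSON type descriptors (only S and N are handled here), promotes the partition key to its own column, and aggregates the remaining attributes into DOCUMENT.

```python
def unwrap(attr):
    """Unwrap one DynamoDB-JSON typed value; only S and N are handled here."""
    (dtype, value), = attr.items()
    if dtype == "N":
        return int(value) if value.lstrip("-").isdigit() else float(value)
    return value

def ddb_item_to_row(item, partition_key):
    """Partition key -> its own column; all other attributes -> DOCUMENT."""
    row = {partition_key: unwrap(item[partition_key])}
    row["DOCUMENT"] = {k: unwrap(v) for k, v in item.items() if k != partition_key}
    return row

# One item in DynamoDB JSON export format (hypothetical sample data).
item = {"AccountId": {"N": "101"}, "name": {"S": "Tom"}, "balance": {"N": "250.5"}}
print(ddb_item_to_row(item, "AccountId"))
# {'AccountId': 101, 'DOCUMENT': {'name': 'Tom', 'balance': 250.5}}
```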

Validation

You can log in to your NDCS console and verify that the new table is created with the source data.

Migrate from DynamoDB JSON file to Oracle NoSQL Database

This example shows how to use the Oracle NoSQL Database Migrator to copy a DynamoDB JSON file to Oracle NoSQL Database.

Use Case

After evaluating multiple options, an organization selects Oracle NoSQL Database over DynamoDB. The organization wants to migrate its tables and data from DynamoDB to Oracle NoSQL Database (On-premises).

See Mapping of DynamoDB table to Oracle NoSQL table for more details.

Example

For this demonstration, you will learn how to migrate a DynamoDB JSON file to Oracle NoSQL Database (On-premises). You will use a manually created configuration JSON file for this example.

Prerequisites

  • Identify the source and sink for the migration.
    • Source: DynamoDB JSON File
    • Sink: Oracle NoSQL Database (On-premises)
  • To import DynamoDB table data into Oracle NoSQL Database, you must first export the DynamoDB table to S3. Refer to the steps in Exporting DynamoDB table data to Amazon S3 to export your table. While exporting, select DynamoDB JSON as the format. The exported data contains the DynamoDB table data in multiple gzip files, as shown below.
    / 01639372501551-bb4dd8c3 
    |-- 01639372501551-bb4dd8c3 ==> exported data prefix
    |----data
    |------sxz3hjr3re2dzn2ymgd2gi4iku.json.gz  ==>table data
    |----manifest-files.json
    |----manifest-files.md5
    |----manifest-summary.json
    |----manifest-summary.md5
    |----_started
  • You must download the files from AWS S3. The structure of the files after the download is as shown below.
    download-dir/01639372501551-bb4dd8c3     
    |----data    
    |------sxz3hjr3re2dzn2ymgd2gi4iku.json.gz  ==>table data   
    |----manifest-files.json   
    |----manifest-files.md5   
    |----manifest-summary.json   
    |----manifest-summary.md5   
    |----_started
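Before running the migration, you can sanity-check the downloaded export. In a DynamoDB S3 export, each non-empty line of a data/*.json.gz file is one JSON object (one table item), so a short Python sketch (a hypothetical helper, not part of the Migrator) can count the items:

```python
import glob
import gzip
import json
import os

def count_exported_items(export_dir):
    """Count the items across the data/*.json.gz files of a downloaded
    DynamoDB S3 export; each non-empty line is one JSON object (one item)."""
    total = 0
    for path in glob.glob(os.path.join(export_dir, "data", "*.json.gz")):
        with gzip.open(path, "rt") as f:
            for line in f:
                if line.strip():
                    json.loads(line)   # fail fast on a corrupt line
                    total += 1
    return total

# Point this at the downloaded export directory shown above.
print(count_exported_items("download-dir/01639372501551-bb4dd8c3"))
```

After the migration, this count should match the Records provided by source value reported by the utility.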

Procedure

To migrate the DynamoDB JSON data to the Oracle NoSQL Database:
  1. Prepare the configuration JSON file with the identified source and sink details. See Source Configuration Templates and Sink Configuration Templates.
    You can choose one of the following two options.
    • Option 1: Importing a DynamoDB table as a JSON document using the default schema configuration.
      Here defaultSchema is true, so the migrator creates the default schema at the sink. You must specify the DDBPartitionKey and the corresponding NoSQL column type; otherwise, an error is thrown.
      {
       "source" : {
         "type" : "file",
         "format" : "dynamodb_json",
         "dataPath" : "<complete/path/to/the/DynamoDB/Formatted/JSON/file>"
       },
       "sink" : {
         "type" : "nosqldb",
         "table" : "<table_name>",
         "storeName" : "kvstore",
         "helperHosts" : ["<hostname>:5000"],
         "schemaInfo" : {
           "defaultSchema" : true,
           "DDBPartitionKey" : "<PrimaryKey:Datatype>"
         }
       },
       "abortOnError" : true,
       "migratorVersion" : "1.0.0"
      }
      For a DynamoDB JSON source, the default schema for the table will be as shown below:
      CREATE TABLE IF NOT EXISTS <TABLE_NAME>(DDBPartitionKey_name DDBPartitionKey_type, 
      [DDBSortKey_name DDBSortKey_type], DOCUMENT JSON, 
      PRIMARY KEY(SHARD(DDBPartitionKey_name),[DDBSortKey_name]))

      Where

      TABLE_NAME = value provided for the sink 'table' in the configuration

      DDBPartitionKey_name = value provided for the partition key in the configuration

      DDBPartitionKey_type = value provided for the data type of the partition key in the configuration

      DDBSortKey_name = value provided for the sort key in the configuration, if any

      DDBSortKey_type = value provided for the data type of the sort key in the configuration, if any

      DOCUMENT = all attributes except the partition and sort key of a DynamoDB table item, aggregated into a NoSQL JSON column

    • Option 2: Importing a DynamoDB table as fixed columns using a user-supplied schema file.
      Here defaultSchema is false and you specify schemaPath as a file containing your DDL statement. See Mapping of DynamoDB types to Oracle NoSQL types for more details.

      Note:

      If the DynamoDB table has a data type that is not supported in Oracle NoSQL, the migration fails.
      A sample schema file is shown below.
      CREATE TABLE IF NOT EXISTS sampledynDBImp (AccountId INTEGER,document JSON, 
      PRIMARY KEY(SHARD(AccountId)));
      The schema file is used to create the table at the sink as part of the migration. As long as the primary key data is provided, the input JSON record is inserted; otherwise, an error is thrown.

      Note:

      If the input data does not contain a value for a particular column (other than the primary key), the column's default value is used. The default value must be part of the column definition when you create the table, for example, id INTEGER NOT NULL DEFAULT 0. If the column does not have a default definition, SQL NULL is inserted when no value is provided for that column.
      {
        "source" : {
          "type" : "file",
          "format" : "dynamodb_json",
          "dataPath" : "<complete/path/to/the/DynamoDB/Formatted/JSON/file>"
        },
        "sink" : {
          "type" : "nosqldb",
          "table" : "<table_name>",
          "schemaInfo" : {
            "defaultSchema" : false,
            "readUnits" : 100,
            "writeUnits" : 60,
            "schemaPath" : "<full path of the schema file with the DDL statement>",
            "storageSize" : 1
          },
          "storeName" : "kvstore",
          "helperHosts" : ["<hostname>:5000"]
        },
        "abortOnError" : true,
        "migratorVersion" : "1.0.0"
      }
  2. Open the command prompt and navigate to the directory where you extracted the NoSQL Database Migrator utility.
  3. Run the runMigrator command by passing the configuration JSON file using the --config or -c option.
    [~/nosqlMigrator/nosql-migrator-1.0.0]$./runMigrator 
    --config <complete/path/to/the/JSON/config/file>
  4. The utility proceeds with the data migration, as shown below.
    Records provided by source=7,
    Records written to sink=7,
    Records failed=0,
    Records skipped=0.
    Elapsed time: 0 min 2sec 50ms
    Migration completed.

Validation

Start the SQL prompt in your KVStore.
 java -jar lib/sql.jar -helper-hosts localhost:5000 -store kvstore
Verify that the new table is created with the source data:
desc <table_name>
SELECT * from <table_name>