3 Oracle Autonomous Data Warehouse Cloud

This chapter describes how to work with Autonomous Data Warehouse Cloud (ADWC) in Oracle Data Integrator.

This chapter includes the following sections:

Introduction

Autonomous Data Warehouse Cloud (ADWC) is a fully-managed, high-performance elastic cloud service providing analytical capability over data stored in the database and Oracle Object Storage.

Oracle Data Integrator (ODI) seamlessly integrates with ADWC. By integrating ODI with ADWC, you can get the full performance of Oracle Database, in a fully-managed environment that is tuned and optimized for data warehouse workloads.

Concepts

The Oracle ADWC concepts map the Oracle Data Integrator concepts as follows: An Oracle ADWC Instance corresponds to a data server in Oracle Data Integrator. Within this instance, a schema maps to an Oracle Data Integrator physical schema. A set of related objects within one schema corresponds to a data model, and each table, view or synonym will appear as an ODI datastore, with its attributes, columns and constraints.

Oracle Data Integrator uses Java Database Connectivity (JDBC) to connect to an Oracle ADWC instance. All connections to ADWC require the use of an Oracle Wallet to manage public key security credentials. ODI users can use a Thin or OCI connection to access ADWC.

Knowledge Modules

Oracle Data Integrator provides the following Knowledge Modules (KMs) for loading data into ADWC. These KMs use Oracle-specific features. It is also possible to use the generic SQL KMs with ADWC.

Table 3-1 ADWC Knowledge Modules

Knowledge Module Description

LKM SQL to Oracle (Built-In)

Loads data from any ANSI SQL-92 source database to an Oracle staging area.

LKM SQL Multi-Connect

Enables the use of a multi-connect IKM for the target table. Built-in KM.

LKM File to Oracle (SQLLDR)

Loads data from a file to an Oracle staging area using the SQL*Loader command line utility.

IKM SQL to File Append

Integrates data in a target file from any ANSI SQL-92 compliant staging area in replace mode.

IKM Oracle Insert

Integrates data into an Oracle target table in append mode. The data is loaded directly in the target table with a single INSERT SQL statement. Built-in KM.

IKM Oracle Update

Integrates data into an Oracle target table in incremental update mode. The data is loaded directly into the target table with a single UPDATE SQL statement. Built-in KM.

IKM Oracle Merge

Integrates data into an Oracle target table in incremental update mode. The data is loaded directly into the target table with a single MERGE SQL statement. Built-in KM.

IKM Oracle Multi-Insert

Integrates data from one source into one or many Oracle target tables in append mode, using a multi-table insert statement (MTI). This IKM can be utilized in a single mapping to load multiple targets. Built-in KM.

RKM Oracle

Reverse-engineers tables, views, columns and creates data models to use as targets or sources in Oracle Data Integrator mappings.

LKM SQL to ADWC External Table

Loads data from SQL source to Oracle ADWC using Oracle Object Storage as intermediate staging. You have to use this LKM in combination with Oracle or generic SQL IKM.

Attention:

This KM applies only to Data Integration Platform Cloud.

LKM SQL to ADWC Copy

Loads data from SQL source to Oracle ADWC using Oracle Object Storage as intermediate staging. You have to use this LKM in combination with Oracle or generic SQL IKM.

Note:

This KM applies only to Data Integration Platform Cloud.

LKM SQL to ADWC Copy Direct

Loads data from SQL source to Oracle ADWC using Oracle Object Storage as intermediate staging. You can use this LKM as a standalone KM as you do not need any IKM.

Note:

This KM applies only to Data Integration Platform Cloud.

LKM File to ADWC External Table

Loads data from local or HDFS file source to Oracle ADWC using Oracle Object Storage as intermediate staging. You have to use this LKM in combination with Oracle or generic SQL IKM.

Note:

This KM applies only to Data Integration Platform Cloud.

LKM File to ADWC Copy

Loads data from local or HDFS file source to Oracle ADWC using Oracle Object Storage as intermediate staging. You have to use this LKM in combination with Oracle or generic SQL IKM.

Note:

This KM applies only to Data Integration Platform Cloud.

LKM File to ADWC Copy Direct

Loads data from local or HDFS file source to Oracle ADWC using Oracle Object Storage as intermediate staging. You can use this LKM as a standalone KM as you do not need an IKM for its implementation.

Note:

This KM applies only to Data Integration Platform Cloud.

LKM Oracle to ADWC Datapump

Loads data from Oracle on-premises databases to Oracle ADWC. You can use it in combination with Oracle or generic SQL IKM.

Note:

This KM applies only to Data Integration Platform Cloud.

LKM Object Storage to ADWC Copy

Loads data from Oracle Cloud Object Storage to Oracle ADWC. You can use it in combination with Oracle or generic SQL IKM.

Note:

This KM applies only to Data Integration Platform Cloud.

LKM Object Storage to ADWC Copy Direct

Loads data from Oracle Cloud Object Storage to Oracle ADWC. You can use this LKM to move files/objects from Oracle Object Storage to an ADWC table.

Note:

This KM applies only to Data Integration Platform Cloud.

LKM Object Storage to ADWC External Table

Loads data from Oracle Object Storage to Oracle ADWC using External Table method. You can use this LKM in combination with Oracle or generic SQL IKM.

Note:

This KM applies only to Data Integration Platform Cloud.

Prerequisites

The following prerequisites are essential for connecting to an ADWC environment. Make sure you complete them before connecting to ADWC.

Note:

The following prerequisites are common for both ODI Studio and ODI Agent.

Wallet Configuration

All connections to ADWC require the use of an Oracle Wallet to manage public key security credentials. A wallet is a password-protected container used to store authentication and signing credentials, including private keys, certificates, and trusted certificates needed by SSL. Oracle Wallet provides a simple and easy method to manage database credentials across multiple domains. It allows you to update database credentials by updating the Wallet instead of having to change individual data source definitions. To connect to ADWC, applications need access to the Oracle Wallet.

For more details on wallet, refer to Securing Passwords in Application Design section of Managing Security for Application Developers in Database Security Guide.

The JDBC properties require a Wallet file location.

  • Get the wallet zip file from the ADWC wallet location and place it in a local directory accessible to both ODI Studio and ODI Agent. The default location of the wallet file is <homedir>/.odi/oracledi/ewallet. For an auto login wallet, upload the zip as the wallet file.
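As a sketch, the wallet zip can be staged into the default location with a few shell commands. The paths here are assumptions; substitute your actual wallet download location:

```shell
# Hypothetical paths -- adjust to your ADWC wallet download and ODI home.
WALLET_ZIP=/tmp/wallet_ADWC1.zip
WALLET_DIR="$HOME/.odi/oracledi/ewallet"

# Create the default wallet directory used by ODI Studio and the ODI Agent.
mkdir -p "$WALLET_DIR"

# Extract the downloaded wallet zip into it (uncomment once the zip exists).
# unzip -o "$WALLET_ZIP" -d "$WALLET_DIR"

echo "Wallet directory: $WALLET_DIR"
```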

Java Security configuration

Note:

Steps listed below are not required for JDK 1.8.0_u161 or later versions.
Update the java.security file, located in JDK_HOME\jre\lib\security, by adding the following two lines to the existing list of security providers:
  • security.provider.10=sun.security.mscapi.SunMSCAPI

  • security.provider.11=oracle.security.pki.OraclePKIProvider

The updated provider list should read:

security.provider.1=sun.security.provider.Sun
security.provider.2=sun.security.rsa.SunRsaSign
security.provider.3=sun.security.ec.SunEC
security.provider.4=com.sun.net.ssl.internal.ssl.Provider
security.provider.5=com.sun.crypto.provider.SunJCE
security.provider.6=sun.security.jgss.SunProvider
security.provider.7=com.sun.security.sasl.Provider
security.provider.8=org.jcp.xml.dsig.internal.dom.XMLDSigRI
security.provider.9=sun.security.smartcardio.SunPCSC
security.provider.10=sun.security.mscapi.SunMSCAPI
security.provider.11=oracle.security.pki.OraclePKIProvider

Preparing for OCI Connection

Follow the steps below to create an OCI connection:

  1. Download and install Oracle Instant Client version 12.2.0.1.0. For instructions, see http://www.oracle.com/technetwork/topics/linuxx86-64soft-092277.html

  2. After installing the Oracle Instant Client, add its folder to the LD_LIBRARY_PATH environment variable: export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/oracle/instantclient_12_2

  3. Unzip the credential file into a new directory, for example /home/oracle/wallet

  4. Navigate to the new wallet directory, and modify the sqlnet.ora file as follows:

    • Specify the wallet location.

      For Example —
      WALLET_LOCATION = (SOURCE = (METHOD = file) (METHOD_DATA = (DIRECTORY=/home/oracle/wallet)))
    • add the following two SSL parameters in the sqlnet.ora file:
      SSL_SERVER_DN_MATCH= TRUE 
      SSL_VERSION = 1.2 
  5. Change or set TNS_ADMIN to the directory where the unzipped credential files are located. For instance, if your wallet file was unzipped to a directory called /home/oracle/wallet, then set the TNS_ADMIN parameter as follows:

    TNS_ADMIN=/home/oracle/wallet
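Putting steps 2 through 5 together, a minimal environment setup for an OCI connection might look like the following. The directory names are the examples used above; adjust them to your installation:

```shell
# Example locations from the steps above -- adjust to your installation.
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/home/oracle/instantclient_12_2"

# Point TNS_ADMIN at the directory holding the unzipped credential files
# (sqlnet.ora, tnsnames.ora, ewallet.p12, ...).
export TNS_ADMIN=/home/oracle/wallet

echo "TNS_ADMIN=$TNS_ADMIN"
```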

Agent Configurations for OCI

Perform the following configurations to use OCI to connect ODI Agents (Standalone and J2EE) with ADWC:

  • For Local (No Agent), ODI Standalone Agent and J2EE Agent, set the instant client folder in the LD_LIBRARY_PATH parameter before launching ODI Studio. For example, export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/oracle/instantclient_12_2

    Note:

    The jars or directories already present in the LD_LIBRARY_PATH parameter should be consistent with the current version of the instant client jars. If not, you may get an error.
  • For ODI Standalone Agent, configure the instant client library path in the ODI_ADDITIONAL_JAVA_OPTIONS variable, present in instance.sh/cmd of the standalone domain. The instance.sh file is located at <DOMAIN_HOME>/config/fmwconfig/components/ODI/<agent-name>/bin. For Example,
    ODI_ADDITIONAL_JAVA_OPTIONS="${ODI_ADDITIONAL_JAVA_OPTIONS} -Djava.library.path=${ODI_HOME}/../sdk/lib:/home/oracle/instantclient_12_2"
    
  • For ODI J2EE Agent, set JAVA_OPTIONS parameter to $JAVA_OPTIONS -Djava.library.path=/home/oracle/instantclient_12_2. For example, export JAVA_OPTIONS="$JAVA_OPTIONS -Djava.library.path=/home/oracle/instantclient_12_2"

Setting up the Topology

Note:

The ADWC data server is created under the Oracle technology.

Setting up the Topology consists of:

Creating an Oracle Data Server

Create a data server for ADWC using the standard procedure, as described in Creating a Data Server of Administering Oracle Data Integrator. This section details only the properties required to be set in the data server created under Oracle technology for ADWC:
  1. In the Definition tab:

    Data Server

    • Name : Enter a name for the data server definition.

    • Instance / dblink (Data Server): TNS alias used for this Oracle instance. It is used to identify the Oracle instance when using database links and SQL*Loader.

    Connection

    • User/Password: Oracle user (with its password), having select privileges on the source schemas, select/insert privileges on the target schemas and select/insert/object creation privileges on the work schemas that will be indicated in the Oracle physical schemas created under this data server.

    • JNDI Connection : Select this check-box to configure the JNDI connection settings. Navigate to the JNDI tab, and fill in the required fields.

      Note:

      The JNDI Connection field is not applicable for ADWC.
    • Use Credential File

      Note:

      It is required to use a credential file with ADWC.
      Select this check box to upload the connection details directly from a pre-configured wallet file or credential zip file. The JNDI Connection check box above is disabled, and the following Wallet-related fields appear:

    Credential Details

    Note:

    This section applies only to Data Integration Platform Cloud.
    • Credential File: Click the browse icon present beside the Credential File text box, to browse for the location of the required wallet file containing the connection details.

      By default, the credential location is populated to point to the same wallet file (ewallet.p12) or credential zip file (auto login wallet) that is used for login, which is present in <homedir>/.odi/oracledi/ewallet. You can also browse for any other location where this credential file is stored with the required database connection details.

      Note:

      ODI supervisor can distribute the password protected wallet and credential zip file to other ODI users.
    • Click the Connection Details drop down arrow to choose the required connection URL from the list of available connection URLs retrieved from tnsnames.ora.

    • Upon selecting the required connection URL, the JDBC Driver, JDBC URL, Username and Password fields are disabled, and their values are auto-populated with credentials retrieved from the pre-configured wallet file or credential zip file. You can change them only through the list of connections available in the wallet file (ewallet.p12) or credentials zip (tnsnames.ora).

    • Click Save to save the data server details.

    • Click Test Connection to test the established connection.

  2. The ADWC Data Server (using the Oracle Technology) supports JDBC. Fill in the following details of the JDBC tab:

    ODI users can use a Thin or OCI connection to access ADWC.

    Note:

    The JDBC tab will be auto populated with the JDBC information contained in the credential file. You can review the auto-populated information in the JDBC tab and make edits (only if required).
    • JDBC Driver: Name of the JDBC driver used for connecting to the data server: oracle.jdbc.OracleDriver

    • JDBC URL : URL allowing you to connect to the data server.

      Format of the JDBC URL: jdbc:oracle:thin:@<network name or ip address of the Oracle machine>:<port of the Oracle listener (1521)>:<name of the Oracle instance>

      1. To connect an ADWC instance with the Oracle JDBC thin driver, use a database URL as shown below:

        jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=<hostname or ipaddress>)(PORT=<port>))(CONNECT_DATA=(SERVICE_NAME=<db service>)) (security=(ssl_server_cert_dn="<certificate_info>")) )

        For Example: jdbc:oracle:thin:@(description=(address=(protocol=tcps)(port=1522)(host=129.146.11.169))(connect_data=(service_name="<dbservice.oracle.com>"))(security=(ssl_server_cert_dn="<certificate_info>")))

      2. To connect an ADWC instance with JDBC OCI, use a database URL as shown below:

        jdbc:oracle:oci8:@tnsname, where tnsname is the TNS alias identifying the required host, port, and service (or SID) combination.

        For Example jdbc:oracle:oci8:@adwc_high

    • JDBC Properties: (Wallet Specific Properties)

      • oracle.net.ssl_server_dn_match = true

        This property is used to force the distinguished name (dn) of the server to match its service name.

      • oracle.net.wallet_location = (SOURCE=(METHOD=file)(METHOD_DATA=(DIRECTORY=<wallet_directory>)))

        For Example: oracle.net.wallet_location=(SOURCE=(METHOD=file)(METHOD_DATA=(DIRECTORY="/network/admin/ewallet")))

Creating an Oracle Physical Schema

Create an Oracle physical schema for ADWC using the standard procedure, as described in Creating a Physical Schema in Administering Oracle Data Integrator.
Create a logical schema for this physical schema using the standard procedure, as described in Creating a Logical Schema in Administering Oracle Data Integrator and associate it in a given context.

Creating and Reverse-Engineering an Oracle Model

This section contains the following topics:

Create an Oracle Model

Create an Oracle Model using the standard procedure, as described in Creating a Model of Developing Integration Projects with Oracle Data Integrator.

Reverse Engineer an Oracle Model

An Oracle model for ADWC supports both Standard reverse-engineering, which uses only the abilities of the JDBC driver, and Customized reverse-engineering, which uses an RKM to retrieve the structure of the objects directly from the Oracle dictionary.

In most cases, consider using the standard JDBC reverse-engineering to start. Standard reverse-engineering with Oracle retrieves tables, views, columns and references.

Consider switching to customized reverse-engineering for retrieving more metadata. Oracle customized reverse-engineering retrieves the table and view structures, including columns, indexes, check constraints, synonyms, and references.

Standard Reverse-Engineering

To perform a Standard Reverse-Engineering on an Oracle model for ADWC, use the usual procedure, as described in Reverse-engineering a Model of Developing Integration Projects with Oracle Data Integrator.

Customized Reverse-Engineering

To perform a Customized Reverse-Engineering on Oracle model for ADWC with a RKM, use the usual procedure, as described in Reverse-engineering a Model of Developing Integration Projects with Oracle Data Integrator.

This section details only the fields specific to the Oracle technology. In the Reverse Engineer tab of the Oracle Model, select the KM — RKM Oracle.<project name>.

Designing a Mapping

You can use Oracle ADWC as a source or a target of a mapping. It is also possible to create ETL-style mappings based on the Oracle technology for ADWC.

The recommendations in this section help in the selection of the KM for different situations concerning an Oracle Database Server.

Loading data

ADWC can be used as a source or target of a mapping. The choice of the Loading Knowledge Module (LKM) used in a mapping to move data between Oracle and another type of data server determines the behavior and performance of the mapping.

The following KMs implement optimized methods for loading data from an Oracle database into an ADWC target database. In addition to these KMs, you can also use the Generic SQL KMs or the KMs specific to the other technologies involved.

Loading Data using Oracle KMs

You can load data into ADWC by designing a mapping in which the target Oracle datastores belong to an ADWC schema, using the following Oracle KMs:

  • LKM SQL to Oracle (Built-In)

  • IKM Oracle Insert

  • IKM Oracle Update

  • IKM Oracle Merge

  • IKM Oracle Multi-Insert

Loading Data using SQL*Loader KMs

You can also load data into Oracle tables for ADWC using the SQL*Loader KM. LKM File to Oracle (SQLLDR) loads data into Oracle tables from files. You can design a mapping that uses the datastores of an Oracle schema for ADWC as the target of the mapping, where the data is loaded using the SQL*Loader KM.

Connection Setup for SQL*Loader KMs

Make sure your tnsnames.ora and sqlnet.ora properties are configured to use a Wallet file.

For Example:

  1. sqlnet.ora :
    WALLET_LOCATION=(SOURCE = (METHOD = file)
    (METHOD_DATA = (DIRECTORY="<wallet_directory>")))
    SSL_SERVER_DN_MATCH=yes
  2. tnsnames.ora :

    • <db_name>_DB_high=(description=(address=(protocol=tcps)(port=<port>)(host=<hostname or ipaddress>))(connect_data=(service_name=<db_service>))(security=(ssl_server_cert_dn="<certificate_info>")) )

    • <db_name>_DB_medium=(description=(address=(protocol=tcps)(port=<port>)(host=<hostname or ipaddress>))(connect_data=(service_name=<db_service>))(security=(ssl_server_cert_dn="<certificate_info>")) )

    • <db_name>_DB_low=(description=(address=(protocol=tcps)(port=<port>)(host=<hostname or ipaddress>))(connect_data=(service_name=<db_service>))(security=(ssl_server_cert_dn="<certificate_info>")) )

For more details, refer to Use of an External Password Store to Secure Passwords section of Database Security Guide.

Loading Data directly into ADWC

Note:

This section applies only to Data Integration Platform Cloud.

You can use the following knowledge modules for loading data directly into Oracle ADWC.

To load data directly into ADWC, set the source of the mapping to a File/HDFS or SQL technology such as Oracle. The target of the mapping has to be Oracle technology, more specifically an ADWC database. An Oracle Object Storage staging area is used to temporarily store files; it is defined by setting the KM option TEMP_OBJECT_STORAGE_SCHEMA (a logical schema). Some properties, such as user/password, are retrieved from the Oracle Object Storage data server attached to the TEMP_OBJECT_STORAGE_SCHEMA logical schema. If a temporary local file needs to be created for a SQL source, define its location with the TEMP_FILE_SCHEMA KM option. If you use transform components, they can be included in the source execution unit for a SQL source and/or on the target ADWC execution unit. A file source does not support source transformations. Direct Copy KMs do not support transformations on the target.
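For instance, a SQL-to-ADWC mapping wired as described above might carry KM option values such as the following. The logical schema names are hypothetical placeholders; use the logical schemas defined in your own topology:

```text
TEMP_OBJECT_STORAGE_SCHEMA = OBJECT_STORAGE_TMP    -- an Oracle Object Storage logical schema
TEMP_FILE_SCHEMA           = FILE_GENERIC_TMP      -- a File logical schema (SQL sources only)
```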

LKM SQL to ADWC External Table

This LKM helps in loading data from a SQL source to Oracle ADWC using Oracle Object Storage as intermediate staging. The result of the SQL query is first loaded into the Oracle Object Storage staging area, and then the External Table method is used to pull the data from Oracle Object Storage. You must set the TEMP_OBJECT_STORAGE_SCHEMA and TEMP_FILE_SCHEMA KM options to designate temporary storage locations.

You have to use this LKM in combination with Oracle or generic SQL IKM.

The KM invokes the ODI tool OdiSqlUnload to unload SQL query data to a local file.

For Example –
OdiSqlUnload "-FILE=/tmp/odi_e3a779f8-ddd3-49d2-9575-a9f3c675b0f6_FILTER_AP.txt" "-DRIVER=oracle.jdbc.OracleDriver" "-URL=jdbc:oracle:thin:@//slc03sap:1521/flex" "-USER=system" "-PASS=<@=odiRef.getInfo("SRC_ENCODED_PASS") @>" "-FILE_FORMAT=VARIABLE" "-FIELD_SEP=," "-ROW_SEP=" "-DATE_FORMAT=MM-DD-YYYY" "-CHARSET_ENCODING=ISO8859_1"

SELECT
  PER.PID AS PID ,
  PER.PNAME AS PNAME 
FROM
  UT_TD_D_1.PERSON PER
WHERE (PER.PID = 2)

The KM then calls ODI tool OdiObjectStorageUpload to upload the local file to Oracle Object Storage.

OdiObjectStorageUpload "-TRG_LOGICAL_SCHEMA=Object Storage - SRC1" "-SRC_LOGICAL_SCHEMA=FILE_GENERIC_TMP" "-FILE_NAMES_FILTER=odi_e3a779f8-ddd3-49d2-9575-a9f3c675b0f6_FILTER_AP.txt" "-OVERWRITE=true"

The KM creates a temporary staging external table to pull the data from Oracle Object Storage.

BEGIN
  dbms_cloud.create_external_table(
    table_name =>'C$_0FILTER_EXT',
    credential_name =>'ODI',
    file_uri_list =>'https://swiftobjectstorage.us-ashburn-1.oraclecloud.com/v1/dwcsdemo/odi_bojana/odi_e3a779f8-ddd3-49d2-9575-a9f3c675b0f6_FILTER_AP.txt',
    column_list => 'PID NUMBER(2), PNAME VARCHAR2(20)',
    format => json_object(
    'type' VALUE 'CSV',
    'skipheaders' VALUE '0',
    'dateformat' VALUE 'AUTO')
 );
END;

The KM optionally creates the credential object to connect to Object Storage, but can also re-use an existing one.

BEGIN
  dbms_cloud.create_credential(
    credential_name => 'ODI_FLEX',
    username => 'tenant15',
    password => 'xxxxxxxx'
  );
END;

Note:

LKM SQL to ADWC External Table has two different KM options for formatting date/time datatypes:
  1. UNLOAD_DATE_FORMAT — Use this KM option while unloading the SQL query into a text file. The formatting syntax conforms to Java Date and Time Patterns.

  2. DATE_FORMAT — Use this KM option while loading the object storage text file into ADWC. The formatting syntax conforms to the Oracle database syntax for formatting the DATE datatype.

If you are loading into a TIMESTAMP column, use the Advanced option ADD_FORMAT_PROPERTIES to add the timestampformat property. It is very important to synchronize UNLOAD_DATE_FORMAT and DATE_FORMAT/timestampformat to use identical format settings, even though they use different syntax.

Here is an example of loading from a source TIMESTAMP column into a target TIMESTAMP column.

UNLOAD_DATE_FORMAT = yyyy-MM-dd
UNLOAD_TIMESTAMP_FORMAT = HH:mm:ss.SSS
ADD_FORMAT_PROPERTIES = 'timestampformat' VALUE 'YYYY-MM-DD HH24:MI:SS.FF3'

Also, make sure that the TIMESTAMP column precision allows the given format to be stored.

KM Options

This KM has the following options:

  • CREDENTIAL_NAME — It provides the credential name to connect to Oracle Object Storage. The default value is ODI.

  • CREATE_CREDENTIAL — It creates new credentials. If set to False, ODI reuses the existing credentials.

  • GENERATE_FIELD_LIST — If this KM option is set to False, then the field_list clause is skipped and the default settings of the ORACLE_LOADER access driver are applied.

    Note:

    You always have to generate the field_list clause, as it is required by the Fixed file format.
  • TEMP_OBJECT_STORAGE_SCHEMA — It specifies the name of the logical schema defining the location of the temporary file that will be stored in the Oracle Object Storage staging area. This must be an Oracle Object Storage technology logical schema.

  • CLEANUP_TEMPORARY_OBJECTS — Set this property to True if you want temporary objects to be automatically cleaned up.

  • ADD_COMPRESSION — It compresses data before loading. Set this property to True if you want to compress source data before loading into Oracle Object Storage. Additional KM options COMPRESSION_TYPE and KEEP_SOURCE_FILES define compression preferences.

  • COMPRESSION_TYPE — It allows you to configure the required compression type. Select the type of compression you want to apply to source data before loading into Oracle Object Storage.

  • KEEP_SOURCE_FILES — Use this KM option to retain the original source files after compression. Set this property to True (the default) if you wish to compress the target files and retain the original files.

    Note:

    gzip supports the KEEP_SOURCE_FILES option starting from version 1.6 only.

Formatting

  • DELIMITED_FILE_FORMAT — It specifies the delimited file format, which can be CSV (default) or a common delimited format known to the ORACLE_LOADER access driver. You can use this KM option only if the source datastore File Format property is set to Delimited.

  • COMPRESSION — It specifies the compression method of the source file. There are three possible compression types, or AUTO. An empty value implies no compression, and AUTO implies that the compression type is auto-detected.

  • DATE_FORMAT — It helps to set a specific date format. The default format option AUTO searches for the following formats:

    J
    MM-DD-YYYYBC
    MM-DD-YYYY
    YYYYMMDD HHMISS
    YYMMDD HHMISS
    YYYY.DDD
    YYYY-MM-DD
  • REJECT_LIMIT — An error message is displayed after the specified number of rows has been rejected. The default value is zero.

  • CONVERSION_ERRORS — It specifies how conversion errors are processed. If a row throws an error because of a conversion error, the related columns are stored as null or the row is rejected.

  • TRIM_SPACES — It helps to trim the leading and trailing spaces of the fields. Select the required trim option from the provided list of trim options.

  • IGNORE_BLANK_LINES — If set to True, blank lines are ignored without throwing any error.

  • IGNORE_MISSING_COLUMNS — If there are more columns in the field_list than in the source files, the extra columns are stored as null.

  • TRUNCATE_COLUMNS — If the data in a file is too long for a field, this option truncates the value of the field rather than rejecting the row.
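To illustrate, several of the Formatting options above correspond to entries in the format document passed to the dbms_cloud calls shown earlier. A sketch of such a fragment, with assumed option values, might be (the parameter names follow the DBMS_CLOUD format options; verify them against your database version):

```text
format => json_object(
  'type' VALUE 'CSV',
  'rejectlimit' VALUE '10',
  'ignoreblanklines' VALUE 'true',
  'truncatecol' VALUE 'true',
  'dateformat' VALUE 'YYYY-MM-DD')
```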

Advanced

  • ADD_FORMAT_PROPERTIES — This option allows adding custom format properties.

    Use the following syntax: '<prop1>' VALUE '<value1>', '<prop2>' VALUE '<value2>' ...

  • OVERWRITE_FIELD_LIST — This option gives you the possibility to redefine the source file field definitions where ODI does not have enough information about your input data. The details that you enter here are used as the field_list parameter of the dbms_cloud.create_external_table function call.

    For more information, refer to the DBMS_CLOUD package documentation.
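As a hypothetical illustration of the two Advanced options, the following values add the timestampformat property used earlier plus an extra property, and redefine the field list. The blankasnull property and the CHAR field definitions are illustrative assumptions, not values taken from the examples above:

```text
ADD_FORMAT_PROPERTIES = 'timestampformat' VALUE 'YYYY-MM-DD HH24:MI:SS.FF3', 'blankasnull' VALUE 'true'
OVERWRITE_FIELD_LIST  = PID CHAR(2), PNAME CHAR(20)
```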

Cleanup

  • CLEANUP_EXTERNAL_TABLE — Set this property to True if you want the external table to be automatically cleaned up at the end of every execution.

  • CLEANUP_CREDENTIAL — Set this property to True if you want the credential object to be automatically cleaned up at the end of every execution. Cleanup happens only if the CREATE_CREDENTIAL option is also set to True.

SQL Unload Options

  • TEMP_FILE_SCHEMA — It specifies the name of the logical schema defining the location of the temporary file that is stored before uploading to Oracle Object Storage. This must be a File technology logical schema. The temporary file is stored on the local file system where the ODI agent is running.

  • UNLOAD_DATE_FORMAT — It specifies the output format used for date datatypes. This format follows Java date format standards.

  • UNLOAD_TIMESTAMP_FORMAT — It specifies the output format used for time datatypes. This format follows Java time format standards.

  • CHARSET_ENCODING — It specifies the character set encoding.

  • FETCH_SIZE — It denotes the number of rows (records read) requested by the ODI agent on each communication with the data server.

LKM SQL to ADWC Copy

This LKM helps in loading data from a SQL source to Oracle ADWC using Oracle Object Storage as intermediate staging. The result of the SQL query is first loaded into the Oracle Object Storage staging area. Then dbms_cloud.copy_data() is used to load the data from Oracle Object Storage into a staging table on ADWC. You must set the TEMP_OBJECT_STORAGE_SCHEMA and TEMP_FILE_SCHEMA KM options to designate temporary storage locations.

You can use this LKM in combination with Oracle or generic SQL IKM.

OdiSqlUnload and OdiObjectStorageUpload follow the same steps as described in LKM SQL to ADWC External Table. LKM SQL to ADWC Copy then creates a temporary staging table on the ADWC execution unit and pulls the data from Object Storage into it using the dbms_cloud.copy_data() function.

For Example
create table STAR.C$_0FILTER
(
  PID NUMBER(2),
  PNAME VARCHAR2(20)
);
BEGIN
  dbms_cloud.copy_data(
    schema_name => 'STAR',
    table_name =>'C$_0FILTER',
    credential_name =>'ODI',
    file_uri_list =>'https://swiftobjectstorage.us-ashburn-1.oraclecloud.com/v1/dwcsdemo/odi_bojana/odi_d0b58fb8-bde5-4ce5-89da-0d258c98380e_FILTER_AP.txt',
    format => json_object(
    'type' VALUE 'CSV',
    'skipheaders' VALUE '0',
    'dateformat' VALUE 'AUTO')
 );
END;

The KM optionally creates the credential object to connect to Oracle Object Storage, but can also re-use an existing one:

BEGIN  
  dbms_cloud.create_credential(
    credential_name => 'ODI_FLEX',
    username => 'tenant15',
    password => 'xxxxxxxx'
  );
END;

Note:

LKM SQL to ADWC Copy has two different KM options for formatting date/time datatypes:
  • UNLOAD_DATE_FORMAT — Use this KM option while unloading the SQL query into a text file. The formatting syntax conforms to Java Date Patterns.

  • DATE_FORMAT — Use this KM option while loading the object storage text file into ADWC. The formatting syntax conforms to the Oracle database syntax for formatting the DATE datatype.

If you are loading into a TIMESTAMP column, use the Advanced option ADD_FORMAT_PROPERTIES to add the timestampformat property. It is very important to synchronize UNLOAD_DATE_FORMAT and DATE_FORMAT/timestampformat to use identical format settings, even though they use different syntax.

Here is an example of loading from a source TIMESTAMP column into a target TIMESTAMP column.

  • UNLOAD_DATE_FORMAT = yyyy-MM-dd HH:mm:ss.SSS

  • ADD_FORMAT_PROPERTIES = 'timestampformat' VALUE 'YYYY-MM-DD HH24:MI:SS.FF3'

    Also, make sure that TIMESTAMP column precision allows the given format to be stored.

KM Options

This KM has the following KM options:

  • CREDENTIAL_NAME — It provides the credential name to connect to Oracle Object Storage. The default value is ODI.

  • CREATE_CREDENTIAL — It creates new credentials. If set to False, ODI reuses the existing credentials.

  • GENERATE_FIELD_LIST — If this KM option is set to False, then the field_list clause is skipped and the default settings of the ORACLE_LOADER access driver are applied.

    Note:

    You have to always generate the field_list clause as it is required by Fixed file format.
  • TEMP_OBJECT_STORAGE_SCHEMA — It specifies the name of logical schema defining the location of the temporary file that will be stored in Object Storage staging area. This must be an Oracle Object Storage technology logical schema.

  • ADD_COMPRESSION — It compresses data before loading. Set this property to True, if you want to compress source data before loading into Oracle Object Storage. Additional KM options COMPRESSION_TYPEand KEEP_SOURCE_FILESdefine compression preferences.

  • COMPRESSION_TYPE — It allows you to configure the required compression type. Select the type of compression you want to apply on source data before loading into Oracle Object Storage.

  • KEEP_SOURCE_FILES — Use this KM option to retain the original source files after compression. Set this property to True (by default), if you wish to compress the target files and retain the original files.

    Note:

    gzip supports KEEP_SOURCE_FILES option, starting from version 1.6 only.

Formatting

  • DELIMITED_FILE_FORMAT — It specifies the delimited file format, which can be CSV (the default) or a common delimited format known to the ORACLE_LOADER access driver. You can use this KM option only if the source datastore File Format property is set to Delimited.

  • COMPRESSION — It specifies the compression method of the source file. There are three possible compression types, or AUTO. An empty value implies no compression, and AUTO implies that the compression type is auto-detected.

  • DATE_FORMAT — It sets a specific date format. The default format option AUTO searches for the following formats:

    J
    MM-DD-YYYYBC
    MM-DD-YYYY
    YYYYMMDD HHMISS
    YYMMDD HHMISS
    YYYY.DDD
    YYYY-MM-DD
  • REJECT_LIMIT — An error is reported after the specified number of rows are rejected. The default value is zero.

  • CONVERSION_ERRORS — It specifies how conversion errors are processed. If a row raises a conversion error, the related columns are stored as null or the row is rejected.

  • TRIM_SPACES — It trims the leading and trailing spaces of the fields. If set to True, the specified spaces are trimmed.

  • IGNORE_BLANK_LINES — If set to True, blank lines are ignored without throwing any error.

  • IGNORE_MISSING_COLUMNS — If there are more columns in the field_list than in the source file, the extra columns are stored as null.

  • TRUNCATE_COLUMNS — If the data in a file is too long for a field, this option truncates the value of the field rather than rejecting the row.

Advanced

  • ADD_FORMAT_PROPERTIES — This option allows adding custom format properties.

    Use the following syntax: '<prop1>' VALUE '<value1>', '<prop2>' VALUE '<value2>' ...

  • OVERWRITE_FIELD_LIST — This option lets you redefine the source file field definitions when ODI does not have enough information about your input data. The details that you enter here are used as the field_list parameter of the dbms_cloud.create_external_table function call.

    For more details, refer to the DBMS_CLOUD package documentation.
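For instance, the ADD_FORMAT_PROPERTIES value below (a sketch; quote and timestampformat are standard DBMS_CLOUD format options) would be appended to the format document generated by the KM:

```sql
-- KM option value entered in ADD_FORMAT_PROPERTIES (sketch):
--   'quote' VALUE '"', 'timestampformat' VALUE 'YYYY-MM-DD HH24:MI:SS.FF3'
-- Resulting format argument in the generated dbms_cloud call:
format => json_object(
  'type' VALUE 'CSV',
  'skipheaders' VALUE '0',
  'quote' VALUE '"',
  'timestampformat' VALUE 'YYYY-MM-DD HH24:MI:SS.FF3')
```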

Cleanup

  • CLEANUP_TEMPORARY_OBJECTS — Set this property to True if you want temporary objects to be automatically cleaned up.

  • CLEANUP_CREDENTIAL — Set this property to True if you want the credential object to be automatically cleaned up at the end of every execution. Cleanup happens only if the CREATE_CREDENTIAL option is also set to True.

SQL Unload Options

  • TEMP_FILE_SCHEMA — It specifies the name of the logical schema defining the location of the temporary file that is stored before uploading to Object Storage. This must be a File technology logical schema. The temporary file is stored on the local file system where the ODI agent is running.

  • UNLOAD_DATE_FORMAT — It specifies the output format used for date datatypes. This format follows Java date/time format standards.

  • UNLOAD_TIMESTAMP_FORMAT — It specifies the output format used for timestamp datatypes. This format follows Java time format standards.

  • CHARSET_ENCODING — It specifies the character set encoding to use.

  • FETCH_SIZE — It denotes the number of rows (records read) requested by the ODI agent on each communication with the data server.

LKM SQL to ADWC Copy Direct

This LKM loads data from a SQL source to Oracle ADWC, using Oracle Object Storage as intermediate staging. The result of the SQL query is first loaded into the Oracle Object Storage staging area. Then the dbms_cloud.copy_data() function is used to load data from Oracle Object Storage directly into the ADWC target table. All target columns are loaded, irrespective of whether they are mapped. Only transformations on the source are supported. You must set the TEMP_OBJECT_STORAGE_SCHEMA and TEMP_FILE_SCHEMA KM options to designate temporary storage locations.

You can use this LKM as a standalone KM as you do not need any IKM.

The OdiSqlUnload and OdiObjectStorageUpload steps are the same as described in LKM SQL to ADWC External Table. LKM SQL to ADWC Copy Direct then pulls the data from Object Storage and loads it directly into the target using the dbms_cloud.copy_data() function.

For Example
BEGIN
  dbms_cloud.copy_data(
    schema_name => 'STAR',
    table_name =>'PERSON',
    credential_name =>'ODI',
    file_uri_list =>'https://swiftobjectstorage.us-ashburn-1.oraclecloud.com/v1/dwcsdemo/odi_bojana/odi_16a97fd6-8a61-4303-8f83-81988222b8cb_FILTER_AP.txt',
    format => json_object(
    'type' VALUE 'CSV',
    'skipheaders' VALUE '0',
    'dateformat' VALUE 'AUTO')
 );
END;

The KM optionally creates the credential object to connect to Object Storage, but can also re-use an existing one.

BEGIN
  dbms_cloud.create_credential(
    credential_name => 'ODI_FLEX',
    username => 'tenant15',
    password => 'xxxxxxxx'
  );
END;

Note:

LKM SQL to ADWC Copy Direct has two different KM options for formatting date/time datatypes:
  • UNLOAD_DATE_FORMAT is used while unloading the SQL query result into a text file. The formatting syntax conforms to Java date and time patterns.

  • DATE_FORMAT is used while loading the Object Storage text file into ADWC. The formatting syntax conforms to the Oracle database syntax for formatting the DATE datatype.

If you are loading into a TIMESTAMP column, use the Advanced option ADD_FORMAT_PROPERTIES to add the timestampformat property. It is very important to synchronize UNLOAD_DATE_FORMAT and DATE_FORMAT/timestampformat to use identical format settings, even though they use different syntax.

Here is an example of loading from source TIMESTAMP column into target TIMESTAMP column.

UNLOAD_DATE_FORMAT = yyyy-MM-dd HH:mm:ss.SSS
ADD_FORMAT_PROPERTIES = 'timestampformat' VALUE 'YYYY-MM-DD HH24:MI:SS.FF3'

Also, make sure that the TIMESTAMP column precision allows the given format to be stored.

KM Options

This KM has the following options:

  • CREDENTIAL_NAME — It provides the credential name used to connect to Oracle Object Storage. The default value is ODI.

  • CREATE_CREDENTIAL — It creates new credentials. If set to False, ODI reuses the existing credentials.

  • GENERATE_FIELD_LIST — If this KM option is set to False, the field_list clause is skipped and the default settings of the ORACLE_LOADER access driver are applied.

    Note:

    Always generate the field_list clause for the Fixed file format, as it is required.
  • TEMP_OBJECT_STORAGE_SCHEMA — It specifies the name of the logical schema defining the location of the temporary file that will be stored in the Oracle Object Storage staging area. This must be an Oracle Object Storage technology logical schema.

  • CLEANUP_TEMPORARY_OBJECTS — Set this property to True if you want temporary objects to be automatically cleaned up.

  • ADD_COMPRESSION — It compresses data before loading. Set this property to True if you want to compress source data before loading into Oracle Object Storage. Additional KM options COMPRESSION_TYPE and KEEP_SOURCE_FILES define compression preferences.

  • COMPRESSION_TYPE — It allows you to configure the required compression type. Select the type of compression you want to apply to source data before loading into Oracle Object Storage.

  • KEEP_SOURCE_FILES — Use this KM option to retain the original source files after compression. Set this property to True (the default) if you wish to compress the target files and retain the original files.

    Note:

    gzip supports the KEEP_SOURCE_FILES option starting from version 1.6 only.

Formatting

  • DELIMITED_FILE_FORMAT — It specifies the delimited file format, which can be CSV (the default) or a common delimited format known to the ORACLE_LOADER access driver. You can use this KM option only if the source datastore File Format property is set to Delimited.

  • COMPRESSION — It specifies the compression method of the source file. There are three possible compression types, or AUTO. An empty value implies no compression, and AUTO implies that the compression type is auto-detected.

  • DATE_FORMAT — It sets a specific date format. The default format option AUTO searches for the following formats:

    J
    MM-DD-YYYYBC
    MM-DD-YYYY
    YYYYMMDD HHMISS
    YYMMDD HHMISS
    YYYY.DDD
    YYYY-MM-DD
  • REJECT_LIMIT — An error is reported after the specified number of rows are rejected. The default value is zero.

  • CONVERSION_ERRORS — It specifies how conversion errors are processed. If a row raises a conversion error, the related columns are stored as null or the row is rejected.

  • TRIM_SPACES — It trims the leading and trailing spaces of the fields. If set to True, the specified spaces are trimmed.

  • IGNORE_BLANK_LINES — If set to True, blank lines are ignored without throwing any error.

  • IGNORE_MISSING_COLUMNS — If there are more columns in the field_list than in the source file, the extra columns are stored as null.

  • TRUNCATE_COLUMNS — If the data in a file is too long for a field, this option truncates the value of the field rather than rejecting the row.

Advanced

  • ADD_FORMAT_PROPERTIES — This option allows adding custom format properties.

    Use the following syntax: '<prop1>' VALUE '<value1>', '<prop2>' VALUE '<value2>' ...

  • OVERWRITE_FIELD_LIST — This option lets you redefine the source file field definitions when ODI does not have enough information about your input data. The details that you enter here are used as the field_list parameter of the dbms_cloud.create_external_table function call.

    For more details, refer to the DBMS_CLOUD package documentation.

Target

  • CREATE_TARG_TABLE — It creates the target table. Set this option to True if you want to create the target table before loading.

  • TRUNCATE_TARG_TABLE — It truncates the target table. Set this KM option to True if you want to truncate the target table before loading it.

  • DELETE_TARG_TABLE — It deletes data from the target table. Set this KM option to True if you want to delete data from the target table before loading.

Cleanup

  • CLEANUP_CREDENTIAL — Set this property to True if you want the credential object to be automatically cleaned up at the end of every execution. Cleanup happens only if the CREATE_CREDENTIAL option is also set to True.

SQL Unload Options

  • TEMP_FILE_SCHEMA — It specifies the name of the logical schema defining the location of the temporary file that is stored before uploading to Object Storage. This must be a File technology logical schema. The temporary file is stored on the local file system where the ODI agent is running.

  • UNLOAD_DATE_FORMAT — It specifies the output format used for date datatypes. This format follows Java date/time format standards.

  • UNLOAD_TIMESTAMP_FORMAT — It specifies the output format used for timestamp datatypes. This format follows Java time format standards.

  • CHARSET_ENCODING — It specifies the character set encoding to use.

  • FETCH_SIZE — It denotes the number of rows (records read) requested by the ODI agent on each communication with the data server.

LKM File to ADWC External Table

This LKM loads data from a local or HDFS file source to Oracle ADWC, using Oracle Object Storage as intermediate staging. The source file is first loaded into the Oracle Object Storage staging area. The External Table method is then used to pull data from Object Storage. You must set the TEMP_OBJECT_STORAGE_SCHEMA KM option to designate the temporary storage location.

You must use this LKM in combination with an Oracle or generic SQL IKM.

The KM invokes the ODI tool OdiObjectStorageUpload to upload the local file to Object Storage.

For Example

OdiObjectStorageUpload "-TRG_LOGICAL_SCHEMA=Object Storage - SRC1" "-SRC_LOGICAL_SCHEMA=FILE_GENERIC_TMP" "-FILE_NAMES_FILTER=person_no_header.csv" "-OVERWRITE=true"

The KM then creates a temporary staging external table to pull the data from Oracle Object Storage.

BEGIN
  dbms_cloud.create_external_table(
    table_name =>'C$_0PER_EXT',
    credential_name =>'ODI',
    file_uri_list =>'https://swiftobjectstorage.us-ashburn-1.oraclecloud.com/v1/dwcsdemo/odi_bojana/person_no_header.csv',
    column_list => 'PID NUMBER(2,0), PNAME VARCHAR2(20)',
    format => json_object(
    'type' VALUE 'CSV',
    'skipheaders' VALUE '0',
    'dateformat' VALUE 'AUTO')
 );
END;

The KM optionally creates the credential object to connect to Oracle Object Storage, but can also re-use an existing one.

BEGIN  
  dbms_cloud.create_credential(
    credential_name => 'ODI_FLEX',
    username => 'tenant15',
    password => 'xxxxxxxx'
  );
END;
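The staging external table created above is then consumed by the Oracle or SQL IKM paired with this LKM. In the simplest append case, the integration step is equivalent to a plain INSERT ... SELECT; the sketch below reuses the column names from the example, with a hypothetical STAR.PERSON target table:

```sql
-- Illustrative append-mode integration; STAR.PERSON is a hypothetical target.
-- The external table C$_0PER_EXT reads directly from Object Storage.
INSERT INTO STAR.PERSON (PID, PNAME)
SELECT PID, PNAME
FROM   C$_0PER_EXT;
COMMIT;
```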

KM Options

This KM has the following options:

  • CREDENTIAL_NAME — It provides the credential name used to connect to Oracle Object Storage. The default value is ODI.

  • CREATE_CREDENTIAL — It creates new credentials. If set to False, ODI reuses the existing credentials.

  • GENERATE_FIELD_LIST — If this KM option is set to False, the field_list clause is skipped and the default settings of the ORACLE_LOADER access driver are applied.

    Note:

    Always generate the field_list clause for the Fixed file format, as it is required.
  • TEMP_OBJECT_STORAGE_SCHEMA — It specifies the name of the logical schema defining the location of the temporary file that will be stored in the Object Storage staging area. This must be an Oracle Object Storage technology logical schema.

  • ADD_COMPRESSION — It compresses data before loading. Set this property to True if you want to compress source data before loading into Oracle Object Storage. Additional KM options COMPRESSION_TYPE and KEEP_SOURCE_FILES define compression preferences.

  • COMPRESSION_TYPE — It allows you to configure the required compression type. Select the type of compression you want to apply to source data before loading into Oracle Object Storage.

  • KEEP_SOURCE_FILES — Use this KM option to retain the original source files after compression. Set this property to True (the default) if you wish to compress the target files and retain the original files.

    Note:

    gzip supports the KEEP_SOURCE_FILES option starting from version 1.6 only.

Formatting

  • DELIMITED_FILE_FORMAT — It specifies the delimited file format, which can be CSV (the default) or a common delimited format known to the ORACLE_LOADER access driver. You can use this KM option only if the source datastore File Format property is set to Delimited.

  • COMPRESSION — It specifies the compression method of the source file. There are three possible compression types, or AUTO. An empty value implies no compression, and AUTO implies that the compression type is auto-detected.

  • DATE_FORMAT — It sets a specific date format. The default format option AUTO searches for the following formats:

    J
    MM-DD-YYYYBC
    MM-DD-YYYY
    YYYYMMDD HHMISS
    YYMMDD HHMISS
    YYYY.DDD
    YYYY-MM-DD
  • REJECT_LIMIT — An error is reported after the specified number of rows are rejected. The default value is zero.

  • CONVERSION_ERRORS — It specifies how conversion errors are processed. If a row raises a conversion error, the related columns are stored as null or the row is rejected.

  • TRIM_SPACES — It trims the leading and trailing spaces of the fields. If set to True, the specified spaces are trimmed.

  • IGNORE_BLANK_LINES — If set to True, blank lines are ignored without throwing any error.

  • IGNORE_MISSING_COLUMNS — If there are more columns in the field_list than in the source file, the extra columns are stored as null.

  • TRUNCATE_COLUMNS — If the data in a file is too long for a field, this option truncates the value of the field rather than rejecting the row.

Advanced

  • ADD_FORMAT_PROPERTIES — This option allows adding custom format properties.

    Use the following syntax: '<prop1>' VALUE '<value1>', '<prop2>' VALUE '<value2>' ...

  • OVERWRITE_FIELD_LIST — This option lets you redefine the source file field definitions when ODI does not have enough information about your input data. The details that you enter here are used as the field_list parameter of the dbms_cloud.create_external_table function call.

    For more details, refer to the DBMS_CLOUD package documentation.

Cleanup

  • CLEANUP_TEMPORARY_OBJECTS — Set this property to True if you want temporary objects to be automatically cleaned up.

  • CLEANUP_CREDENTIAL — Set this property to True if you want the credential object to be automatically cleaned up at the end of every execution. Cleanup happens only if the CREATE_CREDENTIAL option is also set to True.

LKM File to ADWC Copy

This LKM loads data from a local or HDFS file source to Oracle ADWC, using Oracle Object Storage as intermediate staging. The source file is first loaded into the Oracle Object Storage staging area. Then dbms_cloud.copy_data() is used to load data from Oracle Object Storage into a staging table on ADWC. You must set the TEMP_OBJECT_STORAGE_SCHEMA KM option to designate the temporary storage location.

You can use this LKM in combination with an Oracle or generic SQL IKM.

The OdiObjectStorageUpload step is the same as in LKM File to ADWC External Table.

LKM File to ADWC Copy then creates a temporary staging table on the ADWC execution unit and pulls the data from Object Storage into it using the dbms_cloud.copy_data() function.

For Example

create table STAR.C$_0PER
(  PID NUMBER(2,0),
   PNAME VARCHAR2(20)
);
BEGIN
  dbms_cloud.copy_data(
    schema_name => 'STAR',
    table_name =>'C$_0PER',
    credential_name =>'ODI',
    file_uri_list =>'https://swiftobjectstorage.us-ashburn-1.oraclecloud.com/v1/dwcsdemo/odi_bojana/person_no_header.csv',
    format => json_object(
    'type' VALUE 'CSV',
    'skipheaders' VALUE '0',
    'dateformat' VALUE 'AUTO')
 );
END;

The KM optionally creates the credential object to connect to Object Storage, but can also re-use an existing one.

BEGIN
  dbms_cloud.create_credential(
    credential_name => 'ODI_FLEX',
    username => 'tenant15',
    password => 'xxxxxxxx'
  );
END;

KM Options

This KM has the following options:

  • CREDENTIAL_NAME — It provides the credential name used to connect to Oracle Object Storage. The default value is ODI.

  • CREATE_CREDENTIAL — It creates new credentials. If set to False, ODI reuses the existing credentials.

  • GENERATE_FIELD_LIST — If this KM option is set to False, the field_list clause is skipped and the default settings of the ORACLE_LOADER access driver are applied.

    Note:

    Always generate the field_list clause for the Fixed file format, as it is required.
  • TEMP_OBJECT_STORAGE_SCHEMA — It specifies the name of the logical schema defining the location of the temporary file that will be stored in the Object Storage staging area. This must be an Oracle Object Storage technology logical schema.

  • ADD_COMPRESSION — It compresses data before loading. Set this property to True if you want to compress source data before loading into Oracle Object Storage. Additional KM options COMPRESSION_TYPE and KEEP_SOURCE_FILES define compression preferences.

  • COMPRESSION_TYPE — It allows you to configure the required compression type. Select the type of compression you want to apply to source data before loading into Oracle Object Storage.

  • KEEP_SOURCE_FILES — Use this KM option to retain the original source files after compression. Set this property to True (the default) if you wish to compress the target files and retain the original files.

    Note:

    gzip supports the KEEP_SOURCE_FILES option starting from version 1.6 only.

Formatting

  • DELIMITED_FILE_FORMAT — It specifies the delimited file format, which can be CSV (the default) or a common delimited format known to the ORACLE_LOADER access driver. You can use this KM option only if the source datastore File Format property is set to Delimited.

  • COMPRESSION — It specifies the compression method of the source file. There are three possible compression types, or AUTO. An empty value implies no compression, and AUTO implies that the compression type is auto-detected.

  • DATE_FORMAT — It sets a specific date format. The default format option AUTO searches for the following formats:

    J
    MM-DD-YYYYBC
    MM-DD-YYYY
    YYYYMMDD HHMISS
    YYMMDD HHMISS
    YYYY.DDD
    YYYY-MM-DD
  • REJECT_LIMIT — An error is reported after the specified number of rows are rejected. The default value is zero.

  • CONVERSION_ERRORS — It specifies how conversion errors are processed. If a row raises a conversion error, the related columns are stored as null or the row is rejected.

  • TRIM_SPACES — It trims the leading and trailing spaces of the fields. If set to True, the specified spaces are trimmed.

  • IGNORE_BLANK_LINES — If set to True, blank lines are ignored without throwing any error.

  • IGNORE_MISSING_COLUMNS — If there are more columns in the field_list than in the source file, the extra columns are stored as null.

  • TRUNCATE_COLUMNS — If the data in a file is too long for a field, this option truncates the value of the field rather than rejecting the row.

Advanced

  • ADD_FORMAT_PROPERTIES — This option allows adding custom format properties.

    Use the following syntax: '<prop1>' VALUE '<value1>', '<prop2>' VALUE '<value2>' ...

  • OVERWRITE_FIELD_LIST — This option lets you redefine the source file field definitions when ODI does not have enough information about your input data. The details that you enter here are used as the field_list parameter of the dbms_cloud.create_external_table function call.

    For more details, refer to the DBMS_CLOUD package documentation.

Cleanup

  • CLEANUP_TEMPORARY_OBJECTS — Set this property to True if you want temporary objects to be automatically cleaned up.

  • CLEANUP_CREDENTIAL — Set this property to True if you want the credential object to be automatically cleaned up at the end of every execution. Cleanup happens only if the CREATE_CREDENTIAL option is also set to True.

LKM File to ADWC Copy Direct

This KM loads data from a local or HDFS file source to Oracle ADWC, using Oracle Object Storage as intermediate staging. The source file is first loaded into the Oracle Object Storage staging area. Then the dbms_cloud.copy_data() function is used to load data from Object Storage directly into the ADWC target table. Source attributes must match target column names; either use data entities with matching attributes or set the GENERATE_FIELD_LIST option to False. All target columns are loaded, irrespective of their mapping.

You must set the TEMP_OBJECT_STORAGE_SCHEMA KM option to designate the temporary storage location. LKM File to ADWC Copy Direct does not support any transformations.

You can use this LKM as a standalone KM, as you do not need an IKM for its implementation. The OdiObjectStorageUpload step is the same as in LKM File to ADWC External Table. LKM File to ADWC Copy Direct then pulls the data from Oracle Object Storage and loads it directly into the target using the dbms_cloud.copy_data() function.

For Example

BEGIN
  dbms_cloud.copy_data(
    schema_name => 'STAR',
    table_name =>'PERSON',
    credential_name =>'ODI',
    file_uri_list =>'https://swiftobjectstorage.us-ashburn-1.oraclecloud.com/v1/dwcsdemo/odi_bojana/person_no_header.csv',
    format => json_object(
    'type' VALUE 'CSV',
    'skipheaders' VALUE '0',
    'dateformat' VALUE 'AUTO')
 );
END;

The KM optionally creates the credential object to connect to Oracle Object Storage, but can also re-use an existing one.

BEGIN
  dbms_cloud.create_credential(
    credential_name => 'ODI_FLEX',
    username => 'tenant15',
    password => 'xxxxxxxx'
  );
END;

KM Options

This KM has the following options:

  • CREDENTIAL_NAME — It provides the credential name used to connect to Oracle Object Storage. The default value is ODI.

  • CREATE_CREDENTIAL — It creates new credentials. If set to False, ODI reuses the existing credentials.

  • GENERATE_FIELD_LIST — If this KM option is set to False, the field_list clause is skipped and the default settings of the ORACLE_LOADER access driver are applied.

    Note:

    Always generate the field_list clause for the Fixed file format, as it is required.
  • TEMP_OBJECT_STORAGE_SCHEMA — It specifies the name of the logical schema defining the location of the temporary file that will be stored in the Object Storage staging area. This must be an Oracle Object Storage technology logical schema.

  • CLEANUP_TEMPORARY_OBJECTS — Set this property to True if you want temporary objects to be automatically cleaned up.

  • ADD_COMPRESSION — It compresses data before loading. Set this property to True if you want to compress source data before loading into Oracle Object Storage. Additional KM options COMPRESSION_TYPE and KEEP_SOURCE_FILES define compression preferences.

  • COMPRESSION_TYPE — It allows you to configure the required compression type. Select the type of compression you want to apply to source data before loading into Oracle Object Storage.

  • KEEP_SOURCE_FILES — Use this KM option to retain the original source files after compression. Set this property to True (the default) if you wish to compress the target files and retain the original files.

    Note:

    gzip supports the KEEP_SOURCE_FILES option starting from version 1.6 only.

Formatting

  • DELIMITED_FILE_FORMAT — It specifies the delimited file format, which can be CSV (the default) or a common delimited format known to the ORACLE_LOADER access driver. You can use this KM option only if the source datastore File Format property is set to Delimited.

  • COMPRESSION — It specifies the compression method of the source file. There are three possible compression types, or AUTO. An empty value implies no compression, and AUTO implies that the compression type is auto-detected.

  • DATE_FORMAT — It sets a specific date format. The default format option AUTO searches for the following formats:

    J
    MM-DD-YYYYBC
    MM-DD-YYYY
    YYYYMMDD HHMISS
    YYMMDD HHMISS
    YYYY.DDD
    YYYY-MM-DD
  • REJECT_LIMIT — An error is reported after the specified number of rows are rejected. The default value is zero.

  • CONVERSION_ERRORS — It specifies how conversion errors are processed. If a row raises a conversion error, the related columns are stored as null or the row is rejected.

  • TRIM_SPACES — It trims the leading and trailing spaces of the fields. If set to True, the specified spaces are trimmed.

  • IGNORE_BLANK_LINES — If set to True, blank lines are ignored without throwing any error.

  • IGNORE_MISSING_COLUMNS — If there are more columns in the field_list than in the source file, the extra columns are stored as null.

  • TRUNCATE_COLUMNS — If the data in a file is too long for a field, this option truncates the value of the field rather than rejecting the row.

Advanced

  • ADD_FORMAT_PROPERTIES — This option allows adding custom format properties.

    Use the following syntax: '<prop1>' VALUE '<value1>', '<prop2>' VALUE '<value2>' ...

  • OVERWRITE_FIELD_LIST — This option lets you redefine the source file field definitions when ODI does not have enough information about your input data. The details that you enter here are used as the field_list parameter of the dbms_cloud.create_external_table function call.

    For more details, refer to the DBMS_CLOUD package documentation.

Target

  • CREATE_TARG_TABLE — It creates the target table. Set this option to True if you want to create the target table before loading.

  • TRUNCATE_TARG_TABLE — It truncates the target table. Set this KM option to True if you want to truncate the target table before loading it.

  • DELETE_TARG_TABLE — It deletes data from the target table. Set this KM option to True if you want to delete data from the target table before loading.

Cleanup

  • CLEANUP_CREDENTIAL — Set this property to True if you want the credential object to be automatically cleaned up at the end of every execution. Cleanup happens only if the CREATE_CREDENTIAL option is also set to True.

LKM Oracle to ADWC Datapump

This KM loads data from on-premises Oracle databases to Oracle ADWC. You can use it in combination with an Oracle or generic SQL IKM.

Working

First, the source table is exported into a local Datapump dump file. The export file is created in a local directory specified by the TEMP_FILE_SCHEMA logical schema. A unique export file name is generated as odi_<ODI_session_id>_<mapping_node>.dmp. The export log name is odi_<ODI_session_id>_<mapping_node>_exp.log. The export file is then uploaded to the Oracle Object Storage bucket specified by the TEMP_OBJECT_STORAGE_SCHEMA logical schema. It is then imported into the ADWC database into a temporary staging table C$_alias. The import log name is odi_<ODI_session_id>_<mapping_node>_imp.log. Finally, the mapping IKM integrates the staging table into the target. Transformations on the target execution unit are supported.

Note:

  • The ODI agent has to run on the same host as the source database to be able to access the dump/log files.

  • Currently the KM does not support multi-file loading from Oracle Object Storage. Hence, you cannot set the FILESIZE and PARALLEL options. The dbms_datapump package is used for export and import.
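The export below writes through the ODI_DIR database directory object. If you were to create such a directory manually on the source database, it would look like this sketch; the path and grantee are hypothetical placeholders:

```sql
-- Hypothetical path and user: point the directory at the location backing
-- the TEMP_FILE_SCHEMA logical schema, and grant access to the ODI user.
CREATE OR REPLACE DIRECTORY ODI_DIR AS '/u01/odi/tmp';
GRANT READ, WRITE ON DIRECTORY ODI_DIR TO odi_user;
```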

Example of Export code:
declare
  h1 number;
  j_status varchar2(200);
begin
  dbms_output.enable();
  h1 := dbms_datapump.open('EXPORT','SCHEMA',NULL,'ODI_EXPORT','LATEST');
  dbms_datapump.add_file(
    handle => h1,
    filename => 'odi_de73e691-2ef5-4fff-8d98-697cf8e123cf_PER_AP.dmp',
    directory => 'ODI_DIR',
    filetype => dbms_datapump.KU$_FILE_TYPE_DUMP_FILE,
    reusefile => 1);
  dbms_datapump.add_file(
    handle => h1,
    filename => 'odi_de73e691-2ef5-4fff-8d98-697cf8e123cf_PER_AP_exp.log',
    directory => 'ODI_DIR',
    filetype => dbms_datapump.KU$_FILE_TYPE_LOG_FILE);
  dbms_datapump.metadata_filter( h1, 'SCHEMA_LIST', q'|'UT_TD_D_1'|' );
  dbms_datapump.metadata_filter( h1, 'NAME_LIST', q'|'PERSON_SRC'|', 'TABLE' );
  dbms_datapump.metadata_filter( h1, 'INCLUDE_PATH_LIST', q'|'TABLE'|' );
  dbms_datapump.set_parameter(h1, 'COMPRESSION', 'METADATA_ONLY');
  dbms_datapump.set_parameter(h1, 'COMPRESSION_ALGORITHM', 'BASIC');
  dbms_datapump.start_job(h1);
  dbms_datapump.wait_for_job(h1, j_status);
  exception
    when others then
    ....
end;
Example of Import code:
declare
  h1 number;
  j_status varchar2(200);
begin
  h1 := dbms_datapump.open('IMPORT','FULL',NULL,'ODI_IMPORT','LATEST');
  dbms_datapump.add_file(
    handle => h1,
    filename => 'https://swiftobjectstorage.us-phoenix-1.oraclecloud.com/v1/dwcsdemo/odi_bojana/odi_de73e691-2ef5-4fff-8d98-697cf8e123cf_PER_AP.dmp',
    directory => 'ODI',
    filetype => dbms_datapump.KU$_FILE_TYPE_URIDUMP_FILE);
  dbms_datapump.add_file(
    handle => h1,
    filename => 'odi_de73e691-2ef5-4fff-8d98-697cf8e123cf_PER_AP_imp.log',
    directory => 'DATA_PUMP_DIR',
    filetype => dbms_datapump.KU$_FILE_TYPE_LOG_FILE);
  dbms_datapump.metadata_remap( h1, 'REMAP_SCHEMA', 'UT_TD_D_1', 'STAR');
  dbms_datapump.metadata_remap( h1, 'REMAP_TABLE', 'PERSON_SRC', 'C$_0PER');
  dbms_datapump.set_parameter(h1,'TABLE_EXISTS_ACTION','SKIP');
  dbms_datapump.set_parameter(h1,'PARTITION_OPTIONS','MERGE');
  dbms_datapump.metadata_transform( h1, 'SEGMENT_ATTRIBUTES', 0);  
  dbms_datapump.start_job(h1);
  dbms_datapump.wait_for_job(h1, j_status);
  dbms_cloud.put_object(
    credential_name => 'ODI',
    object_uri => 'https://swiftobjectstorage.us-phoenix-1.oraclecloud.com/v1/dwcsdemo/odi_bojana/odi_de73e691-2ef5-4fff-8d98-697cf8e123cf_PER_AP_imp.log',
    directory_name  => 'DATA_PUMP_DIR',
    file_name => 'odi_de73e691-2ef5-4fff-8d98-697cf8e123cf_PER_AP_imp.log');
  exception
    when others then
    ....
end;

KM Options

This KM has the following options:

Source Export

  • TEMP_FILE_SCHEMA — Specifies the name of the logical schema defining the location of the temporary export file that is stored before uploading to Oracle Object Storage. This must be a File technology logical schema. The Oracle directory is created based on this location. The temporary file is stored on a file system accessible by the source database.

  • ORACLE_DIRECTORY_NAME — Specifies the name of the Oracle directory used by Data Pump to store the export file and logs on the source. The default value is ODI_DIR.

    Note:

    Use only uppercase for directory names.
  • COMPRESSION — Specifies which data to compress before writing to the dump file set. You can specify any of the following compression options:

    • ALL — Enables compression for the entire export operation. This type of compression requires the Oracle Advanced Compression option.

    • DATA_ONLY — All data is written to the dump file in compressed format. This type of compression requires the Oracle Advanced Compression option.

    • METADATA_ONLY — All metadata is written to the dump file in compressed format. This is the default value for COMPRESSION.

    • NONE — Disables compression for the entire export operation.

  • COMPRESSION_ALGORITHM — Specifies the compression algorithm to be used when compressing dump file data. You can specify one of the following compression algorithm options:

    • BASIC — Offers a good combination of compression ratios and speed; the algorithm used is the same as in previous versions of Oracle Datapump.

    • LOW — Least impact on export throughput and suited for environments where CPU resources are the limiting factor.

    • MEDIUM — Recommended for most environments. Like BASIC, it provides a good combination of compression ratio and speed, but it uses a different algorithm.

    • HIGH — Best suited for situations in which dump files are copied over slower networks where the limiting factor is the network speed.

  • CREATE_VIEW_ON_SOURCE — Creates a view on the source. Set this property to False only if you have a single source with no transformations or a simple filter.

  • CREATE_VIEW_ON_SOURCE_TEMP_SCHEMA — Creates the view in the source work schema. Set this property to True if you want the view created in the source Work Schema.

    Note:

    • The Work Schema user must have SELECT privileges on all source tables participating in the view's SELECT statement for the view to be valid. If set to False, the view is created in the schema of the currently connected user.

    • A SYSTEM schema cannot be remapped, so if this option is set to False you must not be connected as SYSTEM. This option is provided because of the restrictions in both cases.

  • USE_GROUP_PARTITION_TABLE_DATA — Uses the DATA_OPTIONS=GROUP_PARTITION_TABLE_DATA parameter on export. The GROUP_PARTITION_TABLE_DATA value for the DATA_OPTIONS parameter is only available in Oracle 12.2 and later.

    Note:

    Set this option to False for earlier Oracle versions.
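The ORACLE_DIRECTORY_NAME option above assumes an Oracle directory already exists on the source database. As a minimal sketch — the file system path and the grantee ODI_USER below are hypothetical examples, not values from this KM:

CREATE OR REPLACE DIRECTORY ODI_DIR AS '/u01/odi/export';  -- uppercase name, illustrative path
GRANT READ, WRITE ON DIRECTORY ODI_DIR TO ODI_USER;        -- exporting user needs read/write access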

Target Import

  • CREDENTIAL_NAME — Specifies the credential name used to connect to Object Storage. The default value is ODI.

  • CREATE_CREDENTIAL — Creates a new credential. If set to False, ODI reuses the existing credential.

  • TEMP_OBJECT_STORAGE_SCHEMA — Denotes the logical schema for the temporary Oracle Object Storage location. Specify the name of the logical schema defining the location of the temporary export file staged in Oracle Object Storage. This must be an Object Storage technology logical schema.

Cleanup

  • CLEANUP_TEMPORARY_OBJECTS — Set this property to True to clean up the temporary objects automatically.

  • CLEANUP_CREDENTIAL — Set this property to True to clean up the credential object automatically at the end of every execution. Cleanup happens only if the CREATE_CREDENTIAL option is also set to True.
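When credential cleanup runs, the credential is removed through the DBMS_CLOUD package. A minimal sketch of the equivalent manual cleanup, assuming the default credential name ODI:

BEGIN
  -- Drop the credential previously created by CREATE_CREDENTIAL
  dbms_cloud.drop_credential(credential_name => 'ODI');
END;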

Loading Oracle Object Storage files into ADWC

Note:

This section applies only to Data Integration Platform Cloud.

You can use the following knowledge modules for loading Oracle Object Storage files into Oracle ADWC:

When you load data from Oracle Object Storage to ADWC, the target of the mapping must use the Oracle technology, as there is no specific technology for ADWC. An Object Storage bucket is represented by an Object Storage physical schema. Some properties, such as Heading and Delimiter, are retrieved from the source data store; other properties, such as the user name and password, are retrieved from the Object Storage data server. If you use transform components, they must be moved to the target (Oracle) execution unit. Transformations on the source are not supported.

LKM Object Storage to ADWC Copy

This KM loads data from Oracle Cloud Object Storage to Oracle ADWC. It uses the dbms_cloud.copy_data() function to load data into a staging table. LKM Object Storage to ADWC Copy is assigned to an AP node and is used to move data from an Oracle Object Storage file to an ADWC table. You can use it in combination with an Oracle or generic SQL IKM.

For example:

LKM Object Storage to ADWC Copy creates a temporary staging table and loads it from Object Storage using dbms_cloud.copy_data():

BEGIN
  dbms_cloud.copy_data(
    schema_name => 'ODI',
    table_name =>'C$_0PER',
    credential_name =>'ODI_FLEX',
    file_uri_list =>'https://swiftobjectstorage.us-phoenix-1.oraclecloud.com/v1/dwcsprod/bucket_tenant15/person.csv',
    field_list => 'PID, PNAME',
    format => json_object('type' VALUE 'CSV', 'skipheaders' VALUE '1')
 );
END;

It also optionally creates the credential object, or re-uses an existing one.

BEGIN
  dbms_cloud.create_credential(
    credential_name => 'ODI_FLEX',
    username => 'tenant15',
    password => 'xxxxxxxx'
  );
END;

KM Options

This KM has the following options:

  • CREDENTIAL_NAME — Specifies the credential name used to connect to Object Storage. The default value is ODI.

  • CREATE_CREDENTIAL — Creates a new credential. If set to False, ODI reuses the existing credential.

  • GENERATE_FIELD_LIST — If this KM option is set to False, the field_list clause is skipped and the default settings of the ORACLE_LOADER access driver are applied.

    Note:

    The field_list clause must always be generated for the Fixed file format, as it is required.
  • DELIMITED_FILE_FORMAT — Specifies the delimited file format. It can be CSV (default) or a common delimited format known to the ORACLE_LOADER access driver. You can use this KM option only if the source datastore File Format property is set to Delimited.

  • COMPRESSION — Specifies the compression method of the source file. It can be empty or AUTO: an empty value means no compression, and AUTO means the compression type is auto-detected.

  • DATE_FORMAT — Sets a specific date format. The default format option AUTO searches for the following formats:

    J
    MM-DD-YYYYBC
    MM-DD-YYYY
    YYYYMMDD HHMISS
    YYMMDD HHMISS
    YYYY.DDD
    YYYY-MM-DD
  • REJECT_LIMIT — An error message is displayed after the specified number of rows are rejected. The default value is zero.

  • CONVERSION_ERRORS — Specifies how conversion errors are processed. If a row fails with a conversion error, the related columns are stored as null or the row is rejected.

  • TRIM_SPACES — Trims the leading and trailing spaces of the fields when set to True.

  • IGNORE_BLANK_LINES — If set to True, blank lines are ignored without throwing any error.

  • IGNORE_MISSING_COLUMNS — If there are more columns in the field_list than in the source files, the extra columns are stored as null.

  • TRUNCATE_COLUMNS — If the data in a file is too long for a field, then this option will truncate the value of the field rather than rejecting the row.

  • ADD_FORMAT_PROPERTIES — This option allows adding custom format properties.

    Use the following syntax: '<prop1>' VALUE '<value1>', '<prop2>' VALUE '<value2>' ...

  • OVERWRITE_FIELD_LIST — This option lets you redefine the source file field definitions where ODI does not have enough information about your input data. The value entered here is used as the field_list parameter of the dbms_cloud.create_external_table function call.

    For more information, refer to the DBMS_CLOUD package documentation.

  • CLEANUP_TEMPORARY_OBJECTS — Set this property to True to clean up temporary objects automatically.

  • CLEANUP_CREDENTIAL — Set this property to True to clean up the credential object automatically at the end of every execution. Cleanup happens only if the CREATE_CREDENTIAL option is also set to True.
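Several of the options above are folded into the format JSON passed to DBMS_CLOUD. As a hedged sketch of how DATE_FORMAT, REJECT_LIMIT, and a custom property supplied through ADD_FORMAT_PROPERTIES might combine — the specific values below are illustrative, not defaults:

  format => json_object(
    'type'        VALUE 'CSV',
    'skipheaders' VALUE '1',
    'dateformat'  VALUE 'YYYY-MM-DD',  -- from DATE_FORMAT
    'rejectlimit' VALUE '10',          -- from REJECT_LIMIT
    'trimspaces'  VALUE 'lrtrim'       -- custom property via ADD_FORMAT_PROPERTIES
  )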

LKM Object Storage to ADWC Copy Direct

This KM loads data from Oracle Object Storage to Oracle ADWC using the dbms_cloud.copy_data() function. This LKM can be used standalone, as no IKM is needed: it loads data directly into the target table. Source attributes must match target column names; either use data entities with matching attributes or set the option GENERATE_FIELD_LIST = false. All target columns are loaded whether or not they are mapped.

Note:

LKM Object Storage to ADWC Copy Direct does not support any transformations.

The LKM Object Storage to ADWC Copy Direct is assigned to an AP node. It is used to move files/objects from Oracle Object Storage to an ADWC table.

For example:

LKM Object Storage to ADWC Copy Direct pulls the data from Oracle Object Storage and loads it directly into the target using the dbms_cloud.copy_data() function.

BEGIN
  dbms_cloud.copy_data(
    schema_name => 'ODI',
    table_name =>'PERSON',
    credential_name =>'ODI_FLEX',
    file_uri_list =>'https://swiftobjectstorage.us-phoenix-1.oraclecloud.com/v1/dwcsprod/bucket_tenant15/person.csv',
    field_list => 'PID, PNAME',    
    format => json_object('type' VALUE 'CSV', 'skipheaders' VALUE '1')
 );
END;

It also optionally creates the credential object, or re-uses an existing one.

BEGIN
  dbms_cloud.create_credential(
    credential_name => 'ODI_FLEX',
    username => 'tenant15',
    password => 'xxxxxxxx'
  );
END;

KM Options

This KM has the following options:

  • CREATE_TARG_TABLE — Creates the target table. Set this option to True to create the target table before loading.

  • CREDENTIAL_NAME — Specifies the credential name used to connect to Oracle Object Storage. The default value is ODI.

  • CREATE_CREDENTIAL — Creates a new credential. If set to False, ODI reuses the existing credential.

  • TRUNCATE_TARG_TABLE — Truncates the target table. Set this KM option to True to truncate the target table before loading.

  • DELETE_TARG_TABLE — Deletes data from the target table. Set this KM option to True to delete data from the target table before loading.

  • GENERATE_FIELD_LIST — If this KM option is set to False, the field_list clause is skipped and the default settings of the ORACLE_LOADER access driver are applied.

    Note:

    The field_list clause must always be generated for the Fixed file format, as it is required.
  • DELIMITED_FILE_FORMAT — Specifies the delimited file format. It can be CSV (default) or a common delimited format known to the ORACLE_LOADER access driver. You can use this KM option only if the source datastore File Format property is set to Delimited.

  • COMPRESSION — Specifies the compression method of the source file. It can be empty or AUTO: an empty value means no compression, and AUTO means the compression type is auto-detected.

  • DATE_FORMAT — Sets a specific date format. The default format option AUTO searches for the following formats:

    J
    MM-DD-YYYYBC
    MM-DD-YYYY
    YYYYMMDD HHMISS
    YYMMDD HHMISS
    YYYY.DDD
    YYYY-MM-DD
  • REJECT_LIMIT — An error message is displayed after the specified number of rows are rejected. The default value is zero.

  • CONVERSION_ERRORS — Specifies how conversion errors are processed. If a row fails with a conversion error, the related columns are stored as null or the row is rejected.

  • TRIM_SPACES — Trims the leading and trailing spaces of the fields when set to True.

  • IGNORE_BLANK_LINES — If set to True, blank lines are ignored without throwing any error.

  • IGNORE_MISSING_COLUMNS — If there are more columns in the field_list than in the source files, the extra columns are stored as null.

  • TRUNCATE_COLUMNS — If the data in a file is too long for a field, then this option will truncate the value of the field rather than rejecting the row.

  • ADD_FORMAT_PROPERTIES — This option allows adding custom format properties.

    Use the following syntax: '<prop1>' VALUE '<value1>', '<prop2>' VALUE '<value2>' ...

  • OVERWRITE_FIELD_LIST — This option lets you redefine the source file field definitions where ODI does not have enough information about your input data. The value entered here is used as the field_list parameter of the dbms_cloud.create_external_table function call.

    For more information, refer to the DBMS_CLOUD package documentation.

  • CLEANUP_TEMPORARY_OBJECTS — Set this property to True to clean up temporary objects automatically.

  • CLEANUP_CREDENTIAL — Set this property to True to clean up the credential object automatically at the end of every execution. Cleanup happens only if the CREATE_CREDENTIAL option is also set to True.

LKM Object Storage to ADWC External Table

This KM loads data from Oracle Object Storage to Oracle ADWC using the External Table method. You can use this LKM in combination with an Oracle or generic SQL IKM.

The LKM is assigned to an AP node, so that it can move an Oracle Object Storage file to an ADWC table.

For example:

LKM Object Storage to ADWC External Table creates a temporary staging external table to pull the data from Oracle Object Storage.

BEGIN
  dbms_cloud.create_external_table(
    table_name =>'C$_0PER_EXT',
    credential_name =>'ODI_FLEX',
    file_uri_list =>'https://swiftobjectstorage.us-phoenix-1.oraclecloud.com/v1/dwcsprod/bucket_tenant15/person.csv',
    column_list => 'PID NUMBER(2,0),
PNAME VARCHAR2(20)',
    field_list => 'PID, PNAME',
    format => json_object('type' VALUE 'CSV', 'skipheaders' VALUE '1')
 );
END;
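Once created, the temporary staging external table can be queried like any other table; for instance, to inspect the staged rows from the example above:

SELECT PID, PNAME
FROM   C$_0PER_EXT;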

It also optionally creates the credential object, or re-uses an existing one.

BEGIN
  dbms_cloud.create_credential(
    credential_name => 'ODI_FLEX',
    username => 'tenant15',
    password => 'xxxxxxxx'
  );
END;

KM Options

This KM has the following options:

  • CREDENTIAL_NAME — Specifies the credential name used to connect to Oracle Object Storage. The default value is ODI.

  • CREATE_CREDENTIAL — Creates a new credential. If set to False, ODI reuses the existing credential.

  • GENERATE_FIELD_LIST — If this KM option is set to False, the field_list clause is skipped and the default settings of the ORACLE_LOADER access driver are applied.

    Note:

    The field_list clause must always be generated for the Fixed file format, as it is required.
  • DELIMITED_FILE_FORMAT — Specifies the delimited file format. It can be CSV (default) or a common delimited format known to the ORACLE_LOADER access driver. You can use this KM option only if the source datastore File Format property is set to Delimited.

  • COMPRESSION — Specifies the compression method of the source file. It can be empty or AUTO: an empty value means no compression, and AUTO means the compression type is auto-detected.

  • DATE_FORMAT — Sets a specific date format. The default format option AUTO searches for the following formats:

    J
    MM-DD-YYYYBC
    MM-DD-YYYY
    YYYYMMDD HHMISS
    YYMMDD HHMISS
    YYYY.DDD
    YYYY-MM-DD
  • REJECT_LIMIT — An error message is displayed after the specified number of rows are rejected. The default value is zero.

  • CONVERSION_ERRORS — Specifies how conversion errors are processed. If a row fails with a conversion error, the related columns are stored as null or the row is rejected.

  • TRIM_SPACES — Trims the leading and trailing spaces of the fields when set to True.

  • IGNORE_BLANK_LINES — If set to True, blank lines are ignored without throwing any error.

  • IGNORE_MISSING_COLUMNS — If there are more columns in the field_list than in the source files, the extra columns are stored as null.

  • TRUNCATE_COLUMNS — If the data in a file is too long for a field, then this option truncates the value of the field rather than rejecting the row.

  • ADD_FORMAT_PROPERTIES — This option allows adding custom format properties.

    Use the following syntax: '<prop1>' VALUE '<value1>', '<prop2>' VALUE '<value2>' ...

  • OVERWRITE_FIELD_LIST — This option lets you redefine the source file field definitions where ODI does not have enough information about your input data. The value entered here is used as the field_list parameter of the dbms_cloud.create_external_table function call.

    For more information, refer to the DBMS_CLOUD package documentation.

  • CLEANUP_TEMPORARY_OBJECTS — Set this property to True to clean up temporary objects automatically.

  • CLEANUP_CREDENTIAL — Set this property to True to clean up the credential object automatically at the end of every execution. Cleanup happens only if the CREATE_CREDENTIAL option is also set to True.

Extracting data

In ODI, you can extract data from ADWC via JDBC using Oracle KMs.

You can design a mapping that uses the Data Stores for an ADWC Schema as the source of the mapping, where you can extract data via JDBC using the following Oracle KMs:

  • Extract data from ADWC and load to on-premise Oracle Table through

    • LKM SQL to Oracle (Built-In)

    • IKM Oracle Insert

  • Extract data from ADWC and load to on-premise File through

    • LKM SQL Multi-Connect

    • IKM SQL to File Append

Best Practices for Working with ADWC

This section provides the best practices for working with Oracle Data Integrator connected to ADWC technology.

Caching Oracle Sequences in ADWC

In order to maintain referential integrity between dimension and fact tables, warehouse dimensions require unique surrogate keys. These surrogate keys are automatically populated when new records are loaded into the dimensions. It is a common practice to use Oracle sequences to populate the surrogate keys of warehouse dimensions. During the initial upload of a large warehouse dimension, set the CACHE SIZE of the sequence used by the dimension to a large value. This significantly improves the performance of the initial upload of the large dimension. You can then reduce the CACHE SIZE for incremental-update operations.

The following example shows how to create an Oracle sequence for a large dimension. The cache size for this sequence is 50,000:

CREATE SEQUENCE "SEQ_W_CUSTOMER_D" MINVALUE 1000 MAXVALUE 9999999999999999999999999999 INCREMENT BY 1 START WITH 1000 CACHE 50000 NOORDER NOCYCLE;
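After the initial upload completes, the cache can be reduced for incremental updates. A minimal sketch — the value 1000 is an illustrative choice, not a recommendation from this KM:

ALTER SEQUENCE "SEQ_W_CUSTOMER_D" CACHE 1000;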


Footnote Legend

Footnote 1: In ODI, you can directly upload connection details from a credential file (cwallet.sso) or a password-protected wallet file (ewallet.p12).