This chapter describes how to use the knowledge modules in Oracle Data Integrator (ODI) Application Adapter for Hadoop.
It contains the following sections:
See Also:
Oracle Big Data Connectors User's Guide

Apache Hadoop is designed to handle and process data from sources that are typically nonrelational, and data volumes that are beyond what relational databases can handle.
Oracle Data Integrator (ODI) Application Adapter for Hadoop enables data integration developers to integrate and transform data easily within Hadoop using Oracle Data Integrator. Employing familiar and easy-to-use tools and preconfigured knowledge modules (KMs), the application adapter provides the following capabilities:
Loading data into Hadoop from the local file system, HDFS, HBase (using Hive), and SQL database (using SQOOP)
Performing validation and transformation of data within Hadoop
Loading processed data from Hadoop to an Oracle database, an SQL database (using SQOOP), or HBase for further processing and generating reports
Knowledge modules (KMs) contain the information needed by Oracle Data Integrator to perform a specific set of tasks against a specific technology. An application adapter is a group of knowledge modules. Thus, Oracle Data Integrator Application Adapter for Hadoop is a group of knowledge modules for accessing data stored in Hadoop.
Typical processing in Hadoop includes data validation and transformations that are programmed as MapReduce jobs. Designing and implementing a MapReduce job requires expert programming knowledge. However, when you use Oracle Data Integrator and Oracle Data Integrator Application Adapter for Hadoop, you do not need to write MapReduce jobs. Oracle Data Integrator uses Apache Hive and the Hive Query Language (HiveQL), a SQL-like language for implementing MapReduce jobs.
When you implement a big data processing scenario, the first step is to load the data into Hadoop. The source data typically resides in the local file system, HDFS, HBase, or Hive tables.
After the data is loaded, you can validate and transform it by using HiveQL like you use SQL. You can perform data validation (such as checking for NULLS and primary keys), and transformations (such as filtering, aggregations, set operations, and derived tables). You can also include customized procedural snippets (scripts) for processing the data.
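For example, a NULL check and a filtered aggregation of this kind can be run directly from the Hive command line. This is a minimal sketch; the table and column names (web_logs, visitor_id, status) are hypothetical:

$ hive -e "SELECT COUNT(*) FROM web_logs WHERE visitor_id IS NULL"
$ hive -e "SELECT status, COUNT(*) FROM web_logs WHERE status >= 400 GROUP BY status"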
When the data has been aggregated, condensed, or processed into a smaller data set, you can load it into an Oracle database, other relational database, or HBase for further processing and analysis. Oracle Loader for Hadoop is recommended for optimal loading into an Oracle database.
Oracle Data Integrator provides the knowledge modules (KMs) described in Table 3-1 for use with Hadoop.
Table 3-1 Oracle Data Integrator Application Adapter for Hadoop Knowledge Modules
KM Name | Description | Source | Target
---|---|---|---
IKM File to Hive (Load Data) | Loads data from local and HDFS files into Hive tables. It provides options for better performance through Hive partitioning and fewer data movements. | File system | Hive
IKM Hive Control Append | Integrates data into a Hive target table in truncate/insert (append) mode. Data can be controlled (validated). Invalid data is isolated in an error table and can be recycled. | Hive | Hive
IKM Hive Transform | Integrates data into a Hive target table after the data has been transformed by a customized script such as Perl or Python. | Hive | Hive
IKM File-Hive to Oracle (OLH-OSCH) | Integrates data from an HDFS file or Hive source into an Oracle database target using Oracle Loader for Hadoop, Oracle SQL Connector for HDFS, or both. | File system or Hive | Oracle Database
IKM File-Hive to SQL (SQOOP) | Integrates data from an HDFS file or Hive data source into an SQL database target using SQOOP. SQOOP uses parallel JDBC connections for loading data. | File system or Hive | SQL Database
IKM SQL to Hive-HBase-File (SQOOP) | Integrates data from an SQL database into a Hive table, HBase table, or HDFS file using SQOOP. SQOOP uses parallel JDBC connections for unloading data. | SQL Database | Hive, HBase, or File system
IKM Hive to HBase Incremental Update (HBase-SerDe) | Integrates data from a Hive table into an HBase table. It supports inserting new rows and updating existing rows. | Hive | HBase
LKM HBase to Hive (HBase-SerDe) | Loads data from an HBase table into a Hive table. It provides read-only access to the source HBase table from Hive, defining a temporary load table on Hive that represents all the relevant columns of the HBase source table. | HBase | Hive
CKM Hive | Validates data against constraints. | NA | Hive
RKM Hive | Reverse engineers Hive tables. | Hive metadata | NA
RKM HBase | Reverse engineers HBase tables. | HBase metadata | NA
Installation requirements for Oracle Data Integrator (ODI) Application Adapter for Hadoop are provided in these topics:
To use Oracle Data Integrator Application Adapter for Hadoop, you must first have Oracle Data Integrator, which is licensed separately from Oracle Big Data Connectors. You can download ODI from the Oracle website at
http://www.oracle.com/technetwork/middleware/data-integrator/downloads/index.html
Oracle Data Integrator Application Adapter for Hadoop requires Oracle Data Integrator version 11.1.1.6.0 or later.
Before performing any installation, read the system requirements and certification documentation to ensure that your environment meets the minimum installation requirements for the products that you are installing.
The list of supported platforms and versions is available on Oracle Technology Network:
http://www.oracle.com/technetwork/middleware/data-integrator/overview/index.html
The list of supported technologies and versions is available on Oracle Technology Network:
Oracle Data Integrator Application Adapter for Hadoop is available in the ODI_Home/odi/sdk/xml-reference directory.
To set up the topology in Oracle Data Integrator, you need to identify the data server and the physical and logical schemas that store the file system, Hive, and HBase information.
This section contains the following topics:
Setting Up the Oracle Data Integrator Agent to Execute Hadoop Jobs
Configuring Oracle Data Integrator Studio for Executing Hadoop Jobs on the Local Agent
Note:
Many of the environment variables described in the following sections are already configured for Oracle Big Data Appliance. See the configuration script at /opt/oracle/odiagent-version/agent_standalone/odi/agent/bin/HadoopEnvSetup.sh
In the Hadoop context, there is a distinction between files in Hadoop Distributed File System (HDFS) and local files (outside of HDFS).
To define a data source:
Create a Data Server object under File technology.
Create a Physical Schema object for every directory to be accessed.
Create a Logical Schema object for every directory to be accessed.
Create a Model for every Logical Schema.
Create one or more data stores for each different type of file and wildcard name pattern.
For HDFS files, create a Data Server object under File technology by entering the HDFS name node in the field JDBC URL and leave the JDBC Driver name empty. For example:
hdfs://bda1node01.example.com:8020
Test Connection is not supported for this Data Server configuration.
Note:
No dedicated technology is defined for HDFS files.

The following steps in Oracle Data Integrator are required for connecting to a Hive system. Oracle Data Integrator connects to Hive by using JDBC.
The Hive technology must be included in the standard Oracle Data Integrator technologies. If it is not, then import the technology in INSERT_UPDATE mode from the xml-reference directory.
You must add all Hive-specific flex fields.
To set up a Hive data source:
Create a Data Server object under Hive technology.
Set the following locations under JDBC:
JDBC Driver: weblogic.jdbc.hive.HiveDriver
JDBC URL: jdbc:weblogic:hive://<host>:<port>[; property=value[;...]]
For example, jdbc:weblogic:hive://localhost:10000;DatabaseName=default;User=default;Password=default
Note:
Usually the user ID and password are provided in the respective fields of an ODI Data Server. If a Hive user is defined without a password, "password=default" is necessary as part of the URL, and the password field of the Data Server should be left blank.

Set the following under Flexfields:
Hive Metastore URIs: for example, thrift://BDA:10000
Ensure that the Hive server is up and running.
Test the connection to the Data Server. (A command-line connectivity check is sketched after this procedure.)
Create a Physical Schema. Enter the name of the Hive schema in both schema fields of the Physical Schema definition.
Create a Logical Schema object.
Import RKM Hive into Global Objects or a project.
Create a new model for Hive Technology pointing to the logical schema.
Perform a custom reverse-engineering operation using RKM Hive.
At the end of this process, the Hive Data Model contains all Hive tables with their columns, partitioning, and clustering details stored as flex field values.
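Before building mappings, it can also be useful to verify Hive connectivity outside of ODI. The following sketch uses the beeline client shipped with Hive (which uses the Apache Hive JDBC driver rather than the weblogic.jdbc.hive.HiveDriver configured above); the host, port, and user are placeholders:

$ beeline -u "jdbc:hive2://localhost:10000/default" -n default

At the beeline prompt, run SHOW TABLES; to confirm that the server answers and the schema is accessible.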
The following steps in Oracle Data Integrator are required for connecting to an HBase system.
The HBase technology must be included in the standard Oracle Data Integrator technologies. If it is not, then import the technology in INSERT_UPDATE mode from the xml-reference directory.
You must add all HBase-specific flex fields.
To set up an HBase data source:
Create a Data Server object under HBase technology.
JDBC Driver and URL are not available for data servers of this technology.
Set the following under Flexfields:
HBase Quorum: Quorum of the HBase installation. For example, localhost:2181
Ensure that the HBase server is up and running.
Note:
You cannot test the connection to the HBase Data Server. (A command-line check is sketched after this procedure.)

Create a Physical Schema.
Create a Logical Schema object.
Import RKM HBase into Global Objects or a project.
Create a new model for HBase Technology pointing to the logical schema.
Perform a custom reverse-engineering operation using RKM HBase.
At the end of this process, the HBase Data Model contains all the HBase tables with their columns and data types.
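Because the connection cannot be tested from ODI, it can be worth verifying the quorum and the HBase servers from the command line before reverse engineering. A minimal sketch, assuming a ZooKeeper node on localhost:2181:

$ echo ruok | nc localhost 2181
$ echo "list" | hbase shell

ZooKeeper replies "imok" if it is healthy, and the HBase shell lists the available tables if the Master and Region servers are up.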
To run the Oracle Data Integrator agent on a Hadoop cluster that is protected by Kerberos authentication, you must perform additional configuration steps.
To use a Kerberos-secured cluster:
Log in to node04 of Oracle Big Data Appliance, where the Oracle Data Integrator agent runs.
Generate a new Kerberos ticket for the oracle user. Use the following command, replacing realm with the actual Kerberos realm name.
$ kinit oracle@realm
Set the environment variables by using the following commands. Substitute the appropriate values for your appliance:
$ export KRB5CCNAME=Kerberos-ticket-cache-directory
$ export KRB5_CONFIG=Kerberos-configuration-file
$ export HADOOP_OPTS="$HADOOP_OPTS -Djavax.xml.parsers.DocumentBuilderFactory=com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderFactoryImpl -Djava.security.krb5.conf=Kerberos-configuration-file"
In this example, the configuration files are named krb5* and are located in /tmp/oracle_krb/:
$ export KRB5CCNAME=/tmp/oracle_krb/krb5cc_1000
$ export KRB5_CONFIG=/tmp/oracle_krb/krb5.conf
$ export HADOOP_OPTS="$HADOOP_OPTS -Djavax.xml.parsers.DocumentBuilderFactory=com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderFactoryImpl -Djava.security.krb5.conf=/tmp/oracle_krb/krb5.conf"
Redefine the JDBC connection URL, using syntax like the following:
jdbc:hive2://node1:10000/default;principal=HiveServer2-Kerberos-Principal
For example:
jdbc:hive2://bda1node01.example.com:10000/default;principal=hive/HiveServer2Host@EXAMPLE.COM
See also "HiveServer2 Security Configuration" in the CDH5 Security Guide.
Renew the Kerberos ticket for the oracle user on a regular basis to prevent disruptions in service; one renewal approach is sketched below.
See Oracle Big Data Appliance Software User's Guide for instructions about managing Kerberos on Oracle Big Data Appliance.
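A common way to renew the ticket on a regular basis is a cron job that runs kinit against a keytab. This is a sketch only; the keytab path and principal are assumptions for your environment:

# crontab entry for the oracle user: renew the Kerberos ticket every 8 hours
0 */8 * * * /usr/bin/kinit -k -t /home/oracle/oracle.keytab oracle@EXAMPLE.COM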
After setting up an Oracle Data Integrator agent, configure it to work with Oracle Data Integrator Application Adapter for Hadoop.
To configure the Oracle Data Integrator agent:
Install Hadoop on your Oracle Data Integrator agent computer.
For Oracle Big Data Appliance, see Oracle Big Data Appliance Software User's Guide for instructions for setting up a remote Hadoop client.
Install Hive on your Oracle Data Integrator agent computer.
Install SQOOP on your Oracle Data Integrator agent computer.
Set the following base environment variables for Hadoop and Hive on your ODI agent computer.
Table 3-2 Environment Variables Mandatory for Hadoop and Hive

Environment Variable | Value
---|---
HADOOP_HOME | Location of the Hadoop installation directory.
HADOOP_CONF | Location of the Hadoop configuration files, such as core-default.xml, core-site.xml, and hdfs-site.xml.
HIVE_HOME | Location of the Hive installation directory.
HIVE_CONF | Location of the Hive configuration files, such as hive-site.xml.
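For example, the base variables might be set as follows in the agent's environment; the paths shown are assumptions and must match your actual installation:

$ export HADOOP_HOME=/usr/lib/hadoop
$ export HADOOP_CONF=/etc/hadoop/conf
$ export HIVE_HOME=/usr/lib/hive
$ export HIVE_CONF=/etc/hive/conf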
If you plan to use HBase features, set the following environment variables on your ODI agent computer. Note that you need to set these environment variables in addition to the base Hadoop and Hive environment variables.
Table 3-3 Environment Variables Mandatory for HBase (in addition to the base Hadoop and Hive environment variables)

Environment Variable | Value
---|---
HBASE_HOME | Location of the HBase installation directory.
To use Oracle Loader for Hadoop:
Install Oracle Loader for Hadoop on your Oracle Data Integrator agent system. See Installing Oracle Loader for Hadoop in Oracle Big Data Connectors User's Guide.
To use Oracle SQL Connector for HDFS (OLH_OUTPUT_MODE=DP_OSCH or OSCH), you must first install it. See "Oracle SQL Connector for Hadoop Distributed File System Setup" in Oracle Big Data Connectors User's Guide.
Set the following environment variables for Oracle Loader for Hadoop on your ODI agent computer. Note that you must set these environment variables in addition to the base Hadoop and Hive environment variables.
Table 3-4 Environment Variables Mandatory for Oracle Loader for Hadoop (in addition to the base Hadoop and Hive environment variables)

Environment Variable | Value
---|---
OLH_HOME | Location of the Oracle Loader for Hadoop installation.
OSCH_HOME | Location of the Oracle SQL Connector for HDFS installation.
HADOOP_CLASSPATH | In order to work with OLH, the Hadoop JARs in the HADOOP_CLASSPATH have to be listed explicitly, without wildcards.
ODI_OLH_JARS | Comma-separated list of all JAR files required for custom input formats, Hive, Hive SerDes, and so forth, used by Oracle Loader for Hadoop. All file names have to be expanded without wildcards.
ODI_OLH_SHAREDLIBS | Comma-separated list of the shared libraries required by Oracle Loader for Hadoop.
ODI_ADDITIONAL_CLASSPATH | Additional classpath entries required by the ODI agent.
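As an illustration, the Oracle Loader for Hadoop variables could be set as follows; the installation paths and JAR names are assumptions, not prescribed values:

$ export OLH_HOME=/opt/oracle/oraloader
$ export OSCH_HOME=/opt/oracle/orahdfs
$ export ODI_OLH_JARS=$HIVE_HOME/lib/hive-metastore.jar,$HIVE_HOME/lib/hive-exec.jar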
For executing Hadoop jobs on the local agent of an Oracle Data Integrator Studio installation, follow the configuration steps in the previous section with the following change: copy JAR files into the Oracle Data Integrator userlib directory instead of the drivers directory. For example:

Linux: $USER_HOME/.odi/oracledi/userlib directory

Windows: C:\Users\<USERNAME>\AppData\Roaming\odi\oracledi\userlib directory
Setting up a project follows the standard procedures. See Developing Integration Projects with Oracle Data Integrator.
Import the following KMs into Global Objects or a project:
IKM File to Hive (Load Data)
IKM Hive Control Append
IKM Hive Transform
IKM File-Hive to Oracle (OLH-OSCH)
IKM File-Hive to SQL (SQOOP)
IKM SQL to Hive-HBase-File (SQOOP)
IKM Hive to HBase Incremental Update (HBase-SerDe)
LKM HBase to Hive (HBase-SerDe)
CKM Hive
RKM Hive
RKM HBase
This section contains the following topics:
To create a model that is based on the technology hosting Hive or HBase and on the logical schema created when you configured the Hive or HBase connection, follow the standard procedure described in Developing Integration Projects with Oracle Data Integrator.
RKM Hive is used to reverse engineer Hive tables and views. To perform a customized reverse-engineering of Hive tables with RKM Hive, follow the usual procedures, as described in Developing Integration Projects with Oracle Data Integrator. This topic details information specific to Hive tables.
The reverse-engineering process creates the data stores for the corresponding Hive table or views. You can use the data stores as either a source or a target in a mapping.
RKM Hive reverses these metadata elements:
Hive tables and views as Oracle Data Integrator data stores.
Specify the reverse mask in the Mask field, and then select the tables and views to reverse. The Mask field in the Reverse Engineer tab filters reverse-engineered objects based on their names. The Mask field cannot be empty and must contain at least the percent sign (%).
Hive columns as Oracle Data Integrator attributes with their data types.
Information about buckets, partitioning, clusters, and sort columns is set in the respective flex fields in the Oracle Data Integrator data store or column metadata.
Table 3-5 describes the created flex fields.
Table 3-5 Flex Fields for Reverse-Engineered Hive Tables and Views
Object | Flex Field Name | Flex Field Code | Flex Field Type | Description
---|---|---|---|---
DataStore | Hive Buckets | HIVE_BUCKETS | String | Number of buckets to be used for clustering.
Column | Hive Partition Column | HIVE_PARTITION_COLUMN | Numeric | All partitioning columns are marked as "1".
Column | Hive Cluster Column | HIVE_CLUSTER_COLUMN | Numeric | All cluster columns are marked as "1".
Column | Hive Sort Column | HIVE_SORT_COLUMN | Numeric | All sort columns are marked as "1".
RKM HBase is used to reverse engineer HBase tables. To perform a customized reverse-engineering of HBase tables with RKM HBase, follow the usual procedures, as described in Developing Integration Projects with Oracle Data Integrator. This topic details information specific to HBase tables.
The reverse-engineering process creates the data stores for the corresponding HBase table. You can use the data stores as either a source or a target in a mapping.
RKM HBase reverses these metadata elements:
HBase tables as Oracle Data Integrator data stores.
Specify the reverse mask in the Mask field, and then select the tables to reverse. The Mask field in the Reverse Engineer tab filters reverse-engineered objects based on their names. The Mask field cannot be empty and must contain at least the percent sign (%).
HBase columns as Oracle Data Integrator attributes with their data types.
HBase unique row key as an Oracle Data Integrator attribute called key.
Table 3-6 describes the options for RKM HBase.
Table 3-6 RKM HBase Options

Option | Description
---|---
SCAN_MAX_ROWS | Specifies the maximum number of rows to be scanned during reversing of a table.
SCAN_START_ROW | Specifies the key of the row on which to start the scan. By default, the scan starts on the first row. The row key is specified as a Java expression returning an instance of byte[].
SCAN_STOP_ROW | Specifies the key of the row on which to stop the scan. By default, the scan runs to the last row of the table or up to SCAN_MAX_ROWS rows. Applies only if SCAN_START_ROW is specified.
SCAN_ONLY_FAMILY | Restricts the scan to column families whose names match this pattern. The SQL-LIKE wildcards percent sign (%) and underscore (_) can be used.
LOG_FILE_NAME | Specifies the path and file name of the log file. The default path is the user home directory.
Table 3-7 describes the created flex fields.
Table 3-7 Flex Fields for Reverse-Engineered HBase Tables
Object | Flex Field Name | Flex Field Code | Flex Field Type | Description
---|---|---|---|---
DataStore | HBase Quorum | HBASE_QUORUM | String | Comma-separated list of ZooKeeper nodes. It is used by the HBase client to locate the HBase Master server and HBase Region servers.
Column | HBase Storage Type | HBASE_STORAGE_TYPE | String | Defines how a data type is physically stored in HBase. Permitted values are String (default) and Binary.
After reverse engineering Hive tables and configuring them, you can choose from these mapping configurations:
To load data from the local file system or the HDFS file system into Hive tables:
Create the data stores for local files and HDFS files.
Refer to Connectivity and Knowledge Modules Guide for Oracle Data Integrator for information about reverse engineering and configuring local file data sources.
Create a mapping using the file data store as the source and the corresponding Hive table as the target. Use the IKM File to Hive (Load Data) knowledge module specified in the physical diagram of the mapping. This integration knowledge module loads data from flat files into Hive, replacing or appending any existing data.
IKM File to Hive (Load Data) supports:
One or more input files. To load multiple source files, enter an asterisk or a question mark as a wildcard character in the resource name of the file DataStore (for example, webshop_*.log).
Fixed length
Delimited
Customized format
Immediate or deferred loading
Overwrite or append
Hive external tables
Table 3-8 describes the options for IKM File to Hive (Load Data). See the knowledge module for additional details.
Table 3-8 IKM File to Hive Options
Option | Description
---|---
CREATE_TARG_TABLE | Check this option if you wish to create the target table.
TRUNCATE | Set this option to true if you wish to replace the target table/partition content with the new data. Otherwise, the new data is appended to the target table.
FILE_IS_LOCAL | Defines whether the source file is to be considered local (outside of the current Hadoop cluster).
EXTERNAL_TABLE | Defines whether to declare the target/staging table as externally managed. For non-external tables, Hive manages all data files.
USE_STAGING_TABLE | Defines whether an intermediate staging table will be created.
DELETE_TEMPORARY_OBJECTS | Removes temporary objects, such as tables, files, and scripts after integration.
DEFER_TARGET_LOAD | Defines whether the files that have been declared to the staging table should be loaded into the target table now or during a later execution. The typical use case for this option is when there are multiple files, each of them requires data redistribution or sorting, and the files are gathered by calling the interface several times. For example, the interface is used in a package that retrieves (many small) files from different locations, and the location, stored in an Oracle Data Integrator variable, is to be used in a target partition column.
OVERRIDE_ROW_FORMAT | Allows overriding the entire Hive row format definition of the staging table. The list of columns in the source DataStore must match the list of input groups in the regular expression (same number of columns and appropriate data types).
STOP_ON_FILE_NOT_FOUND | Defines whether the KM should stop if the input file is not found.
HIVE_COMPATIBLE | Specifies the Hive version compatibility. The values permitted for this option are 0.7 and 0.8.
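Conceptually, the load performed by IKM File to Hive (Load Data) corresponds to HiveQL LOAD DATA statements such as the following sketch; the table and path names are hypothetical, and the KM generates the actual statements from the mapping and the options above:

$ hive -e "LOAD DATA LOCAL INPATH '/tmp/webshop_01.log' INTO TABLE web_logs"
$ hive -e "LOAD DATA INPATH '/user/oracle/webshop_01.log' OVERWRITE INTO TABLE web_logs"

The first statement loads a local file (as when FILE_IS_LOCAL is set to true); the second loads an HDFS file and replaces existing content, as the TRUNCATE option does.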
To load data from an HBase table into Hive:
Create a data store for the HBase table that you want to load in Hive.
Refer to "Setting Up HBase Data Sources" for information about reverse engineering and configuring HBase data sources.
Create a mapping using the HBase data store as the source and the corresponding Hive table as the target. Use the LKM HBase to Hive (HBase-SerDe) knowledge module, specified in the Physical diagram of the mapping. This knowledge module provides read access to an HBase table from Hive.
LKM HBase to Hive (HBase-SerDe)
LKM HBase to Hive (HBase-SerDe) supports:
A single source HBase table.
Table 3-9 describes the options for LKM HBase to Hive (HBase-SerDe). See the knowledge module for additional details.
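The read-only access that the KM sets up relies on the standard Hive HBase storage handler and SerDe. The temporary load table it defines is similar in spirit to the following hand-written declaration; the table names and column mapping are illustrative only:

$ hive -e "CREATE EXTERNAL TABLE hbase_customers (key string, name string, city string)
    STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
    WITH SERDEPROPERTIES ('hbase.columns.mapping' = ':key,cf:name,cf:city')
    TBLPROPERTIES ('hbase.table.name' = 'customers')"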
To load data from a Hive table into HBase:
Create a data store for the Hive tables that you want to load in HBase.
Refer to "Setting Up Hive Data Sources" for information about reverse engineering and configuring Hive data sources.
Create a mapping using the Hive data store as the source and the corresponding HBase table as the target. Use the IKM Hive to HBase Incremental Update (HBase-SerDe) knowledge module, specified in the physical diagram of the mapping. This integration knowledge module loads data from Hive into HBase. It supports inserting new rows and updating existing rows.
IKM Hive to HBase Incremental Update (HBase-SerDe)
IKM Hive to HBase Incremental Update (HBase-SerDe) supports:
Filters, Joins, Datasets, Transformations and Aggregations in Hive
Inline views generated by IKM Hive Transform
Inline views generated by IKM Hive Control Append
Table 3-10 describes the options for IKM Hive to HBase Incremental Update (HBase-SerDe). See the knowledge module for additional details.
Table 3-10 IKM Hive to HBase Incremental Update (HBase-SerDe) Options
Option | Description
---|---
CREATE_TARG_TABLE | Creates the HBase target table.
TRUNCATE | Replaces the target table content with the new data.
DELETE_TEMPORARY_OBJECTS | Deletes temporary objects such as tables, files, and scripts after data integration.
HBASE_WAL | Enables or disables the Write-Ahead Log (WAL) that HBase uses to protect against data loss. For better performance, the WAL can be disabled.
To load data from an SQL database into a Hive, HBase, or File target:
Create a data store for the SQL source that you want to load into Hive, HBase, or File target.
Refer to Connectivity and Knowledge Modules Guide for Oracle Data Integrator for information about reverse engineering and configuring SQL data sources.
Create a mapping using the SQL source data store as the source and the corresponding HBase table, Hive table, or HDFS files as the target. Use the IKM SQL to Hive-HBase-File (SQOOP) knowledge module, specified in the Physical diagram of the mapping. This integration knowledge module loads data from an SQL source into a Hive, HBase, or File target, using SQOOP with parallel JDBC connections to load the data.
IKM SQL to Hive-HBase-File (SQOOP)
IKM SQL to Hive-HBase-File (SQOOP) supports:
Mappings on staging
Joins on staging
Filter expressions on staging
Datasets
Lookups
Derived tables
Table 3-11 describes the options for IKM SQL to Hive-HBase-File (SQOOP). See the knowledge module for additional details.
Table 3-11 IKM SQL to Hive-HBase-File (SQOOP) Options
Option | Description
---|---
CREATE_TARG_TABLE | Creates the target table. This option is applicable only if the target is Hive or HBase.
TRUNCATE | Replaces any existing target table content with the new data. For Hive and HBase targets, the target data is truncated. For File targets, the target directory is removed, and this option must be set to true.
SQOOP_PARALLELISM | Specifies the degree of parallelism, more precisely the number of mapper processes used for extraction.
SPLIT_BY | Specifies the target column to be used for splitting the source data into n chunks for parallel extraction, where n is SQOOP_PARALLELISM.
BOUNDARY_QUERY | For splitting the source data into chunks for parallel extraction, the minimum and maximum values of the split column (KM option SPLIT_BY) are retrieved. For preserving context independence, regular table names should be inserted through odiRef.getObjectName calls.
TEMP_DIR | Specifies the directory used for storing temporary files, such as the sqoop script and stdout and stderr redirects. Leave this option blank to use the system's default temp directory.
MAPRED_OUTPUT_BASE_DIR | Specifies an HDFS directory in which SQOOP creates subdirectories for temporary files. A subdirectory named like the work table is created here to hold the temporary data.
DELETE_TEMPORARY_OBJECTS | Deletes temporary objects such as tables, files, and scripts after data integration.
USE_HIVE_STAGING_TABLE | Loads data into a Hive work table before loading into the Hive target table. This option is applicable only if the target technology is Hive.
USE_GENERIC_JDBC_CONNECTOR | Specifies whether to use the generic JDBC connector if a connector for the target technology is not available. For certain technologies SQOOP provides specific connectors, which take care of SQL dialects and optimize performance. When there is a connector for the respective target technology, that connector should be used; otherwise, the generic JDBC connector can be used.
EXTRA_HADOOP_CONF_PROPERTIES | Optional generic Hadoop properties.
EXTRA_SQOOP_CONF_PROPERTIES | Optional SQOOP properties.
EXTRA_SQOOP_CONNECTOR_CONF_PROPERTIES | Optional SQOOP connector properties.
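For reference, the kind of SQOOP invocation that this KM drives resembles the following parallel import into Hive; the connection details, table, and split column are placeholders:

$ sqoop import \
    --connect jdbc:oracle:thin:@//dbhost:1521/orcl \
    --username scott -P \
    --table CUSTOMERS \
    --split-by CUSTOMER_ID \
    --num-mappers 4 \
    --hive-import --hive-table customers

Here --num-mappers corresponds to SQOOP_PARALLELISM and --split-by to SPLIT_BY.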
After loading data into Hive, you can validate and transform the data using the following knowledge modules.
This knowledge module validates and controls the data, and integrates it into a Hive target table in truncate/insert (append) mode. Invalid data is isolated in an error table and can be recycled. IKM Hive Control Append supports inline view mappings that use either this knowledge module or IKM Hive Transform.
Table 3-12 lists the options. See the knowledge module for additional details.
Table 3-12 IKM Hive Control Append Options
Option | Description
---|---
FLOW_CONTROL | Activates flow control.
RECYCLE_ERRORS | Recycles data rejected from a previous control.
STATIC_CONTROL | Controls the target table after having inserted or updated target data.
CREATE_TARG_TABLE | Creates the target table.
TRUNCATE | Replaces the target table content with the new data.
DELETE_TEMPORARY_OBJECTS | Removes the temporary objects, such as tables, files, and scripts after data integration.
HIVE_COMPATIBLE | Specifies the Hive version compatibility. The values permitted for this option are 0.7 and 0.8.
This knowledge module checks data integrity for Hive tables. It verifies the validity of the constraints of a Hive data store and diverts invalid records to an error table. You can use CKM Hive for both static control and flow control; in either case, the constraints must be defined on the data store.
Table 3-13 lists the options for this check knowledge module. See the knowledge module for additional details.
Table 3-13 CKM Hive Options

Option | Description
---|---
DROP_ERROR_TABLE | Drops the error table before execution.
HIVE_COMPATIBLE | Specifies the Hive version compatibility. The values permitted for this option are 0.7 and 0.8.
This knowledge module performs transformations. It uses a shell script to transform the data and then integrates it into a Hive target table using replace mode. The knowledge module supports inline view mappings and can be used as an inline view for IKM Hive Control Append.
The transformation script must read the input columns in the order defined by the source data store. Only mapped source columns are streamed into the transformations. The transformation script must provide the output columns in the order defined by the target data store.
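A minimal transformation script that satisfies these rules can be a few lines of shell: it reads the tab-separated mapped source columns from stdin and writes the target columns, in target order, tab-separated to stdout. The column layout below is hypothetical:

#!/bin/sh
# Reads: order_id <TAB> amount; writes: order_id <TAB> rounded amount
awk -F'\t' '{ printf "%s\t%d\n", $1, $2 + 0.5 }'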
Table 3-14 lists the options for this integration knowledge module. See the knowledge module for additional details.
Table 3-14 IKM Hive Transform Options
Option | Description
---|---
CREATE_TARG_TABLE | Creates the target table.
DELETE_TEMPORARY_OBJECTS | Removes the temporary objects, such as tables, files, and scripts after data integration.
TRANSFORM_SCRIPT_NAME | Defines the file name of the transformation script. This transformation script is used to transform the input data into the output structure. Both local and HDFS paths are supported.
TRANSFORM_SCRIPT | Defines the transformation script content. This transformation script is then used to transform the input data into the output structure. If left blank, the file given in TRANSFORM_SCRIPT_NAME must already exist. All mapped source columns are spooled as tab-separated data into this script via stdin. The script transforms the data and writes it out as tab-separated data on stdout, providing as many output columns as there are target columns.
TRANSFORM_SCRIPT_MODE | Unix/HDFS file permissions for the script file in octal notation with a leading zero. For example, full permissions for owner and group: 0770. Warning: Using wider permissions such as 0777 poses a security risk.
PRE_TRANSFORM_DISTRIBUTE | Provides an optional, comma-separated list of source column names, which enables the knowledge module to distribute the data before the transformation script is applied.
PRE_TRANSFORM_SORT | Provides an optional, comma-separated list of source column names, which enables the knowledge module to sort the data before the transformation script is applied.
POST_TRANSFORM_DISTRIBUTE | Provides an optional, comma-separated list of target column names, which enables the knowledge module to distribute the data after the transformation script is applied.
POST_TRANSFORM_SORT | Provides an optional, comma-separated list of target column names, which enables the knowledge module to sort the data after the transformation script is applied.
IKM File-Hive to Oracle (OLH-OSCH) integrates data from an HDFS file or Hive source into an Oracle database target using Oracle Loader for Hadoop. Using the mapping configuration and the selected options, the knowledge module generates an appropriate Oracle Database target instance. Hive and Hadoop versions must follow the Oracle Loader for Hadoop requirements.
See Also:
"Oracle Loader for Hadoop Setup" in Oracle Big Data Connectors User's Guide for the required versions of Hadoop and Hive
"Setting Up the Oracle Data Integrator Agent to Execute Hadoop Jobs" for required environment variable settings
Table 3-15 lists the options for this integration knowledge module. See the knowledge module for additional details.
Table 3-15 IKM File-Hive to Oracle (OLH-OSCH) Options

Option | Description
---|---
OLH_OUTPUT_MODE | Specifies how to load the Hadoop data into Oracle. Permitted values are JDBC, OCI, DP_COPY, DP_OSCH, and OSCH.
REJECT_LIMIT | Specifies the maximum number of errors for Oracle Loader for Hadoop and the external table.
CREATE_TARG_TABLE | Creates the target table.
TRUNCATE | Replaces the target table content with the new data.
DELETE_ALL | Deletes all the data in the target table.
USE_HIVE_STAGING_TABLE | Materializes Hive source data before extraction by Oracle Loader for Hadoop. This option is applicable only if the source technology is Hive.
USE_ORACLE_STAGING_TABLE | Uses an intermediate Oracle database staging table. The extracted data is made available to Oracle by an external table.
EXT_TAB_DIR_LOCATION | Specifies the file system path of the external table.
TEMP_DIR | Specifies the directory used for storing temporary files, such as scripts and stdout and stderr redirects. Leave this option blank to use the system's default temp directory.
MAPRED_OUTPUT_BASE_DIR | Specifies an HDFS directory where the Oracle Loader for Hadoop job creates subdirectories for temporary files and data pump output files.
FLOW_TABLE_OPTIONS | Specifies the attributes for the integration table at create time, used for increasing performance. This option is set by default to NOLOGGING.
DELETE_TEMPORARY_OBJECTS | Removes temporary objects, such as tables, files, and scripts after data integration.
OVERRIDE_INPUTFORMAT | By default the InputFormat class is derived from the source DataStore/Technology (DelimitedTextInputFormat or HiveToAvroInputFormat). This option allows the user to specify the class name of a custom InputFormat. For example, for reading custom file formats such as web log files, the OLH RegexInputFormat can be used. See KM option EXTRA_OLH_CONF_PROPERTIES for details on how to specify the regular expression.
EXTRA_OLH_CONF_PROPERTIES | Particularly when using custom InputFormats (see KM option OVERRIDE_INPUTFORMAT), the InputFormat may require additional configuration parameters. These are provided in the OLH configuration file. When the OLH RegexInputFormat is used for reading custom file formats, this KM option specifies the regular expression and other parsing details.
IKM File-Hive to SQL (SQOOP) integrates data from an HDFS file or Hive source into an SQL database target using SQOOP.
IKM File-Hive to SQL (SQOOP) supports:
Filters, Joins, Datasets, Transformations and Aggregations in Hive
Inline views generated by IKM Hive Control Append
Inline views generated by IKM Hive Transform
Hive-HBase source tables using LKM HBase to Hive (HBase-SerDe)
File source data (delimited file format only)
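The underlying operation is a SQOOP export from the HDFS or Hive warehouse directory into the SQL target, along the lines of the following sketch; the connection details and paths are placeholders:

$ sqoop export \
    --connect jdbc:mysql://dbhost:3306/sales \
    --username sales_user -P \
    --table ORDERS \
    --export-dir /user/hive/warehouse/orders \
    --input-fields-terminated-by '\001' \
    --num-mappers 4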
Table 3-16 lists the options for this integration knowledge module. See the knowledge module for additional details.
Table 3-16 IKM File-Hive to SQL (SQOOP)