6 Using Query Processing Engines to Generate Code in Different Languages

This chapter describes how to set up the query processing engines that are supported by Oracle Data Integrator to generate code in different languages.

This chapter includes the following sections:

  • Section 6.1, "Query Processing Engines Supported by Oracle Data Integrator"

  • Section 6.2, "Setting Up Hive Data Server"

  • Section 6.3, "Creating a Hive Physical Schema"

  • Section 6.4, "Setting Up Pig Data Server"

  • Section 6.5, "Creating a Pig Physical Schema"

  • Section 6.6, "Setting Up Spark Data Server"

  • Section 6.7, "Creating a Spark Physical Schema"

  • Section 6.8, "Generating Code in Different Languages"

6.1 Query Processing Engines Supported by Oracle Data Integrator

Hadoop provides a framework for parallel data processing in a cluster. Several languages provide a user front end to this framework. Oracle Data Integrator supports the following query processing engines to generate code in different languages:

  • Hive

    The Apache Hive warehouse software facilitates querying and managing large datasets residing in distributed storage. Hive provides a mechanism to project structure onto this data and query the data using a SQL-like language called HiveQL.

  • Pig

    Pig is a high-level platform for creating MapReduce programs used with Hadoop. The language for this platform is called Pig Latin.

  • Spark

    Spark is a fast and general processing engine compatible with Hadoop data. It can run in Hadoop clusters through YARN or Spark's standalone mode, and it can process data in HDFS, HBase, Cassandra, Hive, and any Hadoop Input Format.

To generate code in these languages, you must set up Hive, Pig, and Spark data servers in Oracle Data Integrator. You then use these data servers as the staging area in your mappings to generate HiveQL, Pig Latin, or Spark code.
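
For reference, the following is a minimal sketch, not ODI-generated output, of the kind of Spark code such a mapping produces: a PySpark job that connects through YARN and processes a file in HDFS, as described for Spark above. The application name, paths, and column logic are illustrative assumptions.

    # Minimal PySpark sketch; names, paths, and columns are illustrative.
    from pyspark.sql import SparkSession

    # Connect to the cluster through YARN (Spark standalone would use a
    # spark://host:port master URL instead).
    spark = (SparkSession.builder
             .master("yarn")
             .appName("odi_style_example")
             .getOrCreate())

    # Read a CSV file from HDFS, apply a simple filter, and write the
    # result back to HDFS as Parquet.
    df = spark.read.option("header", "true").csv("hdfs:///user/odi/src_data.csv")
    df.filter(df["status"] == "ACTIVE").write.mode("overwrite").parquet(
        "hdfs:///user/odi/tgt_data")

    spark.stop()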

See Also: Section 2.2, "Generate Code in Different Languages with Oracle Data Integrator"

6.2 Setting Up Hive Data Server

To set up the Hive data server:

  1. Click the Topology tab.

  2. In the Physical Architecture tree, under Technologies, right-click Hive and then click New Data Server.

  3. In the Definition tab, specify the details of the Hive data server.

    See Section 6.2.1, "Hive Data Server Definition" for more information.

  4. In the JDBC tab, specify the Hive data server connection details.

    See Section 6.2.2, "Hive Data Server Connection Details" for more information.

  5. Click Test Connection to test the connection to the Hive data server.

6.2.1 Hive Data Server Definition

The following table describes the fields that you need to specify on the Definition tab when creating a new Hive data server.

Note: Only the fields that are required or specific to a Hive data server are described.

Table 6-1 Hive Data Server Definition

Field Description

Name

Name of the data server that appears in Oracle Data Integrator.

Data Server

Physical name of the data server.

User/Password

Hive user with its password.

Metastore URI

Hive metastore URI: for example, thrift://BDA:9083. (Port 9083 is the default Hive metastore port; port 10000 is the HiveServer2 port used in the JDBC URL.)

Hadoop Data Server

Hadoop data server that you want to associate with the Hive data server.

Additional Classpath

Additional classpaths.


See Also: Section 6.2, "Setting Up Hive Data Server"

6.2.2 Hive Data Server Connection Details

The following table describes the fields that you need to specify on the JDBC tab when creating a new Hive data server.

Note: Only the fields that are required or specific to a Hive data server are described.

Table 6-2 Hive Data Server Connection Details

Field Description

JDBC Driver

DataDirect Apache Hive JDBC Driver

Use this JDBC driver to connect to the Hive data server. The driver documentation is available at the following URL:

http://media.datadirect.com/download/docs/jdbc/alljdbc/help.html#page/userguide/rfi1369069225784.html#

JDBC URL

jdbc:weblogic:hive://<host>:<port>[;property=value[;...]]

For example, jdbc:weblogic:hive://localhost:10000;DatabaseName=default;User=default;Password=default
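
Outside Oracle Data Integrator, you can exercise the same URL and credentials from Python to confirm they are valid, much as Test Connection does. The sketch below is a hedged illustration: it assumes the jaydebeapi package, assumes the DataDirect driver class name weblogic.jdbc.hive.HiveDriver, and uses an illustrative jar path; verify both against your driver installation.

    # Hedged sketch: the driver class name and jar path are assumptions to
    # verify against your DataDirect driver installation.
    import jaydebeapi

    conn = jaydebeapi.connect(
        "weblogic.jdbc.hive.HiveDriver",                  # assumed driver class
        "jdbc:weblogic:hive://localhost:10000;DatabaseName=default",
        ["default", "default"],                           # user, password
        "/path/to/datadirect-hive-jdbc.jar")              # illustrative jar path

    # Run a trivial HiveQL statement to confirm the session works.
    curs = conn.cursor()
    curs.execute("SHOW TABLES")
    print(curs.fetchall())
    curs.close()
    conn.close()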


See Also: Section 6.2, "Setting Up Hive Data Server"

6.3 Creating a Hive Physical Schema

Create a Hive physical schema using the standard procedure, as described in Creating a Physical Schema in Administering Oracle Data Integrator.

Create a logical schema for this physical schema using the standard procedure, as described in Creating a Logical Schema in Administering Oracle Data Integrator, and associate it with a context.

See Also: Section 6.2, "Setting Up Hive Data Server"

6.4 Setting Up Pig Data Server

To set up the Pig data server:

  1. Click the Topology tab.

  2. In the Physical Architecture tree, under Technologies, right-click Pig and then click New Data Server.

  3. In the Definition tab, specify the details of the Pig data server.

    See Section 6.4.1, "Pig Data Server Definition" for more information.

  4. In the Properties tab, add the Pig data server properties.

    See Section 6.4.2, "Pig Data Server Properties" for more information.

  5. Click Test Connection to test the connection to the Pig data server.

6.4.1 Pig Data Server Definition

The following table describes the fields that you need to specify on the Definition tab when creating a new Pig data server.

Note: Only the fields that are required or specific to a Pig data server are described.

Table 6-3 Pig Data Server Definition

Field Description

Name

Name of the data server that will appear in Oracle Data Integrator.

Data Server

Physical name of the data server.

Process Type

Choose one of the following:

  • Local Mode

    Select to run the job in local mode.

    In this mode, Pig scripts located in the local file system are executed. MapReduce jobs are not created.

  • MapReduce Mode

    Select to run the job in MapReduce mode.

    In this mode, Pig scripts located in HDFS are executed. MapReduce jobs are created.

    Note: If this option is selected, the Pig data server must be associated with a Hadoop data server.

For a command-line illustration of the two modes, see the sketch following this table.

Hadoop Data Server

Hadoop data server that you want to associate with the Pig data server.

Note: This field is displayed only when Process Type is set to MapReduce Mode.

Additional Classpath

Specify additional classpaths. Add the following:

  • /usr/lib/pig/lib

  • /usr/lib/pig/pig-0.12.0-cdh<version>.jar

    Replace <version> with the Cloudera version you have. For example, /usr/lib/pig/pig-0.12.0-cdh5.3.0.jar.

  • /usr/lib/hive/lib

  • /usr/lib/hive/conf

For pig-hcatalog-hive, add the following classpath in addition to the ones mentioned above:

/usr/lib/hive-hcatalog/share/hcatalog

User/Password

Pig user with its password.
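
To make the two process types concrete, the following hedged sketch shows the equivalent choice on the Pig command line, driven from Python; the script path is an illustrative assumption.

    # Hedged sketch of Pig's two execution modes; the script path is
    # illustrative.
    import subprocess

    # Local Mode: the script and its data live on the local file system,
    # and no MapReduce jobs are submitted.
    subprocess.run(["pig", "-x", "local", "/tmp/example.pig"], check=True)

    # MapReduce Mode: the script reads HDFS paths and is compiled into
    # MapReduce jobs that run on the cluster.
    subprocess.run(["pig", "-x", "mapreduce", "/tmp/example.pig"], check=True)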


See Also: Section 6.4, "Setting Up Pig Data Server"

6.4.2 Pig Data Server Properties

The following table describes the Pig data server properties that you need to add on the Properties tab when creating a new Pig data server.

Table 6-4 Pig Data Server Properties

Key Value

hive.metastore.uris

thrift://bigdatalite.localdomain:9083

pig.additional.jars

/usr/lib/hive-hcatalog/share/hcatalog/*.jar:/usr/lib/hive/
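
Outside Oracle Data Integrator, one way to check these two values before recording them in the data server is to pass them to Pig through a property file. The sketch below is a hedged illustration: it assumes a Pig version that supports the -propertyFile and -useHCatalog launcher options, and the file and script paths are illustrative.

    # Hedged sketch, assuming Pig supports -propertyFile and -useHCatalog;
    # file and script paths are illustrative.
    import subprocess

    # Write the two data server properties to a property file.
    with open("/tmp/odi_pig.properties", "w") as f:
        f.write("hive.metastore.uris=thrift://bigdatalite.localdomain:9083\n")
        f.write("pig.additional.jars="
                "/usr/lib/hive-hcatalog/share/hcatalog/*.jar:/usr/lib/hive/\n")

    # Launch a script that reads Hive tables through HCatalog.
    subprocess.run(["pig", "-useHCatalog", "-propertyFile",
                    "/tmp/odi_pig.properties", "/tmp/example.pig"], check=True)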


See Also: Section 6.4, "Setting Up Pig Data Server"

6.5 Creating a Pig Physical Schema

Create a Pig physical schema using the standard procedure, as described in Creating a Physical Schema in Administering Oracle Data Integrator.

Create a logical schema for this physical schema using the standard procedure, as described in Creating a Logical Schema in Administering Oracle Data Integrator, and associate it with a context.

See Also: Section 6.4, "Setting Up Pig Data Server"

6.6 Setting Up Spark Data Server

To set up the Spark data server:

  1. Click the Topology tab.

  2. In the Physical Architecture tree, under Technologies, right-click Spark Python and then click New Data Server.

  3. In the Definition tab, specify the details of the Spark data server.

    See Section 6.6.1, "Spark Data Server Definition" for more information.

  4. Click Test Connection to test the connection to the Spark data server.

6.6.1 Spark Data Server Definition

The following table describes the fields that you need to specify on the Definition tab when creating a new Spark Python data server.

Note: Only the fields that are required or specific to a Spark Python data server are described.

Table 6-5 Spark Data Server Definition

Field Description

Name

Name of the data server that will appear in Oracle Data Integrator.

Master Cluster (Data Server)

Physical name of the master cluster or the data server.

User/Password

Spark data server or master cluster user with its password.
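
The master cluster name recorded here corresponds to the master URL that Spark jobs are submitted with. As a hedged illustration outside Oracle Data Integrator, a PySpark script could be submitted to that master as follows; the script path is an assumption, and the master URL should match your cluster (yarn, or a spark://host:port standalone URL).

    # Hedged sketch: the script path is illustrative; use the master URL that
    # matches the cluster named in the data server definition.
    import subprocess

    subprocess.run(["spark-submit",
                    "--master", "yarn",   # or "spark://bda:7077" for standalone
                    "/tmp/odi_generated_job.py"],
                   check=True)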


See Also: Section 6.6, "Setting Up Spark Data Server"

6.7 Creating a Spark Physical Schema

Create a Spark physical schema using the standard procedure, as described in Creating a Physical Schema in Administering Oracle Data Integrator.

Create a logical schema for this physical schema using the standard procedure, as described in Creating a Logical Schema in Administering Oracle Data Integrator, and associate it with a context.

See Also: Section 6.6, "Setting Up Spark Data Server"

6.8 Generating Code in Different Languages

By default, Oracle Data Integrator generates HiveQL code. To generate Pig Latin or Spark code, you must use the Pig data server or the Spark data server as the staging location for your mapping.

Before you generate code in these languages, ensure that the Hive, Pig, and Spark data servers are set up.

For more information, see the following sections:

Section 6.2, "Setting Up Hive Data Server"

Section 6.4, "Setting Up Pig Data Server"

Section 6.6, "Setting Up Spark Data Server"

To generate code in different languages:

  1. Open your mapping.

  2. To generate HiveQL code, run the mapping with the default staging location (Hive).

  3. To generate Pig Latin or Spark code, go to the Physical diagram and do one of the following:

    a. To generate Pig Latin code, set the Execute On Hint option to use the Pig data server as the staging location for your mapping.

    b. To generate Spark code, set the Execute On Hint option to use the Spark data server as the staging location for your mapping.

  4. Execute the mapping.

See Also:

Section 6.1, "Query Processing Engines Supported by Oracle Data Integrator"

Section 2.2, "Generate Code in Different Languages with Oracle Data Integrator"