3 Setting Up the Environment for Integrating Hadoop Data

This chapter describes the steps you need to perform to set up the environment to integrate Hadoop data.

This chapter includes the following sections:

  • Configuring Big Data technologies using the Big Data Configurations Wizard

  • Creating and Initializing the Hadoop Data Server

  • Creating a Hadoop Physical Schema

  • Configuring the Oracle Data Integrator Agent to Execute Hadoop Jobs

  • Configuring Oracle Loader for Hadoop

  • Configuring Oracle Data Integrator to Connect to a Secure Cluster

  • Configuring Oracle Data Integrator Studio for Executing Hadoop Jobs on the Local Agent

3.1 Configuring Big Data technologies using the Big Data Configurations Wizard

The Big Data Configurations wizard provides a single entry point to set up multiple Hadoop technologies. You can quickly create data servers, physical schemas, and logical schemas, and set a context for different Hadoop technologies, such as Hadoop, HBase, Oozie, Spark, Hive, and Pig.

The default metadata for different distributions, such as properties, host names, port numbers, etc., and default values for environment variables are pre-populated for you. This helps you to easily create the data servers along with the physical and logical schema, without having in-depth knowledge about these technologies.

After all the technologies are configured, you can validate the settings against the data servers to test the connection status.

Note:

If you do not want to use the Big Data Configurations wizard, you can set up the data servers for the Big Data technologies manually using the information mentioned in the subsequent sections.

To run the Big Data Configurations Wizard:

  1. In ODI Studio, select File and click New... or

    Select Topology tab — Topology Menu — Big Data Configurations.

  2. In the New Gallery dialog, select Big Data Configurations and click OK.

    The Big Data Configurations wizard appears.

  3. In the General Settings panel of the wizard, specify the required options.

    See General Settings for more information.

  4. Click Next.

    A data server panel is displayed for each of the technologies that you selected in the General Settings panel.

  5. In the Hadoop panel of the wizard, do the following:
    • Specify the options required to create the Hadoop data server.

      See Hadoop Data Server Definition for more information.

    • In the Properties section, click the + icon to add any data server properties.

    • Select a logical schema, physical schema, and a context from the appropriate drop-down lists.

  6. Click Next.
  7. In the HDFS panel of the wizard, do the following:
    • Specify the options required to create the HDFS data server.

      See HDFS Data Server Definition for more information.

    • In the Properties section, click the + icon to add any data server properties.

    • Select a logical schema, physical schema, and a context from the appropriate drop-down lists.

  8. Click Next.
  9. In the HBase panel of the wizard, do the following:
    • Specify the options required to create the HBase data server.

      See HBase Data Server Definition for more information.

    • In the Properties section, click the + icon to add any data server properties.

    • Select a logical schema, physical schema, and a context from the appropriate drop-down lists.

  10. In the Spark panel of the wizard, do the following:
    • Specify the options required to create the Spark data server.

      See Spark Data Server Definition for more information.

    • In the Properties section, click the + icon to add any data server properties.

    • Select a logical schema, physical schema, and a context from the appropriate drop-down lists.

  11. Click Next.
  12. In the Kafka panel of the wizard, do the following:
    • Specify the options required to create the Kafka data server.

      See Kafka Data Server Definition for more information.

    • In the Properties section, click the + icon to add any data server properties.

    • Select a logical schema, physical schema, and a context from the appropriate drop-down lists.

  13. Click Next.
  14. In the Pig panel of the wizard, do the following:
    • Specify the options required to create the Pig data server.

      See Pig Data Server Definition for more information.

    • In the Properties section, click the + icon to add any data server properties.

    • Select a logical schema, physical schema, and a context from the appropriate drop-down lists.

  15. Click Next.
  16. In the Hive panel of the wizard, do the following:
    • Specify the options required to create the Hive data server.

      See Hive Data Server Definition for more information.

    • In the Properties section, click the + icon to add any data server properties.

    • Select a logical schema, physical schema, and a context from the appropriate drop-down lists.

  17. Click Next.
  18. In the Oozie panel of the wizard, do the following:
    • Specify the options required to create the Oozie runtime engine.

      See Oozie Runtime Engine Definition for more information.

    • In the Properties section, review the data server properties that are listed.

      Note: You cannot add new properties or remove listed properties. However, if required, you can change the value of listed properties.

      See Oozie Runtime Engine Properties for more information.

    • Select a logical agent and a context from the appropriate drop-down lists.

  19. Click Next.
  20. In the Validate all the settings panel, click Test All Settings to validate the settings against the data servers and verify the connection status.
  21. Click Finish.

3.1.1 General Settings

The following table describes the options that you need to set on the General Settings panel of the Big Data Configurations wizard.


Table 3-1 General Settings Options

Option Description

Prefix

Specify a prefix. This prefix is attached to the data server name, logical schema name, and physical schema name.

Distribution

Select a distribution, either Manual or CDH <version>.

Base Directory

Specify the base directory. This base directory is automatically populated in all other panels of the wizard.

Note: This option appears only if the distribution is other than Manual.

Distribution Type

Select a distribution type, either Normal or Kerberized.

Technologies

Select the technologies that you want to configure.

Note: Data server creation panels are displayed only for the technologies that you select.


3.1.2 HDFS Data Server Definition

The following table describes the options that you must specify to create an HDFS data server.

Note:

Only the fields required or specific for defining an HDFS data server are described.

Table 3-2 HDFS Data Server Definition

Option Description

Name

Type a name for the data server. This name appears in Oracle Data Integrator.

User/Password

User name with its password.

Hadoop Data Server

Hadoop data server that you want to associate with the HDFS data server.

Additional Classpath

Specify additional classpaths.


3.1.3 HBase Data Server Definition

The following table describes the options that you must specify to create an HBase data server.

Note: Only the fields required or specific for defining an HBase data server are described.


Table 3-3 HBase Data Server Definition

Option Description

Name

Type a name for the data server. This name appears in Oracle Data Integrator.

HBase Quorum

Quorum of the HBase installation. For example, localhost:2181.

User/Password

User name with its password.

Hadoop Data Server

Hadoop data server that you want to associate with the HBase data server.

Additional Classpath

By default, the following classpaths are added:

  • /usr/lib/hbase/*

  • /usr/lib/hbase/lib/*

Specify the additional classpaths, if required.


3.1.4 Kafka Data Server Definition

The following table describes the options that you must specify to create a Kafka data server.

Note:

Only the fields required or specific for defining a Kafka data server are described.

Table 3-4 Kafka Data Server Definition

Option Description

Name

Type a name for the data server. This name appears in Oracle Data Integrator.

User/Password

User name with its password.

Hadoop Data Server

Hadoop data server that you want to associate with the Kafka data server.

Additional Classpath

The following additional classpaths are added by default:

  • /opt/cloudera/parcels/CDH/lib/kafka/libs/*

  • <base directory>/lib/kafka/libs/*

If required, you can add more additional classpaths.

Note: This field appears only when you are creating the Kafka data server using the Big Data Configurations wizard.


3.1.5 Kafka Data Server Properties

The following table describes the Kafka data server properties that you need to add on the Properties tab when creating a new Kafka data server.


Table 3-5 Kafka Data Server Properties

Key Value

metadata.broker.list

A comma-separated list of Kafka brokers in host:port format. ODI uses this list to retrieve topics and messages from the Kafka server.

security.protocol

There are two values, PLAINTEXT or SASL_PLAINTEXT. SASL_PLAINTEXT is used for a Kerberized Kafka server. The default value is PLAINTEXT.

oracle.odi.prefer.dataserver.packages

Retrieves the topic and message from the Kafka server. The package prefix is oracle.odi.
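
For example, on a Kerberized cluster, the Properties tab of a Kafka data server might contain entries such as the following. The broker host names and ports are placeholders for your environment:

    metadata.broker.list=broker1.example.com:9092,broker2.example.com:9092
    security.protocol=SASL_PLAINTEXT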

3.2 Creating and Initializing the Hadoop Data Server

To create and initialize the Hadoop data server:

  1. Click the Topology tab.
  2. In the Physical Architecture tree, under Technologies, right-click Hadoop and then click New Data Server.
  3. In the Definition tab, specify the details of the Hadoop data server.

    See Hadoop Data Server Definition for more information.

  4. In the Properties tab, specify the properties for the Hadoop data server.

    See Hadoop Data Server Properties for more information.

  5. Click Initialize to initialize the Hadoop data server.

    Initializing the Hadoop data server creates the structure of the ODI Master repository and Work repository in HDFS.

  6. Click Test Connection to test the connection to the Hadoop data server.
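
After initialization, you can optionally confirm that the repository structure was created by listing the ODI HDFS root directory from a shell on the cluster. This example assumes the default root of /user/<login_username>/odi_home described in Hadoop Data Server Definition:

    $ hdfs dfs -ls /user/<login_username>/odi_home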

3.2.1 Hadoop Data Server Definition

The following table describes the fields that you must specify on the Definition tab when creating a new Hadoop data server.

Note: Only the fields required or specific for defining a Hadoop data server are described.


Table 3-6 Hadoop Data Server Definition

Field Description

Name

Name of the data server that appears in Oracle Data Integrator.

Data Server

Physical name of the data server.

User/Password

Hadoop user with its password.

If a password is not provided, only simple authentication is performed using the user name on HDFS and Oozie.

Authentication Method

Select one of the following authentication methods:
  • Simple Username Authentication

  • Kerberos Principal Username/Password

  • Kerberos Credential Cache

HDFS Node Name URI

URI of the HDFS node name.

hdfs://localhost:8020

Resource Manager/Job Tracker URI

URI of the resource manager or the job tracker.

localhost:8032

ODI HDFS Root

Path of the ODI HDFS root directory.

/user/<login_username>/odi_home.

Additional Class Path

Specify additional classpaths.

Add the following additional classpaths:

  • /usr/lib/hadoop/*

  • /usr/lib/hadoop/lib/*

  • /usr/lib/hadoop-hdfs/*

  • /usr/lib/hadoop-mapreduce/*

  • /usr/lib/hadoop-yarn/*

  • /usr/lib/oozie/lib/*

  • /etc/hadoop/conf/


3.2.2 Hadoop Data Server Properties

The following table describes the properties that you can configure in the Properties tab when defining a new Hadoop data server.

Note: These properties can be inherited by other Hadoop technologies, such as Hive or HDFS. To inherit these properties, you must select the configured Hadoop data server when creating data servers for other Hadoop technologies.


Table 3-7 Hadoop Data Server Properties Mandatory for Hadoop and Hive

Property Description/Value

HADOOP_HOME

Location of Hadoop dir. For example, /usr/lib/hadoop

HADOOP_CONF

Location of Hadoop configuration files such as core-default.xml, core-site.xml, and hdfs-site.xml. For example, /home/shared/hadoop-conf

HIVE_HOME

Location of Hive dir. For example, /usr/lib/hive

HIVE_CONF

Location of Hive configuration files such as hive-site.xml. For example, /home/shared/hive-conf

HADOOP_CLASSPATH

$HIVE_HOME/lib/hive-metastore-*.jar:$HIVE_HOME/lib/libthrift-*.jar:$HIVE_HOME/lib/libfb*.jar:$HIVE_HOME/lib/hive-exec-*.jar:$HIVE_CONF

HADOOP_CLIENT_OPTS

-Dlog4j.debug -Dhadoop.root.logger=INFO,console -Dlog4j.configuration=file:/etc/hadoop/conf.cloudera.yarn/log4j.properties

ODI_ADDITIONAL_CLASSPATH

$HIVE_HOME/lib/'*':$HADOOP_HOME/client/*:$HADOOP_CONF

HIVE_SESSION_JARS

$HIVE_HOME/lib/hive-contrib-*.jar:<ODI library directory>/wlhive.jar

  • The actual path of wlhive.jar can be determined under the ODI installation home.

  • Include other JAR files as required, such as custom SerDes JAR files. These JAR files are added to every Hive JDBC session and thus are added to every Hive MapReduce job.

  • The list of JARs is separated by ":". Wildcards in file names must not evaluate to more than one file (see the check after this table).

  • Follow the steps for Hadoop Security models, such as Apache Sentry, to allow the Hive ADD JAR call used inside ODI Hive KMs:
    • Define the environment variable HIVE_SESSION_JARS as empty.

    • Add all required jars for Hive in the global Hive configuration hive-site.xml.
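
To check that a wildcard used in HIVE_SESSION_JARS resolves to exactly one file, you can expand it from a shell on the ODI agent computer. This example assumes that HIVE_HOME is also exported in your shell, for example /usr/lib/hive:

    $ ls $HIVE_HOME/lib/hive-contrib-*.jar

If the command lists more than one file, narrow the wildcard or specify the JAR file explicitly.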



Table 3-8 Hadoop Data Server Properties Mandatory for HBase (In addition to base Hadoop and Hive Properties)

Property Description/Value

HBASE_HOME

Location of HBase dir. For example, /usr/lib/hbase

HADOOP_CLASSPATH

$HBASE_HOME/lib/hbase-*.jar:$HIVE_HOME/lib/hive-hbase-handler*.jar:$HBASE_HOME/hbase.jar

ODI_ADDITIONAL_CLASSPATH

$HBASE_HOME/hbase.jar

HIVE_SESSION_JARS

$HBASE_HOME/hbase.jar:$HBASE_HOME/lib/hbase-sep-api-*.jar:$HBASE_HOME/lib/hbase-sep-impl-*hbase*.jar:$HBASE_HOME/lib/hbase-sep-impl-common-*.jar:$HBASE_HOME/lib/hbase-sep-tools-*.jar:$HIVE_HOME/lib/hive-hbase-handler-*.jar

Note:

Follow the steps for Hadoop Security models, such as Apache Sentry, to allow the Hive ADD JAR call used inside ODI Hive KMs:
  • Define the environment variable HIVE_SESSION_JARS as empty.

  • Add all required jars for Hive in the global Hive configuration hive-site.xml.



Table 3-9 Hadoop Data Server Properties Mandatory for Oracle Loader for Hadoop (In addition to base Hadoop and Hive properties)

Property Description/Value

OLH_HOME

Location of OLH installation. For example, /u01/connectors/olh

OLH_FILES

/usr/lib/hive/lib/hive-contrib-1.1.0-cdh5.5.1.jar

ODCH_HOME

Location of OSCH installation. For example, /u01/connectors/osch

HADOOP_CLASSPATH

$OLH_HOME/jlib/*:$OSCH_HOME/jlib/*

In order to work with OLH, the Hadoop JARs in the HADOOP_CLASSPATH have to be resolved manually, without wildcards (see the sketch after this table).

OLH_JARS

Comma-separated list of all JAR files required for custom input formats, Hive, Hive SerDes, and so forth, used by Oracle Loader for Hadoop. All filenames have to be expanded without wildcards.

For example:

$HIVE_HOME/lib/hive-metastore-0.10.0-cdh4.5.0.jar,$HIVE_HOME/lib/libthrift-0.9.0-cdh4-1.jar,$HIVE_HOME/lib/libfb303-0.9.0.jar

OLH_SHAREDLIBS

$OLH_HOME/lib/libolh12.so,$OLH_HOME/lib/libclntsh.so.12.1,$OLH_HOME/lib/libnnz12.so,$OLH_HOME/lib/libociei.so,$OLH_HOME/lib/libclntshcore.so.12.1,$OLH_HOME/lib/libons.so

ODI_ADDITIONAL_CLASSPATH

$OSCH_HOME/jlib/'*'
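
As noted for HADOOP_CLASSPATH above, OLH requires the Hadoop JAR files to be listed without wildcards. One way to expand a wildcard into a colon-separated list is shown below; the example assumes that HADOOP_HOME is exported in your shell, for example /usr/lib/hadoop:

    $ ls $HADOOP_HOME/client/*.jar | paste -s -d ':' -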


Table 3-10 Hadoop Data Server Properties Mandatory for SQOOP (In addition to base Hadoop and Hive properties)

Property Description/Value

SQOOP_HOME

Location of Sqoop dir. For example, /usr/lib/sqoop

SQOOP_LIBJARS

Location of the SQOOP library JAR files. For example, /usr/lib/hive/lib/hive-contrib-1.1.0-cdh5.5.1.jar


3.3 Creating a Hadoop Physical Schema

Create a Hadoop physical schema using the standard procedure, as described in Creating a Physical Schema in Administering Oracle Data Integrator.

Create a logical schema for this physical schema using the standard procedure, as described in Creating a Logical Schema in Administering Oracle Data Integrator, and associate it with a context.

3.4 Configuring the Oracle Data Integrator Agent to Execute Hadoop Jobs

You must configure the Oracle Data Integrator agent to execute Hadoop jobs.

To configure the Oracle Data Integrator agent:

  1. Install Hadoop on your Oracle Data Integrator agent computer.

    For Oracle Big Data Appliance, see Oracle Big Data Appliance Software User's Guide for instructions for setting up a remote Hadoop client.

  2. Install Hive on your Oracle Data Integrator agent computer.
  3. Install SQOOP on your Oracle Data Integrator agent computer.
  4. Set the base properties for Hadoop and Hive on your ODI agent computer.

    These properties must be added as Hadoop data server properties. For more information, see Hadoop Data Server Properties.

  5. If you plan to use HBase features, set the HBase properties on your ODI agent computer. Note that you need to set these properties in addition to the base Hadoop and Hive properties.

    These properties must be added as Hadoop data server properties. For more information, see Hadoop Data Server Properties.
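
To verify that the Hadoop, Hive, and SQOOP clients are available to the agent, you can run a quick check from a shell on the ODI agent computer. This assumes the client commands are on the PATH:

    $ hadoop version
    $ hive --version
    $ sqoop version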

3.5 Configuring Oracle Loader for Hadoop

If you want to use Oracle Loader for Hadoop, you must install and configure Oracle Loader for Hadoop on your Oracle Data Integrator agent computer.

To install and configure Oracle Loader for Hadoop:

  1. Install Oracle Loader for Hadoop on your Oracle Data Integrator agent computer.

    See Installing Oracle Loader for Hadoop in Oracle Big Data Connectors User's Guide.

  2. To use Oracle SQL Connector for HDFS (OLH_OUTPUT_MODE=DP_OSCH or OSCH), you must first install it.

    See Oracle SQL Connector for Hadoop Distributed File System Setup in Oracle Big Data Connectors User's Guide.

  3. Set the properties for Oracle Loader for Hadoop on your ODI agent computer. Note that you must set these properties in addition to the base Hadoop and Hive properties.

    These properties must be added as Hadoop data server properties. For more information, see Hadoop Data Server Properties.

3.6 Configuring Oracle Data Integrator to Connect to a Secure Cluster

To run the Oracle Data Integrator agent on a Hadoop cluster that is protected by Kerberos authentication, you must configure a Kerberos-secured cluster.

To use a Kerberos-secured cluster:

  1. Log in to node04 of the Oracle Big Data Appliance, where the Oracle Data Integrator agent runs.
  2. Set the environment variables by using the following commands. Substitute the appropriate values for your appliance:

    $ export KRB5CCNAME=Kerberos-ticket-cache-directory

    $ export KRB5_CONFIG=Kerberos-configuration-file

    $ export HADOOP_OPTS="$HADOOP_OPTS -Djavax.xml.parsers.DocumentBuilderFactory=com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderFactoryImpl -Djava.security.krb5.conf=Kerberos-configuration-file"

    In this example, the configuration files are named krb5* and are located in /tmp/oracle_krb/:

    $ export KRB5CCNAME=/tmp/oracle_krb/krb5cc_1000

    $ export KRB5_CONFIG=/tmp/oracle_krb/krb5.conf

    $ export HADOOP_OPTS="$HADOOP_OPTS -Djavax.xml.parsers.DocumentBuilderFactory=com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderFactoryImpl -Djava.security.krb5.conf=/tmp/oracle_krb/krb5.conf"

  3. Generate a new Kerberos ticket for the oracle user. Use the following command, replacing realm with the actual Kerberos realm name.

    $ kinit oracle@realm

  4. ODI Studio: To set the JVM options for ODI Studio, add AddVMOption entries to odi.conf, which is located in the same folder as odi.sh.
    Kerberos configuration file location:
    AddVMOption -Djava.security.krb5.conf=/etc/krb5.conf
    AddVMOption -Dsun.security.krb5.debug=true
    AddVMOption -Dsun.security.krb5.principal=odidemo
    
  5. Redefine the JDBC connection URL, using syntax like the following:

    Table 3-11 Kerberos Configuration for Dataserver

    Technology Configuration Example
    Hadoop: No specific configuration is required; the general settings are sufficient.
    Hive: Kerberos configuration is done through the JAAS login configuration file $MW_HOME/oracle_common/modules/datadirect/JDBCDriverLogin.conf. Example of configuration file:
    JDBC_DRIVER_01 {
    com.sun.security.auth.module.Krb5LoginModule required
    debug=true
    useTicketCache=true
    ticketCache="/tmp/krb5cc_500"
    doNotPrompt=true;
    };
    

    Example of Hive URL

    jdbc:weblogic:hive://slc05jvn.us.oracle.com:10000;DatabaseName=default;AuthenticationMethod=kerberos;ServicePrincipalName=hive/slc05jvn.us.oracle.com@US.ORACLE.COM

    HBase
    export HBASE_HOME=/scratch/shixu/etc/hbase/conf
    export HBASE_CONF_DIR=$HBASE_HOME/conf
    export HBASE_OPTS="-Djava.security.auth.login.config=$HBASE_CONF_DIR/hbase-client.jaas"
    export HBASE_MASTER_OPTS="-Djava.security.auth.login.config=$HBASE_CONF_DIR/hbase-server.jaas"
    

    ODI Studio Configuration:

    AddVMOption -Djava.security.auth.login.config=$HBASE_CONF_DIR/hbase-client.jaas

    Example of Hbase configuration file:
    hbase-client.jaas
    Client {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=false
    useTicketCache=true;
    };
    
    Spark
    Spark Kerberos configuration is done through spark-submit parameters:
    --principal    // principal name
    --keytab       // location of the keytab file
    

    Example of spark-submit command:

    spark-submit --master yarn --py-files  /tmp/pyspark_ext.py --executor-memory 1G --driver-memory 512M --executor-cores 1 --driver-cores 1 --num-executors 2 --principal shixu@US.ORACLE.com --keytab /tmp/shixu.tab --queue default /tmp/New_Mapping_Physical.py
    ./bin/spark-submit --class org.apache.spark.examples.SparkPi --master yarn-cluster --num-executors 1 --driver-memory 512m --executor-memory 512m --executor-cores 1 lib/spark-examples*.jar 10 
    
    Kafka

    Kafka Kerberos configuration is done through the kafka-client.jaas file. The configuration file is placed in the Kafka configuration folder.

    Example of Kafka configuration file:

    KafkaClient {
     com.sun.security.auth.module.Krb5LoginModule required
     useKeyTab=false
     useTicketCache=true
     ticketCache="/tmp/krb5cc_1500"
     serviceName="kafka";
    };
    

    The location of the Kafka configuration file is set as an ODI Studio VM option:

    AddVMOption -Djava.security.auth.login.config=/scratch/shixu/etc/kafka-jaas.conf

    Pig/Oozie: Pig and Oozie extend the Kerberos configuration of the linked Hadoop data server and do not require specific configuration.

    See also, "HiveServer2 Security Configuration" in the CDH5 Security Guide at the following URL:

    http://www.cloudera.com/content/cloudera-content/cloudera-docs/CDH5/latest/CDH5-Security-Guide/cdh5sg_hiveserver2_security.html

  6. Renew the Kerberos ticket for the oracle user on a regular basis to prevent disruptions in service (see the example cron entry after these steps).
  7. Download the unlimited strength JCE security jars.

    See Oracle Big Data Appliance Software User's Guide for instructions about managing Kerberos on Oracle Big Data Appliance.
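
As mentioned in step 6, the Kerberos ticket must be renewed on a regular basis. One possible approach is a cron entry that renews the ticket from a keytab every eight hours; the keytab path and realm shown here are placeholders for your environment:

    0 */8 * * * /usr/bin/kinit -k -t /home/oracle/oracle.keytab oracle@EXAMPLE.COM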

3.7 Configuring Oracle Data Integrator Studio for Executing Hadoop Jobs on the Local Agent

To execute Hadoop jobs on the local agent of an Oracle Data Integrator Studio installation, follow the configuration steps in Configuring the Oracle Data Integrator Agent to Execute Hadoop Jobs, with the following change: copy the JAR files into the Oracle Data Integrator userlib directory.

For example:

Linux: $USER_HOME/.odi/oracledi/userlib directory.

Windows: C:\Users\<USERNAME>\AppData\Roaming\odi\oracledi\userlib directory
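
For example, on Linux you might copy the Hive client JAR files into the userlib directory as shown below. The source path is illustrative; copy only the JAR files that your mappings require:

    $ cp /usr/lib/hive/lib/*.jar $USER_HOME/.odi/oracledi/userlib/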