Configurations

The following configurations are required for an HDFS-based WebLog Source:
  1. From the Register Cluster tab in the DMT Configurations window, register a cluster using the target Information Domain name as the Cluster Name.
    For details, see the Cluster Registration section in the OFS Analytical Applications Infrastructure User Guide.
  2. Copy the required third-party JARs from the CDH installation libraries into the $FIC_HOME/ext/lib location (a scripted copy is sketched in the example after the notes below):
    • avro-1.7.4.jar
    • commons-cli-1.2.jar
    • commons-httpclient-3.1.jar
    • hadoop-hdfs-2.6.0-cdh5.4.4.jar
    • jackson-core-asl-1.8.8.jar
    • jackson-mapper-asl-1.8.8.jar
    • protobuf-java-2.4.0a.jar
    • servlet-api.jar
    • htrace-core-3.0.4.jar

    Note:

    The versions of the JARs depend on the CDH version and the drivers used.
    1. For CDH 5.8.4 and later versions, htrace-core4-4.0.1-incubating.jar should also be copied.
    2. For CDH 6.3, hadoop-mapreduce-client-core-3.0.0-cdh6.3.0.jar should also be copied.
    The following JARs are also required, but they are already present in the $FIC_HOME/ext/lib folder as part of CDH Enablement.
    • commons-configuration-1.6.jar
    • commons-collections-3.2.2.jar
    • commons-io-2.4.jar
    • commons-logging-1.0.4.jar
    • hadoop-auth-2.0.0-cdh4.7.0.jar
    • hadoop-common-2.0.0-cdh4.7.0.jar
    • hadoop-core-2.0.0-mr1-cdh4.7.0.jar
    • libfb303-0.9.0.jar
    • libthrift-0.9.0-cdh4-1.jar
    • slf4j-api-1.6.4.jar

    Note:

    The versions of the JARs to be copied differ depending on the configured CDH version.
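
    Example:

    The copy in this step can also be scripted. The following Python sketch is illustrative only: it assumes the CDH parcel JARs are available under /opt/cloudera/parcels/CDH/jars (an assumed location that varies by installation) and that FIC_HOME is set in the environment; adjust the JAR names to match your CDH version.

      # Illustrative sketch: copy the third-party JARs listed above into $FIC_HOME/ext/lib.
      # CDH_JARS and the JAR names below are assumptions; both vary with the installed CDH version.
      import os
      import shutil

      CDH_JARS = "/opt/cloudera/parcels/CDH/jars"              # assumed CDH parcel JAR location
      DEST = os.path.join(os.environ["FIC_HOME"], "ext", "lib")

      REQUIRED_JARS = [
          "avro-1.7.4.jar",
          "commons-cli-1.2.jar",
          "commons-httpclient-3.1.jar",
          "hadoop-hdfs-2.6.0-cdh5.4.4.jar",
          "jackson-core-asl-1.8.8.jar",
          "jackson-mapper-asl-1.8.8.jar",
          "protobuf-java-2.4.0a.jar",
          "servlet-api.jar",
          "htrace-core-3.0.4.jar",
      ]

      for jar in REQUIRED_JARS:
          src = os.path.join(CDH_JARS, jar)
          if os.path.isfile(src):
              shutil.copy2(src, DEST)                          # copy, preserving timestamps
          else:
              print("Not found in " + CDH_JARS + ": " + jar + " (check the CDH version)")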
  3. Copy core-site.xml, hdfs-site.xml, mapred-site.xml, hive-site.xml, and yarn-site.xml from the Hadoop Cluster to the location specified in the Configuration File Path field in the Cluster Configurations window and to the <deployed location>/conf folder. Note that only the Client Configuration properties are required. A sketch for verifying the copied files follows the note below.

    Note:

    If the proxy user option is enabled and the Job is submitted as that user, the user must be created on every node of the Hadoop Cluster.
    For more information, see the Cloudera Documentation.
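
    Example:

    As a quick sanity check (not part of the product), the following Python sketch parses the copied client configuration files and confirms that a few commonly required properties are present. The conf directory path and the property names checked are assumptions; adjust them to your environment.

      # Illustrative sketch: confirm the copied Hadoop client configuration files are
      # readable and contain a few expected properties. CONF_DIR and the property
      # names below are assumptions and may differ in your environment.
      import os
      import xml.etree.ElementTree as ET

      CONF_DIR = "/scratch/ofsaa/deployed/conf"          # assumed <deployed location>/conf path

      EXPECTED = {
          "core-site.xml": ["fs.defaultFS"],
          "hdfs-site.xml": ["dfs.replication"],
          "yarn-site.xml": ["yarn.resourcemanager.address"],
      }

      def properties(path):
          # Return {name: value} for every <property> entry in a Hadoop XML file.
          root = ET.parse(path).getroot()
          return {p.findtext("name"): p.findtext("value") for p in root.findall("property")}

      for xml_file, keys in EXPECTED.items():
          full_path = os.path.join(CONF_DIR, xml_file)
          if not os.path.isfile(full_path):
              print("Missing: " + full_path)
              continue
          props = properties(full_path)
          for key in keys:
              print(xml_file + ": " + key + " -> " + ("OK" if key in props else "NOT SET"))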
  4. Generate the application EAR/WAR file and redeploy the application onto your configured web application server. For more information on generating and deploying the EAR/WAR file, see the Post Installation Configuration section in the OFS AAAI Installation and Configuration Guide.
  5. Restart all the OFSAAI services. For more information, see the Start/Stop Infrastructure Services section in the OFS AAAI Installation and Configuration Guide.