Oracle® Big Data Appliance Software User's Guide
Release 1 (1.1)

Part Number E36162-04

2 Administering Oracle Big Data Appliance

This chapter provides information about the software and services installed on Oracle Big Data Appliance. It contains these sections:

  • Managing CDH Operations

  • Using Hadoop Monitoring Utilities

  • Providing Remote Client Access to CDH

  • Managing User Accounts

  • Recovering Deleted Files

  • Software Layout

  • Software Services

  • Effects of Hardware on Software Availability

  • Security on Oracle Big Data Appliance

2.1 Managing CDH Operations

Cloudera Manager is installed on Oracle Big Data Appliance to help you with Cloudera's Distribution including Apache Hadoop (CDH) operations. Cloudera Manager provides a single administrative interface to all Oracle Big Data Appliance servers configured as part of the Hadoop cluster.

Cloudera Manager simplifies the performance of administrative tasks such as monitoring the health of the cluster and its services, tracking MapReduce activity and hardware use, managing users and security, and collecting diagnostic information.

Cloudera Manager runs on the Cloudera Manager node (node02) and is available on port 7180.

To use Cloudera Manager: 

  1. Open a browser and enter a URL like the following:

    http://bda1node02.example.com:7180
    

    In this example, bda1 is the name of the appliance, node02 is the name of the server, example.com is the domain, and 7180 is the default port number for Cloudera Manager.

  2. Log in with a user name and password for Cloudera Manager. Only a user with administrative privileges can change the settings. Other Cloudera Manager users can view the status of Oracle Big Data Appliance.
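
If the browser cannot reach this page, a quick reachability check from the client can help separate network problems from login problems. The following is a minimal sketch using curl, assuming the example host name and default port from step 1:

    # Request only the response headers; any HTTP status line in the output
    # confirms that Cloudera Manager is listening on port 7180.
    curl -I http://bda1node02.example.com:7180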

See Also:

Cloudera Manager User Guide at

http://ccp.cloudera.com/display/ENT/Cloudera+Manager+User+Guide

or open the Help menu in Cloudera Manager

2.1.1 Monitoring the Status of Oracle Big Data Appliance

In Cloudera Manager, you can choose any of the following pages from the menu bar across the top of the display:

  • Services: Monitors the status and health of services running on Oracle Big Data Appliance. Click the name of a service to drill down to additional information.

  • Hosts: Monitors the health, disk usage, load, physical memory, swap space, and other statistics for all servers.

  • Activities: Monitors all MapReduce jobs running in the selected time period.

  • Logs: Collects historical information about the systems and services. You can search for a particular phrase for a selected server, service, and time period. You can also select the minimum severity level of the logged messages included in the search: TRACE, DEBUG, INFO, WARN, ERROR, or FATAL.

  • Events: Records a change in state and other noteworthy occurrences. You can search for one or more keywords for a selected server, service, and time period. You can also select the event type: Audit Event, Activity Event, Health Check, or Log Message.

  • Reports: Generates reports on demand for disk and MapReduce use.

Figure 2-1 shows the opening display of Cloudera Manager, which is the Services page.

Figure 2-1 Cloudera Manager Services Page


2.1.2 Performing Administrative Tasks

As a Cloudera Manager administrator, you can change various properties for monitoring the health and use of Oracle Big Data Appliance, add users, and set up Kerberos security.

To access Cloudera Manager Administration: 

  1. Log in to Cloudera Manager with administrative privileges.

  2. Click Welcome admin at the top right of the page.

2.1.3 Collecting Diagnostic Information

If you need help from Oracle Support to troubleshoot CDH issues, then you should first collect diagnostic information using Cloudera Manager.

To collect diagnostic information about CDH: 

  1. Log in to Cloudera Manager with administrative privileges.

  2. From the Help menu, click Send Diagnostic Data.

  3. Verify that Send Diagnostic Data to Cloudera Automatically is not selected. Keep the other default settings.

  4. Click Collect Host Statistics Globally.

  5. Wait while all statistics are collected on all nodes.

  6. Click Download Result Data and save the ZIP file with the default name. It identifies your CDH license.

  7. Go to My Oracle Support at http://support.oracle.com.

  8. Open a Service Request (SR) if you have not already done so.

  9. Upload the ZIP file into the SR. If the file is too large, then upload it to ftp.oracle.com, as described in the next procedure.

To upload the diagnostics to ftp.oracle.com: 

  1. Open an FTP client and connect to ftp.oracle.com.

    You can use an FTP client such as WinSCP to upload the ZIP file. See Example 2-1 if you are using a command-line FTP client.

  2. Log in as user anonymous and leave the password field blank.

  3. In the bda/incoming directory, create a directory using the SR number for the name, in the format SRnumber. The resulting directory structure looks like this:

    bda
       incoming
          SRnumber
    
  4. Set the binary option to prevent corruption of binary data.

  5. Upload the diagnostics ZIP file to the SRnumber directory.

  6. Update the SR with the full path and file name.

Example 2-1 shows the commands to upload the diagnostics using the Windows FTP command interface.

Example 2-1 Uploading Diagnostics Using Windows FTP

ftp> open ftp.oracle.com
Connected to bigip-ftp.oracle.com.
220-***********************************************************************
220-Oracle FTP Server
         .
         .
         .
220-****************************************************************************
 
220
User (bigip-ftp.oracle.com:(none)): anonymous
331 Please specify the password.
Password:
230 Login successful.
ftp> cd bda/incoming
250 Directory successfully changed.
ftp> mkdir SR12345
257 "/bda/incoming/SR12345" created
ftp> cd SR12345
250 Directory successfully changed.
ftp> bin
200 Switching to Binary mode.
ftp> put D:\Downloads\3609df...c1.default.20122505-15-27.host-statistics.zip
200 PORT command successful. Consider using PASV.
150 Ok to send data.
226 File receive OK.
ftp: 706755 bytes sent in 1.97Seconds 358.58Kbytes/sec.
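
If a command-line FTP client is not available, curl can perform the same upload. The following is a sketch only, using the example SR number from Example 2-1; replace host-statistics.zip with the actual file name that Cloudera Manager generated:

    # Log in as anonymous (empty password), create the SR directory if needed,
    # and upload the file; the trailing slash preserves the local file name.
    curl --user anonymous: --ftp-create-dirs \
         -T host-statistics.zip ftp://ftp.oracle.com/bda/incoming/SR12345/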

2.2 Using Hadoop Monitoring Utilities

Users can monitor MapReduce jobs without providing a Cloudera Manager user name and password.

2.2.1 Monitoring the JobTracker

Hadoop Map/Reduce Administration monitors the JobTracker, which runs on port 50030 of the JobTracker node (node03) on Oracle Big Data Appliance.

To monitor the JobTracker: 

  • Open a browser and enter a URL like the following:

    http://bda1node03.example.com:50030
    

    In this example, bda1 is the name of the appliance, node03 is the name of the server, and 50030 is the default port number for Hadoop Map/Reduce Administration.
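
The same job information is also available from the command line on a system configured as a CDH client (see "Providing Remote Client Access to CDH"). This is a sketch; the exact output format depends on the Hadoop release:

    # List the MapReduce jobs currently known to the JobTracker.
    hadoop job -list

    # Include completed and failed jobs as well.
    hadoop job -list all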

Figure 2-2 shows part of a Hadoop Map/Reduce Administration display.

Figure 2-2 Hadoop Map/Reduce Administration


2.2.2 Monitoring the TaskTracker

The Task Tracker Status interface monitors the TaskTracker on a single node. It is available on port 50060 of all noncritical nodes (node04 to node18) in Oracle Big Data Appliance.

To monitor a TaskTracker: 

  • Open a browser and enter the URL for a particular node like the following:

    http://bda1node13.example.com:50060
    

    In this example, bda1 is the name of the rack, node13 is the name of the server, and 50060 is the default port number for the Task Tracker Status interface.

Figure 2-3 shows the Task Tracker Status interface.

Figure 2-3 Task Tracker Status Interface


2.3 Providing Remote Client Access to CDH

Oracle Big Data Appliance supports full local access to all commands and utilities in Cloudera's Distribution including Apache Hadoop (CDH).

You can use a browser on any computer that has access to the client network of Oracle Big Data Appliance to access Cloudera Manager, Hadoop Map/Reduce Administration, the Hadoop Task Tracker interface, and other browser-based Hadoop tools.

To issue Hadoop commands remotely, however, you must connect from a system configured as a CDH client with access to the Oracle Big Data Appliance client network. This section explains how to set up a computer so that you can access HDFS and submit MapReduce jobs on Oracle Big Data Appliance.

To follow these procedures, you must be able to install software on the client system (typically as root or with sudo) and log in to Cloudera Manager on Oracle Big Data Appliance.

If you do not have these access privileges, then contact your system administrator for help.

2.3.1 Installing CDH on the Client System

The system that you use to access Oracle Big Data Appliance must run an operating system that Cloudera supports for CDH3. For the list of supported operating systems, see "Before You Install CDH3 on a Cluster" in the Cloudera CDH3 Installation Guide at

https://ccp.cloudera.com/display/CDHDOC/Before+You+Install+CDH3+on+a+Cluster

To install the CDH client software: 

  1. Follow the installation instructions for your operating system provided in the Cloudera CDH3 Installation Guide at

    https://ccp.cloudera.com/display/CDHDOC/CDH3+Installation

    When you are done installing the Hadoop core and native packages, the system can act as a basic CDH client.

    Note:

    Be sure to install CDH3 Update 4 (CDH3u4) or a later version.
  2. To provide support for other components, such as Hive, Pig, or Oozie, see the component installation instructions.
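
As a point of reference for step 1: on a Red Hat-compatible client that already has the Cloudera CDH3 repository configured, installing the Hadoop core and native packages typically looks like the following sketch. The package names are assumptions based on CDH3 conventions; follow the Cloudera guide for your operating system.

    # Install the Hadoop core and native libraries from the CDH3 repository.
    sudo yum install hadoop-0.20 hadoop-0.20-native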

2.3.2 Configuring CDH

After installing CDH, you must configure it for use with Oracle Big Data Appliance.

To configure the Hadoop client: 

  1. Open a browser on your client system and connect to Cloudera Manager. It runs on the Cloudera Manager node (node02) and listens on port 7180, as shown in this example:

    http://bda1node02.example.com:7180
    
  2. Log in as admin.

  3. Cloudera Manager opens on the Services tab. Click the Generate Client Configuration button.

  4. On the Command Details page (shown in Figure 2-4), click Download Result Data to download global-clientconfig.zip.

  5. Unzip global-clientconfig.zip into the /tmp directory on the client system. It creates a hadoop-conf directory containing these files:

    core-site.xml
    hadoop-env.sh
    hdfs-site.xml
    log4j.properties
    mapred-site.xml
    README.txt
    ssl-client.xml.example
    
  6. Open hadoop-env.sh in a text editor and change JAVA_HOME to the correct location on your system:

    export JAVA_HOME=full_directory_path
    
  7. Delete the number sign (#) to uncomment the line, and then save the file.

  8. Copy the configuration files to the Hadoop conf directory:

    cd /tmp/hadoop-conf
    cp * /usr/lib/hadoop/conf/
    
  9. Validate the installation by changing to the mapred user and submitting a MapReduce job, such as the one shown here:

    su mapred
    hadoop jar /usr/lib/hadoop/hadoop-examples.jar pi 10 1000000
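
Steps 5 through 8 can also be performed entirely from a shell. The following is a minimal sketch, assuming the ZIP file was downloaded to the current directory and that /usr/lib/jdk is the Java installation on the client (adjust the path for your system):

    # Step 5: extract the client configuration into /tmp/hadoop-conf.
    unzip global-clientconfig.zip -d /tmp

    # Steps 6 and 7: set JAVA_HOME; appending an uncommented export line has
    # the same effect as editing the file, because later definitions win.
    echo 'export JAVA_HOME=/usr/lib/jdk' >> /tmp/hadoop-conf/hadoop-env.sh

    # Step 8: copy the configuration files into the Hadoop conf directory.
    cp /tmp/hadoop-conf/* /usr/lib/hadoop/conf/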
    

Figure 2-4 shows the download page for the client configuration.

Figure 2-4 Cloudera Manager Command Details: GenerateClient Page


2.4 Managing User Accounts

Every open-source package installed on Oracle Big Data Appliance creates one or more users and groups. Most of these users do not have login privileges, shells, or home directories. They are used by daemons and are not intended as an interface for individual users. For example, Hadoop operates as the hdfs user, MapReduce operates as mapred, and Hive operates as hive. Table 2-1 identifies the operating system users and groups that are created automatically during installation of Oracle Big Data Appliance software for use by CDH components and other software packages.

You can use the oracle identity to run Hadoop and Hive jobs immediately after the Oracle Big Data Appliance software is installed. This user account has login privileges, a shell, and a home directory. Oracle NoSQL Database and Oracle Data Integrator run as the oracle user. Its primary group is oinstall.

Note:

Do not delete or modify the users created during installation, because they are required for the software to operate.

When creating additional user accounts, assign them to the operating system groups appropriate for the Hadoop services they use, such as the hadoop group for users who run MapReduce jobs.
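
For example, a minimal sketch of creating such an account on a Linux system follows; the user name jdoe and the choice of the hadoop group are illustrative assumptions, and your site policies may differ:

    # Create a login account with a home directory and add it to the hadoop
    # group so that it can submit MapReduce jobs.
    useradd -m -G hadoop jdoe
    passwd jdoe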

Table 2-1 Operating System Users and Groups

User Name   Group           Used By                                                       Login Rights
flume       flume           Flume parent and nodes                                        No
hbase       hbase           HBase processes                                               No
hdfs        hadoop          NameNode, DataNode                                            No
hive        hive            Hive metastore and server processes                           No
hue         hue             Hue processes                                                 No
mapred      hadoop          JobTracker, TaskTracker, Hive Thrift daemon                   Yes
mysql       mysql           MySQL server                                                  Yes
oozie       oozie           Oozie server                                                  No
oracle      dba, oinstall   Oracle NoSQL Database, Oracle Loader for Hadoop, Oracle       Yes
                            Data Integrator, and the Oracle DBA
puppet      puppet          Puppet parent (puppet nodes run as root)                      No
sqoop       sqoop           Sqoop metastore                                               No
svctag      --              Auto Service Request                                          No
zookeeper   zookeeper       ZooKeeper processes                                           No


2.5 Recovering Deleted Files

CDH provides an optional trash facility, so that when a user deletes a file, it is moved to a trash directory for a set period of time instead of being deleted immediately from the system.

2.5.1 Restoring Files from the Trash

When the trash facility is enabled, you can easily restore files that were previously deleted with the Hadoop rm file-system command. Files deleted by other programs are not copied to the trash directory.

To restore a file from the trash directory: 

  1. Check that the deleted file is in the trash. The following example checks for files deleted by the oracle user:

    $ hadoop fs -ls .Trash/Current/user/oracle
    Found 1 items
    -rw-r--r--  3 oracle hadoop  242510990 2012-08-31 11:20 /user/oracle/.Trash/Current/user/oracle/ontime_s.dat
    
  2. Move or copy the file to its previous location. The following example moves ontime_s.dat from the trash to the HDFS /user/oracle directory.

    $ hadoop fs -mv .Trash/Current/user/oracle/ontime_s.dat /user/oracle/ontime_s.dat
    

2.5.2 Setting Up the Trash Facility

In this release of Oracle Big Data Appliance, the trash facility is disabled by default. Complete the following procedure to enable it.

To enable the trash facility: 

  1. On each node where you want to enable the trash facility, add the following property description to /etc/hadoop/conf/hdfs-site.xml:

    <property>
         <name>fs.trash.interval</name>
         <value>1</value>
    </property>
    

    Note:

    You can edit the hdfs-site.xml file once and then use the dcli utility to copy the file to the other nodes. See the Oracle Big Data Appliance Owner's Guide. An scp-based alternative is sketched after this procedure.
  2. Change the trash interval as desired (optional). See "Changing the Trash Interval".

  3. Restart the hdfs1 service:

    1. Open Cloudera Manager. See "Managing CDH Operations".

    2. Locate hdfs1 on the Cloudera Manager Services page.

    3. Expand the hdfs1 Actions menu and choose Restart.

  4. Verify that trash collection is working properly:

    1. Copy a file from the local file system to HDFS. This example copies a data file named ontime_s.dat to the HDFS /user/oracle directory:

      $ hadoop fs -put ontime_s.dat /user/oracle
      
    2. Delete the file from HDFS:

      $ hadoop fs -rm ontime_s.dat
      Moved to trash: hdfs://bda1node02.example.com/user/oracle/ontime_s.dat
      
    3. Locate the trash directory in your home Hadoop directory, such as /user/oracle/.Trash. The directory is created when you delete a file for the first time after trash is enabled.

      $ hadoop fs -ls .Trash
      Found 1 items
      drwxr-xr-x  - oracle hadoop  0 2012-08-31 11:20 /user/oracle/.Trash/Current
      
    4. Check that the deleted file is in the trash. The following command lists files deleted by the oracle user:

      hadoop fs -ls .Trash/Current/user/oracle
      Found 1 items
      -rw-r--r--  3 oracle hadoop  242510990 2012-08-31 11:20 /user/oracle/.Trash/Current/user/oracle/ontime_s.dat
      
  5. If trash collection is not working on a particular node, then verify that fs.trash.interval is set in the /etc/hadoop/conf/hdfs-site.xml file on that node, and then restart the hdfs1 service.
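
If you do not use the dcli utility, you can copy the edited file to the remaining nodes with scp. The following is a minimal sketch, assuming passwordless SSH as root and the example host names used elsewhere in this chapter:

    # Copy the edited hdfs-site.xml from node01 to node02 through node18.
    for n in $(seq -w 2 18); do
        scp /etc/hadoop/conf/hdfs-site.xml \
            root@bda1node${n}.example.com:/etc/hadoop/conf/
    done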

2.5.3 Changing the Trash Interval

The trash interval is the minimum number of minutes that a file remains in the trash directory before being deleted permanently from the system. The default value is 1440 minutes (24 hours).

To change the trash interval: 

  1. Open Cloudera Manager. See "Managing CDH Operations".

  2. On the Services page under Name, click hdfs1.

  3. On the hdfs1 page, click the Configuration subtab.

  4. Search for or scroll down to the Filesystem Trash Interval property under NameNode Settings. See Figure 2-5.

  5. Enter a new interval (in minutes) in the value field. A value of 0 disables the trash facility.

  6. Click Save Changes.

  7. Expand the Actions menu at the top of the page and choose Restart.

Figure 2-5 shows the Filesystem Trash Interval property in Cloudera Manager.

Figure 2-5 HDFS Property Settings in Cloudera Manager


2.6 Software Layout

The following sections identify the software installed on Oracle Big Data Appliance and where it runs in the rack. Some components operate with Oracle Database 11.2.0.2 and later releases.

2.6.1 Software Components

These software components are installed on all 18 servers in an Oracle Big Data Appliance rack. Oracle Linux, required drivers, firmware, and hardware verification utilities are factory installed. All other software is installed on site using the Mammoth Utility.

Note:

You do not need to install additional software on Oracle Big Data Appliance. Doing so may result in a loss of warranty and support. See the Oracle Big Data Appliance Owner's Guide.

Installed software: 

  • Oracle Linux 5.6

  • Java HotSpot Virtual Machine 6 Update 29

  • Cloudera's Distribution including Apache Hadoop Release 3 Update 4 (CDH)

  • Cloudera Manager 3.7

  • Oracle Loader for Hadoop 1.1

  • Oracle NoSQL Database Community Edition 11g Release 1.2.125

  • Oracle Data Integrator Agent 11.1.1.6.0

  • Oracle R Connector for Hadoop 1.1

  • Oracle R Distribution 2.13.2

  • Oracle Database Instant Client 11.2.0.3

  • MySQL Database 5.5.17 Advanced Edition

See Also:

Oracle Big Data Appliance Owner's Guide for information about the Mammoth Utility

Figure 2-6 shows the relationships among the major components.

Figure 2-6 Major Software Components of Oracle Big Data Appliance


2.6.2 Logical Disk Layout

Each server has 12 disks. The critical information is stored on disks 1 and 2.

Table 2-2 describes how the disks are partitioned.

Table 2-2 Logical Disk Layout

Disk      Description
1 to 2    150 gigabytes (GB) mirrored, physical and logical partition with the Linux operating
          system, all installed software, NameNode data, and MySQL Database data, for a total
          of four copies
          2.8 terabytes (TB) HDFS data partition
3         Single Oracle NoSQL Database partition, if activated during software installation;
          otherwise, a single HDFS data partition
4 to 12   Single HDFS data partition


2.7 Software Services

This section contains the following topics:

  • Monitoring the Services

  • About the Parent Services

  • About the Child Services

  • Where Do the Services Run?

2.7.1 Monitoring the Services

You can use Cloudera Manager to monitor the services on Oracle Big Data Appliance.

To monitor the services: 

  1. In Cloudera Manager, click the Services tab at the top of the page to display the Services page.

  2. Click the name of a service to see its detail pages. The service opens on the Status page.

  3. Click the link to the page that you want to view: Status, Instances, Commands, Configuration, or Audits.

2.7.2 About the Parent Services

Table 2-3 describes the parent services and those that run without child services. A parent service controls one or more child services.

Services that are always on are required for normal operation. Services that you can switch on and off are optional.

Table 2-3 Parent Services

Service      Role                 Description                                                 Default Status
hbase        --                   HBase database                                              OFF
hdfs1        NameNode             Tracks all files stored in the cluster                      Always ON
hdfs1        Secondary NameNode   Tracks information for the NameNode                         Always ON
hdfs1        Balancer             Periodically issues the balancer command; although the      Always ON
                                  balancer service is enabled, it does not run all the time
hive         --                   Hive data warehouse for Hadoop                              Always ON
hue1         Hue Server           GUI for HDFS, MapReduce, and Hive, with shells for Pig,     Always ON
                                  Flume, and HBase
mapreduce1   JobTracker           Used by MapReduce                                           Always ON
mgmt1        all                  Cloudera Manager                                            Always ON
MySQL        --                   MySQL Master Database                                       ON
ODI Agent    --                   Oracle Data Integrator agent, installed on same node as     ON
                                  MySQL Database
oozie        --                   Workflow and coordination service for Hadoop                OFF
ZooKeeper    --                   ZooKeeper coordination service                              OFF


2.7.3 About the Child Services

Table 2-4 describes the child services. A child service is controlled by a parent service.

Table 2-4 Child Services

Service                 Role          Description                                             Default Status
HBase Region Server     --            Hosts data and processes requests for HBase             OFF
hdfs1                   DataNode      Stores data in HDFS                                     Always ON
mapreduce1              TaskTracker   Accepts tasks from the JobTracker                       Always ON
NoSQL DB Storage Node   --            Supports Oracle NoSQL Database                          ON
nosqldb                 --            Supports a web console or command-line interface for    ON
                                      administering Oracle NoSQL Database


2.7.4 Where Do the Services Run?

All services are installed on all servers, but individual services run only on designated nodes in the Hadoop cluster.

2.7.4.1 Service Locations

Table 2-5 identifies the nodes where the services run on the primary rack. Services that run on all nodes run on all racks of a multirack installation.

Table 2-5 Software Service Locations

Service                                      Node Name               Initial Node Position
Balancer                                     HDFS node               Node01
Beeswax Server                               JobTracker node         Node03
Cloudera Manager Agents                      All nodes               All nodes
Cloudera Manager SCM Server                  Cloudera Manager node   Node02
DataNode                                     All nodes               All nodes
Hive Server                                  JobTracker node         Node03
Hue Server                                   JobTracker node         Node03
JobTracker                                   JobTracker node         Node03
MySQL Backup (1)                             JobTracker node         Node03
MySQL Primary Server (1)                     Cloudera Manager node   Node02
NameNode                                     HDFS node               Node01
Oracle Data Integrator Agent (2)             JobTracker node         Node03
Oracle NoSQL Database Administration (2)     Cloudera Manager node   Node02
Oracle NoSQL Database Server Processes (2)   All nodes               All nodes
Puppet Agents                                All nodes               All nodes
Puppet Master                                HDFS node               Node01
Secondary NameNode                           Cloudera Manager node   Node02
TaskTracker                                  All noncritical nodes   Node04 to Node18


Footnote 1: If the software was upgraded from version 1.0, then MySQL Backup remains on node02 and MySQL Primary Server remains on node03.

Footnote 2: Started only if requested in the Oracle Big Data Appliance Configuration Worksheets.

2.7.4.2 NameNode

The NameNode is the most critical process because it keeps track of the location of all data. Without a healthy NameNode, the entire cluster fails. This vulnerability is intrinsic to Apache Hadoop (v0.20.2 and earlier).

Oracle Big Data Appliance protects against catastrophic failure by maintaining four copies of the NameNode logs:

  • Node01: The working copy of the NameNode snapshot and update logs is stored in /opt/hadoop/dfs/ and is automatically mirrored in a local Linux partition.

  • Node04: A backup copy of the logs is stored in /opt/shareddir/ and is also automatically mirrored in a local Linux partition.

A backup copy outside of Oracle Big Data Appliance can be configured during the software installation.

Note:

The Secondary NameNode is not a backup of the primary NameNode and does not provide failover. The Secondary NameNode performs memory-intensive functions for the primary NameNode.

2.7.4.3 Unconfigured Software

The following tools are installed but not configured. You must configure them before you can use them.

  • Flume

  • HBase

  • Mahout

  • Oozie

  • Sqoop

  • Whirr

  • ZooKeeper

See Also:

CDH3 Installation and Configuration Guide for configuration procedures at

http://oracle.cloudera.com

2.8 Effects of Hardware on Software Availability

The effects of a server failure vary depending on the server's function within the CDH cluster. Sun Fire servers are more robust than commodity hardware, so you should experience fewer hardware failures. This section highlights the most important services that run on the various servers of the primary rack. For a full list, see "Service Locations".

2.8.1 Critical and Noncritical Nodes

Critical nodes are required for the cluster to operate normally and provide all services to users. In contrast, the cluster continues to operate with no loss of service when a noncritical node fails.

The critical services are installed initially on the first four nodes of the primary rack. Table 2-6 identifies the critical services that run on these nodes. The remaining nodes (initially node05 to node18) only run DataNode and TaskTracker services. If a hardware failure occurs on one of the critical nodes, then the services can be moved to another, noncritical server. For example, if node02 fails, its critical services might be moved to node05. For this reason, Table 2-6 provides names to identify the nodes providing critical services.

Table 2-6 Critical Nodes

Node Name               Initial Node Position   Critical Functions
HDFS Node               Node01                  NameNode, balancer, puppet master; the cluster is
                                                unusable without the NameNode service
Cloudera Manager Node   Node02                  Cloudera Manager, Secondary NameNode, MySQL Server,
                                                Oracle NoSQL Database KV Administration
JobTracker Node         Node03                  JobTracker, Hue, MySQL Server Backup
Backup Node             Node04                  NameNode and Secondary NameNode data backups, unless
                                                an external NFS directory is configured for the
                                                cluster, making node04 a noncritical node


2.8.2 HDFS Node

The HDFS node (node01) is critically important because it is where the NameNode runs. If this server fails, the effect is downtime for the entire cluster, because the NameNode keeps track of the data locations. However, there are always four copies of the NameNode metadata.

The current state and update logs are written to these locations:

  • HDFS node (node01): /opt/hadoop/dfs/ on Disk 1 is the working copy with a Linux mirrored partition on Disk 2 providing a second copy.

  • Backup node (node04): /opt/shareddir/ on Disk 1 is the third copy, which is also duplicated on a mirrored partition on Disk 2. These copies can be written to an external NFS directory instead of the backup node.

2.8.3 Cloudera Manager Node

The Secondary NameNode runs on the Cloudera Manager node (node02). Its data is backed up in the same way as the NameNode data. If the node fails, then these services are also disrupted:

  • Cloudera Manager: This tool provides central management for the entire CDH cluster. Without this tool, you can still monitor activities using the utilities described in "Using Hadoop Monitoring Utilities".

  • MySQL Master Database: Cloudera Manager, Oracle Data Integrator, Hive, and Oozie use MySQL Database. The data is replicated automatically, but you cannot access it when the master database server is down.

  • Oracle NoSQL Database KV Administration: Oracle NoSQL Database is an optional component of Oracle Big Data Appliance, so the extent of a disruption due to a node failure depends on whether you are using it and how critical it is to your applications.

2.8.4 JobTracker Node

The JobTracker assigns MapReduce tasks to specific nodes in the CDH cluster. Without the JobTracker node (node03), this critical function is not performed. If the node fails, then these services are also disrupted:

  • Oracle Data Integrator: This service supports Oracle Data Integrator Application Adapter for Hadoop. You cannot use this connector when the JobTracker node is down.

  • Hue: Cloudera Manager uses Hadoop User Experience (Hue), and so Cloudera Manager is unavailable when Hue is unavailable.

  • MySQL Backup Database: MySQL Database continues to run, although there is no backup of the master database.

2.8.5 Backup Node

The backup node (node04) backs up the NameNode and Secondary NameNode for most installations. The backups are important for ensuring the smooth functioning of the cluster, but there is no loss of user services if the backup node fails.

Some installations are configured to use an external NFS directory instead of a backup node. This is a configuration option decided during installation of the appliance. When the backup is stored outside the appliance, the node is noncritical.

2.8.6 Noncritical Nodes

The noncritical nodes (node05 to node18) are optional in that Oracle Big Data Appliance continues to operate with no loss of service if a failure occurs. The NameNode automatically replicates the lost data to maintain three copies at all times. MapReduce jobs execute on copies of the data stored elsewhere in the cluster. The only loss is in computational power, because there are fewer servers on which to distribute the work.

2.9 Security on Oracle Big Data Appliance

You can take precautions to prevent unauthorized use of the software and data on Oracle Big Data Appliance.

This section contains these topics:

2.9.1 CDH Security

Apache Hadoop is not an inherently secure system. It is protected only by network security. After a connection is established, a client has full access to the system.

Cloudera's Distribution including Apache Hadoop (CDH) supports the Kerberos network authentication protocol to prevent malicious impersonation. You must install and configure Kerberos and set up a Kerberos Key Distribution Center and realm. Then you configure various components of CDH to use Kerberos.

CDH provides these security features when configured to use Kerberos:

  • The CDH master nodes (NameNode and JobTracker) resolve the group name so that users cannot manipulate their group memberships.

  • Map tasks run under the identity of the user who submitted the job.

  • Authorization mechanisms in HDFS and MapReduce help control user access to data.

See Also:

http://oracle.cloudera.com for these manuals:
  • CDH3 Security Guide

  • Configuring Hadoop Security with Cloudera Manager

  • Configuring TLS Security for Cloudera Manager

2.9.2 Port Numbers Used on Oracle Big Data Appliance

Table 2-7 identifies the port numbers that might be used in addition to those used by CDH. For the full list of CDH port numbers, go to the Cloudera website at

http://ccp.cloudera.com/display/CDHDOC/Configuring+Ports+for+CDH3

To view the ports used on a particular server: 

  1. In Cloudera Manager, click the Hosts tab at the top of the page to display the Hosts page.

  2. In the Name column, click a server link to see its detail page.

  3. Scroll down to the Ports section.
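
You can also list the listening ports directly on a server from the command line. This is a sketch using netstat; run it as root to see the owning process names:

    # Show listening TCP sockets with numeric ports and owning processes.
    netstat -tlnp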

See Also:

The Cloudera website for CDH port numbers: http://ccp.cloudera.com/display/CDHDOC/Configuring+Ports+for+CDH3

Table 2-7 Oracle Big Data Appliance Port Numbers

Service                                Port
Automated Service Monitor (OASM)       30920
MySQL Database                         3306
Oracle Data Integrator Agent           20910
Oracle NoSQL Database administration   5001
Oracle NoSQL Database processes        5010 to 5020
Oracle NoSQL Database registration     5000
Port map                               111
Puppet master service                  8140
Puppet node service                    8139
rpc.statd                              668
ssh                                    22
xinetd (service tag)                   6481


2.9.3 About Puppet Security

The puppet node service (puppetd) runs continuously as root on all servers. It listens on port 8139 for "kick" requests, which trigger it to request updates from the puppet master. It does not receive updates on this port.

The puppet master service (puppetmasterd) runs continuously as the puppet user on the first server of the primary Oracle Big Data Appliance rack. It listens on port 8140 for requests to push updates to puppet nodes.

The puppet nodes generate and send certificates to the puppet master to register initially during installation of the software. For updates to the software, the puppet master signals ("kicks") the puppet nodes, which then request all configuration changes from the puppet master node that they are registered with.

The puppet master sends updates only to puppet nodes that have known, valid certificates. Puppet nodes only accept updates from the puppet master host name they initially registered with. Because Oracle Big Data Appliance uses an internal network for communication within the rack, the puppet master host name resolves using /etc/hosts to an internal, private IP address.