This chapter describes how you can support users who run MapReduce jobs on Oracle Big Data Appliance or using Oracle Big Data Connectors. It contains these sections:
Apache Hadoop is not an inherently secure system. It is protected only by network security. After a connection is established, a client has full access to the system.
To counterbalance this open environment, Oracle Big Data Appliance supports Kerberos security as a software installation option. Kerberos is a network authentication protocol that helps prevent malicious impersonation.
CDH provides the following security features when configured to use Kerberos:
The CDH master services (the NameNode and the JobTracker) resolve the group name, so that users cannot manipulate their group memberships.
Map tasks run under the identity of the user who submitted the job.
Authorization mechanisms in HDFS and MapReduce help control user access to data.
If the Oracle Big Data Appliance cluster is secured with Kerberos, then you must take additional steps to authenticate a CDH client and individual users, as described in this chapter. Users must know their Kerberos user name, password, and realm.
Table 3-1 describes some frequently used Kerberos commands. For more information, see the MIT Kerberos documentation.
Table 3-1 Kerberos User Commands
Command | Description |
---|---|
kinit userid@realm | Obtains a Kerberos ticket. |
klist | Lists a Kerberos ticket if you have one already. |
kdestroy | Invalidates a ticket before it expires. |
kpasswd | Changes your password. |
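For example, a typical client session might look like the following, where jdoe@EXAMPLE.COM is a placeholder for your own Kerberos user name and realm:

$ kinit jdoe@EXAMPLE.COM
$ klist
$ kdestroy

The kinit command prompts for your Kerberos password, klist confirms that the ticket was granted, and kdestroy discards the ticket when you are finished.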
See Also:
MIT Kerberos Documentation at
CDH4 Security Guide at
Oracle Big Data Appliance supports full local access to all commands and utilities in Cloudera's Distribution including Apache Hadoop (CDH).
You can use a browser on any computer that has access to the client network of Oracle Big Data Appliance to access Cloudera Manager, Hadoop Map/Reduce Administration, the Hadoop Task Tracker interface, and other browser-based Hadoop tools.
To issue Hadoop commands remotely, however, you must connect from a system configured as a CDH client with access to the Oracle Big Data Appliance client network. This section explains how to set up a computer so that you can access HDFS and submit MapReduce jobs on Oracle Big Data Appliance.
See Also:
My Oracle Support ID 1506203.1

Ensure that you have met the following prerequisites:
You must have these access privileges:
Sudo access to the client system
Login access to Cloudera Manager
If you do not have these privileges, then contact your system administrator for help.
The client system must run an operating system that Cloudera supports for CDH4. See the Cloudera CDH4 Installation Guide at
The client system must run Oracle JDK 1.6.0_28 or later. CDH4 is certified with 1.6.0_31 and 1.7.0_15.
To verify the version, use this command:
$ java -version
java version "1.7.0_25"
Java(TM) SE Runtime Environment (build 1.7.0_25-b15)
Java HotSpot(TM) 64-Bit Server VM (build 23.25-b01, mixed mode)
When you use Oracle Exadata Database Machine as the client, you can use the RPM files on Oracle Big Data Appliance when both engineered systems use the same operating system (Oracle Linux 5.x or 6.x). Copying the files across the local network is faster than downloading them from the Cloudera website.
Note:
In the following steps, replace version_number with the missing portion of the file name, such as 2.2.0+189-1.cdh4.2.0.p0.8.el5.

To install a CDH client on Oracle Exadata Database Machine:
Log into an Exadata database server.
Verify that Hadoop is not installed on your Exadata system:
rpm -qa | grep hadoop
If the rpm command returns a value, then remove the existing Hadoop software:
rpm -e hadoop_rpm
Copy the following Linux RPMs to the database server from the first server of Oracle Big Data Appliance. The RPMs are located in the /opt/oracle/BDAMammoth/bdarepo/RPMS/x86_64 directory.
ed-version_number.x86_64.rpm
m4-version_number.x86_64.rpm
nc-version_number.x86_64.rpm
redhat-lsb-version_number.x86_64.rpm
Install the Oracle Linux RPMs from Step 4 on all database nodes. For example:
sudo yum --nogpgcheck localinstall ed-0.2-39.el5_2.x86_64.rpm
sudo yum --nogpgcheck localinstall m4-1.4.5-3.el5.1.x86_64.rpm
sudo yum --nogpgcheck localinstall nc-1.84-10.fc6.x86_64.rpm
sudo yum --nogpgcheck localinstall redhat-lsb-4.0-2.1.4.0.2.el5.x86_64.rpm
Be sure to install the Oracle Linux RPMs before installing the CDH RPMs.
Copy the following CDH RPMs from the /opt/oracle/BDAMammoth/bdarepo/RPMS/noarch directory:
bigtop-utils-version_number.noarch.rpm
zookeeper-version_number.noarch.rpm
Copy the following CDH RPMs from the /opt/oracle/BDAMammoth/bdarepo/RPMS/x86_64 directory:
hadoop-version_number.x86_64.rpm
bigtop-jsvc-version_number.x86_64.rpm
hadoop-hdfs-version_number.x86_64.rpm
hadoop-0.20-mapreduce-version_number.x86_64.rpm
hadoop-yarn-version_number.x86_64.rpm
hadoop-mapreduce-version_number.x86_64.rpm
hadoop-client-version_number.x86_64.rpm
Install the CDH RPMs in the exact order shown in Steps 6 and 7 on all database servers. For example:
rpm -ihv bigtop-utils-0.4+502-1.cdh4.2.0.p0.12.el5.noarch.rpm
rpm -ihv zookeeper-3.4.5+14-1.cdh4.2.0.p0.12.el5.noarch.rpm
rpm -ihv hadoop-2.0.0+922-1.cdh4.2.0.p0.12.el5.x86_64.rpm
rpm -ihv bigtop-jsvc-1.0.10-1.cdh4.2.0.p0.13.el5.x86_64.rpm
rpm -ihv hadoop-hdfs-2.0.0+922-1.cdh4.2.0.p0.12.el5.x86_64.rpm
rpm -ihv hadoop-0.20-mapreduce-0.20.2+1341-1.cdh4.2.0.p0.21.el5.x86_64.rpm
rpm -ihv hadoop-yarn-2.0.0+922-1.cdh4.2.0.p0.12.el5.x86_64.rpm
rpm -ihv hadoop-mapreduce-2.0.0+922-1.cdh4.2.0.p0.12.el5.x86_64.rpm
rpm -ihv hadoop-client-2.0.0+922-1.cdh4.2.0.p0.12.el5.x86_64.rpm
Configure the CDH client. See "Configuring a CDH Client for an Unsecured Cluster."
To install a CDH client on any operating system identified as supported by Cloudera, follow these instructions.
To install the CDH client software:
Log in to the client system.
If an earlier version of Hadoop is already installed, then remove it.
See the Cloudera documentation for removing CDH3 at
Download the CDH 4.5 software from the Cloudera website:
$ wget http://archive.cloudera.com/cdh4/cdh/4/hadoop-2.0.0-cdh4.5.0.tar.gz
Decompress the file into a permanent location, which will be the Hadoop home directory. The following command extracts the files into hadoop-2.0.0-cdh4.5.0 in the current directory:
tar -xvzf hadoop-2.0.0-cdh4.5.0.tar.gz
Configure the CDH client. See "Configuring a CDH Client for an Unsecured Cluster."
After installing CDH, you must configure it for use with Oracle Big Data Appliance.
To configure the Hadoop client:
Log in to the client system and download the MapReduce client configuration from Cloudera Manager. In this example, Cloudera Manager listens on port 7180 (the default) of bda01node03.example.com, and the configuration is stored in a file named mapreduce-conf.zip.
$ wget -O mapreduce-conf.zip http://bda01node03.example.com:7180/cmf/services/2/client-config
Unzip mapreduce-conf.zip into a permanent location on the client system.
$ unzip mapreduce-conf.zip
Archive: mapreduce-conf.zip
inflating: hadoop-conf/hadoop-env.sh
inflating: hadoop-conf/core-site.xml
inflating: hadoop-conf/hdfs-site.xml
inflating: hadoop-conf/log4j.properties
inflating: hadoop-conf/mapred-site.xml
All files are stored in a subdirectory named hadoop-conf.
Open hadoop-env.sh in a text editor and set the environment variables to the actual paths on your system:
export HADOOP_HOME=hadoop-home-dir/share/hadoop/mapreduce1
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_CONF_DIR=hadoop-conf-dir
export JAVA_HOME=/usr/java/version
ln -s $HADOOP_HOME/../../../bin-mapreduce1 $HADOOP_HOME/bin
ln -s $HADOOP_HOME/../../../etc/hadoop-mapreduce1 $HADOOP_HOME/conf
rm -f $HADOOP_HOME/lib/slf4j-log4j*jar
alias hadoop=$HADOOP_HOME/bin/hadoop
alias hdfs=$HADOOP_HOME/../../../bin/hdfs
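If you want these variables and aliases to take effect in your current shell session, one option (a minimal sketch, assuming the configuration files are in /full_path/hadoop-conf) is to source the edited file and confirm that the client resolves:

$ source /full_path/hadoop-conf/hadoop-env.sh
$ hadoop version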
Make a backup copy of the Hadoop configuration files:
# cp /full_path/hadoop-conf /full_path/hadoop-conf-bak
Overwrite the existing configuration files with the configuration files that you downloaded in Step 2:
# cd /full_path/hadoop-conf
# cp * /usr/lib/hadoop/conf
Follow these steps to enable the CDH client to work with a secure CDH cluster.
To configure a CDH client for Kerberos:
Log in to the system where you created the CDH client.
Install the Java Cryptography Extension Unlimited Strength Jurisdiction Policy Files:
Download the files for your Java version:
Java 6: http://www.oracle.com/technetwork/java/javase/downloads/jce-6-download-429243.html
Java 7: http://www.oracle.com/technetwork/java/javase/downloads/jce-7-download-432124.html
Decompress the downloaded file. This example unzips JCE-7:
$ unzip UnlimitedJCEPolicyJDK7.zip
Archive: UnlimitedJCEPolicyJDK7.zip
creating: UnlimitedJCEPolicy/
inflating: UnlimitedJCEPolicy/US_export_policy.jar
inflating: UnlimitedJCEPolicy/local_policy.jar
inflating: UnlimitedJCEPolicy/README.txt
Note:
The JCE-6 files unzip into a directory named jce instead of UnlimitedJCEPolicy.

Copy the unzipped files into the Java security directory. For example:
$ cp UnlimitedJCEPolicy/* /usr/java/latest/jre/lib/security/
Follow the steps in "Configuring a CDH Client for an Unsecured Cluster."
Ensure that you have a user ID on the CDH cluster that has been added to the Kerberos realm.
On the CDH client system, create a file named krb5.conf
in the $HADOOP_CONF_DIR
directory. Enter configuration settings like the following, using values appropriate for your installation for the server names, domain, and realm:
[libdefaults]
  default_realm = EXAMPLE.COM
  dns_lookup_realm = false
  dns_lookup_kdc = false
  clockskew = 3600
  ticket_lifetime = 24h
  renew_lifetime = 7d
  forwardable = true

[realms]
  EXAMPLE.COM = {
    kdc = bda01node01.example.com:88
    admin_server = bda01node07.example.com:749
    default_domain = example.com
  }

[domain_realm]
  .example.com = EXAMPLE.COM
Activate the new configuration file:
export KRB5_CONFIG=$HADOOP_CONF_DIR/krb5.conf
export HADOOP_OPTS="-Djava.security.krb5.conf=$HADOOP_CONF_DIR/krb5.conf"
export KRB5CCNAME=$HADOOP_CONF_DIR/krb5cc_$USER
Verify that you have access to the Oracle Big Data Appliance cluster.
Follow this procedure to ensure that you have access to the Oracle Big Data Appliance cluster.
To verify cluster access:
To access a Kerberos-protected CDH cluster, first obtain a ticket granting ticket (TGT):
$ kinit userid@realm
Verify that you can access HDFS on Oracle Big Data Appliance from the client, by entering a simple Hadoop file system command like the following:
$ hadoop fs -ls /user
Found 4 items
drwx------ - hdfs supergroup 0 2013-01-16 13:50 /user/hdfs
drwxr-xr-x - hive supergroup 0 2013-01-16 12:58 /user/hive
drwxr-xr-x - oozie hadoop 0 2013-01-16 13:01 /user/oozie
drwxr-xr-x - oracle hadoop 0 2013-01-29 12:50 /user/oracle
Check the output for HDFS users defined on Oracle Big Data Appliance, and not on the client system. You should see the same results as you would after entering the command directly on Oracle Big Data Appliance.
Submit a MapReduce job. You must be logged in to the client system under the same user name as your HDFS user name on Oracle Big Data Appliance.
The following example calculates the value of pi:
$ hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples-2.0.0-cdh4.2.0.jar pi 10 1000000
Number of Maps = 10
Samples per Map = 1000000
Wrote input for Map #0
Wrote input for Map #1
.
.
.
13/04/30 08:15:50 INFO mapred.JobClient: BYTES_READ=240
Job Finished in 12.403 seconds
Estimated value of Pi is 3.14158440000000000000
Use Cloudera Manager to verify that the job ran on Oracle Big Data Appliance instead of the local system. Select mapreduce from the Activities menu for a list of jobs.
Figure 3-1 shows the job created by the previous example.
Figure 3-1 Monitoring a MapReduce Job in Cloudera Manager
Follow this procedure to provide remote client access to Hive.
To set up a Hive client:
Set up a CDH client. See "Providing Remote Client Access to CDH."
Log in to the client system and download the Hive client configuration from Cloudera Manager. In this example, Cloudera Manager listens on port 7180 (the default) of bda01node03.example.com, and the configuration is stored in a file named hive-conf.zip.
$ wget -O hive-conf.zip http://bda01node03.example.com:7180/cmf/services/5/client-config
Unzip the file into a permanent installation directory, which will be the Hive configuration directory:
$ unzip hive-conf.zip
Archive: hive-conf.zip
inflating: hive-conf/hive-env.sh
inflating: hive-conf/hive-site.xml
Download the Hive software from the Cloudera website:
$ wget http://archive.cloudera.com/cdh4/cdh/4/hive-0.10.0-cdh4.4.0.tar.gz
Decompress the file into a permanent installation directory, which will be the Hive home directory. The following command extracts the files into a subdirectory named hive-0.10.0-cdh4.4.0 in the current directory:
$ tar -xvzf hive-0.10.0-cdh4.4.0.tar.gz
Set the following variables, replacing hive-home-dir and hive-conf-dir with the directories you created in steps 3 and 5.
export HIVE_HOME=hive-home-dir
export HIVE_CONF_DIR=hive-conf-dir
alias hive=$HIVE_HOME/bin/hive
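To make these settings persistent across sessions, you might append them to your shell startup file. The following sketch assumes a bash shell and example directories that you would replace with your own installation paths:

$ cat >> ~/.bashrc << 'EOF'
export HIVE_HOME=/usr/local/hive-0.10.0-cdh4.4.0
export HIVE_CONF_DIR=/usr/local/hive-conf
alias hive=$HIVE_HOME/bin/hive
EOF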
The following steps test whether you successfully set up a Hive client.
To verify Hive access:
To access a Kerberos-protected CDH cluster, first obtain a ticket granting ticket (TGT):
$ kinit userid@realm
Open the Hive console:
$ hive
Logging initialized using configuration in jar:file:/usr/lib/hive/lib/hive-common-0.10.0-cdh4.4.0.jar!/hive-log4j.properties
Hive history file=/tmp/oracle/hive_job_log_e10527ee-9637-4c08-9559-a2e5cea6cef1_831268640.txt
hive>
List all tables:
hive> show tables;
OK
src
This section describes how to create users who can access HDFS, MapReduce, and Hive. It contains the following topics:
When creating user accounts, define them as follows:
To create and modify tables in Hive, users must be in the hive group.
To create Hue users, open Hue in a browser and click the User Admin icon. See "Using Cloudera Hue to Interact With Hadoop."
To create a user on an unsecured Hadoop cluster:
Open an ssh connection as the root user to a noncritical node (node04 to node18).
Create the user's home directory:
# sudo -u hdfs hadoop fs -mkdir /user/user_name
You use sudo because the HDFS super user is hdfs (not root).
Change the ownership of the directory:
# sudo -u hdfs hadoop fs -chown user_name:hadoop /user/user_name
Verify that the directory is set up correctly:
# hadoop fs -ls /user
Create the operating system user across all nodes in the cluster:
# dcli useradd -G hadoop,hive[,group_name...] -m user_name
In this syntax, replace group_name with the name of an existing group and user_name with the new user name.
Verify that the operating system user belongs to the correct groups:
# dcli id user_name
Verify that the user's home directory was created on all nodes:
# dcli ls /home | grep user_name
Example 3-1 creates a user named jdoe and adds the user to the hadoop and hive groups.
Example 3-1 Creating a Hadoop User
# sudo -u hdfs hadoop fs -mkdir /user/jdoe
# sudo -u hdfs hadoop fs -chown jdoe:hadoop /user/jdoe
# hadoop fs -ls /user
Found 5 items
drwx------ - hdfs supergroup 0 2013-01-16 13:50 /user/hdfs
drwxr-xr-x - hive supergroup 0 2013-01-16 12:58 /user/hive
drwxr-xr-x - jdoe jdoe 0 2013-01-18 14:04 /user/jdoe
drwxr-xr-x - oozie hadoop 0 2013-01-16 13:01 /user/oozie
drwxr-xr-x - oracle hadoop 0 2013-01-16 13:01 /user/oracle

# dcli useradd -G hadoop,hive -m jdoe
# dcli id jdoe
bda1node01: uid=1001(jdoe) gid=1003(jdoe) groups=1003(jdoe),127(hive),123(hadoop)
bda1node02: uid=1001(jdoe) gid=1003(jdoe) groups=1003(jdoe),123(hadoop),127(hive)
bda1node03: uid=1001(jdoe) gid=1003(jdoe) groups=1003(jdoe),123(hadoop),127(hive)
.
.
.
# dcli ls /home | grep jdoe
bda1node01: jdoe
bda1node02: jdoe
bda1node03: jdoe
To create a user on a Kerberos-secured cluster:
Connect to Kerberos as the HDFS principal and execute the following commands, replacing jdoe with the actual user name:
hdfs dfs -mkdir /user/jdoe
hdfs dfs -chown jdoe /user/jdoe
dcli -C useradd -G hadoop,hive -m jdoe
hash=$(echo "hadoop" | openssl passwd -1 -stdin)
dcli -C "usermod --pass='$hash' jdoe"
Log in to the key distribution center (KDC) and add a principal for the user. In the following example, replace user_name, bda01node01.example.com, and EXAMPLE.COM with the correct user name, server name, and realm.
ssh -l root bda01node01.example.com kadmin.local
add_principal user_name@EXAMPLE.COM
Users do not need login privileges on Oracle Big Data Appliance to run MapReduce jobs from a remote client. However, for those who want to log in to Oracle Big Data Appliance, you must set a password. You can set or reset a password the same way.
To set a user password across all Oracle Big Data Appliance servers:
Create a Hadoop cluster user as described in "Creating Hadoop Cluster Users."
Confirm that the user does not have a password:
# dcli passwd -S user_name
bda1node01.example.com: jdoe NP 2013-01-22 0 99999 7 -1 (Empty password.)
bda1node02.example.com: jdoe NP 2013-01-22 0 99999 7 -1 (Empty password.)
bda1node03.example.com: jdoe NP 2013-01-22 0 99999 7 -1 (Empty password.)
If the output shows either "Empty password" or "Password locked," then you must set a password.
Set the password:
hash=$(echo 'password' | openssl passwd -1 -stdin); dcli "usermod --pass='$hash' user_name"
Confirm that the password is set across all servers:
# dcli passwd -S user_name
bda1node01.example.com: jdoe PS 2013-01-24 0 99999 7 -1 (Password set, MD5 crypt.)
bda1node02.example.com: jdoe PS 2013-01-24 0 99999 7 -1 (Password set, MD5 crypt.)
bda1node03.example.com: jdoe PS 2013-01-24 0 99999 7 -1 (Password set, MD5 crypt.)
See Also:
Oracle Big Data Appliance Owner's Guide for information about dcli.

The Linux man page for the full syntax of the useradd command.
CDH provides an optional trash facility, so that a deleted file or directory is moved to a trash directory for a set period, instead of being deleted immediately from the system. By default, the trash facility is enabled for HDFS and all HDFS clients.
When the trash facility is enabled, you can easily restore files that were previously deleted.
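For example, with the trash facility enabled, a simple hadoop fs -rm moves a file into the current user's .Trash directory instead of deleting it outright, while the -skipTrash option bypasses the trash and deletes the file immediately. The file name below is only an illustration:

$ hadoop fs -rm /user/oracle/ontime_s.dat
$ hadoop fs -rm -skipTrash /user/oracle/ontime_s.dat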
To restore a file from the trash directory:
Check that the deleted file is in the trash. The following example checks for files deleted by the oracle user:
$ hadoop fs -ls .Trash/Current/user/oracle
Found 1 items
-rw-r--r-- 3 oracle hadoop 242510990 2012-08-31 11:20 /user/oracle/.Trash/Current/user/oracle/ontime_s.dat
Move or copy the file to its previous location. The following example moves ontime_s.dat from the trash to the HDFS /user/oracle directory.
$ hadoop fs -mv .Trash/Current/user/oracle/ontime_s.dat /user/oracle/ontime_s.dat
The trash interval is the minimum number of minutes that a file remains in the trash directory before being deleted permanently from the system. The default value is 1 day (24 hours).
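In the HDFS configuration files, this setting corresponds to the fs.trash.interval property, which is expressed in minutes, so the default of 1 day appears as 1440. A sketch of the equivalent hdfs-site.xml entry:

<property>
  <name>fs.trash.interval</name>
  <!-- 24 hours, expressed in minutes -->
  <value>1440</value>
</property>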
To change the trash interval:
Open Cloudera Manager. See "Managing CDH Operations Using Cloudera Manager".
On the All Services page under Name, click hdfs.
On the hdfs page, click Configuration, and then select View and Edit.
Search for or scroll down to the Filesystem Trash Interval property under NameNode Settings. See Figure 3-2.
Click the current value, and enter a new value in the pop-up form.
Click Save Changes.
Expand the Actions menu at the top of the page and choose Restart.
Open a connection as root to a node in the cluster.
Deploy the new configuration:
dcli -C bdagetclientconfig
Figure 3-2 shows the Filesystem Trash Interval property in Cloudera Manager.
Figure 3-2 HDFS Property Settings in Cloudera Manager
The trash facility on Oracle Big Data Appliance is enabled by default. You can change this configuration for a cluster. When the trash facility is disabled, deleted files and directories are not moved to the trash. They are not recoverable.
The following procedure disables the trash facility for HDFS. When the trash facility is completely disabled, the client configuration is irrelevant.
To completely disable the trash facility:
Open Cloudera Manager. See "Managing CDH Operations Using Cloudera Manager".
On the All Services page under Name, click hdfs.
On the hdfs page, click the Configuration subtab.
Search for or scroll down to the Filesystem Trash Interval property under NameNode Settings. See Figure 3-2.
Click the current value, and enter a value of 0 (zero) in the pop-up form.
Click Save Changes.
Expand the Actions menu at the top of the page and choose Restart.
All HDFS clients that are installed on Oracle Big Data Appliance are configured to use the trash facility. An HDFS client is any software that connects to HDFS to perform operations such as listing HDFS files, copying files to and from HDFS, and creating directories.
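For example, the following commands are typical HDFS client operations of this kind (the file and directory names are placeholders):

$ hadoop fs -ls /user
$ hadoop fs -put ontime_s.dat /user/oracle
$ hadoop fs -mkdir /user/oracle/staging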
You can use Cloudera Manager to change this client configuration setting so that local HDFS clients do not use the trash, even though the trash facility itself remains enabled on the cluster.
Note:
If you do not want any clients to use the trash, then you can completely disable the trash facility. See "Completely Disabling the Trash Facility."

To disable the trash facility for local HDFS clients:
Open Cloudera Manager. See "Managing CDH Operations Using Cloudera Manager".
On the All Services page under Name, click hdfs.
On the hdfs page, click the Configuration subtab.
Search for or scroll down to the Use Trash property under Client Settings. See Figure 3-2.
Deselect the Use Trash check box.
Click Save Changes. This setting is used to configure all new HDFS clients downloaded to Oracle Big Data Appliance.
Open a connection as root to a node in the cluster.
Deploy the new configuration:
dcli -C bdagetclientconfig
Remote HDFS clients are typically configured by downloading and installing a CDH client, as described in "Providing Remote Client Access to CDH." Oracle SQL Connector for HDFS and Oracle R Advanced Analytics for Hadoop are examples of remote clients.
To disable the trash facility for a remote HDFS client:
Open a connection to the system where the CDH client is installed.
Open /etc/hadoop/conf/hdfs-site.xml in a text editor.
Change the trash interval to zero:
<property>
  <name>fs.trash.interval</name>
  <value>0</value>
</property>
Save the file.
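To confirm that the client configuration now carries the new value, you can check the property in the edited file, for example:

$ grep -A 2 'fs.trash.interval' /etc/hadoop/conf/hdfs-site.xml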