8 Securing the Oracle Big Data SQL Installation

This section describes security features provided by Oracle Big Data SQL. It also explains measures you can take to secure the Oracle Big Data SQL installation, and provides pointers to the information you need in order to configure the Oracle Big Data SQL installation within secured environments.

See Also:

The Security Overview in the Oracle Big Data SQL User's Guide provides more detail on security topics related to user authentication and authorization.

8.1 Security Overview

This section covers installer, network, file, and password security.

Network Security

In Oracle Big Data SQL, network traffic between the database and the Hadoop cluster is no longer guaranteed to be over a private InfiniBand network; it can occur over a client network. This network traffic is not currently secured. Therefore, when operating a secured Hadoop cluster (e.g., Kerberos-enabled, RPC encryption), Oracle Big Data SQL requires one of the following:

  • All members of the client network are trusted, or
  • Private network connectivity is used exclusively for communication between the Hadoop nodes and Oracle database instances.

This private network is commonly referred to as the Big Data SQL interconnect network. The interconnect network must be a private network with only trusted users, must use at least one switch, and must use 10 Gigabit Ethernet adapters. Ensure that only the nodes in the Hadoop cluster and the Oracle RAC cluster can access the interconnect network. Do not use the interconnect network for other user communication.

Installer File Security

The new Jaguar installer incorporates the following best practices for secure Linux installers and applications:

  • No persistent or temporary world-writable files are created.

  • No setuid or setgid files are used.

In addition, the installer works with hardened Oracle Database environments as well as hardened CDH and HDP clusters, as described in the Cloudera CDH and Hortonworks HDP security documentation.

Password Safety

The Jaguar installer provides these password security measures:

  • Passwords for the Ambari and Cloudera Manager cluster management servers are not passed on the command line and are not saved in any persistent file during or after the installation.

  • Passwords are not logged to any log or trace files.

Security of Related Software

Oracle Big Data SQL relies on other installed software, including third-party projects. It is the customer’s responsibility to ensure that such software is kept up to date with the latest security patches; a sample check is sketched after the list. This software includes (but is not limited to):

  • Python

  • Perl
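As one way to keep such software patched, on an Oracle Linux database node you might check the installed versions and apply any available updates with rpm and yum. This is a minimal sketch; package names and the appropriate update procedure vary by platform and release.

  # rpm -q python perl        # show the currently installed versions
  # yum update python perl    # apply any available (security) updates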

8.2 Big Data SQL Communications and Secure Hadoop Clusters

It is generally a good security practice to ensure that HDFS file access permissions are minimized in order to prevent unauthorized read/write access. This is true regardless of whether or not the Hadoop cluster is secured by Kerberos.
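For example, you can inspect and tighten the permissions on the HDFS directories that back your Hive tables with the hdfs command-line client. The path below is a placeholder; substitute your own warehouse directories.

  $ hdfs dfs -ls /user/hive/warehouse/mydb.db/mytable
  $ hdfs dfs -chmod -R 750 /user/hive/warehouse/mydb.db/mytable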

Please refer to MOS Document 2123125.1 at My Oracle Support for detailed guidelines on securing Hadoop clusters for use with Oracle Big Data SQL.

8.2.1 Connecting a Database to Both Secure and Unsecured Clusters

Oracle Big Data SQL can connect a single database to multiple Kerberos-secured or unsecured clusters at the same time.

There is a limitation on the use of Hive metadata queries when a database has mixed connections to both secured and unsecured clusters. If the default cluster (the first available connection) is secured, then Hive metadata on unsecured clusters is inaccessible. The reverse is also true: if the default cluster is unsecured, then Hive metadata on secured clusters is inaccessible. This is because the database can support only one Hadoop authentication method at a time.

Suppose the database has these connections: cluster_1 (the default cluster, secured), cluster_2 (unsecured), and cluster_3 (secured). In this case, a query such as the following succeeds against cluster_1, since it is the default. It also succeeds against cluster_3. However, it fails against cluster_2, which is not secured.

select cluster_id, database_name, table_name from all_hive_tables;

If cluster_1 (the default) were not secured, then instead the query would succeed against cluster_2 and fail against cluster_3.

If you already know the metadata (which you can acquire through other tools), then you can successfully create external tables over Hive sources. For example, the following CREATE TABLE statement works against both secured and unsecured clusters, regardless of the security status of the default cluster.

  create table myhivetable (  
      id number(10),  
      name varchar2(4000),  
      type varchar2(4000))  
    organization external (
      type ORACLE_HIVE
      default directory default_dir
      access parameters (
        com.oracle.bigdata.cluster=<cluster name>
        com.oracle.bigdata.tablename=<Hive database name>.<Hive table name>)
    ) 
    parallel 2 reject limit unlimited;

8.3 Configuring Oracle Big Data SQL in a Kerberos-Secured Environment

If Kerberos is enabled on the Hadoop system, you must configure Oracle Big Data SQL on the database server to work with Kerberos. This requires a Kerberos client on each database node where Oracle Big Data SQL is installed. Also, the OS account that owns the database (oracle or another account) must be provisioned as a user principal.

The network requirements for secured Hadoop clusters described under Network Security in the Security Overview apply here as well: either all members of the client network must be trusted, or a private Big Data SQL interconnect network must be used exclusively for communication between the Hadoop nodes and the Oracle Database instances.

8.3.1 Enabling Oracle Big Data SQL Access to a Kerberized Cluster

You must configure Oracle Big Data SQL to use Kerberos in environments where user access is Kerberos-controlled.

There are two situations when this is required:

  • When enabling Oracle Big Data SQL on a Kerberos-enabled cluster.

  • When enabling Kerberos on a cluster where Oracle Big Data SQL is already installed.

Oracle Big Data SQL processes run on the nodes of the Hadoop cluster as the oracle Linux user. On the Oracle Database server, the owner of the Oracle Database process is also (usually, but not always) the oracle Linux user. When Kerberos is enabled on the Hadoop system, the principal of the database owner or that of another Linux user account can be used for authentication. The following is required in order to give the user access.

The Linux user must be able to be authenticated as a principal in the Kerberos database on the Kerberos Key Distribution Center (KDC) server. The principal on the Hadoop side must have access to the underlying Hadoop data being requested by Oracle Big Data SQL.

The following are required on all Oracle Database nodes and all Hadoop cluster nodes running Oracle Big Data SQL (a quick verification is sketched after this list):
  • An installation of the Kerberos client software.

  • A copy of the Kerberos configuration file from the KDC.

  • A copy of the Kerberos keytab file generated on the KDC for the Linux user.

  • A valid Kerberos ticket for the Linux user.
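To verify the last two items on a given node, you can list the current ticket cache with klist and, if no valid ticket is present, obtain one from the keytab with kinit. This is a minimal check using the same placeholders as the steps that follow.

  $ klist
  $ kinit -kt /home/oracle/<user>.keytab <user>@<realm>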

Installing the Kerberos Client

If the Kerberos client is not installed, see Installing a Kerberos Client on the Oracle Database Nodes for instructions.

Creating a Kerberos Principal for the User

On the Kerberos Key Distribution Center (KDC) server, become root and use kadmin.local to add a principal for the user selected for authentication.

  1. Start kadmin.local as root:

    # kadmin.local
  2. Within kadmin.local, type:

    add_principal <user>@<realm>
    quit

    You have the option to include the password in the command, as in:

    add_principal -pw <password> <user>@<realm>
    quit

Creating a Kerberos Keytab for the User

  1. On the KDC, become root and run the following:

    # kadmin.local
  2. Within kadmin.local, type:
    xst -norandkey -k /home/oracle/<user>.keytab <user>
    quit

    This creates the <user>.keytab file for the Kerberos principal in the /home/oracle directory.

    Note:

    If you run the xst (or ktadd) command without the -norandkey option, it will invalidate other keytabs previously created for this user.
  3. Ensure that the keytab file is owned by the designated Linux user and is readable by that user only (a quick verification is sketched after these steps):
    # chown <user> <user>.keytab
    # chmod 400 <user>.keytab
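As a quick verification, you can list the entries in the new keytab and confirm that it contains the expected principal:

  $ klist -kt /home/oracle/<user>.keytab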

Kerberos Tasks Automated by Oracle Big Data SQL

The following Kerberos tasks are now automated:

  • Distributing the keytab and Kerberos configuration files.

    The Oracle Big Data SQL installation can now be configured to automatically distribute the keytab and Kerberos configuration files for the principal to the Hadoop DataNodes (and Oracle Database compute nodes). This is done if the principal name and keytab file location parameters are set in the Jaguar configuration file (see the sketch after this list). This automation is performed on the Hadoop side, and also on the Oracle Database side if the principals are the same. If not, you can designate an alternate principal on the database side.

    On Oracle Big Data Appliance, the keytab file distribution is done by default for the oracle account and you do not need to add the principal and keytab file path for this account to the configuration file.

  • Acquiring a Kerberos ticket for designated principals.

    The installation acquires a Kerberos ticket on each Hadoop DataNode and each Oracle Database compute node.

  • Ticket renewal

    The installation automatically sets up cron jobs on the Hadoop cluster and on Oracle Database that run kinit to acquire a new ticket for each principal in the configuration four times daily.
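For illustration, the Kerberos parameters in the Jaguar configuration file might look like the excerpt below. The structure and parameter names shown are assumptions to be checked against the Jaguar configuration reference for your release, and the principal and keytab path are placeholders.

  {
    "kerberos" : {
      "principal" : "oracle@EXAMPLE.COM",
      "keytab"    : "/home/oracle/security/oracle.keytab"
    }
  }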

Cleaning up After Ticket Expirations

When the bd_cell process is running on the nodes of a secured Hadoop cluster but the Kerberos ticket is not valid, the cell goes into quarantine status. You should drop all such quarantines, as in the steps and sample session below.

  1. Check that the principal has a valid Kerberos ticket on all Hadoop cluster nodes.

  2. On each cluster node, become oracle and run the following:

    $ /opt/oracle/bd_cell/cellsrv/bin/bdscli
  3. In the bdscli shell, type:

    list quarantine
  4. While still in bdscli, drop each quarantine on the list:
    drop quarantine <id>
  5. Type exit to exit bdscli.
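Putting steps 2 through 5 together, a session might look like the following; the quarantine ID is illustrative, so use the IDs returned by list quarantine on your own cells.

  $ /opt/oracle/bd_cell/cellsrv/bin/bdscli
  list quarantine
  drop quarantine 1
  exit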

8.3.2 Installing a Kerberos Client on the Oracle Database Nodes

If Kerberos is enabled on the Hadoop system, then Oracle Big Data SQL requires a Kerberos client. The client must be installed on each compute node of the database.

For commodity servers, download the Kerberos client software from a repository of your choice. If the database server is an Oracle Exadata Database Machine, download and install the software from the Oracle repository as shown below. The process should be similar for downloads from non-Oracle repositories.

Log on to the database server as root and use yum to install the krb5-libs and krb5-workstation packages. Download from the appropriate Oracle Linux repository (ol7 or ol6).

  1. Check that the correct Oracle repository ID is present.

    # yum repolist
  2. Temporarily disable all repository IDs and then enable only the Oracle repository (Oracle Linux 7 in this example).
    # yum --disablerepo="*" --enablerepo="public-yum-ol7" list available 
  3. Install the Kerberos packages.

    # yum install krb5-libs krb5-workstation 
  4. Copy the /etc/krb5.conf file from the Key Distribution Center (KDC) to the same path on the database server, as sketched below.
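One way to perform step 4, assuming you have root SSH access from the database server to the KDC (the host name kdc.example.com is a placeholder):

  # scp root@kdc.example.com:/etc/krb5.conf /etc/krb5.conf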

These steps must be performed for each Oracle Database node.

You must also register the oracle Linux user (or other Linux user) and password in the KDC for the cluster, as described in Enabling Oracle Big Data SQL Access to a Kerberized Cluster.

8.4 Using Oracle Secure External Password Store to Manage Database Access for Oracle Big Data SQL

On the Oracle Database server, you can use the Oracle Secure External Password Store (SEPS) to manage database access credentials for Oracle Big Data SQL.

This is done by creating an Oracle wallet for the oracle Linux user (or other database owner). An Oracle wallet is a password-protected container used to store authentication and signing credentials, including private keys, certificates, and trusted certificates needed by SSL.
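As a general sketch of how a SEPS wallet is typically created (these are standard Oracle Database steps rather than Big Data SQL specifics; the wallet directory, connect string, and user name are placeholders), you create the wallet with mkstore, add a credential, and reference the wallet in sqlnet.ora:

  $ mkstore -wrl /home/oracle/wallet -create
  $ mkstore -wrl /home/oracle/wallet -createCredential dbhost:1521/orcl myuser

  # Entries in sqlnet.ora:
  WALLET_LOCATION=(SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=/home/oracle/wallet)))
  SQLNET.WALLET_OVERRIDE=TRUE

A client can then authenticate without embedding a password, for example with sqlplus /@dbhost:1521/orcl.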

See MOS Document 2126903.1 at My Oracle Support for information on using SEPS with Oracle Big Data SQL.

8.5 About Data Security on Oracle Big Data Appliance

If your Hadoop system is an Oracle Big Data Appliance, the following tools to strengthen security are already available:

  • Kerberos authentication: Requires users and client software to provide credentials before accessing the cluster.

  • Apache Sentry authorization: Provides fine-grained, role-based authorization to data and metadata.

  • HDFS Transparent Encryption: Protects the data on disk and at rest. Data encryption and decryption is transparent to applications using the data.

  • HTTPS/Network Encryption: Provides HTTPS for Cloudera Manager, Hue, Oozie, and Hadoop Web UIs. Also enables network encryption for other internal Hadoop data transfers, such as those made through YARN shuffle and RPC.

See Also:

Security for Oracle Big Data Appliance in Oracle Big Data Appliance Software User’s Guide.

8.6 Authentication Between Oracle Database and Oracle Big Data SQL Offload Cell Server Processes

The Database Authentication feature described in this guide prevents unauthorized and potentially malicious processes (which can originate from anywhere) from connecting to Oracle Big Data SQL cell server processes on the DataNodes of the Hadoop cluster.

When Ethernet is selected for the connection between Oracle Database and Oracle Big Data SQL, this secured authentication framework is set up automatically by default during installation. Database Authentication is also available as a configuration option for InfiniBand connections, as noted below.
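If you need to set this option explicitly, the Jaguar configuration file accepts a database authentication parameter. The excerpt below is an assumption about the parameter name and format, so verify it against the Jaguar configuration reference for your release.

  {
    "database_auth_enabled" : "true"
  }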

8.7 Securing Object Store Access

Oracle Big Data SQL enables object store access from Oracle Database and also from the Oracle Big Data SQL offload server, which runs on Hadoop nodes.

The appropriate proxy setting for the Oracle Big Data SQL offload server is required for access to object stores. This is set by including either the cell_http_proxy or cell_no_proxy parameter in the Jaguar configuration file, as in the sketch below.
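For illustration, a proxy entry in the Jaguar configuration file might look like the following. The parameter name comes from this section, but the value and surrounding structure are placeholders to verify against the Jaguar configuration reference.

  {
    "cell_http_proxy" : "proxy.example.com:80"
  }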

Note:

Outbound HTTP connections from Hadoop nodes to object stores should be safeguarded. It is recommended that you use a firewall to whitelist target object store domain names.

The credential information required for object store access is encrypted before it is sent from Oracle Database to the Oracle Big Data SQL offload server.

See Also: