10 Installing the Oracle Big Data Appliance Software

This chapter explains how to install, reinstall, and reconfigure the software on Oracle Big Data Appliance.

Note:

If you did not enter the required passwords in the Oracle Big Data Appliance Configuration Generation Utility, then you are prompted to enter them during the software installation. Ensure that you know the current passwords for the operating system root and oracle users, the Cloudera Manager admin user, and the MySQL administrator. If you are installing or reinstalling Oracle Big Data Connectors, then you also need the MySQL password for Oracle Data Integrator.

10.1 About the Mammoth Utility

Mammoth is a command-line utility for installing and configuring the Oracle Big Data Appliance software. Using Mammoth, you can:

  • Set up a cluster for either CDH or Oracle NoSQL Database.

  • Create a cluster on one or more racks.

  • Create multiple clusters on an Oracle Big Data Appliance rack.

  • Extend a cluster to new servers on the same rack or a new rack.

  • Update a cluster with new software.

10.2 Installation Prerequisites

The Oracle Audit Vault, Auto Service Request, and Oracle Enterprise Manager options require software on a remote server on the same network as Oracle Big Data Appliance. Those installations must be complete before you run Mammoth; otherwise, the Mammoth run fails.

Similarly, you must complete several steps for Kerberos if the key distribution center (KDC) is installed on a remote server. There are no preliminary steps if you install the KDC on Oracle Big Data Appliance.

The following list describes the prerequisites for all installation options.

Audit Vault Requirements: 

  • Oracle Audit Vault and Database Firewall Server Release 12.1.1 or later must be up and running on a separate server on the same network as Oracle Big Data Appliance.

Auto Service Request Requirements: 

  • Your My Oracle Support account is set up.

  • ASR Manager is up and running.

Enterprise Manager Requirements: 

  • Oracle Management System (OMS) version 12.1.0.4.0 or higher is up and running.

  • The OMS agent pull URL is working.

  • The OMS emcli download URL is working.

  • Both the HTTPS upload port and the console port are open.
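These URL and port checks can be scripted before you run Mammoth. The following is a minimal sketch only; the variable names are placeholders for your own OMS host, URLs, and port numbers, and are not part of the Mammoth procedure:

```shell
# Quick reachability checks for the Enterprise Manager prerequisites.
check_url()  { curl -sSf -o /dev/null --max-time 10 "$1"; }   # URL responds with success
check_port() { (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null; }      # TCP port open (bash /dev/tcp)

# Usage (with your own values):
#   check_url  "$OMS_AGENT_PULL_URL" && check_url "$OMS_EMCLI_DOWNLOAD_URL"
#   check_port "$OMS_HOST" "$HTTPS_UPLOAD_PORT" && check_port "$OMS_HOST" "$CONSOLE_PORT"
```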

Note:

Double-check the OMS credentials and ensure that you enter them correctly when running Mammoth. Invalid credentials are the primary cause of failure in the Enterprise Manager discovery process.

Kerberos Requirements for a Remote KDC: 

  1. Add cloudera-scm/admin as a user to the KDC database by running the following command from kadmin:

    addprinc -randkey cloudera-scm/admin@<REALM NAME>
    
  2. Grant cloudera-scm/admin all permissions to the Kerberos database. It must be able to add, modify, remove, and list principals from the database.

  3. Create the cmf.keytab file by running the following command from kadmin:

    xst -k cmf.keytab cloudera-scm/admin@<REALM NAME> 
    
  4. Move cmf.keytab to /opt/oracle/BDAMammoth.

  5. To support Oozie and Hue, ensure that the remote KDC supports renewable tickets. If it does not, then follow these steps:

    1. Open kdc.conf and set values for max_life and max_renewable_life. The max_life parameter defines how long a ticket is valid, and the max_renewable_life parameter defines the period during which users can renew a ticket.

    2. Set maxrenewlife for the krbtgt principal. Use the following kadmin command, replacing duration with the time period and REALM NAME with the name of the realm:

      modprinc -maxrenewlife duration krbtgt/REALM NAME
      

    If the KDC does not support renewable tickets when Kerberos is configured, then Oozie and Hue might not work correctly.
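As a sketch of the preceding steps, the kdc.conf settings and the kadmin command might look like the following. The realm name and lifetimes are placeholders; the values you choose depend on your site's security policy:

```
# kdc.conf (step a) -- EXAMPLE.COM and the lifetimes are placeholders:
[realms]
    EXAMPLE.COM = {
        max_life = 1d 0h 0m 0s
        max_renewable_life = 7d 0h 0m 0s
    }

# kadmin (step b) -- set maxrenewlife on the krbtgt principal:
modprinc -maxrenewlife "7 days" krbtgt/EXAMPLE.COM@EXAMPLE.COM
```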

Sentry Requirements: 

  1. Create a Sentry policy file named sentry-provider.ini.

  2. Copy the file to /opt/oracle/BDAMammoth on the first node in the cluster.

    See Also:

    For information about creating a policy file, see "Configuring Sentry" in the CDH5 Security Guide at

    http://www.cloudera.com/content/cloudera-content/cloudera-docs/CDH5/latest/CDH5-Security-Guide/cdh5sg_sentry.html?scroll=concept_mm2_21p_wk_unique_1
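As a sketch only, a minimal sentry-provider.ini might look like the following. The group, role, server, and privilege names are placeholders; the Cloudera guide cited above is the authoritative reference for the format:

```
[groups]
# Map OS or LDAP groups to Sentry roles
hive = admin_role
analysts = select_role

[roles]
admin_role = server=server1
select_role = server=server1->db=default->table=*->action=select
```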

10.3 Downloading the Mammoth Software Deployment Bundle

The Mammoth bundle contains the installation files and the base image. Before you install the software, you must use Oracle Big Data Appliance Configuration Generation Utility to generate the configuration files, as described in "Generating the Configuration Files."

You use the same bundle for all procedures described in this chapter, regardless of rack size and whether you are creating CDH or Oracle NoSQL Database clusters or upgrading existing clusters.

To download the Mammoth bundle: 

  1. Locate the download site in either My Oracle Support or Automated Release Updates (ARU):

    My Oracle Support

    1. Go to My Oracle Support Doc ID 1445745.

    2. Display the Install and Configure page.

    3. Click the appropriate link under Latest Oracle Big Data Software Installation Documents.

    ARU

    1. Connect to ARU.

    2. On the Patches page, set Product to Big Data Appliance Integrated Software and Release to the appropriate release number.

    3. Click Search.

  2. Download the BDAMammoth ZIP files to any directory (such as /tmp) on the first node of the cluster. Depending on the configuration of the rack, this node can be the first, seventh, or thirteenth server from the bottom of the rack. For multirack clusters, this server is in the primary rack.

    The patch consists of two files:

    • ppatch_version_Linux-x86-64_1of2.zip contains the Mammoth installation files.

    • ppatch_version_Linux-x86-64_2of2.zip contains the base image.

  3. Log in to the first node of the cluster as root.

  4. Extract all files from the downloaded zip files. For example:

    $ unzip p12345678_400_Linux-x86-64_1of2.zip
    Archive:  p12345678_400_Linux-x86-64_1of2.zip
      inflating: README.txt
       creating: BDAMammoth-ol6-4.0.0/
      inflating: BDAMammoth-ol6-4.0.0/BDAMammoth-ol6-4.0.0.run
    
    $ unzip p12345678_400_Linux-x86-64_2of2.zip
    Archive:  p12345678_400_Linux-x86-64_2of2.zip
       creating: BDABaseImage-ol6-4.0.0_RELEASE/
      inflating: BDABaseImage-ol6-4.0.0_RELEASE/biosconfig
      inflating: BDABaseImage-ol6-4.0.0_RELEASE/makebdaimage
     extracting: BDABaseImage-ol6-4.0.0_RELEASE/BDABaseImage-ol6-4.0.0_RELEASE.md5sum
      inflating: BDABaseImage-ol6-4.0.0_RELEASE/reimagerack
      inflating: BDABaseImage-ol6-4.0.0_RELEASE/BDABaseImage-ol6-4.0.0_RELEASE.iso
      inflating: BDABaseImage-ol6-4.0.0_RELEASE/reimagecluster
       creating: BDABaseImage-ol6-4.0.0_RELEASE/Extras/
       creating: BDABaseImage-ol6-4.0.0_RELEASE/Extras/RCU/
      inflating: BDABaseImage-ol6-4.0.0_RELEASE/Extras/RCU/rcuintegration-11.1.1.7.0-1.x86_64.rpm
       creating: BDABaseImage-ol6-4.0.0_RELEASE/Extras/ODI/
      inflating: BDABaseImage-ol6-4.0.0_RELEASE/Extras/ODI/odi_generic.jar
       creating: BDABaseImage-ol6-4.0.0_RELEASE/Extras/NOSQL/
      inflating: BDABaseImage-ol6-4.0.0_RELEASE/Extras/NOSQL/kv-ee-3.0.14-0.noarch.rpm
      inflating: BDABaseImage-ol6-4.0.0_RELEASE/Extras/NOSQL/kv-ce-3.0.14-0.noarch.rpm
       creating: BDABaseImage-ol6-4.0.0_RELEASE/Extras/BALANCER/
      inflating: BDABaseImage-ol6-4.0.0_RELEASE/Extras/BALANCER/orabalancer-2.2.0-h2.noarch.rpm
     extracting: BDABaseImage-ol6-4.0.0_RELEASE/Extras/BALANCER/orabalancer-2.2.0-h2.zip
       creating: BDABaseImage-ol6-4.0.0_RELEASE/Extras/CELL/
      inflating: BDABaseImage-ol6-4.0.0_RELEASE/Extras/CELL/bd_cell-12.1.2.0.99_LINUX.X64_140907.2307-1.x86_64.rpm
      inflating: BDABaseImage-ol6-4.0.0_RELEASE/Extras/CELL/bd_cellofl-12.1.2.0.99_LINUX.X64_140907.2307-1.x86_64.rpm
      inflating: BDABaseImage-ol6-4.0.0_RELEASE/README.txt
      inflating: BDABaseImage-ol6-4.0.0_RELEASE/ubiosconfig
    
  5. Change to the BDAMammoth-version directory:

    # cd BDAMammoth-ol6-4.0.0
    
  6. Extract all files from BDAMammoth-version.run:

    # ./BDAMammoth-ol6-4.0.0.run
     
    Big Data Appliance Mammoth v4.0.0 Self-extraction
     
    Checking for and testing BDA Base Image in /tmp
     
    BDABaseImage-ol6-4.0.0_RELEASE.iso: OK
     
    Removing existing temporary files
     
    Generating /tmp/BDAMammoth.tar
    Verifying MD5 sum of /tmp/BDAMammoth.tar
    /tmp/BDAMammoth.tar MD5 checksum matches
     
    Extracting /tmp/BDAMammoth.tar to /opt/oracle/BDAMammoth
     
    Extracting Base Image RPMs to bdarepo
    Moving BDABaseImage into /opt/oracle/
     
    Removing temporary files
         .
         .
         .
    Please "cd /opt/oracle/BDAMammoth" before running "./mammoth -i <rack_name>"
    #
    

    The new version of the Mammoth software is installed in /opt/oracle/BDAMammoth, and the previous version (if you are upgrading) is saved in /opt/oracle/BDAMammoth/previous-BDAMammoth.

  7. Follow the specific instructions for the type of installation you are performing.
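The base image archive extracted in Step 4 includes an .md5sum manifest (BDABaseImage-ol6-4.0.0_RELEASE.md5sum). Before running the self-extracting installer, you can verify the extracted files against it. The following helper is a sketch only, assuming GNU md5sum is available:

```shell
# Verify the files listed in an .md5sum manifest from the manifest's own directory.
verify_md5() {
  local manifest="$1"
  ( cd "$(dirname "$manifest")" && md5sum --check --quiet "$(basename "$manifest")" )
}

# Usage (path from Step 4):
#   verify_md5 BDABaseImage-ol6-4.0.0_RELEASE/BDABaseImage-ol6-4.0.0_RELEASE.md5sum
```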

10.4 Installing the Software on a New Rack

Mammoth installs and configures the software on Oracle Big Data Appliance using the files generated by Oracle Big Data Appliance Configuration Generation Utility. A cluster can be dedicated to either CDH (Hadoop) or Oracle NoSQL Database.

For a CDH cluster, Mammoth installs and configures Cloudera's Distribution including Apache Hadoop. This includes all the Hadoop software and Cloudera Manager, which is the tool for administering your Hadoop cluster. If you have a license, Mammoth optionally installs and configures all components of Oracle Big Data Connectors.

For a NoSQL cluster, Mammoth installs Oracle NoSQL Database. Beginning with Oracle Big Data Appliance 2.2, CDH and Oracle NoSQL Database do not share a cluster.

In addition to installing the software across all servers in the rack, Mammoth creates the required user accounts, starts the correct services, and sets the appropriate configuration parameters. When it finishes, you have a fully functional, highly tuned Hadoop cluster that is up and running.

Complete the appropriate instructions for your installation:

10.4.1 Installing the Software

Follow this procedure to install and configure the software on one or more Oracle Big Data Appliance racks. You can configure one cluster on multiple racks in a single installation.

To install the software: 

  1. Verify that the Oracle Big Data Appliance rack is configured according to the custom network settings described in /opt/oracle/bda/BdaDeploy.json. If the rack is still configured to the factory default IP addresses, first perform the network configuration steps described in "Configuring the Network."

  2. Verify that the software is not installed on the rack already. If the software is installed and you want to upgrade it, then use the mammoth -p option in Step 6.

  3. Download and unzip the Mammoth bundle, as described in "Downloading the Mammoth Software Deployment Bundle." You must be logged in as root to the first server in the cluster.

  4. Change to the BDAMammoth directory.

    # cd /opt/oracle/BDAMammoth
    
  5. Copy cluster_name-config.json to the current directory. See "About the Configuration Files."

  6. Run the mammoth command with the appropriate options. See "Mammoth Software Installation and Configuration Utility." This sample command runs all steps:

    ./mammoth -i rack_name
    

    If Mammoth upgraded the base image, then it prompts you to restart after completing Step 3 of the installation.

  7. If you installed support for Auto Service Request, then complete the steps in "Verifying the Auto Service Request Installation."

  8. To configure another CDH cluster on the server:

    1. Copy the BDAMammoth ZIP file to any directory on the first server of the cluster, which is either server 7 or server 13.

    2. Repeat Steps 3 to 7. Each cluster has its own cluster_name-config.json file. Oracle Big Data Appliance Configuration Generation Utility creates the files in separate directories named for the clusters.

Note:

Mammoth stores the current configuration in the /opt/oracle/bda/install/state directory. Do not delete the files in this directory. If you must run Mammoth again, such as when adding a rack to the cluster, it fails without this information.
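Although you must never delete the files in /opt/oracle/bda/install/state, you might keep a dated copy of them before rerunning Mammoth. The following helper is a sketch only; the backup destination is an assumption, not part of the Mammoth procedure:

```shell
# Archive the Mammoth state directory to a dated tarball and print its path.
backup_state() {
  local state_dir="${1:-/opt/oracle/bda/install/state}"   # directory from the note above
  local dest_dir="${2:-/root}"                            # hypothetical backup location
  local backup="$dest_dir/bda-state-$(date +%Y%m%d%H%M%S).tar.gz"
  tar -czf "$backup" -C "$(dirname "$state_dir")" "$(basename "$state_dir")" && echo "$backup"
}
```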

Verifying the Auto Service Request Installation 

  1. Log in to My Oracle Support at http://support.oracle.com.

  2. Search for document ID 1450112.1, ASR Exadata Configuration Check via ASREXACHECK, and download the asrexacheck script.

    Although this check was originally intended for Oracle Exadata Database Machine, it is now supported on Oracle Big Data Appliance.

  3. Copy asrexacheck to a server in the Oracle Big Data Appliance cluster.

  4. Log in to the server as root.

  5. Copy asrexacheck to all servers in the cluster:

    # dcli -C -f asrexacheck -d /opt/oracle.SupportTools
    

    See Chapter 14 for information about dcli.

  6. Change the file access permissions:

    # dcli -C chmod 755 /opt/oracle.SupportTools/asrexacheck
    
  7. Run the script:

    # dcli -C /opt/oracle.SupportTools/asrexacheck
    
  8. File an Oracle Support Request (SR) to validate the ASR installation. Note the following choices:

    1. Under "What is the Problem?" click the Hardware tab.

    2. For Products Grouped By, select Hardware License.

    3. For Problem Type, select My - Auto Service Request (ASR) Installation and Configuration Issues.

    4. Include the output of asrexacheck.

  9. Continue with Step 8 of the software installation procedure.

10.5 Adding Servers to a Cluster

You can add servers to an existing cluster in groups of three servers. You add a full rack the same way that you add a smaller number of servers. However, Oracle Big Data Appliance Configuration Generation Utility does not generate the configuration file. Instead, you use Mammoth to generate the configuration file and then use it to configure the servers.

To install the software on additional servers in a cluster: 

  1. Ensure that all servers are running the same software version. The additional servers must not have an Oracle Big Data Appliance base image that is newer than the existing cluster. See "About Software Version Differences".

  2. Ensure that all racks that form a single Hadoop cluster are cabled together. See Chapter 9.

  3. Connect as root to node01 of the primary rack and change to the BDAMammoth directory:

    cd /opt/oracle/BDAMammoth
    

    Note: Always start Mammoth from the primary rack.

  4. Generate a parameter file for the server group. The following example adds six servers beginning with node13:

    ./mammoth -e node13 node14 node15 node16 node17 node18
    

    The servers can be on the same rack or multiple racks. See the -e option in "Mammoth Software Installation and Configuration Utility."

    If Mammoth upgraded the base image, then it prompts you to restart after completing Step 3 of the installation.

  5. If you are using Oracle Enterprise Manager Cloud Control to monitor Oracle Big Data Appliance, then run rediscovery to identify the hardware and software changes.

If you have a license for Oracle Big Data Connectors, then they are installed on all nodes of the non-primary racks, although the services do not run on them. Oracle Data Integrator agent still runs on node03 of the primary rack.

Mammoth obtains the current configuration from the files stored in /opt/oracle/bda/install/state. If those files are missing or if any of the services have been moved manually to run on other nodes, then Mammoth fails.

About Software Version Differences

All servers configured as one Hadoop cluster must have the same image. A new Oracle Big Data Appliance rack or an in-rack expansion kit might be factory-installed with a newer base image than the previously installed racks. Use the imageinfo utility on any server to get the image version. When all servers of a single Hadoop cluster have the same image version, you can install the software.

To synchronize the new servers with the rest of the Hadoop cluster, either upgrade the existing cluster to the latest image version or downgrade the image version of the new servers.
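The comparison can be sketched with simple shell helpers. Here same_image and newer_image are hypothetical names, and the dotted version strings are assumed to be in the form that imageinfo reports:

```shell
# Hypothetical helpers for comparing dotted image versions.
same_image()  { [ "$1" = "$2" ]; }                  # versions identical
newer_image() {                                     # $1 strictly newer than $2
  [ "$1" != "$2" ] && [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | tail -n1)" = "$1" ]
}
```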

To upgrade the image version: 

To downgrade the image version: 

  • Reimage the new rack to the older version installed on the cluster. See My Oracle Support Information Center ID 1445745.2.

  • Use the old version of the Mammoth utility, which is on the first server of the existing cluster, to extend the cluster onto the new rack.

If you add a newer server model, then you can downgrade only to the first software version available for that model. For example, if you add Sun Server X3-2L servers, then you must install Oracle Big Data Appliance software version 2.3 or higher.

10.6 What If an Error Occurs During the Installation?

If the Mammoth utility fails, take these steps to resolve the problem:

  1. Read the error message to see if it suggests a cause or a resolution.

  2. Make the recommended changes and rerun the step.

  3. If the error message does not recommend a resolution, or it does not resolve the problem:

    • File a service request (SR) with My Oracle Support.

    • Upload the diagnostic zip file, which Mammoth generates when an error occurs.

10.7 Upgrading the Software on Oracle Big Data Appliance

The procedure for upgrading the software is the same whether you are upgrading from one major release to another or just applying a patch set. The procedure is also the same whether your Hadoop cluster consists of one Oracle Big Data Appliance rack or multiple racks.

The process upgrades all components of the software stack including the firmware, Oracle Linux Unbreakable Enterprise Kernel (UEK), CDH, JDK, and Oracle Big Data Connectors (if previously installed). Two versions of the software bundle are available, one for Oracle Linux 5 and the other for Oracle Linux 6.

To upgrade only Oracle Big Data Connectors, and no other components of the software stack, contact Oracle Support for assistance.

Software downgrades are not supported.

Note:

Because the upgrade process automatically stops and starts services as needed, the cluster is unavailable while the mammoth command is executing.

10.7.1 About the Operating System Versions

The Oracle Big Data Appliance 2.3 and later software runs on either Oracle Linux version 5 or version 6. When upgrading the software, you can choose whether to also upgrade the operating system:

  • Retaining Oracle Linux 5: Download the Mammoth bundle for Oracle Linux 5. It retains Oracle Linux 5, but upgrades Oracle Unbreakable Enterprise Kernel (UEK) to version 2. It installs all of the Oracle Big Data Appliance software for this release.

  • Upgrading to Oracle Linux 6: Download the Mammoth bundle for Oracle Linux 6. To upgrade the operating system, you must first reimage the servers. Reimaging erases all files and data; therefore, Oracle does not recommend this type of upgrade for a production system unless you have a second cluster functioning as a backup with the same data. After reimaging, you can install the software with Oracle Linux 6 and UEK version 2.

10.7.2 Upgrading the Software

Follow these procedures to upgrade the software on an Oracle Big Data Appliance cluster to the current version.

10.7.2.1 Prerequisites

You must know the passwords currently in effect for the cluster, which the Mammoth utility will prompt you for:

  • oracle

  • root

  • Cloudera Manager admin

  • MySQL Database admin

  • MySQL Database for Oracle Data Integrator (if Oracle Data Integrator agent is installed)

10.7.2.2 Upgrading to the Current Software Version

Take these steps to upgrade the Oracle Big Data Appliance software to the current software version:

  1. If your system is running a version of the Oracle Big Data Appliance software earlier than 2.5, then upgrade it to version 2.5:

    • For Oracle Big Data Appliance 2.3.1 and later, see My Oracle Support Doc ID 1623304.1.

    • For Oracle Big Data Appliance 2.1 and 2.2.1, see Doc ID 1600274.1.

  2. If your system is running a software version earlier than 3.0.1, then upgrade it to version 3.0.1. You cannot upgrade directly to the current version from versions earlier than 3.0.

  3. Download and unzip the Mammoth bundle, as described in "Downloading the Mammoth Software Deployment Bundle." You must be logged in as root to the first server in the cluster.

  4. Change to the BDAMammoth directory.

    # cd /opt/oracle/BDAMammoth
    
  5. Run the mammoth command with the -p option:

    # ./mammoth -p
    

    Mammoth automatically upgrades the base image if necessary.

  6. If you are using Oracle Enterprise Manager Cloud Control to monitor Oracle Big Data Appliance, then run rediscovery to identify the software changes.
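The version gating in Steps 1 and 2 of this procedure can be summarized in a small sketch. Here next_upgrade_target is a hypothetical helper, and the version cut-offs are taken from this section:

```shell
# Return the next required upgrade target for a given installed version (sketch).
next_upgrade_target() {
  case "$1" in
    2.0*|2.1*|2.2*|2.3*|2.4*) echo "2.5" ;;     # pre-2.5 systems must reach 2.5 first (Step 1)
    2.5*)                     echo "3.0.1" ;;   # then 3.0.1 (Step 2)
    *)                        echo "current" ;; # 3.0 and later can take the current release
  esac
}
```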

10.8 Changing the Configuration of Optional Software

During the initial configuration of Oracle Big Data Appliance, some optional software components might not have been installed. Using the Mammoth Reconfiguration Utility, you can reverse some of those decisions. You must provide the relevant server names, ports, user names, and passwords. See "Mammoth Reconfiguration Utility Syntax".

This section provides examples of some reconfiguration options.

Note:

The bdacli command provides an alternative syntax for calling mammoth-reconfig. See "bdacli".

10.8.1 Changing Support for Oracle Big Data Connectors

You can add or remove support for Oracle Big Data Connectors:

10.8.1.1 Adding Oracle Big Data Connectors

When adding support for Oracle Big Data Connectors, you can choose whether to install Oracle Data Integrator Application Adapter for Hadoop. If you do, then you must provide passwords for the MySQL Database root user and the Oracle Data Integrator user of MySQL Database (BDA_ODI_REPO). You must also know these passwords if they are not saved in cluster_name-config.json:

  • Cloudera Manager admin user

  • Oracle Audit Vault and Database Firewall, if enabled

The following procedure uses the bdacli utility.

To add Oracle Big Data Connectors to a cluster: 

  1. Log in to the first NameNode (node01) of the primary rack as root.

  2. Enable Oracle Big Data Connectors:

    # bdacli enable bdc
    INFO: Logging all actions in /opt/oracle/BDAMammoth/bdaconfig/tmp/bda1node03-20140805110007.log and traces in /opt/oracle/BDAMammoth/bdaconfig/tmp/bda1node03-20140805110007.trc
    INFO: This is the install of the primary rack
    INFO: Checking if password-less ssh is set up
         .
         .
         .
    Do you wish to enable ODI? [y/n]: y
    Enter password for the BDA_ODI_REPO mysql user
    Enter password: odi_password
    Enter password again: odi_password
    Enter password for the mysql root user
    Enter password: root_password
    Enter password again: root_password
    WARNING: The password for the Cloudera Manager admin user is missing from the parameters file and is required for the installation.
    Enter password: admin_password
    Enter password again: admin_password
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
      0     2    0     2    0     0    181      0 --:--:-- --:--:-- --:--:--   250
    INFO: The password for audit vault server is not needed since feature is not enabled
    INFO: Creating environment.pp file ...
    INFO: Making sure all puppet agents can be accessed.
    INFO: Pinging puppet agents
    INFO: Adding BDC to the cluster. This will take some time ...
         .
         .
         .
    SUCCESS: Successfully reconfigured service
    

10.8.1.2 Removing Oracle Big Data Connectors

When removing support for Oracle Big Data Connectors, you must provide passwords for the following users, if the passwords are not saved in cluster_name-config.json:

  • Cloudera Manager admin user

  • MySQL Database root user, if Oracle Data Integrator Application Adapter for Hadoop is enabled

  • BDA_ODI_REPO user of MySQL Database, if Oracle Data Integrator Application Adapter for Hadoop is enabled

  • Oracle Audit Vault and Database Firewall, if enabled

The following procedure uses the bdacli utility.

To remove Oracle Big Data Connectors from a cluster: 

  1. Log in to the first NameNode (node01) of the primary rack as root.

  2. Remove Oracle Big Data Connectors:

    # bdacli disable bdc
    INFO: Logging all actions in /opt/oracle/BDAMammoth/bdaconfig/tmp/bda1node03-20140805104603.log and traces in /opt/oracle/BDAMammoth/bdaconfig/tmp/bda1node03-20140805104603.trc
    INFO: This is the install of the primary rack
    INFO: Checking if password-less ssh is set up
    INFO: Executing checkRoot.sh on nodes 
     
    /opt/oracle/BDAMammoth/bdaconfig/tmp/all_nodes #Step -1#
    SUCCESS: Executed checkRoot.sh on nodes 
     
    /opt/oracle/BDAMammoth/bdaconfig/tmp/all_nodes #Step -1#
    INFO: Executing checkSSHAllNodes.sh on nodes 
     
    /opt/oracle/BDAMammoth/bdaconfig/tmp/all_nodes #Step -1#
    SUCCESS: Executed checkSSHAllNodes.sh on nodes 
     
    /opt/oracle/BDAMammoth/bdaconfig/tmp/all_nodes #Step -1#
    INFO: Reading component versions from 
     
    /opt/oracle/BDAMammoth/bdaconfig/COMPONENTS
    INFO: Creating nodelist files...
    WARNING: The password for the Cloudera Manager admin user is missing from the parameters file and is required for the installation.
    Enter password: admin_password
    Enter password again: admin_password
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
      0     2    0     2    0     0    184      0 --:--:-- --:--:-- --:--:--   250
    WARNING: The password for the MySQL root user is missing from the parameters file and is required for the installation.
    Enter password: root_password
    Enter password again: root_password
    INFO: Executing verifyMySQLPasswd.sh on nodes 
    /opt/oracle/BDAMammoth/bdaconfig/tmp/all_nodes #Step -1#
    SUCCESS: Executed verifyMySQLPasswd.sh on nodes 
     
    /opt/oracle/BDAMammoth/bdaconfig/tmp/all_nodes #Step -1#
    WARNING: The password for the MySQL BDA_ODI_REPO user is missing from the parameters file and is required for the installation.
    Enter password: odi_password
    Enter password again: odi_password
    INFO: Executing verifyMySQLPasswd.sh on nodes 
    /opt/oracle/BDAMammoth/bdaconfig/tmp/all_nodes #Step -1#
    INFO: The password for audit vault server is not needed since feature is not enabled
    INFO: Creating environment.pp file ...
    INFO: Making sure all puppet agents can be accessed.
    INFO: Pinging puppet agents
    INFO: Removing big data connectors. This will take some time ...
         .
         .
         .
    SUCCESS: Successfully reconfigured service
    

10.8.2 Adding Support for Auto Service Request

The following procedure shows how to add support for Auto Service Request.

To support Auto Service Request: 

  1. Set up your My Oracle Support account and install ASR Manager. You must do this before activating Auto Service Request on Oracle Big Data Appliance. See Chapter 5.

  2. Log in to the first NameNode (node01) of the primary rack as root and change to the BDAMammoth directory:

    cd /opt/oracle/BDAMammoth
    
  3. Turn on Auto Service Request monitoring and activate the assets:

    # cd /opt/oracle/BDAMammoth
    # ./mammoth-reconfig add asr
    INFO: Logging all actions in /opt/oracle/BDAMammoth/bdaconfig/tmp/bda1node01-20130205075303.log and traces in /opt/oracle/BDAMammoth/bdaconfig/tmp/bda1node01-20130205075303.trc
         .
         .
         .
    Enter the value for ASR_HOST [Default: ]: asr-host.example.com
    Enter the value for ASR_PORT [Default: 162]:
    Enter the value for ASR_SERVER_USER: jdoe
     
    Please Check the values you entered for the ASR parameters
     
    ASR_HOST = asr-host.example.com
    ASR_PORT = 162
    ASR_SERVER_USER = jdoe
     
    Are these values correct (y/n): y
    Enter password for user jdoe on machine asr-host.example.com
    Enter password: password
    Enter password again: password
    INFO: Creating environment.pp file ...
    INFO: Making sure all puppet agents can be accessed.
    INFO: Pinging puppet agents
    INFO: Setting up ASR on all nodes. This will take some time ...
         .
         .
         .
    
  4. Complete the steps in "Verifying the Auto Service Request Installation."

10.8.3 Adding Support for Oracle Enterprise Manager Cloud Control

The next procedure shows how to add support for Oracle Enterprise Manager Cloud Control:

To support Oracle Enterprise Manager Cloud Control: 

  1. Install the system monitoring plug-in for Oracle Big Data Appliance in an Oracle Enterprise Manager Cloud Control installation on the same network. See the Oracle Enterprise Manager System Monitoring Plug-in Installation Guide for Oracle Big Data Appliance.

  2. Log in to the first NameNode (node01) of the primary rack as root and change to the BDAMammoth directory:

    cd /opt/oracle/BDAMammoth
    
  3. Add support for Oracle Enterprise Manager Cloud Control:

    # cd /opt/oracle/BDAMammoth
    # ./mammoth-reconfig add em
    INFO: Logging all actions in /opt/oracle/BDAMammoth/bdaconfig/tmp/bda1node01-20130205082218.log and traces in /opt/oracle/BDAMammoth/bdaconfig/tmp/bda1node01-20130205082218.trc
         .
         .
         .
    

See Also:

My Oracle Support Doc ID 1682558.1, "Instructions to Install 12.1.0.4 BDA Plug-in on Oracle Big Data Appliance" for complete installation instructions.

10.8.4 Adding Support for Oracle Audit Vault and Database Firewall

Before installing support on Oracle Big Data Appliance, ensure that Oracle Audit Vault and Database Firewall Server Release 12.1.1 or a later version is up and running. It must be installed on a separate server on the same network as Oracle Big Data Appliance.

You must also have the following information about the Audit Vault Server installation:

  • Audit Vault Server administration user name and password

  • Database service name

  • IP address

  • Port number

  • Password for disk encryption on Oracle Big Data Appliance, if it is enabled

To add support for Oracle Audit Vault and Database Firewall: 

  1. Log in to the first NameNode (node01) of the primary rack as root and change to the BDAMammoth directory:

    # cd /opt/oracle/BDAMammoth
    
  2. Add support for Oracle Audit Vault and Database Firewall:

    # ./mammoth-reconfig add auditvault
    INFO: Logging all actions in /opt/oracle/BDAMammoth/bdaconfig/tmp/bda1node01-20140805072714.log and traces in /opt/oracle/BDAMammoth/bdaconfig/tmp/bda1node01-20140805072714.trc
    INFO: This is the install of the primary rack
    INFO: Checking if password-less ssh is set up
    INFO: Executing checkRoot.sh on nodes 
     
    /opt/oracle/BDAMammoth/bdaconfig/tmp/all_nodes #Step -1#
    SUCCESS: Executed checkRoot.sh on nodes 
     
    /opt/oracle/BDAMammoth/bdaconfig/tmp/all_nodes #Step -1#
    INFO: Executing checkSSHAllNodes.sh on nodes 
     
    /opt/oracle/BDAMammoth/bdaconfig/tmp/all_nodes #Step -1#
    SUCCESS: Executed checkSSHAllNodes.sh on nodes 
     
    /opt/oracle/BDAMammoth/bdaconfig/tmp/all_nodes #Step -1#
    INFO: Reading component versions from 
     
    /opt/oracle/BDAMammoth/bdaconfig/COMPONENTS
    INFO: Creating nodelist files...
    Please enter the Audit Vault Server Admin Username: admin_username
    Please enter the Audit Vault Server Admin Password: admin_password
    Enter password again: admin_password
    Please enter the Audit Vault Server Database Service Name: service_name
    Please enter the Audit Vault Server IP Address: IP address
    Please enter the Audit Vault Server Port: port_number
    INFO: The password for disk encryption is not needed since feature is not enabled
    INFO: Creating environment.pp file ...
    INFO: Making sure all puppet agents can be accessed.
    INFO: Pinging puppet agents
    INFO: Adding audit Vault Service. This will take some time ...
         .
         .
         .
    
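Collecting the prompted values ahead of time avoids a failed run partway through. The following sketch is a hypothetical pre-check, not part of Mammoth; every variable name and sample value below, including the address and port, is a placeholder:

```shell
#!/bin/sh
# Hypothetical pre-check before "mammoth-reconfig add auditvault": confirm
# that every value the prompts ask for has been gathered. All names and
# sample values here are placeholders.
av_admin_user="avadmin"
av_service_name="avsrv"
av_ip="192.0.2.10"
av_port="1521"

missing=0
for v in av_admin_user av_service_name av_ip av_port; do
    eval "val=\$$v"
    if [ -z "$val" ]; then
        echo "missing value: $v" >&2
        missing=1
    fi
done

[ "$missing" -eq 0 ] && echo "all Audit Vault parameters are set"
```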

10.8.5 Adding Disk Encryption

The following procedure configures disk encryption. After installation, a series of tests runs to ensure that the services are working properly.

To change the password, use the mammoth-reconfig update command.

To support disk encryption: 

  1. Log into the first NameNode (node01) of the primary rack and change to the BDAMammoth directory:

    # cd /opt/oracle/BDAMammoth
    
  2. Enable disk encryption:

    # bdacli enable disk_encryption
    INFO: Logging all actions in /opt/oracle/BDAMammoth/bdaconfig/tmp/bda1node01-20140805084442.log and traces in /opt/oracle/BDAMammoth/bdaconfig/tmp/bda1node01-20140805084442.trc
    INFO: This is the install of the primary rack
    INFO: Checking if password-less ssh is set up
         .
         .
         .
     Enter password for encrypting disks: password
     Enter password again: password
    INFO: The password for audit vault server is not needed since feature is not enabled
    INFO: Making sure all puppet agents can be accessed.
    INFO: Pinging puppet agents ..
    INFO: Shutting down Cloudera Manager server/agents. This will take some time ...
         .
         .
         .
    INFO: Executing setup_disk_encryption.sh on nodes /opt/oracle/BDAMammoth/bdaconfig/tmp/all_nodes #Step -1#
    SUCCESS: Executed setup_disk_encryption.sh on nodes /opt/oracle/BDAMammoth/bdaconfig/tmp/all_nodes #Step -1#
    SUCCESS: Successfully setup disk encryption on the cluster
         .
         .
         .
    INFO: Starting Hadoop Services. This will take some time ...
         .
         .
         .
     
    INFO: Doing post-cleanup operations
    INFO: Running cluster validation checks and generating install summary
    Enter CM admin password to enable check for CM services and hosts
    Press ENTER twice to skip CM services and hosts checks
    Enter password: Enter
    Enter password again: Enter
    INFO: No Cloudera Manager password given. Skipping Cloudera Manager health checks
    Checking Cluster Type
    Checking if Kerberos enabled
    Retrieving Cluster Information
     
    Running Cluster Health Checks (bdacheckcluster)
    Warning: Permanently added 'bda1node01-master' (RSA) to the list of known hosts.
    INFO: No Cloudera Manager password given - Cloudera Manager health checks skipped
     
    Running 1 GB teragen-terasort-teravalidate Hadoop Validation Test
    teragen         : 29 s
    terasort        : 36 s
    teravalidate    : 33 s
    -----------------------------
    Total time       : 98 s
     
    Status : succeeded
     
    Running An Oozie Workflow Test
    oozie passed
    oozie workflow test finished in 267 seconds
     
    Map Reduce Job Status:
    0000000-140805085424212-oozie-oozi-W@mr-node OK job_1407254020699_0012 SUCCEEDED -
    Pig Job Status:
    0000000-140805085424212-oozie-oozi-W@pig-node OK job_1407254020699_0004 SUCCEEDED -
    Hive Job Status:
    0000000-140805085424212-oozie-oozi-W@hive-node OK job_1407254020699_0006 
     
         .
         .
         .
    SUCCESS: Cluster validation checks were all successful
    INFO: Time spent in post-cleanup operations is 1051 seconds.
    ==================================================================================
     
    INFO: Please download the install summary zipfile from /tmp/bda1cdh-install-summary.zip
    

10.8.6 Adding Kerberos Authentication

The following procedure configures Kerberos authentication.

To support Kerberos authentication: 

  1. Ensure that you complete the Kerberos prerequisites listed in "Installation Prerequisites."

  2. Log into the first NameNode (node01) of the primary rack and change to the BDAMammoth directory:

    # cd /opt/oracle/BDAMammoth
    
  3. Configure Kerberos:

    # ./mammoth-reconfig add kerberos
    INFO: Logging all actions in /opt/oracle/BDAMammoth/bdaconfig/tmp/bda1node01-20131104072502.log and traces in /opt/oracle/BDAMammoth/bdaconfig/tmp/bda1node01-20131104072502.trc
         .
         .
         .
    INFO: Executing checkRoot.sh on nodes /opt/oracle/BDAMammoth/bdaconfig/tmp/all_nodes #Step -1#
    SUCCESS: Executed checkRoot.sh on nodes /opt/oracle/BDAMammoth/bdaconfig/tmp/all_nodes #Step -1#
    SUCCESS: Password-less root SSH is setup.
     
     Do you want to setup KDC on a BDA node (yes/no): yes
     
     Please enter the realm name: EXAMPLE.COM
     
     Enter password for Kerberos database: password
     Enter password again: password
    INFO: Executing changekrb5_kdc.sh on nodes bda1node01 #Step -1#
    SUCCESS: Executed changekrb5_kdc.sh on nodes bda1node01 #Step -1#
    SUCCESS: Successfully set the Kerberos configuration on the KDC
    INFO: Setting up Master KDC
    INFO: Executing setupKDC.sh on nodes bda1node01 #Step -1#
         .
         .
         .
    
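For reference, a KDC set up this way is described to clients through /etc/krb5.conf. The snippet below is only an illustration of that file's general shape, using the EXAMPLE.COM realm from the transcript and assuming the KDC runs on node01; Mammoth generates and distributes the real file, so treat every value here as a placeholder.

```shell
#!/bin/sh
# Illustration only: the general shape of a krb5.conf for the EXAMPLE.COM
# realm used above, assuming the KDC was set up on node01. Mammoth
# generates the real file; every value here is a placeholder.
krb5_conf='[libdefaults]
  default_realm = EXAMPLE.COM

[realms]
  EXAMPLE.COM = {
    kdc = bda1node01.example.com
    admin_server = bda1node01.example.com
  }'

printf '%s\n' "$krb5_conf"
```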

10.9 Reinstalling the Base Image

The operating system and various utilities are factory installed on Oracle Big Data Appliance, as described in "Oracle Big Data Appliance Management Software". You may need to reinstall this base image if, for example, you want to return Oracle Big Data Appliance to its original state, or you replaced a failed server. Mammoth automatically upgrades the base image as necessary before upgrading the Oracle Big Data Appliance software to a newer version.

You can reimage all or part of a rack. However, all the servers in a cluster must have the same image. Follow the appropriate instructions:

10.9.1 Reimaging a Single Oracle Big Data Appliance Server

Follow this procedure to reimage one server, for example, following replacement of a failed server.

Caution:

If you reimage a server, then all files and data are erased.

To reinstall the base image on one server: 

  1. Download the base image patch from My Oracle Support or Oracle Automated Release Updates (ARU), and copy it to the server being reimaged. You can use the base image downloaded with the Mammoth bundle, or download the separate base image patch.

    Caution:

    Use the most recent 2.x version of the base image. Do not use the version included in the Mammoth bundle.

    See "Downloading the Mammoth Software Deployment Bundle." You can take the same basic steps to download the base image patch from My Oracle Support.

  2. If you are reimaging the server to the current customer settings, verify that /opt/oracle/bda/BdaDeploy.json reflects the intended network configuration. If it does not, then generate a new file using Oracle Big Data Appliance Configuration Generation Utility. See "Generating the Configuration Files."

  3. Ensure that at least 4 GB of disk space is free in the root (/) partition:

    $ df -h
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/md2              161G   23G  130G  15% /
    /dev/md0              194M   40M  145M  22% /boot
    tmpfs                  24G     0   24G   0% /dev/shm
    /dev/sda4             1.7T  197M  1.7T   1% /u01
    /dev/sdb4             1.7T  197M  1.7T   1% /u02
    /dev/sdc1             1.8T  199M  1.8T   1% /u03
    /dev/sdd1             1.8T  199M  1.8T   1% /u04
         .
         .
         .
    
  4. Unzip the downloaded base image ZIP file. For example:

    $ unzip p19070502_260_Linux-x86-64.zip
    Archive:  p19070502_260_Linux-x86-64.zip
       creating: BDABaseImage-ol6-2.6.0_RELEASE/
      inflating: BDABaseImage-ol6-2.6.0_RELEASE/biosconfig
      inflating: BDABaseImage-ol6-2.6.0_RELEASE/makebdaimage
      inflating: BDABaseImage-ol6-2.6.0_RELEASE/reimagerack
      inflating: BDABaseImage-ol6-2.6.0_RELEASE/reimagecluster
     extracting: BDABaseImage-ol6-2.6.0_RELEASE/BDABaseImage-ol6-2.6.0_RELEASE.md5sum
      inflating: BDABaseImage-ol6-2.6.0_RELEASE/BDABaseImage-ol6-2.6.0_RELEASE.iso
       creating: BDABaseImage-ol6-2.6.0_RELEASE/Extras/
       creating: BDABaseImage-ol6-2.6.0_RELEASE/Extras/RCU/
      inflating: BDABaseImage-ol6-2.6.0_RELEASE/Extras/RCU/rcuintegration-11.1.1.7.0-1.x86_64.rpm
       creating: BDABaseImage-ol6-2.6.0_RELEASE/Extras/ODI/
      inflating: BDABaseImage-ol6-2.6.0_RELEASE/Extras/ODI/odiagent-11.1.1.7.0-1.x86_64.rpm
      inflating: BDABaseImage-ol6-2.6.0_RELEASE/README.txt
      inflating: BDABaseImage-ol6-2.6.0_RELEASE/ubiosconfig
    
  5. Change to the subdirectory created in the previous step:

    $ cd BDABaseImage-ol6-2.6.0_RELEASE
     
    
  6. Reimage the server using the makebdaimage command. The following example reimages server 4, including the internal USB, from the 2.6.0 base image to the custom settings in BdaDeploy.json. You must be logged in to the server being reimaged, which is server 4 in this example.

    ./makebdaimage --usbint BDABaseImage-ol6-2.6.0_RELEASE.iso /opt/oracle/bda/BdaDeploy.json 4
    

    See makebdaimage for the complete syntax of the command.

  7. If the makebdaimage command succeeds without errors, then restart the server.
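Steps 3 and 4 lend themselves to scripting. The following hypothetical sketch checks the 4 GB free-space requirement and demonstrates checksum verification against a .md5sum file such as the one shipped in the bundle; a scratch file stands in for the ISO so the example is self-contained.

```shell
#!/bin/sh
# Hypothetical pre-reimage checks, scripted. Part 1 covers the 4 GB
# free-space requirement from step 3; part 2 shows md5sum verification,
# with a scratch file standing in for the ISO.

# Part 1: available space in the root (/) partition, in whole gigabytes.
# df -P prints portable output; field 4 is available space in 1K blocks.
avail_gb=$(df -P / | awk 'NR==2 { printf "%d", $4 / (1024 * 1024) }')
if [ "$avail_gb" -ge 4 ]; then
    echo "root partition: ${avail_gb} GB free, OK"
else
    echo "root partition: only ${avail_gb} GB free" >&2
fi

# Part 2: checksum verification. md5sum -c exits non-zero if the file
# does not match its recorded checksum.
workdir=$(mktemp -d)
printf 'sample image payload' > "$workdir/image.iso"
( cd "$workdir" && md5sum image.iso > image.md5sum )
if ( cd "$workdir" && md5sum -c image.md5sum >/dev/null 2>&1 ); then
    ck=OK
else
    ck=FAILED
fi
echo "checksum: $ck"
rm -rf "$workdir"
```

In practice, you would run `md5sum -c BDABaseImage-ol6-2.6.0_RELEASE.md5sum` in the unzipped bundle directory instead of the scratch-file stand-in.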

10.9.2 Reimaging an Oracle Big Data Appliance Rack

Follow this procedure to reimage an entire rack.

Caution:

If you reimage an entire rack, then all clusters, files, and data on the rack are erased. Reimaging is not required for a software upgrade.

To reinstall the base image on all servers in a rack: 

  1. If the Oracle Big Data Appliance software was installed previously on the rack, then save the /opt/oracle/BDAMammoth/cluster_name-config.json file to a safe place outside Oracle Big Data Appliance.

  2. Download the most recent base image patch from My Oracle Support or Oracle Automated Release Updates (ARU), and copy it to the first (bottom) server of the rack being reimaged. You can use the base image downloaded with the Mammoth bundle, or download the separate base image patch.

    See "Downloading the Mammoth Software Deployment Bundle." You can take the same basic steps to download the base image patch from My Oracle Support.

    Caution:

    Use the most recent 2.x version of the base image. Do not use the version included in the Mammoth bundle.
  3. Establish an SSH connection to the first server and log in as root.

  4. If you are reimaging to existing customer network settings, then verify that /opt/oracle/bda/BdaDeploy.json reflects the intended network configuration. If it does not, then generate a new file using Oracle Big Data Appliance Configuration Generation Utility. See "Generating the Configuration Files."

  5. Ensure that passwordless SSH is set up:

    # dcli hostname
    192.168.41.37: bda1node01.example.com
    192.168.41.38: bda1node02.example.com
    192.168.41.39: bda1node03.example.com
         .
         .
         .
    

    This command must run without errors and return the host names of all Oracle Big Data Appliance servers. If not, then follow the steps in "Setting Up Passwordless SSH". Do not continue until the dcli hostname command runs successfully on all servers.

  6. Check all Oracle Big Data Appliance servers for hardware issues:

    # dcli bdacheckhw | grep -v SUCCESS
    
  7. Resolve any hardware errors and warnings before reimaging the rack.

  8. Verify that at least 4 GB are available in the root (/) partition of all servers:

    # dcli df -h /
    192.168.41.37: Filesystem            Size  Used Avail Use% Mounted on
    192.168.41.37: /dev/md2              161G   21G  132G  14% /
    192.168.41.38: Filesystem            Size  Used Avail Use% Mounted on
    192.168.41.38: /dev/md2              161G   19G  135G  12% /
    192.168.41.39: Filesystem            Size  Used Avail Use% Mounted on
    192.168.41.39: /dev/md2              161G   23G  131G  15% /
         .
         .
         .
    
  9. Unzip the downloaded base image ZIP file. For example:

    $ unzip p19070502_260_Linux-x86-64.zip
    Archive:  p19070502_260_Linux-x86-64.zip
       creating: BDABaseImage-ol6-2.6.0_RELEASE/
      inflating: BDABaseImage-ol6-2.6.0_RELEASE/biosconfig
      inflating: BDABaseImage-ol6-2.6.0_RELEASE/makebdaimage
      inflating: BDABaseImage-ol6-2.6.0_RELEASE/reimagerack
      inflating: BDABaseImage-ol6-2.6.0_RELEASE/reimagecluster
     extracting: BDABaseImage-ol6-2.6.0_RELEASE/BDABaseImage-ol6-2.6.0_RELEASE.md5sum
      inflating: BDABaseImage-ol6-2.6.0_RELEASE/BDABaseImage-ol6-2.6.0_RELEASE.iso
       creating: BDABaseImage-ol6-2.6.0_RELEASE/Extras/
       creating: BDABaseImage-ol6-2.6.0_RELEASE/Extras/RCU/
      inflating: BDABaseImage-ol6-2.6.0_RELEASE/Extras/RCU/rcuintegration-11.1.1.7.0-1.x86_64.rpm
       creating: BDABaseImage-ol6-2.6.0_RELEASE/Extras/ODI/
      inflating: BDABaseImage-ol6-2.6.0_RELEASE/Extras/ODI/odiagent-11.1.1.7.0-1.x86_64.rpm
      inflating: BDABaseImage-ol6-2.6.0_RELEASE/README.txt
      inflating: BDABaseImage-ol6-2.6.0_RELEASE/ubiosconfig
    
  10. Change to the subdirectory created in the previous step:

    $ cd BDABaseImage-ol6-2.6.0_RELEASE
     
    
  11. Complete one of the following procedures:

    • To reimage an Oracle Big Data Appliance that was configured for a customer network to the same customer network settings, execute the ./reimagerack command.

    • To reimage an appliance that still has the factory settings:

      1. Ensure that /opt/oracle/bda/BdaDeploy.json does not exist.

      2. Execute the ./reimagerack command.

    • To restore the factory network settings on a rack configured with custom network settings:

      1. Copy /opt/oracle/bda/BdaDeploy.json to a safe location outside Oracle Big Data Appliance.

      2. Ensure that /opt/oracle/bda/BdaShip.json exists.

      3. Reimage the rack:

        ./reimagerack deploy ship
        

    See reimagerack for the complete syntax of the command.

  12. Run Mammoth. See "Installing the Software."

10.9.3 Reimaging an Oracle Big Data Appliance Cluster

Follow this procedure to reimage a group of servers that the Mammoth utility has deployed as a cluster. The existing network settings are automatically reapplied after reimaging.

Caution:

If you reimage a cluster, then all files and data on the cluster are erased. Reimaging is not required for a software upgrade.

To reinstall the base image on all servers in a cluster: 

  1. Save the /opt/oracle/BDAMammoth/cluster_name-config.json file to a safe place outside Oracle Big Data Appliance.

  2. Download the most recent base image patch from My Oracle Support or Oracle Automated Release Updates (ARU), and copy it to the first server of the cluster being reimaged. You can copy the file to any directory, such as /tmp.

    You can use the base image downloaded with the Mammoth bundle, or download the separate base image patch.

    Caution:

    Use the most recent 2.x version of the base image. Do not use the version included in the Mammoth bundle.

    Depending on the configuration of clusters in the rack, the first server might be 1, 7, 10, or 13, counting from the bottom.

  3. Establish an SSH connection to the first server and log in as root.

  4. Verify that /opt/oracle/bda/BdaDeploy.json reflects the intended network configuration. If it does not, then generate a new file using Oracle Big Data Appliance Configuration Generation Utility. See "Generating the Configuration Files."

  5. Ensure that passwordless SSH is set up:

    # dcli -C hostname
    192.168.41.37: bda1node01.example.com
    192.168.41.38: bda1node02.example.com
    192.168.41.39: bda1node03.example.com
         .
         .
         .
    

    This command must run without errors and return the host names of all Oracle Big Data Appliance servers in the cluster. If not, then follow the steps in "Setting Up Passwordless SSH". Do not continue until the dcli -C hostname command runs successfully on all servers.

  6. Check all Oracle Big Data Appliance servers for hardware issues:

    # dcli -C bdacheckhw | grep -v SUCCESS
    
  7. Resolve any hardware errors and warnings before reimaging the cluster.

  8. Verify that at least 4 GB are available in the root (/) partition of all servers:

    # dcli -C df -h /
    192.168.41.37: Filesystem            Size  Used Avail Use% Mounted on
    192.168.41.37: /dev/md2              161G   21G  132G  14% /
    192.168.41.38: Filesystem            Size  Used Avail Use% Mounted on
    192.168.41.38: /dev/md2              161G   19G  135G  12% /
    192.168.41.39: Filesystem            Size  Used Avail Use% Mounted on
    192.168.41.39: /dev/md2              161G   23G  131G  15% /
         .
         .
         .
    
  9. Unzip the downloaded base image ZIP file (2 of 2). For example:

    $ unzip p19070502_260_Linux-x86-64_2of2.zip
    Archive:  p19070502_260_Linux-x86-64_2of2.zip
       creating: BDABaseImage-ol6-2.6.0_RELEASE/
      inflating: BDABaseImage-ol6-2.6.0_RELEASE/biosconfig
      inflating: BDABaseImage-ol6-2.6.0_RELEASE/reimagecluster
      inflating: BDABaseImage-ol6-2.6.0_RELEASE/reimagerack
      inflating: BDABaseImage-ol6-2.6.0_RELEASE/makebdaimage
      inflating: BDABaseImage-ol6-2.6.0_RELEASE/BDABaseImage-ol6-2.6.0_RELEASE.iso
      inflating: BDABaseImage-ol6-2.6.0_RELEASE/README.txt
     extracting: BDABaseImage-ol6-2.6.0_RELEASE/BDABaseImage-ol6-2.6.0_RELEASE.md5sum
      inflating: BDABaseImage-ol6-2.6.0_RELEASE/ubiosconfig
    
  10. Change to the subdirectory created in the previous step:

    $ cd BDABaseImage-ol6-2.6.0_RELEASE
     
    
  11. Reimage the cluster:

    ./reimagecluster
    

    See reimagecluster for the complete syntax of the command.

  12. Run Mammoth. See "Installing the Software."

10.10 Installing a One-Off Patch

One-off patch bundles provide a fix to specific bugs in one or more releases. You use Mammoth to apply the patch to your cluster.

To install a one-off patch bundle: 

  1. Download the patch bundle from the Automated Release Update (ARU) system to a directory such as /tmp on the first node of the Oracle Big Data Appliance cluster.

    The file is named BDA-patch-release-patch.zip. The examples in this procedure use the name BDA-patch-2.2.1-123456.zip.

  2. Unzip the file. For example:

    # unzip BDA-patch-2.2.1-123456.zip
    Archive:  BDA-patch-2.2.1-123456.zip
       creating: BDA-patch-2.2.1-123456/
      inflating: BDA-patch-2.2.1-123456/BDA-patch-2.2.1-123456.run 
      inflating: BDA-patch-2.2.1-123456/README.txt
    
  3. Change to the patch directory created in Step 2. For example:

    $ cd BDA-patch-2.2.1-123456
    
  4. Extract the contents of the run file. For example:

    $ ./BDA-patch-2.2.1-123456.run
    Big Data Appliance one-off patch 123456 for v2.2.1 Self-extraction
     
    Removing existing temporary files
     
    Generating /tmp/BDA-patch-2.2.1-123456.tar
    Verifying MD5 sum of /tmp/BDA-patch-2.2.1-123456.tar
    /tmp/BDA-patch-2.2.1-123456.tar MD5 checksum matches
     
    Extracting /tmp/BDA-patch-2.2.1-123456.tar to /opt/oracle/BDAMammoth/patches/123456
    Removing temporary files
     
    Please "cd /opt/oracle/BDAMammoth" before running "./mammoth -p 123456"
    
  5. Change to the BDAMammoth directory:

    $ cd /opt/oracle/BDAMammoth
    
  6. Install the patch. For example:

    $ ./mammoth -p 123456
    

    Alternatively, you can use the bdacli command. See "bdacli".
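The steps above can be collapsed into a dry-run script for review before you touch a cluster. This hypothetical sketch only prints the commands it would execute, using the BDA-patch-release-patch naming convention and the example values from this procedure:

```shell
#!/bin/sh
# Dry-run sketch of the one-off patch procedure: prints the commands it
# would run instead of executing them. The release and patch number are
# the example values from the text, not a real patch.
patch_commands() {
    release=$1
    patchnum=$2
    patch="BDA-patch-${release}-${patchnum}"
    echo "unzip ${patch}.zip"
    echo "cd ${patch}"
    echo "./${patch}.run"
    echo "cd /opt/oracle/BDAMammoth"
    echo "./mammoth -p ${patchnum}"
}

patch_commands 2.2.1 123456
```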

10.11 Mammoth Software Installation and Configuration Utility

You must log in as root on the first server and change to the /opt/oracle/BDAMammoth directory to use Mammoth. It has this syntax:

./mammoth option [cluster_name]

In this command, cluster_name is the name of the cluster. You must enter cluster_name in the first command exactly as it appears in the configuration file name (cluster_name-config.json). Afterward, cluster_name defaults to the rack specified in a previous mammoth command.

You must finish installing one rack before starting the installation of another rack.

Example 10-1 Mammoth Syntax Examples

This command displays Help for the Mammoth utility:

./mammoth -h

This command does a complete install on rack bda3:

./mammoth -i bda3

This command runs steps 2 through 6 on the rack being set up:

./mammoth -r 2-6

This command generates a parameter file to add six servers in an in-rack expansion kit, beginning with node07, to an existing cluster:

./mammoth -e node07 node08 node09 node10 node11 node12

10.11.1 Mammoth Options

The syntax of the mammoth command supports the configuration of new clusters and in-rack expansion kits. You can also use the Mammoth bundle to upgrade from earlier releases.

-c

Run the Oracle Big Data Appliance cluster checks.

-e newnode1, newnode2, newnode3...

Generates a parameter file for a group of servers being added to a cluster. The file is named cluster_name-config.json if the new servers are in a rack outside the cluster. Otherwise, Mammoth prompts for an in-rack expansion number from 1 to 5 that it uses in the name, such as mammoth-bda1-1.

To identify the new servers, list them on the command line. The servers can be in the same rack or multiple racks.

No passwords are included in the parameter file, so you must enter them when running Mammoth.

On Oracle NoSQL Database clusters, Mammoth prompts for the kind of zone for the new nodes. You can choose from an existing zone, a new primary zone, or a new secondary zone. When adding to an existing zone, Mammoth lists the zones that you can use. When creating a new zone, Mammoth prompts for the zone name and replication factor.

-h

Displays command Help including command usage and a list of steps.

-i cluster_name

Runs all mandatory steps on the cluster, equivalent to -r 1-18 for a full rack. Use this option when configuring a new rack or adding a group of servers to a cluster.

-l

Lists the steps of the Mammoth utility.

-p

Upgrades the software on the cluster to the current version or installs a one-off patch.

-r n-N

Runs steps n through N of the Mammoth, continuing as long as no errors occur.

-s n [cluster_name]

Runs step n. Enter cluster_name to identify another cluster on the same rack. See the -e option.

-v

Displays the version number of the Mammoth.

10.11.2 Mammoth Installation Steps

Following are descriptions of the steps that the Mammoth and the Mammoth Reconfiguration Utility perform when installing the software.

Step 1   PreinstallChecks

This step performs several tasks:

  • Validates the configuration files and prompts for the passwords.

  • Sets up a Secure Shell (SSH) for the root user so you can connect to all addresses on the administrative network without entering a password.

  • Sets up passwordless SSH for the root user on the InfiniBand network.

  • Generates /etc/hosts from the configuration file and copies it to all servers so they use the InfiniBand connections to communicate internally. The file maps private IP addresses to public host names.

  • Sets up an alias to identify the node where the Mammoth is run as the puppet master node. For example, if you run the Mammoth from bda1node01 with the IP address 192.168.41.1, then the list of aliases for that IP address includes bda1node01-master. The Mammoth uses Puppet for the software installation.

  • Checks the network timing on all nodes. If the timing checks fail, then there are unresolved names and IP addresses that will prevent the installation from running correctly. Fix these issues before continuing with the installation.

This step also performs a variety of hardware and software checks. A failure in any of these checks causes the Mammoth to fail:

  • The ARP cache querying time is 2 seconds or less.

  • All server clocks are synchronized within 10 seconds of the current server.

  • All servers succeeded on the last restart and generated a /root/BDA_REBOOT_SUCCEEDED file.

  • The bdacheckhw utility succeeds.

  • The bdachecksw utility succeeds.
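For illustration, the clock-synchronization check amounts to comparing each server's clock against the local one with a 10-second tolerance. The sketch below is hypothetical: the epoch values are made-up samples, whereas the real check gathers live times from the servers.

```shell
#!/bin/sh
# Illustration of the 10-second clock check: compare a server's clock
# with the local clock and flag anything outside the window. The epoch
# values below are made-up samples.
max_skew=10

check_skew() {
    local_now=$1
    remote_now=$2
    skew=$(( local_now - remote_now ))
    [ "$skew" -lt 0 ] && skew=$(( 0 - skew ))
    if [ "$skew" -le "$max_skew" ]; then
        echo "in sync (skew ${skew}s)"
    else
        echo "out of sync (skew ${skew}s)"
    fi
}

check_skew 1700000000 1700000004   # within 10 seconds
check_skew 1700000000 1700000042   # outside the window
```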

Step 2   SetupPuppet

This step configures puppet agents on all nodes and starts them, configures a puppet master on the node where the Mammoth is being run, waits for the agents to submit their certificates, and automates their signing. This step also optionally changes the root password on all nodes. After this step is completed, Puppet can deploy the software.

Puppet is a distributed configuration management tool that is commonly used for managing Hadoop clusters. The puppet master is a parent service and maintains a Puppet repository. A puppet agent operates on each Hadoop node.

A file named /etc/puppet/puppet.conf resides on every server and identifies the location of the puppet master.

Puppet operates in two modes:

  • Periodic pull mode, in which the puppet agents periodically contact the puppet master and ask for an update, or

  • Kick mode, in which the puppet master alerts the puppet agents that a configuration update is available, and the agents then ask for the update. Puppet operates in kick mode during the Mammoth installation.

In both modes, the puppet master must trust the agent. To establish this trust, the agent sends a certificate to the puppet master node where the sys admin process signs it. When this transaction is complete, the puppet master sends the new configuration to the agent.

For subsequent steps, you can check the Puppet log files on each server, as described in "What If an Error Occurs During the Installation?".

Step 3   PatchFactoryImage

Installs the most recent Oracle Big Data Appliance image and system parameter settings.

Step 4   CopyLicenseFiles

Copies third-party licenses to /opt/oss/src/OSSLicenses.pdf on every server, as required by the licensing agreements.

Step 5   CopySoftwareSource

Copies third-party software source code to /opt/oss/src/ on every server, as required by the licensing agreements.

Mammoth does not copy the source code to Oracle NoSQL Database clusters.

Step 6   CreateLogicalVolumes

Mammoth does not create logical volumes for Oracle NoSQL Database clusters.

Step 7   CreateUsers

Creates the hdfs and mapred users, and the hadoop group. It also creates the oracle user and the dba and oinstall groups.

The various packages installed in later steps also create users and groups during their installation.

See Also:

Oracle Big Data Appliance Software User's Guide for more information about users and groups.
Step 8   SetupMountPoints

The NameNode data is copied to multiple places to prevent loss of this critical information if a disk or an entire node fails.

Step 9   SetupMySQL

Installs and configures MySQL Database. This step creates the primary database and several databases on node03 for use by Cloudera Manager. It also sets up replication of the primary database to a backup database on node02.

Mammoth does not install MySQL Database on Oracle NoSQL Database clusters.

Step 10   InstallHadoop

Installs all packages in Cloudera's Distribution including Apache Hadoop (CDH) and Cloudera Manager. It then starts the Cloudera Manager server on node03 and configures the cluster.

Mammoth does not install CDH or Cloudera Manager on Oracle NoSQL Database clusters.

Step 11   StartHadoopServices

Starts the agents on all nodes and starts all CDH services. After this step, you have a fully functional Hadoop installation.

Cloudera Manager runs on port 7180 of node03. You can open it in a browser, for example:

http://bda1node03.example.com:7180

In this example, bda1node03 is the name of node03 and example.com is the domain. The default user name and password are both admin; the password is changed in Step 17.

Mammoth does not install or start CDH services on Oracle NoSQL Database clusters.

Step 12   InstallBDASoftware

Installs the server-side components of Oracle Big Data Connectors, if this option was selected in Oracle Big Data Appliance Configuration Generation Utility. Oracle Big Data Connectors must be licensed separately. Optional.

Installs the server-side components of Oracle Big Data SQL, if this option was selected in Oracle Big Data Appliance Configuration Generation Utility. For Oracle Big Data SQL support of Oracle NoSQL Database, this step installs the client libraries (kvclient.jar) on the CDH nodes. Oracle Big Data SQL must be licensed separately. Optional.

Installs Oracle NoSQL Database on clusters allocated to its use. Enterprise Edition requires a separate license.

Step 13   HadoopDataEncryption

Configures network and disk encryption.

Step 14   SetupKerberos

Configures Kerberos authentication on Oracle Big Data Appliance, if this option was selected. No prerequisites are required if you set up the key distribution center on Oracle Big Data Appliance. Otherwise, see "Installation Prerequisites."

Step 15   SetupEMAgent

Installs and configures the Oracle Enterprise Manager agents. Optional.

This step does the following:

  • Creates the following named credentials: Switch Credential, Host Credential, Cloudera Manager Credential, and ILOM Credential.

    In a cluster expansion, the same credentials are reused.

  • Updates the Oracle Big Data Appliance and Oracle Exadata Database Machine plug-in agents on the Oracle Big Data Appliance servers to the latest version deployed on the management servers.

  • Performs discovery of the cluster using the named credentials.

Note:

For this step to run successfully, Oracle Management System must be up and running. See Oracle Enterprise Manager System Monitoring Plug-in Installation Guide for Oracle Big Data Appliance.
Step 16   SetupASR

Installs and configures Auto Service Request (ASR). Optional.

This step does the following:

  • Installs the required software packages

  • Configures the trap destinations

  • Starts the monitoring daemon

To activate the assets from ASR Manager, see "Verifying ASR Assets".

Note:

For this step to run successfully, the ASR host system must be up with ASR Manager running and configured properly. See Chapter 5.
Step 17   CleanupInstall

Performs the following:

  • Changes the Cloudera Manager password if specified in the Installation Template.

  • Deletes temporary files created during the installation.

  • Copies log files from all nodes to subdirectories in /opt/oracle/bda/install/log.

  • Runs cluster verification checks, including TeraSort, to ensure that everything is working properly. It also generates an install summary. All logs are stored in a subdirectory under /opt/oracle/bda/install/log on node01.

Step 18   CleanupSSHroot (Optional)

Removes passwordless SSH for root that was set up in Step 1.

10.12 Mammoth Reconfiguration Utility Syntax

You must log in as root on the first server and change to the /opt/oracle/BDAMammoth directory to use the Mammoth Reconfiguration Utility. It has this syntax:

./mammoth-reconfig option parameter

Note:

  • Where parameter is a node name in the syntax examples, bda1 is the rack name, node is the server base name, and -adm is the administrative access suffix.

  • This utility uses the configuration settings stored in /opt/oracle/bda/install/state/config.json. When the utility makes a change, it modifies this file to reflect the new configuration.

  • The bdacli command provides an alternative way to call mammoth-reconfig. See "bdacli".

Options 

add | remove

Adds a service to, or removes a service from, the cluster.

This example adds Auto Service Request support to all servers in the cluster:

# cd /opt/oracle/BDAMammoth
# ./mammoth-reconfig add asr

This example removes Oracle Enterprise Manager support from all servers in the cluster:

# cd /opt/oracle/BDAMammoth
# ./mammoth-reconfig remove em

Table 10-1 describes the keywords that are valid parameters for add and remove.

Table 10-1 Mammoth Reconfiguration Utility ADD and REMOVE Keywords

Component Keyword    Description

asr                  Auto Service Request

auditvault           Oracle Audit Vault and Database Firewall plugin

bdc                  Oracle Big Data Connectors

big_data_sql         Oracle Big Data SQL

disk_encryption      Automatically encrypt and decrypt data on disk and at rest

em                   Oracle Enterprise Manager Cloud Control agent

kerberos             Kerberos authentication; manual removal only

network_encryption   Automatically encrypt data as it travels over the network. The cluster must be set up with Kerberos authentication.

sentry               Apache Sentry authorization


install node

Installs and configures software on the specified node.

failover node

Moves critical services from the specified node to a node with no critical services.

update

Changes the configuration of a service on the cluster. The parameter is a keyword that identifies the service. See Table 10-2.

Table 10-2 Mammoth Reconfiguration Utility UPDATE Keywords

Component Keyword Description

disk_encryption

Changes the password used for password-based encryption. The command prompts for both the old password and the new password.

A valid password consists of 1 to 64 printable ASCII characters. It cannot contain whitespace characters (such as spaces, tabs, or carriage returns), single or double quotation marks, or backslashes (\).
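The password rules above can be checked before running the update. The following sketch (not part of the utility itself) encodes the documented constraints: 1 to 64 printable ASCII characters, with no whitespace, quotation marks, or backslashes:

```python
import string

def valid_encryption_password(pw):
    """Check a disk_encryption password against the documented rules:
    1 to 64 printable ASCII characters, excluding whitespace,
    single or double quotation marks, and backslashes."""
    if not 1 <= len(pw) <= 64:
        return False
    # string.printable includes whitespace, so exclude it explicitly.
    forbidden = set("'\"\\") | set(string.whitespace)
    return all(c in string.printable and c not in forbidden for c in pw)
```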


10.13 Oracle Big Data Appliance Base Imaging Utilities

The following utilities are distributed in the base image bundle. To run the utilities, you must be logged in as root.

10.13.1 makebdaimage

Reinstalls the base image on a single server, a cluster, or an entire rack. Both reimagecluster and reimagerack call this utility.

The makebdaimage utility has this syntax:

makebdaimage [--usb | --usbint] [--noiloms] /path/BDABaseImage-version_RELEASE.iso /path/BdaDeploy.json target_servers

Options 

--usb | --usbint

Identifies the USB port that will be used for reimaging. Use --usb for the external USB drive, or --usbint for the internal drive. The internal USB drive contains a full Linux installation.

To use the --usbint option, you must be logged in to the target server; otherwise, you reimage the server using the wrong network information.

--noiloms

The reimaging process does not alter the Oracle ILOMs on the target servers.

target_servers

One or more servers where the image will be installed, which you specify using one of these formats:

  • node_number

    Identifies one server for reimaging. Enter the position of the server in the rack, from 1 to 18, counting from bottom to top.

  • from.json to.json

    Identifies the current configuration (from.json) and the desired configuration (to.json). The JSON files can be either the factory configuration in BdaShip.json or a custom configuration in BdaDeploy.json. You can find them in /opt/oracle/bda on configured servers.

    The to.json file can also be a concatenation of JSON files similar to BdaDeploy.json and BdaShip.json, but containing an extra ETH0_MACS array entry. The first entry in the concatenated file with a matching eth0_mac entry is used when reimaging. The to.json file is used as is.
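The first-matching-entry behavior for a concatenated to.json can be pictured as follows. This is an illustrative sketch only; the entry layout and field names are assumptions, not the actual to.json schema:

```python
def select_entry(concatenated, my_eth0_mac):
    """Pick the first entry whose ETH0_MACS array contains this
    server's eth0 MAC address, mirroring the documented
    first-match behavior. Entry layout is assumed for illustration."""
    for entry in concatenated:
        macs = [m.lower() for m in entry.get("ETH0_MACS", [])]
        if my_eth0_mac.lower() in macs:
            return entry
    return None
```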

10.13.2 reimagecluster

Reimages all servers in the cluster in parallel using dcli and makebdaimage.

The reimagecluster utility has this syntax:

reimagecluster [--no-iloms] [from.json [to.json]]

Prerequisites 

  • Verify that the following command returns the list of servers in the cluster:

    $ dcli -C hostname
    
  • Ensure that exactly one BDABaseImage-version_RELEASE*.iso file is in the current directory.

Options 

--no-iloms

The reimaging process does not alter the Oracle ILOMs on the target servers.

from.json

The full path to the current configuration file, either BdaShip.json or BdaDeploy.json.

This option defaults to /opt/oracle/bda/BdaDeploy.json. If BdaDeploy.json is missing, then the option defaults to /opt/oracle/bda/BdaShip.json. The servers must already be set to the values in the JSON file used by reimagecluster.

to.json

The full path to the new network configuration in effect after reimaging. It can be either a BdaDeploy.json or a BdaShip.json file.
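The default-and-fallback rule for from.json can be expressed as a short sketch. This mimics the documented behavior (prefer BdaDeploy.json, fall back to BdaShip.json) and is not the utility's actual code:

```python
import os

def default_from_json(bda_dir="/opt/oracle/bda"):
    """Mimic the documented default: prefer BdaDeploy.json and fall
    back to BdaShip.json when the former is missing. Returns None
    if neither file exists. (Illustrative sketch only.)"""
    for name in ("BdaDeploy.json", "BdaShip.json"):
        path = os.path.join(bda_dir, name)
        if os.path.exists(path):
            return path
    return None
```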

10.13.3 reimagerack

Reimages all servers in the rack in parallel using dcli and makebdaimage.

The reimagerack utility has this syntax:

reimagerack [--no-iloms] [--no-macs] [--hosts n1, n2...] [from.json [to.json]]

Prerequisites 

  • Verify that the following command returns the list of servers in the rack:

    $ dcli hostname
    
  • Ensure that exactly one BDABaseImage-version_RELEASE*.iso file is in the current directory.

Options 

--no-iloms

The reimaging process does not alter the Oracle ILOMs on the target servers.

--no-macs

The utility does not retrieve the server MAC addresses. Instead, it uses the InfiniBand cabling. Use this option only when restoring the factory settings; you must include both JSON files (from.json and to.json) in the command line.

--hosts n1, n2...

Restricts reimaging to the servers identified in a comma-delimited list of host names or IP addresses, whichever form dcli accepts. All servers must be in the list of target servers identified by the dcli -t option.

This option enforces the --no-macs option, so its restrictions also apply.
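Validating a --hosts value against the dcli target list can be sketched as below; this is an illustrative helper, not part of reimagerack:

```python
def parse_hosts(hosts_arg, dcli_targets):
    """Split a comma-delimited --hosts value and confirm every entry
    appears among the dcli target servers, as the option requires.
    Raises ValueError on an unknown host. (Illustrative sketch.)"""
    hosts = [h.strip() for h in hosts_arg.split(",") if h.strip()]
    unknown = [h for h in hosts if h not in set(dcli_targets)]
    if unknown:
        raise ValueError("not dcli targets: %s" % ", ".join(unknown))
    return hosts
```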

from.json

The full path to the current configuration file, either BdaShip.json or BdaDeploy.json.

This option defaults to /opt/oracle/bda/BdaDeploy.json. If BdaDeploy.json is missing, then the option defaults to /opt/oracle/bda/BdaShip.json. The servers must already be set to the values in the JSON file used by reimagerack.

to.json

The full path to the new network configuration in effect after reimaging. It can be either a BdaDeploy.json or a BdaShip.json file.