The table below describes the properties you should set for a single-node installation. You can modify bdd.conf in any text editor. Example excerpts showing typical values for several of these properties follow the table.
Configuration property | Description |
---|---|
ORACLE_HOME | The path to the directory BDD will be installed in. This directory must not already exist, and the system must contain at least 30GB of free space to create it. Additionally, its parent directories' permissions must be set to either 755 or 775. Note that this setting is different from the ORACLE_HOME environment variable required by the Studio database. |
ORACLE_INV_PTR | The absolute path to the Oracle inventory pointer file, which the installer will create when it runs. This file can't be located in the ORACLE_HOME directory. If you have any other Oracle software products installed, this file will already exist; update this property to point to it. |
INSTALLER_PATH | The absolute path to the installation source directory. |
DGRAPH_INDEX_DIR | The absolute path to the Dgraph databases. This directory shouldn't be located under ORACLE_HOME, or it will be deleted. The script will create this directory if it doesn't currently exist. If you're installing with existing databases, set this property to their parent directory. |
HADOOP_UI_HOST | The hostname of the machine running your Hadoop manager (Cloudera Manager, Ambari, or MCS). Set this to your machine's hostname. |
STUDIO_JDBC_URL | The JDBC URL for your Studio database, which Studio uses to connect to it. There are three templates for this property. Copy the template that corresponds to your database type to STUDIO_JDBC_URL and update the URL to point to your database (see the sample URLs after this table). |
WORKFLOW_MANAGER_JDBC_URL | The JDBC URL for the Workflow Manager Service database. There are two templates for this property. Copy the template that corresponds to your database type to WORKFLOW_MANAGER_JDBC_URL and update the URL to point to your database. Note that BDD doesn't currently support database migration. After deployment, the only ways to change to a different database are to reconfigure the database itself or reinstall BDD. |
INSTALL_TYPE | Determines the installation type according to your hardware and Hadoop distribution. Set this to one of the following: |
JAVA_HOME | The absolute path to the JDK install directory. This should have the same value as the $JAVA_HOME environment variable. If you have multiple versions of the JDK installed, be sure that this points to the correct one. |
TEMP_FOLDER_PATH | The temporary directory used by the installer. This must exist and contain at least 20GB of free space. |
HADOOP_UI_PORT | The port number for the Hadoop manager. |
HADOOP_UI_CLUSTER_NAME | The name of your Hadoop cluster, which is listed in the manager. Be sure to replace any spaces in the cluster name with %20. |
HUE_URI | The hostname and port for Hue, in the format <hostname>:<port>. This property is only required for HDP. |
HADOOP_CLIENT_LIB_PATHS | A comma-separated list of the absolute paths to the Hadoop client libraries. Note: You only need to set this property before installing if you have HDP or MapR. For CDH, the installer will download the required libraries and set this property automatically, which requires an internet connection. If the script is unable to download the libraries, it will fail; see Failure to download the Hadoop client libraries for instructions on solving this issue. To set this property, copy the template that corresponds to your Hadoop distribution to HADOOP_CLIENT_LIB_PATHS and update the paths to point to the libraries you copied to the install machine. Be sure to replace all instances of <UNZIPPED_XXX_BASE> with the absolute path to the correct library. Don't change the order of the paths in the list; they must be specified in the order they appear in the template. |
HADOOP_CERTIFICATES_PATH | Only required for Hadoop clusters with TLS/SSL enabled. The absolute path to the directory on the install machine where you put the certificates for HDFS, YARN, Hive, and the KMS. Don't remove this directory after installing, as you will use it if you have to update the certificates. |
ENABLE_KERBEROS | Enables Kerberos. If you have Kerberos 5+ installed, set this value to TRUE; if not, set it to FALSE. A sample Kerberos configuration appears after this table. |
KERBEROS_PRINCIPAL | The name of the BDD principal. This should include the name of your domain; for example, bdd-service@EXAMPLE.COM. This property is only required if ENABLE_KERBEROS is set to TRUE. |
KERBEROS_KEYTAB_PATH | The absolute path to the BDD keytab file. This property is only required if ENABLE_KERBEROS is set to TRUE. |
KRB5_CONF_PATH | The absolute path to the krb5.conf file. This property is only required if ENABLE_KERBEROS is set to TRUE. |
ADMIN_SERVER | The hostname of the WebLogic Admin Server. This will default to your machine's hostname, so you don't need to set it. |
MANAGED_SERVERS | The hostname of the Managed Server. Leave this set to ${ADMIN_SERVER}. |
DGRAPH_SERVERS | The Dgraph hostname. Leave this set to ${ADMIN_SERVER}. |
DGRAPH_THREADS | The number of threads the Dgraph starts with. This will default to the number of cores your machine has minus 2, so you don't need to set it. |
DGRAPH_CACHE | The size of the Dgraph cache, in MB. This will default to either 50% of your RAM or the total amount of free memory minus 2GB (whichever is larger), so you don't need to set it. |
ZOOKEEPER_INDEX | The index of the Dgraph cluster in the ZooKeeper ensemble, which ZooKeeper uses to identify it. |
HDFS_DP_USER_DIR | The location within the HDFS /user directory that stores the sample files created when Studio users export data. The installer will create this directory if it doesn't already exist. The name of this directory can't include spaces or slashes (/). |
YARN_QUEUE | The YARN queue Data Processing jobs are submitted to. |
HIVE_DATABASE_NAME | The name of the Hive database that stores the source data for Studio data sets. |
SPARK_ON_YARN_JAR | The absolute path to the Spark on YARN JAR on your Hadoop nodes. This will be added to the CLI classpath. There are two templates for this property. Copy the value of the template that corresponds to your Hadoop distribution to SPARK_ON_YARN_JAR and update its value as follows: |
TRANSFORM_SERVICE_SERVERS | A comma-separated list of the Transform Service nodes. For best performance, these should all be Managed Servers. In particular, they shouldn't be Dgraph nodes, as both the Dgraph and the Transform Service require a lot of memory. |
TRANSFORM_SERVICE_PORT | The port the Transform Service listens on for requests from Studio. |
ENABLE_CLUSTERING_SERVICE | For use by Oracle Support only. Leave this property set to FALSE. |
CLUSTERING_SERVICE_SERVERS | For use by Oracle Support only. Don't modify this property. |
CLUSTERING_SERVICE_PORT | For use by Oracle Support only. Don't modify this property. |
WORKFLOW_MANAGER_SERVERS | The Workflow Manager Service node. |
WORKFLOW_MANAGER_PORT | The port the Workflow Manager Service listens on for data processing requests. |
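The excerpt below is a sketch of how the path- and host-related properties from this table might be filled in for a single-node installation. Every value is a placeholder assumed for illustration (directories, hostnames, the JDK version, and the cluster name are not taken from any real environment), and the inline # comments are annotations for this guide only; substitute your own values and leave the rest of bdd.conf as shipped.

```
# Hypothetical single-node bdd.conf excerpt -- all paths and hostnames are placeholders.
ORACLE_HOME=/localdisk/Oracle/Middleware/BDD      # must not exist yet; 30GB free space required
ORACLE_INV_PTR=/localdisk/Oracle/oraInst.loc      # outside ORACLE_HOME
INSTALLER_PATH=/localdisk/bdd_installer           # unpacked installation source
DGRAPH_INDEX_DIR=/localdisk/dgraph_databases      # outside ORACLE_HOME
JAVA_HOME=/usr/java/jdk1.8.0_131                  # same JDK as the $JAVA_HOME environment variable
TEMP_FOLDER_PATH=/tmp                             # must exist; 20GB free space required
HADOOP_UI_HOST=bdd-host.example.com               # this machine's hostname
HADOOP_UI_PORT=7180                               # e.g. Cloudera Manager's default port
HADOOP_UI_CLUSTER_NAME=Cluster%201                # "Cluster 1" with the space replaced by %20
HDFS_DP_USER_DIR=edp_data                         # no spaces or slashes
ADMIN_SERVER=bdd-host.example.com                 # defaults to this machine's hostname
MANAGED_SERVERS=${ADMIN_SERVER}
DGRAPH_SERVERS=${ADMIN_SERVER}
```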
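For STUDIO_JDBC_URL and WORKFLOW_MANAGER_JDBC_URL, the templates to start from are the ones shipped in bdd.conf itself; the lines below only sketch what a filled-in URL typically looks like, using standard Oracle thin-driver and MySQL Connector/J URL syntax. The hostname dbhost.example.com, the ports, and the database/service names are assumptions; set only the line that matches your database type.

```
# Hypothetical filled-in JDBC URLs -- host, port, and database/service names are placeholders.

# Oracle database, thin driver, service name orcl.example.com:
STUDIO_JDBC_URL=jdbc:oracle:thin:@//dbhost.example.com:1521/orcl.example.com

# MySQL database named "studio":
STUDIO_JDBC_URL=jdbc:mysql://dbhost.example.com:3306/studio

# The Workflow Manager Service database URL uses the same format:
WORKFLOW_MANAGER_JDBC_URL=jdbc:mysql://dbhost.example.com:3306/workflow_manager
```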
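If you set ENABLE_KERBEROS to TRUE, the three related properties must be set as well. The sketch below assumes the bdd-service@EXAMPLE.COM principal named in the table above, a keytab copied to a hypothetical /localdisk/bdd directory, and the conventional /etc/krb5.conf location for the Kerberos configuration file.

```
# Hypothetical Kerberos settings -- realm, principal, and keytab location are placeholders.
ENABLE_KERBEROS=TRUE
KERBEROS_PRINCIPAL=bdd-service@EXAMPLE.COM        # include your domain in the principal name
KERBEROS_KEYTAB_PATH=/localdisk/bdd/bdd-service.keytab
KRB5_CONF_PATH=/etc/krb5.conf                     # the usual location on Linux systems

# With Kerberos disabled, only ENABLE_KERBEROS=FALSE is required.
```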