The table below describes the properties you should set for a single-node installation. You can modify bdd.conf in any text editor; an annotated sample excerpt follows the table.
| Configuration property | Description | 
|---|---|
| ORACLE_HOME | The path to the directory BDD will be installed in. This directory must not exist, and the system must contain at least 10GB of free space to create it. Additionally, its parent directories' permissions must be set to either 755 or 775. Note that this setting is different from the ORACLE_HOME environment variable required by the database. |
| ORACLE_INV_PTR | The absolute path to the Oracle inventory pointer file, which the installer will create when it runs. This file can't be located in the ORACLE_HOME directory. If you have any other Oracle software products installed, this file already exists; in that case, update this property to point to it. |
| INSTALLER_PATH | The absolute path to the installation source directory. | 
| DGRAPH_INDEX_DIR | The path to the directory on the shared NFS where the Dgraph index will be located. If you have an existing index, set this to its location; if you don't, set this to the location where you want the installer to create one. The script will create this directory if it doesn't already exist. Note that the specified directory shouldn't be located under ORACLE_HOME, or it will be deleted. |
| HADOOP_UI_HOST | The hostname of the machine running your Hadoop manager (Cloudera Manager or Ambari). Set this to your machine's hostname. | 
| STUDIO_JDBC_URL | The JDBC URL for your database, which Studio requires to connect to it. There are three templates for this property in bdd.conf. Copy the template that corresponds to your database type to STUDIO_JDBC_URL and update the URL to point to your database; see the URL examples after this table. |
| INSTALL_TYPE | Determines the installation type according to your hardware and Hadoop distribution. Set this to the documented value that matches your configuration. |
| CLUSTER_MODE | Determines whether you're installing on a single machine or a cluster. Set this to FALSE. | 
| JAVA_HOME | The absolute path to the JDK install directory. This should have the same value as the $JAVA_HOME environment variable. If you have multiple versions of the JDK installed, be sure that this points to the correct one. |
| TEMP_FOLDER_PATH | The temporary directory used by the installer. This must exist and contain at least 13GB of free space. | 
| HADOOP_UI_PORT | The port number for the Hadoop manager. | 
| HADOOP_UI_CLUSTER_NAME | The name of your Hadoop cluster, which is listed in the manager. Be sure to replace any spaces in the cluster name with %20. | 
| HUE_URI | The hostname and port for Hue, in the format <hostname>:<port>. This property is only required for HDP installations. | 
| HADOOP_CLIENT_LIB_PATHS | A comma-separated list of the absolute paths to the Hadoop client libraries. Note: You only need to set this property before installing if you have HDP. For CDH clusters, the installer will download the required libraries and set this property automatically; this requires an internet connection. If the script is unable to download the libraries, it will fail; see Failure to download the Hadoop client libraries for instructions on solving this issue. There are two HDP templates for this property. Copy the template that corresponds to your HDP version to HADOOP_CLIENT_LIB_PATHS and update the paths to point to the libraries you copied to the install machine. Don't change the order of the paths in the list, as they must be specified in the order in which they appear. |
| ENABLE_KERBEROS | Enables Kerberos. If you have Kerberos 5+ installed, set this value to TRUE; if not, set it to FALSE. | 
| KERBEROS_PRINCIPAL | The name of the BDD principal. This should include the name of your domain; for example, bdd-service@EXAMPLE.COM. This property is only required if ENABLE_KERBEROS is set to TRUE. |
| KERBEROS_KEYTAB_PATH | The absolute path to the BDD keytab file. This property is only required if ENABLE_KERBEROS is set to TRUE. | 
| KRB5_CONF_PATH | The absolute path to the krb5.conf file. This property is only required if ENABLE_KERBEROS is set to TRUE. | 
| ADMIN_SERVER | The hostname of the WebLogic Admin Server. This will default to your machine's hostname, so you don't need to set it. | 
| MANAGED_SERVERS | The hostname of the Managed Server. Leave this set to ${ADMIN_SERVER}. | 
| DGRAPH_SERVERS | The Dgraph hostname. Leave this set to ${ADMIN_SERVER}. | 
| DGRAPH_THREADS | The number of threads the Dgraph starts with. This will default to the number of cores your machine has minus 2, so you don't need to set it. | 
| DGRAPH_CACHE | The size of the Dgraph cache, in MB. This will default to either 50% of your RAM or the total amount of free memory minus 2GB (whichever is larger), so you don't need to set it. A sketch of how this default is derived appears after the table. |
| COORDINATOR_INDEX | The index of the Dgraph in the ZooKeeper ensemble, which ZooKeeper uses to identify it. Note that this property is not related to the Dgraph index. | 
| DGRAPH_INDEX_NAME | The name of the Dgraph index, which will be located in the directory defined by DGRAPH_INDEX_DIR. If you have an existing index, set this to its name, but don't include _indexes. If you don't have one, leave this set to base. Note: If your existing index happens to be named base, rename it before installing, or the installer will overwrite it with the empty indexes. |
| HDFS_DP_USER_DIR | The location within the HDFS /user directory that stores the Avro files created when users export data from BDD. The installer will create this directory if it doesn't already exist. The name of this directory can't include spaces or slashes (/). | 
| YARN_QUEUE | The YARN queue Data Processing jobs are submitted to. | 
| HIVE_DATABASE_NAME | The name of the Hive database that stores the source data for Studio data sets. | 
| SPARK_ON_YARN_JAR | The absolute path to the Spark on YARN jar. There are three templates for this property. Copy the value of the template that corresponds to your Hadoop distribution to SPARK_ON_YARN_JAR and update it to point to the jar's location on the install machine. |
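To make the table concrete, the following is a minimal sketch of what the single-node portion of bdd.conf might look like. All hostnames, paths, ports, and the JDBC URL are placeholder assumptions for illustration, not values from a real environment; the INSTALL_TYPE value depends on your hardware and Hadoop distribution, so it is left commented out here.

```sh
# Sketch of a single-node bdd.conf (placeholder values only -- adjust
# everything below to match your own environment).

ORACLE_HOME=/opt/bdd                         # must not exist yet; needs 10GB free
ORACLE_INV_PTR=/opt/oraInventory/oraInst.loc # must be outside ORACLE_HOME
INSTALLER_PATH=/localdisk/bdd_installer      # installation source directory
DGRAPH_INDEX_DIR=/share/bdd_index            # on the shared NFS

HADOOP_UI_HOST=bdd-node.example.com          # this machine's hostname
HADOOP_UI_PORT=7180                          # e.g., Cloudera Manager's default
HADOOP_UI_CLUSTER_NAME=My%20Cluster          # spaces replaced with %20

STUDIO_JDBC_URL=jdbc:mysql://bdd-node.example.com:3306/studio

# INSTALL_TYPE=...                           # set to the documented value for
                                             # your hardware and distribution
CLUSTER_MODE=FALSE                           # single-node install

JAVA_HOME=/usr/java/jdk1.8.0                 # should match $JAVA_HOME
TEMP_FOLDER_PATH=/tmp/bdd                    # must exist; needs 13GB free

ENABLE_KERBEROS=FALSE                        # TRUE only with Kerberos 5+ installed

MANAGED_SERVERS=${ADMIN_SERVER}              # leave as-is on a single node
DGRAPH_SERVERS=${ADMIN_SERVER}               # leave as-is on a single node
DGRAPH_INDEX_NAME=base                       # default name for a new, empty index

HDFS_DP_USER_DIR=bdd_data                    # no spaces or slashes
YARN_QUEUE=default
HIVE_DATABASE_NAME=default
```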
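For STUDIO_JDBC_URL specifically, copy the template shipped in your bdd.conf rather than writing the URL from scratch. As a rough guide, the standard JDBC URL shapes for two common database types look like this (hostnames, ports, and database/service names below are placeholders, and the third template is whatever your bdd.conf provides):

```sh
# Oracle database (thin driver, service name) -- placeholder values:
# STUDIO_JDBC_URL=jdbc:oracle:thin:@//db.example.com:1521/orcl

# MySQL -- placeholder values:
# STUDIO_JDBC_URL=jdbc:mysql://db.example.com:3306/studio
```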
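The DGRAPH_CACHE default described above (the larger of 50% of total RAM and total free memory minus 2GB) can be illustrated with a small shell sketch. This assumes a Linux machine with /proc/meminfo and is only a rough illustration of the documented rule, not the installer's actual logic:

```sh
#!/bin/sh
# Rough illustration of the documented DGRAPH_CACHE default:
# the larger of (50% of total RAM) and (free memory minus 2GB), in MB.
total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
free_kb=$(awk '/^MemFree:/ {print $2}' /proc/meminfo)
half_total_mb=$(( total_kb / 2 / 1024 ))
free_minus_2g_mb=$(( free_kb / 1024 - 2048 ))
if [ "$half_total_mb" -ge "$free_minus_2g_mb" ]; then
  echo "DGRAPH_CACHE=$half_total_mb"
else
  echo "DGRAPH_CACHE=$free_minus_2g_mb"
fi
```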