Big Data Cloud Console New Job: Driver File Page

You use the Big Data Cloud Console New Job: Driver File page to specify the job's driver file, its main class, command-line arguments, and any additional JARs or supporting files needed to run the job.

What You See in the Navigation Area

Element Description

< Previous

Click to navigate to the Big Data Cloud Console New Job: Configuration page.

Cancel

Click to cancel creating the new job.

Next >

Click to navigate to the Big Data Cloud Console New Job: Confirmation page.

What You See in the Driver File Section

Element Description

File Path

The path to the executable for the job.

Click Browse to select a file in HDFS or Cloud Storage, or to upload a file from your local file system. The file must have a .jar or .zip extension.

The Browse HDFS window also provides example files that you can browse to and try.

Main Class (for Spark and MapReduce jobs only)

The main class to run the job.

Arguments

Any arguments to pass to the main class. Specify one argument per line.
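The arguments entered here reach the driver's entry point in order, just as they would on a command line. A minimal sketch, assuming a hypothetical Python Spark driver that expects an input path and an output path (the paths and function name are illustrative, not part of the console):

```python
import sys

def main(argv):
    # argv holds the console arguments in the order they were listed,
    # one per line in the Arguments field (paths here are hypothetical).
    input_path, output_path = argv
    print("Reading from", input_path)
    print("Writing to", output_path)
    return input_path, output_path

if __name__ == "__main__" and len(sys.argv) > 1:
    main(sys.argv[1:])
```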

Additional Py Modules (for Python Spark jobs only)

Any Python dependencies required for the application. You can specify more than one file.

Click Browse to select a file in HDFS or Cloud Storage, or to upload a file from your local file system (.py file only).
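Modules attached this way become importable by name in the driver once the job runs. For example, a hypothetical helper module named utils.py (the module and function names are illustrative) might look like this:

```python
# utils.py -- a hypothetical helper module listed under Additional Py Modules
def normalize(record):
    """Strip surrounding whitespace and lowercase a text record."""
    return record.strip().lower()

# In the driver file, the module is then imported by name as usual:
#   from utils import normalize
print(normalize("  Hello World  "))  # -> hello world
```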

Additional Jars (optional)

Any JAR dependencies required for the application, such as Spark libraries. You can specify more than one file.

Click Browse to select a file in HDFS or Cloud Storage, or to upload a file from your local file system (.jar or .zip file only).

Additional Support Files (optional)

Any additional support files required for the application. You can specify more than one file.

Click Browse to select a file (.jar or .zip file only).