Big Data Cloud Console New Job: Configuration Page

You use the Big Data Cloud Console New Job: Configuration page to provide more details about the new job that you are about to create.

What You See in the Navigation Area

Element Description

< Previous

Click to navigate to the Big Data Cloud Console New Job: Details page.

Cancel

Click to cancel creating a new job.

Next >

Click to navigate to the Big Data Cloud Console New Job: Driver File page.

What You See in the Configuration Section

Element Description

Driver Cores

The total number of CPU cores that are assigned to a Spark driver process.

Driver Memory

The amount of memory assigned to a Spark driver process, in MB or GB.

This value cannot exceed the memory available on the driver host, which depends on the compute shape used for the cluster. Note that some memory is also reserved for supporting processes.

Executor Cores

The number of CPU cores made available for each Spark executor.

Executor Memory

The amount of memory made available for each Spark executor.

No. of Executors

The number of Spark executor processes that will be used to execute the job.
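
The resource settings above correspond to standard Spark configuration properties. As a rough illustration only (this is not the console's own submission mechanism, and all values shown are placeholders), the same sizing could be described programmatically with PySpark:

    # Sketch only: how the Configuration fields map onto standard Spark
    # property keys. Values are placeholders, not recommendations.
    from pyspark import SparkConf

    conf = (
        SparkConf()
        .set("spark.driver.cores", "1")        # Driver Cores
        .set("spark.driver.memory", "2g")      # Driver Memory (MB or GB)
        .set("spark.executor.cores", "2")      # Executor Cores
        .set("spark.executor.memory", "4g")    # Executor Memory
        .set("spark.executor.instances", "3")  # No. of Executors
        .set("spark.yarn.queue", "api")        # Queue (see the Queue element below)
    )

    print(conf.toDebugString())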

Queue

The name of the resource queue to which the job will be submitted. When a cluster is created, a set of queues is also created and configured by default. Which queues are created is determined by the queue profile specified when the cluster was created and by whether preemption was set to Off or On (the preemption setting can't be changed after a cluster is created).

If preemption was set to Off (disabled), the following queues are available by default:
  • dedicated: Queue used for all REST API and Zeppelin job submissions. Default capacity is 80, with a maximum capacity of 80.

  • default: Queue used for all Hive and Spark Thrift job submissions. Default capacity is 20, with a maximum capacity of 20.

If preemption was set to On (enabled), the following queues are available by default:
  • api: Queue used for all REST API job submissions. Default capacity is 50, with a maximum capacity of 100.

  • interactive: Queue used for all Zeppelin job submissions. Default capacity is 40, with a maximum capacity of 100. To allocate more of the cluster's resources to Notebook, increase this queue's capacity.

  • default: Queue used for all Hive and Spark Thrift job submissions. Default capacity is 10, with a maximum capacity of 100.
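
The capacity figures above are percentages of the cluster's resources. As a rough sketch only (assuming standard YARN Capacity Scheduler semantics, where capacity is the share guaranteed to a queue and maximum capacity is the ceiling it can grow to when spare resources exist; the cluster size below is made up), the preemption-enabled defaults translate as follows:

    # Sketch only: interpreting queue capacity percentages under the usual
    # YARN Capacity Scheduler semantics (capacity = guaranteed share,
    # maximum capacity = ceiling when spare resources are available).
    queues = {                 # defaults when preemption is On
        "api":         {"capacity": 50, "maximum": 100},
        "interactive": {"capacity": 40, "maximum": 100},
        "default":     {"capacity": 10, "maximum": 100},
    }

    cluster_cores = 48         # hypothetical total cluster cores

    for name, q in queues.items():
        guaranteed = cluster_cores * q["capacity"] / 100
        ceiling = cluster_cores * q["maximum"] / 100
        print(f"{name}: guaranteed {guaranteed:.0f} cores, can grow to {ceiling:.0f}")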

Available Cores

Available Memory

Displayed in the right-hand corner of the screen. Shows the cores and memory currently available in the cluster that can be allocated to the new job.
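
As a quick back-of-the-envelope check (a sketch only, with hypothetical numbers; the console performs its own validation), the job's total demand can be compared against the available figures shown on the page:

    # Sketch only: comparing a job's requested resources against the
    # Available Cores / Available Memory figures shown for the cluster.
    # All numbers are hypothetical.
    available_cores, available_memory_gb = 16, 56   # as shown on the page

    driver_cores, driver_memory_gb = 1, 2
    executor_cores, executor_memory_gb = 2, 4
    num_executors = 3

    requested_cores = driver_cores + executor_cores * num_executors
    requested_memory_gb = driver_memory_gb + executor_memory_gb * num_executors

    if requested_cores > available_cores or requested_memory_gb > available_memory_gb:
        print("Request exceeds what the cluster can currently allocate")
    else:
        print(f"Fits: {requested_cores} cores, {requested_memory_gb} GB requested")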