Sizing the Data Flow Application

Every time you run a Data Flow Application, you specify the size and number of executors, which in turn determine the number of OCPUs used to run the Spark application.

An OCPU is equivalent to a CPU core, which itself is equivalent to two vCPUs. See Compute Shapes for more information on how many OCPUs each shape contains.

A rough guide is to assume 10 GB of data processed per OCPU per hour. Jobs that use optimized data formats such as Parquet appear to run much faster because only a small subset of the data is actually read. Assuming 10 GB of data processed per OCPU per hour, the formula to calculate the number of OCPUs needed is:
<Number_of_OCPUs> = <Processed_Data_in_GB> / (10 * <Desired_runtime_in_hours>)
For example, to process 1 TB of data with an SLA of 30 minutes, expect to use about 200 OCPUs:
<Number_of_OCPUs> = 1024 / (10 * 0.5) = 204.8

You can allocate 200 OCPUs in various ways. For example, you can select an executor shape of VM.Standard2.8 and 25 total executors for 8 * 25 = 200 total OCPUs.
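
In code, the same estimate looks like the following Python sketch. The 10 GB per OCPU per hour rate, the 1 TB workload, the 30-minute SLA, and the 8 OCPUs per VM.Standard2.8 executor come from the example above; everything else is illustrative and not part of any Data Flow API.

import math

# Rough sizing guide: assume about 10 GB of data processed per OCPU per hour.
GB_PER_OCPU_HOUR = 10

def estimate_ocpus(processed_data_gb, desired_runtime_hours):
    # <Number_of_OCPUs> = <Processed_Data_in_GB> / (10 * <Desired_runtime_in_hours>)
    return processed_data_gb / (GB_PER_OCPU_HOUR * desired_runtime_hours)

def executors_needed(total_ocpus, ocpus_per_executor):
    # Number of executors of a given shape that cover the OCPU estimate.
    return math.ceil(total_ocpus / ocpus_per_executor)

ocpus = estimate_ocpus(1024, 0.5)     # 1 TB with a 30-minute SLA -> 204.8
# Rounding to roughly 200 OCPUs, 25 VM.Standard2.8 executors (8 OCPUs each) cover it.
executors = executors_needed(200, 8)  # -> 25
print(ocpus, executors)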

This formula is a rough estimate, and actual run times might differ. You can better estimate a workload's real processing rate by loading the Application and viewing the history of its Runs. The history shows the number of OCPUs used, the total data processed, and the run time for each Run, letting you estimate the resources you need to meet your SLAs. From there, you estimate the amount of data a Run processes and size the Run appropriately.
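
As a worked illustration of that approach, the short Python sketch below turns numbers observed in a Run's history into a measured processing rate and a new OCPU estimate. All the observed figures are hypothetical; substitute the values you see for your own Runs.

# Hypothetical figures read from a previous Run of the Application:
observed_data_gb = 850        # total data processed by the Run
observed_ocpus = 64           # OCPUs used by the Run
observed_runtime_hours = 1.4  # how long the Run took

# Measured processing rate for this workload, in GB per OCPU per hour.
rate = observed_data_gb / (observed_ocpus * observed_runtime_hours)

# Size the next Run against an SLA using the measured rate instead of the 10 GB default.
target_data_gb = 1024
sla_hours = 0.5
ocpus_needed = target_data_gb / (rate * sla_hours)
print(round(rate, 1), round(ocpus_needed))  # roughly 9.5 GB/OCPU/hour and about 216 OCPUs
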
Note

The number of OCPUs is limited by the VM shape you choose and by the value set in your tenancy for VM.Total. You can't use more VMs across all VM shapes than the value of VM.Total. For example, if each VM shape limit is set to 20 and VM.Total is set to 20, you can't use more than 20 VMs across all the VM shapes combined. With flexible shapes, where the limit is measured in cores or OCPUs, 80 cores in a flexible shape are equivalent to 10 VM.Standard2.8 shapes. See Service Limits for more information.

Flexible Compute Shapes

Data Flow supports flexible compute shapes for Spark jobs.

The following flexible compute shapes are supported:
  • VM.Standard3.Flex (Intel)
  • VM.StandardE3.Flex (AMD)
  • VM.StandardE4.Flex (AMD)
  • VM.Standard.A1.Flex (Arm processor from Ampere)
Learn more about flexible compute shapes from the Compute documentation.
When you create or edit an application, select the flexible shape for both the driver and the executor. For each OCPU selection, you can choose a flexible memory option.
Note

The driver and executor must have the same shape.
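
Outside the Console, the same selection can be made through the OCI Python SDK when creating an application. The following is a minimal sketch, assuming the SDK's Data Flow models expose driver_shape_config and executor_shape_config for flexible shapes; the OCIDs, file URI, Spark version, and OCPU and memory values are placeholders to adapt to your own tenancy.

import oci

config = oci.config.from_file()  # reads ~/.oci/config by default
client = oci.data_flow.DataFlowClient(config)

# The driver and executor must use the same flexible shape; only their
# OCPU and memory configurations can differ.
details = oci.data_flow.models.CreateApplicationDetails(
    compartment_id="ocid1.compartment.oc1..<placeholder>",
    display_name="sized-etl-job",
    spark_version="3.2.1",
    language="PYTHON",
    file_uri="oci://bucket@namespace/path/to/job.py",
    driver_shape="VM.Standard3.Flex",
    driver_shape_config=oci.data_flow.models.ShapeConfig(ocpus=4, memory_in_gbs=64),
    executor_shape="VM.Standard3.Flex",
    executor_shape_config=oci.data_flow.models.ShapeConfig(ocpus=8, memory_in_gbs=128),
    num_executors=25,
)

application = client.create_application(details).data
print(application.id)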

Migrating Applications from VM.Standard2 Compute Shapes

Follow these steps when migrating your existing Data Flow applications from VM.Standard2 to flexible compute shapes.

  1. Request the limits for your choice of flexible shape.
    OCPU count defines the limits for flexible shapes. With VM.Standard2 compute shapes, node count defined the limits. For example, if you have an application that uses 16 OCPUs for the driver and 16 OCPUs for one executor, request 32 OCPUs in your limit increase request.
  2. (Optional) If you expect to run more concurrent jobs across different shapes, request a higher VM.Total limit.
  3. When you create or edit an application, select the flexible shape for the driver and executor.
    Note

    The driver and executor must have the same shape.
  4. (Optional) For each OCPU selection, choose the flexible memory option, as illustrated in the sketch after these steps.
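
For reference, the same migration can be expressed through the OCI Python SDK as a minimal sketch, assuming UpdateApplicationDetails exposes the flexible shape configuration fields; the application OCID and all shape, OCPU, and memory values are placeholders, and a real migration would review the remaining application properties as well.

import oci

config = oci.config.from_file()
client = oci.data_flow.DataFlowClient(config)

# Before: VM.Standard2.8 driver and executors (8 OCPUs each, fixed memory).
# After: the same OCPU counts expressed through a flexible shape.
update = oci.data_flow.models.UpdateApplicationDetails(
    driver_shape="VM.Standard3.Flex",
    driver_shape_config=oci.data_flow.models.ShapeConfig(ocpus=8, memory_in_gbs=128),
    executor_shape="VM.Standard3.Flex",  # must match the driver shape
    executor_shape_config=oci.data_flow.models.ShapeConfig(ocpus=8, memory_in_gbs=128),
    num_executors=2,
)

client.update_application(
    application_id="ocid1.dataflowapplication.oc1..<placeholder>",
    update_application_details=update,
)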