Running a Pipeline

Executes a Pipeline based on job parameters and variables that you select.

The pipeline job type runs a Pipeline using the list of variables defined for the Pipeline in the Data Integration user interface; the variable list depends on how many variables have been defined for that Pipeline.

Prerequisites:

  • The Pipeline must already be defined before you can run it.

  • You must have the required privileges to execute a Pipeline.

REST Resource

/aif/rest/{api_version}/jobs

Required Roles

Service Administrator

Request

Supported Media Types: application/json

Method:

POST

Payload:

{
    "jobName": "DAILYLOAD",
    "jobType": "pipeline",
    "variables": {
                    "STARTPERIOD": "Jan-23",
                    "ENDPERIOD": "Jan-23",
                    "IMPORTMODE": "Replace",
                    "EXPORTMODE": "Merge",
                    "ATTACH_LOGS": "N",
                    "SEND_MAIL": "ALWAYS",
                    "SEND_TO": "user@company.com"
                }
}
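The request above can be sketched as a small Python client. This is a minimal sketch, not an official client: the base URL and authorization header are placeholders, and only the payload assembly is shown end to end.

```python
import json
import urllib.request

def build_pipeline_payload(job_name, variables):
    """Assemble the jobs REST payload for a Pipeline run."""
    return {
        "jobName": job_name,
        "jobType": "pipeline",
        "variables": dict(variables),
    }

def submit_pipeline(base_url, auth_header, payload):
    """POST the payload to the jobs resource.

    base_url and auth_header are placeholders; supply your service URL
    and a Basic or OAuth Authorization header value.
    """
    req = urllib.request.Request(
        base_url + "/aif/rest/V1/jobs",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": auth_header,
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Build the same payload as the example above.
payload = build_pipeline_payload(
    "DAILYLOAD",
    {
        "STARTPERIOD": "Jan-23",
        "ENDPERIOD": "Jan-23",
        "IMPORTMODE": "Replace",
        "EXPORTMODE": "Merge",
        "ATTACH_LOGS": "N",
        "SEND_MAIL": "ALWAYS",
        "SEND_TO": "user@company.com",
    },
)
print(json.dumps(payload, indent=2))
```

Only variables you want to override need to be passed; the rest fall back to the defaults set in the Pipeline definition.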

REST Payload Description

The following table summarizes the REST payload.

Table 15-7 Parameters

api_version (Type: Path; Required: Yes)

V1

jobType (Type: JSON payload; Required: Yes)

PIPELINE
jobName (Type: JSON payload; Required: Yes)

The Pipeline code defined for the Pipeline in Data Integration. The code must contain between 3 and 30 alphanumeric characters and cannot be updated after the Pipeline is created.
variables (Type: JSON payload; Required: No)

Names of the variables used in the Pipeline. The list depends on how many variables have been defined in the Pipeline. The default out-of-box variables include:

  • STARTPERIOD
  • ENDPERIOD
  • IMPORTMODE
  • EXPORTMODE
  • ATTACH_LOGS
  • SEND_MAIL
  • SEND_TO
STARTPERIOD (Type: JSON payload; Required: Yes)

The first period for which data is loaded. This period name must be defined in the Data Integration period mapping.

You can also specify a Planning substitution variable instead of the actual Year/Month member names for the start period. The convention is {Month#&CurYr}{&FcstMonth#&CurYr}; for example, {Jan#&CurYr}{&FcstMonth#&CurYr}. A combination of actual member names and substitution variables is supported.

This parameter is supported in the Planning, Tax Reporting, and Financial Consolidation and Close business processes. It is functional for both your service applications and cloud deployments derived from on-premises data sources.
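The {Month#&CurYr} convention above can be illustrated with a small helper. The function name is hypothetical, not part of the REST API; it only shows how a period token is assembled from a month name (or substitution variable) and a year substitution variable.

```python
def period_token(month, year_sub_var):
    """Build a period value such as {Jan#&CurYr} from a month name
    (or a substitution variable like &FcstMonth) and a year
    substitution variable like &CurYr."""
    return "{%s#%s}" % (month, year_sub_var)

print(period_token("Jan", "&CurYr"))         # actual month, substituted year
print(period_token("&FcstMonth", "&CurYr"))  # both parts substituted
```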
ENDPERIOD (Type: JSON payload; Required: Yes)

The last period for which data is loaded. This period name must be defined in the Data Integration period mapping.

You can also specify a Planning substitution variable instead of the actual Year/Month member names for the end period. The convention is {Month#&CurYr}{&FcstMonth#&CurYr}; for example, {Jan#&CurYr}{&FcstMonth#&CurYr}. A combination of actual member names and substitution variables is supported.

This parameter is supported in the Planning, Tax Reporting, and Financial Consolidation and Close business processes. It is functional for both your service applications and cloud deployments derived from on-premises data sources.
IMPORTMODE (Type: JSON payload; Required: Yes)

Determines how the data is imported into Data Integration. Acceptable values are:

  • Append—Add to the existing POV data in Data Integration.

  • Replace—Delete the POV data and replace it with the data from the file.

  • Map and Validate—Skip importing the data, but reprocess the data with updated mappings and logic accounts.

  • No Import—Skip the data import into the Data Integration staging table.
EXPORTMODE (Type: JSON payload; Required: Yes)

Determines how the data is exported from Data Integration to the target application.

Acceptable values for Planning business processes are:

  • Merge—Merge the data in the Data Integration staging table with the existing Planning data.

  • Replace—Clear the POV data and replace it with data in the Data Integration staging table. The data is cleared for the Scenario, Version, Year, Period, and Entity dimensions.

  • Accumulate—Add the data in the Data Integration staging table to Planning.

  • No Export—Skip the data export from Data Integration to Planning.

Acceptable values for Financial Consolidation and Close and Tax Reporting are:

  • Merge—Merge the data in the staging table with the data in the Financial Consolidation and Close or Tax Reporting application. If data already exists in the application, the system overwrites it with the new data from the load file. If data does not exist, the new data is created.

  • Replace—Delete the POV data and replace it with the data from the file.

  • No Export—Skip the data export from Data Integration to Financial Consolidation and Close or Tax Reporting.
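Because the acceptable EXPORTMODE values differ by business process, a client-side sanity check can catch typos before the job is submitted. This check is an assumption of good practice, not part of the REST API; the process keys are illustrative names.

```python
# Acceptable IMPORTMODE values (same for all business processes).
IMPORT_MODES = {"Append", "Replace", "Map and Validate", "No Import"}

# Acceptable EXPORTMODE values keyed by an illustrative process name:
# "planning" for Planning, "fccs_tr" for Financial Consolidation and
# Close / Tax Reporting (which does not support Accumulate).
EXPORT_MODES = {
    "planning": {"Merge", "Replace", "Accumulate", "No Export"},
    "fccs_tr": {"Merge", "Replace", "No Export"},
}

def validate_modes(import_mode, export_mode, process="planning"):
    """Return True when both modes are acceptable for the business process."""
    return import_mode in IMPORT_MODES and export_mode in EXPORT_MODES[process]

print(validate_modes("Replace", "Merge"))                  # valid for Planning
print(validate_modes("Replace", "Accumulate", "fccs_tr"))  # invalid for FCC/TR
```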
ATTACH_LOGS (Type: JSON payload; Required: No)

Determines whether logs are attached to the notification email:

  • Yes—Logs are zipped and included as an attachment to the email, which can then be downloaded.

  • No—Logs are not included as an attachment to the email.
SEND_MAIL (Type: JSON payload; Required: No)

Determines when an email is sent after a Pipeline run. Options include:

  • Always

  • No—Default value

  • On Failure

  • On Success

For variables, the default value is set in the Pipeline definition. To override an individual variable, pass it in the JSON payload; for example, STARTPERIOD.
SEND_TO (Type: JSON payload; Required: No)

Determines the recipient email IDs for the email notification. Email IDs are comma separated.
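Since SEND_TO takes a single comma-separated string rather than a JSON array, a small formatting helper avoids mistakes. The helper name and addresses are illustrative only.

```python
def format_send_to(recipients):
    """Join recipient email IDs into the comma-separated SEND_TO string."""
    return ",".join(recipients)

print(format_send_to(["user@company.com", "admin@company.com"]))
# -> user@company.com,admin@company.com
```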