Automation Integrations for EPM Cloud

The following pre-built integrations are available.

To use the pre-built EPM Cloud Integrations, you must specify parameters for the integration. Many parameters for automated integrations are selectable from drop-down lists, which eliminates the need to manually enter values. For example, to run a rule or ruleset, you can select from a list of business rules, such as ForceConsolidate or ForceTranslate.

Automation Integrations for EPM Cloud Platform

Integration Name | Module | Description | Parameters/Description
Copy File from Financial Consolidation and Close All EPM Cloud Services except Enterprise Data Management

Copies a file from the current service, where Task Manager is configured, to another EPM Cloud service.

For example, if you have configured Task Manager in Financial Consolidation and Close and set up an Account Reconciliation connection, Copy File from Financial Consolidation and Close copies the file from Financial Consolidation and Close to Account Reconciliation.

File Name: Name of the file that you want to copy.

Save File As: Name of the file that you want to save. This can be different from the original file name.

External Directory Name (Optional): Name of the directory.

Copy File to Financial Consolidation and Close All EPM Cloud Services except Enterprise Data Management Copies a file to the current service, where Task Manager is configured, from another EPM Cloud service.

File Name: Name of the file that you want to copy.

Save File As: Name of the file that you want to save. This can be different from the original file name.

External Directory Name (Optional): Name of the directory.

Delete File From Financial Consolidation and Close All EPM Cloud Services except Enterprise Data Management Deletes a file from an EPM Cloud service.

File Name: Name of the file that you want to delete.

Lock Unlock Data Integration All EPM Cloud Services except Enterprise Data Management Locks or unlocks an integration for a location, category and period in Data Exchange. This is a process-automated integration.

Operation: Choose Lock or Unlock.

Lock Type: Choose whether the Lock/Unlock operation is for an application or a location.

Period: Specify the period of the POV from the integration or data load rule defined in Data Exchange, for example, "Jan-21".

Category: Specify the predefined Scenario value based on the POV Category from the integration (data rule) definition. The categories available are those that are created in the Data Integration set-up, such as "Actual."

Application (Optional): If the selected Lock Type is application, specify the name of the application, for example, "Vision".

Location (Optional): If selected Lock Type is location, specify the name of the location. If the location is locked, data cannot be loaded to it.

Unlock By Location (Optional): This parameter can be specified when the selected Operation is Lock and the selected Lock Type is Application.

If checked when locking the target application, the system locks all rules present in locations under the target application rather than applying an application-level lock.

For more information, see Lock and Unlock POV.
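As an illustration of how these parameter rules fit together, here is a minimal Python sketch that validates a Lock/Unlock parameter set before submission. The validation rules come from the table above; the dictionary field names are hypothetical, for illustration only:

```python
def build_lock_unlock_request(operation, lock_type, period, category,
                              application=None, location=None):
    """Validate a Lock/Unlock Data Integration parameter set.

    The rules mirror the parameter table above; the field names in the
    returned dictionary are hypothetical, not a product API.
    """
    if operation not in ("lock", "unlock"):
        raise ValueError("Operation must be 'lock' or 'unlock'")
    if lock_type == "application" and not application:
        raise ValueError("Application is required when Lock Type is application")
    if lock_type == "location" and not location:
        raise ValueError("Location is required when Lock Type is location")
    request = {"operation": operation, "lockType": lock_type,
               "period": period, "category": category}
    if application:
        request["application"] = application
    if location:
        request["location"] = location
    return request

# Example: lock the "Vision" application for the Jan-21 / Actual POV.
req = build_lock_unlock_request("lock", "application", "Jan-21", "Actual",
                                application="Vision")
```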
Run Data Integration All EPM Cloud Services except Enterprise Data Management and Profitability and Cost Management Executes an integration or data load rule based on how periods are processed and on source filters. This makes it easy to incorporate data loads defined in Data Exchange into the monthly processing schedule.

Job Type: Integration is the job type.

Integration Name: The name of the integration defined in Data Integration.

Period Name: Name of the period.

Import Mode: Determines how the data is imported into Data Integration.

Export Mode: Determines how the data is exported into Data Integration.

File Name: Applicable only for native file-based data loads and ignored if specified for other loads.

Source Filters: A parameter used to update the source filters defined for the data load rule or integration.

Target Options: A parameter used to update the target options defined for the data load rule or integration.

Execution Mode: Applicable only for Quick Mode integrations.

For more details about these parameters, see Run Integrations in the REST API for Oracle Enterprise Performance Management Cloud Guide.
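The REST API guide cited above defines the authoritative request body. As a rough sketch, the parameters in this row might be assembled into a job payload like this; the field names (jobType, periodName, and so on) and the default mode values here are assumptions for illustration:

```python
import json

def build_run_integration_payload(integration_name, period_name,
                                  import_mode="Replace", export_mode="Merge",
                                  file_name=None, source_filters=None):
    """Assemble an illustrative Run Data Integration job body.

    Field names and defaults are assumptions; see "Run Integrations" in
    the REST API guide for the real request contract.
    """
    payload = {
        "jobType": "INTEGRATION",
        "jobName": integration_name,
        "periodName": period_name,
        "importMode": import_mode,
        "exportMode": export_mode,
    }
    if file_name:  # only used by native file-based data loads
        payload["fileName"] = file_name
    if source_filters:  # overrides the filters defined on the integration
        payload["sourceFilters"] = source_filters
    return json.dumps(payload)

# Hypothetical integration name and period:
body = build_run_integration_payload("DailyGLLoad", "Jan-21")
```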
Run Pipeline All EPM Cloud Services except Enterprise Data Management and Account Reconciliation Executes a pipeline based on job parameters and variables that you select.

Job Type: Pipeline is the job type.

Job Name: Pipeline code defined for the Pipeline in Data Integration.

Start Period: The first period for which data is to be loaded. This period name must be defined in Data Integration Period mapping.

End Period: The last period for which data is to be loaded. This period name must be defined in Data Integration period mapping.

Import Mode: Determines how the data is imported into Data Integration.

Export Mode: Determines how the data is exported into Data Integration.

Attach Logs: Indicates whether logs are included as an attachment to an email.

Send Email: Determines when an email is sent when a Pipeline is run.

Send To: Determines the recipient email ID for the email notification.

For more details about these parameters, see Running a Pipeline in the REST API for Oracle Enterprise Performance Management Cloud Guide.

See also, Copy and Delete Integration Files

Automation Integrations for Account Reconciliation

Integration Name | Module | Description | Parameters/Description
Change Period Status Reconciliation Compliance

Changes the status of a period (Open, Closed, Pending, Locked).

Period: The name of the period

Status: Pending, Open, Closed, Locked

Create Period End Reconciliations Reconciliation Compliance

Copies all selected profiles to a period and returns success or failure status.

Period: The name of the period

Filter: The name of the filter that matches the reconciliation

Import Balances Reconciliation Compliance

Imports balance data using Data Management from a previously created Data Load definition.

Period: The name of the period

dl_Definition: The name of a previously saved data load using the format DL_name such as DL_test

Import Pre-Mapped Balances Reconciliation Compliance

Imports pre-mapped balances.

Period: The name of the period

BalanceType: SUB|SRC for sub system or source system

CurrencyBucket: Currency bucket, such as Functional

File: The name of the file relative to the inbox, for example, balances.csv. The file has to be uploaded to ARCS using EPM Automate or REST API.

Import Pre-Mapped Transactions Reconciliation Compliance

Imports pre-mapped transactions for a particular period.

TransactionType: Allowed Transaction Types are BEX (Explained Balance), SRC (Adjustment to Source System), and SUB (Adjustment to Subsystem)

File: The name of the file relative to the inbox, for example, transactions.csv. The file has to be uploaded to ARCS using EPM Automate or REST API.

DateFormat: Date Format, such as MM/dd/yyyy, dd/MM/yyyy, dd-MMM-yy, MMM d,yyyy, or All.

Import Pre-Mapped Transactions Transaction Matching

Imports a file of pre-mapped transactions into Transaction Matching.

DataSource: Text ID of the data source where the transaction will be imported to

File: The name of the file relative to the inbox, for example, transactions.csv. The file has to be uploaded to ARCS using EPM Automate or REST API.

ReconciliationType: Text ID of the reconciliation type where the transaction file will be imported to, such as Bank to GL.

DateFormat: Date Format, such as MM/dd/yyyy, dd/MM/yyyy, MM-dd-yyyy, d-M-yyyy, dd-MMM-yy, MMM d, yyyy
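The Java-style date patterns listed for DateFormat can be mapped to Python strptime directives. This illustrative helper (not part of the product) parses a transaction date using one of the documented patterns:

```python
from datetime import datetime

# Java-style patterns from the DateFormat parameter, mapped to the closest
# Python strptime directives (an illustrative mapping, not product behavior).
FORMAT_MAP = {
    "MM/dd/yyyy": "%m/%d/%Y",
    "dd/MM/yyyy": "%d/%m/%Y",
    "MM-dd-yyyy": "%m-%d-%Y",
    "d-M-yyyy": "%d-%m-%Y",   # strptime accepts non-padded day/month
    "dd-MMM-yy": "%d-%b-%y",
    "MMM d, yyyy": "%b %d, %Y",
}

def parse_transaction_date(value, date_format):
    """Parse a transaction date string using one of the documented patterns."""
    return datetime.strptime(value, FORMAT_MAP[date_format]).date()
```

For example, `parse_transaction_date("31-Jan-21", "dd-MMM-yy")` yields the date 2021-01-31.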

Import Profiles Reconciliation Compliance

Imports profiles for a particular period.

ImportType: The import type. Supported values are Replace and ReplaceAll

Period: The period for which to import

ProfileType: The profile type. Supported values are Profiles and Children

File: The name of the file relative to the inbox, for example, profiles.csv. The file has to be uploaded to ARCS using EPM Automate or REST API.

DateFormat: Date Format, such as MM/dd/yyyy, dd/MM/yyyy, d-M-yyyy, dd-MMM-yy, MMM d, yyyy, or All

Import Rates Reconciliation Compliance

Imports rates for a particular period and rate type.

Period: The name of the period

RateType: The rate type, such as Accounting

File: The name of the file relative to the inbox, for example, rates.csv. The file has to be uploaded to ARCS using EPM Automate or REST API.

ImportType: Supported import types are Replace and ReplaceAll

Monitor Reconciliations Reconciliation Compliance

Monitors the list of reconciliations in ARCS.

Period: The name of the period

Filter: Filter string used to query list of reconciliations

Run Auto Match Transaction Matching

Runs the auto match process in Transaction Matching.

ReconTypeId: The Text ID of the Reconciliation type to be auto matched

View Reconciliations

Reconciliation Compliance

View reconciliations for a specified period.

Period: The name of the period

Saved List: The name of a Public saved list

View Transactions

Transaction Matching

View transactions for a specified period.

Period: The name of the period

Saved List: The name of a Public saved list

Automation Integrations for Enterprise Data Management

Integration Name | Description | Parameters/Description

Export Dimension

Exports a dimension from Enterprise Data Management to a configured connection. This is a process-automated integration. See Adding Pre-built Integrations in EPM Cloud.

Application: The name of the Enterprise Data Management application from which to export the dimension.

Dimension: The name of the dimension to export.

Connection: Optional. The name of the connection to which to export the dimension.

File Name: The file name and path to which to export the dimension.

Export Dimension Mapping

Exports a Dimension Mapping from Enterprise Data Management to a configured connection. This is a process-automated integration.

Application: The name of the Enterprise Data Management application from which to export the Dimension Mapping.

Dimension: The name of the Dimension Mapping to export.

Connection: Optional. The name of the connection to which to export the Dimension Mapping.

Mapping Location: The location to which to export the Dimension Mapping.

File Name: The file name and path to which to export the Dimension Mapping.

Import Dimension

Imports a Dimension from a configured connection to an Enterprise Data Management application. This is a process-automated integration. See Adding Pre-built Integrations in EPM Cloud.

Application: The name of the Enterprise Data Management application to which to import the dimension.

Dimension: The name of the dimension to import.

Connection: The name of the connection from which to import the dimension.

File Name: The file and path from which to import the dimension.

Import Option: Optional. Determines how the data is imported into Enterprise Data Management.

Extract Dimension Extracts a dimension from Enterprise Data Management to a configured connection. This is a process-automated integration.

Application: The name of the Enterprise Data Management application from which to extract the dimension.

Dimension: The name of the dimension to extract.

Extract: The name of the extract.

Connection: The name of the connection to which to extract the dimension.

File Name: The file name and path to which to extract the dimension.

Automation Integrations for Enterprise Profitability and Cost Management

Integration Name | Description | Parameters/Description
Calculate Model Calculates a model for one or more points of view.

Job Type: Calculate Model

Job Name: Name of the job

POV Delimiter: Delimiter used in POV values. The default delimiter is _ (underscore). The delimiter must be enclosed in double quotation marks. Only these delimiters are supported:

  • _ (underscore)

  • # (hash)

  • & (ampersand)

  • ~ (tilde)

  • % (percentage)

  • ; (semicolon)

  • : (colon)

  • - (dash)

POV Name: Name of the POV to calculate. You can pass one or more POVs separated by a comma (,).

Model Name: Name of the model to calculate

Execution Type: Identifies the rule execution type

Monitoring Task: Monitors another application awaiting an action or status to occur

Rule Name: Name of the single rule to run

First Ruleset Sequence Number: Sequence number of the first rule in the rule set to run

Last Ruleset Sequence Number: Sequence number of the last rule in the rule set to run

Clear Existing Calculations: Whether to clear existing calculations

Execute Calculations: Whether to execute calculations

Optimize for Reporting: Whether to optimize the calculation process for reporting

Generate Debug Scripts: Whether to generate debug scripts

Comment: Comments to describe the job
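To illustrate the POV Delimiter and POV Name rules above, here is a small sketch that joins each POV's members with a supported delimiter and separates multiple POVs with commas. The member names in the example are hypothetical:

```python
# Supported POV delimiters, per the parameter list above.
ALLOWED_DELIMITERS = {"_", "#", "&", "~", "%", ";", ":", "-"}

def format_pov_names(povs, delimiter="_"):
    """Join each POV's members with the job's POV delimiter and separate
    multiple POVs with commas, following the parameter rules above.
    """
    if delimiter not in ALLOWED_DELIMITERS:
        raise ValueError(f"unsupported POV delimiter: {delimiter!r}")
    return ",".join(delimiter.join(members) for members in povs)

# Two hypothetical POVs, using the default underscore delimiter:
pov_param = format_pov_names([("FY24", "Jan", "Actual"),
                              ("FY24", "Feb", "Actual")])
```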

Clear Cube Clears specific data within the PCM_CLC and PCM_REP cubes.

Job Type: Clear Cube

Job Name: Name of the job

Clear Data by Point of View Clears data from a point of view without removing the point of view.

Job Type: Clear POV

Job Name: Name of the job

POV Delimiter: Delimiter used in POV values. The delimiter must be enclosed in double quotation marks. Other than a comma, only these delimiters are supported:

  • _ (underscore)

  • # (hash)

  • & (ampersand)

  • ~ (tilde)

  • % (percentage)

  • ; (semicolon)

  • : (colon)

  • - (dash)

POV Name: Name of the POV to clear

Cube Name: Name of the cube on which clear operation is to be executed

Clear Input Data: Whether to clear input data

Clear Allocated Data: Whether to clear allocated data

Clear Adjustment Data: Whether to clear adjustment data

Copy Data by Point of View Copies data from one point of view to another.

Job Type: Copy POV

Job Name: Name of the job

POV Delimiter: Delimiter used in POV values. The delimiter must be enclosed in double quotation marks. Other than a comma, only these delimiters are supported:

  • _ (underscore)

  • # (hash)

  • & (ampersand)

  • ~ (tilde)

  • % (percentage)

  • ; (semicolon)

  • : (colon)

  • - (dash)

Source POV: Name of the source POV

Destination POV: Name of the destination POV

Copy Type: Specifies the data to copy from the source

Source Cube Name: Name of the source cube

Destination Cube Name: Name of the destination cube

Cube Refresh

Refreshes the OLAP cube.

Job Type: Cube Refresh

Job Name: Name of the job

Export Data Exports application data into a file using the export data settings, including file name, specified in a job of type export data. The file containing the exported data is stored in the repository.

Job Type: Export Data

Job Name: Name of the job

Export File Name: File name to which data is to be exported

Export Data Mapping

Exports a Data Mapping defined in Data Management to a specified location. This is a process-automated integration. For more information, see Adding Pre-built Integrations in EPM Cloud.

Member mappings define relationships between source members and target dimension members within a single dimension.

Job Type: Export Data Mapping

Dimension: The dimension name for a specific dimension to export, such as ACCOUNT, or ALL to export all dimensions

File Name: The file and path from which to export mappings. The file format can be .CSV, .TXT, .XLS, or .XLSX. Include the outbox in the file path, for example, outbox/BESSAPPJan-06.csv.

Location Name: Name of the location to which to export

Export Metadata Exports metadata from the application into a file in the repository using the export metadata settings specified in a job of type export metadata.

Job Type: Export Metadata

Job Name: Name of a batch defined in export metadata

Export Zip File Name: Name of the zip file for the exported metadata

Import Data Imports data from a file in the repository into the application using the import data settings specified in a job of type import data.

Job Type: Import Data

Job Name: Name of the job

Import File Name: File name from which data is to be imported

Import Data Mapping

Imports a Data Mapping defined in Data Management to a specified location. This is a process-automated integration.

Member mappings define relationships between source members and target dimension members within a single dimension.

You can import member mappings from a selected Excel, .CSV or .TXT file.

Job Type: Import Data Mapping

Dimension: Dimension name for a specific dimension to import, such as ACCOUNT, or ALL to import all dimensions

File Name: The file and path from which to import mappings. The file format can be .CSV, .TXT, .XLS, or .XLSX. The file must be uploaded prior to importing, either to the inbox or to a sub-directory of the inbox. Include the inbox in the file path, for example, inbox/BESSAPPJan-06.csv.

Import Mode: MERGE to add new rules or replace existing rules, or REPLACE to clear prior mapping rules before import

Validation Mode: Whether to use validation mode: true or false. An entry of true validates the target members against the target application; false loads the mapping file without any validation. Note that validation is resource-intensive and takes longer than loading with validation mode false; most customers select false.

Location: Data Management location where the mapping rules should be loaded. Mapping rules are specific to a location in Data Management.

Import Metadata Imports metadata from a file in the repository into the application using the import metadata settings specified in a job of type import metadata.

Job Type: Import Metadata

Job Name: Name of a batch defined in import metadata

Import Zip File Name: Name of the zip file for the imported metadata

Lock Unlock Data Integration Locks or unlocks an integration for a location, category and period in Data Exchange. This is a process-automated integration.

Job Type: Lock Unlock Data Integration

Operation: Lock or unlock

Lock Type: Whether the Lock/Unlock operation is for an application or a location

Period: Period of the POV from the integration or data load rule defined in Data Exchange, for example, "Jan-21"

Category: Predefined Scenario value based on the POV Category from the integration (data rule) definition. The categories available are those that are created in the Data Integration set-up, such as "Actual."

Application (Optional): If the selected Lock Type is application, the name of the application; for example, "Vision".

Location (Optional): If the selected Lock Type is location, the name of the location. If the location is locked, data cannot be loaded to it.

Unlock By Location (Optional): This parameter can be specified when the selected Operation is Lock and the selected Lock Type is Application.

If checked when locking the target application, the system locks all rules present in locations under the target application rather than applying an application-level lock.

For more information, see Lock and Unlock POV.
Run Batch Executes a batch of jobs that have been defined in Data Management.

Job Type: Run Batch

Batch Name: Name of the batch to be executed, such as Dimension Map For POV (Dimension, Cat, Per) Path

Report Format Type: The file format of the report - PDF, XLSX, or HTML

Parameters: Can vary in count and values based on the report

Location: The location of the report, such as Comma_Vision

Run As: Specify this parameter in the Workflow tab

Run Business Rule Launches a business rule.

Job Type: Run Business Rule

Rule Name: Name of the business rule.

Parameters: Run time prompts in JSON syntax. The parameter name must be the same as the name defined in the rule definition. For example,

{"MyScenario1":"Current", "MyVersion1":"BU Version_1", "ToEntity":"CA",

"Rule_Level_Var":"AZ", "planType":"Plan1"}

The following format is also supported; for example:

"Scenario=Actual" "Entity=Total Geography" "Year=FY21" "Period=Apr"

Run Business Rule Set Launches a business rule set. Rule sets with no runtime prompts, or with runtime prompts that have default values, are supported.

Job Type: Run Business Rule Set

Job Name: Name of the job

Ruleset Name: Name of the business rule set

Parameters: Run time prompts in JSON syntax. The parameter name must be the same as the name defined in the rule definition. For example,

{"MyScenario1":"Current", "MyVersion1":"BU Version_1", "ToEntity":"CA",

"Rule_Level_Var":"AZ", "planType":"Plan1"}

The following format is also supported; for example:

"Scenario=Actual" "Entity=Total Geography" "Year=FY21" "Period=Apr"

Run Data Integration Executes an integration or data load rule based on how periods are processed and on source filters. This makes it easy to incorporate data loads defined in Data Exchange into the monthly processing schedule.

Job Type: Run Data Integration

Integration Name: Name of the integration defined in Data Integration

Period Name: Name of the period

Import Mode: Determines how the data is imported into Data Integration

Export Mode: Determines how the data is exported into Data Integration

File Name: Applicable only for native file-based data loads and ignored if specified for other loads

Source Filters: Parameter used to update the source filters defined for the data load rule or integration

Target Options: Parameter used to update the target options defined for the data load rule or integration

Execution Mode: Applicable only for Quick Mode integrations

For more details about these parameters, see Run Integrations in REST API for Oracle Enterprise Performance Management Cloud.
Run Data Rule Executes a Data Management data load rule based on the start period and end period, and import or export options that you specify.

Job Type: Run Data Rule

Data Rule Name: Name of a data load rule defined in Data Management

Start Period: The first period for which data is to be loaded. This period name must be defined in Data Management period mapping.

End Period: The last period for which data is to be loaded. This period name must be defined in Data Management period mapping.

Import Mode: Determines how the data is imported into Data Management:

  • APPEND to add to the existing POV data in Data Management

  • REPLACE to delete the POV data and replace it with the data from the file

  • RECALCULATE to skip importing the data, but re-process the data with updated Mappings and Logic Accounts

  • NONE to skip data import into the Data Management staging table

Export Mode: Determines how the data is exported into Data Management:

  • STORE_DATA to merge the data in the Data Management staging table with the existing Financial Consolidation and Close or Tax Reporting data

  • ADD_DATA to add the data in the Data Management staging table to Financial Consolidation and Close or Tax Reporting

  • SUBTRACT_DATA to subtract the data in the Data Management staging table from existing Financial Consolidation and Close or Tax Reporting data

  • REPLACE_DATA to clear the POV data and replace it with data in the Data Management staging table. The data is cleared for Scenario, Version, Year, Period, and Entity

  • NONE to skip data export from Data Management to Financial Consolidation and Close or Tax Reporting

File Name: If you do not specify a file name, the API imports the data contained in the file name specified in the load data rule. The data file must already reside in the INBOX prior to data rule execution.

Run As: Specify this parameter in the Workflow tab
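A sketch of assembling and validating Run Data Rule parameters using the mode values listed above; the dictionary field names and default modes are assumptions for illustration, while the allowed mode values are the documented ones:

```python
# Mode values from the Run Data Rule parameter list above.
IMPORT_MODES = {"APPEND", "REPLACE", "RECALCULATE", "NONE"}
EXPORT_MODES = {"STORE_DATA", "ADD_DATA", "SUBTRACT_DATA",
                "REPLACE_DATA", "NONE"}

def build_run_data_rule_job(rule_name, start_period, end_period,
                            import_mode="REPLACE", export_mode="STORE_DATA",
                            file_name=None):
    """Assemble an illustrative Run Data Rule parameter set.

    Field names and defaults are assumptions, not the product contract.
    """
    if import_mode not in IMPORT_MODES:
        raise ValueError(f"unknown import mode: {import_mode}")
    if export_mode not in EXPORT_MODES:
        raise ValueError(f"unknown export mode: {export_mode}")
    job = {"jobType": "Run Data Rule", "jobName": rule_name,
           "startPeriod": start_period, "endPeriod": end_period,
           "importMode": import_mode, "exportMode": export_mode}
    if file_name:  # otherwise the file named in the load rule is used
        job["fileName"] = file_name
    return job
```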

Run Pipeline Executes a pipeline based on job parameters and variables that you select.

Job Type: Pipeline

Job Name: Pipeline code defined for the Pipeline in Data Integration

Start Period: The first period for which data is to be loaded. This period name must be defined in Data Integration Period mapping.

End Period: The last period for which data is to be loaded. This period name must be defined in Data Integration period mapping.

Import Mode: Determines how the data is imported into Data Integration

Export Mode: Determines how the data is exported into Data Integration

Attach Logs: Whether logs are included as an attachment to an email

Send Email: Determines when an email is sent when a Pipeline is run

Send To: Determines the recipient email ID for the email notification

For more details about these parameters, see Running a Pipeline in REST API for Oracle Enterprise Performance Management Cloud.

Automation Integrations for Financial Consolidation and Close and Tax Reporting

Integration Name | Description | Parameters/Description
Clear Cube

Note: This integration is applicable only for Tax Reporting.

Clears specific data within input and reporting cubes.

Name: Name of the clear cube job.

Copy Ownership Data to Next Year

Automates the task of copying ownership data from the last period of a year to the first period of the next year. For more information, see copyOwnershipDataToNextYear in Working with EPM Automate for Oracle Enterprise Performance Management Cloud.

Scenario: The name of the scenario, such as Actual. Selectable.

Years: Selectable

Cube Refresh

Refreshes the OLAP cube.

Name: Name of the refresh cube job.

Clear Data

Executes a Clear Data job using the profile name. For more information about using Clear Data in Financial Consolidation and Close, see Clear Data. For more information about using Clear Data in Tax Reporting, see Clear Data.

Profile Name: Clear data profile name.

Copy Data

Executes a Copy Data job using the profile name.

Profile Name: Copy data profile name.

Export Data

Exports application data into a file using the export data settings, including file name, specified in a job of type export data. The file containing the exported data is stored in the repository.

Name: Name of the export data job.

Export File Name: Optional. File name to which data is to be exported.

Export Data Mapping

Exports a Data Mapping defined in Data Management to a specified location. This is a process-automated integration. For more information, see Adding Pre-built Integrations in EPM Cloud.

Member mappings define relationships between source members and target dimension members within a single dimension.

Dimension: The dimension name for a specific dimension to export, such as ACCOUNT, or ALL to export all dimensions.

File Name: The file and path from which to export mappings. The file format can be .CSV, .TXT, .XLS, or .XLSX. Include the outbox in the file path, for example, outbox/BESSAPPJan-06.csv.

Location Name: The name of the location to which to export.

Export Ownership Data

Automates the task of exporting ownership data from an entity to a comma-delimited CSV file. For more information, see exportOwnershipData in Working with EPM Automate for Oracle Enterprise Performance Management Cloud.

Entity: The name of the entity.

Scenario: The name of the scenario, such as Actual. Selectable.

Years: Selectable

Period: The name of the period, such as January. Selectable.

File Name: The name of the file to export.

Import Data

Imports data from a file in the repository into the application using the import data settings specified in a job of type import data.

Name: Name of the import data job.

Import File Name: Optional. File name from which data is to be imported.

Import Data Mapping

Imports a Data Mapping defined in Data Management to a specified location. This is a process-automated integration.

Member mappings define relationships between source members and target dimension members within a single dimension.

You can import member mappings from a selected Excel, .CSV or .TXT file.

Job Type: The job type, MAPPINGIMPORT.

Job Name: The dimension name for a specific dimension to import, such as ACCOUNT, or ALL to import all dimensions.

File Name: The file and path from which to import mappings. The file format can be .CSV, .TXT, .XLS, or .XLSX. The file must be uploaded prior to importing, either to the inbox or to a sub-directory of the inbox. Include the inbox in the file path, for example, inbox/BESSAPPJan-06.csv.

Import Mode: MERGE to add new rules or replace existing rules, or REPLACE to clear prior mapping rules before import.

Validation Mode: Whether to use validation mode: true or false. An entry of true validates the target members against the target application; false loads the mapping file without any validation. Note that validation is resource-intensive and takes longer than loading with validation mode false; most customers select false.

Location Name: The Data Management location where the mapping rules should be loaded. Mapping rules are specific to a location in Data Management.

Import Metadata

Imports metadata from a file in the repository into the application using the import metadata settings specified in a job of type import metadata.

Name: The name of a batch defined in import metadata.

Import Ownership Data

Automates the task of importing ownership data from a CSV file available in the environment into a period. For more information, see importOwnershipData in Working with EPM Automate for Oracle Enterprise Performance Management Cloud.

Scenario: The name of the scenario, such as Actual. Selectable.

Years: Selectable

Period: The name of the period, such as January. Selectable.

File Name: The name of the file to import.

Journal Period

Opens or closes a journal period automatically.

The system closes the period only if there are no Approved and Unposted journals. If there are Approved and Unposted journals, the system does not close the period and returns an error.

If there are Unposted journals in Working or Submitted status, the system closes the period with a warning.

Scenario: The name of the scenario, such as Actual

Year: The year, such as FY20

Period: The name of the period, such as January

Action: Open or Close

Monitor Enterprise Journals

Note: This integration is applicable only for Financial Consolidation and Close.

Monitors the completion status of Journals within a Year/Period or filtered list.

Year: Optional. The year, such as 2022. Selectable.

Period: Optional. The name of the period, such as January. Selectable.

Filter Name: Optional. The name of the filter you created to monitor the status of the Enterprise Journals.

Note: Though all the parameters are optional, you must specify at least a Filter Name, or both Year and Period.
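The note above amounts to a simple precondition on the parameter set; an illustrative helper (not part of the product) that enforces it:

```python
def validate_monitor_journals_params(year=None, period=None, filter_name=None):
    """Enforce the documented precondition for Monitor Enterprise Journals:
    supply a Filter Name, or both Year and Period.
    """
    if filter_name or (year and period):
        return True
    raise ValueError("Specify a Filter Name, or both Year and Period.")
```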

Recompute Ownership Data

Automates the task of recomputing ownership data. For more information, see recomputeOwnershipData in Working with EPM Automate for Oracle Enterprise Performance Management Cloud.

Scenario: The name of the scenario, such as Actual

Years: The year, such as FY20

Period: The name of the period, such as January

Run Batch Rule

Executes a batch of jobs that have been defined in Data Management.

Name: The name of the batch to be executed, such as Dimension Map For POV (Dimension, Cat, Per) Path

Report Format Type: The file format of the report - PDF, XLSX, or HTML

Parameters: Can vary in count and values based on the report

Location: The location of the report, such as Comma_Vision

Run As: You must specify this parameter in the Workflow tab.

Run Business Rule

Launches a business rule.

Name: The name of a business rule exactly as it is defined.

Parameters: Run time prompts in JSON syntax. The parameter name must exactly match the name defined in the rule definition. For example,

{ "MyScenario1":"Current", "MyVersion1":"BU Version_1", "ToEntity":"CA",

"Rule_Level_Var":"AZ", "planType":"Plan1"}

The following format is also supported; for example:

"Scenario=Actual" "Entity=Total Geography" "Year=FY21" "Period=Apr"

Run Business Rule Set

Launches a business rule set. Rule sets with no runtime prompts, or with runtime prompts that have default values, are supported.

Name: The name of a business rule set exactly as it is defined.

Parameters: Run time prompts in JSON syntax. The parameter name must exactly match the name defined in the rule definition. For example,

{ "MyScenario1":"Current", "MyVersion1":"BU Version_1", "ToEntity":"CA",

"Rule_Level_Var":"AZ", "planType":"Plan1"}

The following format is also supported, for example:

"Scenario=Actual" "Entity=Total Geography" "Year=FY21" "Period=Apr"

Run Consolidation

A utility task to run consolidation. The task prompts the user to enter parameters such as Scenario, Year, Period, and Entity.

Scenario

Year

Period

Entity: Multiple entities can be added using a comma separator.

Run Data Rule

Executes a Data Management data load rule based on the start period and end period, and import or export options that you specify.

Job Name: The name of a data load rule defined in Data Management.

Start Period: The first period for which data is to be loaded. This period name must be defined in Data Management period mapping.

End Period: The last period for which data is to be loaded. This period name must be defined in Data Management period mapping.

Import Mode: Determines how the data is imported into Data Management.

APPEND to add to the existing POV data in Data Management

REPLACE to delete the POV data and replace it with the data from the file

RECALCULATE to skip importing the data, but re-process the data with updated Mappings and Logic Accounts.

NONE to skip data import into Data Management staging table

Export Mode: Determines how the data is exported from Data Management into Financial Consolidation and Close or Tax Reporting.

STORE_DATA to merge the data in the Data Management staging table with the existing Financial Consolidation and Close or Tax Reporting data

ADD_DATA to add the data in the Data Management staging table to Financial Consolidation and Close or Tax Reporting

SUBTRACT_DATA to subtract the data in the Data Management staging table from existing Financial Consolidation and Close or Tax Reporting data

REPLACE_DATA to clear the POV data and replace it with data in the Data Management staging table. The data is cleared for Scenario, Version, Year, Period, and Entity

NONE to skip data export from Data Management to Financial Consolidation and Close or Tax Reporting

File Name: Optional. If you do not specify a file name, the integration imports the data contained in the file name specified in the load data rule. The data file must already reside in the INBOX prior to data rule execution.

Run As: You must specify this parameter in the Workflow tab.
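The import and export modes above are closed sets, so a pre-flight check before submitting a Run Data Rule job can catch typos early. The sketch below is illustrative only: the mode lists mirror the documentation, but the function and the key names in the returned dictionary (`jobName`, `importMode`, and so on) are assumptions, not part of any Oracle API.

```python
# Allowed modes, as listed in the documentation above.
IMPORT_MODES = {"APPEND", "REPLACE", "RECALCULATE", "NONE"}
EXPORT_MODES = {"STORE_DATA", "ADD_DATA", "SUBTRACT_DATA", "REPLACE_DATA", "NONE"}

def build_data_rule_job(job_name, start_period, end_period,
                        import_mode, export_mode, file_name=None):
    """Assemble and validate parameters for a hypothetical Run Data Rule request."""
    if import_mode not in IMPORT_MODES:
        raise ValueError(f"Invalid import mode: {import_mode}")
    if export_mode not in EXPORT_MODES:
        raise ValueError(f"Invalid export mode: {export_mode}")
    job = {"jobName": job_name, "startPeriod": start_period,
           "endPeriod": end_period, "importMode": import_mode,
           "exportMode": export_mode}
    if file_name:  # optional: otherwise the file named in the load rule is used
        job["fileName"] = file_name
    return job
```

For example, `build_data_rule_job("DLR_Actuals", "Jan-21", "Mar-21", "REPLACE", "STORE_DATA")` returns a parameter dictionary, while a misspelled mode raises a `ValueError` before anything is submitted.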

Run Force Consolidation

A utility task to run force consolidation. The task prompts the user to enter parameters such as Scenario, Year, Period, and Entity.

Scenario

Year

Period

Entity: Multiple entities can be added using a comma separator.

Run Force Translation

A utility task to run force translation. The task prompts the user to enter parameters such as Scenario, Year, Period, and Entity.

Scenario

Year

Period

Entity: Multiple entities can be added using a comma separator.

Run Translation

A utility task to run translation. The task prompts the user to enter parameters such as Scenario, Year, Period, and Entity.

Scenario

Year

Period

Entity: Multiple entities can be added using a comma separator.

Automation Integrations for Planning and Planning Modules

Integration Name Description Parameters/ Description

Clear Cube

Clears specific data within input and reporting cubes.

Name: Name of the clear cube job.

Cube Refresh

Refreshes the OLAP cube.

Name: Name of the refresh cube job.

Export Data

Exports application data into a file using the export data settings, including file name, specified in a job of type export data. The file containing the exported data is stored in the repository.

Name: Name of the export data job.

Export File Name: Optional. File name to which data is to be exported.

Import Data

Imports data from a file in the repository into the application using the import data settings specified in a job of type import data.

Name: Name of the import data job.

Import File Name: Optional. File name from which data is to be imported.

Import Metadata

Imports metadata from a file in the repository into the application using the import metadata settings specified in a job of type import metadata.

Name: Name of the import metadata job.

Run Batch

Executes a batch of jobs that have been defined in Data Management.

Name: The name of the report to be executed, such as Dimension Map For POV (Dimension, Cat, Per) Path

Report Format Type: The file format of the report (PDF, XLSX, or HTML)

Parameters: Can vary in count and values based on the report

Location: The location of the report, such as Comma_Vision

Run Business Rule

Launches a business rule.

Name: The name of a business rule exactly as it is defined.

Parameters: Runtime prompts in JSON syntax. Parameter names must exactly match those in the rule definition. For example:

{ "MyScenario1":"Current", "MyVersion1":"BU Version_1", "ToEntity":"CA",

"Rule_Level_Var":"AZ", "planType":"Plan1"}

The following format is also supported, for example:

"Scenario=Actual" "Entity=Total Geography" "Year=FY21" "Period=Apr"

Run Business Ruleset

Launches a business ruleset. Rulesets with no runtime prompts, or with runtime prompts that have default values, are supported.

Name: The name of a business ruleset exactly as it is defined.

Parameters: Runtime prompts in JSON syntax. Parameter names must exactly match those in the rule definition. For example:

{ "MyScenario1":"Current", "MyVersion1":"BU Version_1", "ToEntity":"CA",

"Rule_Level_Var":"AZ", "planType":"Plan1"}

The following format is also supported, for example:

"Scenario=Actual" "Entity=Total Geography" "Year=FY21" "Period=Apr"

Run Data Rule

Executes a Data Management data load rule based on the start period and end period, and import or export options that you specify.

Job Name: The name of a data load rule defined in Data Management.

Start Period: The first period for which data is to be loaded. This period name must be defined in Data Management period mapping.

End Period: The last period for which data is to be loaded. This period name must be defined in Data Management period mapping.

Import Mode: Determines how the data is imported into Data Management.

APPEND to add to the existing POV data in Data Management

REPLACE to delete the POV data and replace it with the data from the file

RECALCULATE to skip importing the data, but re-process the data with updated Mappings and Logic Accounts.

NONE to skip data import into Data Management staging table

Export Mode: Determines how the data is exported from Data Management into Planning.

STORE_DATA to merge the data in the Data Management staging table with the existing Oracle Hyperion Planning data

ADD_DATA to add the data in the Data Management staging table to Planning

SUBTRACT_DATA to subtract the data in the Data Management staging table from existing Planning data

REPLACE_DATA to clear the POV data and replace it with data in the Data Management staging table. The data is cleared for Scenario, Version, Year, Period, and Entity

NONE to skip data export from Data Management to Planning

File Name: Optional. If you do not specify a file name, the integration imports the data contained in the file name specified in the load data rule. The data file must already reside in the INBOX prior to data rule execution.

Automation Integrations for Profitability and Cost Management

Integration Name Description Parameters/ Description

Apply Data Grants

Applies data grants for a given Profitability and Cost Management application. This integration submits a job to create and apply the data grants in Oracle Essbase: it removes all existing data grants in Essbase and recreates them with the latest information from the application. It can also be used to repair data grants if there are any issues.

None

Deploy ML Cube

Deploy or redeploy the calculation cube for a selected Profitability and Cost Management application.

isKeepData: Specify whether to preserve existing data

isReplacecube: Specify whether to replace the existing cube

comment: Any user comments

Run ML Calc

Run or clear calculations for a selected application. Use with Management Ledger.

povGroupMember: The POV group member for which to run calculations, such as 2015_January_Actual

isClearCalculated: Whether to clear the calculation data, true or false

subsetStart: Rule Set Starting Sequence Number

subsetEnd: Rule Set Ending Sequence Number

ruleName: Rule Name for a SINGLE_RULE option

ruleSetName: Rule Set Name for a SINGLE_RULE option

exeType: The execution type specifies which rules to run; possible values are ALL_RULES, RULESET_SUBSET, and SINGLE_RULE. Other parameters are required based on the exeType value:

ALL_RULES overrides all other options such as subsetStart, subsetEnd, ruleSetName, ruleName, and so on.

RULESET_SUBSET considers only subsetStart and subsetEnd.

SINGLE_RULE considers only ruleSetName and ruleName.

Comment: Use comment text.

Delimiter: String delimiter for POV group members, such as an underscore (_).
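The exeType rules above form a small decision table: each execution type considers a different subset of the other parameters. The helper below restates that table as code; it is an illustrative sketch, not part of the Profitability and Cost Management API.

```python
def required_for(exe_type: str) -> set:
    """Map an exeType value to the parameters it actually considers,
    per the documented behavior (ALL_RULES ignores the subset/rule options)."""
    table = {
        "ALL_RULES": set(),
        "RULESET_SUBSET": {"subsetStart", "subsetEnd"},
        "SINGLE_RULE": {"ruleSetName", "ruleName"},
    }
    if exe_type not in table:
        raise ValueError(f"Unknown exeType: {exe_type}")
    return table[exe_type]
```

For example, `required_for("SINGLE_RULE")` returns `{"ruleSetName", "ruleName"}`, so any subsetStart/subsetEnd values passed alongside it would be ignored.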

Clear ML POV

Clear model artifacts and data from a POV combination for any application.

povGroupMember: The POV group member for which to clear artifacts and data, such as 2015_January_Actual

isManageRule: Whether to clear the program rule details

isInputData: Whether to clear input data

isAllocatedValues: Whether to clear allocated values

stringDelimiter: String delimiter for POV group members
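A POV group member such as 2015_January_Actual is simply the individual POV member names joined by the string delimiter, so splitting one back into its parts is a one-liner. This sketch is illustrative; the dimension order follows the example above and may differ per application.

```python
def split_pov_group_member(member_group: str, delimiter: str = "_"):
    """Split a POV member group like '2015_January_Actual' into its
    individual member names using the string delimiter."""
    return member_group.split(delimiter)

print(split_pov_group_member("2015_January_Actual"))
# → ['2015', 'January', 'Actual']
```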

Copy ML POV

Copy model artifacts and data from a Source POV combination to a Destination POV combination for any application. Use with Management Ledger applications.

POVs: Included in the path

srcPOVMemberGroup: Source POV member group, such as 2014_January_Actual

destPOVMemberGroup: Destination POV member group, such as 2014_March_Actual

isManageRule: Whether to copy the program rule details

isInputData: Whether to copy input data

modelViewName: To copy a slice of data from source POV to destination POV

Create Dest POV: Whether to create the destination POV if it does not already exist

String Delimiter: String delimiter for POV group members

Run Data Rule

Executes a Data Management data load rule based on the start period and end period, and import or export options that you specify.

Job Name: The name of a data load rule defined in Data Management.

Start Period: The first period for which data is to be loaded. This period name must be defined in Data Management period mapping.

End Period: The last period for which data is to be loaded. This period name must be defined in Data Management period mapping.

Import Mode: Determines how the data is imported into Data Management.

APPEND to add to the existing POV data in Data Management

REPLACE to delete the POV data and replace it with the data from the file

RECALCULATE to skip importing the data, but re-process the data with updated Mappings and Logic Accounts.

NONE to skip data import into Data Management staging table

Export Mode: Determines how the data is exported from Data Management into Profitability and Cost Management.

STORE_DATA to merge the data in the Data Management staging table with the existing Profitability and Cost Management data

ADD_DATA to add the data in the Data Management staging table to Profitability and Cost Management

SUBTRACT_DATA to subtract the data in the Data Management staging table from existing Profitability and Cost Management data

REPLACE_DATA to clear the POV data and replace it with data in the Data Management staging table. The data is cleared for Scenario, Version, Year, Period, and Entity

NONE to skip data export from Data Management to Profitability and Cost Management

File Name: Optional. If you do not specify a file name, the integration imports the data contained in the file name specified in the load data rule. The data file must already reside in the INBOX prior to data rule execution.

Run Batch Rule

Executes a batch of jobs that have been defined in Data Management.

Job Name: The name of a batch defined in Data Management.

Update Dimension

Uploads a new dimension flat file for an application that was created using a flat file. This is a process-automated integration. For more information, see Update Dimensions As a Job.

File Name: Data file name

Separator Character: Optional parameter