Create Cost Accounting Distributions

In the Cost Accounting work area, use the Create Cost Accounting Distributions page to process imported transaction data. On this page, define run controls by specifying the cost organization books and the cost processors that you want to run.

The processors that you can include in the run control are listed here:

  • Preprocessor prepares all interfaced data for cost processing:

    • Checks for invalid or missing data.

    • Propagates the information to cost organization books and derives their associated units of measure, currencies, valuation units, and cost profiles. Note that the preprocessor runs for all cost books in the cost organization.

    • Maps incoming cost components to cost elements, based on user-defined mappings.

  • Cost Processor processes:

    • Physical inventory transactions

      • Calculates costs for pre-processed transactions using the perpetual average cost method, actual cost method, or standard cost method.

      • Processes user-entered cost adjustments and applies overhead costs based on user-defined overhead rules.

      • Calculates the variance of standard costs from actual transaction costs.

      • Calls the Acquisition Cost processor to calculate inventory valuation including the tax component where applicable.

    • Trade transactions

      • Uses the Trade Accounting processor to process all in-transit transactions.

  • COGS Recognition processor calculates the cost of goods sold and maintains consistency with the revenue recognized in accounts receivable.

  • Cost Distribution uses the Trade Accounting processor, Cost Processor, and COGS Recognition processor results to create distributions for transaction costs. You can enable parallel processing in the Create Cost Accounting Distributions process so that the eligible transactions are spread across multiple subprocesses to achieve a much higher throughput during the distribution processing stage.

  • Cost Reports Processor generates inventory valuation, item cost, and gross margin data and is the source of truth for reports generated by Oracle Fusion Transactional Business Intelligence and Oracle Analytics Publisher. This process builds the data required to report inventory valuation at various levels:

    • Valuation unit level

    • Inventory control attribute levels (inventory organization, subinventory, locator, project, task, and country of origin)

    • Receipt layer level

    The enhanced cost reports processor performs an incremental data update instead of a full data refresh. This ensures that you can view prior period data even while the current period data is being processed.

    You must run the cost reports processor regularly to ensure that the latest cost accounting information is reflected in the UI and the reports. You can create a separate run control for the cost reports processor and schedule it to run on a periodic basis. If the runtime for the cost reports processor isn't too long, you can also include it in the Create Cost Accounting Distributions process to reflect real-time data in the reports.
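The multi-level valuation reporting described above can be pictured as aggregating the same receipt-layer data at progressively coarser levels. The following is a conceptual sketch only, with hypothetical record names and values; it does not represent Oracle's implementation:

```python
from collections import defaultdict

# Hypothetical receipt-layer records: (valuation_unit, subinventory, receipt, value).
layers = [
    ("VU1", "STORES", "R1", 100.0),
    ("VU1", "STORES", "R2", 50.0),
    ("VU1", "WIP",    "R3", 25.0),
    ("VU2", "STORES", "R4", 40.0),
]

def valuation_by(level_key):
    """Aggregate receipt-layer values at a coarser reporting level."""
    totals = defaultdict(float)
    for vu, subinv, receipt, value in layers:
        totals[level_key(vu, subinv, receipt)] += value
    return dict(totals)

# Valuation unit level: one total per valuation unit.
print(valuation_by(lambda vu, s, r: vu))
# Inventory control attribute level: valuation unit plus subinventory.
print(valuation_by(lambda vu, s, r: (vu, s)))
```

The receipt layer level corresponds to the ungrouped records themselves; each coarser level is a roll-up of the same underlying data.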

After the Create Cost Accounting Distributions process is run, the cost processor clears the cost processing errors from prior runs. Only the errors for the most recent run of the Create Cost Accounting Distributions process are retained.

Run Control

To run the Create Cost Accounting Distributions process, you must create a run control. A run control is a container to perform centralized cost processing across multiple cost organizations and cost books. When you define a run control, you specify the processors that run as part of the run control, the commit limit, the cost organization and cost book combinations that must be processed, and the cutoff dates.

When defining a run control, you can also set Maximum Number of Workers to a value greater than 1, indicating the maximum number of subprocesses that the process can use for parallel processing. Parallel processing divides the load during the distribution processing stage of the process, irrespective of the cost organization structure or variations in data volume across the cost organizations being processed.

Depending on the number and complexity of the transactions to be processed and the commit limit defined in the run control, the Cost Processor processes the transactions in batches. When the commit limit is reached, the records processed in the batch are committed to the database and the processor starts another iteration of transaction processing. The Cost Processor performs multiple iterations to process all the transactions. Each such iteration is known as a commit limit loop.

Within a commit limit loop, the Cost Processor performs various steps. The processor might have to perform multiple iterations of these steps within a commit limit loop. This iteration of steps is known as the inner loop.
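The commit limit loop can be sketched conceptually as follows. All names here are hypothetical illustrations of the batching pattern, not Oracle's actual code:

```python
def process_transactions(transactions, commit_limit):
    """Process transactions in commit limit loops (conceptual sketch).

    Each outer iteration (a commit limit loop) takes up to `commit_limit`
    transactions, works through them, and commits them as one batch
    before starting the next iteration.
    """
    committed_batches = []
    for start in range(0, len(transactions), commit_limit):
        batch = transactions[start:start + commit_limit]
        # Inner loop: the processor may take several passes (steps)
        # over the same batch before the batch can be committed.
        processed = [txn for txn in batch]  # placeholder for per-step work
        committed_batches.append(processed)  # commit the batch to the database
    return committed_batches

batches = process_transactions(list(range(25)), commit_limit=10)
# Three commit limit loops: 10 + 10 + 5 transactions.
print([len(b) for b in batches])
```

A larger commit limit means fewer, bigger commits; a smaller one commits work more often at the cost of more iterations.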

To create a run control, perform these steps:

  1. In the Cost Accounting work area, select Create Cost Accounting Distributions from the Tasks menu.

  2. Click Add Row.

  3. Enter a name for the run control and select the processors that must run as part of the run control.

    Enter a number greater than 1 for Maximum Number of Workers to indicate that you want to use parallel processing. The number indicates the maximum number of subprocesses that the Create Cost Accounting Distributions process can use during the distribution processing stage.

    You can optionally set the Commit Limit and Collect Statistics parameters.

  4. In the Details section, click Add Row.

  5. Select the cost organization and cost book for which the process must run.

    You can also set the cutoff date.

    For a periodic average cost enabled cost book, you must select the Period for which you want to run the process. The cutoff date is automatically set. You can also specify whether to update the period on the run control automatically. If you enable this, when the current period is closed, the period for this run control is automatically updated to the next open period.

    You can add multiple rows to include different cost organization and cost book combinations, including periodic average cost enabled cost books, as part of the run control.

  6. Click Save.

Note: You can create cost accounting distributions only if the Cost Accounting period is in the Open status and the corresponding period in the General Ledger is in the Open or Future Enterable status. However, here are a few recommended best practices:
  • Keep only one Cost Accounting period open at a time; don't have multiple costing periods open simultaneously.
  • Close the Cost Accounting period before closing the corresponding General Ledger period.
  • Run the Create Accounting process for the Cost Accounting subledger only after the corresponding General Ledger period is set to the Open status.

You can run the process for this run control by clicking Schedule Process. You can run the process on demand or you can schedule it to run periodically. You can also run and schedule the process for this run control from the Scheduled Processes work area.

When the process is running, you can view its status on the Scheduled Processes page. When you select the process, the details section provides completion text that lets you know which processor is running, the commit limit loop count, the inner loop count (applicable only for the Cost Processor), the step, and the corresponding start time.

On the Create Cost Accounting Distributions page, select the run control and click View Status to track the progress of the process and obtain detailed timing information.

Parallel Processing

Parallel processing in the Create Cost Accounting Distributions process spreads eligible transactions across multiple subprocesses to achieve a higher throughput during the quantity preprocessing and distribution processing stages. If your organization has a high to very high volume of cost processing, the throughput improvements will be significant. For low to medium volumes, throughput still improves, but the gain might not be as noticeable because processing times are usually close to optimal to begin with.

Parallel processing divides the load irrespective of the cost organization structure or variations in data volume across the cost organizations being processed. It also makes better use of the available hardware, and the reduced processing time helps speed up period close processing.

When enabled, multiple subprocesses are automatically spawned during the quantity preprocessing and distribution processing stages to process the transactions in parallel. The main process groups the cost layers so that each subprocess can independently process those layers to generate distributions without contention with the other subprocesses running in parallel.
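The contention-free grouping described above amounts to giving each worker exclusive ownership of a disjoint set of cost layers. This is a hypothetical sketch of that partitioning idea, not Oracle's actual grouping logic:

```python
def partition_layers(layer_ids, num_workers):
    """Assign each cost layer to exactly one worker (conceptual sketch).

    Because every layer belongs to a single partition, workers can
    generate distributions for their layers without locking or
    contending with one another.
    """
    partitions = [[] for _ in range(num_workers)]
    for layer in layer_ids:
        # Deterministic assignment: a layer is always owned by one worker.
        partitions[layer % num_workers].append(layer)
    return partitions

# Eight hypothetical layer IDs split across three workers.
print(partition_layers(range(8), num_workers=3))
```

In practice the grouping would also need to keep related layers together (for example, layers that feed the same distribution), which is why the actual split can vary with the mix of transactions.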

When defining a run control for the Create Cost Accounting Distributions process, you can set Maximum Number of Workers to a value greater than 1, indicating the maximum number of subprocesses that the process can use for parallel processing.

To ensure that the Create Cost Accounting Distributions process doesn't consume resources required by other processing elsewhere in the system, some restrictions are placed on the number of workers that are launched. Regardless of the maximum workers value set in the run control, at runtime the system dynamically reduces the number of concurrent workers based on the data volume being processed, so that processing times are optimized. Currently, the threshold is set to 100,000 distribution lines per worker, with a maximum of 20 workers.

The table shows how the system decides on the number of workers dynamically during processing.

Cost Layers    Maximum Number of Workers    Actual Number of Subprocesses Launched (including the parent process)
1,000          10                           1
100,000        10                           1
400,000        10                           4*
1,000,000      10                           10*
2,000,000      10                           10*

In the above table, rows marked with * indicate that the actual number of subprocesses may vary slightly from the number shown, based on the actual volume and mix of transactions processed.
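The sizing rule implied by the table, assuming the documented threshold of 100,000 distribution lines per worker and the system-wide cap of 20 workers, can be sketched as follows (a hypothetical reconstruction, not Oracle's actual algorithm):

```python
import math

LINES_PER_WORKER = 100_000  # documented threshold per worker
HARD_CAP = 20               # documented system-wide maximum

def actual_workers(cost_layers, max_workers):
    """Workers actually launched, including the parent process (sketch).

    Volume determines how many workers are worth launching; the run
    control's max_workers and the system-wide hard cap bound the result.
    """
    by_volume = math.ceil(cost_layers / LINES_PER_WORKER)
    return max(1, min(by_volume, max_workers, HARD_CAP))

# Reproduces the table rows for Maximum Number of Workers = 10.
for layers in (1_000, 100_000, 400_000, 1_000_000, 2_000_000):
    print(layers, actual_workers(layers, max_workers=10))
```

Under this reading, 2,000,000 layers would justify 20 workers by volume alone, but the run control's maximum of 10 takes precedence, which matches the last table row.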