Oracle® Fusion
Applications Order Orchestration Implementation Guide 11g Release 5 (11.1.5) Part Number E20386-05
This chapter contains the following:
Manage Planning Source Systems
Data Collections, Order Orchestration, and Order Promising: How They Fit Together
Collecting Data for the Order Orchestration and Planning Data Repository: Explained
Data Collection Entities: Explained
Collect Order Promising Reference and Transaction Data
Manage Sourcing Rules and Bills of Distribution
Manage Global Order Promising Profile Options
To populate the order orchestration and planning data repository, you collect data from external source systems, such as external fulfillment source systems and external order capture source systems, and from the Oracle Fusion source system. You designate which source systems are data collection source systems by defining collection parameters for each source system and enabling it for collections.
You manage two categories of source systems for data collections:
External source systems
The Oracle Fusion source system
The following figure illustrates data collections from three source systems. Two of the source systems are external source systems. One of the source systems is the Oracle Fusion source system.
Your business may have many external fulfillment and external order capture source systems. For each external source system from which you need to collect data to include in the order orchestration and planning data repository, define the data collection parameters, and enable the source system for collections. For the Version data collection parameter, the choices are Other or Oracle Fusion.
The order orchestration and order promising processes use data stored in the order orchestration and planning data repository. Some of the data that needs to be in the repository originates in the Oracle Fusion source system. To collect data from the Oracle Fusion source system, include the Oracle Fusion source system as a source system for data collection. Define the data collection parameters for the Oracle Fusion source system, and enable the source system for collections.
For each system from which you intend to collect data to populate the order orchestration and planning data repository, you define and maintain the source system data collection parameters.
For each source system, you complete the following for the data collection parameters:
Specify the time zone.
Specify the version, order orchestration type, and planning type.
Define the number of database connections, parallel workers, rows per processing batch, and cached data entries.
Enable the source system for collections.
Enable data cross-referencing.
You must specify the time zone for the source system because the time stamps contained in collected data are converted from the time zone used in the source system to the time zone used for all data stored in the order orchestration and planning data repository. Using the same time zone for all data stored in the order orchestration and planning data repository facilitates correct results when calculations are performed using attributes that store dates. For example, if the source system uses the US Eastern time zone, but the order orchestration and planning data repository stores all data in the US Pacific time zone, then a supply with a due date and time of July 10th 04:00 PM in the source system is stored in the order orchestration and planning data repository with a due date of July 10th 01:00 PM.
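The conversion described above can be sketched as follows. This is an illustrative Python example, not the application's internal implementation; the zone names and the sample date are taken from the US Eastern to US Pacific example in the text.

```python
# Sketch: converting a source-system timestamp to the repository time zone.
# The zone identifiers below are illustrative; the actual zones come from
# the source system definition and the repository configuration.
from datetime import datetime
from zoneinfo import ZoneInfo

source_zone = ZoneInfo("America/New_York")         # US Eastern (source system)
repository_zone = ZoneInfo("America/Los_Angeles")  # US Pacific (repository)

# Supply due date as recorded in the source system: July 10th 04:00 PM Eastern.
due_in_source = datetime(2024, 7, 10, 16, 0, tzinfo=source_zone)

# Stored in the repository after conversion: July 10th 01:00 PM Pacific.
due_in_repository = due_in_source.astimezone(repository_zone)
print(due_in_repository.strftime("%B %d %I:%M %p"))  # July 10 01:00 PM
```

Because every source system's time stamps are normalized this way, date arithmetic across data collected from different systems remains consistent.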
You must define one, and only one, source system with the Version attribute equal to Oracle Fusion and the Order Orchestration Type attribute equal to Order Orchestration.
You may define many source systems with the Version attribute equal to Other. For the source systems with the Version attribute equal to Other, the Order Orchestration Type attribute can equal Fulfillment or Order Capture and the Planning Type attribute can equal Fulfillment. Any combination of these values is allowed to describe the purpose of the source system, but you must provide a value for at least one of these type parameters. These parameters do not impact the behavior of the collections process.
Note
Once you have saved a system with the Version attribute equal to Oracle Fusion, you cannot change the value for the Version attribute.
Note
You cannot change the version of a source system from Other to Oracle Fusion. Instead, you must delete the planning source system definition by scheduling the Delete Source Configuration and All Related Data process. The process performs multiple steps: it first deletes all data previously collected from the source system, and then deletes the planning source system definition and collection parameters. After the process completes, you must redefine the planning source system definition on the Manage Planning Source Systems page.
These parameters affect the usage of system resources. The table below defines what each parameter does and provides guidelines for setting it.

| Parameter | What the Parameter Does | A Typical Value for the Parameter |
|---|---|---|
| Number of Database Connections | Defines the maximum number of database connections the source server can create during the collection process. This controls the throughput of data being extracted into the source Java program. | 10 |
| Number of Parallel Workers | Defines the maximum number of parallel workers (Java threads) used to process the extracted data. This number directly impacts the amount of CPU and memory used during a collection cycle. | 30 |
| Number of Rows per Processing Batch | Defines the number of records to process at a time, allowing the framework to process data in bite-size chunks. A batch that is too small adds overhead, while a batch that is too large may exhaust memory or network bandwidth. | 10,000 |
| Cached Data Entries in Thousands | During data collections, various lookup and auxiliary data, such as currency rates, are cached in the collection server to support validation. This parameter controls the maximum number of lookup entries cached per lookup to prevent the server from occupying too much memory. | 10,000 |
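A rough sketch of how the worker and batch parameters interact during processing may help when tuning them. All names below are illustrative; the actual collections framework is internal to the application and is not exposed this way.

```python
# Sketch: how Number of Parallel Workers and Number of Rows per Processing
# Batch might shape a collection's processing loop (illustrative only).
from concurrent.futures import ThreadPoolExecutor

NUM_PARALLEL_WORKERS = 30   # Number of Parallel Workers
ROWS_PER_BATCH = 10_000     # Number of Rows per Processing Batch

def process_batch(batch):
    # Placeholder for validating and cross-referencing one batch of rows.
    return len(batch)

def collect(rows):
    # Split the extracted rows into batches so no single unit of work
    # exhausts memory, then process the batches in parallel.
    batches = [rows[i:i + ROWS_PER_BATCH]
               for i in range(0, len(rows), ROWS_PER_BATCH)]
    with ThreadPoolExecutor(max_workers=NUM_PARALLEL_WORKERS) as pool:
        return sum(pool.map(process_batch, batches))

# 25,000 extracted rows are processed as three batches (10,000 + 10,000 + 5,000).
assert collect(list(range(25_000))) == 25_000
```

Larger batches mean fewer, heavier units of work; more workers mean higher CPU and memory consumption per collection cycle, which matches the guidance in the table.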
Before enabling a source system for collections, ensure your definitions of the other parameters are complete for the source system. Ensure you have defined values for all applicable attributes and, where applicable, that you have enabled organizations for collections or for ATP Web services.
When you enable a source system for data cross-reference, the data collections from the source system perform additional processing steps to check for and to cross-reference data during data collections. You must enable cross-referencing for Order Capture source systems.
From the list of organizations for each source system, you designate which organizations will have their data collected when a collections process collects data from the source system.
To determine which organizations to enable for collections, analyze the sourcing strategies for your company, the type of each organization in the list, and any other business requirements that determine whether system resources should be expended to collect data from that organization. If the data from an organization would never be used by order promising or order orchestration, there is no need to collect it.
For example, consider a scenario where the list of organizations for a source system includes 20 manufacturing plants and 10 distribution centers. Because the business requirements specify that the movements of materials from the manufacturing plants to the distribution centers are to be controlled separately from order orchestration and order promising, there are no sourcing rules that include transferring from one of the manufacturing plants. For this scenario, you would only enable the 10 distribution centers for collections.
You enable the available-to-promise (ATP) Web Service to enable Oracle Fusion Global Order Promising to invoke an external order promising engine to determine a date and quantity available for fulfillment lines to be shipped from a specific organization.
Your business requirements may require you to obtain the available-to-promise dates and available-to-promise quantities from external fulfillment systems for fulfillment lines to be shipped from specific organizations. To implement such a requirement, you enable the ATP Web Service for each organization subject to the requirement.
When a fulfillment line is received with a ship-from organization that is equal to an organization for which ATP Web Service has been enabled, Oracle Fusion Global Order Promising invokes the order promising engine of the applicable external system to determine an available-to-promise date and an available-to-promise quantity. For example, if the ATP Web Service has been enabled for your Lima organization, when fulfillment lines are received with Lima specified for the ship-from organization, an external order promising engine is invoked to provide an available-to-promise date and an available-to-promise quantity for these fulfillment lines.
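The routing decision described above can be sketched as a simple check on the fulfillment line's ship-from organization. The organization name and the return values below are illustrative, following the Lima example in the text.

```python
# Sketch: deciding how a fulfillment line is promised, assuming a
# hypothetical set of organizations enabled for the ATP Web Service.
atp_web_service_enabled = {"Lima"}  # organizations with ATP Web Service enabled

def promising_source(fulfillment_line):
    if fulfillment_line["ship_from"] in atp_web_service_enabled:
        # The external order promising engine is invoked to determine the
        # available-to-promise date and quantity.
        return "external ATP web service"
    # Otherwise the line is promised from the collected supply data.
    return "Global Order Promising engine"

assert promising_source({"ship_from": "Lima"}) == "external ATP web service"
assert promising_source({"ship_from": "Chicago"}) == "Global Order Promising engine"
```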
There are some rare cases where an organization would be enabled for both collections and for ATP Web Service. For example, consider the scenario where there are different sourcing strategies for two items, Item X and Item Y. Item X is only purchased from the Chicago organization, but the Chicago organization can transfer Item X from the Seattle organization, and the sourcing rules have specified that the order promising process should check for inventory at the Seattle organization for Item X when Chicago is out of stock. The Seattle organization must be enabled for collections because the data from the Seattle organization must be included in the Order Orchestration and Planning data repository for use by the order promising process when it needs to check for available supply of Item X at the Seattle organization to transfer to the Chicago organization. Item Y is purchased directly from Seattle, must be made to order, and the order promising process must invoke an external ATP Web service to determine when the Item Y could be made. In this case, you would also enable the Seattle organization for ATP Web service.
No. You cannot add additional source systems when managing source systems for data collections for the order orchestration and planning data repository.
Source systems must first be defined in the Trading Community Model. For a system to be listed as one of the systems from which you can choose when managing source systems, the definition of the system in the Trading Community Model must enable the system for order orchestration and planning.
You perform data collections to populate the order orchestration and planning data repository. The collected data is used by Oracle Fusion Distributed Order Orchestration and Oracle Fusion Global Order Promising.
The following figure illustrates that the order orchestration and planning data repository is populated with data from external source systems and from the Oracle Fusion source system when you perform data collections. Oracle Fusion Distributed Order Orchestration uses some reference data directly from the repository, but the Global Order Promising engine uses an in-memory copy of the data. After data collections are performed, you refresh the Global Order Promising data store with the most current data from the data repository and start the Global Order Promising server to load the data into main memory for the Global Order Promising engine to use. When Oracle Fusion Distributed Order Orchestration sends a scheduling request or a check availability request to Oracle Fusion Global Order Promising, the Global Order Promising engine uses the data stored in main memory to determine the response.
You perform data collections to populate the order orchestration and planning data repository with data from external source systems and from the Oracle Fusion source system.
Oracle Fusion Distributed Order Orchestration uses some reference data directly from the order orchestration and planning data repository. You must perform data collections for the order orchestration reference entities even if you are not using Oracle Fusion Global Order Promising.
Important
Before collecting data from an Oracle Fusion source system, you must define at least one organization for the source system. After you have defined at least one organization for the source system, you must update the organization list for the source system on the Manage Planning Source Systems page or Manage Orchestration Source Systems page, and enable at least one organization for collections. If there are no organizations enabled for collections when a collections process runs, the collections process will end with an error.
The Global Order Promising engine uses an in-memory copy of the data from the order orchestration and planning data repository. When Oracle Fusion Distributed Order Orchestration sends a scheduling request or a check availability request to Oracle Fusion Global Order Promising, the Global Order Promising engine uses the data stored in main memory to determine the response to send back to order orchestration. After a cycle of data collections is performed, you refresh the Global Order Promising data store with the most current data from the data repository and start the Global Order Promising server to load the data into main memory for the Global Order Promising engine to use.
The order orchestration and planning data repository provides a unified view of the data needed for order orchestration and order promising. You manage data collection processes to populate the data repository with data collected from external source systems and from the Oracle Fusion source system. You manage the data collection processes to collect the more dynamic, transaction data every few minutes and the more static, reference data on a daily, weekly, or even monthly schedule. The data collected into the data repository contains references to data managed in the Oracle Fusion Trading Community Model and to data managed in the Oracle Fusion Product Model. The data managed in these models is not collected into the order orchestration and planning data repository.
The following figure illustrates that the order orchestration and planning data repository is populated with data collected from external source systems and from the Oracle Fusion source system. The data repository does not contain data managed by the Oracle Fusion Trading Community Model and the Oracle Fusion Product Model. The data collected into the data repository references data managed in the models.
When you plan and implement your data collections, you determine which entities you collect from which source systems, the frequency of your collections from each source system, which data collection methods you will use to collect which entities from which source systems, and the sequences of your collections. Consider these categories of data when you plan your data collections:
Data collected for order promising
Data collected for order orchestration
Data not collected into the order orchestration and planning data repository
The following data is collected and stored to support order promising:
Existing supply including on-hand, purchase orders, and work orders
Capacity including supplier capacity and resource capacity
Related demands including work order demands and work order resource requirements
Planned supply including planned buy and make orders
Reference data including calendars, transit times, and routings
Important
After performing data collections, you must refresh the Order Promising engine to ensure it is using the data most recently collected.
The following data is collected and stored to support order orchestration:
Order capture and accounts receivable codes
Accounting terms and currencies
Tip
Use the Review Planning Collected Data page or the Review Order Orchestration Collected Data page to explore many of the entities and attributes collected for the order orchestration and planning data repository.
Data collected into the order orchestration and planning data repository includes attributes, such as customer codes, that refer to data not collected into the data repository. Most of the data references are to data in the Oracle Fusion Trading Community Model or in the Oracle Fusion Product Model. Some of the data references are to data outside the models, such as item organizations and inventory organizations. To manage data collections effectively, especially the sequences of your collections, you must consider the data dependencies created by references to data not collected into the data repository.
References to data in the Oracle Fusion Trading Community Model include references to the following:
Source systems
Geographies and zones
Customers
Customer sites
References to data in the Oracle Fusion Product Model include references to the following:
Items, item relationships, and item categories
Item organization assignments
Structures
When you collect data for the order orchestration and planning data repository, you specify which of the data collection entities to collect data for during each collection. When you plan your data collections, you plan which entities to collect from which source systems and how frequently to collect which entities. One of the factors you include in your planning considerations is the categorizations of each entity. One way entities are categorized is as reference entities or transaction entities. You typically collect transaction entities much more frequently than reference entities.
Another way entities are categorized is as source-specific entities or global entities. For global entities, the order in which you collect from your source systems must be planned because the values collected from the last source system are the values that are stored in the data repository.
When you plan your data collections, you consider the following categorizations:
Source-specific entities
Global entities
Reference entities
Transaction entities
You also consider which entities can be collected from which types of source systems using which data collection methods as follows:
Entities you can collect from the Oracle Fusion source system and from external source systems
Entities you can collect only from external source systems
When you collect data for a source-specific entity, every record from every source system is stored in the order orchestration and planning data repository. The source system association is maintained during collections. The data stored in the data repository includes the source system from which the data was collected.
For example, you collect suppliers from source system A and source system B. Both source systems contain a record for the supplier named Hometown Supplies. Two different supplier records will be stored in the data repository for the supplier named Hometown Supplies. One record will be the Hometown Supplies supplier record associated with source system A and the second record will be the Hometown Supplies supplier record associated with source system B.
The majority of the data collections entities are source-specific entities.
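The storage behavior for source-specific entities can be sketched as a repository keyed by both source system and entity instance, following the Hometown Supplies example above. The data structures are illustrative only.

```python
# Sketch: source-specific entities keep one record per source system, so
# the repository key includes the source system the data was collected from.
repository = {}

def collect_supplier(source_system, supplier_name, attributes):
    repository[(source_system, supplier_name)] = attributes

# Both source systems contain a record for the supplier Hometown Supplies.
collect_supplier("A", "Hometown Supplies", {"status": "active"})
collect_supplier("B", "Hometown Supplies", {"status": "on hold"})

# Two distinct records are stored, one associated with each source system.
assert len(repository) == 2
```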
When you collect data for a global entity, only one record for each instance of the global entity is stored in the order orchestration and planning data repository. Unlike source-specific entities, the source system association is not maintained during collections for global entities. The data stored in the data repository for global entities does not include the source system from which the data was collected. If the same instance of a global entity is collected from more than one source system, the data repository stores the values from the last collection.
For example, you collect units of measure (UOM) from three source systems and the following occurs:
During the collection of UOM from source system A, the Kilogram UOM is collected.
This is first time the Kilogram UOM is collected. The Kilogram record is created in the data repository.
During the collection of UOMs from source system B, there is no collected UOM with the value Kilogram.
Since there was no record for the Kilogram UOM in source system B, the Kilogram record is not changed.
During the collection of UOMs from source system C, the Kilogram UOM is also collected.
Since the collections from source system C include the Kilogram UOM, the Kilogram record in the data repository is updated to match the values from source system C.
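The three-step UOM example above amounts to last-write-wins storage without a source system in the key. The following sketch models that behavior with illustrative data structures.

```python
# Sketch: global entities store one record per instance; the values from
# the last source system collected overwrite any earlier values.
uom_repository = {}

def collect_uoms(records):
    for name, attributes in records.items():
        uom_repository[name] = attributes  # no source system in the key

collect_uoms({"Kilogram": {"precision": 3}})  # source system A: record created
collect_uoms({})                              # source system B: no Kilogram, record unchanged
collect_uoms({"Kilogram": {"precision": 2}})  # source system C: record updated

# The repository holds the values from the last collection that included Kilogram.
assert uom_repository["Kilogram"] == {"precision": 2}
```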
The following entities are the global entities:
Order orchestration reference objects
Units of measure (UOM) and UOM conversions
Demand classes
Currency and currency conversion classes
Shipping methods
Tip
When you collect data for global entities from multiple source systems, remember that the last record collected for each occurrence of a global entity is the record stored in the order orchestration and planning data repository. Plan which source system you want to determine the value for each global entity; that source system must be the one you collect from last.
Reference entities are entities that define codes and valid values that are then used regularly by other entities. Units of measure and demand classes are two examples of reference entities. Reference entities are typically static, with infrequent changes or additions. Whether an entity is a reference entity or a transaction entity does not impact how it is stored in the order orchestration and planning data repository.
You consider whether an entity is a reference entity or a transaction entity when determining which collection method to use for the entity. You typically use the staging tables upload method to collect data for reference entities from external source systems. You typically use the targeted collection method to collect data for reference entities from the Oracle Fusion source system, unless the reference entity is one of the entities for which the targeted collection method is not possible.
Transaction entities are the entities in the data repository that store demand and supply data. Because the data for transaction entities changes frequently, you typically use the web services upload method to collect data for transaction entities from external source systems. You typically use the continuous collection method to collect data for transaction entities from the Oracle Fusion source system.
Many of the data collection entities can be collected from both types of source systems. For the following entities you can use any of the collection methods:
Approved supplier lists
Calendars
Calendar associations
Interlocation shipping networks
Item costs
On hand
Organization parameters
Purchase orders and requisitions
Subinventories
Suppliers
Units of measure
For the following entities you can only use the Web service upload method to collect data from external source systems:
Currencies
Order orchestration reference objects
Shipping methods
Many of the data collection entities can be collected only from external source systems. For these entities, you can use both methods for collecting data from external source systems. Remember to consider frequency of change and volume of data when deciding which methods to use for which entities. The following are the entities you can collect only from external source systems:
Customer item relationships
Demand classes
Planned order supplies
Routings
Resources
Resource availability
Sourcing
Supplier capacities
Work-in-process supplies
Work-in-process component demands
Work-in-process resource requirements
To populate the order orchestration and planning data repository with data collected from external source systems, you use a combination of two data collection methods. The two methods are Web service uploads and staging tables uploads.
The following figure illustrates the two data collection methods, Web service uploads and staging tables uploads, used to collect data from external source systems. The figure illustrates that both methods require programs to be written to extract data from the external source systems. For Web service uploads, you load the data from the extracted data files directly into the order orchestration and planning data repository. Any records with errors or warnings are written to the data collections staging tables. For staging table uploads, you load the data from the extracted data files into the data collections staging tables, and then you use the Staging Tables Upload program to load the data from the staging tables into the data repository.
You determine which entities you collect from which source systems and at what frequency you need to collect the data for each entity. The data for different entities can be collected at different frequencies. For example, supplies and demands change frequently, so collect data for them frequently. Routings and resources are more static, so collect data for them less frequently.
Which data collection method you use for which entity depends upon the frequency of data changes as follows:
Web service upload
Use for entities with frequent data changes.
Staging tables upload
Use for entities with more static data.
Use the Web service upload method for entities that change frequently, such as supply and demand entities. You determine the frequency of collections for each entity. For certain entities, you may implement Web services to run every few minutes. For other entities, you may implement Web services to run hourly.
To implement and manage your Web service uploads, you must design and develop the processes and procedures to extract the data in the format needed by the data collection web services. For more information regarding the data collection Web services, refer to the Oracle Enterprise Repository. For additional technical details, see Oracle Fusion Order Promising Data Collection Staging Tables and Web Service Reference, document ID 1362065.1, on My Oracle Support at https://support.oracle.com.
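The extract-and-upload flow can be sketched as assembling an upload payload per entity. The payload shape, field names, and source system code below are hypothetical; the actual service contracts are documented in the Oracle Enterprise Repository and in document ID 1362065.1.

```python
# Sketch: assembling a payload for a data collection web service upload.
# All field names and values are illustrative, not the real service contract.
import json

def build_upload_payload(source_system, entity, rows):
    # Each upload identifies the source system and the entity being loaded,
    # along with the rows extracted from the external system.
    return json.dumps({
        "sourceSystem": source_system,
        "entity": entity,
        "rows": rows,
    })

payload = build_upload_payload(
    "EXT_FULFILLMENT_A",                                       # hypothetical source system code
    "OnHand",                                                  # a frequently changing supply entity
    [{"item": "AS54888", "organization": "M1", "quantity": 25}],
)
assert json.loads(payload)["entity"] == "OnHand"
```

A payload like this would then be submitted to the appropriate data collection web service on whatever schedule the entity's rate of change requires, from every few minutes to hourly.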
Use the staging tables upload method for entities that do not change frequently, such as routings and resources. You determine the frequency of collections for each entity. You may establish staging table upload procedures to run daily for some entities, weekly for some entities, and monthly for other entities.
To implement and manage your staging table uploads, you must develop the processes and procedures you use to extract data from an external source system. You use Oracle Data Interchange, or another data load method, to load the extracted data into the data collection staging tables. For additional technical details, such as the table and column descriptions for the data collection staging tables, see Oracle Fusion Order Promising Data Collection Staging Tables and Web Service Reference, document ID 1362065.1, on My Oracle Support at https://support.oracle.com.
For the final step of the staging tables upload method, you initiate the Load Data from Staging Tables process from the Manage Data Collection Processes page or via the Enterprise Scheduling Service.
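The two-step staging flow can be sketched with an in-memory SQLite table standing in for the data collection staging tables. The table and column names below are invented for illustration; the real table and column definitions are in document ID 1362065.1.

```python
# Sketch: the staging tables upload flow, using an in-memory SQLite table
# as a stand-in for the data collection staging tables (names illustrative).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stg_routings (item TEXT, org TEXT, routing TEXT)")

# Step 1: load the extracted data files into the staging table
# (in practice via Oracle Data Interchange or another data load method).
extracted = [("AS54888", "M1", "Standard"), ("AS54888", "M2", "Rework")]
conn.executemany("INSERT INTO stg_routings VALUES (?, ?, ?)", extracted)

# Step 2: the Load Data from Staging Tables process would then move the
# staged rows into the data repository; here we only verify the staging.
staged = conn.execute("SELECT COUNT(*) FROM stg_routings").fetchone()[0]
assert staged == 2
```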
To populate the order orchestration and planning data repository with data collected from the Oracle Fusion source system, you use a combination of two data collection methods: continuous collection and targeted collection. You typically use continuous collection for entities that change frequently and targeted collection for entities that are more static.
The following figure illustrates the two data collection methods, continuous collection and targeted collection, used in combination to collect data from the Oracle Fusion source system.
When you use the continuous collection method, you are only collecting incremental changes, and only for the entities you have included for continuous collection. Because continuous collection only collects incremental changes, you usually set up the continuous collection to run frequently, such as every five minutes.
Note
Prior to including an entity for continuous collection, you must have run at least one targeted collection for that entity.
When you collect data using the targeted collection method, you specify which entities to include in the targeted collection. For the included entities, the data in the data repository that was previously collected from the Oracle Fusion source system is deleted and replaced with the newly collected data. The data for the entities not included in the targeted collection is unchanged. You typically use the targeted collection method to collect data from entities that do not change frequently.
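The delete-and-replace semantics of a targeted collection can be sketched as follows. The repository contents and entity names are illustrative.

```python
# Sketch: targeted collection semantics. For each entity included in the
# run, previously collected data from the source system is deleted and
# replaced; entities not included are left untouched.
repository = {
    "Routings": ["old routing"],
    "OnHand": ["old on-hand balance"],
}

def targeted_collection(included_entities, newly_collected):
    for entity in included_entities:
        repository[entity] = newly_collected[entity]  # delete and replace

# A targeted collection that includes only Routings.
targeted_collection(["Routings"], {"Routings": ["new routing"]})

assert repository["Routings"] == ["new routing"]        # replaced
assert repository["OnHand"] == ["old on-hand balance"]  # unchanged
```

Continuous collection, by contrast, applies only incremental changes, which is why at least one targeted collection must establish the baseline first.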
The Global Order Promising engine is an in-memory engine that uses an in-memory copy of the data collected into the order orchestration and planning data repository. To ensure the in-memory data reflects the latest supply and demand data collected into the data repository, you should refresh the Global Order Promising data store and start the Global Order Promising server at least once a day.
The following figure illustrates that you perform data collections to populate the order orchestration and planning data repository with current data from multiple source systems. After you complete a cycle of data collections, you refresh the Global Order Promising data store with the latest data from the data repository. After you refresh the Global Order Promising data store, you start the Global Order Promising server to load a copy of the refreshed data from the data store into main memory.
To refresh the in-memory copy of the collected data with the most recently collected data, perform these two steps:
Refresh the Global Order Promising data store.
Start the Global Order Promising server.
To refresh the Global Order Promising data store, complete these steps:
Navigate to the Schedule New Process page by following this navigation path:
Navigator
Tools
Schedule Processes
Schedule New Process
Click the more link.
Select the Schedule Processes link.
Click the Submit New Request button.
In the popup window, select Job for the type.
Search for and select the process named RefreshOpDatastore.
Select the entities you want to refresh and submit the job.
To start the Global Order Promising server, you use an Oracle Fusion Global Order Promising instantiation of Oracle Enterprise Manager.
You do not need to stop the server before you start it. If the Global Order Promising server is already running when you start the Global Order Promising server, the Global Order Promising engine currently in memory continues to run until the start process is complete. The Start Global Order Promising Server process updates another engine with the current data from the Global Order Promising Server data store. When the updated engine comes up, the existing engine with the old data is automatically shut down.
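The rolling-start behavior described above can be modeled as a simple engine swap: the running engine keeps serving requests until the newly loaded engine is ready, and only then is the old engine shut down. The class below is purely illustrative.

```python
# Sketch: the Start Global Order Promising Server behavior, modeled as a
# swap between two in-memory engines (names and structure illustrative).
class PromisingService:
    def __init__(self):
        # The engine currently serving requests, loaded from an earlier
        # refresh of the data store.
        self.active_engine = {"data_version": 1}

    def start_server(self, data_store_version):
        # Load a new engine with the current data from the data store
        # while the existing engine continues to run.
        new_engine = {"data_version": data_store_version}
        old_engine, self.active_engine = self.active_engine, new_engine
        # The old engine with stale data is shut down automatically here.

service = PromisingService()
service.start_server(2)   # no stop required before starting
assert service.active_engine["data_version"] == 2
```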
Important
The Current Date attribute stored within the Global Order Promising engine is also updated when you start the Global Order Promising server. If the Global Order Promising engine is not updated at least once a day, the Global Order Promising engine may have a wrong current date, and there may be issues with promising results.
Note
You also use an Oracle Fusion Global Order Promising instantiation of Oracle Enterprise Manager to monitor performance of the Global Order Promising server, to access log files, and to stop the server when necessary.
For your data collections from the Oracle Fusion source system, you use the Manage Planning Data Collection Processes page or the Manage Orchestration Data Collection Processes page. From these pages you perform the following:
Manage your continuous collections from the Oracle Fusion source system.
Manage your collections destination server.
Perform your targeted collections from the Oracle Fusion source system.
For your data collections from external source systems, most of the management of your Web services uploads and staging tables uploads is performed external to the Oracle Fusion application pages. If you choose to perform staging tables uploads, you initiate the Perform Data Load process from the Manage Planning Data Collection Processes page, from the Manage Orchestration Data Collection Processes page, or from the Oracle Fusion Enterprise Scheduler.
To enable continuous collections, you must set up the publish data processes for the Oracle Fusion source system. The publish process performs the incremental data collections from the Oracle Fusion source system. You can start, stop, and pause the publish process. To review statistics regarding the publish process, view process statistics from the Actions menu on the Continuous Collection - Publish tab on the Manage Planning Data Collection Processes page or the Manage Orchestration Data Collection Processes page.
Note
Because continuous collections collect only net changes, you must perform at least one targeted collection for an entity before you include the entity in continuous collections.
You define the publish process parameters to determine the frequency and scope of the continuous collections publish process.
You define the frequency and scope of continuous collections by specifying the following:
Process Parameters
Process Entities
You determine how frequently the continuous collections publish process executes by specifying the frequency in minutes. The continuous collections publish process will publish incremental changes based on the frequency that was defined when the publish process was last started.
You determine which organizations will be included in the set of organizations for which data is collected by specifying an organization collection group. You can leave it blank if you want data collected from all organizations.
You determine which entities are collected during the continuous collections cycles by selecting which entities you want included in the collections. The continuous collections publish process collects incremental changes for the business entities that were included when the publish process was last started.
The collections destination server is applicable to all four data collection methods. For the continuous collections method, the collections server is the subscriber to the continuous collections publish process. From the Actions menu on the Collections Destination Server tab, you can access a daily statistics report with statistics regarding each of the collection methods. You can also access a data collections summary report.
The collection parameters are initially set to what was defined for the Oracle Fusion system when your planning source systems or order orchestration source systems were initially managed. You can fine tune the parameters for your data collections.
The data collection parameters affect the usage of system resources. The following table defines what each parameter does and provides guidelines for setting it.
Parameter | What the Parameter Does | A Typical Value for the Parameter
---|---|---
Number of Database Connections | Defines the maximum number of database connections the source server can create during the collection process. This controls the throughput of data being extracted into the source Java program. | 10
Number of Parallel Workers | Defines the maximum number of parallel workers (Java threads) used to process the extracted data. This number directly affects the amount of central processing unit and memory resources used during a collection cycle. | 30
Cached Data Entries in Thousands | During data collections, various lookup and auxiliary data are cached in the collection server to support validation. For example, currency rates may be cached in memory. This parameter controls the maximum number of lookup entries cached per lookup to prevent the server from consuming too much memory. | 10,000
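To see how these three parameters bound system resources, consider the following hypothetical Python sketch. It is not the collection server's actual implementation: a semaphore caps concurrent database extracts, a thread pool caps parallel workers, and a size-capped cache bounds per-lookup memory.

```python
from collections import OrderedDict
from concurrent.futures import ThreadPoolExecutor
from threading import BoundedSemaphore

# Hypothetical values mirroring the documented parameters.
NUM_DB_CONNECTIONS = 10      # Number of Database Connections
NUM_PARALLEL_WORKERS = 30    # Number of Parallel Workers
MAX_CACHED_ENTRIES = 10_000  # Cached Data Entries in Thousands (10 means 10,000)

db_connections = BoundedSemaphore(NUM_DB_CONNECTIONS)

class LookupCache(OrderedDict):
    """A per-lookup cache capped at MAX_CACHED_ENTRIES entries (LRU eviction)."""
    def __setitem__(self, key, value):
        super().__setitem__(key, value)
        self.move_to_end(key)
        if len(self) > MAX_CACHED_ENTRIES:
            self.popitem(last=False)  # evict the least recently used entry

def process_record(record, cache):
    with db_connections:                 # at most NUM_DB_CONNECTIONS concurrent extracts
        cache[record] = record.upper()   # stand-in for a validation lookup
        return cache[record]

cache = LookupCache()
with ThreadPoolExecutor(max_workers=NUM_PARALLEL_WORKERS) as pool:
    results = list(pool.map(lambda r: process_record(r, cache), ["kg", "ea"]))
```

Raising the worker count increases throughput but also CPU and memory usage, which is why the typical values above are conservative.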
When you collect data from multiple source systems, you often collect a variety of values for the same instance of an entity. You cross-reference data during data collections to store a single, agreed value in the order orchestration and planning data repository for each instance of an entity.
The following information explains why you might need to cross-reference your data during data collections, and what you need to do to implement cross-referencing:
Cross-reference example
Cross-reference implementation
The following table provides an example of why you might need to cross-reference your data during data collections. In the example, the Kilogram unit of measure is collected from two source systems. The source systems use a different value to represent kilogram. You decide to store kg for the value for Kilogram in the order orchestration and planning repository.
Source System | Collections Entity | Source Value | Target Value
---|---|---|---
System A | Unit of measure | kilogram | kg
System B | Unit of measure | k.g. | kg
To implement cross-referencing, you must complete the following actions:
Decide which business object to enable for cross-referencing.
For each object, work with a business analyst to decide which values map to which other values.
Use the Oracle Fusion Middleware Domain Value Map user interface to upload mappings to the corresponding domain value map.
On the Manage Planning Data Collection Processes page, enable the corresponding entity for cross-referencing.
Determine an ongoing procedure for adding new values into the domain value map.
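Conceptually, a domain value map is a lookup from a source system and source value to the single agreed target value. The following is a rough sketch with hypothetical values; the real mappings live in the Oracle Fusion Middleware Domain Value Map, not in application code.

```python
# Hypothetical domain value map for the Unit of Measure entity,
# keyed by (source system, source value) -> target value.
UOM_DVM = {
    ("System A", "kilogram"): "kg",
    ("System B", "k.g."): "kg",
}

def cross_reference(entity_dvm, source_system, source_value):
    """Return the agreed repository value, or the source value if unmapped."""
    return entity_dvm.get((source_system, source_value), source_value)

print(cross_reference(UOM_DVM, "System A", "kilogram"))  # -> kg
print(cross_reference(UOM_DVM, "System B", "k.g."))      # -> kg
```

Both source values resolve to the single value, kg, stored in the order orchestration and planning data repository.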
The continuous collection data collection method is partially supported for item costs. Item costs are collected in the next incremental collection cycle for previously existing items when one or more item organization attributes in addition to item cost have changed.
When a new item is defined, the item cost for the new item is not collected in the next incremental collection cycle. If an existing item is not changed other than an update to the item cost, the item cost change is not picked up in the next incremental collection cycle.
Tip
If items are added frequently, item costs are changed frequently, or both, then targeted collection of item costs should be routinely performed, perhaps once a day.
To use the staging tables upload method, you must load the data you extract from your external source systems into the staging tables. You can use Oracle Data Integrator to load the extracted data into the staging tables.
If you have installed Oracle Data Integrator (ODI) and configured ODI for use by Oracle Fusion applications, you can load data to the staging tables by scheduling the Perform Data Load to Staging Tables process, PerformOdiStagingLoad. To use this process, you must perform these steps and understand these details:
Steps to use the Perform Data Load to Staging Tables process
Steps to manually prepare and update the required dat files
Details regarding the Perform Data Load to Staging Tables process
Steps to verify execution status after starting the Perform Data Load to Staging Tables process
Details regarding verifying the Perform Data Load to Staging Tables process execution status
List of interface ODI scenarios run for each business entity
The Perform Data Load to Staging Tables process invokes an ODI data load. To use this process, follow these steps:
Create a data file for each business entity for which you are extracting data from your external source system. The file type for the data files must be dat. Use the sample dat files provided on My Oracle Support as templates. The data in the files you create must conform to the exact formats provided in the sample files.
To obtain the sample dat files, see Oracle Fusion Order Promising Data Collections Sample ODI Data Files, document ID 1361518.1, on My Oracle Support https://support.oracle.com.
You can open the sample dat files in a spreadsheet tool to review the sample data. The sample data shows the minimum required fields for an entity.
Place the dat files in the host where the Supply Chain Management (SCM) ODI agent is installed. The dat files must be placed at this specific location: /tmp/ODI_IN.
The path for this location is configured for the SCM ODI Agent. The SCM ODI Agent is an ODI software agent that services ODI related client requests. More information about this agent can be found in the ODI product documentation.
After ODI is installed, you must use the ODI console to refresh the variables C_LAST_UPDATED_BY and C_CREATED_BY.
Schedule the Perform Data Load to Staging Tables, PerformOdiStagingLoad, process.
You can develop data extract programs to extract data from your external source systems and store the extracted data into the required dat files in the required format. To manually add data to the dat files, follow these steps:
Open the applicable dat file in a spreadsheet tool. When you open the file, you will be prompted to specify the delimiter.
Use the tilde character, ~ , for the delimiter.
Add any data records you want to upload to the staging tables into the spreadsheet. Data for date type columns must be in the DD-MON-YY date format.
Save the worksheet from the spreadsheet tool into a text file.
Use a text editor and replace spaces between columns with the tilde character.
Verify that every line terminates with a CR and LF (ASCII 0x0D and 0x0A, respectively).
Upload the dat file to the /tmp/ODI_IN directory where the SCM ODI agent is running. The location is seeded in the ODI topology. Upload (FTP) the dat file in binary mode only.
Review the file in vi after the FTP upload to detect junk characters and, if any, remove them.
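The formatting rules above (tilde delimiter, DD-MON-YY dates, CR plus LF line terminators) can also be applied programmatically by an extract program instead of through a spreadsheet and text editor. The following is a minimal sketch with hypothetical column values; the real column layout must match the sample dat files.

```python
import os
import tempfile

MONTHS = ["JAN", "FEB", "MAR", "APR", "MAY", "JUN",
          "JUL", "AUG", "SEP", "OCT", "NOV", "DEC"]

def dat_date(d):
    """Format a date in the required DD-MON-YY format, for example 01-JAN-12."""
    return f"{d.day:02d}-{MONTHS[d.month - 1]}-{d.year % 100:02d}"

def to_dat_row(values):
    """Join column values with the tilde delimiter required by the loader."""
    return "~".join(str(v) for v in values)

from datetime import date
rows = [["ORG1", "ITEM-100", dat_date(date(2012, 1, 1)), "25"]]

# Write with explicit CR+LF terminators; upload the resulting file
# to /tmp/ODI_IN in binary mode only.
path = os.path.join(tempfile.gettempdir(), "example.dat")
with open(path, "w", newline="") as f:
    for row in rows:
        f.write(to_dat_row(row) + "\r\n")
```

Writing the terminators explicitly avoids the manual step of verifying CR and LF in a text editor after the fact.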
The Perform Data Load to Staging Tables process invokes the ODI scenario MASTER_PACKAGE that internally invokes all four projects defined in ODI for collections. Each of these four projects invokes various interfaces. Data is loaded from flat files to staging tables for all the business objects enabled for Oracle Fusion 11.1.2.0.0 through Oracle Data Integrator.
The following are specific details for the process:
Process Name: PerformOdiStagingLoad
Process Display Name: Perform Data Load to Staging Tables
Process Description: Collects planning data from flat files and loads to staging tables using Oracle Data Integrator.
ODI Project Name: SCM_BulkImport
ODI scenario Name: MASTER_PACKAGE
SCM Scheduler: SCM_ESS_ODI_SCHEDULER
Agent URL: your_host_name:your_port_no/oracleodiagent (substitute your host name and your port number)
To verify the execution status after starting the Perform Data Load to Staging Tables process, perform these steps:
The Perform Data Load to Staging Tables process does not log messages to the scheduled processes side. To check for a log message, query the Request_History table using this select statement:
Select * from fusion_ora_ess.request_history where requestid= <request_id>;
Check the Status column for the overall execution status of the job and the Error_Warning_Detail column for a detailed error message, if any.
Check the ODI scenario execution status details in the ODI operator window. The scenario names are listed in the table in the List of Interface ODI Scenarios Run for Each Business Entity section of this document.
If log directories are accessible, check the following ODI logs for specific information on ODI scenario execution path:
/slot/emsYOUR_SLOT_NUMBER/appmgr/WLS/user_projects/domains/wls_appYOUR_SLOT_NUMBER/servers/YOUR_ODI_SERVER_NAME/logs
Diagnostic: for any errors in execution
Server: for all the logs specific to ODI console
Agent: for scenario entry and exit and for session ID
When verifying the Perform Data Load to Staging Tables process, remember the following:
No logs are written at the scheduled processes side, and the session ID for the ODI scenario cannot be found there.
When viewing the process status on the Scheduled Processes page, a Success status does not mean that all the data was loaded into the staging tables successfully. The Success status indicates only that the scenario was launched successfully. The scenario status must be checked in the ODI logs.
You cannot determine the refresh_number generated by ODI for the current process run from the Scheduled Processes page. To obtain the refresh number, you must use this query to query the msc_coll_cycle_status table and check for the ODI collection_channel:
Select * from msc_coll_cycle_status order by refresh_number desc;
One or more interface ODI scenarios are run for each business entity. Each interface scenario maps to one entity. If any interface scenario fails in ODI, that entity's data is not collected into the staging tables. The following table lists the business entities and the interface ODI scenarios run for each business entity.
Business Entity | Interface ODI Scenarios
---|---
Work-in-Process Requirements | WIP_COMP_DEMANDS_SCEN, WIP_OP_RESOURCE_SCEN
Calendars | CALENDAR_SCEN, CALENDAR_WORKDAYS_SCEN, CALENDARDATES_SCEN, CALENDAR_EXCEPTIONS_SCEN, CALENDARSHIFTS_SCEN, CALENDAR_PERIODSTARTDAYS_SCEN, CALENDAR_WEEKSTARTDAY_SCEN, CALENDAR_ASSIGNMENTS_SCEN
Demand Classes | DEMAND_CLASS_SCEN
Global Supplier Capacities | GLOBAL_SUP_CAPACITIES_SCEN
Interorganization Shipment Methods | SHIPMENT_METHODS_SCEN
Item Cost | ITEM_COST_SCEN
Item Substitutes | ITEM_SUBSTITUTES_SCEN
Item Suppliers (Approved Supplier List) | ITEM_SUPPLIERS_SCEN
On Hand | ONHAND_SCEN
Organizations | ORGANIZATIONS_SCEN
Purchase Orders and Requisitions | SUPPLY_INTRANSIT_SCEN, PO_IN_RECEIVING_SCEN, PO_SCEN, PR_SCEN
Planned Order Supplies | PLANNEDORDERSUP_SCEN
Resources | RESOURCES_SCEN, RESOURCE_CHANGE_SCEN, RESOURCE_SHIFTS_SCEN, RESOURCE_AVAILABILITY_SCEN
Routings | ROUTING_OPERATION_RESOURCES_SCEN, ROUTINGS_SCEN, ROUTING_OPERATIONS_SCEN
Sourcing Rules | SOURCING_ASSIGNMENTS_SCEN, SOURCING_RULES_SCEN, SOURCING_ASSIGNMENTSETS_SCEN, SOURCING_RECEIPT_ORGS_SCEN, SOURCING_SOURCE_ORGS_SCEN
Subinventories | SUB_INVENTORIES_SCEN
Trading Partners | TRADING_PARTNERS_SCEN, TRADING_PARTNER_SITES_SCEN
Units of Measure | UOM_SCEN, UOM_CONVERSION_SCEN, UOM_CLASS_CONVERSION_SCEN
Work Order Supplies | WORKORDER_SUPPLY_SCEN
To perform a data load from the data collection staging tables, you invoke the Perform Data Load from Staging Tables process. When you invoke the process, you provide values for the parameters used by the process.
When you perform an upload from the staging tables, you specify values for a set of parameters for the Perform Data Load from Staging Tables process including specifying Yes or No for each of the entities you can load. For the parameters that are not just entities to select, the table below explains the name of each parameter, the options for the parameter values, and the effect of each option.
Parameter Name | Parameter Options and Option Effects
---|---
Source System | Select from a list of source systems.
Collection Type |
Group Identifier | Leave blank or select from the list of collection cycle identifiers. Leave blank to load all staging table data for the selected collection entities. Select a specific collection cycle identifier to load data for that collection cycle only.
Regenerate Calendar Dates |
Regenerate Resource Availability |
The parameters presented for the Perform Data Load from Staging Tables process also include a yes-or-no parameter for each of the entities you can collect using the staging tables upload method. If you select yes for all of the entities, the data collections will be performed in the sequence necessary to avoid errors caused by data references from one entity being loaded to another entity being loaded.
Important
If you do not select yes for all of the entities, you need to plan your load sequences to avoid errors that could occur because one of the entities being loaded is referring to data in another entity not yet loaded. For more information, see the articles regarding order promising or data collections on My Oracle Support at https://support.oracle.com.
The collection cycle identifier is a unique number that identifies a specific data collection cycle, or occurrence. One cycle of a data collection covers the time required to collect the set of entities specified to be collected for a specific data collection method. The collection cycle identifier is then used in statistics regarding data collections, such as the Data Collection Summary report. The collection cycle identifier is also used for a parameter in various processes related to data collections, such as the Purge Staging Tables process and the Perform Data Load process.
This topic explains how the collection cycle identifier is populated when you collect data from external source systems:
Web Service Uploads and the Collection Cycle Identifier
Staging Tables Uploads and the Collection Cycle Identifier
When you use the Web service upload data collection method, a collection cycle identifier is included as part of the collected data. You can then use the collection cycle identifier to review statistics regarding the Web service collections, or to search for error and warning records written to the data collection staging tables.
If you use the Oracle Data Integrator tool to load your extracted data into the data collections staging tables, a collection cycle identifier is created for each load session. Each record loaded into the staging table during the load session will include the collection cycle identifier for that session.
If you populate the data collection staging tables using a method other than the Oracle Data Integrator tool, you must follow these steps to populate the collection cycle identifier.
Populate the group ID in the refresh_number column of each data collections staging table. Within one cycle of loading data into the staging tables, populate the column with the same value. Obtain the group ID value as follows:
SELECT ....NEXTVAL FROM DUAL;
After a cycle of loading data into the data collections staging tables, insert a row for that cycle into the msc_coll_cycle_status table as follows:
INSERT INTO MSC_COLL_CYCLE_STATUS
(INSTANCE_CODE, INSTANCE_ID, REFRESH_NUMBER, PROC_PHASE, STATUS, COLLECTION_CHANNEL, COLLECTION_MODE, CREATED_BY, CREATION_DATE, LAST_UPDATED_BY, LAST_UPDATE_DATE)
SELECT a.instance_code, a.instance_id, :b1, 'DONE', 'NORMAL',
'LOAD_INTERFACE', 'OTHER', 'USER', SYSTIMESTAMP, USER, SYSTIMESTAMP
FROM msc_apps_instances a
WHERE a.instance_code= :b2 ;
:b1 is the group ID value populated in the refresh_number column in all staging tables for this cycle
:b2 is the instance_code for which data is loaded
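If you load the staging tables with your own scripts, the essential contract is that every staged row in one cycle carries the same group ID, and that one matching cycle-status row is recorded. The following schematic Python sketch illustrates that contract with in-memory stand-ins; the actual work is done with SQL against the staging tables, as shown above.

```python
# Illustrative only: stamp one group ID (refresh_number) on every staged
# row for a load cycle, then build the cycle-status row for that cycle.

def stamp_refresh_number(staged_rows, group_id):
    """All rows loaded in one cycle must carry the same refresh_number."""
    for row in staged_rows:
        row["refresh_number"] = group_id
    return staged_rows

def cycle_status_row(instance_code, instance_id, group_id):
    """Mirror of the documented INSERT into msc_coll_cycle_status."""
    return {
        "instance_code": instance_code,       # :b2 in the documented INSERT
        "instance_id": instance_id,
        "refresh_number": group_id,           # :b1 in the documented INSERT
        "proc_phase": "DONE",
        "status": "NORMAL",
        "collection_channel": "LOAD_INTERFACE",
        "collection_mode": "OTHER",
    }

rows = stamp_refresh_number([{"item": "A"}, {"item": "B"}], group_id=101)
status = cycle_status_row("EXT1", 42, group_id=101)
```

The Perform Data Load from Staging Tables process can then select all rows for the cycle by that single refresh_number value.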
When you collect calendars and net resource availability from external source systems, you decide whether to collect patterns or individual dates. Order promising requires individual calendar dates and individual resource availability dates to be stored in the order orchestration and planning data repository. If you collect calendar patterns or resource shift patterns, you must invoke processes to populate the order orchestration and planning data repository with the individual dates used by order promising.
You invoke the necessary processes by specifying the applicable parameters when you run data collections. The processes generate the individual dates by using the collected patterns as input. The processes then populate the order orchestration and planning data repository with the individual calendar dates and the individual resource availability dates.
When you collect calendars from external source systems, you decide whether to collect calendar patterns or individual calendar dates. Both methods for collecting data from external source systems, Web service upload and staging tables upload, include choosing whether individual calendar dates must be generated as follows:
The Web service to upload to calendars includes a parameter to run the Generate Calendar Dates process.
You control whether the process will run. If the parameter is set to yes, then after the Web service upload completes, the process will be launched to generate and store individual calendar dates.
The parameters for the Perform Data Load from Staging Tables process also include a parameter to run the Generate Calendar Dates process.
You control whether the process will run. If the parameter is set to yes, then after the load from staging tables completes, the process will be launched to generate and store individual calendar dates.
In both scenarios, calendar data is not available while the Generate Calendar Dates process is running.
When you collect calendars from the Oracle Fusion system, the Generate Calendar Dates process is run automatically.
Restriction
Only calendar strings that are exactly equal to seven days are allowed. Calendar strings with lengths other than seven are not collected. Only calendars with Cycle = 7 should be used.
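A simple pre-collection check can reject calendar patterns that are not exactly seven days long. In this sketch, the pattern is assumed to be a string of seven working/nonworking day flags; that encoding is an assumption for illustration, and the exact format is defined by the collection interfaces.

```python
def is_collectable_calendar_string(pattern):
    """Only calendar strings exactly seven days long (Cycle = 7) are collected."""
    return len(pattern) == 7

print(is_collectable_calendar_string("1111100"))  # -> True  (Mon-Fri working week)
print(is_collectable_calendar_string("11111"))    # -> False (five-day string rejected)
```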
When you collect net resource availability from external source systems, you decide whether to collect resource shift patterns or individual resource availability dates. Both methods for collecting data from external source systems, Web service upload and staging tables upload, include specifying whether individual resource availability dates must be generated as follows:
The Web service to upload to net resource availability includes a parameter to run the Generate Resource Availability process.
You control whether the process will run. If the parameter is set to Yes, then after the Web service upload completes, the process will be launched to generate and store individual resource availability dates.
The parameters for the Perform Data Load from Staging Tables process also include a parameter to run the Generate Resource Availability process.
You control whether the process will run. If the parameter is set to Yes, then after the load from staging tables completes, the process will be launched to generate and store individual resource availability dates.
In both scenarios, new resource availability data is not available while the Generate Resource Availability process is running.
You cannot collect net resource availability from the Oracle Fusion source system.
To perform a targeted data collection from the Oracle Fusion system, you use the Perform Data Collection process. When you invoke the process, you provide values for the parameters used by the process.
When you perform a targeted collection, you specify the Oracle Fusion source system to be collected from and the organization collection group to collect for. When you invoke the process, the parameters also include each of the fourteen entities you can collect from the Oracle Fusion source system with yes or no for the parameter options. The table below explains the other two parameters.
Parameter Name | Parameter Options
---|---
Source System | The source system presented for selection is determined by what system has been defined as the Oracle Fusion source system when the manage source systems task was performed.
Organization Collection Group | The organization collection groups presented for selection are determined by what organization groups were defined when the manage source systems task was performed for the selected source system.
The parameters presented also include a yes-or-no parameter for each of the entities you can collect. If you select yes for all of the entities, the data collections will be performed in the sequence necessary to avoid errors caused by data references from one entity being loaded to another entity being loaded.
Important
If you do not select yes for all of your entities, you need to plan your load sequences to avoid errors that could occur because one of the entities being loaded is referring to data in another entity not yet loaded. For more information, see the articles regarding order promising or data collections on My Oracle Support at https://support.oracle.com.
When you perform a targeted collection from the Oracle Fusion source system, you use an organization collection group to contain the collections processing to only the organizations with data that is needed for the order orchestration and planning data repository. Organization collection groups limit targeted collections from the Oracle Fusion source system to a specific set of organizations.
You perform the following actions for organization collection groups:
Define an organization collection group.
Use an organization collection group.
You define organization groups when managing source systems for the source system where the version equals Oracle Fusion. For each organization in the organization list for the Oracle Fusion source system, you can specify an organization group. You can specify the same organization group for many organizations.
You use an organization collection group when you perform a targeted collection from the Oracle Fusion source system and you want to contain the collections processing to a specific set of organizations. You specify which organization group to collect data from by selecting from the list of organization groups defined for the Oracle Fusion source system. Data will only be collected from the organizations in the organization group you specified.
For example, if only certain distribution centers in your Oracle Fusion source system are to be considered for shipments to your customers by the order promising and order orchestration processes, you could create a DC123 organization group and assign the applicable distribution centers to the DC123 organization group when managing source systems. When you perform a targeted collection for the Oracle Fusion source system, you could select DC123 for the organization collection group.
When you manage the data collection processes, you use the Process Statistics report and the Data Collection Summary report to routinely monitor your collections. When error records are reported, you query the data staging tables for further details regarding the error records. You can also review most of your collected data using the review collected data pages.
The following information sources are available for you to monitor data collections:
Process Statistics report
Data Collection Summary report
Review collected data pages
Staging table queries
You view the Process Statistics report to monitor a summary of statistics for the daily collections activity for each of your source systems. This report is available on the Actions menu when managing data collection processes for either the continuous collection publish process or the collections destination server. The day starts at 00:00 based on the time zone of the collection server.
For the Oracle Fusion source system, statistics are provided for both the continuous collection and the targeted collection data collection methods. For each external source system, statistics are provided for the Web service upload and for the staging tables upload data collection methods. The following statistics are provided in the Process Statistics report:
Number of collection cycles for the current day
Average cycle time in seconds
Average number of records
Average number of data errors
Note
The process statistics provide summary information, and are not intended for detailed analysis of the collections steps. Use the Oracle Enterprise Scheduler Service log files for detailed analysis.
You view the Data Collection Summary report to monitor statistics regarding the data collection cycles for each of your source systems. The summary report shows the results of the last 20 cycles of all collection types. This report is available on the Actions menu when managing data collection processes for the collections destination server.
The Data Collection Summary report provides information for each source system. If a source system was not subject to a data collection cycle for the period covered by the summary, an entry in the report states that there are no cycles in the cycle history for that source system. For each source system that was subject to a data collection cycle for the period covered by the summary, the following information is provided for each data collection method and collected entity value combination:
The data collection method
The collection cycle number
The entity collected and, for that entity, the number of records collected, the number of records with data errors, and collection duration
Time started
Time ended
You can review most of your collected data by using the Review Planning Collected Data page or the Review Order Orchestration Collected Data page. Both pages include a list of entities from which you select to specify the entity for which you want to review collected data. The list of entities is the same on both pages. Most of the entities listed on the review collected data pages are identical to the entities you select from when you run collections, but there are a few differences.
Some of the entities on the list of entities you select from when you review collected data are a combination or a decomposition of the entities you select from when you run collections. For example, the Currencies data collection entity is decomposed into the Currencies entity and the Currency Conversions entity on the review collected data pages. For another example, the Supplies entity on the review collected data pages is a combination of data collection entities including the On Hand entity and the Purchase Orders and Requisitions entity.
A few of the data collection entities cannot be reviewed from the review collected data pages. The data collection entities that are not available for review on the review collected data pages are Resources, Resource Availability, Routings, Work-in-Process Resource Requirements, and Customer Item Relationships.
If errors or warnings have been encountered during data collections, you can submit queries against the staging tables to examine the applicable records. For more information regarding the staging tables and staging table columns, see the articles regarding order promising or data collections on My Oracle Support at https://support.oracle.com.
When you are collecting data from external source systems, the data collection processes perform many data validation checks. If the data validations fail with errors or warnings, the steps taken by the data collection processes vary slightly depending upon whether the Web service upload data collection method or the staging tables upload data collection method is used.
In both cases, records where errors are found are not loaded into the order orchestration and planning data repository. Instead, the records are loaded into, or remain in, the applicable staging tables with an appropriate error message. Records where only warnings are found are loaded into the data repository, and the records are also loaded into, or remain in, the applicable staging tables with an appropriate warning message.
The handling of errors and warnings encountered when the data collection processes validate data during collections from external source systems depends upon which data collection method is used, Web service upload or staging tables upload.
When you are running data collections using the Web services method, records without errors or warnings are loaded into the data repository. For records with errors or warnings, the following error and warning handling steps occur:
Warnings: Records with warnings are fixed automatically, then loaded into the data repository and copied into the applicable staging tables with the appropriate warning message.
Errors: Records are loaded into the applicable staging tables instead of the data repository and are marked with the appropriate error message. When there is an error due to missing mandatory fields, where possible, the collections process retries the record. After several unsuccessful retry attempts, the record is marked as an error. In some cases, retrying is not an option, and the record is immediately marked as an error.
When you are running data collections using the staging tables upload method, the following error and warning handling steps occur:
Warnings: Records are loaded into the data repository and remain in the staging tables with the appropriate warning message. The message is associated with the record in the data repository.
Errors: When an error is due to missing mandatory fields, and a retry is possible, the collections process retries the record. After several unsuccessful retry attempts, the record is marked as an error. When retrying is not an option, the record is immediately marked as an error.
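The retry-then-mark-as-error behavior described above can be sketched as follows. This is an illustrative model only, not Oracle Fusion code: the `Result` type, the field names, and the retry limit are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Result:
    ok: bool
    retryable: bool = False   # e.g. a missing mandatory field may be retryable
    message: str = ""

MAX_RETRIES = 3  # assumed limit; the actual number of attempts is not documented

def process_staged_record(record, validate, load):
    """Validate a staged record; retry only when the failure is retryable.

    A record that passes validation is loaded into the data repository.
    A non-retryable failure (for example, an invalid supplier) is marked
    as an error immediately; a retryable failure is retried a limited
    number of times before being marked as an error.
    """
    for _ in range(MAX_RETRIES):
        result = validate(record)
        if result.ok:
            load(record)                # promoted to the data repository
            record["status"] = "LOADED"
            return record["status"]
        if not result.retryable:        # no point retrying: fail at once
            break
    record["status"] = "ERROR"          # record remains in the staging table
    record["message"] = result.message
    return record["status"]
```

The `validate` and `load` callables stand in for the data collection process's own validation and load steps.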
When a Planned Order Supplies record is collected, many validations occur for which an error is recorded if the validation fails.
For example, the supplier name is validated against the suppliers data in the order orchestration and planning data repository. If the supplier name is not found, the validation fails with an error condition, and the following steps occur:
The Planned Order Supplies record is not loaded into the data repository.
The Planned Order Supplies record is loaded into the applicable staging table, or remains in the applicable staging table, with an error message stating invalid supplier or invalid supplier site.
When a Planned Order Supplies record is collected, many validations occur for which a warning is recorded if the validation fails.
For example, the Firm-Planned-Type value in the record is validated to verify that the value is either 1 for firm or 2 for not firm. If the validation fails, the failure is handled as a warning, and the following steps occur:
The Planned Order Supplies record is loaded into the data repository with the Firm-Planned-Type value defaulted to 2 for not firm.
The Planned Order Supplies record is also loaded into the applicable staging table, or remains in the applicable staging table, with a warning message stating invalid firm planned type.
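The two validation outcomes above can be sketched in a few lines. This is an illustrative model, not Oracle code: an unknown supplier is an error (the record stays in staging only), while an invalid Firm-Planned-Type is a warning and the value is defaulted to 2 (not firm) before the record is loaded. The supplier list and field names are assumptions for the example.

```python
KNOWN_SUPPLIERS = {"Super Supply Company", "XYZ Supply Company"}  # sample data

def validate_planned_order_supply(record):
    """Return ("error", message), ("warning", message), or ("ok", None)."""
    if record.get("supplier") not in KNOWN_SUPPLIERS:
        # Error: the record is not loaded into the data repository
        return "error", "invalid supplier"
    if record.get("firm_planned_type") not in (1, 2):
        # Warning: the value is defaulted and the record is still loaded
        record["firm_planned_type"] = 2   # 2 means not firm
        return "warning", "invalid firm planned type"
    return "ok", None
```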
You use the Purge Data Repository process to delete from the order orchestration and planning data repository all data that was collected from a specific source system. You use the Purge Staging Tables process to remove data that you no longer need in the data collections staging tables.
You use the Purge Data Repository process to delete all data for a source system from the order orchestration and planning data repository. The process enables you to delete data for a specific source system. You typically use the Purge Data Repository process when one of your source systems becomes obsolete, or when you decide to do a complete data refresh for a set of collection entities.
The Purge Data Repository process has only two parameters, both of which are mandatory. This table explains the two parameters.
Parameter Name |
Parameter Options |
---|---|
Source System |
Select a source system from the list of source systems. All data for the selected source system will be deleted from the data repository. |
Purge Global Entities |
Yes or No. If you select Yes, then in addition to the applicable data being deleted for the source-specific entities, all data will also be deleted from the global entities. If you select No, data will be deleted from the source-specific entities only. |
You use the Purge Staging Tables process to delete data from the data collection staging tables.
The following table explains the parameters you specify when you run the Purge Staging Tables process. In addition to the five parameters explained below, you specify yes or no for each of the twenty-five data collection entities.
Parameter Name |
Parameter Options |
---|---|
Source System |
Select a source system from the list of source systems. Data will be deleted for this source system only. |
Record Type |
Specify which type of records to purge. |
Collection Cycle ID |
Specify a value for the collection cycle identifier to purge data for a specific collection cycle only, or leave blank. |
From Date Collected |
Specify a date to purge data from that date only, or leave blank. |
To Date Collected |
Specify a date to purge data up to that date only, or leave blank. |
An order orchestration reference object is one of the set of objects used by the orchestration processes to determine the meanings and descriptions for names or codes, such as payment terms names, freight-on-board codes, and mode-of-transport codes.
The sales order data passed to the orchestration processes contains the names or codes, but the processes need to display the meanings or descriptions. The data to determine the meanings or descriptions for the names or codes must be collected into the order orchestration and planning data repository.
For example, sales order information is passed to the order orchestration processes containing a freight-on-board code equal to 65, and the order orchestration and planning data repository contains a record with freight-on-board code equal to 65. The processes use the matching codes to determine that the freight-on-board code meaning is equal to Origin and the description is equal to Vendor's responsibility.
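The lookup described in this example amounts to matching a collected code against the reference data to retrieve its meaning and description. A minimal sketch, assuming a simple in-memory table whose contents mirror the freight-on-board example above:

```python
# Sample reference data as it might look after collection; the structure
# is an assumption for illustration, not the actual repository schema.
FOB_LOOKUP = {
    "65": {"meaning": "Origin", "description": "Vendor's responsibility"},
}

def describe_fob_code(code):
    """Resolve a freight-on-board code to its meaning and description."""
    entry = FOB_LOOKUP.get(code)
    if entry is None:
        return code, code   # fall back to displaying the raw code
    return entry["meaning"], entry["description"]
```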
Tip
To see the full list of order orchestration reference objects, review the collected data for order orchestration reference objects and view the list of values for the Lookup Type field.
To define the sources of supply for your supply chains and to define your date-effective sourcing strategies, create sourcing rules and bills of distribution. Within each sourcing rule or bill of distribution, you define one or more supply sources and a combination of rankings and quantity-based sourcing specifications for each source to define priorities across the supply sources. For each source, you also select one of three source types, and you specify the value for the attributes applicable to the selected source type.
This table lists the three replenishment source types, the definition of the source type, and the attributes to specify for each source type.
Source Type |
Source Type Definition |
Attributes to Specify |
---|---|---|
Buy from |
Sourced from an external supplier. |
Specify the supplier and supplier site. |
Make at |
Sourced from an internal organization that manufactures the item. |
Specify the manufacturing organization. |
Transfer from |
Sourced through an interorganization transfer. |
Specify the organization from which items will be transferred. |
Note
When you create sourcing rules and bills of distribution, you specify how you will replenish items. You do not specify which items you will replenish. To specify which sourcing rules or bills of distribution to use to replenish which items, you create assignment sets.
You define the following aspects of sourcing rules and bills of distribution to define your sources of supply and your sourcing strategies:
Global sourcing rules
Local sourcing rules
Bills of distribution
Effectivity dates
Source ranks, quantity-based sourcing specifications, and allocation percentages
Tip
When first designing your sourcing rules and bills of distribution, start by envisioning your assignment set. Determine the set of global sourcing rules, local sourcing rules, bills of distribution, or combinations of rules and bills that you need to implement your assignment set while minimizing the number of rules or bills to maintain. For example, you may be able to define a global sourcing rule in such a way that you need only a few local sourcing rules to assign for exceptions to the global rule.
Global sourcing rules can specify two of the source types: buy from and transfer from. Any organization can potentially replenish items by buying from any of the suppliers specified in the buy-from sources, or by transferring from any of the organizations specified in the transfer-from sources. For example, if you create a global sourcing rule with a buy-from source with Super Supply Company specified for the supplier, any of your organizations can potentially buy from Super Supply Company.
If you have a source that is applicable to most of your organizations, create a global sourcing rule for that source and local sourcing rules for the organizations for which the source is not applicable. For example, if there are 20 organizations in your company, and 19 of the organizations transfer supply from the Munich organization, create a global sourcing rule specifying transfer-from the Munich organization, and create a local sourcing rule specifying where the Munich organization gets supply from.
Local sourcing rules can specify all three source types. Because a local sourcing rule is applicable to one, and only one, organization, you specify which organization the rule is being created for when you create the rule. The replenishment sources defined in the rule are applicable only to the organization for which the rule was created. For example, suppose you create a local sourcing rule with M1 as the organization for which the rule is being created, and you add a buy-from source to the rule with XYZ Supply Company specified for the supplier. If no other sourcing rules or bills of distribution specify XYZ Supply Company for the supplier, then only the M1 organization can buy from XYZ Supply Company.
If you have designed multiple local sourcing rules with material flowing through three or more organizations, you can choose to create one bill of distribution to implement the sources instead of creating multiple local sourcing rules. Choosing to create a bill of distribution instead of sourcing rules is a personal or organizational preference. Any scenario that you can implement by creating a bill of distribution, you can also implement by creating multiple local sourcing rules.
For example, the following sourcing scenario could be implemented by three local sourcing rules or one bill of distribution:
Organization M1 sources items by purchasing from a supplier, XYZ Supply.
Organization M2 sources items by transferring from M1.
Organization M3 sources items by transferring from M2.
Use sourcing effectivity dates to modify sourcing rules and bills of distribution when sources change, such as when a new supplier contract is established or a manufacturing facility is shut down. Each rule or bill can have multiple, nonoverlapping ranges of effectivity start dates and end dates, with a different set of sources specified for each range. For example, suppose you have a sourcing rule that currently specifies a buy-from source with Acme Supplier specified for the supplier, but your company has decided to start buying from Winter Widgets instead. You modify the sourcing rule by specifying the applicable end date, the date you will no longer buy from Acme Supplier, for the current effectivity date range. You then add a new effectivity date range, specifying the date when you will start buying from Winter Widgets for the start date, and you add a buy-from source for the new effectivity date range with Winter Widgets specified for the supplier.
For each source in a sourcing rule or bill of distribution, you designate a rank to specify the order in which the sources within the rule or bill will be considered by order promising when the rule or bill is applied during a supply chain availability search. The source with the lowest number rank will be considered first, and the source with the highest number rank will be considered last. If your sourcing strategy includes using specific sources for specific quantities, you designate a from quantity, a less-than quantity, or both, for one or more sources.
Note
Because sourcing rules collected from external source systems may include split allocations for planning purposes, there may be multiple sources with the same rank and quantity range, but the allocation percentages must add up to 100 percent. The Order Promising process does not split the desired quantity when checking for availability.
The Order Promising process checks the source with the highest allocation percent first within a group of sources with the same rank. If the source with the highest allocation percent has enough supply, that source is used for the entire requested quantity. If the source with the highest allocation percent does not have enough supply, then the source with the next highest allocation percent will be checked for the entire quantity. Because split allocations are not applicable to order promising sourcing strategies, the examples provided here do not include split allocations.
The following table is an example of a sourcing rule with three ranks. Quantity-based sourcing is not being used in this example. If a supply chain search is conducted using this rule, order promising first checks whether organization M2 can make the desired quantity. If organization M2 cannot make the desired quantity, order promising then checks whether there is enough quantity at organization V1 for an interorganization transfer. If there is not enough quantity at organization V1, order promising checks whether the desired quantity can be bought from supplier Winter Widgets.
Replenishment Source and Applicable Attribute Value |
Rank |
Allocation Percent |
---|---|---|
Make at manufacturing organization M2 |
1 |
100 |
Transfer from organization V1 |
2 |
100 |
Buy from supplier Winter Widgets |
3 |
100 |
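The rank-ordered search described above, including the allocation-percent tie-breaking within a rank, can be sketched as follows. The data structures and function are assumptions made for illustration, not Global Order Promising APIs; note that the entire requested quantity must come from a single source, because order promising does not split the quantity.

```python
def promise_source(sources, quantity, available):
    """Return the first source that can cover the entire quantity.

    Sources are tried in rank order and, within a rank, in descending
    allocation-percent order; no quantity splitting is performed.
    available(name) gives the quantity that the named source can supply.
    """
    ordered = sorted(sources, key=lambda s: (s["rank"], -s["allocation"]))
    for source in ordered:
        if available(source["name"]) >= quantity:
            return source["name"]
    return None  # no single source can cover the requested quantity

# The three-rank example rule from the table above.
rule = [
    {"name": "Make at M2", "rank": 1, "allocation": 100},
    {"name": "Transfer from V1", "rank": 2, "allocation": 100},
    {"name": "Buy from Winter Widgets", "rank": 3, "allocation": 100},
]
```

For instance, if M2 has no capacity and V1 holds 50 units, a request for 40 units is promised from V1, while a request for 500 units falls through to Winter Widgets.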
This example illustrates how to define sourcing rules to implement sourcing requirements with quantity-based sourcing specified in the requirements.
You are defining the sources for a set of business requirements that initially include quantity ranges for two suppliers. The requirements change to include a third quantity range and a third supplier.
Your business initially defines the following sourcing requirements:
For quantities less than 100, buy from Supplier A.
For quantities greater than or equal to 100, buy from Supplier B.
Your business adds a new supplier, Supplier C. Your business now defines the following sourcing requirements:
For quantities less than 100, buy from Supplier A, if Supplier A has enough quantity.
For quantities greater than or equal to 100, but less than 200, buy from Supplier B, if Supplier B has enough quantity.
For quantities greater than or equal to 200, buy from Supplier C.
If Supplier A does not have enough supply for a quantity less than 100, or Supplier B does not have enough supply for a quantity between 100 and 199, buy from Supplier C for these quantities.
First, analyze your sourcing requirements to determine how many sourcing rules you need to create to implement them. The requirements specified above can be defined within one sourcing rule. After determining how many sourcing rules to define, determine how many sources must be defined for each sourcing rule. Begin by analyzing how many replenishment source types have been specified in the requirements. All of the requirements above are for the buy-from-a-supplier replenishment source type. Next, analyze how to define the From Quantity, Less Than Quantity, and Rank attributes as needed to implement your sourcing requirements.
For the requirements as initially stated, define two sources with the following values for the Source Type, Supplier, From Quantity, Less Than Quantity, Allocation Percent, and Rank attributes:
Source Type equals Buy from, Supplier equals Supplier A, Less Than Quantity equals 100, Allocation Percent equals 100, and Rank equals 1.
You do not need to specify a value for the From Quantity attribute because the source applies for any quantity less than 100.
Source Type equals Buy from, Supplier equals Supplier B, From Quantity equals 100, Allocation Percent equals 100, and Rank equals 1.
You do not need to specify a value for the Less Than Quantity attribute because the source applies for any quantity greater than or equal to 100.
For the requirements after the third supplier is added, edit the buy-from-Supplier-B source and add two sources for Supplier C to define the four sources with the following values for the Source Type, Supplier, From Quantity, Less Than Quantity, Allocation Percent, and Rank attributes:
Source Type equals Buy from, Supplier equals Supplier A, Less Than Quantity equals 100, Allocation Percent equals 100, and Rank equals 1.
You do not need to specify a value for the From Quantity attribute because the source applies for any quantity less than 100.
Source Type equals Buy from, Supplier equals Supplier B, From Quantity equals 100, Less Than Quantity equals 200, Allocation Percent equals 100, and Rank equals 1.
Source Type equals Buy from, Supplier equals Supplier C, From Quantity equals 200, Allocation Percent equals 100, and Rank equals 1.
You do not need to specify a value for the Less Than Quantity attribute because the source applies for any quantity greater than or equal to 200.
Source Type equals Buy from, Supplier equals Supplier C, and Rank equals 2.
You do not need to specify a value for the From Quantity attribute or Less Than Quantity attribute because there is no minimum or maximum value in this case.
This table lists the two sources you define to implement the following sourcing requirements:
Check Supplier A for order quantities less than 100.
Check Supplier B for order quantities greater than or equal to 100.
Type |
Supplier |
From Quantity |
Less Than Quantity |
Quantity Unit of Measure |
Allocation Percent |
Rank |
---|---|---|---|---|---|---|
Buy from |
A |
|
100 |
Each |
100 |
1 |
Buy from |
B |
100 |
|
Each |
100 |
1 |
This table lists the four sources you define to implement the following sourcing requirements:
Check Supplier A for order quantities less than 100.
Check Supplier B for quantities greater than or equal to 100 and less than 200.
Check Supplier C for quantities greater than or equal to 200.
Check Supplier C for quantities less than 200 when Supplier A or Supplier B does not have the desired quantity.
Type |
Supplier |
From Quantity |
Less Than Quantity |
Quantity Unit of Measure |
Allocation Percent |
Rank |
---|---|---|---|---|---|---|
Buy from |
A |
1 |
100 |
Each |
100 |
1 |
Buy from |
B |
100 |
200 |
Each |
100 |
1 |
Buy from |
C |
200 |
|
Each |
100 |
1 |
Buy from |
C |
|
|
|
100 |
2 |
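The quantity-based matching in the four-source table above can be sketched as follows. This is an illustrative model, not Oracle code: a source matches an order when the quantity falls in its From Quantity (inclusive) to Less Than Quantity (exclusive) range, and the matching sources are then checked for supply in rank order. The dictionary keys are assumptions for the example.

```python
def candidate_sources(sources, quantity):
    """Return the suppliers whose quantity range covers the order, in
    rank order, so supply can be checked against each in turn."""
    matches = [
        s for s in sources
        if (s["from_qty"] is None or quantity >= s["from_qty"])
        and (s["less_than"] is None or quantity < s["less_than"])
    ]
    return [s["supplier"] for s in sorted(matches, key=lambda s: s["rank"])]

# The four sources from the table above; None means no bound is specified.
four_sources = [
    {"supplier": "A", "from_qty": 1, "less_than": 100, "rank": 1},
    {"supplier": "B", "from_qty": 100, "less_than": 200, "rank": 1},
    {"supplier": "C", "from_qty": 200, "less_than": None, "rank": 1},
    {"supplier": "C", "from_qty": None, "less_than": None, "rank": 2},
]
```

An order for 50 units matches Supplier A at rank 1 with Supplier C as the rank-2 fallback; an order for 150 matches Supplier B then C; an order for 250 matches only Supplier C.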
You create assignment sets to implement the supply chain networks for your sourcing strategies. You implement your supply chain network by selecting the appropriate sourcing assignment level when you assign a sourcing rule or bill of distribution to an assignment set. You create alternative assignment sets, with different sourcing assignments, to model alternative supply chains.
The following figure shows an example where three sourcing rules and one bill of distribution are assigned to two assignment sets:
The first sourcing rule, SR1, is assigned to the first assignment set, AS1, at the item and organization assignment level for item B241 and organization M1.
The bill of distribution, BD1, is assigned to the first assignment set, AS1, at the item assignment level for item C105.
The second sourcing rule, SR2, is assigned to the first assignment set, AS1, at the organization assignment level for organization M2.
The second sourcing rule, SR2, is also assigned to the second assignment set, AS2, but it is assigned to AS2 at the item assignment level for item C105.
The third sourcing rule, SR3, is assigned to the second assignment set AS2, at the organization assignment level for organization M2.
When the supply chain network implemented by assignment set AS2 is followed, Item C105 is replenished according to the sourcing means specified in the sourcing rule SR2. When the supply chain network implemented by assignment set AS1 is followed, Item C105 is replenished according to the sourcing means specified in the bill of distribution BD1.
When you create sourcing rules and bills of distribution, you create descriptions of the means by which you replenish items, but you do not associate these means with any specific items. You create assignment sets to define your supply chain sourcing and transfer links by assigning sourcing rules and bills of distribution to specific items, customers, organizations, categories, demand classes, or regions. For each sourcing assignment within an assignment set, you select the applicable sourcing assignment level to implement the scope of the sourcing rule or bill of distribution for the specific sourcing assignment.
When you add new replenishment sources, change your strategies for using your existing sources, or delete replenishment sources, you edit existing assignment sets, or create new assignment sets, to incorporate these changes into your supply chains. When you edit assignment sets, you add new sourcing assignments to the assignment set, delete existing sourcing assignments from the assignment set, or make changes to the assignment level and assignment attributes for existing sourcing assignments. You edit assignment sets on the Edit Assignment Set page, or in a spreadsheet by choosing to edit in spreadsheet while on the Manage Assignment Sets or Edit Assignment Set pages.
When you design an assignment set, you determine the sourcing assignment level for each sourcing assignment contained within the assignment set. To implement well-designed assignment sets, you must know which sourcing assignment levels take precedence over which other sourcing assignment levels.
Two aspects to understand regarding sourcing assignment levels are:
The sourcing assignment levels and their levels of granularity
Sourcing demand types and the sourcing assignment levels
To determine which sourcing assignments to include in an assignment set, you need to know which assignment levels override which assignment levels. An assignment level that is more granular overrides an assignment level that is less granular.
For example, the Item and Customer and Customer Site assignment level is more granular than the Item and Customer assignment level. If a customer has 12 customer sites, and your sourcing strategy is the same for a specific item at 11 of the 12 customer sites, you only need to add these two sourcing assignments to your assignment set to implement this aspect of your sourcing strategy:
A sourcing assignment at the Item and Customer assignment level to implement which sourcing rule or bill of distribution is applicable for orders placed for the item by the customer at 11 of the customer sites.
A sourcing assignment at the Item and Customer and Customer Site assignment level to implement which sourcing rule or bill of distribution is applicable for orders placed for the item by the customer at the twelfth customer site.
If an order for the item is received for the customer at the twelfth customer site, then the sourcing rule or bill of distribution assigned at the Item and Customer and Customer Site level will be applied. If an order for the item is received for the customer for any of the other eleven sites, then the sourcing rule or bill of distribution assigned at the Item and Customer assignment level will be applied.
The sourcing assignment levels, listed most granular to least granular, are:
Item and customer and customer site: Applies to a specific item for a specific customer at a specific customer site.
Item and customer: Applies to a specific item for a specific customer at all of the customer's sites.
Item and demand class: Applies to a specific item in a specific demand class.
Item and region: Applies to a specific item in a specific region or zone.
Item and organization: Applies to a specific item at a specific organization.
Category and customer and customer site: Applies to all items in a specific item category for a specific customer at a specific customer site.
Category and customer: Applies to all items in a specific item category for a specific customer at all of the customer's sites.
Category and demand class: Applies to all items in a specific item category for a specific demand class.
Category and organization: Applies to all items in a specific item category at a specific organization.
Item: Applies to a specific item in all regions, in all demand classes, and for all customers and all organizations.
Category and region: Applies to all items in a specific item category for a specific region.
Category: Applies to all items in a specific item category in all regions, in all demand classes, and for all customers and all organizations.
Customer and customer site: Applies to a specific customer at a specific customer site for all items.
Customer: Applies to a specific customer at all of the customer's sites for all items.
Demand class: Applies to a specific demand class for all customers and all items.
Region: Applies to a specific region or zone for all demand classes, all customers, and all items.
Organization: Applies to a specific organization for all categories and all items.
Global: Applies to all regions and zones, all demand classes, all customers, all items, and all organizations.
Note
The assignment levels that include category are available only if a category set has been defined for the Sourcing Rule Category Set profile option.
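The granularity ordering above, and the rule that the most granular matching assignment wins, can be sketched as follows. The abbreviated level names and the data structures are assumptions for illustration; the ordering mirrors the most-granular-to-least-granular list above.

```python
# Assignment levels ordered most granular (index 0) to least granular.
LEVELS = [
    "item-customer-site", "item-customer", "item-demand-class",
    "item-region", "item-organization", "category-customer-site",
    "category-customer", "category-demand-class", "category-organization",
    "item", "category-region", "category", "customer-site", "customer",
    "demand-class", "region", "organization", "global",
]

def applicable_assignment(assignments):
    """Given the sourcing assignments that match a demand, return the
    rule or bill assigned at the most granular assignment level."""
    best = min(assignments, key=lambda a: LEVELS.index(a["level"]))
    return best["rule"]
```

In the twelve-site example above, an order for the twelfth site matches both assignments, and the item-and-customer-and-customer-site assignment overrides the item-and-customer one.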
When you create an assignment set, all assignment levels are applicable. When sourcing logic determines which sourcing assignment to use, the type of sourcing need determines what attribute values have been provided, which determines which assignment levels are considered.
Sourcing for sales order or forecast demand, also known as independent demand, specifies a value for one or more of the following attributes: item, customer, customer site, and demand class. Sales orders always specify the item, customer, and customer site. The postal code included in a customer site is used to derive the region. Therefore, for independent-demand sourcing, the sourcing logic considers sourcing assignments where the assignment level includes customer site, customer, item, demand class, or region. A sourcing assignment at the global assignment level is also considered.
Organization demand specifies a value for the item. The category value is derived from the category to which the item belongs. The organization for which the demand exists defines the organization value. Therefore, for organization-demand sourcing, the sourcing logic considers sourcing assignments where the assignment level includes item, category, or organization. A sourcing assignment at the global assignment level is also considered.
Note
When sourcing logic is determining where to get the supply from for a specific independent demand, such as the demand specified by a fulfillment line, the answer may be to source it from an organization that doesn't have the supply on hand. At that point, the sourcing logic will use the assignment levels applicable to organization demand to determine how to source the supply for that organization.
Tip
If you are checking the availability for a fulfillment line, and you are viewing the pegging tree presented when you view the details of an availability option, you can see the supply chain followed to determine how to source the fulfillment line.
The sourcing assignment levels that you select when you create sourcing assignments in an assignment set formulate a sourcing hierarchy for that assignment set. Order promising uses the sourcing hierarchy to determine which sourcing rule or bill of distribution to follow to find a source for a specific item. Order promising always uses the most specific sourcing rule or bill of distribution that is applicable in the hierarchy.
Note
When order promising conducts a supply chain search, a profile option, the Default Order Promising Assignment Set profile option, designates which assignment set will be applied. Order promising uses the sourcing hierarchy to determine which sourcing rule or bill of distribution to follow from the rules or bills within the designated assignment set.
The position of a sourcing rule or a bill of distribution in the sourcing hierarchy is determined by these two factors:
The assignment level at which you assigned the sourcing rule or bill of distribution to the assignment set.
The rule or bill type, which can be global sourcing rule, local sourcing rule, bill of distribution, or source organization. Source organization is the type used to designate that the set of item attribute values, rather than a sourcing rule or bill of distribution, determines the source.
Tip
Understanding and using the power of the sourcing hierarchy in an assignment set can make the designing and managing of sourcing relationships easier.
For example, if a plant initially receives all items belonging to a specific item category, such as the Fasteners item category, from Supplier A, then the sourcing rule to buy from Supplier A can be assigned at the Category assignment level for the Fastener item category.
If you then determine that a specific fastener is to be sourced from a different supplier, Supplier B for example, then you can assign a different sourcing rule to buy from Supplier B at the item level for the specific fastener. The detailed-to-general hierarchy determines that the specific fastener will be sourced from Supplier B, while all other fasteners are still sourced from Supplier A.
The sourcing hierarchy can be envisioned as a detailed-to-general table where each row in the table is a combination of assignment level and rule type. Each row in the hierarchy is more specific than the row below it. The topmost row, the row where a sourcing rule is assigned at the item and customer and customer site assignment level, is the most specific row. The bottommost row, the row where a global sourcing rule is assigned at the global assignment level, is the most general row. You use the sourcing hierarchy to answer which sourcing rule, bill of distribution, or set of item attribute values will be used to find a source for a specific combination of values of these four criteria:
Assignment set
Date
Organization
Item
For the sourcing rules and bills of distribution within the assignment set where the effective date of the sourcing assignment meets the date criteria, each rule or bill is associated with a specific row in the sourcing hierarchy. The sourcing assignment attribute values, such as the item value, determine which of the rules, bills, and set of item attributes are applicable to the specific criteria set. Multiple rules, bills, or item attributes can be applicable; therefore, multiple rows can be applicable. The rule, bill, or set of item attributes associated with the highest row in the hierarchy is the rule, bill, or set of item attributes that will be followed to determine the source.
From the Manage Assignment Sets page, you can select the View Sourcing Hierarchy button to view a table containing rows of the sourcing hierarchy. The most specific, most granular, row is the top row. The least specific, least granular row, is the bottom row.
Assignment Level |
Sourcing Rule Type |
---|---|
Item and organization |
Sourcing rule |
Item and organization |
Source organization |
Category and organization |
Sourcing rule |
Item |
Bill of distribution |
Item |
Sourcing rule |
Category |
Bill of distribution |
Category |
Sourcing rule |
Organization |
Sourcing rule |
Organization |
Source organization |
Global |
Bill of distribution |
Global |
Sourcing rule |
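Resolution against the hierarchy can be sketched as follows: each applicable rule or bill occupies one (assignment level, rule type) row, and the rule on the highest, most specific row wins. The row list mirrors the table above; the data structures and abbreviated names are assumptions for illustration.

```python
# Hierarchy rows ordered most specific (index 0) to most general,
# matching the table above.
HIERARCHY = [
    ("item-organization", "sourcing rule"),
    ("item-organization", "source organization"),
    ("category-organization", "sourcing rule"),
    ("item", "bill of distribution"),
    ("item", "sourcing rule"),
    ("category", "bill of distribution"),
    ("category", "sourcing rule"),
    ("organization", "sourcing rule"),
    ("organization", "source organization"),
    ("global", "bill of distribution"),
    ("global", "sourcing rule"),
]

def winning_rule(applicable):
    """applicable: dicts with 'level', 'type', and 'name' for the rules
    and bills that match the assignment set, date, organization, and
    item criteria. Return the name on the most specific row."""
    return min(
        applicable,
        key=lambda r: HIERARCHY.index((r["level"], r["type"])),
    )["name"]
```

For example, a sourcing rule assigned at the item level overrides a global sourcing rule when both are applicable to the same criteria set.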
Tip
You can view the sourcing hierarchy and initiate a search to ask "Where does this organization get this item on this date?" If you need to analyze why the order promising process returned results that were different than what you expected, you can view and search the sourcing hierarchy to determine which sourcing rule would be used for your set of criteria.
When managing or editing assignment sets, you use the Edit in Spreadsheet button to add, edit, or delete the sourcing rule or bill of distribution assignments for an assignment set in a spreadsheet. If you are managing assignment sets, you must select an assignment set before you can choose to edit in spreadsheet.
Set profile options to specify the following for Oracle Fusion Global Order Promising:

- The sourcing assignment set and lead time multiplier used by the Check Availability process, as well as whether the process can invoke external order promising web services
- The number of minutes that results from the Check Availability process remain valid on the Check Availability page, as well as whether the page displays analytics
- The default number of display days for the Review Supply Availability page, as well as the organization calendar used for supply buckets in the Supply Availability report
- The category set used when assignment sets are created
This table lists the profile options that affect the Check Availability process. If the profile option does not have a default value, the Default Value column in the table is blank.
| Profile Option Display Name | Default Value | Effect |
| --- | --- | --- |
| Order Promising Sourcing Assignment Set | | Defines which sourcing assignment set is used by the supply allocation and check availability processes |
| Supplier Capacity Accumulation Lead Time Multiplier | 1 | Defines the multiplier applied to the approved supplier list lead time to determine the date when accumulation of supplier capacity begins |
| External ATP Web Service Enabled | No | If enabled, allows the Check Availability process to invoke external order promising web services |
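To make the lead time multiplier concrete, the sketch below assumes (this exact arithmetic is an assumption for illustration, not the product's documented implementation) that the capacity accumulation start date is the current date plus the multiplier times the approved supplier list lead time. The function name is hypothetical.

```python
from datetime import date, timedelta

def capacity_accumulation_start(today, asl_lead_time_days, multiplier=1):
    """Hypothetical illustration: date when supplier capacity begins to
    accumulate, assumed here to be the approved supplier list lead time
    scaled by the profile-option multiplier, counted from today."""
    return today + timedelta(days=asl_lead_time_days * multiplier)

# With the default multiplier of 1 and a 10-day approved supplier list
# lead time, accumulation would begin 10 days out under this assumption.
start = capacity_accumulation_start(date(2012, 1, 2), 10)
```

Under this reading, raising the multiplier pushes the accumulation start date further out, so near-term supplier capacity is treated more conservatively.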
This table lists the profile options that affect the Check Availability page.
| Profile Option Display Name | Default Value | Effect |
| --- | --- | --- |
| Timeout for Check Availability Results | 10 | Sets the number of minutes that the results returned by the Check Availability process remain valid on the Check Availability page |
| Analytics for Check Availability Page Enabled | Yes | If enabled, the Check Availability page displays analytics |
| Fulfillment Line Distribution Analytic Days for First Date Range | 2 | Sets the number of days for the first lateness range in the Fulfillment Line Distribution analytic |
| Fulfillment Line Distribution Analytic Days for Second Date Range | 7 | Sets the number of days for the second lateness range in the Fulfillment Line Distribution analytic |
| Fulfillment Line Distribution Analytic Days for Third Date Range | 14 | Sets the number of days for the third lateness range in the Fulfillment Line Distribution analytic |
This table lists the profile options that affect the Review Supply Availability page and the Supply Availability report. If the profile option does not have a default value, the Default Value column in the table is blank.
| Profile Option Display Name | Default Value | Effect |
| --- | --- | --- |
| Default Display Days in Review Supply Availability Page | 21 | Sets the number of horizon days for the Review Supply Availability page if an end date was not entered on the ATP Check Availability page |
| Organization Calendar for Supply Buckets in Supply Availability Report | | Defines the organization calendar to use for the weekly and period supply buckets in the Supply Availability report |
This table lists the Sourcing Rule Category Set profile option, which has no default value. You must define a value for this profile option to make the assignment levels that include category available as choices when creating assignment sets.
| Profile Option Display Name | Effect |
| --- | --- |
| Sourcing Rule Category Set | Determines which category set is used when defining assignment sets |