Configuring the Incremental File Adapter in Data Management

Note:

Options when defining the Data Load Rule in Data Management enable you to decide if you're loading only incremental data into Workforce or loading all data every time.

To set up an incremental data source file:

  1. Add an incremental data source:
    1. From the Home page, click the Navigator, and then under Integration, click Data Management.
    2. Click the Setup tab, and then under Register, click Target Application.

    3. Under Target Application Summary, click Add, and then Data Source.

    4. Under Source System, select Incremental File.
    5. In Prefix, you can specify a prefix to make the source system name unique.

      Use a prefix when the source system name you want to add is based on an existing source system name. The prefix is joined to the existing name. For example, if you want to name an incremental file source system the same name as the existing one, you might assign your initials as the prefix.

    6. Click OK.
    7. From the Select dialog box, select the source data load file from the Inbox. If the file is missing, click Upload to add it to the Inbox (server /u03/inbox/inbox).

      You may need to expand the Home folder, and then select Inbox to see the source file listing. The file must be a delimited file using one of the supported delimiters, and must contain a header record for each dimension in the first row. The data field is the last column in the file. See Preparing the Source Data File.
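
      As a minimal sketch, a comma-delimited incremental source file for a Workforce load could look like the following. The dimension names and member values are hypothetical and only illustrate the required layout: a header record naming each dimension in the first row, with the data value in the last column.

        Employee,Job,Entity,Account,Data
        EMP_1001,ENGINEER,SALES_US,SALARY_BASIS,50000
        EMP_1002,ANALYST,SALES_US,SALARY_BASIS,42000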

    8. Click OK and then Save.

      The system automatically creates the dimension details.

  2. Set up the Import Format, which describes the source file structure and is executed when the source file is imported:
    1. On the Setup tab, under Integration Setup, click Import Format.
    2. Under Import Format Summary, click Add.
    3. Under Details, enter a name for the Import Format.
    4. In Source, browse to select your source.
    5. From File Type, select the delimited file type.

      Oracle recommends selecting Delimited - All Data Type, which supports loading both text and numeric data.

    6. From the File Delimiter drop-down list, select the delimiter used in the source data file: comma, vertical bar, exclamation point, semicolon, colon, tab, or tilde.
    7. In Target, browse to select your Planning application, and then click Save.
    8. Under Mappings, map dimensions between the Source Column and the target application, and then click Save.

      The Source Column is populated with the dimensions from the header row in your source data file.
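
      For example, with the hypothetical sample file shown in step 1, the Source Column list would show Employee, Job, Entity, and Account; each is then mapped to the corresponding dimension in the target application (the names below are illustrative only):

        Source Column    Target Dimension
        Employee         Employee
        Job              Job
        Entity           Entity
        Account          Account

      The last column of the file is treated as the data field, as noted in step 1.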

      Note:

      Only single-period loads are supported.

      For more information on Import Formats, see Working with Import Formats in Administering Data Management for Oracle Enterprise Performance Management Cloud.

  3. Define the Location, which is the level at which a data load is executed in Data Management. The Location specifies where to load the data and is associated with the Import Format.
    1. On the Setup tab, under Integration Setup, select Location.
    2. Under Location, click Add.
    3. Under Details, on the Location Details tab, enter a name for the Location.
    4. Browse to select your Import Format.
    5. Enter the Functional Currency, and then click Save.

    For more information, see Defining Locations in Administering Data Management for Oracle Enterprise Performance Management Cloud.

  4. Create member mappings:
    1. On the Workflow tab, under Data Load, select Data Load Mapping.
    2. At the bottom of the page, verify the POV for the Location.
      1. Click the link for Location, which displays the Select Point of View dialog box.
      2. Select your Location, Period, and Category (Scenario).
      3. Optional: Select Set as Default to retain this POV.
      4. Click OK.
    3. Map the members from the source that you are loading to the target application:
      1. At the top of the page, from Dimension, select a dimension in your source file.
      2. Click one of the five member mapping tabs (for example, the Like tab), and then click Add.
      3. Enter an asterisk (*) in the Source Value column and also in the Target Value column to represent all values.
      4. Click Save after mapping members for each dimension.

        You must create a member mapping for each dimension in the source data file.

        For more information, see Creating Member Mappings in Administering Data Management for Oracle Enterprise Performance Management Cloud.

  5. Select the data load rule: On the Workflow tab, under Data Load, select Data Load Rule.
    1. At the bottom of the page, verify the POV for the Location, as in Step 4b.
    2. In the Data Rule Summary area, click Add.
    3. Under Details, enter the data load rule name.
    4. In Category, select the category to map source system data to target Scenario members.
    5. In Period Mapping Type, select Default.
    6. You don't need to specify an Import Format, because the Import Format from the Location is used. Specify an Import Format only when you want to override the Import Format for the Location.
    7. From the Target Plan Type drop-down list, select OEP_WFP.
    8. On the Source Filters tab, in Source File, browse to select the data file that contains the data you're loading. It can be the same file from which you created the data source application, or a different file that contains the appropriate header as well as data.

      The file can have the same name as the original file, or a new name. Data Management automatically determines the incremental differences between the previously loaded file and the file you select. For example, if file A.txt has 100 rows and file B.txt has 300 rows of which the first 100 are identical, select A.txt for your first load, when the Last Process ID is 0. Run the second load against B.txt; the Last Process ID automatically points to the process ID that was assigned to the load of A.txt.

    9. In Incremental Processing Option, select whether to sort data in the source file:
      • Do not sort source file—The source file is compared as provided. This option assumes that the source file is generated in the same sort order each time. Data Management compares the files, and then extracts the new and changed records (a conceptual sketch of this comparison follows the procedure). This option makes the incremental file load perform faster.
      • Sort source file—Data Management sorts the source file before performing the file comparison for changes. The sorted file is then compared to the previous sorted version of the file. Sorting a large file consumes significant system resources, so if the source system already provides the file in a consistent sort order, avoid this option; it slows performance.

        Note:

        If a rule uses the Do not sort source file option and you then switch to the Sort source file option, the first load produces invalid results because the two files are in a different order. Subsequent runs load data correctly.

    10. Review the Last Process ID value.

      When the load is first run against the original data file, the Last Process ID shows a value of 0. When the load is run again, the Last Process ID shows the run number of the last load of the original source data file. If the newly created file comparison version and the original data file show no differences, or if the file is not found, the Last Process ID is set to the ID of the last load that ran successfully.

      To reload all data, set the Last Process ID back to 0, and select a new source file to reset the baseline.

      Note:

      Copies of the source data file are archived for only the last 5 versions and retained for a maximum of 60 days. After 60 days, set the Last Process ID to 0 and then perform the load.
    11. Click Save.
  6. Select the Load Method for the target application and add the Workforce business rules that execute the logic for the incremental load when the data load rule is run:
    1. Click the Setup tab, and under Register, click Target Application.
    2. Under Target Application Summary, from the Type column, select Planning.
    3. Under Application Details, click the Application Options tab.
    4. For Load Method, select All data types with security, and then click OK.
    5. Click the Business Rules tab, and then click Add.
    6. Under Business Rule, enter or paste the Workforce business rule name.

      For information on Workforce rules, see Deciding Which Workforce Rules to Add. For more information on adding business rules in Data Management, see Adding Business Rules in Administering Data Management for Oracle Enterprise Performance Management Cloud.

    7. Under Scope, select Data Rule.
    8. Under Data Load Rule, select the data load rule you created, and then click Save.
  7. Preview the data before exporting it.
    1. Click the Workflow tab, and then Data Load Rule.
    2. Click Execute.
    3. On the Execute Rule dialog box, select Import from Source and verify the Start Period and End Period.

      If you want to send the incremental data to the target application now, also select Export to Target.

    4. Click Run.

    See Using the Data Load Workbench in Administering Data Management for Oracle Enterprise Performance Management Cloud.

  8. You're now ready to execute the incremental data load. See Executing an Incremental Workforce Data Load.
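
The file comparison behavior described under Incremental Processing Option in step 5 can be pictured with a short, conceptual sketch in Python. This is not Data Management's implementation; it only models, under simplified assumptions (one record per line and the previously loaded file retained as the comparison baseline), how comparing the prior file with the newly selected file yields only the new and changed records, and why the sort option matters when the source system does not write rows in a consistent order.

  # incremental_sketch.py -- conceptual illustration only; this is not the
  # Data Management engine, just a simplified model of the file comparison.
  import difflib

  def extract_incremental(previous_path, current_path, sort_files=False):
      """Return (header, records): the lines in the current file that are
      new or changed relative to the previously loaded file."""
      with open(previous_path) as f:
          prev = f.read().splitlines()
      with open(current_path) as f:
          curr = f.read().splitlines()

      # The first row is the header record; only data records are compared.
      header, prev_data, curr_data = curr[0], prev[1:], curr[1:]

      if sort_files:
          # "Sort source file": both versions are sorted before the comparison,
          # so files generated in a different order still line up correctly.
          prev_data, curr_data = sorted(prev_data), sorted(curr_data)
      # "Do not sort source file": the files are compared as provided, which
      # assumes the source system writes them in the same order every time.

      # Records that appear only in the new file are treated as new or changed.
      delta = [line[2:] for line in difflib.ndiff(prev_data, curr_data)
               if line.startswith("+ ")]
      return header, delta

  if __name__ == "__main__":
      # Example from step 5: A.txt is the baseline load and B.txt the next file;
      # only the rows of B.txt that were not already loaded from A.txt remain.
      header, delta = extract_incremental("A.txt", "B.txt")
      print(header)
      print("\n".join(delta))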

Tip:

After loading data, to quickly update and process data on multiple existing employees, jobs, or entity defaults in Workforce, you can use the Mass Update forms. These forms enable you to quickly review and edit employees, jobs, and entity defaults after data is loaded. Designed for optimal processing efficiency, each form is associated with a Groovy rule that processes only the changed data. See Updating Multiple Employees and Jobs Details in Working with Planning Modules.