8 Data File Generation

When you are implementing Retail Analytics and Planning (RAP) without using an Oracle merchandising system for foundation data, or you are providing historical data from non-Oracle sources, you will need to create several data files following the platform specifications. This chapter provides guidance on the data file formats, structures, business rules, and other considerations that must be accounted for when generating the data.

Important:

Do not begin data file creation for RAP until you have reviewed this chapter and have an understanding of the key data structures used throughout the platform.

For complete column-level definitions of the interfaces, including datatype and length requirements, refer to the RI and AI Foundation Interfaces Guide in My Oracle Support. From the same document, you may also download Data Samples for all of the files covered in this chapter.

File Types and Data Formats

The shared platform data files discussed in this chapter may use a standard comma-delimited (CSV) file format, with text strings enclosed by quotation marks or other characters. The files must be UTF-8 encoded; other encoding types such as ANSI are not supported and may cause load failures due to unrecognized characters. The first line of each file must contain the column headers, and all subsequent lines contain the input data. For specific columns in each of these files, the following standards can be used as a guideline (though they can be changed by configuration options).

Table 8-1 Foundation File Formatting

Datatype | Format | Example | Explanation
Number | 0.00 | 340911.10 | Numbers should be unformatted, with periods for decimal places. Commas or other symbols should not be used within the numerical values.
Character | "abc" | "Item #4561" | Any alphanumeric string, which can optionally be enclosed in quotation marks to encapsulate special characters such as commas in a descriptive value.
Date | YYYYMMDD | 20201231 | Dates should be provided as simple 8-digit values in year-month-day sequence, with no formatting.
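
As an illustration of these standards, the following is a minimal Python sketch that writes a UTF-8 encoded CSV with a header row on line 1 and quoted character values (the column list is truncated and the values are illustrative only):

import csv

# newline="" lets the csv module control line endings; encoding must be UTF-8
with open("PRODUCT.csv", "w", encoding="utf-8", newline="") as f:
    writer = csv.writer(f, quoting=csv.QUOTE_NONNUMERIC)
    writer.writerow(["ITEM", "ITEM_DESC", "PACK_FLG"])  # header row on line 1
    # Quoting encapsulates the embedded comma in the description
    writer.writerow([190085210200, "2IN1 SHORTS, BLACK", "N"])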

Context Files

Before creating and processing a data file on the platform, choose the fields that will be populated and instruct the platform to look for data only in those columns. This configuration is handled through the use of Context (CTX) Files that are uploaded alongside each base data file. For example, the context file for PRODUCT.csv will be PRODUCT.csv.ctx (appending the .ctx extension to the end of the base filename).

Within each context file you must provide a single column of values containing:

  •  One or more parameters defining the behavior of the file load and the format of the file:

    #TABLE#<Staging Table Name>#
    #DELIMITER#<Input Value>#
    #DATEFORMAT#<Input Value>#
    #REJECTLIMIT#<Input Value>#
    #RECORDDELIMITER#<Input Value>#
    #IGNOREBLANKLINES#<Input Value>#
    #SKIPHEADERS#<Input Value>#
    #TRIMSPACES#<Input Value>#
    #TRUNCATECOL#<Input Value>#

  •  The list of fields contained in the source file, in the order in which they appear in the file specification, following the #COLUMNLIST# tag:

    #COLUMNLIST#
    <COL1>
    <COL2>
    <COL3>

The following is an example context file for the CALENDAR.csv data file:

File Name: CALENDAR.csv.ctx

File Contents:
#TABLE#W_MCAL_PERIOD_DTS#
#DELIMITER#,#
#DATEFORMAT#YYYY-MM-DD#
#REJECTLIMIT#1#
#RECORDDELIMITER#\n#
#IGNOREBLANKLINES#false#
#SKIPHEADERS#1#
#TRIMSPACES#rtrim#
#TRUNCATECOL#false#
#COLUMNLIST#
MCAL_CAL_ID
MCAL_PERIOD_TYPE
MCAL_PERIOD_NAME
MCAL_PERIOD
MCAL_PERIOD_ST_DT
MCAL_PERIOD_END_DT
MCAL_QTR
MCAL_YEAR
MCAL_QTR_START_DT
MCAL_QTR_END_DT
MCAL_YEAR_START_DT
MCAL_YEAR_END_DT

The file must be UNIX-formatted and have an end-of-line character on every line, including the last one; as shown above, the final EOL may appear as an empty new line in a text editor. The #TABLE# parameter is required: it indicates the name of the database staging table updated by the file. The #COLUMNLIST# tag is also required: it determines which columns you are using in your .dat or .csv file. The list of columns follows the #COLUMNLIST# tag and must match the order of fields in the file from left to right, which must also align with the published file specifications. Most of the other parameters are optional and their rows can be excluded from the context file; however, any omitted parameter falls back to a system default that may not align with your file format.

Note:

Both RI and AI Foundation can use these context files to determine the format of incoming data.

The server maintains a copy of all the context files used, so you do not need to send a context file every time. If no context files are found, the platform uses the last known configuration.

For additional format options, the available values are taken from the DBMS_CLOUD package format options in Autonomous Data Warehouse (ADW).
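
As a rough sketch, a context file with the required Unix line endings could be generated as follows in Python (the helper function and the trimmed parameter and column lists are illustrative, not part of the platform):

import io

def write_ctx(base_filename, table, columns, delimiter=",", date_format="YYYY-MM-DD"):
    """Write <base_filename>.ctx with a Unix EOL on every line, including the last."""
    lines = [
        f"#TABLE#{table}#",
        f"#DELIMITER#{delimiter}#",
        f"#DATEFORMAT#{date_format}#",
        "#SKIPHEADERS#1#",
        "#COLUMNLIST#",
        *columns,
    ]
    # newline="\n" forces Unix line endings regardless of the operating system
    with io.open(base_filename + ".ctx", "w", encoding="utf-8", newline="\n") as f:
        f.write("\n".join(lines) + "\n")  # trailing EOL on the final line

write_ctx("CALENDAR.csv", "W_MCAL_PERIOD_DTS",
          ["MCAL_CAL_ID", "MCAL_PERIOD_TYPE", "MCAL_PERIOD_NAME"])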

Application-Specific Data Formats

Each application within the Retail Analytics and Planning may require data to be provided using specific rules and data formats, which can differ from those used in the common platform files. This section describes the use-cases for alternate data formats and lays out the basic rules that must be followed.

Retail Insights

Retail Insights has a large number of legacy interfaces that do not follow the shared platform data formats. These interfaces are populated with files named after their target database table with a file extension of .dat, such as W_PRODUCT_DS.dat. All files ending with a .dat extension are pipe-delimited files (using the | symbol as the column separator). These files also have a Unix line-ending character by default, although the line-ending character can be configured to be a different value, if needed. These files may be created by a legacy Merchandising (RMS) extract process or may be produced through existing integrations to an older version of RI or AI Foundation.

Table 8-2 Retail Insights Legacy File Formatting

Datatype | Format | Example | Explanation
Number | 0.00 | 340911.10 | Unformatted numbers with periods for decimal places. Commas or other symbols cannot be used within the numerical values.
Character | abc | Item #4561 | Any alphanumeric string is provided as-is, except that it must NOT contain pipe characters or line-ending characters.
Date | YYYY-MM-DD;00:00:00 | 2020-05-09;00:00:00 | Dates without timestamps must still use a timestamp format, but the time must be hard-coded to 00:00:00. Date fields (such as DAY_DT columns) must NOT have a non-zero time, or they will not load correctly.
Timestamp | YYYY-MM-DD;HH:MM:SS | 2020-05-09;09:35:19 | Use full date-and-time formatting ONLY when a full timestamp is expected on the column. This is not commonly used and is noted in the interface specifications where supported.

If you are implementing Retail Insights as one of your modules and your environment was originally installed with version 19 or earlier of RI, you may need to provide some files in this data format, in addition to the foundation files, which use the CSV format. This file format is also used when integrating with legacy solutions such as the Retail Merchandising System (RMS) through the Retail Data Extractor (RDE).

Example data from the file W_RTL_PLAN1_PROD1_LC1_T1_FS.dat:

70|-1|13|-1|2019-05-04;00:00:00|RETAIL|0|1118.82|1|70~13~2019-05-04;00:00:00~0
70|-1|13|-1|2019-05-11;00:00:00|RETAIL|0|476.09|1|70~13~2019-05-11;00:00:00~0
70|-1|13|-1|2019-05-18;00:00:00|RETAIL|0|296.62|1|70~13~2019-05-18;00:00:00~0
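
As a rough sketch, a legacy .dat row could be assembled as follows in Python, assuming the field order of the target table is already known from the interface specifications (the values are a truncated subset of the sample above):

from datetime import date

def legacy_date(d: date) -> str:
    # Legacy RI dates use YYYY-MM-DD;00:00:00 with a hard-coded zero time
    return d.strftime("%Y-%m-%d") + ";00:00:00"

fields = ["70", "-1", "13", "-1", legacy_date(date(2019, 5, 4)),
          "RETAIL", "0", "1118.82", "1"]
# Pipe-delimited with a Unix line ending; values must not contain | or newlines
row = "|".join(fields) + "\n"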
Retail AI Foundation Cloud Services

Modules within the AI Foundation Cloud Services leverage the same Context (CTX) file concepts as described in the common foundation file formats. You may control the structure and contents of AI Foundation files using the parameters in the context files. The full list of interfaces used by AI Foundation modules is included in the Interfaces Guide.

Planning Platform

Planning solutions using PDS (Planning Data Schema), such as Merchandise Financial Planning, have two main types of files:

  •  Hierarchy/Dimension Files – Foundation data for the hierarchies/dimensions.

  •  Measure/Fact Files – Fact data for the loadable metrics/measures.

When loading directly to Planning applications, both types of files must be in CSV format and must contain headers. The headers contain the dimension names for Hierarchy/Dimension Files and the fact names for Measure/Fact Files.

Hierarchy/Dimension Files use the naming convention <Hierarchy Name>.hdr.csv.dat. Measure/Fact Files can have any meaningful fact-grouping name, with one of the allowed extensions: .ovr, .rpl, or .inc.

  •  The OVR extension is used for override files.

  •  The RPL extension is used to delete and replace position-based data sets.

  •  The INC extension is used for incremental files that increment positional data.

If using the common foundation CSV files, most of the data can be interfaced using those shared integrations. However, certain files (such as the VAT Hierarchy) must be loaded directly to Planning; they do not come from RI at this time. Refer to the application-specific Planning Implementation Guides for more details about the list of files that are not included in foundation integrations.

Dimension Files

A dimension is a collection of descriptive elements, attributes, or hierarchical structures that provide context to your business data. Dimensions tell the platform what your business looks like and how it operates. They describe the factual data (such as sales transactions) and provide means for aggregation and summarization throughout the platform. Dimensions follow a strict set of business rules and formatting requirements that must be followed when generating the files.

There are certain common rules that apply across all of the dimension files and must be followed without exception. Failure to adhere to these rules may result in failed data loads or incorrectly structured datasets in the platform. A validation sketch for some of these rules follows the list below.

  • All dimension files must be provided as full snapshots of the source data at all times, unless you change the configuration of a specific dimension to be IS_INCREMENTAL=Y where incremental loads are supported. Incremental dimension loading should only be done once nightly/weekly batch processing has started. Initial/history dimension loads should always be full snapshots.

  • Hierarchy levels must follow a strict tree structure, where each parent has a 1-to-N relationship with the child elements below it. You cannot have the same child-level identifier repeat across more than one parent level, with the exception of Class/Subclass levels (which may repeat on the ID columns but must be unique on the UID columns). For example, Department 12 can only exist under Division 1; it cannot also exist under Division 2.

  • Hierarchy files (product, organization, calendar) must have a value in all non-nullable fields for all rows and must fill in all the required hierarchy levels without exception. For example, even if your non-Oracle product data only has 4 hierarchy levels, you must provide the complete 7-level product hierarchy to the platform. Fill in the upper levels of the hierarchy with values to make up the difference, such as having the division and group levels both be a single, hard-coded value.

  • Any time you are providing a key identifier of an entity (such as a supplier ID, channel ID, brand ID, and so on) you should fill in the values on all rows of the data file, using a dummy value for rows that don’t have that entity. For example, for items that don’t have a brand, you can assign them to a generic “No Brand” value to support filtering and reporting on these records throughout the platform. You may find it easier to identify the “No Brand” group of products when working with CDTs in the AI Foundation Cloud Services or when creating dashboards in RI, compared to leaving the values empty in the file.

  • Any hierarchy-level ID (department ID, region ID, and so on) or dimensional ID value (brand name, supplier ID, channel ID, store format ID, and so on) intended for Planning applications must not have spaces or special characters on any field, or it will be rejected by the PDS load. ID columns to be used in planning should use a combination of numbers, letters, and underscores only.

  • Any change to hierarchy levels after the first dimension is loaded will be treated as a reclassification and will have certain internal processes and data changes triggered as a result. If possible, avoid loading hierarchy changes to levels above Item/Location during the historical load process. If you need to load new hierarchies during the history loads, make sure to advance the business date in RI using the specified jobs and date parameters, do NOT load altered hierarchies on top of the same business date as previous loads.

  • All fields designated as flags (having FLG or FLAG in the field name) must have a Y or N value. Filters and analytics within the system will generally assume Y/N is used and not function properly if other values (like 0/1) are provided.

  • Retail Insights requires that all hierarchy identifiers above item/location level MUST be numerical. The reporting layer is designed around having numerical identifiers in hierarchies, and no data will show in reports if that rule is not followed. If you are not implementing Retail Insights, then alphanumeric hierarchy IDs can be used, though this is not preferred.
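
The following is a minimal validation sketch for a few of the rules above, assuming the product file has been read into memory as a list of dicts keyed by the PRODUCT.csv column names:

def validate_dimension_rows(rows):
    """Spot-check the strict-tree and Y/N flag rules on product rows."""
    parent_of = {}
    errors = []
    for r in rows:
        # Strict tree: a department may only exist under one division
        dept, division = r["LVL6_PRODCAT_ID"], r["LVL8_PRODCAT_ID"]
        if parent_of.setdefault(dept, division) != division:
            errors.append(f"Department {dept} repeats under multiple divisions")
        # Flag fields must contain Y or N, never 0/1 or blanks
        for flag in ("PACK_FLG", "INVENTORIED_FLG", "SELLABLE_FLG"):
            if r.get(flag) not in ("Y", "N"):
                errors.append(f"Item {r['ITEM']}: {flag} must be Y or N")
    return errors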

Product File

The product file is named PRODUCT.csv, and it contains most of the identifying information about the merchandise you sell and the services you provide. The file structure follows certain rules based on the Retail Merchandising Foundation Cloud Services (RMFCS) data model, as that is the paradigm for retail foundation data that we are following across all RAP foundation files.

The columns below are the minimum required data elements, but the file supports many more optional fields, as listed in the Interfaces Guide. Optional fields tend to be used as reporting attributes in RI and are nullable descriptive fields. Optional fields designated for use in an AI Foundation or Planning module are also nullable, but should generally be populated with non-null values to provide more complete data to those modules.

Table 8-3 Product File Required Fields

Column Header Usage

ITEM

Product number which uniquely identifies the record. Could be any of these records:

  • a sub-transaction level item (such as a UPC) which has a SKU as a parent and Style as a grandparent

  • a transaction-level SKU (with or without parents)

  • a style or "level 1" item which is above transaction level

Style/colors are NOT considered items and do not need to be provided as separate records.

ITEM_PARENT

Parent item associated with this record when the item level is 2 or 3. If you are not providing level 2 or 3 item records, then you may set this value to -1 on all rows as no items will have parents.

REQUIRED when providing multiple levels of items to establish the parent/child relationships.

ITEM_GRANDPARENT

Grandparent item associated with this record when the item level is 3. If you are not providing level 3 item records, then you may set this value to -1 on all rows as no items will have grandparents.

REQUIRED when providing multiple levels of items to establish the parent/child relationships.

ITEM_LEVEL

Item Level (1, 2, or 3) of this record from the source system. Used to determine what level of item is being provided to the target systems. Item level 2 should have a parent item, and item level 3 should provide both parent and grandparent.

Typical fashion item levels are:

  •  Level 1 – Style

  •  Level 2 – SKU (transaction level)

  •  Level 3 – UPC/EAN/barcode

TRAN_LEVEL

Transaction level (1 or 2) of the item from the source system. Identifies which level is used as the transaction level in RI and AI Foundation.

PACK_FLG

Pack flag (where N = regular item, Y = pack item). Defaults to N if not provided.

REQUIRED column if the retailer has packs, optional otherwise. If pack items are going to be included, then also note that additional interfaces SALES_PACK.csv and PROD_PACK.csv become required.

DIFF_AGGREGATE

Combined differentiator value used to define the diff aggregate level (between item levels 1 and 2 for a multi-level item). For example, for a fashion item, this will be the Color. Specify this on the transaction-level item records. This is used to dynamically create the planning item-parent (SKUP) level, so it must follow RPAS format rules (a combination of numbers, letters, and underscores only; no spaces or other characters).

LVL4_PRODCAT_ID

Default level for Subclass, which is the first level above item in the hierarchy structure. Sometimes referred to as segment or subsegment. All items of any type are mapped to a subclass as their first level. Parent items are not to be treated as hierarchy levels in the file.

LVL4_PRODCAT_UID

Unique identifier of the Subclass. In many merchandising systems the subclass is not unique on its own, so a separate, unique key value must be provided in this case.

LVL5_PRODCAT_ID

Default level for Class (sometimes referred to as Subcategory).

LVL5_PRODCAT_UID

Unique identifier of the Class. In many merchandising systems the class is not unique on its own, so a separate unique key value must be provided in this case.

LVL6_PRODCAT_ID

Default Level for Department (also referred to as Category).

LVL7_PRODCAT_ID

Default Level for Group.

LVL8_PRODCAT_ID

Default Level for Division.

TOP_PRODCAT_ID

Default Level for Company. Only one company is supported at this time; you may not have 2+ companies in the same dataset. Typically, this is hard-coded to a value of 1 for the company ID.

ITEM_DESC

Product Name or primary item description. When you are providing multiple levels of items, this may contain the style name, SKU name, or sub-transaction item name (for example, UPC description).

LVL4_PRODCAT_DESC

Default Level for Subclass Description.

LVL5_PRODCAT_DESC

Default Level for Class Description.

LVL6_PRODCAT_DESC

Default Level for Department Description.

LVL7_PRODCAT_DESC

Default Level for Group Description.

LVL8_PRODCAT_DESC

Default Level for Division Description.

TOP_PRODCAT_DESC

Default Level for Company Description.

INVENTORIED_FLG

Indicates whether the item carries stock on hand. Data sent to Planning apps is generally only for inventoried items (Y), but you may have non-inventoried items loaded for other purposes (N). The flags are read from the SKU-level item records; values on the style or UPC-level item records can be defaulted to any value, as they are not currently used.

SELLABLE_FLG

Indicates whether the item is sellable to customers. Data sent to Planning apps is only for sellable items (Y), but you may have non-sellable items loaded for other purposes (N). The flags are read from the SKU-level item records; values on the style or UPC-level item records can be defaulted to any value, as they are not currently used.

The product hierarchy fields use generic level names to support non-traditional hierarchy structures (for example, your first hierarchy level may not be called Subclass, but you are still loading it into the same position in the file). Other file columns such as LVL1 to LVL3 exist in the interface but are not yet used in any module of the platform.

Note:

Multi-level items are not always required and depend on your use-cases. For example, the lowest level (ITEM_LEVEL=3) for sub-transaction items is only used in Retail Insights for reporting on UPC or barcode level attribute values. Most implementations will only have ITEM_LEVEL=1 and ITEM_LEVEL=2 records. If you are a non-fashion retailer you may only have a single item level (for SKUs) and the other levels could be ignored. The reason for having different records for each item level is to allow for different attributes at each level, which can be very important in Retail Insights analytics. You may also need to provide multiple item levels for optimizing or planning data at a Style or Style/Color level in the non-RI modules. When providing multiple item level records, note that the item IDs must be unique across all levels and records.

Example data for the PRODUCT.csv file columns above, including all 3 supported item levels (style, SKU, and UPC):

ITEM,ITEM_PARENT,ITEM_GRANDPARENT,ITEM_LEVEL,TRAN_LEVEL,PACK_FLG,DIFF_AGGREGATE,LVL4_PRODCAT_ID,LVL4_PRODCAT_UID,LVL5_PRODCAT_ID,LVL5_PRODCAT_UID,LVL6_PRODCAT_ID,LVL7_PRODCAT_ID,LVL8_PRODCAT_ID,TOP_LVL_PRODCAT_ID,ITEM_DESC,LVL4_PRODCAT_DESC,LVL5_PRODCAT_DESC,LVL6_PRODCAT_DESC,LVL7_PRODCAT_DESC,LVL8_PRODCAT_DESC,TOP_LVL_PRODCAT_DESC,INVENTORIED_FLG,SELLABLE_FLG
190085210200,-1,-1,1,2,N,,8,9001,3,910,3,2,1,1,2IN1 SHORTS,Shorts,Active Apparel,Women's Activewear,Activewear,Apparel,Retailer Ltd,Y,Y
190085205725,190085210200,-1,2,2,N,BLK,8,9001,3,910,3,2,1,1,2IN1 SHORTS:BLACK:LARGE,Shorts,Active Apparel,Women's Activewear,Activewear,Apparel,Retailer Ltd,Y,Y
190085205923,190085210200,-1,2,2,N,DG,8,9001,3,910,3,2,1,1,2IN1 SHORTS:DARK GREY:LARGE,Shorts,Active Apparel,Women's Activewear,Activewear,Apparel,Retailer Ltd,Y,Y
1190085205725,190085205725,190085210200,3,2,N,,8,9001,3,910,3,2,1,1,2IN1 SHORTS:BLACK:LARGE:BC,Shorts,Active Apparel,Women's Activewear,Activewear,Apparel,Retailer Ltd,Y,Y
1190085205923,190085205923,190085210200,3,2,N,,8,9001,3,910,3,2,1,1,2IN1 SHORTS:DARK GREY:LARGE:BC,Shorts,Active Apparel,Women's Activewear,Activewear,Apparel,Retailer Ltd,Y,Y

This example and the field descriptions covered in this section all follow the standard Merchandising Foundation (RMFCS) structure for product data, and it is strongly recommended that you use this format for RAP. If you are a legacy Planning customer or have specific needs for extended hierarchies, the preferred approach is to convert your non-RMS hierarchy structure to a standard RMS-like foundation format (a sketch of the conversion follows the list below). This conversion involves:

  • Provide only the SKUs and Styles as separate item records (dropping the style/color level from the hierarchy). The Style will be the ITEM_PARENT value on the SKU records and ITEM_GRANDPARENT will always be -1.

  • Populate the field DIFF_AGGREGATE at the SKU level with the differentiator previously used in the style/color level. For example, a legacy style/color item ID of S1000358:BLUE will instead create S1000358 as the ITEM for the style-level record and the ITEM_PARENT in the SKU record. The value BLUE is written in the DIFF_AGGREGATE field in the SKU-level record (DIFF_AGGREGATE can be set to -1 or left null on style level records).

  • When constructing the extended hierarchies in Planning and AI Foundation, the styles and diff aggregate values are concatenated together to dynamically create the style/color level of the hierarchy where needed.
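
As a rough sketch of this conversion (the legacy ID format with a colon separator is an assumption for the example):

def split_style_color(style_color_id: str):
    """Split a legacy style/color ID such as 'S1000358:BLUE'."""
    style, diff = style_color_id.split(":", 1)
    return style, diff

style, color = split_style_color("S1000358:BLUE")
style_row = {"ITEM": style, "ITEM_PARENT": "-1", "ITEM_LEVEL": 1,
             "DIFF_AGGREGATE": ""}       # style record; diff aggregate left null
sku_row = {"ITEM": "1234567", "ITEM_PARENT": style, "ITEM_LEVEL": 2,
           "DIFF_AGGREGATE": color}      # SKU record carries the color diff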

Following this approach for your product hierarchy ensures you are aligned with the majority of Oracle Retail applications and will be able to take up additional retail applications in the future without restructuring your product data again.

Other fields not shown here are optional from a data load perspective but may be used by one or more applications on the platform, so it is best to consider all fields on the interface and populate as much data as you can. For example, supplier information is a requirement for Inventory Optimization, and brand information is often used in Clustering or Demand Transference. Also note that some fields come in pairs and must be provided together or not at all. This includes:

  •  Brand name and description

  •  Supplier ID and description

Description fields could be set to the same value as the identifier if no other value is known or used, but you must include both fields with non-null values when you want to provide the data.

Product Alternates

You may also use the file PRODUCT_ALT.csv to load additional attributes and hierarchy levels specifically for use in Planning applications. The file data is always at item level and may have up to 30 flexible fields for data. These columns exist in the PRODUCT.csv file if you are a non-RMFCS customer, so this separate file would be redundant. If you are using RMFCS, then this file provides a way to send extra data to Planning that does not exist in RMFCS.

When using flex fields as alternate hierarchy levels, there are some rules you will need to follow:

  • All hierarchies added this way must have an ID and Description pair as two separate columns

  • The ID column for an alternate hierarchy must ONLY contain numbers; no other characters are permitted

Numerical ID fields are required for integration purposes. When a plan is generated in MFP or AP using an alternate hierarchy, and you wish to send that plan data to AIF for in-season forecasting, the alternate hierarchy ID used must be a number for the integration to work. If your alternate hierarchy level will not be used as the base intersection of a plan, then it does not need to be limited to numerical IDs (although it is still recommended to do so). This requirement is the same for all hierarchy levels when Retail Insights is used, as RI can only accept numerical hierarchy IDs for all levels (for both base levels and alternates).

For example, you might populate FLEX1_CHAR_VALUE with numerical IDs for an alternate level named “Subsegment”. You will put the descriptions into FLEX2_CHAR_VALUE. These values can be mapped into PDS by altering the interface.cfg file, and the values may be used to define plans or targets in MFP. When you export your plans for AIF, they are written into integration tables such as MFP_PLAN1_EXP using the numerical identifiers from FLEX1_CHAR_VALUE as the plan level. This is further integrated to RI tables like W_RTL_PLAN1_PROD1_LC1_T1_FS (columns ORG_DH_NUM and PROD_DH_NUM for location/product IDs respectively). This is where numerical IDs become required for these interfaces to function; they will not load the data if the IDs are non-numerical. Once loaded into W_RTL_PLAN1_PROD1_LC1_T1_F and similar tables, AIF reads the plan data to feed in-season forecast generation.

Re-Using Product Identifiers

It may happen over time that the same product keys (such as SKU numbers) will be re-used to represent brand new items. This scenario is only supported from RI version 23.1.102.0 or greater and must follow a specific flow of data from the source system. There are two parameters in C_ODI_PARAM_VW that must be updated to enable this functionality (RI_ITEM_REUSE_IND and RI_ITEM_REUSE_AFTER_DAYS). Set RI_ITEM_REUSE_IND to Y and set RI_ITEM_REUSE_AFTER_DAYS to some number greater than 0. The combination of these parameters will alter the product hierarchy load in the following ways:

  1. Once enabled, if an item is deleted (when using incremental loads) or stops appearing in the nightly product files (when using full snapshot loads), it will NOT be closed; it will remain as current_flg=Y for the number of days specified in the configuration.

  2. If the item re-appears in the product file the next night, then the existing item record remains open with current_flg=Y and it will be as if it was never dropped or deleted.

  3. If the item re-appears in the product file only after the set number of days has elapsed, then the old version of the product is both closed and archived (example below). The incoming item record is inserted as a new item with no history.

Here is an example of re-using an item:

  1. Item number 1001 is created as a new item in a source system, such as RMFCS.

  2. Item exists for a period of time while accumulating fact data such as sales and inventory.

  3. Item becomes inactive and is eventually deleted from the source system, which marks it inactive in RAP after the set number of days for re-use has passed.

  4. After another period of time (such as 1 month) the same item number 1001 is added back in the source system, representing a completely new item with different attributes and hierarchies. The window between the old and new item is configured in C_ODI_PARAM_VW as mentioned in the Setup and Configuration chapter.

  5. When item 1001 comes to RAP again, the old version of this item will be archived using a value appended to the code (for example, 1001_230101) across all product tables in RI (the item data would already have been dropped from Planning when the re-use number of days had elapsed). Item 1001 is then inserted as a new item not associated in any way with the prior instance of that ID. The history that already exists for item 1001 is now associated with the archived item 1001_230101, and new fact data will be associated only with the new definition of item 1001 going forward.

The same process flow applies both when you are creating the PRODUCT.csv file from a non-Oracle source and when you use RMFCS to provide the data.

Organization File

The organization file will contain most of the identifying information about the locations where you sell or store merchandise, including physical locations (such as a brick & mortar store) and virtual locations (such as a web store or virtual warehouse entity). The file structure follows certain rules based on the Retail Merchandising Foundation Cloud Services (RMFCS) data model, as that is the paradigm for retail foundation data that we are following across all RAP foundation files. The columns below are the minimum required data elements, but the file supports many more optional fields, as listed in the Interfaces Guide.

Table 8-4 Organization File Required Fields

Column Header Usage

ORG_NUM

The external identifier for a location, including stores, warehouses, and partner locations. This value MUST be a number if you will use Retail Insights. RI cannot use non-numeric organization IDs.

ORG_TYPE_CODE

The type code of the location. It must be one of S, W, or E, representing a Store, Warehouse, or External location.

CURR_CODE

This is the 3-character base currency code of the organizational entity, such as USD or CAD.

ORG_HIER10_NUM

Default Level for District, which is the first level above Location in the hierarchy. Hierarchy values MUST be a number if you will use Retail Insights. RI cannot use non-numeric hierarchy IDs.

ORG_HIER11_NUM

Default Level for Region.

ORG_HIER12_NUM

Default Level for Area. Also referred to as the Channel level in some Planning applications.

ORG_HIER13_NUM

Default Level for Chain.

ORG_TOP_NUM

Default Level for Company.

Only one company is supported at this time. You may not have 2+ companies in the same instance.

ORG_DESC

Short name or short description of the location.

ORG_SECONDARY_DESC

Full name or long description of the location.

ORG_HIER10_DESC

Default Level for District Description.

ORG_HIER11_DESC

Default Level for Region Description.

ORG_HIER12_DESC

Default Level for Area Description. Also referred to as the Channel level in some Planning applications.

ORG_HIER13_DESC

Default Level for Chain Description.

ORG_TOP_DESC

Default Level for Company Description.

PHYS_WH_ID

The physical warehouse ID linked to a virtual warehouse. Must be specified for warehouses, can be null otherwise. A physical warehouse record can have its own ID as the value here. A virtual warehouse should have the linked physical warehouse ID that contains the virtual location.

VIRTUAL_WH_FLG

Indicates whether the warehouse record is a physical or virtual WH. Planning GA solutions only use virtual WHs, so the flag must be Y to send the WH to Planning. Physical warehouse records (VIRTUAL_WH_FLG=N) are not used by planning applications but are used by AI Foundation (such as for Inventory Optimization).

The organization hierarchy fields use generic level names to support non-traditional hierarchy levels (for example, your first hierarchy level may not be called District, but you are still loading it into the position in the file that is used for Districts). Other levels, such as 1 to 9, have columns in the interface but are not yet used in any module of the platform.

Warehouses get special handling both in the input interface load and throughout the RAP applications. Warehouses are not considered a part of the organization hierarchy structure. While you are required to put some value in the hierarchy level fields for warehouses (because the columns are not nullable) those values are not currently used. Instead, the values will be discarded and the warehouses are loaded with no parent levels in the data warehouse tables. You should provide a unique reserved value like 1 or 9999 on all hierarchy level numbers between location and company for warehouses, just to ensure the data is loaded without violating any multi-parentage rules. When exporting the warehouse locations to Planning applications, each warehouse ID is assigned its own name and number for each parent level, prefixed with WH to make the level IDs distinct from any store hierarchy level. The warehouses must then be mapped to channels from the MFP user interface before you can use their data.
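
A minimal sketch of the warehouse-specific fields, assuming one virtual warehouse inside one physical warehouse (all IDs are illustrative; only a subset of columns is shown):

# Virtual WH: sent to Planning (VIRTUAL_WH_FLG=Y), linked to its physical WH
virtual_wh = {
    "ORG_NUM": "9002", "ORG_TYPE_CODE": "W", "CURR_CODE": "USD",
    "PHYS_WH_ID": "9001", "VIRTUAL_WH_FLG": "Y",
    # Hierarchy levels are required but discarded for warehouses; a single
    # reserved value avoids violating the multi-parentage rules
    "ORG_HIER10_NUM": "9999", "ORG_HIER11_NUM": "9999",
    "ORG_HIER12_NUM": "9999", "ORG_HIER13_NUM": "9999", "ORG_TOP_NUM": "1",
}
# Physical WH: used by AI Foundation but not sent to Planning; its
# PHYS_WH_ID is its own ID
physical_wh = dict(virtual_wh, ORG_NUM="9001", VIRTUAL_WH_FLG="N")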

Example data for the ORGANIZATION.csv file columns above as well as some optional fields available on the interface:

ORG_NUM,ORG_TYPE_CODE,CURR_CODE,STATE_PROV_NAME,COUNTRY_REGION_NAME,ORG_HIER10_NUM,ORG_HIER11_NUM,ORG_HIER12_NUM,ORG_HIER13_NUM,ORG_TOP_NUM,ORG_DESC,ORG_SECONDARY_DESC,ORG_HIER10_DESC,ORG_HIER11_DESC,ORG_HIER12_DESC,ORG_HIER13_DESC,ORG_TOP_DESC,CHANNEL_ID,CHANNEL_NAME,PHYS_WH_ID,STOCKHOLDING_FLG,STORE_FORMAT_DESC,STORE_FORMAT_ID,STORE_TYPE,TRANSFER_ZONE_ID,TRANSFER_ZONE_DESC,VIRTUAL_WH_FLG,STORE_CLASS_TYPE,STORE_CLASS_DESC,WH_DELIVERY_POLICY,WH_REPL_IND,DUNS_NUMBER,STORE_REMODEL_DT,STORE_CLOSE_DT,INBOUND_HANDLING_DAYS,FLEX1_CHAR_VALUE,FLEX2_CHAR_VALUE,FLEX3_CHAR_VALUE,FLEX4_CHAR_VALUE,FLEX5_CHAR_VALUE,FLEX6_CHAR_VALUE,FLEX7_CHAR_VALUE,FLEX8_CHAR_VALUE,FLEX9_CHAR_VALUE,FLEX10_CHAR_VALUE
1000,S,USD,North Carolina,United States,1070,170,1,1,1,Charlotte,Charlotte,North Carolina,Mid-Atlantic,Brick & Mortar,US,Retailer Ltd,1,North America,,Y,Store,1,C,101,Zone 101,N,1,A,,,,,,,WH-1,Warehouse - US,1,Store Pick Up / Take With,3,Comp,6,Mixed Humid,1,Very Large
1001,S,USD,Georgia,United States,1023,400,1,1,1,Atlanta,Atlanta,Georgia,South Atlantic,Brick & Mortar,US,Retailer Ltd,1,North America,,Y,Kiosk,2,C,101,Zone 101,N,6,F,,,,,,,WH-1,Warehouse - US,2,Deliver/Install at Customer ,3,Comp,7,Hot Humid,3,Medium
1002,S,USD,Texas,United States,1104,230,1,1,1,Dallas,Dallas,Texas,Gulf States,Brick & Mortar,US,Retailer Ltd,1,North America,,Y,Store,1,C,101,Zone 101,N,6,F,,,,,,,WH-1,Warehouse - US,3,Home Delivery,3,Comp,4,Hot Dry,3,Medium

It is important that your organization hierarchy follow the standard rules laid out at the beginning of this chapter. All IDs must be unique (within their level) and IDs can never be re-used under multiple parents. All IDs must be numbers if you are using Retail Insights. The entire 6-level structure must be filled out, even if your source system doesn’t have that many levels.

Note:

You may duplicate a higher level down to lower levels if you need to fill it out to meet the data requirements.

Also note that some optional fields come in pairs and must be provided together or not at all. This includes:

  • Banner ID and description

  • Channel ID and description

  • Store format ID and description

Description fields can be set to the same value as the identifier if no other value is known or used, but you must include both fields with non-null values when you provide the data.

Organization Alternates

You may also use the file ORGANIZATION_ALT.csv to load additional attributes and hierarchy levels specifically for use in Planning applications. The file data is always at location level and may have up to 30 flexible fields for data. These columns exist on the ORGANIZATION.csv file if you are a non-RMFCS customer, so this separate file would be redundant. If you are using RMFCS, then this file provides a way to send extra data to Planning that does not exist in RMFCS.

When using flex fields as alternate hierarchy levels, there are some rules you will need to follow:

  • All hierarchies added this way must have an ID and Description pair as two separate columns

  • The ID column for an alternate hierarchy must ONLY contain numbers; no other characters are permitted

Numerical ID fields are required for integration purposes. When a plan is generated in MFP or AP using an alternate hierarchy, and you wish to send that plan data to AIF for in-season forecasting, the alternate hierarchy ID used must be a number for the integration to work. If your alternate hierarchy level will not be used as the base intersection of a plan, then it does not need to be limited to numerical IDs (although it is still recommended to do so). This requirement is the same for all hierarchy levels when Retail Insights is used, as RI can only accept numerical hierarchy IDs for all levels (both base levels and alternates).

For example, you might populate FLEX1_CHAR_VALUE with numerical IDs for an alternate level named “Subsegment”. You will put the descriptions into FLEX2_CHAR_VALUE. These values can be mapped into PDS by altering the interface.cfg file, and the values can be used to define plans or targets in MFP. When you export your plans for AIF, they are written into integration tables such as MFP_PLAN1_EXP using the numerical identifiers from FLEX1_CHAR_VALUE as the plan level. This is further integrated to RI tables like W_RTL_PLAN1_PROD1_LC1_T1_FS (columns ORG_DH_NUM and PROD_DH_NUM for location/product IDs respectively). This is where numerical IDs become required for these interfaces to function; they will not load the data if the IDs are non-numerical. Once loaded into W_RTL_PLAN1_PROD1_LC1_T1_F and similar tables, AIF reads the plan data to feed in-season forecast generation.

Calendar File

The calendar file contains your primary business or fiscal calendar, defined at the fiscal-period level of detail. The most common fiscal calendar used is a 4-5-4 National Retail Federation (NRF) calendar or a variation of it with different year-ending dates. This calendar defines the financial, analytical, or planning periods used by the business. It must contain some form of fiscal calendar, but if you are a business that operates solely on the Gregorian calendar, a default calendar file can be generated by an ad hoc batch program to initialize the system. However, if you are implementing a planning solution, you must use the Fiscal Calendar as your primary calendar, and only this calendar will be integrated from RI to Planning.

Table 8-5 Calendar File Required Fields

Column Header Usage

MCAL_CAL_ID

Identifies the accounting calendar. At this time a hard-coded value of Retail Calendar~41 is expected here; no other value should be used.

MCAL_PERIOD_TYPE

Identifies the accounting period type (4 or 5). This represents whether the fiscal period has 4 or 5 weeks.

MCAL_PERIOD_NAME

Name of the fiscal period, such as Period01, Period02, and so on. Do not include the year in this name, as the system will automatically add that during the load.

MCAL_PERIOD

Period number within a year, for a 4-5-4 calendar this should be between 1 and 12.

MCAL_PERIOD_ST_DT

Identifies the first date of the period in YYYYMMDD format (default format can be changed in CTX files).

MCAL_PERIOD_END_DT

Identifies the last date of the period in YYYYMMDD format (default format can be changed in CTX files).

MCAL_QTR

Identifies the quarter of the year to which this period belongs. Possible values are 1, 2, 3 and 4.

MCAL_YEAR

Identifies the fiscal year in YYYY format.

MCAL_QTR_START_DT

Identifies the start date of the quarter in YYYYMMDD format (default format can be changed in CTX files).

MCAL_QTR_END_DT

Identifies the end date of the quarter in YYYYMMDD format (default format can be changed in CTX files).

MCAL_YEAR_START_DT

Identifies the start date of the year in YYYYMMDD format (default format can be changed in CTX files).

MCAL_YEAR_END_DT

Identifies the end date of the year in YYYYMMDD format (default format can be changed in CTX files).

The hard-coded calendar ID is used to align with several internal tables that are designed to support multiple calendars but currently have only one in place, and that calendar uses the provided value of MCAL_CAL_ID above.

The fiscal calendar should have, at a minimum, a 5-year range (2 years in the past, the current fiscal year, and 2 years forward from that) but is usually much longer so that you do not need to update the file often. Most implementations should start with a 10-15 year fiscal calendar length. The calendar should start at least 1 full year before the planned beginning of your history files and extend at least 1 year beyond your expected business needs in all RAP modules.

Example data for the CALENDAR.csv file columns above:

MCAL_CAL_ID,MCAL_PERIOD_TYPE,MCAL_PERIOD_NAME,MCAL_PERIOD,MCAL_PERIOD_ST_DT,MCAL_PERIOD_END_DT,MCAL_QTR,MCAL_YEAR,MCAL_QTR_START_DT,MCAL_QTR_END_DT,MCAL_YEAR_START_DT,MCAL_YEAR_END_DT
Retail Calendar~41,4,Period01,1,20070204,20070303,1,2007,20070204,20070505,20070204,20080202
Retail Calendar~41,5,Period02,2,20070304,20070407,1,2007,20070204,20070505,20070204,20080202
Retail Calendar~41,4,Period03,3,20070408,20070505,1,2007,20070204,20070505,20070204,20080202
Retail Calendar~41,4,Period04,4,20070506,20070602,2,2007,20070506,20070804,20070204,20080202
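
As an illustration, the following sketch generates the period rows for a single 4-5-4 fiscal year from its start date (it ignores 53-week years and omits the quarter start/end dates for brevity; the start date used is taken from the sample above):

from datetime import date, timedelta

def generate_454_year(year_start: date, fiscal_year: int):
    """Yield one CALENDAR.csv row per fiscal period for one 4-5-4 year."""
    pattern = [4, 5, 4] * 4                  # 12 periods, 52 weeks
    year_end = year_start + timedelta(weeks=52, days=-1)
    period_start = year_start
    for i, weeks in enumerate(pattern, start=1):
        period_end = period_start + timedelta(weeks=weeks, days=-1)
        yield {
            "MCAL_CAL_ID": "Retail Calendar~41",
            "MCAL_PERIOD_TYPE": weeks,
            "MCAL_PERIOD_NAME": f"Period{i:02d}",
            "MCAL_PERIOD": i,
            "MCAL_PERIOD_ST_DT": period_start.strftime("%Y%m%d"),
            "MCAL_PERIOD_END_DT": period_end.strftime("%Y%m%d"),
            "MCAL_QTR": (i - 1) // 3 + 1,
            "MCAL_YEAR": fiscal_year,
            "MCAL_YEAR_START_DT": year_start.strftime("%Y%m%d"),
            "MCAL_YEAR_END_DT": year_end.strftime("%Y%m%d"),
        }
        period_start = period_end + timedelta(days=1)

rows = list(generate_454_year(date(2007, 2, 4), 2007))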

Exchange Rates File

The exchange rates file captures conversion rates between any two currency codes that may appear in your fact data. The standard practice for fact data is to load the source system values in the original currency and allow the platform to convert the amounts to the primary currency. This file facilitates that process and triggers bulk updates in nightly processing any time you wish to change your exchange rates for financial reporting purposes. Adding new rates to the file with an effective date equal to the batch date triggers a mass update to all positional facts, converting all the amounts to the new exchange rate even if the item/location did not otherwise change in the source system.

Note that you do not have to provide exchange rates data for the following scenarios:

  • You are loading your data already converted to primary currency

  • You only use a single currency for your entire business

  • You are only implementing an AI Foundation module and expect to perform those processes in the local currency amounts

Even when exchange rates are not required as a separate file, you must still populate the currency codes (DOC_CURR_CODE, LOC_CURR_CODE) in the fact data files with values. Review the scenarios below to understand how to set these values and provide the associated rates.

Scenario 1 - No Conversion

For this use-case, all data is in the desired currency before sending it to Oracle. You do not want the platform to convert your data from source currency to primary currency. All fact records must have LOC_CURR_CODE = DOC_CURR_CODE. For example, set both values to USD for sales in the U.S. and both values to CAD for sales in Canada that you pre-converted. EXCH_RATE.csv data is not required or used for records having the same currency code on both columns.

Scenario 2 – Only One Currency

If your business only operates in one region and uses a single currency code, then you don’t need to provide exchange rate data. All fact records must have LOC_CURR_CODE = DOC_CURR_CODE. For example, set both values to USD on all rows if your primary operating currency is USD. EXCH_RATE.csv data is not required or used for records having the same currency code on both columns.

Scenario 3 – Multiple Currencies

When you do plan to provide data in multiple source currencies, you must also provide the exchange rates into and out of those currencies. Your fact data must have DOC_CURR_CODE set to the currency of the source system where the transaction was recorded (for example, a sale in Canada has a document currency of CAD). The value of LOC_CURR_CODE will be the same on all records and must be the primary operating currency of your business (if you operate mainly in the United States then it will be USD).

Exchange rates should be provided using the standard international rates (for example, USD > CAD may be 1.38), but the fact load performs lookups in reverse. Fact conversions are applied as a division process. For example, “transaction amount / exchange rate” is the formula to convert from document currency to primary currency; so when converting from CAD > USD, the system looks up the value for USD > CAD and divides by that number to get the final value.
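
For instance, a sale recorded in Canadian dollars is converted to USD by dividing by the USD > CAD rate (the rate value is the example above):

# Document currency CAD, primary currency USD; rate provided FROM USD TO CAD
sls_amt_lcl = 138.00              # transaction amount in CAD
usd_to_cad_rate = 1.38            # EXCHANGE_RATE row for USD > CAD
sls_amt_primary = sls_amt_lcl / usd_to_cad_rate   # 100.00 USD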

Table 8-6 Exchange Rate File Required Fields

Column Header Usage

START_DT

Contains the effective start date of the exchange rate. Set to the current business date to trigger new rate conversions.

END_DT

Contains the effective end date of the exchange rate. Default to 21000101 if the rate should be effective indefinitely.

EXCHANGE_RATE

Contains the exchange rate for the specified currency/type/effective date combination.

FROM_CURRENCY_CODE

Code of the currency to be exchanged.

TO_CURRENCY_CODE

Code of the currency to which a currency is exchanged.

Sample data for the EXCH_RATE.csv file columns above:

START_DT,END_DT,EXCHANGE_RATE,FROM_CURRENCY_CODE,TO_CURRENCY_CODE
20180514,21000101,0.8640055,CAD,USD
20180514,21000101,0.1233959,CNY,USD

The exchange rates data must also satisfy the following criteria if you are loading data for use in Retail Insights reporting:

  1. Rates must be provided in both directions for every combination of currencies that can occur in your dataset (for example, USD > CAD and CAD > USD).

  2. Dates must provide complete coverage of your entire timeframe in the dataset, both for historical and current data. The current effective records for all rates can use 2100-01-01 as the end date. Dates cannot overlap; only a single rate may be effective per day.

  3. Rates should not change more often than absolutely necessary based on the business requirements. If you are implementing RI with positional data, a rate change triggers a complete recalculation of the stock on hand cost/retail amounts for the entire business across all pre-calculated aggregate tables. When RI is not used for financial reporting you might only change the rates once each fiscal year, to maintain a single constant currency for analytical purposes.

Attributes Files

Product attributes are provided on two files: one file for the attribute-to-product mappings and another for attribute descriptions and codes. These files should be provided together to fully describe all the attributes being loaded into the system. The attribute descriptors file must be a full snapshot of all attribute types and values at all times. The product attribute mapping file should start as a full snapshot but can move to incremental (delta) load methods once nightly batches begin, if you can extract the information as deltas only.

Product attributes are a major component of the RI and AI Foundation modules and drive many analytical processes but are not required for some planning modules like MFP.

Table 8-7 Attribute File Required Fields

Column Header Usage

ATTR_VALUE_ID

Unique identifier for a user-defined attribute or product differentiator. This interface contains ALL values regardless of whether they are used on items yet or not. These values must also match the ATTR_ID on the other file.

ATTR_VALUE_DESC

Descriptive value for a user-defined attribute or product differentiator, such as Brown, Cotton, Size12, and so on.

ATTR_GROUP_ID

Unique identifier for a group of user-defined attributes, or the name/code for the differentiator group. Reserved attribute types like SIZE and COLOR must all have a single, hard-coded value associated with the group in this field (for example, all sizes must be in the SIZE group; do not specify group IDs for sizes_pants, sizes_shirts, and so on, as that distinction is handled separately). This group ID value must also match the ATTR_GRP_ID value on the second file.

ATTR_GROUP_DESC

Descriptive value for a group of user-defined attributes, such as Color Family or Web Exclusive Code, or the description of the differentiator type such as Color.

ATTR_TYPE_CODE

Indicates the type of UDA or differentiator. UDA types should be hard-coded as one of FF, LV, or DT, for Freeform, List of Values, or Date.

LV type attributes are fixed lists and write values to translation lookup tables, while FF and DT fields do not.

For non-UDA types, some diffs have special reserved codes for this field as well, which should be used when applicable, and include SIZE, FABRIC, SCENT, FLAVOR, STYLE, and COLOR. Other differentiators not using these codes should specify a hard-coded type of DIFF.

Sample data for the ATTR.csv file columns above:

ATTR_VALUE_ID,ATTR_VALUE_DESC,ATTR_GROUP_ID,ATTR_GROUP_DESC,ATTR_TYPE_CODE
13,No_Sugar_IN13,45008,UDA_ING_2018.01.16.01.00,FF
14,Zero_Carbs_IN14,45008,UDA_ING_2018.01.16.01.00,FF
3,Distressed,80008,Wash,LV
STEEL,Steel,METAL,Metals,DIFF
CHOC,Chocolate,FLAVOR,Flavor,FLAVOR
GRAY_43214,Gray,COLOR,Color,COLOR
32X32_9957,32X32,SIZE,Size,SIZE

Table 8-8 Product Attribute File Required Fields

Column Header Usage

ITEM

The item number associated with the specified attribute value.

ATTR_ID

Identifier for an attribute value, such as the UDA value ID or Diff ID which is mapped to the item on the record, or the ID for the item list this item belongs to. These values must also match the ATTR_VALUE_ID on the other file.

ATTR_GRP_TYPE

The attribute group type. This is a set of fixed values which must be selected from what RAP supports. Supported values are ITEMDIFF, ITEMUDA, ITEMLIST, COLOR, and PRODUCT_ATTRIBUTES. These codes determine the target columns for the data (for example, lists, diffs, and UDAs use different internal columns in the data model). PRODUCT_ATTRIBUTES type code encapsulates the other named differentiator types like Size, Fabric, and so on. COLOR has a special type code due to it being a common level between Style and SKU for fashion retailers, so it is handled separately.

ATTR_GRP_ID

Identifier for the attribute group containing the value on this record. Must match an ATTR_GROUP_ID in the other file. Varies by the ATTR_GRP_TYPE value used:

  • If ATTR_GRP_TYPE is one of ITEMDIFF, COLOR, or PRODUCT_ATTRIBUTES, specify the Diff Type ID or one of the reserved values like SIZE or COLOR.

  • If ATTR_GRP_TYPE is ITEMUDA, specify UDA Group ID.

  • If ATTR_GRP_TYPE is ITEMLIST, this field is not used, leave null.

DIFF_GRP_ID

Differentiator group used to assign the diff attribute on this item; for example the Size Group ID used when generating SKUs from a parent Style. Only SKUs will have diff groups associated with them. Only diffs will have groups, not UDAs or other record types. This is not the same as an attribute group, which is the overall grouping of attribute values across all items. This is used mainly for Size Profile Science.

Foreign key reference to DIFF_GROUP.csv.

DIFF_GRP_DESC

Descriptive value of the diff group mapped to this item/attribute record.

Sample data for the PROD_ATTR.csv file columns above:

ITEM,ATTR_ID,ATTR_GRP_TYPE,ATTR_GRP_ID,DIFF_GRP_ID,DIFF_GRP_DESC
91203747,13,ITEMUDA,45008,,
91203747,3,ITEMUDA,80008,,
190496585706,STEEL,ITEMDIFF,METAL,,
86323133004,GRAY_43214,COLOR,COLOR,,
190085302141,CHOC,PRODUCT_ATTRIBUTES,FLAVOR,,
345873291,32X32_9957,PRODUCT_ATTRIBUTES,SIZE,S13,Pant Sizes

Fact Files

Fact Data Key Columns

Fact data files, such as sales and inventory, share many common characteristics and standards that can be followed regardless of the specific file. The table below summarizes those key elements and their minimum requirements. You will generally always want to populate these fields where they exist on an interface file.

Table 8-9 Common Fact Data Fields

Column Header Usage

ITEM

Unique identifier of a transaction item. Must align with the product records in the PRODUCT.csv file where ITEM_LEVEL = TRAN_LEVEL.

ORG_NUM

Unique identifier of an organizational entity. Must align with the location records in the ORGANIZATION.csv file.

DAY_DT

Transaction date or business date for the fact data entry, formatted as YYYYMMDD. Must be a valid date within the range of periods in CALENDAR.csv.

RTL_TYPE_CODE

Retail types define a general category for the record that varies by interface. For Sales and Markdowns, it must be one of R/P/C, representing the regular/promo/clearance status of the transaction.

*_QTY

Fields ending with QTY represent the quantity or units of an item on the record, such as the units sold on a transaction line or the units of on-hand inventory.

*_AMT_LCL

Fields containing AMT_LCL represent the currency amount of a record in the local currency of the source system, such as sales retail amount in Canadian dollars from a Canada location, or the cost value of on-hand inventory at your U.S. warehouse.

DOC_CURR_CODE

The original document currency of the record in the source system. For example, a sale made in Canada may have a value of CAD in this field.

LOC_CURR_CODE

The local operating currency of your main office or headquarters. A company based in the United States would use USD in this field.

ETL_THREAD_VAL

If you are providing any data files ending in a .dat extension, they might contain an ETL_THREAD_VAL column. This column must be hard-coded to 1 on all rows without exception; it should not be null and should not be any value greater than 1. This is a legacy field that allowed multi-threading in older generations of the RAP architecture.

Nearly all fact files share a common intersection of an item, location, and date as specified above. Such files are expected to come into the platform on a nightly basis and contain that day’s transactions or business activity.

Most fact data also supports having currency amounts in their source currency, which is then automatically converted to your primary operating currency during the load process. There are several currency code and exchange rate columns on such interfaces, which should be populated if you need this functionality. The most important ones are shown in the list above; other optional columns for global currencies can be found in the Interfaces Guide. When you provide these fields, they must all be provided on every row of data; you cannot leave out any of the values or the data will not load properly.

Here are sample records for commonly used historical load files having a small set of fields populated. These fields are sufficient to see results in RI reporting and move the data to AI Foundation or MFP but may not satisfy all the functional requirements of those applications. Review the Interfaces Guide for complete details on required/optional columns on these interfaces.

SALES.csv:

ITEM,ORG_NUM,DAY_DT,MIN_NUM,RTL_TYPE_CODE,SLS_TRX_ID,PROMO_ID,PROMO_COMP_ID,CASHIER_ID,REGISTER_ID,SALES_PERSON_ID,CUSTOMER_NUM,SLS_QTY,SLS_AMT_LCL,SLS_PROFIT_AMT_LCL,RET_QTY,RET_AMT_LCL,RET_PROFIT_AMT_LCL,TRAN_TYPE,LOC_CURR_CODE,DOC_CURR_CODE
1235842,1029,20210228,0,R,202102281029,-1,-1,96,19,65,-1,173,1730,605.5,0,0,0,SALE,USD,USD
1235842,1029,20210307,0,R,202103071029,-1,-1,12,19,55,-1,167,1670,584.5,0,0,0,SALE,USD,USD
1235842,1029,20210314,0,R,202103141029,-1,-1,30,18,20,-1,181,1810,633.5,0,0,0,SALE,USD,USD

INVENTORY.csv:

ITEM,ORG_NUM,DAY_DT,CLEARANCE_FLG,INV_SOH_QTY,INV_SOH_COST_AMT_LCL,INV_SOH_RTL_AMT_LCL,INV_UNIT_RTL_AMT_LCL,INV_AVG_COST_AMT_LCL,INV_UNIT_COST_AMT_LCL,PURCH_TYPE_CODE,DOC_CURR_CODE,LOC_CURR_CODE
72939751,1001,20200208,N,0,0,0,104.63,0,48.52,0,USD,USD
73137693,1001,20200208,N,0,0,0,104.63,0,48.52,0,USD,USD
75539075,1001,20200208,N,0,0,0,101.73,0,47.44,0,USD,USD

PRICE.csv:

ITEM,ORG_NUM,DAY_DT,PRICE_CHANGE_TRAN_TYPE,SELLING_UOM,STANDARD_UNIT_RTL_AMT_LCL,SELLING_UNIT_RTL_AMT_LCL,BASE_COST_AMT_LCL,LOC_CURR_CODE,DOC_CURR_CODE
89833651,1004,20200208,0,EA,93.11,93.11,53.56,USD,USD
90710567,1004,20200208,0,EA,90.41,90.41,50.74,USD,USD
90846443,1004,20200208,0,EA,79.87,79.87,44.57,USD,USD

Fact Data Incremental Logic

Daily or weekly fact data files can be provided incrementally instead of as full snapshots, but the specific handling of incremental changes can be different for the various fact types. The table below summarizes the incremental update logic used on the core fact areas.

Facts Incremental Logic

Transactional (Sales, Receipts, Markdowns, Adjustments, RTVs, and so on)

Loading transaction data into RAP uses additive merge logic when new data comes into the tables. If the target intersection doesn’t exist, it will insert it. If the target intersection DOES exist, then it will merge the records by adding together the source and target fields. For example, an existing sales transaction that is revised will add together the Quantity and Amount fields from the source and target.

Note: When posting a partial revision, send zeros in fields that should not be adjusted.
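
As a hypothetical illustration of the additive merge, assume an existing sales line in the database and an incoming partial revision that adjusts only the quantity:

Existing record in target table: SLS_QTY = 10, SLS_AMT_LCL = 100.00
Incoming revision record: SLS_QTY = 2, SLS_AMT_LCL = 0
Result after additive merge: SLS_QTY = 12, SLS_AMT_LCL = 100.00

Because the amount field was sent as zero, it is unchanged by the merge; only the quantity is adjusted.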

Positional (Inventory, Purchase Order, Price, Cost, and so on)

Positional data loaded into RAP must always be for the current date — it cannot be back-posted — and will merge into the target tables with the incoming values (the new day’s position is a combination of existing data from yesterday merged with the incoming data). You must send a zero if a given position was moved to zero or dropped from the source system; otherwise it would continue to carry forward the last non-zero position in the database. Refer to the detailed sections later in this chapter for Inventory Position and Pricing examples.
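
The following hypothetical sequence illustrates the positional carry-forward behavior and why explicit zeros are required:

Day 1 file: INV_SOH_QTY = 50 → stored position is 50
Day 2 file: no record sent → position is carried forward as 50
Day 3 file: INV_SOH_QTY = 0 → position is updated to 0

If the Day 3 zero record is never sent, the system continues to report 50 units on hand indefinitely, even though the source system has no stock.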

Non-Transactional and Non-Positional Facts (Store Traffic, Flex Facts, History Planning Facts)

Some interfaces that are not related to any transactional or positional data elements, like the Store Traffic or Planning interfaces, use non-additive merge logic. When an existing intersection comes into the staging table, it is merged into the target table, overwriting the target values with the source values.
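
For example, assuming an existing store traffic record of 100 and an incoming record of 120 for the same intersection, the stored result after the merge is 120 (the source value replaces the target), not 220 as it would be under the additive logic used for transactional facts.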

Multi-Threading and Parallelism

Due to the high data volumes of most fact data (such as sales and inventory), it is necessary to process the data using multiple CPU threads on the database. In RAP’s second-generation architecture, multi-threading is handled automatically. You must not attempt to alter any threading parameters to force a specific thread count greater than 1. If you are providing any data files ending in a .dat extension, then it might contain an ETL_THREAD_VAL column. This column must be hard-coded to be 1 on all rows without exception; it should not be null and should not be any value greater than 1. Similarly, there are database parameters named LOC_NUM_OF_THREAD in the C_ODI_PARAM_VW table. These must be set to a value of 1 and should not be altered to any value greater than 1.

Sales Data Requirements

Sales data (SALES.csv) operates with the assumption that the source system for the data is an auditing system (like Oracle Sales Audit) or non-Oracle data warehouse system. It applies minimal transformations to the inputs and assumes all the needed cleansing and preparation of transaction data has happened outside of RI. Whether you are sourcing history data from one of those systems or directly from a POS or non-Oracle auditing application, there are some business rules that should be followed.

Requirement File Type Explanation

Sales Units

Historical and Ongoing

The values provided for unit quantities should represent the total transaction-line values for an item, split across the gross sales units and return units. In the case of an exchange, you could have both sales and return units on the same line, but most of the time only SLS or RET fields will have a value. These values will be subtracted from each other to form Net Sales Quantity metrics, so they should almost always be positive.

Sales Retail Amounts

Historical and Ongoing

The retail amounts on a sale or return represent the actual selling/return value, after all discounts are subtracted from the base price of the item. In the case of an exchange, you could have both sales and return units on the same line, but most of the time only SLS or RET fields will have a value. These values will be subtracted from each other to form Net Sales Amount metrics, so they should almost always be positive.

Sales Profit Amounts

Historical and Ongoing

Profit calculations must take into consideration the cost of the item at the time it was sold, and will vary based on the retailer's costing methods. The standard approach is to use the value for Weighted Average Cost (WAC) multiplied by the units sold/returned, and subtract that total cost value from the retail amount. An item that is sold and later returned may not have the same profit amounts, if the cost has changed between the two transactions or the return was not for the full price of the item. Most POS systems do not track item costs, so providing this data requires an external process to do the calculations for you.
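
Using the standard WAC approach, the profit calculation for a sale line can be expressed as follows (values are illustrative):

Sales Profit = (Units Sold * Selling Retail) – (Units Sold * WAC)
Sales Profit = (2 * 25.00) – (2 * 14.00) = $22.00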

Sales Taxes

Historical and Ongoing

Tax amounts generally represent Value Added Tax (VAT); however this column could be used to capture other tax amounts if loading directly from the POS or external audit system.

Employee Discounts

Historical and Ongoing

These columns are specifically for employee discounts when the employee purchases a product at the POS and gets a special discount (or has the discounted amount returned later on). These values are just the discount amount, meaning the reduction in value from the selling price.

Promotional Discounts

Historical and Ongoing

These values represent the total discount taken off the initial selling price for the line-item in the transaction. These values will almost always be populated for promotional sales. However, a regular or clearance sale could have a further discount applied (like a coupon), and that should also be captured here. These values are used to populate the Sales Promotion fact table for retail type “P” transactions, so make sure that any change in price related to a promotion is included in this discount amount so it can be copied into other tables for Promotion-level reporting.

Liabilities

Historical and Ongoing

Liabilities are sales that have not yet been charged to the customer, either due to layaway practices or customer orders that are posted as a liability transaction before they are fulfilled. Liabilities are posted with a positive value when incurred, and reversed with a negative value when they are converted into a regular sale or cancelled. Liabilities are a separate set of metrics in RI and do not interact with sales values, as it is expected that the liability will always result in a cancellation or a sale being posted at a later date.

Liability Cancels

Historical and Ongoing

Liabilities that are cancelled should first be reversed and then posted to these fields as a positive amount. A cancelled liability will have no impact on sales and has a separate set of metrics in RI. The retailer can use liability cancellation metrics to track the value of customer orders that were cancelled before being charged.

Retail Type

Historical and Ongoing

The retail type represents the category of sale as one of Regular, Promotion, or Clearance. We use the codes R/P/C to denote this value. The priority order when assigning a code is C > P > R, meaning that a clearance sale will always have a clearance type, even if it is also affected by a promotion. This matches the financial practices of RMFCS and Sales Audit, which treat all clearance sales as clearance activity on the stock ledger.

Transaction Reversals and Revisions

Historical and Ongoing

When using Sales Audit to audit sales, the export process will automatically handle reversing a transaction and posting revisions to it. Without that, you must create your own process to send reversals and revisions to RI matching the same data format. These two records come at the same time. The reversal is an exact opposite of the original transaction line (usually all negative values, unless the original value was negative); when added to RI's data, it zeroes out the original. The revision record should come next and contain the current actual values on the transaction (not the delta or difference in values).

Keep in mind that, depending on the method used for calculating sales cost/profit amounts, a reversal and revision may have different values from the original profit amount. This could result in a very small residual profit from the prior revision.
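
The following hypothetical SALES.csv records (column list abbreviated for readability) show a reversal and revision pair for a transaction originally sold as 10 units at 100.00:

ITEM,ORG_NUM,DAY_DT,RTL_TYPE_CODE,SLS_TRX_ID,SLS_QTY,SLS_AMT_LCL,SLS_PROFIT_AMT_LCL
1235842,1029,20210228,R,202102281029,10,100,35
1235842,1029,20210228,R,202102281029,-10,-100,-35
1235842,1029,20210228,R,202102281029,8,80,28

The first row is the original transaction, the second is the reversal that zeroes it out, and the third is the revision containing the new actual values. After the additive merge, the database holds 8 units, 80 in sales, and 28 in profit for this transaction line.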

Backposted Sales

Ongoing (Daily)

Sales can be backposted for days prior to the current business date. They will be loaded against their backposted transaction date and aggregated up into the existing sales data for that date. No transformations are done, even if the backposted sale is significantly older (for example, 1+ years ago). It will be inserted and aggregated using that past date.

Original Selling Locations

Historical and Ongoing

When including the original selling location of a return transaction, you must also make sure that it is a real location included in your Organization input data. Some source systems allow the manual entry of a return’s original selling location, so ensure that all such locations are included, or the records will be rejected by RI.

The columns you provide in the sales file will vary greatly depending on your application needs (for example, you may not need the sales profit columns if you don’t care about Sales Cost or Margin measures). The most commonly used columns are listed below with additional usage notes.

Column Header Usage

ITEM

Must be the transaction level item or SKU number for the sale and must have a record in the PRODUCT.csv file.

ORG_NUM

Must be the store or warehouse number that will get credit for the sale. Note that the location where the sale occurred is not always the location that should be credited for it (for example, a web order placed in-store can still be credited to a web location). Must have a record in ORGANIZATION.csv file.

DAY_DT

The date that the transaction occurred in the source system. Must be a valid date within the periods in the CALENDAR.csv file.

MIN_NUM

Hour and minute of the transaction in 24-hour, 4-digit format; for example, 11:23 PM would be 2323. Currently only used in Retail Insights reporting; default to 0 if not needed.

IT_SEQ_NUM

Sequence of a line in the transaction. Every line of a transaction must have its own sequence number. This facilitates uniqueness requirements where the same item could have multiple lines. Without a unique sequence number, the line would not be unique on the input file and the system would see them as duplicate records (duplicate sales lines without a sequence number will be ignored). If you are pre-summing your sales lines such that there will never be duplicate rows, then this column is not needed.
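
As a hypothetical example, the same item appearing on two lines of one transaction would be made unique by the sequence number (column list abbreviated):

ITEM,ORG_NUM,DAY_DT,SLS_TRX_ID,IT_SEQ_NUM,SLS_QTY,SLS_AMT_LCL
1235842,1029,20210228,202102281029,1,1,10.00
1235842,1029,20210228,202102281029,2,2,18.00

Without the IT_SEQ_NUM values, the two rows would not be unique on the input file and the second would be ignored as a duplicate.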

RTL_TYPE_CODE

This is the sales type using one of (R, P, C), where R = regular, P = promotion, and C = clearance. This is a critical piece of information for many AIF and Planning purposes. For example, in some AIF modules like SPO, you can choose to include or exclude sales by retail type in the calculations. In forecasting, you can generate different forecasts based on the retail type, which can feed into Planning measures that are split by reg/pro/clr designations.

SLS_TRX_ID

Unique identifier for a sales transaction. By default, it is expected sales data will come at the most granular level (transaction-line). If you are not able to provide true transaction-level data, you can specify some other unique value in this field. This value is part of the business key for the table, so you need to be able to reference the same keys over time (such as when revising or reversing existing transactions).

PROMO_ID

PROMO_COMP_ID

This two-part identifier maps to the Promotion and Offer associated with a transaction line. They are required if you wish to load sales by promotion for Retail Insights or certain AI Foundation modules such as Demand Transference. They are also required if you wish to perform promotion lift estimation as part of Retail Demand Forecasting, since the system needs to know the specific promotion linked to a set of sales transactions. When providing these values, also provide the PROMOTION.csv file.

CUSTOMER_NUM

CUSTOMER_TYPE

Customer identifier on a sales transaction, which is a critical piece of information for many AI Foundation modules, such as Customer Segmentation and Consumer Decision Trees. CUSTOMER_TYPE should not be used at this time; it serves no purpose in downstream modules. If you do include the column, hard-code it to a value of CUSTID on all rows; otherwise, CUSTOMER_NUM values will be ignored.

SLS_QTY

SLS_AMT_LCL

SLS_PROFIT_AMT_LCL

SLS_TAX_AMT_LCL

Represents the gross sales values for the transaction line (meaning before returns). Will almost always be positive, except in cases of negative profit amounts or if you are reversing a prior transaction line to zero out the amounts. Tax amount is for VAT and other special taxes outside the United States (should not be used for US sales tax).

RET_QTY

RET_AMT_LCL

RET_PROFIT_AMT_LCL

RET_TAX_AMT_LCL

Represents customer return transactions. Will almost always be positive, except in cases of negative profit amounts or if you are reversing a prior transaction line to zero out the amounts. Both gross sales and returns are positive because they are subtracted from each other to determine net sales. Tax amount is for VAT and other special taxes outside the United States (should not be used for US sales tax).

REVISION_NUM

If you will be allowing transactions to get revisions from your source system (such as when a sales audit system changes the sales amount after a user audited the transaction) then you should use this field to track the revision number. The initial transaction should come as 1 and later revisions should be posted with a value of 2 or greater. Revision numbers will be stored on the data warehouse table for tracking and auditing purposes.

LOC_CURR_CODE

DOC_CURR_CODE

The currency codes linked to the sales amounts on the transaction. The values in these fields will control how the system converts the amount (such as SLS_AMT_LCL) and how the EXCH_RATE.csv file data will be used. If you are providing the data in the source currency from the point of sale, then DOC_CURR_CODE will be the currency of the source system. LOC_CURR_CODE is the primary reporting currency you wish RAP applications to operate in, so it will generally be a single, hard-coded value on all your data files. Review the section on Exchange Rate dimension data for additional scenarios.

Sales Pack Data

If you have pack items (sets of SKUs or multiple units of SKUs sold as one item) then you will need to spread sales of such items from the pack item level to the individual component SKU level. The interface file SALES_PACK.csv is provided to load component sales values at a summary level of item/location/date. This data is included in sales calculations for both AI Foundation and Planning applications by adding together the sales of SKUs sold individually on SALES.csv and the sales of items inside packs on SALES_PACK.csv.

As an example, assume you have a SKU# 1090 which is a white T-shirt. This item is sold individually to customers, but it is also included in a pack of three shirts. The 3-pack is sold using a separate SKU# 3451. You must provide the data for this scenario as follows:

  • When SKU 1090 sells to a customer, you will have a transaction for 1 unit on SALES.csv

  • When SKU 3451 sells to a customer, you will have a transaction for 1 unit on SALES.csv, plus a record for SKU 1090 for 3 units on SALES_PACK.csv (representing the 3 units inside the pack that sold).

When this data is loaded into other applications like MFP, you will see a total of 4 units of sales for SKU 1090, because we will sum together the sales from both interfaces. The pack-level sale of SKU 3451 is not exported to Planning applications because that would result in double-counting at an aggregate level, but it can be used for other purposes such as Retail Insights reports.
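
Continuing the T-shirt example, the hypothetical records for one individual sale of SKU 1090 and one sale of the 3-pack SKU 3451 could look like this (column lists abbreviated and values illustrative; refer to the Interfaces Guide for the actual SALES_PACK.csv column specification):

SALES.csv:
ITEM,ORG_NUM,DAY_DT,SLS_QTY,SLS_AMT_LCL
1090,1001,20210228,1,10.00
3451,1001,20210228,1,27.00

SALES_PACK.csv:
ITEM,ORG_NUM,DAY_DT,SLS_QTY,SLS_AMT_LCL
1090,1001,20210228,3,27.00

The SALES_PACK.csv row carries the three component units of SKU 1090 sold inside the pack, so applications that sum the two interfaces will report 4 total units for SKU 1090.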

When you are providing SALES_PACK.csv you must also provide the pack item/component item relationships using a dimension file PROD_PACK.csv. Refer to the RAP Interfaces Guide for the full interface specifications of both of these files.

Inventory Data Requirements

Inventory data (INVENTORY.csv) has several special requirements that need to be followed when generating historical data and ongoing daily feeds, due to the way the data is stored within Retail Insights as well as the different use-cases in each of the AI Foundation and Planning applications. A retailer may not have the data in the required format in their source system, and adjustments would have to be made in the data extraction process to ensure these rules are followed.

Requirement File Type Explanation

Records may be needed before the item/location has any stock on hand

Historical and Ongoing

The inventory position contains fields for inventory movement, such as on-order, in-transit, and reserved quantities. As soon as any of those values contain data for an item/location (and you intend to use those fields in RAP), a record should be included for inventory, even if the actual stock on hand is still zero that day or week.

Zero balances must be sent after an item/location has started carrying a non-zero position

Historical and Ongoing

Inventory data is stored positionally, meaning we must maintain the current daily balance of stock on hand for every item/location, including zero balances. The need for zeros in the data is slightly different for the two use cases:

  • Daily Batch: If no change comes into the system on a given day, we carry forward that balance. This means that you cannot send only non-zero values in the data files, as it is assumed the last known value is also the current value for the day or week. You must send a zero balance any time the inventory has moved from non-zero to zero.

  • History Load: The history file does not carry balances forward; we directly insert the data you send us. This means we don’t need a zero for every item-location-week. However, for integration purposes, a record must exist for every item/loc/week where transactions are occurring. For example, any fact (like the Markdown export to PDS) that joins with inventory must have matching inventory records for all item/loc/weeks, even if the stock on hand is zero that week.

Clearance indicator is used to show the end-of-period status of the item/location’s inventory

Historical and Ongoing

Inventory data has a required column for a Clearance Flag (Y/N) to indicate for a given day or week what the status of that item/location’s inventory is. The flag is intended to be the end-of-period clearance status of the item/location, so in the history data you should not send multiple records for the same item/location if the flag changed in the middle of the week. Send only one record with the correct flag for the end-of-week value. Default to N if you don’t use it or don’t know it.

Any change to values on the inventory position should trigger an update of the full record from the source system

Ongoing (Daily)

If you are using other inventory values besides stock on hand (such as on-order or in-transit), you must ensure the extracts will send a complete record to the inventory interface when any of those values change. For example, a new item/location may carry an on-order balance or in-transit balance for several weeks before it has any stock on hand, so your extracts must be triggered by changes to those values, not just changes to stock on hand.

For historical loads, this results in the following flow of data across all your files:

  1. Generate the first month of week-ending inventory balances in INVENTORY.csv for all active item/locations in each week of data. Load using the historical inventory load ad hoc process. Make sure you load Receipts data in parallel with inventory data if you need to capture historical first/last receipt dates against the stock positions (for IO or PMO usage).

  2. Repeat the monthly file generation process, including sets of week-ending balances in chronological order. Remember that you cannot load inventory data out of order, and once a week is loaded you cannot go backwards to update past periods. Make sure all the requirements listed in the table above are satisfied for every week of data. Depending on your data volumes you can include more than one month in a single file upload.

  3. Load every week of inventory snapshots through to the end of your historical period. If there will be a gap of time before starting nightly batches, plan to load an additional history file at a later date to catch up. Make sure you continue loading Receipts data in parallel with inventory data if first/last receipt date calculations are needed.

  4. When you are ready to cutover to batches, you must also re-seed the positions of all item/locations that need to have an inventory record on Day 1 of nightly batch execution (same as for all positional facts in RI). This is needed to fill in any gaps where currently active item/locations are not present in the historical files but need to have an inventory record added on day 1. Use the Seeding Adhoc process for Inventory to do this step, or include a full inventory snapshot file in your first nightly batch run to set all active positions.

The columns you provide on the inventory file will vary depending on your application needs (for example, you may not need the in-transit or on-order columns if you are only providing data for RDF). The most commonly used columns are listed below with additional usage notes.

Column Header Usage

ITEM

Must be the transaction-level item or SKU number for the inventory position and must have a record in the PRODUCT.csv file. Should not be a pack item; all inventory data should be held against the individual components.

ORG_NUM

Must be the location number that is holding the inventory. Must have a record in the ORGANIZATION.csv file. If a location is flagged as non-stockholding in the ORGANIZATION.csv file, then it should not have any data in this file.

DAY_DT

The date that the inventory position record is for. In historical files it must be a week-ending date. In nightly batch files it must be the current system business date on all rows; you cannot post inventory for any other date.

CLEARANCE_FLG

Must be a value of Y or N to indicate whether the inventory is on clearance or not. Differentiating inventory status is important to RI, AI Foundation, and Planning applications anywhere you would be viewing or planning clearance sales separately from regular sales. Depending on your planning configuration, you may not be able to plan or forecast clearance sales without also having clearance inventory (for example, when you are specifically forecasting clearance sales and want to compare them to clearance inventory). If you are not implementing a solution that separates regular and clearance activities, then set this value to N on all rows.

INV_SOH_QTY

INV_SOH_COST_AMT_LCL

INV_SOH_RTL_AMT_LCL

Stock on hand and available to sell. The cost and retail amounts are the total cost/retail value of all units of stock. All the values on this interface are close-of-business values (for example, stock on hand at end of day or end of week). Pack item inventory should be broken down into their component quantities and amounts and summed together with the stock for the same SKUs held individually. Only one row should be provided per item/location/date.

INV_IN_TRAN_QTY

INV_IN_TRAN_COST_AMT_LCL

INV_IN_TRAN_RTL_AMT_LCL

Stock in-transit between two owned locations (such as from warehouse-to-store or store-to-store). This stock is shipped but not yet received. It will be summed together with SOH values for MFP to show total owned stock for a location, inclusive of units that will arrive at that location soon. All the same criteria listed for SOH values apply to these fields as well.

INV_UNIT_RTL_AMT_LCL

The base unit retail value of an item at the location for this stock position. Used by multiple applications to display the retail value for a specific item/location in the context of owned inventory. May or may not be the same value as the Selling Price provided on the PRICE.csv file, depending on how the business calculates the retail value of owned inventory. Other columns like INV_SOH_RTL_AMT_LCL should equal this value multiplied by the unit quantity.

INV_AVG_COST_AMT_LCL

INV_UNIT_COST_AMT_LCL

The average cost and unit cost of owned inventory for this item/location. The default cost used by retailers is usually the average cost (also known as Weighted Average Cost or WAC). Other columns like INV_SOH_COST_AMT_LCL should equal one of these values multiplied by the unit quantity.

LOC_CURR_CODE

DOC_CURR_CODE

The currency codes linked to the cost and retail amounts on the inventory data. The values in these fields will control how the system converts the amount (such as INV_SOH_COST_AMT_LCL) and how the EXCH_RATE.csv file data will be used. If you are providing the data in the source currency from the store or warehouse location, then DOC_CURR_CODE will be the currency of the source location. If the data is already all on one currency, then DOC_CURR_CODE will be that currency code. LOC_CURR_CODE is the primary reporting currency you wish RAP applications to operate in, so it will generally be a single hard-coded value on all your data files. Review the section on Exchange Rate dimension data for additional scenarios.

Price Data Requirements

Pricing data (PRICE.csv) has several special requirements that need to be followed when generating historical data and ongoing daily feeds, due to the way the data is stored within Retail Insights as well as the different use-cases in the RI and AI Foundation applications. A retailer may not have the data in the required format in their source system, and adjustments would have to be made in the data extraction process to ensure these rules are followed.

Requirement File Type Explanation

The first price file must include records for all active item/locations

Historical

The pricing fact is stored positionally, meaning that it first needs a starting position for an entity (for example, an initial price for an item/location) and then it may not be sent again unless there is a change to the value. The very first price file loaded into RI must contain a starting position for all active item/locations. How you determine which item/locations were active on that day in history will depend on your source system (for example, you can base it on items having stock on hand, historical ranging status, or a price history table if you have one).

The first record sent for an item/location must come as a new price transaction type

Historical and Ongoing

Price records have a column for the price type (PRICE_CHANGE_TRAN_TYPE) with a fixed list of possible values. The value 0 represents a new price, which means it is the first time our system is getting a price for this item/location. All item/locations must be given a type 0 record as their first entry in the historical or ongoing data files. All the initial position records in the first file will have type=0. Also, all new item/locations coming in later files must first come with type=0.

Price records should follow a specific lifecycle using the type codes

Historical and Ongoing

The typical flow of price changes that will occur for an item/location should be as follows:

  • New price/cost (PRICE_CHANGE_TRAN_TYPE=0)

  • Regular cost changes (PRICE_CHANGE_TRAN_TYPE=2)

  • Regular price changes (PRICE_CHANGE_TRAN_TYPE=4)

  • Promotional/temporary markdowns (PRICE_CHANGE_TRAN_TYPE=9)

  • Clearance markdowns (PRICE_CHANGE_TRAN_TYPE=8)

  • Price reset due to new selling cycle or season change (PRICE_CHANGE_TRAN_TYPE=0)

An item/location may have many changes with types 2/4/8/9 before eventually staying at 8 (for a final markdown) or resetting to 0 (if the item lifecycle should restart).
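
A hypothetical lifecycle for a single item/location could look like the following PRICE.csv records, spread across multiple files over time (values are illustrative; the standard retail is assumed to stay at the regular price during the temporary promotion):

ITEM,ORG_NUM,DAY_DT,PRICE_CHANGE_TRAN_TYPE,SELLING_UOM,STANDARD_UNIT_RTL_AMT_LCL,SELLING_UNIT_RTL_AMT_LCL,BASE_COST_AMT_LCL,LOC_CURR_CODE,DOC_CURR_CODE
89833651,1004,20200208,0,EA,93.11,93.11,53.56,USD,USD
89833651,1004,20200301,2,EA,93.11,93.11,51.00,USD,USD
89833651,1004,20200315,4,EA,85.00,85.00,51.00,USD,USD
89833651,1004,20200420,9,EA,85.00,68.00,51.00,USD,USD
89833651,1004,20200601,8,EA,59.50,59.50,51.00,USD,USD
89833651,1004,20200901,0,EA,93.11,93.11,53.56,USD,USD

The records show a new price (0), a cost change (2), a regular price change (4), a temporary promotional markdown (9), a clearance markdown (8), and finally a price reset for a new selling cycle (0).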

Price changes are for the end-of-day value only

Historical and Ongoing

An item price may change many times a day, but you must only send the end-of-day final position for the item/location. The file interface assumes only one record will be sent per item/location/effective date, representing the final price on that date.

For historical loads, this results in the following flow of data across all your files:

  1. Generate an initial position PRICE.csv that has all type=0 records for the item/locations you want to specify a starting price for. Load this as the very first file using the historical load ad hoc process.

  2. Generate your first month of price change records. This will have a mixture of all the price change types. New item/location records may come in with type=0 and records already established can get updates using any of the other type codes. Only send records when a price or cost value changes; do not send every item/location on every date. You also must not send more than one change per item/location/date.

  3. Repeat the monthly file generation (or more than one month if your data volume for price changes is low) and load process until all price history has been loaded for the historical timeframe.

  4. When you are ready for the cutover to batches, you must also re-seed the positions of all item/locations that need a price record on Day 1 of nightly batch execution (same as for all positional facts in RI). This is needed to fill in any gaps where currently active item/locations are not present in the historical files, but need a price record added on day 1. Use the Seeding Ad Hoc process for Pricing to do this step, not the historical load.

In most cases, you will be providing the same set of price columns for any application. These columns are listed below with additional usage notes.

Column Header Usage

ITEM

Must be the transaction-level item or SKU number and must have a record in the PRODUCT.csv file. You should provide a price record for all sellable or inventoried items.

ORG_NUM

Must be the location number where the item on the record is ranged to. Must have a record in ORGANIZATION.csv file. A price should be provided for every location where a transaction could occur for the item on the record, such as a sale or return.

DAY_DT

The date that the price record is for. In historical files it will be the effective date of the price change. In nightly batch files it must be the current system business date on all rows; you cannot post prices for any other date. The data sent nightly is for price changes effective for that one date.

PRICE_CHANGE_TRAN_TYPE

The type of price change event, represented by a numerical code as defined in the business rules earlier in this section. This is NOT part of the primary key, meaning that you can only provide a single price per item/location/date.

SELLING_UNIT_RTL_AMT_LCL

The current selling retail price of the item, at the specified location, on the specified business date. The selling price will generally reflect the current “ticket price” of the item that a customer would pay before transaction-level adjustments like coupons or loyalty awards. The price is also a key input to certain forecast functions (LLC, causal, promo lift).

BASE_COST_AMT_LCL

The unit cost of the item at this location. Only used by AI Foundation and Retail Insights reporting. In OO/PMO, the price and cost are both used to determine the Gross Margin amount for the item/location in pricing objectives.

LOC_CURR_CODE

DOC_CURR_CODE

The currency codes linked to the cost and retail amounts on the price data. The values in these fields will control how the system converts the amount (such as SELLING_UNIT_RTL_AMT_LCL) and how the EXCH_RATE.csv file data will be used. If you are providing the data in the source currency from the store or warehouse location, then DOC_CURR_CODE will be the currency of the source location. If the data is already all on one currency, then DOC_CURR_CODE will be that currency code. LOC_CURR_CODE is the primary reporting currency you wish RAP applications to operate in, so it will generally be a single hard-coded value on all your data files. Review the section on Exchange Rate dimension data for additional scenarios.

Receipts Data Requirements

Receipts data (RECEIPT.csv) is used specifically for receipt transactions where inventory units are received into an owned location (like a store or warehouse), and that receipt impacts the stock on hand for the location. The file is used for several purposes throughout RAP: it is needed by MFP for inventory plans, by IO and OO for determining first and last receiving dates by item/location, and by RI for reporting on receipt activity. The receipts data must be loaded in parallel with inventory position if AIF modules are being implemented, because the calculations for IO/OO are done up front during each load of inventory position and receipt files.

Rule Explanation

Receipt Types

The receipts are provided using a type code, with 3 specific codes supported:

  • 20 – This code is for purchase order receipts, which are usually shipments from a supplier into a warehouse (but can be into stores).

  • 44~A – These are allocation transfer receipts resulting from allocations issued to move warehouse inventory down to stores. The receipt occurs for the store location on the day it receives the shipment.

  • 44~T – These are generic non-allocation transfer receipts between any two locations.

The MFP GA solution only uses type 20 transactions, but the rest of the RAP solutions use all types.

Receipts vs. Transfers

Transfer receipts are not the same thing as transfers (TRANSFER.csv) and both datasets provide useful information. Transfer receipts are specific to the receiving location only and occur at the time the units arrive. Transfers are linked to both the shipping and receiving locations, and they should be sent at the time the transfer is initiated. The MFP GA solution receives transfers from the TRANSFER.csv file only, but the other solutions will want both RECEIPT.csv and TRANSFER.csv files to have the transfer-related data.

Unplanned Receipts

It is possible for a location to receive inventory it did not ask for (for example, there is no associated PO or allocation linked to those units). Such receipts should still appear as a type 44~T receipt transaction, so long as those units of inventory do get pulled into the location’s stock on hand.

In most cases, you will be providing the same set of receipt columns for any application. These columns are listed below with additional usage notes.

Column Header Usage

ITEM

Must be the transaction level item or SKU number and must have a record in the PRODUCT.csv file.

ORG_NUM

Must be the location number where the item is received. Must have a record in ORGANIZATION.csv file.

DAY_DT

The date that the receipt occurred on. Receipts can occur on any date both in history and nightly batch files.

INVRC_TYPE_CODE

Indicates the type of receipt into a location using merchandising transaction codes 20 (purchase order receipt), 44~A (allocation transfer receipt), or 44~T (non-allocation transfer receipt). Only PO receipts are used in MFP, as they represent actual inventory entering the company, and not just movement between two owned locations.

INVRC_QTY

INVRC_COST_AMT_LCL

INVRC_RTL_AMT_LCL

The units being received and the total cost/retail value of those units relative to the receiving location.

LOC_CURR_CODE

DOC_CURR_CODE

The currency codes linked to the cost and retail amounts on the receipt data. The values in these fields will control how the system converts the amount (such as INVRC_COST_AMT_LCL) and how the EXCH_RATE.csv file data will be used. If you are providing the data in the source currency from the store or warehouse location, then DOC_CURR_CODE will be the currency of the source location. If the data is already all on one currency, then DOC_CURR_CODE will be that currency code. LOC_CURR_CODE is the primary reporting currency you wish RAP applications to operate in, so it will generally be a single hard-coded value on all your data files. Review the section on Exchange Rate dimension data for additional scenarios.

Transfer Data Requirements

Transfer data (TRANSFER.csv) is used to capture movement of inventory between two locations (warehouses or stores). Transfer transactions are sent at the time the transfer is initiated at the shipping location. Transfer transactions are used primarily in Planning applications and RI reporting.

Rule Explanation

Transfer Types

Transfers are provided using a type code, with 3 specific codes supported:

  • N – Normal transfers are physical movement of inventory between two locations that impacts the stock on hand

  • B – Book transfers are financial movement of inventory in the system of record that doesn’t result in any physical movement, but still impacts the stock on hand

  • I – Intercompany transfers involve inventory moved into or out of another location that is part of a different legal entity, and therefore the transfer is treated like a purchase transaction in the source system

Most transfers are categorized as Normal (N) by default. All transfer types are sent to Planning but would be loaded into separate measures as needed based on the type. Because transfers and receipts are separate measures used for different purposes, there is no overlap despite having similar information in both files.

Transfer In vs. Transfer Out

The transfers file has two sets of measures for the unit/cost/retail value into the location and out of the location. Typically these values contain the same data, but since they are aggregated and displayed separately in the target systems, they are also separate on the input so you have full control over what goes into each measure. For example, a transfer in of 5 units to location 102 would also have a transfer out of 5 units leaving location 56 (on the same record).
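
That example could be represented as the following hypothetical TRANSFER.csv record (column list abbreviated):

ITEM,ORG_NUM,DAY_DT,FROM_ORG_NUM,TSF_TYPE_CODE,TSF_TO_LOC_QTY,TSF_FROM_LOC_QTY
82411,102,20210307,56,N,5,5

Both the to-location and from-location measures carry 5 units on the same record, with location 102 receiving the transfer and location 56 shipping it.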

In most cases, you will be providing the same set of transfer columns for any application. These columns are listed below with additional usage notes.

Column Header Usage

ITEM

Must be the transaction level item or SKU number and must have a record in the PRODUCT.csv file.

ORG_NUM

Must be the location number where the item inventory is being shipped to (that is, the receiving location). Must have a record in ORGANIZATION.csv file.

DAY_DT

The date that the transfer was initiated on (NOT the date when it completes or is received). Transfers can occur on any date both in history and nightly batch files.

FROM_ORG_NUM

Must be the location number where the item inventory is being moved out of (that is, the shipping location). Must have a record in ORGANIZATION.csv file.

TSF_TYPE_CODE

The code representing the transfer type (N, B, or I), as described in the business rules for transfers earlier in this section. This is a part of the primary key, meaning you may have multiple records for the same item/location/from-location/date with different transfer type codes.

TSF_TO_LOC_QTY

TSF_TO_LOC_COST_AMT_LCL

TSF_TO_LOC_RTL_AMT_LCL

The units and total cost/retail values for the transfer relative to the “to” location (ORG_NUM). Will be separately aggregated and displayed in MFP.

TSF_FROM_LOC_QTY

TSF_FROM_LOC_COST_AMT_LCL

TSF_FROM_LOC_RTL_AMT_LCL

The units and total cost/retail values for the transfer relative to the “from” location (FROM_ORG_NUM). Will be separately aggregated and displayed in MFP.

LOC_CURR_CODE

FROM_DOC_CURR_CODE

FROM_LOC_CURR_CODE

DOC_CURR_CODE

The currency codes linked to the cost and retail amounts on the transfer data. The values in these fields will control how the system converts the amount (such as TSF_TO_LOC_COST_AMT_LCL) and how the EXCH_RATE.csv file data will be used. If you are providing the data in the source currency from the store or warehouse location, then DOC_CURR_CODE will be the currency of the source location. If the data is already all on one currency, then DOC_CURR_CODE will be that currency code. LOC_CURR_CODE is the primary reporting currency you wish RAP applications to operate in, so it will generally be a single hard-coded value on all your data files. The FROM columns will be the same thing applied for the FROM_ORG_NUM location. Review the section on Exchange Rate dimension data for additional scenarios.

Adjustment Data Requirements

Adjustments data (ADJUSTMENT.csv) is used to capture manual changes to stock on hand made for any reason that does not fall into one of the other categories like sales or receipts. Adjustments are used only in Planning and RI applications.

Rule Explanation

Adjustment Types

The adjustments are provided using a type code, with 3 specific codes supported:

  • 22 – These adjustments are your standard changes to inventory for wastage, spoilage, losses, and so on. In Planning they are categorized as Shrink adjustments.

  • 23 – These adjustments are for specific changes that impact the Cost of Goods Sold but are not an unplanned shrink event, such as charitable donations. In Planning they are categorized as Non-Shrink adjustments.

  • 41 – These adjustments are targeted specifically at reporting needs and are the result of a stock count activity where the inventory levels were already adjusted in the store’s inventory as part of the count, but you want the adjustment captured anyway to report against it.

Only types 22 and 23 go to Planning applications. Type 41 is used within RI for reporting.

Reason Codes

Reason codes are used to identify the specific type of adjustment that occurred for that item, location, and date. They are used only for RI reporting; Planning applications do not look at reason codes, so they are not required if you are loading data only for Planning. There are no required codes; the values depend on the data in your source system. The codes should be numerical, and a Description field must also be provided for the display name.

Positive and Negative Values

Adjustments should be positive by default. A positive adjustment on the input file means a decrease in the stock on hand at the location. A negative adjustment means an increase to stock on hand (basically you have adjusted the units back into the location’s inventory, which is less common). When the data is sent to MFP, the default planning import will invert the signs for the positive adjustments to become subtractions to inventory.
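
As a hypothetical illustration of the sign convention:

INVADJ_QTY = 10 (spoilage removing 10 units) → MFP displays the adjustment as a subtraction of 10 units from inventory
INVADJ_QTY = -4 (units adjusted back into stock) → MFP displays the adjustment as a positive 4 units

In both cases, the sign on the input file is the opposite of the effect shown in the Planning measures.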

In most cases, you will be providing the same set of adjustment columns for any application. These columns are listed below with additional usage notes.

Column Header Usage

ITEM

Must be the transaction level item or SKU number and must have a record in the PRODUCT.csv file.

ORG_NUM

Must be the location number where the item inventory was adjusted. Must have a record in ORGANIZATION.csv file.

DAY_DT

The date that the adjustment occurred on. Adjustments can occur on any date both in history and nightly batch files.

INVADJ_TYPE_CODE

Indicates the type of adjustment to inventory using transaction codes 22 (shrink adjustment), 23 (non-shrink adjustment), or 41 (stock count adjustment). The codes determine the associated Planning measures the data is placed into. Code 41 is only used in Retail Insights reporting.

INVADJ_QTY

INVADJ_COST_AMT_LCL

INVADJ_RTL_AMT_LCL

The units and total cost/retail values for the adjustment. The cost and retail amounts may have varying calculations in your source system depending on how shrink and non-shrink inventory adjustments are determined. Adjustments can be negative or positive depending on whether inventory is being removed or added to the stock on hand. The sign is reversed by MFP, meaning negative amounts on the input will display as positive values on the MFP measures.

LOC_CURR_CODE

DOC_CURR_CODE

The currency codes linked to the cost and retail amounts on the adjustment data. The values in these fields will control how the system converts the amount (such as INVADJ_COST_AMT_LCL) and how the EXCH_RATE.csv file data will be used. If you are providing the data in the source currency from the store or warehouse location, then DOC_CURR_CODE will be the currency of the source location. If the data is already all on one currency, then DOC_CURR_CODE will be that currency code. LOC_CURR_CODE is the primary reporting currency you wish RAP applications to operate in, so it will generally be a single hard-coded value on all your data files. Review the section on Exchange Rate dimension data for additional scenarios.

RTV Data Requirements

Return to vendor data (RTV.csv) is used to capture returns of inventory from your stores or warehouses back to the original vendor. An RTV is a transaction that decreases your owned inventory at a location because you are shipping units to a non-owned location. RTVs are used only in Planning and RI applications.

Rule Explanation

Supplier IDs, Reason Codes, Status Codes

All of the reason code, supplier number, and status code fields in an RTV record are optional and used only for RI reporting purposes, because planning applications do not report at those levels. If you are not specifying these values, leave the columns out of the file entirely, and a default value of -1 will be assigned to the record in those columns.

Positive and Negative Values

RTV transactions should always be positive values. Only send negative values to reverse a previously-sent transaction in order to zero it out from the database.

In most cases, you will be providing the same set of RTV columns for any application. These columns are listed below with additional usage notes.

Column Header Usage

ITEM

Must be the transaction level item or SKU number and must have a record in the PRODUCT.csv file.

ORG_NUM

Must be the location number where the RTV is being shipped out from. Must have a record in ORGANIZATION.csv file.

DAY_DT

The date that the RTV occurred on. RTVs can occur on any date both in history and nightly batch files.

RTV_QTY

RTV_COST_AMT_LCL

RTV_RTL_AMT_LCL

The units and total cost/retail values for the returns to vendor.

LOC_CURR_CODE

DOC_CURR_CODE

The currency codes linked to the cost and retail amounts on the RTV data. The values in these fields will control how the system converts the amount (such as RTV_COST_AMT_LCL) and how the EXCH_RATE.csv file data will be used. If you are providing the data in the source currency from the store or warehouse location, then DOC_CURR_CODE will be the currency of the source location. If the data is already all on one currency, then DOC_CURR_CODE will be that currency code. LOC_CURR_CODE is the primary reporting currency you wish RAP applications to operate in, so it will generally be a single hard-coded value on all your data files. Review the section on Exchange Rate dimension data for additional scenarios.

Markdown Data Requirements

Markdown data (MARKDOWN.csv) is used to capture changes in retail value of owned inventory due to a permanent or temporary price change. Markdowns are used only in Planning and RI applications. There are separate measures on the input file for markdown and markup effects depending on the kind of price change and direction of the change. For regular and clearance price changes, the file captures the total change in value of owned inventory units on the day the price change goes into effect. For promotional or temporary price changes, the file should have only the marginal effects of the price change when any of that inventory is sold to a customer (since the overall value of your inventory is not changed by a temporary promotion).

Rule Explanation

Markdown Amounts

Markdown amounts are only the change in total value of inventory, not the total value itself. Permanent and clearance price changes result in markdown amounts derived like this:

Markdown Retail = (SOH*Old Retail) – (SOH*New Retail)

Markdown Retail = (150*15) – (150*12) = $450

Promotional price changes do not need the total markdown amount calculation, and instead send a promotion markdown amount at the time of any sale:

Promotional Markdown Retail = (Units Sold*Old Retail) – (Units Sold*New Retail)

Promotional Markdown Retail = (5*17) – (5*15) = $10

Markdown amounts will generally be positive values when the price was decreased, and the target systems will know when to add or subtract the markdown amounts where needed.

Markdown Types

The markdowns are provided using a type code, with 3 specific codes supported:

  • R – Regular permanent price changes that are not considered a clearance price

  • C – Clearance markdowns which are permanent and intended to be used at end-of-life for the item

  • P – Promotional markdowns which are temporary price changes or discounts that are limited to a period of time

Markup Handling

When a regular price is increased or a clearance price is set back to regular price, you can send a separate transaction with positive Markup values populated in the record. You do not need to send negative values to reverse a markdown; the target systems can use the markup measures to do that. A similar rule applies to the markdown/markup cancel measures.

Inventory Usage for PDS Measures

Markdown data is joined with inventory data when you are exporting it to Planning applications, specifically to calculate two markdown measures (reg-promo and clearance-promo markdown amounts). This means you must have a matching inventory history record for every markdown record or these two measures will not export properly. The markdown export needs the clearance flag from the inventory history to determine the measure rollups.

In most cases, you will be providing the same set of markdown columns for any application. These columns are listed below with additional usage notes.

Column Header Usage

ITEM

Must be the transaction level item or SKU number and must have a record in the PRODUCT.csv file.

ORG_NUM

Must be the location number where the markdown transaction occurred. Must have a record in ORGANIZATION.csv file.

DAY_DT

The date that the markdown occurred on. Markdowns can occur on any date both in history and nightly batch files.

RTL_TYPE_CODE

The type of markdown using one of R/P/C characters to identify it as described in the business rules above. The type code determines which measures in Planning will get the data. Regular and clearance markdowns are considered as “inventory devaluation” while promo markdowns are shown as “markdowns”. Promo markdowns are further split into clearance-promo and reg-promo using the clearance flag from Inventory Position data.

MKDN_QTY

MKDN_AMT_LCL

The units affected by a markdown and the total change in retail value as a result of a markdown. Both values will be positive numbers when representing a decrease in retail value as the result of a markdown.

MKUP_QTY

MKUP_AMT_LCL

The units affected by a markup and the total change in retail value as a result of a markup. Both values will be positive numbers when representing an increase in retail value as the result of a markup.

MKDN_CAN_QTY

MKDN_CAN_AMT_LCL

The units affected by a cancelled markdown and the total change in retail value as a result of the cancellation. Both values will be positive numbers when representing an increase in retail value as the result of the cancellation.

MKUP_CAN_QTY

MKUP_CAN_AMT_LCL

The units affected by a cancelled markup and the total change in retail value as a result of the cancellation. Both values will be positive numbers when representing a decrease in retail value as the result of the cancellation.

DOC_CURR_CODE

LOC_CURR_CODE

The currency codes linked to the retail amounts on the markdown data. The values in these fields will control how the system converts the amount (such as MKDN_AMT_LCL) and how the EXCH_RATE.csv file data will be used. If you are providing the data in the source currency from the store or warehouse location, then DOC_CURR_CODE will be the currency of the source location. If the data is already all on one currency, then DOC_CURR_CODE will be that currency code. LOC_CURR_CODE is the primary reporting currency you wish RAP applications to operate in, so it will generally be a single hard-coded value on all your data files. Review the section on Exchange Rate dimension data for additional scenarios.

Purchase Order Data Requirements

Purchase order data (ORDER_HEAD.csv and ORDER_DETAIL.csv) is used to capture the raw PO details at the lowest level. Purchase orders are used in Planning, IO, and RI applications. For Planning, the PO data is aggregated from the order level to a forward-looking summary based on the open-to-buy (OTB) week date. The raw details are used as-is in RI and IO.

Rule Explanation

Daily Data Requirements

It is expected that the PO header and detail files start as full daily snapshots of all active or recently closed orders. The detail data is maintained positionally so that, if no update is received, we will continue to carry forward the last known value. Once daily batches have started, you can transition the PO detail file to an incremental update file (the header file must always be a complete snapshot). When sending data incrementally, you must include all order updates for a given date, both for open and closed orders. If an order changed at the header level (such as closing or cancelling the order), you should send all the detail lines in that order even if some didn’t change. This includes when order lines are fully received and move to 0 units remaining; these changes must be sent to RAP.

Historical Data and Past Dates

The Order interfaces do not support loading historical data or data with past dates on the detailed order-line records. Every time you load orders, it is for the current set of data for a single business date. The DAY_DT value on the detail file should be the same on all rows and be set to the business date the data is for. You also cannot reload the same date multiple times; the detail table follows the rules for positional facts as described in the Positional Data Handling section below.

Order Status

The header file for POs has a variety of attributes but one of the most important is the status, which should be either A (active) or C (closed). Active orders are used in the PO calculations. When sending daily PO files, you must include both active and closed order updates, because we need to know an order has been completed so it can stop being included in calculations.

OTB EOW Date

The OTB end-of-week date is used for the Planning aggregations to create a forward-looking view of expected receipts from POs. Open order quantities are aggregated to the OTB week before being exported to Planning. If the OTB week has elapsed, the order quantities are included in next week’s OTB roll-up regardless of how far in the past the date is, because the earliest that PO can come as receipts is in the next business week.

Include On Order Indicator

There is a required flag on the order header file to tell the system that an order should be included in calculations for Planning or not. When the flag is set to Y, that order’s details will be used for the aggregated on-order values. If set to N, the order details will not be used (but will still be present in the database for other purposes like RI reporting and inventory optimization).

The ORDER_HEAD.csv and ORDER_DETAIL.csv files both have a minimum set of required fields to make the integrations within RAP function, so those will be listed out below with additional usage notes. The two files are tightly coupled and it’s expected that you send both at the same time; you will never send only one of them.

Table 8-10 ORDER_HEAD.csv

Column Header Usage

ORDER_NO

Unique identifier of a purchase order in the source system. This must be the same number used across the entire life of the order, so that you can post updates and revisions to it over time. If your source system changes the order number for any reason, be aware that you may need to keep track of old order numbers to post the updates to RAP for the right orders.

STATUS

Tells the system if the order is active (A) or closed/cancelled (C). Only active orders will be used for Planning applications, even if a closed order still has on-order quantities on the records. It is important that you post closed orders to RAP when they close, so that we have an accurate status for all orders.

OTB_EOW_DATE

The week-ending date where the open order quantities should be considered for Open To Buy (OTB) planning. This will drive the aggregation of the purchase order data into future weeks before it is loaded into Planning applications. When the OTB_EOW_DATE is in the past, any remaining open order quantities will be pushed into the next possible week-ending date that is in the future, because those units cannot be received before that time.

INCLUDE_ON_ORDER_IND

A flag to instruct the system on which orders should be considered for Open to Buy and any other open on-order calculations. If the flag is N then the order will not be sent to Planning, but it can still be used in Retail Insights reporting or custom extensions.

Table 8-11 ORDER_DETAIL.csv

Column Header Usage

ITEM

Must be the transaction level item or SKU number and must have a record in the PRODUCT.csv file. Should not be a pack item, even if the supplier would be delivering in packs. It is expected that inventory and purchase order data are always at the component item level. Pack item data should be spread to their components before sending this file.

ORG_NUM

Must be the location number that placed the order and will receive the goods, and it must have a record in the ORGANIZATION.csv file. This is usually a warehouse, but it can be a store if you allow direct-to-store deliveries.

DAY_DT

The current business date in the system for this data load. You cannot send order data for any date other than the current business date, because this is a positional table like Inventory; past or future dates will not be accepted. This column exists mainly for reference and archival purposes (for example, when reviewing old data files you will know which business date a set of records was for).

ORDER_NO

A cross-reference to the order number on ORDER_HEAD.csv.

PO_ONORD_QTY

PO_ONORD_COST_AMT_LCL

PO_ONORD_RTL_AMT_LCL

The current outstanding order quantities and total cost/retail value of the units on order. These values represent the expected units that have not been received yet. Because this interface is positional (like inventory), we will carry forward the last known quantities every day unless you update the system with new records. This means that, even when the order is received or cancelled, we must eventually get a record that moves all of these columns to 0, or we will carry forward the non-zero values indefinitely.

DOC_CURR_CODE

LOC_CURR_CODE

The currency codes linked to the cost and retail amounts on the order data. The values in these fields control how the system converts the amounts (such as PO_ONORD_COST_AMT_LCL) and how the EXCH_RATE.csv file data is used. If you are providing the data in the source currency of the store or warehouse location, then DOC_CURR_CODE is the currency of that source location. If the data is already in a single currency, then DOC_CURR_CODE is that currency code. LOC_CURR_CODE is the primary reporting currency in which you wish the RAP applications to operate, so it will generally be a single hard-coded value on all your data files. Review the section on Exchange Rate dimension data for additional scenarios.
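
Continuing the hypothetical order 1000401 above, the following is a minimal illustrative ORDER_DETAIL.csv, assuming a business date of June 8, 2022, source amounts captured in the ordering location’s currency (CAD), and USD as the primary reporting currency. Item and location numbers are hypothetical, and the column order shown is illustrative.

File Name: ORDER_DETAIL.csv

File Contents:
ITEM,ORG_NUM,DAY_DT,ORDER_NO,PO_ONORD_QTY,PO_ONORD_COST_AMT_LCL,PO_ONORD_RTL_AMT_LCL,DOC_CURR_CODE,LOC_CURR_CODE
90010021,1001,20220608,1000401,120,600.00,1440.00,CAD,USD
90010022,1001,20220608,1000401,80,480.00,960.00,CAD,USD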

It is also necessary to understand the lifecycle of a purchase order and how it should be reflected in the data files over time. RAP requires data to be sent for each step in the order process, as outlined below.

  1. When the order is approved, the ORDER_HEAD file should contain a row for the order with status = A, and ORDER_DETAIL should contain the items on the order with non-zero quantities for the on-order amounts.

  2. As the lines of the order are received, ORDER_HEAD should continue to include the row for the order with every update, and ORDER_DETAIL should be sent with the order lines that have changed since the last known value. If you can detect which order lines changed, you only need to send those; RAP will remember and carry forward any order lines that were not updated. If you cannot detect changes to order lines, send all lines in the order every time.

  3. If any lines are cancelled from the order, you must send that update as a set of zero values in the PO_ONORD_* columns of ORDER_DETAIL to zero out the cancelled lines in RAP. Similarly, if the entire order is cancelled or closed before being fully received, you must send all lines of the order with zero values in the PO_ONORD_* columns of ORDER_DETAIL and also update ORDER_HEAD to have a status of C.

  4. As order lines are received normally, send the new order quantities for each change, including when a line is fully received and moves to 0 units on order. When an order becomes fully received, all of its rows in RAP must move to 0, so that it stops being included in future on-order rollups.

  5. When an order is finally fully received and closed, send one final update in which ORDER_HEAD shows the status as C and the ORDER_DETAIL data is moved to 0 units on order for any lines not yet updated.

Depending on your source system, it can be difficult to detect all of these changes to purchase orders over time and send only incremental updates. In such cases, you may always post to RAP all orders that are active or were closed within recent history, and the data will be merged into the system on top of the existing order records. The main requirement that must still be accounted for is the cancelling or removal of order lines from an order, which must be tracked and sent to RAP even if your source system deletes the data.
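
As a hypothetical end-to-end example of this lifecycle, consider a single line of order 1000401 (item 90010021 at location 1001, using the illustrative values from the files above). The ORDER_DETAIL records sent across successive business dates might progress as follows:

Business date 20220608 (order approved; ORDER_HEAD sent with status A):
90010021,1001,20220608,1000401,120,600.00,1440.00,CAD,USD

Business date 20220613 (70 units received; only the changed line is sent):
90010021,1001,20220613,1000401,50,250.00,600.00,CAD,USD

Business date 20220616 (remaining 50 units received; ORDER_HEAD now sent with status C):
90010021,1001,20220616,1000401,0,0.00,0.00,CAD,USD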

Other Fact File Considerations

The following section describes additional data considerations that may apply to your implementation, depending on the types and volumes of data being provided to the platform. Review each topic closely, as each may affect the data provided in the foundation files.

Positional Data Handling

The largest sets of fact data in the platform tend to be those that represent every possible item/location combination (such as prices or costs). To store and process these data volumes efficiently, a data warehouse technique known as compression is used to capture only the records that change from day to day, effectively maintaining a “current position” for every set of identifiers that is updated during each batch execution. The output of this compression process is called positional data, and the following functional areas use this method of data load and storage:

  •  Inventory (INV and INVU)

  •  Prices (PRICE)

  •  Costs (BCOST and NCOST)

  •  Purchase Orders (PO_ONORD) and Allocations On Order (PO_ONALC)

Positional data loads follow very specific rules and cannot be processed in the same manner as non-positional data such as sales transactions.
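
As a simplified illustration of this behavior (item, location, and unit values here are hypothetical), consider a weekly inventory feed for one item at two locations. In the Week 1 load you send a full snapshot: item 90010021 has 100 units at location 1001 and 40 units at location 1002. In the Week 2 load, only location 1001 has changed, so you send a single record with 85 units. After the Week 2 load, the stored positions are 85 units at location 1001 and 40 units at location 1002; the unchanged location was carried forward automatically rather than being re-sent in the file.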

Table 8-12 Positional Data Rules

Rule Explanation

Data Must be Sequential

Positional data must be loaded in the order of the calendar date on which it occurs and cannot be loaded out-of-order. For example, when loading history data for inventory, you must provide each week of inventory one after the other, starting from Week 1, 2, 3, and so on.

Data Cannot be Back Posted

Positional data cannot be posted to any date prior to the current load date or business date of the system. If your current load date is Week 52 2021, you cannot post records back to Week 50: those past positions cannot be changed. Any corrections must be made effective from the current date forward.

Data Must be Seeded

Because positional data must maintain the current position of all data elements in the fact (even those that are inactive or not changing), positional facts must be initialized, or “seeded,” with a starting value for every possible combination of identifiers. This happens at two times:

  1. The first date in your history files must contain a full snapshot of all item/locations that need a value, including zero balances for things like inventory.

  2. Special seed programs are provided to load initial full snapshots of data after history is finished, to prepare you for nightly batch runs. After seeding, you are allowed to provide incremental datasets (posting only the positions that change, not the full daily or weekly snapshot). Incremental loads are one of the main benefits of using positional data, as they greatly reduce your nightly batch runtime.

Throughout the initial data load process, there will be additional steps called out any time a positional load must be performed, to ensure you accurately capture both historical and initial seed data before starting nightly batch runs.

System Parameters File

The dimension file RA_SRC_CURR_PARAM_G.dat is not used directly as part of your history load process; instead, it provides an important piece of information to the platform for operational activities. This file must contain the current business date associated with the files in the ZIP package, and it should be included with the nightly ZIP upload that contains your foundation data, such as RI_RMS_DATA.zip or RAP_DATA.zip.

The file has only two generic columns, PARAM_NAME and PARAM_VALUE. When data is sourced from RMFCS, the file is generated automatically and sent to RI in the nightly batch. If your data does not come from RMFCS, you need to include the file manually. Currently, only two rows are used, but future releases may look for additional parameters in the file.

The file should contain the following rows:

VDATE|20220101
PRIME_CURRENCY_CODE|USD

The VDATE parameter value is the current business date for which all your other files were generated, in YYYYMMDD format. The date should match the values on your fact data, such as the DAY_DT columns in sales, inventory, and so on. This format is not configurable and must be provided as shown. The PRIME_CURRENCY_CODE parameter value is used by the system to set default currencies on fact files when you do not provide them yourself or when a row has null currency codes.