This chapter details the steps involved in executing the various data loaders available within OFSAA. Data loaders move data from the staging layer to the processing layer.
Topics:
· Historical Rates Data Loader
· Stage Instrument Table Loader
· Transaction Summary Table Loader
· Pricing Management Transfer Rate Population Procedure
· Fact Ledger Stat Transformation
· Financial Element Dimension Population
· Material Currency Identifier
The Dimension Loader procedure populates dimension members, attributes, and hierarchies from the staging dimension tables into the dimension tables registered within the OFSAAI AMHM framework. Users can view the members and hierarchies loaded by the dimension loader through the AMHM screens.
NOTE |
The dimension loaders (drmDataLoader, STGDimDataLoader, and simpledimloader) load strings into one target language only; the target language is derived from the database login session using USERENV. Refer to Support Note 1586342.1 if a Hierarchy Filter is not reflected correctly after changes are made to the underlying Hierarchy. |
Topics:
· Enhancements to Support Alphanumeric Code in Dimensions
· Tables that are Part Of Staging
· Populating STG_<DIMENSION>_HIER_INTF Table
· Executing the Dimension Load Procedure
· Executing the Dimension Load Procedure using Master Table approach
· Updating DIM_<DIMENSION>_B <Dimension>_Code column with values from DIM_<DIMENSION>_ATTR table
· Truncate Stage Tables Procedure
The dimension loader is used to:
· Load dimension members and their attributes from the staging area into Dimension tables that are registered with the OFSAAI AMHM framework.
· Create hierarchies in AMHM.
· Load hierarchical relationships between members within hierarchies from the staging area into AMHM.
Some of the features of the dimension loader are:
· Multiple hierarchies can be loaded from staging tables.
· Validations of members and hierarchies are similar to those performed within the AMHM screens.
· Members can be loaded incrementally or fully synchronized with the staging tables.
NOTE |
Dimension Loaders and UIs support capturing an alphanumeric code in addition to the numeric code. |
The following Data Model components are required to support dimension member code storage. The changes across releases 6.0 (7.3) and 7.3.1 are as follows:
· Release 7.3.1: Dimension configuration via manual updates to the REV_DIMENSIONS_B columns MEMBER_DATA_TYPE_CODE and MEMBER_CODE_COLUMN. (See also: OFSAAI Installation & Configuration Guide 7.3 and AI Administration Guide.)
· Release 6.0 (7.3): Stage Dimension Interface Table alphanumeric member code column (V_<DIM>_CODE).
· Release 6.0 (7.3): The Stage Dimension Loader program can directly load alphanumeric member codes.
· Release 6.1.1: Some new columns were added to the Staging and Processing tables as part of FSDF. These are not required by EPM applications and are not part of the T2T or FSI_D tables.
For further details on display of member codes in the user interfaces, see the OFSAAI User Guide.
Dimension data is stored in the following set of tables:
· STG_<DIMENSION>_B_INTF: Stores leaf and node member codes within the dimension.
· STG_<DIMENSION>_TL_INTF: Stores names of leaf and node members and their translations.
· STG_<DIMENSION>_ATTR_INTF: Stores attribute values for the attributes of the dimension.
· STG_<DIMENSION>_HIER_INTF: Stores parent-child relationships of members and nodes that are part of hierarchies.
· STG_ORG_UNIT_B_INTF: Stores leaf and node member codes within the organization unit dimension.
· STG_ORG_UNIT_TL_INTF: Stores names of leaf and node members and their translations for the organization unit dimension.
· STG_ORG_UNIT_ATTR_INTF: Stores attribute values for the attributes of the organization unit dimension.
· STG_ORG_UNIT_HIER_INTF: Stores parent-child relationships of members and nodes that are part of hierarchies for the organization unit dimension.
· STG_HIERARCHIES_INTF: Stores master information related to hierarchies.
Data present in the above set of staging dimension tables is loaded into the following set of dimension tables.
· DIM_<DIMENSION>_B: Stores leaf and node member codes within the dimension.
· DIM_<DIMENSION>_TL: Stores names of leaf and node members and their translations.
· DIM_<DIMENSION>_ATTR: Stores attribute values for the attributes of the dimension.
· DIM_<DIMENSION>_HIER: Stores parent-child relationships of members and nodes that are part of hierarchies.
· REV_HIERARCHIES: Stores hierarchy related information.
· REV_HIERARCHY_LEVELS: Stores levels of the hierarchy.
· REV_HIER_DEFINITIONS: Stores definitions of the hierarchies.
Staging tables are present for all key dimensions that are configured within the OFSAAI framework. For any custom key dimension added by the client, the respective staging dimension tables (STG_<DIMENSION>_B_INTF, STG_<DIMENSION>_TL_INTF, STG_<DIMENSION>_ATTR_INTF, and STG_<DIMENSION>_HIER_INTF) have to be created in the ERwin model.
The STG_<DIMENSION>_HIER_INTF table is designed to hold the hierarchy structure, which is maintained by storing the parent-child relationships in the table. The following hierarchy has four levels. The first-level node is 100, which is the Total Rollup. For the Total Rollup node, N_PARENT_DISPLAY_CODE and N_CHILD_DISPLAY_CODE hold the same value.
Column Name | Column Description
V_HIERARCHY_OBJECT_NAME | Stores the name of the hierarchy
N_PARENT_DISPLAY_CODE | Stores the parent Display Code
N_CHILD_DISPLAY_CODE | Stores the child Display Code
N_DISPLAY_ORDER_NUM | Determines the order in which the structure (nodes, leaves) of the hierarchy should be displayed. This is used by the UI while displaying the hierarchy. There is no validation to check if the values in the column are in proper sequence.
V_CREATED_BY | Stores the created-by user. Hard-coded as -1
V_LAST_MODIFIED_BY | Stores the last-modified-by user. Hard-coded as -1
Hierarchy Structure
Simple Data
V_HIERARCHY_OBJECT_NAME | N_PARENT_DISPLAY_CODE | N_CHILD_DISPLAY_CODE | N_DISPLAY_ORDER_NUM | V_CREATED_BY | V_LAST_MODIFIED_BY
INCOME STMT | 100 | 100 | 1 | -1 | -1
INCOME STMT | 100 | 12345678901247 | 2 | -1 | -1
INCOME STMT | 12345678901247 | 12345678901255 | 1 | -1 | -1
INCOME STMT | 12345678901255 | 10001 | 1 | -1 | -1
INCOME STMT | 12345678901255 | 10002 | 2 | -1 | -1
INCOME STMT | 12345678901247 | 12345678901257 | 2 | -1 | -1
INCOME STMT | 12345678901257 | 10006 | 1 | -1 | -1
INCOME STMT | 12345678901257 | 10007 | 2 | -1 | -1
INCOME STMT | 100 | 12345678901250 | 3 | -1 | -1
INCOME STMT | 12345678901250 | 12345678901262 | 2 | -1 | -1
INCOME STMT | 12345678901262 | 30005 | 1 | -1 | -1
INCOME STMT | 12345678901250 | 12345678901264 | 1 | -1 | -1
INCOME STMT | 12345678901264 | 30006 | 1 | -1 | -1
INCOME STMT | 12345678901264 | 30007 | 2 | -1 | -1
INCOME STMT | 12345678901264 | 30008 | 3 | -1 | -1
INCOME STMT | 12345678901264 | 30009 | 4 | -1 | -1
INCOME STMT | 100 | 12345678901268 | 4 | -1 | -1
INCOME STMT | 12345678901268 | 3912228 | 1 | -1 | -1
INCOME STMT | 3912228 | 20020 | 1 | -1 | -1
INCOME STMT | 3912228 | 20021 | 2 | -1 | -1
INCOME STMT | 3912228 | 20022 | 3 | -1 | -1
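The sample rows above can be loaded into the staging hierarchy table with plain inserts. The sketch below loads the root row and one child row of the INCOME STMT hierarchy; substitute the actual dimension name for the <DIMENSION> placeholder before running.

```sql
-- Sketch: load the root of the INCOME STMT hierarchy.
-- The root row repeats the node code in both parent and child columns.
INSERT INTO STG_<DIMENSION>_HIER_INTF
  (V_HIERARCHY_OBJECT_NAME, N_PARENT_DISPLAY_CODE, N_CHILD_DISPLAY_CODE,
   N_DISPLAY_ORDER_NUM, V_CREATED_BY, V_LAST_MODIFIED_BY)
VALUES ('INCOME STMT', 100, 100, 1, '-1', '-1');

-- A second-level node under the Total Rollup (100).
INSERT INTO STG_<DIMENSION>_HIER_INTF
  (V_HIERARCHY_OBJECT_NAME, N_PARENT_DISPLAY_CODE, N_CHILD_DISPLAY_CODE,
   N_DISPLAY_ORDER_NUM, V_CREATED_BY, V_LAST_MODIFIED_BY)
VALUES ('INCOME STMT', 100, 12345678901247, 2, '-1', '-1');

COMMIT;
```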
Column REV_DIMENSIONS_B.MEMBER_CODE_COLUMN
In release 7.3.1, with the introduction of alphanumeric support, the REV_DIMENSIONS_B.MEMBER_CODE_COLUMN column becomes important for successful execution of the dimension loader program and subsequent T2Ts. The value in this column should be a valid code column from the relevant DIM_<DIMENSION>_B (key dimension) or FSI_<DIM>_CD (simple dimension) table. The Leaf Registration procedure populates this column; the value provided to it should be the correct DIM_<DIM>_B.<DIM>_CODE or FSI_<DIM>_CD.<DIM>_DISPLAY_CD column. Setting this ensures that the values in this column are displayed as the Alphanumeric Code in the UI for both numeric and alphanumeric dimensions. Configuring an alphanumeric dimension also requires a manual update of the REV_DIMENSIONS_B.MEMBER_DATA_TYPE_CODE column.
For more information, see OFS AAI Installation and Configuration Guide.
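As a sketch, the manual configuration for an alphanumeric key dimension might look like the following. The dimension ID (4) and the code column name are illustrative, and the exact value expected in MEMBER_DATA_TYPE_CODE is an assumption; confirm both against the OFS AAI Installation and Configuration Guide before applying.

```sql
-- Sketch: mark an illustrative dimension (ID 4) as alphanumeric and point
-- MEMBER_CODE_COLUMN at the code column of its DIM_<DIMENSION>_B table.
UPDATE REV_DIMENSIONS_B
   SET MEMBER_DATA_TYPE_CODE = 'VARCHAR',   -- assumed value; verify in your installation
       MEMBER_CODE_COLUMN    = 'PRODUCT_CODE'
 WHERE DIMENSION_ID = 4;

COMMIT;
```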
This procedure performs the following functions:
· Gets the list of source and target dimension tables. The dimension tables for a given dimension are stored in REV_DIMENSIONS_B table. The stage tables for a given dimension are stored in FSI_DIM_LOADER_SETUP_DETAILS.
· The parameter Synchronize Flag can be used to completely synchronize data between the stage and the dimension tables. If the flag = 'Y' members from the dimension table which are not present in the staging table will be deleted. If the flag is 'N' the program merges the data between the staging and dimension table.
· The Loader program validates the members and attributes before loading them.
The program validates the number of records in the base members table (STG_<DIMENSION>_B_INTF) against the translation members table (STG_<DIMENSION>_TL_INTF). The program exits if the number of records does not match.
In case values for mandatory attributes are not provided in the staging tables, the loader program populates the default value (as specified in the attribute maintenance screens within AMHM of OFSAAI) in the dimension table.
The program validates the data types of attribute values. For example, an attribute that is configured as 'NUMERIC' cannot have non-numeric values.
The Dimension Loader validates the attributes against their corresponding dimension table. If any of the attributes is not present, an error message is logged in the FSI_MESSAGE_LOG table.
The Dimension Loader checks the number of records in DIM_<Dimension>_B and DIM_<Dimension>_TL for the language. If any mismatch is found, an error is logged and loading is aborted.
· If all the member-level validations are successful, the loader program inserts the data from the staging tables into the dimension tables.
Note: In release 6.0 (7.3), the stage dimension loader program was modified to move alphanumeric code values from STG_<DIMENSION>_B_INTF.V_<DIM>_CODE to the DIM_<DIM>_B.<DIM>_CODE column. Previously, the DIM_<DIM>_B.<DIM>_CODE column was populated from the code attributes using the fn_updateDimensionCode procedure. With this enhancement, users can directly load alphanumeric values.
The fn_updateDimensionCode procedure is still available for users who do not want to make any changes to their ETL procedures for populating the dimension staging tables (for example, STG_<DIMENSION>_B_INTF, STG_<DIMENSION>_ATTR_INTF).
· After this, the loader program loads hierarchy data from staging into hierarchy tables.
· For hierarchy data, the loader program validates that the members used in the hierarchy are present in the STG_<DIMENSION>_B_INTF table.
· The program validates whether the hierarchy contains multiple root nodes and logs error messages accordingly, as multiple root nodes are not supported.
· The Dimension Loader checks for special characters in the hierarchy name. A hierarchy name containing special characters will not be loaded.
· The following special characters are not allowed in a Hierarchy Name:
^&\'
After execution of the dimension loader, the user must execute the reverse population procedure to populate OFSA legacy dimension and hierarchy tables.
Dimension Leaf values can have a maximum of 14 digits.
Only 26 key (processing) dimensions are allowed in the database. Examples of seeded key leaf types are Common COA ID, Organizational Unit ID, GL Account ID, Product ID, Legal Entity ID.
The maximum number of columns that the Oracle database allows in a unique index is 32. This is the overriding constraint. After subtracting IDENTITY_CODE, YEAR_S, ACCUM_TYPE_CD, CONSOLIDATION_CD, and ISO_CURRENCY_CD, this leaves 27 columns available for Key Processing Dimensions (leaf dimensions). BALANCE_TYPE_CD is now part of the unique index so this brings the maximum number of leaf columns down to 26.
There is an integrity check performed during dimension data loading to confirm whether dimension members are included in a hierarchy definition. If they are, these members should not be deleted from the dimension member pool. If dimension members are deleted or made inactive as part of the data load, the validation returns the error message: cannot delete a member that is used as part of a hierarchy.
If you wish to override this validation, an additional parameter, force_member_delete, can be passed to the Dimension Data Loader program (fn_drmDataLoader). The parameter can be set to Y or N. Passing Y overrides the used-in-hierarchy dependency validation. Passing N, the default, performs the validation check to confirm whether members are used in a hierarchy.
Below is the function:
function fn_drmDataLoader(batch_run_id varchar2,
as_of_date varchar2,
pDimensionId varchar2,
pSynchFlag char default 'Y',
force_member_delete char default 'N')
The FSI_DIM_LOADER_SETUP_DETAILS table should have a record for each dimension that has to be loaded using the dimension loader. The table contains seeded entries for the key dimensions that are seeded with the application.
The following are sample entries in the setup table:
Column Name | Description | Sample Value
n_dimension_id | This stores the Dimension ID | 1
v_intf_b_table_name | Stores the name of the Staging Base table | Stg_org_unit_b_intf
v_intf_member_column | Stores the name of the Staging Member column | V_org_unit_id
v_intf_tl_table_name | Stores the name of the Staging Translation table | Stg_org_unit_tl_intf
v_intf_attr_table_name | Stores the name of the Staging Member Attribute table | Stg_org_unit_attr_intf
v_intf_hier_table_name | Stores the name of the Staging Hierarchy table | Stg_org_unit_hier_intf
d_start_time | Start time of the loader - updated by the loader program. |
d_end_time | End time of the loader - updated by the loader program. |
v_comments | Stores comments. | Dimension loader for organization unit.
v_status | Status updated by the Loader program. |
v_intf_member_name_col | Stores the name of the Member | V_org_unit_name
v_gen_skey_flag | Flag to indicate if a surrogate key needs to be generated for alphanumeric codes in staging. Applicable only for loading dimension data from master tables; not applicable for loading dimension data from interface tables. Note: Although the application UI may display an alphanumeric dimension member ID, the numeric member ID is the value stored in member-based assumption rules, processing results, and audit tables. Implications for Object Migration: Numeric dimension member IDs should be the same in both the Source and Target environments, to ensure the integrity of any member-based assumptions you wish to migrate. If you use the Master Table approach for loading dimension data and have set it up to generate surrogate keys for members, this can result in differing IDs between the Source and Target, and therefore would be a concern if you intend to migrate objects which depend on these IDs. |
v_stg_member_column | Name of the column that holds the member code (the alphanumeric code) in the staging table. Applicable for loading dimension data from both master tables and interface tables. | v_org_unit_code
v_stg_member_name_col | Name of the column that holds the member name in the staging table. Applicable only for loading dimension data from master tables; not applicable for loading dimension data from interface tables. |
v_stg_member_desc_col | Name of the column that holds the description in the staging table. Applicable only for loading dimension data from master tables; not applicable for loading dimension data from interface tables. |
NOTE |
Ensure that FSI_DIM_LOADER_SETUP_DETAILS.V_STG_MEMBER_COLUMN is updated as follows for the Legal Entity and Customer dimensions. |
Dimension Loader Approach | V_STG_MEMBER_COLUMN for Legal Entity | V_STG_MEMBER_COLUMN for Customer
Using Interface Table (fn_drmDataLoader) | V_LV_CODE | V_CUST_REF_CODE
Using Master Table (fn_STGDimDataLoader) | V_ENTITY_CODE | V_PARTY_ID
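As a sketch, the setup-table update for the Legal Entity dimension under the interface-table approach might look like the following. The WHERE predicate on the staging table name is an assumption; identify the correct setup row in your installation before applying.

```sql
-- Sketch: point the Legal Entity setup row at V_LV_CODE for the
-- interface-table (fn_drmDataLoader) approach.
UPDATE FSI_DIM_LOADER_SETUP_DETAILS
   SET V_STG_MEMBER_COLUMN = 'V_LV_CODE'
 WHERE V_INTF_B_TABLE_NAME = 'STG_LEGAL_ENTITY_B_INTF';  -- illustrative predicate

COMMIT;
```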
You can execute this procedure either from SQL*Plus or from within a PL/SQL block or from the Batch Maintenance window within OFSAAI framework.
To run the procedure from SQL*Plus, log in to SQL*Plus as the Schema Owner. The function takes five parameters: Batch Run Identifier, As-of Date, Dimension Identifier, Synchronize Flag (optional), and Force Member Delete flag (optional). The syntax for calling the procedure is:
function fn_drmDataLoader(batch_run_id varchar2,
as_of_date varchar2,
pDimensionId varchar2,
pSynchFlag char default 'Y',
force_member_delete char default 'N')
where
· BATCH_RUN_ID is any string to identify the executed batch.
· AS_OF_DATE in the format YYYYMMDD.
· pDIMENSIONID dimension id.
· pSynchFlag this parameter is used to identify if a complete synchronization of data between staging and dimension table is required. The default value is 'Y'.
NOTE |
With Synch flag 'N', data is moved from the Stage tables to the Dimension tables in an append process: you can provide a combination of new Dimension records plus data that has undergone change; new records are inserted and the changed data is updated in the Dimension table. With Synch flag 'Y', the Stage table data completely replaces the Dimension table data. A couple of checks are in place to ensure that the stage dimension loader performs validations similar to those the UI provides. The Data Loader performs a dependencies check before a member is deleted: the validation checks whether there are members used in a Hierarchy that are not present in the DIM_<DIM>_B table. This is similar to attempting to delete a member from the UI while it is being used in a Hierarchy definition. You are expected to remove or delete such Hierarchies from the UI before deleting a member. |
For Example:
Declare
num number;
Begin
num := fn_drmDataLoader ('INFODOM_20100405','20100405' ,1,'Y','N');
End;
To execute the procedure from the OFSAAI Batch Maintenance, create a new Batch with the Task as TRANSFORM DATA and specify the following parameters for the task:
· Datastore Type: Select appropriate datastore from list
· Datastore Name: Select appropriate name from the list
· IP address: Select the IP address from the list
· Rule Name: fn_drmDataLoader
· Parameter List: Dimension ID, Synchronize Flag
The fn_drmDataLoader function calls the STG_DIMENSION_LOADER package, which loads data from the stg_<dimension>_hier_intf table to the dim_<dimension>_hier table.
From Release 8.0, the RUNIT.sh utility is available to resave the UMM Hierarchy Objects. Because the fn_drmDataLoader function changes the data for AMHM hierarchies stored in the dim_<dimension>_hier table, the RUNIT.sh utility is executed to refresh the UMM hierarchies that were implicitly created from the AMHM hierarchies. This file resides under the ficdb/bin area.
To run the utility directly from the console:
1. Navigate to $FIC_DB_HOME/bin of the OFSAAI FIC DB tier to execute the RUNIT.sh file.
The following parameters need to be provided:
§ INFODOM: the name of the information domain whose hierarchies are to be refreshed. This is the first parameter and is mandatory.
§ USERID: the AAI user ID of the user performing this activity. This is the second parameter and is also mandatory.
§ HIERARCHY: the hierarchy code(s) to be refreshed. If multiple hierarchies need to be refreshed, they can be provided as tilde (~) separated values. This is the third parameter and is optional.
For example: ./RUNIT.sh,<INFODOM>,<USERID>,<CODE1~CODE2~CODE3>
NOTE |
In case the third parameter is not specified, then all the hierarchies present in the infodom will be refreshed. |
To run the utility through the Operations module:
2. Navigate to the Operations module and define a batch.
3. Add a task by selecting the component as RUN EXECUTABLE.
4. Under Dynamic Parameter List panel, specify ./RUNIT.sh,<INFODOM>,<USERID>,<CODE1~CODE2~CODE3> in the Executable field.
After saving the Batch Definition, execute the batch to resave the UMM Hierarchy Objects.
The loader program can raise the following exceptions; the text and explanation for each follows. If you call the procedure from a PL/SQL block, you may want to handle these exceptions appropriately so that your program can proceed without interruption.
· Exception 1: Error. errMandatoryAttributes
This exception occurs when the stage Loader program cannot find any data default value for mandatory attributes.
· Exception 2: Error. errAttributeValidation
This exception occurs when there is a data type mis-match between the attribute value and configured data-type for the attribute.
· Exception 3: Error. errAttributeMemberMissing
This exception occurs when there is a mismatch in the record count between the member base and translation tables.
The FSI_DIM_LOADER_SETUP_DETAILS table should have a record for each dimension that has to be loaded. The table contains entries for the key dimensions that are seeded with the application.
The following columns must be populated for user-defined Dimensions.
v_stg_member_column
v_stg_member_name_col
v_stg_member_desc_col
NOTE |
Before running DRM_DIMENSION_LOADER for Legal Entity dimension, update the value of FSI_DIM_LOADER_SETUP_DETAILS.V_STG_MEMBER_COLUMN as V_LV_CODE (which is the column available in STG_LEGAL_ENTITY_B_INTF table). |
Additionally, the FSI_DIM_ATTRIBUTE_MAP table should be configured with column attribute mapping data. This table maps the columns from a given master table to attributes.
N_DIMENSION_ID | This stores the Dimension ID
V_STG_TABLE_NAME | This holds the source Stage Master table
V_STG_COLUMN_NAME | This holds the column from the master table
V_ATTRIBUTE_NAME | This holds the name of the attribute the column maps to
V_UPDATE_B_CODE_FLAG | This column indicates if the attribute value can be used to update the code column in the DIM_<Dimension>_B table. Note: fn_STGDimDataLoader does not use FSI_DIM_ATTRIBUTE_MAP.V_UPDATE_B_CODE_FLAG
You can execute this procedure either from SQL*Plus, from within a PL/SQL block, or from the Batch Maintenance window within the OFSAAI framework. To run the procedure from SQL*Plus, log in to SQL*Plus as the Schema Owner. The function requires five parameters: Batch Run Identifier, As-of Date, Dimension Identifier, MIS-Date Required Flag, and Synchronize Flag (optional). The syntax for calling the procedure is:
function fn_STGDimDataLoader(batch_run_id varchar2,
as_of_date varchar2,
pDimensionId varchar2,
pMisDateReqFlag char default 'Y',
pSynchFlag char default 'N')
where
· BATCH_RUN_ID is any string to identify the executed batch.
· AS_OF_DATE in the format YYYYMMDD.
· pDIMENSIONID dimension id.
· pMisDateReqFlag is used to identify if AS-OF_DATE should be used in the where clause to filter the data.
· pSynchFlag is used to identify if a complete synchronization of data between the staging and fusion tables is required. The default value is 'N'.
For Example
Declare
num number;
Begin
num := fn_STGDimDataLoader ('INFODOM_20100405','20100405' ,1,'Y','Y' );
End;
To execute the procedure from OFSAAI Batch Maintenance, create a new Batch with the Task as TRANSFORM DATA and specify the following parameters for the task:
· Datastore Type: Select appropriate datastore from list
· Datastore Name: Select appropriate name from the list
· IP address: Select the IP address from the list
· Rule Name: fn_STGDimDataLoader
· Parameter List: Dimension ID, MIS Date Required Flag , Synchronize Flag
Clients can face a problem while loading the customer dimension into AMHM using the Master Table approach. Configuring the setup table for the CUSTOMER dimension is confusing when dealing with attributes like FIRST_NAME, MIDDLE_NAME, and LAST_NAME. Most clients would like FIRST_NAME, MIDDLE_NAME, and LAST_NAME to form the name of the member within the customer dimension. Currently, the STG_DIMENSION_LOADER does not allow concatenation of columns; moreover, concatenation might not ensure unique values. As a solution to this problem, the following options are available:
1. Create a view on STG_CUSTOMER_MASTER table with FIRST_NAME, MIDDLE_NAME and LAST_NAME concatenated and identify this column as NAME.
2. Configure the name column from the view in FSI_DIM_LOADER_SETUP_DETAILS.
3. Increase the size of DIM_CUSTOMER_TL.NAME column.
4. Disable the unique index on DIM_CUSTOMER_TL.NAME or append Customer_code to the NAME column.
5. The NAME column from the view is populated into the DIM_CUSTOMER_TL.NAME column. Alternatively, populate the customer code into the DIM_CUSTOMER_TL.NAME column.
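The view described in step 1 can be sketched as follows. The STG_CUSTOMER_MASTER column names (V_FIRST_NAME, V_MIDDLE_NAME, V_LAST_NAME, V_CUSTOMER_CODE) and the view name are assumptions; substitute the actual column names from your model.

```sql
-- Sketch: expose a single NAME column built from the name parts.
-- Appending the customer code keeps the value unique (see step 4).
CREATE OR REPLACE VIEW VW_STG_CUSTOMER_MASTER AS
SELECT m.*,
       m.V_FIRST_NAME || ' ' || m.V_MIDDLE_NAME || ' ' ||
       m.V_LAST_NAME  || ' (' || m.V_CUSTOMER_CODE || ')' AS NAME
  FROM STG_CUSTOMER_MASTER m;   -- column names are illustrative
```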
The stage dimension loader procedure does not insert or update the <Dimension>_Code column in the DIM_<Dimension>_B table. This alternate method for updating the <Dimension>_Code column in the DIM_<Dimension>_B table is retained to accommodate implementations that predate the enhancement enabling the code to be loaded directly into the dimension table instead of from the attribute table. It is not recommended for new installations. This section explains how the <Dimension>_Code column can be updated.
Steps to be followed
1. A new attribute should be created in the REV_DIM_ATTRIBUTES_B / TL table.
NOTE |
You should use the existing CODE attribute for the seeded dimensions, for example, PRODUCT CODE, COMMON COA CODE, and so on. |
2. The fsi_dim_attribute_map table should be populated with values.
The following columns must be populated:
§ N_DIMENSION_ID (Dimension id)
§ V_ATTRIBUTE_NAME (The attribute name)
§ V_UPDATE_B_CODE_FLAG (This flag should be 'Y'). Any given dimension can have only one attribute with V_UPDATE_B_CODE_FLAG as 'Y'. This should only be specified for the CODE attribute for that dimension.
Example:
N_DIMENSION_ID | 4
V_ATTRIBUTE_NAME | 'PRODUCT_CODE'
V_UPDATE_B_CODE_FLAG | 'Y'
V_STG_TABLE_NAME | 'stg_product_master'
V_STG_COLUMN_NAME | 'v_prod_code'
NOTE |
The values in V_STG_TABLE_NAME and V_STG_COLUMN_NAME are not used by the fn_updateDimensionCode procedure; however, these fields are NOT NULL and should be populated. |
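The example mapping above can be inserted as in this sketch, using the values shown in the example table:

```sql
-- Sketch: register the PRODUCT_CODE attribute as the source for the
-- DIM_<Dimension>_B code column (V_UPDATE_B_CODE_FLAG = 'Y').
INSERT INTO FSI_DIM_ATTRIBUTE_MAP
  (N_DIMENSION_ID, V_ATTRIBUTE_NAME, V_UPDATE_B_CODE_FLAG,
   V_STG_TABLE_NAME, V_STG_COLUMN_NAME)
VALUES (4, 'PRODUCT_CODE', 'Y', 'stg_product_master', 'v_prod_code');

COMMIT;
```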
3. Load STG_<DIMENSION>_ATTR_INTF table with data for the new ATTRIBUTE created.
NOTE |
The attribute values must first be loaded using the stage dimension loader procedure, fn_drmDataLoader, before running this procedure. This procedure will pull values from the DIM_<DIMENSION>_ATTR table. If these rows do not exist for these members prior to running this procedure, the DIM_<DIMENSION>_B.<DIMENSION>_CODE field will not be updated. |
4. Execute the fn_updateDimensionCode function. The function updates the code column with values from the DIM_<DIMENSION>_ATTR table.
You can execute this procedure either from SQL*Plus or from within a PL/SQL block or from the Batch Maintenance window within OFSAAI framework.
To run the procedure from SQL*Plus, log in to SQL*Plus as the Atomic Schema Owner. The function requires three parameters: Batch Run Identifier, As-of Date, and Dimension Identifier. The syntax for calling the procedure is:
function fn_updateDimensionCode (batch_run_id varchar2,
as_of_date varchar2,
pDimensionId varchar2)
where
· BATCH_RUN_ID is any string to identify the executed batch.
· AS_OF_DATE in the format YYYYMMDD.
· pDIMENSIONID dimension id
For Example
Declare
num number;
Begin
num := fn_updateDimensionCode ('INFODOM_20100405','20100405',1 );
End;
You need to populate a row in FSI_DIM_LOADER_SETUP_DETAILS.
For example, for FINANCIAL ELEM CODE, the following syntax inserts a row into FSI_DIM_LOADER_SETUP_DETAILS:
INSERT INTO FSI_DIM_LOADER_SETUP_DETAILS (N_DIMENSION_ID) VALUES (0);
COMMIT;
To execute the procedure from OFSAAI Batch Maintenance, create a new Batch with the Task as TRANSFORM DATA and specify the following parameters for the task:
· Datastore Type: Select appropriate datastore from list
· Datastore Name: Select appropriate name from the list
· IP address: Select the IP address from the list
· Rule Name: Update_Dimension_Code
· Parameter List: Dimension ID
This procedure performs the following functions:
· The procedure queries the FSI_DIM_LOADER_SETUP_DETAILS table to get the names of the staging table used by the Dimension Loader program.
· The MIS Date option only applies to the Master Table approach (fn_STGDimDataLoader) dimension loader. It is not applicable to dimension data loaded using the standard Dimension Load procedure (fn_drmDataLoader).
Executing the Truncate Stage Tables Procedure
You can execute this procedure either from SQL*Plus or from within a PL/SQL block or from the Batch Maintenance window within OFSAAI framework.
To run the procedure from SQL*Plus, log in to SQL*Plus as the Schema Owner. The function requires four parameters: Batch Run Identifier, As-of Date, Dimension Identifier, and MIS-Date Required Flag. The syntax for calling the procedure is:
function fn_truncateStageTable(batch_run_id varchar2,
as_of_date varchar2,
pDimensionId varchar2,
pMisDateReqFlag char default 'Y')
where
· BATCH_RUN_ID is any string to identify the executed batch.
· AS_OF_DATE in the format YYYYMMDD.
· pDIMENSIONID dimension id.
· pMisDateReqFlag is used to identify whether the data needs to be deleted for a given MIS Date. The default value is 'Y'.
For Example
Declare
num number;
Begin
num := fn_truncateStageTable ('INFODOM_20100405','20100405' ,1,'Y' );
End;
To execute the procedure from OFSAAI Batch Maintenance, create a new Batch with the Task as TRANSFORM DATA and specify the following parameters for the task:
· Datastore Type: Select appropriate datastore from list
· Datastore Name: Select appropriate name from the list
· IP address: Select the IP address from the list
· Rule Name: fn_truncateStageTable
· Parameter List: Dimension ID, MIS-Date required Flag
Currently, the dimension loader program works only for key dimensions.
The Simple Dimension Loader provides the ability to load data from stage tables into simple dimension tables.
For example, the user can load data into FSI_ACCOUNT_OFFICER_CD and FSI_ACCOUNT_OFFICER_MLS using the Simple Dimension Loader program.
Simple dimensions that are writable and editable can use this loading approach. These can be identified by querying for rev_dimensions_b.write_flag = 'Y', rev_dimensions_b.dimension_editable_flag = 'Y', and rev_dimensions_b.simple_dimension_flag = 'Y'.
Topics:
· Creating Simple Dimension Stage Table
· Configuration of Setup Tables
· Executing the Simple Dimension Load Procedure
You can create stage tables for the required simple dimensions by using the following template:
STG_<DIM>_MASTER
COLUMN_NAME | DATA TYPE | PRIMARY KEY | NULLABLE
v_<DIM>_display_code | Varchar2(10) | Y | N
d_Mis_date | Date | Y | N
v_Language | Varchar2(10) | Y | N
v_<DIM>_NAME | Varchar2(40) | | N
v_Description | Varchar2(255) | | N
v_Created_by | Varchar2(30) | | Y
v_Modified_by | Varchar2(30) | | Y
Here is a sample structure:
STG_ACCOUNT_OFFICER_MASTER
COLUMN_NAME | DATA TYPE | PRIMARY KEY | NULLABLE
v_acct_officer_display_code | Varchar2(10) | Y | N
d_Mis_date | Date | Y | N
v_Language | Varchar2(10) | Y | N
v_Name | Varchar2(40) | | N
v_Description | Varchar2(255) | | N
v_Created_by | Varchar2(30) | | Y
v_Modified_by | Varchar2(30) | | Y
Here are some examples:
· Example For FSI CD/MLS tables:
CREATE TABLE <XXXXX>_FSI_<DIM>_CD -- ACME_FSI_ACCT_STATUS_CD
(<DIM>_CD NUMBER(5) -- ACCT_STATUS_CD
,LEAF_ONLY_FLAG VARCHAR2(1)
,ENABLED_FLAG VARCHAR2(1)
,DEFINITION_LANGUAGE VARCHAR2(10)
,CREATED_BY VARCHAR2(30)
,CREATION_DATE DATE
,LAST_MODIFIED_BY VARCHAR2(30)
,LAST_MODIFIED_DATE DATE
,<DIM>_DISPLAY_CD VARCHAR2(10)
);
· Example for FSI_<DIM>_MLS table:
CREATE TABLE <XXXXX>_FSI_<DIM>_MLS -- ACME_FSI_ACCT_STATUS_MLS
(<DIM>_CD NUMBER(5) -- ACCT_STATUS_CD
,LANGUAGE VARCHAR2(10)
,<DIM> VARCHAR2(40) -- ACCT_STATUS
,DESCRIPTION VARCHAR2(255)
,CREATED_BY VARCHAR2(30)
,CREATION_DATE DATE
,LAST_MODIFIED_BY VARCHAR2(30)
,LAST_MODIFIED_DATE DATE
);
NOTE |
FSI_<DIM>_CD and FSI_<DIM>_MLS should follow the same standards as mentioned above; otherwise, the Loader will not work as expected. |
The target FSI_<DIM>_CD/MLS table information is held in the REV_DIMENSIONS_B table as follows:
· dimension_id: Holds the id of the simple dimension that needs to be loaded.
· member_b_table_name: Holds the name of the FSI_<DIM>_CD target table. For example, FSI_ACCOUNT_OFFICER_CD
· member_tl_table_name: Holds the name of the FSI_<DIM>_MLS table name. For example, FSI_ACCOUNT_OFFICER_MLS
· member_col: Holds the column name for which the surrogate key is generated. For example, ACCOUNT_OFFICER_CD
· member_code_column: Holds the display code column of FSI_<DIM>_CD used for joining. For example, ACCOUNT_OFFICER_DISPLAY_CD
· key_dimension_flag: N
· dimension_editable_flag: Y
· write_flag: Y
· simple_dimension_flag: Y
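As a quick sanity check, the registration can be verified with a query along the following lines; the table name FSI_ACCOUNT_OFFICER_CD is the example used above, and should be replaced with your own dimension's CD table:

```sql
-- Verify that the simple dimension is registered correctly for the loader.
-- All three flags must be 'Y' and the target CD/MLS tables must be set.
SELECT dimension_id,
       member_b_table_name,        -- FSI_<DIM>_CD target table
       member_tl_table_name,       -- FSI_<DIM>_MLS target table
       member_col,                 -- surrogate key column
       member_code_column          -- display code join column
  FROM rev_dimensions_b
 WHERE member_b_table_name     = 'FSI_ACCOUNT_OFFICER_CD'
   AND write_flag              = 'Y'
   AND dimension_editable_flag = 'Y'
   AND simple_dimension_flag   = 'Y';
```

If this query returns no rows, the dimension is not configured for the simple dimension loading approach.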
The FSI_DIM_LOADER_SETUP_DETAILS stores the STG_<DIM>_MASTER table details as follows:
FSI_DIM_LOADER_SETUP_DETAILS column | Value for STG_<DIM>_MASTER
N_DIMENSION_ID | <dimension_id>. For example, 617
V_INTF_B_TABLE_NAME | Stage table name. For example, STG_ACCOUNT_OFFICER_MASTER
V_GEN_SKEY_FLAG | Default is 'Y', which generates a surrogate key. When 'N', the stage display code column is used as the surrogate key and must be numeric. For example, FSI_ACCOUNT_OFFICER_CD.ACCOUNT_OFFICER_DISPLAY_CD should be numeric.
V_STG_MEMBER_COLUMN | Stores the stage display code column. For example, STG_ACCOUNT_OFFICER_MASTER.V_ACCT_OFFICER_DISPLAY_CODE
V_STG_MEMBER_NAME_COL | Stores the stage name column. For example, STG_ACCOUNT_OFFICER_MASTER.V_NAME
V_STG_MEMBER_DESC_COL | Stores the stage description column. For example, STG_ACCOUNT_OFFICER_MASTER.V_DESCRIPTION
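Putting the fields above together, a setup row for the example dimension might be inserted as sketched below. The dimension ID 617 and the column names are taken from the examples above and should be replaced with values from your environment:

```sql
-- Register the stage table for the simple dimension loader (illustrative values).
INSERT INTO fsi_dim_loader_setup_details
  (n_dimension_id,
   v_intf_b_table_name,
   v_gen_skey_flag,
   v_stg_member_column,
   v_stg_member_name_col,
   v_stg_member_desc_col)
VALUES
  (617,                            -- dimension_id from REV_DIMENSIONS_B
   'STG_ACCOUNT_OFFICER_MASTER',   -- stage table to load from
   'Y',                            -- generate surrogate keys
   'V_ACCT_OFFICER_DISPLAY_CODE',  -- stage display code column
   'V_NAME',                       -- stage name column
   'V_DESCRIPTION');               -- stage description column
COMMIT;
```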
There are two ways to execute the simple dimension load procedure:
· Running Procedure Using SQL*Plus
To run the procedure from SQL*Plus, log in to SQL*Plus as the Schema Owner. The function signature is:
function fn_simpledimloader(batch_run_id VARCHAR2, as_of_date VARCHAR2, pdimensionid VARCHAR2,
pMisDateReqFlag CHAR DEFAULT 'Y', psynchflag CHAR DEFAULT 'N')
SQL> declare
result number;
begin
result := fn_simpledimloader('SimpleDIM_BATCH1', '20121212', '730', 'N', 'Y');
end;
/
§ BATCH_RUN_ID is any string to identify the executed batch.
§ AS_OF_DATE is in the format YYYYMMDD.
§ pDIMENSIONID is the dimension ID.
§ pSynchFlag identifies whether a complete synchronization of data between the staging and dimension tables is required. The default value is 'N'.
§ pMisDateReqFlag: A filter is placed on the input stage table to select only the records that fall on the given as_of_date. The default value is 'Y'. If the complete stage table data needs to be considered, pass 'N'.
NOTE |
With Synch flag 'N', data is appended from the Stage to the Dimension tables: you can provide a combination of new dimension records and records that have changed; new records are inserted and changed records are updated in the Dimension table. With Synch flag 'Y', the Stage table data completely replaces the Dimension table data. |
· Simple Dimension Load Procedure Using OFSAAI Batch Maintenance.
To execute Simple Dimension Loader from OFSAAI Batch Maintenance, a seeded Batch is provided.
The batch parameters are:
§ Datastore Type: Select the appropriate datastore from list
§ Datastore Name: Select the appropriate name from the list
§ IP address: Select the IP address from the list
§ Rule Name: fn_simpledimloader
§ Parameter: the dimension ID for which the DT needs to be executed, pMisDateReqFlag, and psynchflag.
For example, '730,N,Y'
NOTE |
For FSI_ACCOUNT_OFFICER_CD, query: SELECT dimension_id FROM rev_dimensions_b WHERE member_b_table_name = 'FSI_ACCOUNT_OFFICER_CD' and pass the resulting dimension_id. |
§ Psynchflag: The default is 'N', in which case data is appended from the Stage to the Dimension tables: new records are inserted and changed records are updated in the Dimension table. With Synch flag 'Y', the Stage table data completely replaces the Dimension table data.
The following error messages can be viewed in the View Log from the UI, or in the FSI_MESSAGE_LOG table from the back end by filtering on the given batch ID. On successful completion of each task, messages are written to the log table. In the event of failure, the following errors may occur during execution:
· Exception 1: When REV_DIMENSIONS_B does not have proper setup details.
Meaning: For a Simple Dimension, write_flag, simple_dimension_flag, and dimension_editable_flag should be 'Y' in REV_DIMENSIONS_B for the given dimension ID.
· Exception 2: When the FSI_DIM_LOADER_SETUP_DETAILS table does not have proper setup details.
Meaning: Setup details are not found for the dimension ID.
· Exception 3: When the display code column is non-numeric but is used as a surrogate key.
Meaning: The display code column must be numeric when V_GEN_SKEY_FLAG is 'N'.
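To review these messages from the back end, a query such as the following can be used; the batch identifier is illustrative, and the exact FSI_MESSAGE_LOG column names may vary by release:

```sql
-- Inspect loader messages for a given batch run (column name assumed).
SELECT *
  FROM fsi_message_log
 WHERE batch_run_id = 'SIMPLE_DIM_BATCH1';
```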
Historical data for currency exchange rates, interest rates, and economic indicators can be loaded into the OFSAA historical rates tables through the common staging area. The T2T component within the OFSAAI framework is used to move data from the Stage historical rate tables into the relevant OFSAA processing tables. After loading the rates, users can view the historical rate data through the OFSAA Rate Management UIs.
Topics:
· Tables Related to Historical Rates
· Executing the Historical Rates Data Loader T2T
Historical rates are stored in the following staging area tables:
· STG_EXCHANGE_RATE_HIST: This staging table contains the historical exchange rates for Currencies used in the system.
· STG_IRC_RATE_HIST: This staging table contains the historical interest rates for the Interest Rate codes used in the system.
· STG_IRC_TS_PARAM_HIST: This staging table contains the historical interest rate term structure parameters, used by the Monte Carlo engine.
· STG_ECO_IND_HIST_RATES: This staging table stores the historical values for the Economic Indicators used in the system.
Historical rates in OFSAA Rate Management are stored in the following processing tables:
· FSI_EXCHANGE_RATE_HIST: This table contains the historical exchange rates for the Currencies used in the system.
· FSI_IRC_RATE_HIST: This table contains the historical interest rates for the Interest Rate codes used in the system.
· FSI_IRC_TS_PARAM_HIST: This table stores the historical interest rate term structure parameters, used by the Monte Carlo engine.
· FSI_ECO_IND_HIST_RATES: This table contains the historical values for the Economic Indicators used in the system.
Data for historical rates commonly comes from external systems. Such data must be converted into the format of the staging area tables. This data can be loaded into the staging area using the F2T component of the OFSAAI framework. Users can view the loaded data by querying the staging tables and various log files associated with the F2T component.
You can launch the Historical Rates Data Loader from the following:
· Interest Rates Summary page
· PL/SQL block
· Operations Batch
To launch from the Interest Rates Summary page:
1. Click the Data Loader icon on the Interest Rates Summary grid toolbar. A warning message appears: Upload all available Interest Rates and Parameters?
2. Click Yes. The process loads all valid data included in the staging table.
There are four pre-defined T2T mappings configured and seeded in OFSAA for the purpose of loading historical rates. These can be executed from the Batch Maintenance within OFSAAI.
To execute the Historical Exchange Rates Data Loader, create a new Batch and specify the following parameters:
· Datastore Type: Select appropriate datastore from the drop down list
· Datastore Name: Select appropriate name from the list. Generally it is the Infodom name.
· IP address: Select the IP address from the list
· Rule Name: T2T_EXCHANGE_RATE_HIST
· Parameter List: No parameter is passed; the As of Date is selected when the process is executed.
To execute the Historical Interest Rates Data Loader, create a new Batch and specify the following parameters:
· Datastore Type: Select appropriate datastore from the drop down list
· Datastore Name: Select appropriate name from the drop down list
· IP address: Select the IP address from the list
· Rule Name: T2T_IRC_RATE_HIST
· Parameter List: No parameter is passed; the As of Date is selected when the process is executed.
To execute the Historical Term Structure Parameter Data Loader, create a new Batch and specify the following parameters:
· Datastore Type: Select appropriate datastore from list
· Datastore Name: Select appropriate name from the list
· IP address: Select the IP address from the list
· Rule Name: T2T_IRC_TS_PARAM_HIST
· Parameter List: No parameter is passed; the As of Date is selected when the process is executed.
To execute the Historical Economic Indicator Data Loader, create a new Batch and specify the following parameters:
· Datastore Type: Select appropriate datastore from the drop-down list
· Datastore Name: Select appropriate name from the drop-down list
· IP address: Select the IP address from the list
· Rule Name: T2T_ECO_IND_HIST_RATES
· Parameter List: No parameter is passed; the As of Date is selected when the process is executed.
After executing any of the above batch processes, check the T2T component logs and batch messages to confirm the status of the data load.
The T2T component can fail under the following scenario:
· Unique constraint error: Target table may already contain data with the primary keys that the user is trying to load from the staging area.
The T2T component performs only Insert operations. If updates are needed, previously loaded records must be deleted before loading the current records. The function fn_deleteFusionTables deletes rows in the target table that have matching rows in the Stage table. This function requires entries in the FSI_DELETE_TABLES_SETUP table; configure this table for all columns that are part of the join between the Stage table and its equivalent target table.
Users can create new or use existing Data Transformations for deleting a Table. The parameters for the Data Transformation are:
· 'Table to be deleted'
· Batch run ID
· As of Date
Column Name | Column Description | Sample Value
STAGE_TABLE_NAME | Stores the source table name for forming the join statement | STG_LOAN_CONTRACTS
STAGE_COLUMN_NAME | Stores the source column name for forming the join statement | V_ACCOUNT_NUMBER
FUSION_TABLE_NAME | Stores the target table name for forming the join statement | FSI_D_LOAN_CONTRACTS
FUSION_COLUMN_NAME | Stores the target column name for forming the join statement | ACCOUNT_NUMBER
NOTE |
Insert rows in FSI_DELETE_TABLES_SETUP for all columns that can be used to join the stage table with the equivalent target table. If the join requires other dimension or code tables, create a view joining the source table with the respective code tables, and reference this view in the setup table above. |
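As an illustrative sketch using the sample values from the table above (the parameter order of fn_deleteFusionTables is an assumption; confirm it against the Data Transformation definition in your environment):

```sql
-- Register the join column between the stage and target tables.
INSERT INTO fsi_delete_tables_setup
  (stage_table_name, stage_column_name, fusion_table_name, fusion_column_name)
VALUES ('STG_LOAN_CONTRACTS', 'V_ACCOUNT_NUMBER',
        'FSI_D_LOAN_CONTRACTS', 'ACCOUNT_NUMBER');
COMMIT;

-- Delete target rows that have matching staged rows before re-loading.
-- Parameter order is illustrative: table to be deleted, batch run id, as of date.
declare
  result number;
begin
  result := fn_deleteFusionTables('FSI_D_LOAN_CONTRACTS',
                                  'BATCH_DEL_1',
                                  '20100419');
end;
/
```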
The Forecast Rate Data Loader procedure loads forecast rates into the OFSAA ALM Forecast rates processing area tables from staging tables. In ALM, Forecast Rate assumptions are defined within the Forecast Rate Assumptions UI. The Forecast Rates Data Loader supports the Direct Input and Structured Change methods only for exchange rates, interest rates and economic indicators. Data for all other forecast rate methods should be input through the User Interface. After executing the forecast rates data loader, users can view the information in the ALM - Forecast Rates Assumptions UI.
Topics:
· Forecast Rate Data Loader Tables
· Populating Forecast Rate Stage Tables
· Forecast Rate Loader Program
· Executing the Forecast Rate Data Load Procedure
Forecast rate assumption data is stored in the following staging area tables:
· STG_FCAST_XRATES: This table holds the forecasted exchange rate data for the current ALM modeling period.
NOTE |
For the Direct Input method, the N_FROM_BUCKET and N_TO_BUCKET columns contain the same bucket number for a record in the STG_FCAST_XRATES table. |
· STG_FCAST_IRCS: This table holds the forecasted interest rate data for the current ALM modeling period.
· STG_FCAST_EI: This table holds the forecasted economic indicator data for the current ALM modeling period.
Rates present in the above staging tables are copied into the following ALM metadata tables.
· FSI_FCAST_IRC_DIRECT_INPUT, FSI_FCAST_IRC_STRCT_CHG_VAL.
· FSI_FCAST_XRATE_DIRECT_INPUT, FSI_FCAST_XRATE_STRCT_CHG.
· FSI_FCAST_EI_DIRECT_INPUT, FSI_FCAST_EI_STRCT_CHG_VAL
· STG_FCAST_EI
v_forecast_name | The name of the Forecast Rate assumption rule as defined. The Forecast name indicates the Short Description for the Forecast Rate Sys ID as stored in the FSI_M_OBJECT_DEFINITION_TL table. If the forecast sys ID is provided, populate this field with -1.
v_scenario_name | The Scenario Name for which the Forecast Rate data is applicable.
v_economic_indicator_name | The Economic Indicator Name for which the Forecast data is applicable.
n_from_bucket | The Start Bucket Number for the given scenario.
fic_mis_date | The current period As of Date applicable to the data being loaded.
n_fcast_rates_sys_id | The System Identifier of the forecast rate assumption rule to which this data will be loaded. If forecast name and folder are provided, populate this field with -1.
v_folder_name | Name of the folder that holds the Forecast Rate assumption rule definition. If the forecast sys ID is provided, populate this field with -1.
v_ei_method_cd | The forecast method for economic indicator values: Direct Input or Structured Change. Use DI for Direct Input or SC for Structured Change.
n_economic_indicator_value | The value of the Economic Indicator for the given scenario and time bucket.
n_to_bucket | The End Bucket Number for the assumption.
· STG_FCAST_XRATES
v_forecast_name | The name of the Forecast Rate assumption rule as defined. The Forecast name indicates the Short Description for the Forecast Rate Sys ID as stored in the FSI_M_OBJECT_DEFINITION_TL table. If the forecast sys ID is provided, populate this field with -1.
v_scenario_name | The Scenario Name for which the Forecast Rate data is applicable.
v_iso_currency_cd | The From ISO currency code (such as USD, EUR, JPY, GBP) of the forecast rate.
n_from_bucket | The Start Bucket Number for the given scenario.
fic_mis_date | The As of Date for which the data being loaded is applicable.
n_fcast_rates_sys_id | The System Identifier of the assumption rule to which this data will be loaded. If forecast name and folder are provided, populate this field with -1.
v_folder_name | Name of the folder that holds the Forecast Rate assumption rule definition. If the forecast sys ID is provided, populate this field with -1.
n_to_bucket | The End Bucket Number for the given scenario.
v_xrate_method_cd | The forecast method for exchange rate values: Direct Input or Structured Change. Use DI for Direct Input or SC for Structured Change.
n_exchange_rate | The exchange rate for the currency and the given bucket range. The value in N_EXCHANGE_RATE should be the rate used to convert 1 unit of the V_TO_CURRENCY_CD currency to the currency stored in V_FROM_CURRENCY_CD. For example, if V_TO_CURRENCY_CD = 'USD', enter the exchange rate to convert 1 unit USD to another currency.
· STG_FCAST_IRCS
v_forecast_name | The name of the Forecast Rate assumption rule as defined. The Forecast name indicates the Short Description for the Forecast Rate Sys ID as stored in the FSI_M_OBJECT_DEFINITION_TL table. If the forecast sys ID is provided, populate this field with -1.
v_scenario_name | The Scenario Name for which the Forecast Rate data is applicable.
v_irc_name | The name of the Interest Rate Code.
n_interest_rate_term | The Interest Rate Term applicable for the row of data.
v_interest_rate_term_mult | The Interest Rate Term Multiplier for the row of data being loaded.
n_from_bucket | The Start Bucket Number for the given scenario.
fic_mis_date | The As of Date for which the data being loaded is applicable.
n_fcast_rates_sys_id | The System Identifier of the interest rate code forecast rate definition. If the forecast name and folder are provided, populate this field with -1.
v_folder_name | Name of the folder that holds the Forecast Rate assumption rule definition. If the forecast sys ID is provided, populate this field with -1.
n_interest_rate | The interest rate change for the specified term and the given scenario.
n_to_bucket | The End Bucket Number for the given scenario.
v_irc_method_cd | The forecast method for interest rate code values: Direct Input or Structured Change. Use DI for Direct Input or SC for Structured Change.
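Putting the column descriptions together, a single staged row for a Direct Input interest rate forecast might look like the following sketch; the forecast name, folder, scenario, and IRC name are illustrative values:

```sql
-- One staged Direct Input interest rate point (illustrative values).
INSERT INTO stg_fcast_ircs
  (v_forecast_name, v_scenario_name, v_irc_name,
   n_interest_rate_term, v_interest_rate_term_mult,
   n_from_bucket, n_to_bucket, fic_mis_date,
   n_fcast_rates_sys_id, v_folder_name,
   n_interest_rate, v_irc_method_cd)
VALUES
  ('LOADER_TEST',        -- forecast rule name (or -1 when sys id is given)
   'Scenario 1',
   'US Treasury Curve',
   3, 'M',               -- 3-month term point
   1, 1,                 -- Direct Input: from/to bucket are the same
   TO_DATE('20100419', 'YYYYMMDD'),
   -1,                   -- sys id not used when name and folder are given
   'RTSEG',
   5.25, 'DI');
COMMIT;
```

Rows for STG_FCAST_XRATES and STG_FCAST_EI follow the same pattern using their respective columns.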
The Forecast Rate Loader program updates the existing forecast rates to new forecast rates in the ALM Forecast Rate tables for Direct Input and Structured Change forecasting methods.
NOTE |
The Forecast Rate Loader can only update existing forecast rate assumption rule definitions. The initial Forecast Rate assumption rule definition and initial methods must be created through the Forecast Rates user interface within Oracle ALM. |
The Forecast Rates Data Loader performs the following functions:
The user can load forecast rate assumptions for either a specific Forecast Rate assumption rule or multiple forecast rate assumption rules.
1. To Load a specific Forecast Rate assumption rule, the user should provide either the Forecast Rate name and a folder name as defined in Oracle ALM or the Forecast Rate System Identifier.
2. When the load parameter is to load a specific Forecast Rate assumption rule for a given As of Date, the loader checks for Forecast Name/Forecast Rate System Identifier's presence in the Object Definition Registration Table. If it's present, then the combination of Forecast Name/Forecast Rate system Identifier and As of Date is checked in each of the Forecast Rate Staging Tables one by one.
3. The data loading is done from each of the staging tables for the Direct Input and Structured change methods where the Forecast Name and As of Date combination is present.
4. When the load parameter is the Load All option ('Y'), each distinct Forecast Name from the three staging tables is verified for its presence in the Object Definition Registration table, and the loading is done for each of the Forecast Names.
5. Messages for each of the steps are written into the FSI_MESSAGE_LOG table.
After the Forecast rate loader processing is completed, the user should query the ALM Forecast Rate tables to look for the new forecast rates. Also, the user can verify the data just loaded using the Forecast Rate Assumption UI.
You can launch the Forecast Data Loader from the following:
· Forecast Rates Summary page
· PL/SQL block
· Operations Batch
To launch from the Forecast Rates Summary page:
1. Click the Data Loader icon on the Forecast Rates Summary page. A warning message appears: Upload all available Forecast Rates?
2. Click Yes. The process loads all valid data included in the staging table.
To run the Forecast Rate Loader from SQL*Plus, log in to SQL*Plus as the Schema Owner. The procedure takes the following parameters:
1. Batch Execution Identifier (batch_run_id)
2. As of Date (mis_date)
3. Forecast Rate System Identifier (pObject_Definition_ID)
4. Option for Loading All or any Specific Forecast Rate assumption rule. If the Load All option is 'N' then either the Forecast Rate Assumption rule Name Parameter with the Folder Name or Forecast Rate Sys ID should be provided else it raises an error (pLoad_all)
5. Forecast Rate assumption rule Name (pForecast_name)
6. Folder name (pFolder_Name)
The syntax for calling the procedure is:
fn_stg_forecast_rate_loader(batch_run_id varchar2,
mis_date varchar2,
pObject_Definition_ID number,
pLoad_all char default 'N',
pForecast_name varchar2,
pFolder_Name varchar2,
p_user_id varchar2,
p_appid varchar2
)
where
§ BATCH_RUN_ID is any string to identify the executed batch.
§ mis_date is in the format YYYYMMDD.
§ pObject_Definition_ID is the Forecast Rate System Identifier in ALM.
§ pLoad_all indicates the option for loading all forecast rates.
§ pForecast_Name can be null (that is, '') when pLoad_all is 'Y'; otherwise provide a valid Forecast Rate assumption rule name.
§ pFolder_Name indicates the name of the folder where the forecast rate assumption rule was defined.
§ p_user_id indicates the user mapped with the application in REV_APP_USER_PREFERENCES. This is used to fetch the as-of date from REV_APP_USER_PREFERENCES. This is a mandatory parameter.
§ p_appid is the application name. This is a mandatory parameter.
For example, if the user wants to load all forecast rate assumption rules defined within a folder, say RTSEG:
declare
num number;
begin
num := fn_stg_forecast_rate_loader('INFODOM_FORECAST_RATE_LOADER',
'20100419',
null,
'Y',
null,
'RTSEG',
'ALMUSER1',
'ALM');
end;
/
The loading is done for all forecast rates under folder 'RTSEG' for as of Date 20100419.
Sample Data for STG_FCAST_IRCS to Load all forecast rates defined within a folder
Sample Data for STG_FCAST_XRATES to Load all forecast rates defined within a folder
Sample Data for STG_FCAST_EI to Load all forecast rates defined within a folder
8. If the user wants to Load a specific forecast rate assumption rule, they should provide the unique Forecast Rate System Identifier.
declare
num number;
begin
num := fn_stg_forecast_rate_loader('INFODOM_FORECAST_RATE_LOADER',
'20100419',
10005,
'N',
null,
null,
'ALMUSER1',
'ALM');
end;
/
Sample Data for STG_FCAST_IRCS to load data for specific Forecast Rate providing the Forecast Rate System Identifier
Sample Data for STG_FCAST_XRATES to load data for specific Forecast Rate providing the Forecast Rate System Identifier
Sample Data for STG_FCAST_EI to load data for specific Forecast Rate providing the Forecast Rate System Identifier
NOTE |
To Load data for specific Forecast Rate providing the Forecast Rate System Identifier, the value of Forecast rate Name and Folder Name in the staging tables should be -1. |
9. If the user wants to Load a specific forecast rate assumption rule within the Folder providing the name of Forecast Rate as defined in ALM.
declare
num number;
begin
num := fn_stg_forecast_rate_loader('INFODOM_FORECAST_RATE_LOADER',
'20100419',
null,
'N',
'LOADER_TEST',
'RTSEG',
'ALMUSER1',
'ALM');
end;
/
Sample Data for STG_FCAST_XRATES to Load a specific forecast rate within the Folder providing the name of Forecast Rate as defined in ALM
Sample Data for STG_FCAST_EI to Load a specific forecast rate within the Folder providing the name of Forecast Rate as defined in ALM
If the NUM value is 1, the load completed successfully; check FSI_MESSAGE_LOG for more details.
To execute Forecast Rate Loader from OFSAAI Batch Maintenance, a seeded Batch is provided.
<INFODOM>_FORECAST_RATE_LOADER is the Batch ID and Forecast Rate Loader is the description of the batch.
1. The batch has a single task. Edit the task.
2. If the user intends to load data for all Forecast Rates under a Folder, provide the following batch parameters:
§ Datastore Type: Select the appropriate datastore from list
§ Datastore Name: Select the appropriate name from the list
§ IP address: Select the IP address from the list
§ Rule Name: Forecast_Rate_loader
Sample Data for STG_FCAST_IRCS to Load all forecast rates defined within a folder
Sample Data for STG_FCAST_XRATES to Load all forecast rates defined within a folder
Sample Data for STG_FCAST_EI to Load all forecast rates defined within a folder
3. If the user wants to load data for a specific Forecast Rate assumption rule, provide the Forecast Rate System Identifier, then define the batch parameters.
§ Datastore Type: Select the appropriate datastore from list
§ Datastore Name: Select the appropriate name from the list
§ IP address: Select the IP address from the list
§ Rule Name: Forecast_Rate_loader
Sample Data for STG_FCAST_IRCS to load data for a specific Forecast Rate assumption rule, with the Forecast Rate System Identifier already provided
Sample Data for STG_FCAST_XRATES to load data for a specific Forecast Rate assumption rule with the Forecast Rate System Identifier already provided
Sample Data for STG_FCAST_EI to load data for a specific Forecast Rate assumption rule with the Forecast Rate System Identifier already provided
NOTE |
To Load data for specific Forecast Rate assumption rules, provide the Forecast Rate System Identifier and the value of Forecast rate Name and Folder Name in the staging tables should be -1. |
4. If the user wants to load data for a specific Forecast Rate assumption rule by providing the Forecast Rate Name as defined in ALM, define the batch parameters as follows.
§ Datastore Type: Select an appropriate datastore from list
§ Datastore Name: Select an appropriate name from the list
§ IP address: Select the IP address from the list
§ Rule Name: Forecast_Rate_loader
Sample Data for STG_FCAST_IRCS to Load a specific forecast rate assumption rule, within the Folder, provide the name of Forecast Rate rule as defined in ALM
Sample Data forSTG_FCAST_XRATES to Load a specific forecast rate assumption rule, within the Folder, provide the name of Forecast Rate rule as defined in ALM
Sample Data for STG_FCAST_EI to Load a specific forecast rate assumption rule, within the Folder, provide the name of Forecast Rate rule as defined in ALM
5. Save the Batch.
6. Execute the Batch for the required As of Date.
The Forecast Rate Data Loader can have the following exceptions:
· Exception 1: Error. While fetching the Object Definition ID from Object Registration Table
This exception occurs if the forecast rate assumption rule name is not present in the FSI_M_OBJECT_DEFINITION_TL table SHORT_DESC column.
· Exception 2: Error. More than one Forecast Sys ID is present.
This exception occurs when there is more than one Forecast Sys ID present for the given forecast rate assumption rule name.
· Exception 3: Error. Forecast Rate assumption rule Name and As of Date combination do not exist in the Staging Table.
This exception occurs when the Forecast Rate assumption rule Name and as of date combination do not exist in the Staging Table.
Prepayment Model assumptions are defined within the Prepayment Model rule User Interfaces in OFSAA ALM, HM and FTP applications. You can input prepayment rates directly through the UI, or import the rates from Excel into the UI. You can also use the Prepayment Rate Data Loader procedure to populate Prepayment Model rates into the OFSAA metadata table from the corresponding staging table. This data loader program can be used to update the Prepayment Model rates on a periodic basis. After loading the prepayment rates, you can view the latest data in the Prepayment Model assumptions UI.
Topics:
· Prepayment Rate Loader Tables
· Executing the Prepayment Model Data Loader
The Loader uses the following staging and target tables:
· STG_PPMT_MODEL_HYPERCUBE: This staging table contains prepayment rates for the selected prepayment dimensions.
· FSI_PPMT_MODEL_HYPERCUBE: The loader copies rates into this target table for the associated Prepayment Dimension combinations present in the FSI_M_PPMT_MODEL table.
The Prepayment Rate Data Loader program populates the target OFSAA Prepayment Model table with the values from the staging table. The procedure will load prepayment rate data for a specified Prepayment Model rule or all Prepayment models that are present in the staging table. The program assumes that the Prepayment Model assumptions have already been defined using OFSAA Prepayment Model rule UIs before loading Prepayment Model rates.
The program performs the following functions:
1. The Data Loader accepts the AS_OF_DATE as a parameter and loads all prepayment rates from the staging table into the OFSAA metadata table for that as-of date.
2. The program performs certain checks to determine if:
The Prepayment Model dimensions present in staging are the same as those present in the OFSAA Prepayment Model metadata tables.
The bucket members of each of the dimensions present in staging are same as those present in the metadata tables.
The number of records present in the STG_PPMT_MODEL_HYPERCUBE table for a Prepayment Model is less than or equal to the maximum number of records that are allowed, which is determined by multiplying the number of buckets per dimension of the Prepayment Model.
For example, assume the FSI_M_PPMT_MODEL table contains:
PPMT_MDL_SYS_ID | DIMENSION_ID | NUMBER_OF_BUCKETS
20100405 | 8 | 2
20100405 | 4 | 3
Then the maximum number of records = number of buckets of dimension 8 * number of buckets of dimension 4 = 2 * 3 = 6 records.
The Prepayment Rate Data Loader checks whether the number of records present in the STG_PPMT_MODEL_HYPERCUBE table for Prepayment Model 20100405 is less than or equal to 6.
3. If the above quality checks are satisfied, then the rates present in the Staging table are updated to the OFSAA prepayment model metadata table.
4. Any error messages are logged in the FSI_MESSAGE_LOG table and can be viewed in OFSAAI Log Viewer UI.
After the Prepayment Rate loader is completed, you should query the FSI_PPMT_MODEL_HYPERCUBE table to look for the new rates. Also, you can verify the data using the Prepayment Model Assumption UI.
Populate the data into STG_PPMT_MODEL_HYPERCUBE as follows:
· V_PPMT_MDL: The Name of the Prepayment Model as stored in FSI_M_OBJECT_DEFINITION_TL table. If Prepayment Model name is given, also provide the Folder name. If the Prepayment Model System ID is provided, then populate this field with -1.
· N_ORIG_TERM: Original term of the contract
· N_REPRICING_FREQ: The number of months between instrument repricing
· N_REM_TENOR: Remaining term of the contract (in Months)
· N_EXPIRED_TERM: Expired term of the contract (in Months)
· N_TERM_TO_REPRICE: Repricing term of the contract (in Months)
· N_COUPON_RATE: The current gross rate on the instrument
· N_MARKET_RATE: Forecast rate representing alternate funding
· N_RATE_DIFFERENCE: Spread between the current gross rate and the market rate
· N_RATE_RATIO: Ratio of the current gross rate to the market rate
· N_PPMT_RATE: User defined prepayment rate for the associated dimension value combination
· FIC_MIS_DATE: The As of Date for which the data being loaded is applicable
· V_FOLDER_NAME: Name of the Folder which holds the Prepayment Model. If the Prepayment Model System ID is provided, then populate this field with -1. If Folder name is provided, then provide Prepayment Model name as well.
· N_PPMT_MDL_SYS_ID: The System Identifier (Object Definition ID) of the Prepayment Model to which this data will be loaded. If Prepayment Model name and Folder are provided, then populate this field with -1.
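Based on the column list above, a staged prepayment rate row might look like the following sketch. The dimension values are illustrative, and only the dimensions used by the target Prepayment Model need meaningful values:

```sql
-- One staged prepayment rate row, loaded by system id (illustrative values).
INSERT INTO stg_ppmt_model_hypercube
  (v_ppmt_mdl, v_folder_name, n_ppmt_mdl_sys_id,
   n_orig_term, n_repricing_freq, n_rem_tenor,
   n_expired_term, n_term_to_reprice,
   n_coupon_rate, n_market_rate,
   n_rate_difference, n_rate_ratio,
   n_ppmt_rate, fic_mis_date)
VALUES
  ('-1', '-1', 20100405,   -- load by system id; name and folder set to -1
   360, 12, 300,           -- original term, repricing freq, remaining term
   60, 6,                  -- expired term, term to reprice
   6.0, 5.0,               -- coupon rate, market rate
   1.0, 1.2,               -- rate difference, rate ratio
   0.35,                   -- prepayment rate for this combination
   TO_DATE('20100419', 'YYYYMMDD'));
COMMIT;
```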
Column mapping from source to target
Source STG_PPMT_MODEL_HYPERCUBE to Target FSI_PPMT_MODEL_HYPERCUBE mapping:
N_ORIG_TERM -> ORIGINAL_TERM
N_REPRICING_FREQ ->REPRICING_FREQ
N_REM_TENOR -> REMAINING_TERM
N_EXPIRED_TERM -> EXPIRED_TERM
N_TERM_TO_REPRICE -> TERM_TO_REPRICE
N_COUPON_RATE -> COUPON_RATE
N_MARKET_RATE -> MARKET_RATE
N_RATE_DIFFERENCE -> RATE_DIFFERENCE
N_RATE_RATIO -> RATE_RATIO
N_PPMT_RATE -> PPMT_RATE
N_PPMT_MDL_SYS_ID -> PPMT_MDL_SYS_ID when N_PPMT_MDL_SYS_ID <> -1; otherwise, the loader performs a lookup in FSI_M_OBJECT_DEFINITION_TL based on the Name and Folder provided in the staging table.
Example
Based on the FSI_M_PPMT_MODEL table, for data in the staging table with Prepayment Model System ID 20100405:
PPMT_MDL_SYS_ID DIMENSION_ID NUMBER_OF_BUCKETS
20100405 8 2
20100405 4 3
The maximum number of records = (number of buckets of dimension 8) * (number of buckets of dimension 4).
That is, maximum number of records = 2 * 3
Therefore, maximum number of records = 6 records.
The Prepayment Rate Data Loader checks whether the number of records present in STG_PPMT_MODEL_HYPERCUBE table for Prepayment Model 20100405 is less than or equal to 6.
If the above quality checks are satisfied, then the rates present in the Staging table are updated to the OFSAA Prepayment Model metadata table.
Any error messages are logged in the FSI_MESSAGE_LOG table.
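The record-count check in this example can be reproduced manually. The query below multiplies the bucket counts per dimension using the EXP(SUM(LN(...))) idiom (Oracle has no product aggregate) and compares the result with the staging row count; it assumes the table and column names shown earlier in this section:

```sql
-- Compare staging row count against the product of bucket counts
-- defined in FSI_M_PPMT_MODEL for the sample model 20100405.
SELECT s.staged_rows,
       m.max_rows,
       CASE WHEN s.staged_rows <= m.max_rows
            THEN 'OK' ELSE 'TOO MANY' END AS check_result
FROM  (SELECT COUNT(*) AS staged_rows
       FROM   stg_ppmt_model_hypercube
       WHERE  n_ppmt_mdl_sys_id = 20100405) s,
      (SELECT ROUND(EXP(SUM(LN(number_of_buckets)))) AS max_rows
       FROM   fsi_m_ppmt_model
       WHERE  ppmt_mdl_sys_id = 20100405) m;
```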
You can launch the Data Loader from the following:
· Prepayment Models summary page
· PL/SQL block
· Operations Batch
· Prepayment Models summary page: To launch from the Prepayment Models summary page:
a. Click the Data Loader icon on the Prepayment Models summary grid toolbar. A warning message will appear: Upload all available Prepayment Rates?
b. Click Yes. The process will load all valid data included in the staging table.
· PL/SQL block: To execute the Loader within a PL/SQL block:
To run the function from SQL*Plus, log in to SQL*Plus as the Schema Owner. The loader requires two parameters:
§ Batch Execution Name
§ As Of Date
Syntax:
fn_PPMT_RATE_LOADER(batch_run_id IN VARCHAR2, as_of_date IN VARCHAR2)
Where:
§ BATCH_RUN_ID is any string to identify the executed batch.
§ As_of_Date is the execution date in the format YYYYMMDD.
For Example:
DECLARE
  num NUMBER;
BEGIN
  num := fn_PPMT_RATE_LOADER('INFODOM_20100405', '20100405');
END;
/
The loader is executed for the given as of date. If the return value (NUM) is 1, this indicates the load completed successfully. Check the FSI_MESSAGE_LOG for more details.
· Operations Batch: To run from Operations Batch framework:
You can create a new Batch with the Component = 'TRANSFORM DATA' and specify the following parameters for the task:
§ Datastore Type: Select appropriate datastore from list
§ Datastore Name: Select appropriate name from the list
§ IP address: Select the IP address from the list
§ Rule Name: ppmt_rate_loader
§ Parameter List: None
Viewing the results:
Any error messages are logged in the FSI_MESSAGE_LOG table. If you launch the Loader from the Prepayment Models Summary page or Operations Batch, you can view processing messages in OFSAAI by navigating to Operations and selecting View Log, where the Component Type = Data Transformation and the Batch Run ID = the ID for your run.
You can also spot check results of the load as follows:
· Query the FSI_PPMT_MODEL_HYPERCUBE table to confirm existence of the new rates.
· Use the Prepayment Model rule UI to select your rule and View your rates.
The Prepayment Model Rate Loader can have the following exceptions:
· Exception 1: Error while fetching the Object Definition ID from Object Definition Table.
This exception occurs if the prepayment model name is not present in the FSI_M_OBJECT_DEFINITION_TL table.
· Exception 2: Error. More than one prepayment model sys ID is present for the given definition.
This exception occurs when there is more than one Prepayment Model System ID present for the Prepayment Model name in staging.
· Exception 3: Error. Data is present in additional dimension ID column than those defined in FSI_M_PPMT_MODEL.
This exception occurs if rates are specified in staging for the dimensions that are not part of the Prepayment Model definition.
· Exception 4: The value in the Dimension ID column is not matching with the value present in the corresponding column in metadata table.
This exception occurs if rates are specified in staging for the dimension members that are not part of the Prepayment Model definition.
· Exception 5: The number of records for the staging table for a given Prepayment Model Name is more than those calculated by multiplying the number of buckets in FSI_M_PPMT_MODEL table for the given model name.
This exception occurs if there are excess records in staging compared to OFSAA metadata tables for the given Prepayment Model.
Data in staging instrument tables are moved into respective OFSAA processing instrument tables using OFSAAI T2T component. After loading the data, users can view the loaded data by querying the processing instrument tables.
Topics:
· Mapping To OFSAA Processing Tables
· Populating Accounts Dimension
· Executing T2T Data Movement Tasks
Following are examples of some of the various application staging instrument tables:
· STG_LOAN_CONTRACTS: holds contract information related to various loan products including mortgages.
· STG_TD_CONTRACTS: holds contract information related to term deposit products.
· STG_CASA: holds information related to Checking and Savings Accounts.
· STG_OD_ACCOUNTS: holds information related to over-draft accounts.
· STG_CARDS: holds information related to credit card accounts.
· STG_LEASES: holds contract information related to leasing products.
· STG_ANNUITY_CONTRACTS: holds contract information related to annuity contracts.
· STG_INVESTMENTS: holds information related to investment products like bond, equities, and so on.
· STG_MM_CONTRACTS: holds contract information related to short term investments in money market securities.
· STG_BORROWINGS: holds contract information related to various inter-bank borrowings.
· STG_FX_CONTRACTS: holds contract information related to FX products like FX Spot, FX Forward, and so on. Leg level details, if any, are stored in various leg-specific columns within the table.
· STG_SWAPS_CONTRACTS: holds contract information related to various types of swaps. Leg level details, if any, are stored in various leg-specific columns within the table.
· STG_OPTION_CONTRACTS: holds contract information related to various types of options. Leg level details, if any, are stored in various leg-specific columns within the table.
· STG_FUTURES: holds contract information related to interest rate forwards and all types of futures. Leg level details, if any, are stored in various leg-specific columns within the table.
· STG_LOAN_COMMITMENTS: contains all existing columns from STG_LOAN_CONTRACTS
NOTE |
You can modify any existing instrument table to include these columns by adding the COMMITMENT_CONTRACTS super type. If you want to execute the Forward Rate transfer pricing process against tables in addition to FSI_D_LOAN_COMMITMENTS, add the required columns by applying the COMMITMENT_CONTRACTS super type via ERWIN. |
Data can be loaded into staging tables through the F2T component of OFSAAI. After data is loaded, check for data quality within the staging tables before moving data into OFSAA processing tables. Data quality checks can include:
· Number of records between external system and staging instrument tables.
· Valid list of values in code columns of staging.
· Valid list of values in dimension columns like product, organization unit, general ledger, and so on. These members should be present in the respective dimension tables.
· Valid values for other significant columns of staging tables.
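As an example of the dimension-membership check, the following query flags staging rows whose product code has no match in the product dimension. The staging and dimension column names shown (V_PROD_CODE in particular) are illustrative assumptions; adjust them to the actual column names of your tables:

```sql
-- Find staging loan contracts whose product code is missing from the
-- product dimension (column names are illustrative, not authoritative).
SELECT s.v_account_number, s.v_prod_code
FROM   stg_loan_contracts s
WHERE  NOT EXISTS (SELECT 1
                   FROM   dim_products_b p
                   WHERE  p.v_prod_code = s.v_prod_code);
```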
Following are examples of some of the pre-defined application T2T mappings between the above staging tables and processing tables:
· T2T_LOAN_CONTRACTS: for loading data from STG_LOAN_CONTRACTS to FSI_D_LOAN_CONTRACTS.
· T2T_MORTGAGES: for loading data from STG_LOAN_CONTRACTS to FSI_D_MORTGAGES.
· T2T_CASA: for loading data from STG_CASA to FSI_D_CASA.
· T2T_CARDS: for loading data from STG_CARDS to FSI_D_CREDIT_CARDS.
· T2T_TD_CONTRACTS: for loading data from STG_TD_CONTRACTS to FSI_D_TERM_DEPOSITS.
· T2T_ANNUITY_CONTRACTS: for loading data from STG_ANNUITY_CONTRACTS to FSI_D_ANNUITY_CONTRACTS.
· T2T_BORROWINGS: for loading data from STG_BORROWINGS to FSI_D_BORROWINGS.
· T2T_FORWARD_CONTRACTS: for loading data from STG_FUTURES to FSI_D_FORWARD_RATE_AGMTS.
· T2T_FUTURE_CONTRACTS: for loading data from STG_FUTURES to FSI_D_FUTURES.
· T2T_FX_CONTRACTS: for loading data from STG_FX_CONTRACTS to FSI_D_FX_CONTRACTS.
· T2T_INVESTMENTS: for loading data from STG_INVESTMENTS to FSI_D_INVESTMENTS.
· T2T_LEASES_CONTRACTS: for loading data from STG_LEASES_CONTRACTS to FSI_D_LEASES.
· T2T_MM_CONTRACTS: for loading data from STG_MM_CONTRACTS table to FSI_D_MM_CONTRACTS.
· T2T_OPTION_CONTRACTS: for loading data from STG_OPTION_CONTRACTS to FSI_D_OPTION_CONTRACTS.
· T2T_SWAP_CONTRACTS: for loading data from STG_SWAPS_CONTRACTS to FSI_D_SWAPS.
· T2T_OD_ACCOUNTS: for loading data from STG_OD_ACCOUNTS to FSI_D_CREDIT_LINES.
· T2T_LOAN_COMMITMENTS: for loading data from STG_LOAN_COMMITMENTS to FSI_D_LOAN_COMMITMENTS
COLUMN | SOURCE/OUTPUT COLUMN | DESCRIPTION
COMMIT_START_DATE | From source – Mandatory for Rate | The date on which the Rate Lock period starts
COMMIT_MAT_DATE | From source – Mandatory for Rate | The date on which the Rate Lock period expires. Corresponds to the Loan Origination Date
COMMIT_TERM | From source – Mandatory for Rate | The Rate Lock term period. Equal to COMMIT_MAT_DATE – COMMIT_START_DATE
COMMIT_TERM_MULT | From source – Mandatory for Rate | The Rate Lock term multiplier
COMMIT_FEE_TO_CUST | From source – Optional | The fee that is charged to the customer for the Rate Lock agreement
COMMIT_OPTION_COST_PCT | Calculated – Output column | The Rate Lock cost expressed as a percentage
COMMIT_OPTION_COST | Calculated – Output column | The calculated Rate Lock cost charged by treasury to the banker – Rate Lock % * Balance
COMMIT_OPTION_TYPE_CD | From source – Mandatory | Refers to Option Type (default = 1). At present, only European is supported. If Option Type is European, the Rate Lock option cost is computed using the Black Swaption formula. Otherwise, nothing is done.
NOTE |
FSI_D_LOAN_COMMITMENTS - Contains all existing columns from FSI_D_LOAN_CONTRACTS, plus the following columns which support the calculations |
You can view the Database Extract definitions by performing the following steps:
NOTE |
The Data Management Tools and Data Ingestion were previously known as Data Integrator Framework and Warehouse Designer respectively. These new terminologies are applicable only for OFSAAI versions 7.3.2.3.0 and above. |
1. Navigate to Unified Metadata Manager and select Data Management Tools, and then select Data Ingestion, and then select Database Extracts section.
2. In the left pane, expand the application as defined during application installation and click the Data Source defined during application installation.
3. Expand the required T2T definition to view the Database Extract definitions.
You can view the Source - Target mapping definitions by performing the following steps:
NOTE |
The Data Management Tools and Data Ingestion were previously known as Data Integrator Framework and Warehouse Designer respectively. These new terminologies are applicable only for OFSAAI versions 7.3.2.3.0 and above. |
4. Navigate to Unified Metadata Manager and select Data Management Tools, and then select Data Ingestion, and then select Database Extracts.
5. In the left pane, expand the application as defined during application installation and click the Data Source defined during application installation.
6. Expand the required T2T definition to view the extract definition.
7. Click the required Database Extract definition.
The selected Database Extract definition details are displayed with the available Source - Target mappings under the Source - Target Mappings grid.
NOTE |
Staging instrument tables contain alphanumeric display codes for various IDENTIFIER and CODE columns. The T2T mapping looks up the respective dimension tables to fetch the equivalent numeric ID and CODE corresponding to each alphanumeric display code. Hence, these dimension tables should be populated with the alphanumeric display codes before executing any data movement tasks. |
Account Number is an alphanumeric unique identifier within each staging instrument table. ID_NUMBER is a numeric unique identifier within processing instrument tables. Hence, a numeric surrogate key must be generated for each account number. This information is stored in the DIM_ACCOUNT table.
The function fn_popDimAccount populates a numeric surrogate key for each account number. The function performs the following:
· If surrogate key generation is required, it uses a sequence to populate the DIM_ACCOUNT table.
· If surrogate key generation is not required, it expects the account number to be numeric and populates DIM_ACCOUNT with that value.
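Conceptually, the two code paths described above behave like the sketch below. This is not the shipped implementation of fn_popDimAccount; the sequence name, the DIM_ACCOUNT column names, and the choice of STG_LOAN_CONTRACTS as a source are assumptions used only to illustrate the logic:

```sql
-- Path 1 (surrogate key required): draw keys from a sequence.
-- Sequence and column names are assumed for illustration.
INSERT INTO dim_account (account_id, v_account_number)
SELECT dim_account_seq.NEXTVAL, s.v_account_number
FROM   (SELECT DISTINCT v_account_number FROM stg_loan_contracts) s
WHERE  NOT EXISTS (SELECT 1 FROM dim_account d
                   WHERE  d.v_account_number = s.v_account_number);

-- Path 2 (surrogate key not required): the account number must already
-- be numeric and is used directly as the identifier.
INSERT INTO dim_account (account_id, v_account_number)
SELECT TO_NUMBER(s.v_account_number), s.v_account_number
FROM   (SELECT DISTINCT v_account_number FROM stg_loan_contracts) s
WHERE  NOT EXISTS (SELECT 1 FROM dim_account d
                   WHERE  d.v_account_number = s.v_account_number);
```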
Create a new Batch with the Task and specify the following parameters for the task to populate DIM_ACCOUNT table:
· Datastore Type: Select appropriate datastore from the drop down list.
· Datastore Name: Select appropriate name from the list. Generally it is the Infodom name.
· IP address: Select the IP address from the list.
· Rule Name: fn_popDimAccount
· Parameter List:
Surrogate Key Required Flag – Y or N
Batch run ID and As Of Date are passed internally by the batch to the Data Transformation task.
Before executing T2T data movement tasks, user should ensure that all the dimension tables that are required for instruments data are loaded. The following are some of the mandatory dimensions:
· DIM_ACCOUNTS
· DIM_PRODUCTS_B
· DIM_GENERAL_LEDGER_B
· DIM_COMMON_COA_B
· DIM_ORG_UNIT_B
Create a new Batch with the Task and specify the following parameters for the task for loading instrument data:
· Datastore Type: Select appropriate datastore from the drop down list.
· Datastore Name: Select appropriate name from the list. Generally it is the Infodom name.
· IP address: Select the IP address from the list.
· Rule Name: Select the appropriate T2T name from the above list.
· Parameter List: No parameter is passed in the Parameter List. The As of Date is selected at the time of execution.
Check the T2T component logs and batch messages to verify the status of the load.
The T2T component can fail in the following cases:
· Unique constraint error: Target table may already contain the primary keys that are part of the staging tables.
· NOT NULL constraint error: Staging table does not have values for NOT NULL columns in the target table.
T2T component can only perform Insert operations. In case user needs to perform updates, previously loaded records should be deleted before loading the current records.
Function fn_deleteFusionTables is used for deleting the records in the target that are present in the source. This function removes rows in the table if there are matching rows in the Stage table. This function needs FSI_DELETE_TABLES_SETUP to be configured. Configure the following table for all columns that need to be part of the join between Stage table and Equivalent table.
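For example, the sample mapping shown in the table below can be registered with an insert like the following; the function then issues a delete of the shape sketched in the comment. Both statements are illustrative; consult the actual FSI_DELETE_TABLES_SETUP definition for the full column list:

```sql
-- Register the join column pair used to match target rows to staging rows.
INSERT INTO fsi_delete_tables_setup
  (stage_table_name, stage_column_name, fusion_table_name, fusion_column_name)
VALUES
  ('STG_LOAN_CONTRACTS', 'V_ACCOUNT_NUMBER',
   'FSI_D_LOAN_CONTRACTS', 'ACCOUNT_NUMBER');
COMMIT;

-- fn_deleteFusionTables then effectively issues a delete of this shape:
-- DELETE FROM fsi_d_loan_contracts t
-- WHERE EXISTS (SELECT 1 FROM stg_loan_contracts s
--               WHERE s.v_account_number = t.account_number);
```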
Create a new Batch with the Task and specify the following parameters for the task to delete existing records:
· Datastore Type: Select appropriate datastore from the drop down list.
· Datastore Name: Select appropriate name from the list. Generally it is the Infodom name.
· IP address: Select the IP address from the list.
· Rule Name: fn_deleteFusionTables
· Parameter List: 'Table to be deleted'
Batch run ID and As Of Date are passed internally by the batch to the Data Transformation task.
A sample record for the FSI_DELETE_TABLES_SETUP table follows:
Column Name | Column Description | Sample Value
STAGE_TABLE_NAME | Stores the source table name for forming the join statement | STG_LOAN_CONTRACTS
STAGE_COLUMN_NAME | Stores the source column name for forming the join statement | V_ACCOUNT_NUMBER
FUSION_TABLE_NAME | Stores the target table name for forming the join statement | FSI_D_LOAN_CONTRACTS
FUSION_COLUMN_NAME | Stores the target column name for forming the join statement | ACCOUNT_NUMBER
NOTE |
Insert rows in FSI_DELETE_TABLES_SETUP for all columns that can be used to join the stage table with its equivalent target table. If the join requires other dimension or code tables, create a view joining the source table with the respective code tables, and register that view in the above setup table. |
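Such a helper view could be sketched as follows. All names here (the view name, the product-code columns, the choice of DIM_PRODUCTS_B) are hypothetical, used only to show the pattern of resolving a display code through a code table before the delete join:

```sql
-- Hypothetical helper view resolving a display code via a code table,
-- so the delete join can be expressed against a single view.
CREATE OR REPLACE VIEW vw_stg_loan_contracts_del AS
SELECT s.v_account_number,
       c.n_prod_code          -- resolved numeric code (illustrative)
FROM   stg_loan_contracts s
JOIN   dim_products_b     c
  ON   c.v_prod_code = s.v_prod_code;
```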
Data in the Transaction Summary table is moved into the customer table using the OFSAAI T2T component. After loading the data, users can view the loaded data by querying the customer table.
Topics:
· Flow Diagram for Customer T2T
· Executing T2T Data Movement Task
· DIM_ACCOUNT: DIM_ACCOUNT will be populated using the SCDs/FN_POPDIMACCOUNT. These 30 tasks correspond to the 30 SCD processes where different product processors are the source and DIM_ACCOUNT is the target. Run MAP_REF_NUMs 188 to 217. For more information, see the Populating Accounts Dimension section.
· DIM_<XXXX>_B/TL/HIER/ATTR population: Run DT FN_DRMDATALOADER/FN_STGDIMDATALOADER to populate B/TL/HIER/ATTR tables of the key dimensions such as Product, Common_COA, GL, Org Unit, Customer, and so on.
· DIM_PARTY Population: Run the SCD MAP_REF_NUM 168. DIM_PARTY loads PARTY_SKEY and other customer information. STG_PARTY_MASTER is the source table for DIM_PARTY. DIM_PARTY joins to the FSI_D_<xxx> tables through DIM_CUSTOMER_B. The data flows as: dim_customer_b.customer_code = dim_party.v_party_id and dim_party.f_latest_record_indicator = 'Y' and dim_customer_b.customer_id = fsi_d_mortgages.customer_id.
· Run all the Stage to Processing layer T2Ts. This populates all the processing area tables, such as FSI_D_<xxxx>.
Example: Run T2T_ANNUITY_CONTRACTS, T2T_MORTGAGES, T2T_MM_CONTRACTS, and so on. For more information, see the Stage Instrument Table Loader section.
· Run the Processing to FSI_D_INST_SUMMARY T2Ts.
Example: T2T_INS_SUMM_ANNUITY, T2T_INS_SUMM_BORROWINGS, T2T_INS_SUMM_CASA, and so on. For more information, see the Stage Instrument Table Loader section.
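The DIM_PARTY relationship described in the steps above can be expressed as the following join, shown for FSI_D_MORTGAGES to match the stated data flow. The selected columns are illustrative:

```sql
-- Join instrument rows to party details via DIM_CUSTOMER_B, keeping
-- only the latest party record, per the data flow described above.
SELECT m.id_number, p.party_skey
FROM   fsi_d_mortgages m
JOIN   dim_customer_b  c ON c.customer_id = m.customer_id
JOIN   dim_party       p ON p.v_party_id  = c.customer_code
WHERE  p.f_latest_record_indicator = 'Y';
```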
If LRM is integrated with FTP, perform the following steps before running the actual T2T that populates FSI_D_CUSTOMER:
· DIM_RUN: The DIM_RUN table holds RUN_SKEY and RUN_ID. Finalize the run to pick the right RUN_SKEY mapped to the FSI_LRM_ACCT_CUST_DETAILS table.
· FSI_LRM_ACCT_CUST_DETAILS: Populate this table using the T2T.
· SETUP_PARAMETERS_MASTER: Manually update this table with the finalized RUN_ID in the PARAM_VALUE column. The table below shows the default values. The T2T expects PARAM_APP_ID to be 'FTP' and PARAM_NAME to be RUN_ID_FOR_CUSTOMER_T2T_FROM_LRM_TO_FTP. To pick records from LRM, the following row must be updated with the finalized RUN_ID in PARAM_VALUE. Without completing this step, no rows are picked from LRM.
PARAM_SEQ | PARAM_APP_ID | PARAM_NAME | PARAM_VALUE
1 | FTP | RUN_ID_FOR_CUSTOMER_T2T_FROM_LRM_TO_FTP | -1
Query to find the finalized run id from DIM_RUN:
select * from dim_run r where r.f_reporting_flag ='Y'
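Once the finalized RUN_ID is known, the setup row can be updated as sketched below; replace the placeholder literal with the RUN_ID returned by the query above:

```sql
-- Overwrite the default -1 with the finalized RUN_ID from DIM_RUN.
UPDATE setup_parameters_master
SET    param_value  = '<finalized_run_id>'
WHERE  param_app_id = 'FTP'
AND    param_name   = 'RUN_ID_FOR_CUSTOMER_T2T_FROM_LRM_TO_FTP';
COMMIT;
```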
Instrument summary list of T2Ts
FSI_D_INST_SUMMARY gets loaded with the following T2Ts:
· T2T_INS_SUMM_ANNUITY
· T2T_INS_SUMM_BORROWINGS
· T2T_INS_SUMM_BREAK_FUND_CHG
· T2T_INS_SUMM_CASA
· T2T_INS_SUMM_CREDIT_CARDS
· T2T_INS_SUMM_CREDIT_LINES
· T2T_INS_SUMM_GUARANTEES
· T2T_INS_SUMM_INVESTMENTS
· T2T_INS_SUMM_LDGR_STAT_INST
· T2T_INS_SUMM_LEASES
· T2T_INS_SUMM_LOANS
· T2T_INS_SUMM_MERCHANT_CARDS
· T2T_INS_SUMM_MM_CONTRACTS
· T2T_INS_SUMM_MORTGAGES
· T2T_INS_SUMM_MUTUAL_FUNDS
· T2T_INS_SUMM_OTHER_SERVICES
· T2T_INS_SUMM_RETIREMENT_ACC
· T2T_INS_SUMM_TERM_DEPOSITS
· T2T_INS_SUMM_TRUSTS
Create a new Batch with the Task and specify the following parameters:
· Datastore Type: Select appropriate datastore from the drop down list.
· Datastore Name: Select appropriate name from the list. Generally it is the Infodom name.
· IP address: Select the IP address from the list.
· Rule Name: T2T_FSI_D_CUSTOMER
· Parameter List: No parameter is passed in the Parameter List. The As of Date is selected at the time of execution.
Take care of the following points if LRM is installed:
· To pick data from FSI_LRM_ACCT_CUST_DETAILS, the RUN_ID must be finalized, that is, f_reporting_flag = 'Y' in DIM_RUN.
· Manually update the finalized RUN_ID in the SETUP_PARAMETERS_MASTER table (for example, -1 is the default RUN_ID, which must be manually overwritten).
· If the RUN_ID is not finalized, that is, f_reporting_flag is not 'Y', no data is picked from FSI_LRM_ACCT_CUST_DETAILS for the given as_of_date.
DIM_PARTY holds customer-level details. STG_PARTY_MASTER is the source table for DIM_PARTY. PARTY_SKEY uniquely identifies the customer records.
Create a new Batch with the Task and specify the following parameters for the task for loading DIM_Party:
· Component: Run Executable
· Datastore Type: Select appropriate datastore from list
· Datastore Name: Select appropriate name from the list
· IP address: Select the IP address from the list
· Executable: scd,168
· Wait: N
· Batch Parameter: Y
NOTE |
The Instrument to Instrument Summary T2T pulls customer-related information from DIM_PARTY. Therefore, DIM_PARTY must be populated before executing the Instrument to Instrument Summary T2T. |
The following topics are included in this section:
· Mapping To OFSAA Summary Table
· Executing T2T Data Movement Tasks
· Re-Load Of Instrument Summary Data
The following are the pre-defined T2T mappings between FSI_D_< XXX > tables to FSI_D_INST_SUMMARY tables:
· T2T_INS_SUMM_ANNUITY
· T2T_INS_SUMM_BORROWINGS
· T2T_INS_SUMM_BREAK_FUND_CHG
· T2T_INS_SUMM_CASA
· T2T_INS_SUMM_CREDIT_CARDS
· T2T_INS_SUMM_CREDIT_LINES
· T2T_INS_SUMM_GUARANTEES
· T2T_INS_SUMM_INVESTMENTS
· T2T_INS_SUMM_LDGR_STAT_INST
· T2T_INS_SUMM_LEASES
· T2T_INS_SUMM_LOANS
· T2T_INS_SUMM_MERCHANT_CARDS
· T2T_INS_SUMM_MM_CONTRACTS
· T2T_INS_SUMM_MORTGAGES
· T2T_INS_SUMM_MRTGAGE_BCK_SC
· T2T_INS_SUMM_MUTUAL_FUNDS
· T2T_INS_SUMM_OTHER_SERVICES
· T2T_INS_SUMM_RETIREMENT_ACC
· T2T_INS_SUMM_TERM_DEPOSITS
· T2T_INS_SUMM_TRUSTS
User can view the extract definitions by going through the following steps:
1. Go to Data Integrator and select Source Designer, and then select Define Extracts.
2. Under FUSION_APPS application, click Data Source name.
3. Click any of the T2T definitions to view the extract definition.
You can view the mapping definitions by going through the following steps:
4. Go to Data Integrator and select Warehouse Designer, and then select Define Mapping.
5. Under FUSION_APPS application, click Data Source Name.
6. Click on any of the T2T definition to view the mapping definition.
· Instrument tables should be loaded before loading the Instrument summary Information related to those instruments.
· Account Number is an alphanumeric unique identifier within each staging tables. ID_NUMBER is a numeric unique identifier within processing Instrument tables. Hence, there is a need to look up into a DIM_ACCOUNT dimension table for a numeric surrogate key for each of the alphanumeric account number. This dimension table DIM_ACCOUNT will be populated as part of the process that loads instrument tables. For more information on loading instrument tables, see Loading Instrument Table Data
· Before executing T2T data movement tasks, user should ensure that all the dimension tables that are required for instruments data are loaded. The following are some of the mandatory dimensions:
§ DIM_ACCOUNTS
§ DIM_PRODUCTS_B
§ DIM_GENERAL_LEDGER_B
§ DIM_COMMON_COA_B
§ DIM_ORG_UNIT_B
Create a new Batch with the Task and specify the following parameters for the task for Loading Instrument Summary table:
· Datastore Type: Select appropriate datastore from the drop down list.
· Datastore Name: Select appropriate name from the list. Generally it is the Infodom name.
· IP address: Select the IP address from the list.
· Rule Name: Select the appropriate T2T name from the above list.
· Parameter List: No parameter is passed in the Parameter List. The As of Date is selected at the time of execution.
Check the T2T component logs and batch messages to verify the status of the load. The T2T component can fail in the following cases:
· Unique constraint error: Target table may already contain the primary keys that are part of the staging tables.
· NOT NULL constraint error: Staging table do not have values for mandatory columns of the target table.
T2T component can only perform Insert operations. In case user needs to perform updates, previously loaded records should be deleted before loading the current records.
Function fn_deleteFusionTables is used for deleting the records in the target that are present in the source. This function removes rows in the table if there are matching rows in the Stage table. This function needs FSI_DELETE_TABLES_SETUP to be configured. Configure the following table for all columns that need to be part of the join between Stage table and Equivalent table.
Create a new Batch with the Task and specify the following parameters for the task to delete existing records:
· Datastore Type: Select appropriate datastore from the drop down list.
· Datastore Name: Select appropriate name from the list. Generally it is the Infodom name.
· IP address: Select the IP address from the list.
· Rule Name: fn_deleteFusionTables
· Parameter List: Table to be deleted
Batch run ID and As Of Date are passed internally by the batch to the Data Transformation task.
Data in staging Transaction Summary tables are moved into respective OFSAA processing transaction summary tables using OFSAAI T2T component. After loading the data, users can view the loaded data by querying the processing transaction tables.
Topics:
· Mapping To OFSAA Processing Tables
· Executing T2T Data Movement Tasks
· Re-Load Of Transaction Summary Data
Following are examples of various application staging transaction summary tables:
· STG_LOAN_CONTRACT_TXNS_SUMMARY: holds transaction summary information related to the loan contracts that are present in staging instrument table for loan contracts, that is STG_LOAN_CONTRACTS.
· STG_CARDS_TXNS_SUMMARY: holds transaction summary information related to the credit cards that are present in the staging instrument table for credit cards, that is STG_CARDS.
· STG_CASA_TXNS_SUMMARY: holds transaction summary information related to the checking and saving accounts that are present in staging instrument table for CASA, that is STG_CASA.
· STG_MERCHANT_CARD_TXNS_SUMMARY: holds transaction summary information related to the merchant cards that are present in staging instrument table for merchant cards, that is STG_MERCHANT_CARDS.
· STG_OTHER_SERVICE_TXNS_SUMMARY: holds transaction summary information related to other services that are present in staging instrument table for other services, that is STG_OTHER_SERVICES.
· STG_TERMDEPOSITS_TXNS_SUMMARY: holds transaction summary information related to the term deposits that are present in staging instrument table for term deposits, that is STG_TD_CONTRACTS.
· STG_TRUSTS_TXNS_SUMMARY: holds transaction summary information related to the trust accounts that are present in staging instrument table for trusts, that is STG_TRUSTS.
NOTE |
These tables are required for PFT application and used in the Allocation definitions. |
Data can be loaded into staging tables through the F2T component of OFSAAI. After data is loaded, check for data quality within the staging tables before moving data into OFSAA processing tables. Data quality checks can include:
· Number of records between external system and staging transaction summary tables.
· Valid list of values in code columns of staging.
· Valid list of values in dimension columns like product, organization unit, general ledger, and so on. These members should be present in the respective dimension tables.
· Valid values for other significant columns of staging tables.
Following are examples of the pre-defined T2T mappings between the above application staging tables and processing tables:
· T2T_STG_CARDS_TXNS_SUMMARY: for loading data from STG_CARDS_TXNS_SUMMARY to FSI_D_CREDIT_CARDS_TXNS.
· T2T_STG_CASA_TXNS_SUMMARY: for loading data from STG_CASA_TXNS_SUMMARY to FSI_D_CASA_TXNS.
· T2T_LOAN_CONTRACT_TXNS_SUMMARY: for loading data from STG_LOAN_CONTRACT_TXNS_SUMMARY to FSI_D_LOAN_CONTRACTS_TXNS.
· T2T_STG_MERCHANT_CARD_TXNS_SUMMARY: for loading data from STG_MERCHANT_CARD_TXNS_SUMMARY to FSI_D_MERCHANT_CARDS_TXNS.
· T2T_STG_OTHER_SERVICE_TXNS_SUMMARY: for loading data from STG_OTHER_SERVICE_TXNS_SUMMARY to FSI_D_OTHER_SERVICES_TXNS.
· T2T_STG_TERMDEPOSITS_TXNS_SUMMARY: for loading data from STG_TERMDEPOSITS_TXNS_SUMMARY to FSI_D_TERM_DEPOSITS_TXNS.
· T2T_STG_TRUSTS_TXNS_SUMMARY: for loading data from STG_TRUSTS_TXNS_SUMMARY to FSI_D_TRUSTS_TXNS.
You can view the Database Extract definitions by performing the following steps:
NOTE |
The Data Management Tools and Data Ingestion were previously known as Data Integrator Framework and Warehouse Designer respectively. These new terminologies are applicable only for OFSAAI versions 7.3.2.3.0 and above. |
1. Navigate to Unified Metadata Manager and select Data Management Tools, and then select Data Ingestion, and then select the Database Extracts section.
2. In the left pane, expand the application as defined during application installation and click the Data Source defined during application installation.
3. Expand the required T2T definition to view the Database Extract definitions.
You can view the Source - Target mapping definitions by performing the following steps:
NOTE |
The Data Management Tools and Data Ingestion were previously known as Data Integrator Framework and Warehouse Designer respectively. These new terminologies are applicable only for OFSAAI versions 7.3.2.3.0 and above. |
4. Navigate to Unified Metadata Manager and select Data Management Tools, and then select Data Ingestion, and then select the Database Extracts section.
5. In the left pane, expand the application as defined during application installation and click the Data Source defined during application installation.
6. Expand the required T2T definition to view the extract definition.
7. Click the required Database Extract definition.
8. The selected Database Extract definition details are displayed with the available Source - Target mappings under the Source - Target Mappings grid.
NOTE |
Staging transaction summary tables contain alphanumeric display codes for various IDENTIFIER and CODE columns. The T2T mapping looks up the respective dimension tables to fetch the equivalent numeric ID and CODE corresponding to each alphanumeric display code. Hence, these dimension tables should be populated with the alphanumeric display codes before executing any data movement tasks. |
· Instrument tables should be loaded before loading the transaction summary information related to those instruments.
· Account Number is an alphanumeric unique identifier within each staging transaction summary tables. ID_NUMBER is a numeric unique identifier within processing transaction summary tables. Hence, there is a need to look up into a DIM_ACCOUNT dimension table for a numeric surrogate key for each of the alphanumeric account number. This dimension table DIM_ACCOUNT will be populated as part of the process that loads instrument tables. For more information on loading instrument tables, see Loading Instrument Table Data.
· Before executing T2T data movement tasks, user should ensure that all the dimension tables that are required for instruments data are loaded. The following are some of the mandatory dimensions:
§ DIM_ACCOUNT
§ DIM_PRODUCTS_B
§ DIM_GENERAL_LEDGER_B
§ DIM_COMMON_COA_B
§ DIM_ORG_UNIT_B
Create a new Batch with the Task and specify the following parameters for the task for loading Historical Exchange Rates:
· Datastore Type: Select appropriate datastore from the drop down list.
· Datastore Name: Select appropriate name from the list. Generally it is the Infodom name.
· IP address: Select the IP address from the list.
· Rule Name: Select the appropriate T2T name from the above list.
· Parameter List: No parameters are passed; the As-of-Date is selected at execution time.
Check the T2T component logs and batch messages to verify the status of the load.
The T2T component can fail in the following cases:
· Unique constraint error: The target table may already contain the primary keys that are part of the staging tables.
· NOT NULL constraint error: The staging table does not have values for mandatory columns of the target table.
The T2T component can only perform insert operations. If updates are needed, previously loaded records must be deleted before loading the current records.
The function fn_deleteFusionTables is used to delete the records in the target that are present in the source. This function removes rows from the target table if there are matching rows in the Stage table. It requires the FSI_DELETE_TABLES_SETUP table to be configured: add a row for every column that needs to be part of the join between the Stage table and its equivalent target table.
Create a new Batch with the Task and specify the following parameters for the task to delete existing records:
· Datastore Type: Select appropriate datastore from the drop down list.
· Datastore Name: Select appropriate name from the list. Generally it is the Infodom name.
· IP address: Select the IP address from the list.
· Rule Name: fn_deleteFusionTables
· Parameter List: 'Table to be deleted'
Batch Run ID and As Of Date are passed internally by the batch to the Data Transformation task.
A sample record for the FSI_DELETE_TABLES_SETUP table is given below:
Column Name |
Column Description |
Sample Value |
STAGE_TABLE_NAME |
Stores the source table name for forming the join statement |
STG_LOAN_CONTRACTS |
STAGE_COLUMN_NAME |
Stores the source column name for forming the join statement |
V_ACCOUNT_NUMBER |
FUSION_TABLE_NAME |
Stores the target table name for forming the join statement |
FSI_D_LOAN_CONTRACTS |
FUSION_COLUMN_NAME |
Stores the target column name for forming the join statement |
ACCOUNT_NUMBER |
NOTE |
Insert rows in FSI_DELETE_TABLES_SETUP for all columns that can be used to join the Stage table with the equivalent target table. If the join requires other dimension or code tables, create a view joining the source table with the respective code tables and use that view in the setup table above. |
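As a sketch of this configuration, the sample record shown above could be inserted as follows (column names are taken from the sample-record table; verify them against your schema before use):

```sql
-- Register the join column pair between the Stage table and its equivalent
-- target table; repeat the INSERT for every additional join column.
INSERT INTO fsi_delete_tables_setup (
    stage_table_name, stage_column_name,
    fusion_table_name, fusion_column_name
) VALUES (
    'STG_LOAN_CONTRACTS', 'V_ACCOUNT_NUMBER',
    'FSI_D_LOAN_CONTRACTS', 'ACCOUNT_NUMBER'
);
COMMIT;
```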
The LEDGER_STAT load utility is an Oracle stored procedure used to load your ledger data into the Oracle Financial Services Analytical Applications (OFSAA) LEDGER_STAT table.
Topics:
· Overview of the Load Process
· Features of the Load procedure
· Setup for the LEDGER_STAT Load utility
· Tables Cleanup After Truncation Of Ledger_Stat
There are three types of load tables that can be used for loading ledger data.
· Type I (FISCAL_ONE_MONTH): Load table contains ONE_MONTH column for storing data corresponding to one of the twelve fiscal months.
· Type II (FISCAL_RANGE): Load table contains M1 to M12 columns for storing data corresponding to twelve fiscal months.
· Type III (CALENDAR_MONTHS): Load table contains AS_OF_DATE for storing data corresponding to an as-of-date. While Type II table contains ledger data across fiscal months in a single row, Type III contains the same information in multiple rows. Type III supports calendar dates and data can be for one or multiple dates.
ASCII Ledger data is loaded into any of the above staging or load tables using F2T component of OFSAAI framework. This component can be used for loading any flat file data into tables. For more information on how to load data using F2T, see OFSAAI User Guide.
The LEDGER_STAT load utility is a PL/SQL procedure that loads data from the above staging tables into the LEDGER_STAT table, based on the configuration. Runtime parameters, such as the name of the load table, which columns to load, ADD or REPLACE update functionality, and whether or not to create offset records, are passed as parameters to the procedure and are inserted into the load batch table (FSI_LS_LOAD_BATCH).
The procedure is implemented as an Oracle PL/SQL stored procedure so it can be invoked from SQL*Plus or Batch execution screen within OFSAAI Batch Maintenance component. Input parameters are read from the batch/parameter table and validated for correctness, completeness and consistency before the load begins. Parameter errors are written to a Message column in the batch/parameter table and FSI_MESSAGE_LOG table. Runtime statistics are written to the batch/parameter record following completion of the load for that record.
NOTE |
To support loading LEDGER_STAT from a Type III staging table, a global temporary table (GTT) is created within the database. Data is moved from the global temporary table into the LEDGER_STAT table. |
The LEDGER_STAT load utility is the only supported method for loading your ledger data into the LEDGER_STAT table. The LEDGER_STAT load utility offers the following features:
· You can load ledger data for one month or for a range of months.
· You can also load ledger data based on calendar as-of-dates.
· A month can be undone individually, using the Ledger Load Undo process. You can do this even though the month to be undone is included in a multiple-month load.
· You can update columns in existing LEDGER_STAT rows using either the additive or replacement functionality.
· You can bypass the upsert logic and insert all the rows from the load table using the INSERT_ONLY mode. This functionality can be used either for first-time loads or to reload for all months with each load.
The following are the limitations.
· Load Table Rows Must Be Unique
· A restriction imposed by the use of bulk SQL (as opposed to row-by-row processing) is that all the rows in the load table(s) must be unique, meaning there is exactly one row in the load table for each row in LEDGER_STAT. A unique index is created on each load table to enforce this uniqueness and provide acceptable performance.
· Defining Financial Elements in AMHM
· Occasionally, your load table may contain dimension member values for one or more dimensions that are not defined in AMHM. The LEDGER_STAT load procedure loads these rows anyway, except for the rows containing undefined or incompletely defined FINANCIAL_ELEM_ID values.
· Any new values for FINANCIAL_ELEM_ID must first be defined in AMHM before running the load. Specifically, the load procedure needs the AGGREGATE_METHOD value for each FINANCIAL_ELEM_ID value so that the YTD columns in LEDGER_STAT can be computed using the appropriate method.
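Before running the load, staged financial element codes missing from AMHM can be spotted with a check along these lines (a sketch using the staging and dimension columns referenced elsewhere in this chapter):

```sql
-- List staged financial element codes with no matching AMHM definition;
-- rows with these codes would be skipped by the LEDGER_STAT load.
SELECT DISTINCT s.v_financial_element_code
  FROM stg_gl_data s
 WHERE NOT EXISTS (
         SELECT 1
           FROM dim_financial_elements_b d
          WHERE d.financial_elem_code = s.v_financial_element_code
       );
```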
Setting up and Executing a Type III (or Type 3) Ledger Stat Load Using STG_GL_DATA
The Type 3 load takes data from STG_GL_DATA and transfers it into the LEDGER_STAT table.
Steps to follow to setup and run a Type III Ledger Stat Load:
Step 1: Populate STG_GL_DATA
The following columns in STG_GL_DATA must be populated with valid values:
Column |
Description |
V_GL_CODE |
General Ledger Code value. |
FIC_MIS_DATE |
This field indicates the current period As of Date applicable to the data being loaded. |
V_ORG_UNIT_CODE |
Org Unit Code value. |
V_SCENARIO_CODE |
Populate with a value from the CONSOLIDATION_DISPLAY_CODE column from the FSI_CONSOLIDATION_CD table (ex. ACTUAL, BUDGET). |
V_CCY_CODE |
ISO Currency Code from FSI_CURRENCIES (ex. USD) |
V_PROD_CODE |
Product Code value. |
V_FINANCIAL_ELEMENT_CODE |
Populate with a value from the FINANCIAL_ELEM_CODE column from the DIM_FINANCIAL_ELEMENTS_B table (ex. ENDBAL, AVGBAL). |
V_COMMON_COA_CODE |
Common COA Code value. |
N_AMOUNT_LCY |
Balance |
The following columns in STG_GL_DATA are defined as NOT NULL and must be populated, but because they are not used they can be defaulted to a value of your choice: V_LV_CODE, V_BRANCH_CODE, F_CONSOLIDATION_FLAG, and V_GAAP_CODE.
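A minimal sketch of one staged row follows. All code values are hypothetical and must already exist in the corresponding dimension and reference tables for the row to appear in STG_GL_DATA_V:

```sql
-- Hypothetical sample row; GL100, OU200, PROD300, and COA400 are
-- illustrative dimension codes, and the last four columns are the unused
-- NOT NULL columns defaulted to placeholder values.
INSERT INTO stg_gl_data (
    v_gl_code, fic_mis_date, v_org_unit_code, v_scenario_code,
    v_ccy_code, v_prod_code, v_financial_element_code,
    v_common_coa_code, n_amount_lcy,
    v_lv_code, v_branch_code, f_consolidation_flag, v_gaap_code
) VALUES (
    'GL100', TO_DATE('20101231', 'YYYYMMDD'), 'OU200', 'ACTUAL',
    'USD', 'PROD300', 'ENDBAL',
    'COA400', 1000000,
    'NA', 'NA', 'N', 'NA'
);
COMMIT;
```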
Step 2: Verify data exists in the view STG_GL_DATA_V
The view is defined by the following SQL statement:
SELECT v_data_origin DS,
f_consolidation_flag ACCUM_TYPE,
fcc.consolidation_cd CONSOLIDAT,
v_ccy_code ISOCRNCYCD,
dfeb.financial_elem_id FINANC_ID,
doub.org_unit_id ORG_ID,
dglb.gl_account_id GL_ACCT_ID,
dccb.common_coa_id CMN_COA_ID,
dpb.product_id PRDCT_ID,
fic_mis_date AS_OF_DATE,
n_amount_lcy VALUE,
0 baltypecd
FROM STG_GL_DATA SGD,
DIM_GENERAL_LEDGER_B DGLB,
DIM_ORG_UNIT_B DOUB,
DIM_PRODUCTS_B DPB,
DIM_FINANCIAL_ELEMENTS_B DFEB,
DIM_COMMON_COA_B DCCB,
FSI_CURRENCIES FC,
FSI_CONSOLIDATION_CD FCC
WHERE NVL(n_amount_lcy, 0) <> 0
AND SGD.V_GL_CODE = DGLB.GL_ACCOUNT_CODE
AND SGD.V_ORG_UNIT_CODE = DOUB.ORG_UNIT_CODE
AND SGD.V_PROD_CODE = DPB.PRODUCT_CODE
AND SGD.V_FINANCIAL_ELEMENT_CODE = DFEB.FINANCIAL_ELEM_CODE
AND SGD.V_COMMON_COA_CODE = DCCB.COMMON_COA_CODE
AND SGD.V_CCY_CODE = FC.ISO_CURRENCY_CD
AND SGD.V_SCENARIO_CODE = FCC.CONSOLIDATION_DISPLAY_CODE;
Important: As seen in the code above, the view references the _CODE columns on the dimension tables, for example, COMMON_COA_CODE on DIM_COMMON_COA_B and ORG_UNIT_CODE on DIM_ORG_UNIT_B. These code columns must be populated for data to exist in STG_GL_DATA_V.
The Update_Dimension_Code (fn_updatedimensioncode) program populates these code columns from the values of the Code dimension attribute (for example, COMMON COA CODE, ORG UNIT CODE, and so on).
The BALTYPECD column defaults to 0 in the view because it is NOT NULL in LEDGER_STAT. BALTYPECD is not a dimension; it indicates whether the amount is a credit or a debit for the account. Because the same account can hold both credits and debits, this column should be populated with a value in the source. It is part of the unique index and is a NOT NULL column in LEDGER_STAT.
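A quick way to confirm that staged rows survive the dimension joins is to count the rows visible through the view (AS_OF_DATE is the column alias defined in the view above):

```sql
-- A zero count for an expected date usually means one of the _CODE columns
-- failed to join to its dimension table.
SELECT as_of_date, COUNT(*) AS row_cnt
  FROM stg_gl_data_v
 GROUP BY as_of_date
 ORDER BY as_of_date;
```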
Step 3: If using the Type 3 Ledger Stat Load for the first time, run the GTT table creation procedure.
The GTT table creation procedure creates the Global Temporary Table LS_LOAD_TABLE_GTT_V.
The fn_ledger_load_create_gtt function creates the table LS_LOAD_TABLE_GTT_V and the index UK_GTT for use in the Type 3 Ledger Stat Load.
NOTE |
If the GTT table has not been created and you try to execute the Ledger Stat Load, you will get the following error in FSI_MESSAGE_LOG: WRAPPER_LEDGER_STAT_LOAD- Error: -942: ORA-00942: table or view does not exist |
Step 4: Populate FSI_LS_LOAD_BATCH
You need to populate the following columns:
RUN_FLAG |
Y |
SEQUENCE |
Sequence value (ex. 1) |
LOAD_TABLE_NAME |
STG_GL_DATA |
ONE_MONTH_ONLY |
N |
UPDATE_MODE |
ADD or REPLACE |
INSERT_ONLY |
Y or N |
CREATE_OFFSETS |
N |
IS_CALENDAR_MONTH |
Y |
START_CALENDAR_MONTH |
Starting date to load in format YYYYMMDD. |
END_CALENDAR_MONTH |
Ending date to load in format YYYYMMDD. |
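Using the parameter values above, the batch row can be seeded as in the following sketch (the column list reflects only the parameters documented here; check your schema for any additional mandatory columns):

```sql
-- Seed one Type III (calendar) load batch covering December 2010.
INSERT INTO fsi_ls_load_batch (
    run_flag, sequence, load_table_name, one_month_only,
    update_mode, insert_only, create_offsets, is_calendar_month,
    start_calendar_month, end_calendar_month
) VALUES (
    'Y', 1, 'STG_GL_DATA', 'N',
    'ADD', 'N', 'N', 'Y',
    '20101201', '20101231'
);
COMMIT;
```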
Step 5: Run the Ledger Stat Load
Use the following command to run the Type 3 Ledger Stat Load in SQL*Plus as the atomic user:
DECLARE
x NUMBER := 0;
BEGIN
x := ofsa_util.wrapper_ledger_stat_load('BATCH_ID', 'MIS_DATE', 'TABLE_NAME',
'TABLE_TYPE', 'UPDATE_MODE', 'INSERT_ONLY', 'START_DATE', 'END_DATE');
dbms_output.put_line ('The return variable is ' || x);
END;
For example:
DECLARE
x NUMBER := 0;
BEGIN
x := ofsa_util.wrapper_ledger_stat_load('ARALSLOADTYPE3_4', '20110111', 'STG_GL_DATA', 'CALENDAR_MONTHS', 'ADD', 'Y', '20101231', '20101231');
dbms_output.put_line ('The return variable is ' || x);
END;
After the Ledger Load completes, check the tables FSI_MESSAGE_LOG and FSI_LS_LOAD_BATCH for errors.
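A sketch of that check from SQL*Plus follows; the exact batch identifier column in FSI_MESSAGE_LOG may differ in your schema, so treat that column name as an assumption:

```sql
-- Inspect messages logged for the batch run (batch identifier column name
-- assumed; adjust to your schema).
SELECT *
  FROM fsi_message_log
 WHERE batch_run_id = 'ARALSLOADTYPE3_4';

-- Review the status and message columns of the corresponding batch row.
SELECT *
  FROM fsi_ls_load_batch
 WHERE load_table_name = 'STG_GL_DATA';
```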
NOTE |
For the ledger load, the table name is LEDGER_LOAD and the data source value is V_DATA_ORIGIN from STG_GL_DATA. LEDGER_STAT rows with the same data source will have the same identity code. |
An insert into FSI_DATA_IDENTITY with a new identity code happens if a new data origin is used in STG_GL_DATA.
An update happens if the same data source is used in STG_GL_DATA.
The update sets:
start_time = SYSDATE,
end_time = SYSDATE,
number_of_entries = number of entries for the current load
Here, STG_GL_DATA is considered the source of input for LEDGER_LOAD. For any other source table, the same rule applies when populating FSI_DATA_IDENTITY.
A view called LSL is created on the LEDGER_STAT table. The purpose of this view is to provide shorter column names for the load procedure. The LSL view must contain the same columns as LEDGER_STAT. The column alias for each column within the view should match the COLUMN_ALIAS user-defined property that is set for each column of the LEDGER_STAT table in the ERwin model.
For any user-defined dimensions in your LEDGER_STAT, you must complete the following steps:
· In the ERwin model, look up the COLUMN_ALIAS User Defined Property (UDP) for the added dimension columns within the LEDGER_STAT table.
· Specify the value of the COLUMN_ALIAS property.
· Modify the view to include the new dimension columns, using the same COLUMN_ALIAS that was specified in the ERwin model.
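As a purely illustrative sketch, suppose a user-defined dimension column UDD_1_ID was added to LEDGER_STAT with a COLUMN_ALIAS UDP value of UDD1_ID (both names hypothetical). The LSL view would then be recreated along these lines:

```sql
-- Hypothetical: only the new column is shown; the real view must list
-- every LEDGER_STAT column under its COLUMN_ALIAS from the ERwin model.
CREATE OR REPLACE VIEW lsl AS
SELECT /* ...all existing LEDGER_STAT columns with their aliases..., */
       udd_1_id AS udd1_id
  FROM ledger_stat;
```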
This step is applicable for loading ledger data from Type I or Type II load table. Staging table STG_GL_DATA (used for Type III load) is packaged with the application. Multiple load tables (Type I or Type II) can be created as required by the System Administrator. Table structure for the Type I and Type II load tables is given in the following sections:
-- --------------------------------------------------------------------
-- Uncomment the m1..m12 columns if you plan to load a range of months (Type II Load Table).
-- Add lines for all of the LEDGER_STAT user-defined leaf columns in the place
-- indicated below. Don't forget to add commas if you need to.
-- -------------------------------------------------------------------
CREATE TABLE &load_table_name(
ds VARCHAR2(12) NOT NULL, -- data_source
year_s NUMBER(5) NOT NULL,
accum_type char(1) NOT NULL,
consolidat NUMBER(5) NOT NULL,
isocrncycd VARCHAR2(3) DEFAULT '002' NOT NULL,
financ_id NUMBER(14) NOT NULL,
org_id NUMBER(14) NOT NULL,
gl_acct_id NUMBER(14) NOT NULL,
cmn_coa_id NUMBER(14) NOT NULL,
prdct_id NUMBER(14) NOT NULL,
baltypecd NUMBER(5) DEFAULT 0 NOT NULL,
--
-- m1 NUMBER(15,4),
-- m2 NUMBER(15,4),
-- m3 NUMBER(15,4),
-- m4 NUMBER(15,4),
-- m5 NUMBER(15,4),
-- m6 NUMBER(15,4),
-- m7 NUMBER(15,4),
-- m8 NUMBER(15,4),
-- m9 NUMBER(15,4),
-- m10 NUMBER(15,4),
-- m11 NUMBER(15,4),
-- m12 NUMBER(15,4),
-- one_month_amt NUMBER(15,4)
--
-- --------------------------------------------------------------------
-- Other leaf columns (PROPERTY_COLUMN from REV_COLUMN_PROPERTIES for LEDGER_STAT):
-- -------------------------------------------------------------
-- . . .
--
);
This step is applicable for loading ledger data from a Type I or Type II load table. A unique index has to be created on each load table, specifying the column alias for each column within the load table. The column alias should match the column alias specified for the corresponding column within the LEDGER_STAT table. The LEDGER_STAT load procedure identifies the source columns that need to be loaded using the column aliases, not the physical column names. Column aliases for LEDGER_STAT columns are specified in the user-defined property (UDP) COLUMN_ALIAS within the ERwin model; refer to the ERwin model for the column alias of each LEDGER_STAT column. The definition of the unique index is given below:
CREATE UNIQUE INDEX &load_table_name
ON &load_table_name ( ds,
year_s,
accum_type,
consolidat,
isocrncycd,
financ_id,
org_id,
gl_acct_id,
cmn_coa_id,
prdct_id,
baltypecd
---------------------------------------------------------
-- Include all additional LEDGER_STAT primary key
-- leaf columns here (use PROPERTY_COLUMN from REV_COLUMN_PROPERTIES):
-- -------------------------------------------------------
-- . . .
--
);
The unique key of the load table must be identical to the unique key of LEDGER_STAT, with the exception that instead of IDENTITY_CODE, which is in LEDGER_STAT, the load table has a column called DS (Data Source).
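Before creating the unique index, duplicate rows that would violate it can be detected with a query over the same key columns (a sketch using the &load_table_name substitution variable from the scripts above):

```sql
-- Any rows returned here would cause the CREATE UNIQUE INDEX to fail.
SELECT ds, year_s, accum_type, consolidat, isocrncycd,
       financ_id, org_id, gl_acct_id, cmn_coa_id, prdct_id, baltypecd,
       COUNT(*) AS dup_cnt
  FROM &load_table_name
 GROUP BY ds, year_s, accum_type, consolidat, isocrncycd,
       financ_id, org_id, gl_acct_id, cmn_coa_id, prdct_id, baltypecd
HAVING COUNT(*) > 1;
```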
This step is applicable for loading ledger data from a Type I or Type II load table. In addition to the load tables, views have to be created on them, similar to the LSL view that was created on LEDGER_STAT. A view has to be created on each load table, specifying the column alias for each column within the load table. The column alias should match the column alias specified for the corresponding column within the LEDGER_STAT table. The LEDGER_STAT load procedure identifies the source columns that need to be loaded using the column aliases. Column aliases for LEDGER_STAT columns are specified in the user-defined property (UDP) COLUMN_ALIAS within the ERwin model; see the ERwin model for the column alias of each LEDGER_STAT column. The view definition is given below:
-- --------------------------------------------------------------------
-- Uncomment the m1..m12 columns if you plan to load a range of months (Type II Load table).
-- Add lines for all of the LEDGER_STAT user-defined leaf columns in the place
-- indicated below. Don't forget to add commas if you need to.
-- --------------------------------------------------------------------
CREATE OR REPLACE VIEW &load_table_name._v AS
SELECT ds,
year_s,
accum_type,
consolidat,
isocrncycd,
financ_id,
org_id,
gl_acct_id,
cmn_coa_id,
prdct_id,
baltypecd,
--
-- NVL(m1,0) AS m1,
-- NVL(m2,0) AS m2,
-- NVL(m3,0) AS m3,
-- NVL(m4,0) AS m4,
-- NVL(m5,0) AS m5,
-- NVL(m6,0) AS m6,
-- NVL(m7,0) AS m7,
-- NVL(m8,0) AS m8,
-- NVL(m9,0) AS m9,
-- NVL(m10,0) AS m10,
-- NVL(m11,0) AS m11,
-- NVL(m12,0) AS m12,
--
NVL(one_month_amt,0) AS one
--
-- --------------------------------------------------------------------
-- Other leaf columns (PROPERTY_COLUMN from REV_COLUMN_PROPERTIES for LEDGER_STAT):
-- --------------------------------------------------------------------
-- . . .
--
FROM &load_table_name
WHERE NVL(one_month_amt,0) <> 0;
--
-- OR NVL(m1,0) <> 0
-- OR NVL(m2,0) <> 0
-- OR NVL(m3,0) <> 0
-- OR NVL(m4,0) <> 0
-- OR NVL(m5,0) <> 0
-- OR NVL(m6,0) <> 0
-- OR NVL(m7,0) <> 0
-- OR NVL(m8,0) <> 0
-- OR NVL(m9,0) <> 0
-- OR NVL(m10,0) <> 0
-- OR NVL(m11,0) <> 0
-- OR NVL(m12,0) <> 0;
If custom dimensions are added to the load table, the views need to be modified to reflect them.
This step is applicable for loading ledger data from a Type III table. Calendar dates present in the load table data are converted to the corresponding fiscal year and month. The conversion from calendar date to fiscal year and month is based on the START_MONTH column in the FSI_FISCAL_YEAR_INFO table. The derived fiscal year and month are then inserted into an intermediate Global Temporary Table (GTT) after aggregating rows of the same months and years. Therefore, if 12 rows are present for the same fiscal year, each corresponding to a different month, the global temporary table will contain at most one row representing the fiscal months that these 12 rows cover.
The GTT needs to contain valid dimension member identifiers and numeric codes. Since the staging table contains alphanumeric identifiers and codes, a view is created on the STG_GL_DATA table, joining with other relevant dimension and CD/MLS tables, before being used in the GTT creation.
The global temporary table can be created in two ways:
1. Using PL/SQL
DECLARE
output NUMBER;
BEGIN
output := fn_ledger_load_create_gtt('BATCH_ID', 'AS_OF_DATE', 'TABLE_NAME');
END;
AS_OF_DATE is the date for which GTT is created, in YYYYMMDD format.
TABLE_NAME is the staging table name STG_GL_DATA.
An example of running the function from SQL*Plus is as follows:
SQL> var output number;
SQL> execute :output:= fn_ledger_load_create_gtt('BATCH_ID', '20100519', 'STG_GL_DATA');
2. Using OFSAAI Batch Maintenance
To execute the procedure from OFSAAI Batch Maintenance, run the batch and specify the following parameters for the task:
§ Datastore Type: Select appropriate datastore from list
§ Datastore Name: Select appropriate name from the list
§ IP address: Select the IP address from the list
§ Rule Name: fn_ledgerLoadGTTCreation
§ Parameter List: AS_OF_DATE and TABLE_NAME
TABLE_NAME is the staging table name STG_GL_DATA.
AS_OF_DATE should be passed as 'YYYYMMDD' format.
NOTE |
BATCHID will be passed explicitly in Batch Maintenance. The appropriate table parameters are enclosed in single quotes. |
LEDGER_STAT Loader utility uses the following tables:
· FSI_FISCAL_YEAR_INFO: The table contains the fiscal year information. This is a setup table.
· FSI_LS_LOAD_BATCH: The table contains the parameters for the load batch that needs to be executed for loading ledger data from staging or load table into LEDGER_STAT. This is a setup table.
· STG_GL_DATA: The staging table contains the ledger data for various as-of-dates.
· LEDGER_STAT: The processing table contains the ledger data for various fiscal months. This is loaded from staging table.
Data for ledger can come from external systems. Such data has to be in the format of the staging table. This data can be loaded into staging through F2T component of OFSAAI framework. Users can view the loaded data by querying the staging tables and various log files associated with F2T component.
You can execute this procedure either from SQL*Plus or from within a PL/SQL block or from Batch Maintenance window within OFSAAI framework.
To run the procedure from SQL*Plus, log in to SQL*Plus as the Schema Owner. The procedure/batch requires the following eight parameters:
1. BATCH_ID: Any unique number to identify the execution run.
2. MIS_DATE: Date on which the loading is done, expressed in YYYYMMDD format.
3. TABLE_NAME: STG_GL_DATA (Type III) or any other load table (Type I or Type II).
4. TABLE_TYPE: FISCAL_ONE_MONTH (Type I), FISCAL_RANGE (Type II), or CALENDAR_MONTHS (Type III).
5. UPDATE_MODE: ADD or REPLACE.
6. INSERT_ONLY: Y or N.
7. START_DATE: Calendar start date in YYYYMMDD format.
8. END_DATE: Calendar end date in YYYYMMDD format.
The input parameter logic for the Type III, Type II, and Type I tables is as follows:
· Type III (CALENDAR_MONTHS): If Start_Date and End_Date are null, the month part of MIS_DATE is taken for processing that particular month (for example, if MIS_DATE is 20101231, the December calendar month data is processed). In this case, Start_Date and End_Date are optional.
· Type I (FISCAL_ONE_MONTH): The Start_Date and End_Date parameters hold numeric values between 1 and 12 identifying the fiscal month (that is, M1 through M12). Start_Date and End_Date should be the same. In this case, Start_Date and End_Date are mandatory.
· Type II (FISCAL_RANGE): The Start_Date and End_Date parameters hold numeric values between 1 and 12 and specify the range of fiscal months to be processed, for example, M1 through M6 when the Start_Date and End_Date values are 1 and 6. In this case, Start_Date and End_Date are mandatory.
Ledger Load can be executed in 2 different ways:
· Using PL/SQL:
By using the function:
ofsa_util.wrapper_ledger_stat_load('BATCH_ID', 'MIS_DATE', 'TABLE_NAME', 'TABLE_TYPE', 'UPDATE_MODE', 'INSERT_ONLY', 'START_DATE', 'END_DATE');
Example:
DECLARE
x NUMBER :=0;
BEGIN
x := ofsa_util.wrapper_ledger_stat_load('batch_id_1','20090202','STG_GL_DATA','CALENDAR_MONTHS','ADD','Y','20070430','20080331');
dbms_output.put_line ('The return variable is ' || x);
END;
· Using Batch Maintenance
To execute the procedure from OFSAAI Batch Maintenance, create a new Batch with the Task as TRANSFORM DATA and specify the following parameters for the task:
§ Datastore Type: Select appropriate datastore from list
§ Datastore Name: Select appropriate name from the list
§ IP address: Select the IP address from the list
§ Rule Name: fn_ledgerDataLoader
§ Parameter List: <Same as mentioned above in the parameter list>
The data for the ledger can have more than one currency at a time, that is, multi-currency input.
To execute multi currencies, you need to enable the flag using the following statement:
update fsi_db_info f1 set f1.multi_currency_enabled_flg =1;
commit;
NOTE |
FTP supports the following two scenarios for Ledger Migration: |
· If multi-currency is not used:
FSI_DB_INFO
Multi_currency_enabled_flg = 0
Currency_type_enabled_flg = 0
Functional_currency_cd = same as iso_currency_cd in ledger and instrument data
Ledger Data (FE 100 and FE 140) must exist in LS and must only be in Functional currency (iso_currency_cd = functional currency and currency_type_cd = 2)
· If multi-currency is used:
FSI_DB_INFO
Multi_currency_enabled_flg = 1
Currency_type_enabled_flg = 1
Functional_currency_cd = same as iso_currency_cd in ledger and instrument data
Instrument data can be in multiple currencies. Ledger data (FE 100 and FE 140) must exist in LS and must only be in functional currency (iso_currency_cd = functional currency and currency_type_cd = 2). Exchange rate data should be present in the Exchange Rate Direct Access table for converting the entered currency to the functional currency.
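The FSI_DB_INFO settings above can be applied with an update along the lines of the earlier single-flag example (FUNCTIONAL_CURRENCY_CD is left out here because its required value depends on your ledger and instrument data):

```sql
-- Enable multi-currency and currency-type handling for the multi-currency
-- scenario; set both flags back to 0 for the single-currency scenario.
UPDATE fsi_db_info
   SET multi_currency_enabled_flg = 1,
       currency_type_enabled_flg  = 1;
COMMIT;
```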
The ledger load program throws both user-defined exceptions and Oracle database-related exceptions. These exception messages can be seen in the FSI_MESSAGE_LOG table with the help of the batch_id that was used during execution. The exception list includes all possible validations on the parameters that were passed, as well as database-related exceptions.
The LEDGER_STAT procedure makes entries into certain audit tables. Whenever you truncate or delete data from LEDGER_STAT, you must additionally remove the auditing entries from the tables FSI_DATA_IDENTITY, FSI_M_SRC_DRIVER_QUERY, and FSI_LS_MIGRATION_RESULTS. This procedure enables you to clean up these audit tables.
Executing the clean up of Ledger_Stat Load Procedure
To run the procedure from SQL*Plus, login to SQL*Plus as the Schema Owner:
Fn_ledger_stat_cleanup(batch_run_id VARCHAR2,
as_of_date VARCHAR2)
SQLPLUS > declare
result number;
begin
result:=Fn_ledger_stat_cleanup ('LEDGER_CLEANUP_BATCH1','20121212');
end;
/
· BATCH_RUN_ID is any string to identify the executed batch.
· AS_OF_DATE in the format YYYYMMDD.
To execute Ledger Stat Clean up from OFSAAI Batch Maintenance, a seeded Batch is provided.
The following batch parameters are mentioned:
· Datastore Type: Select the appropriate datastore from list
· Datastore Name: Select the appropriate name from the list
· IP address: Select the IP address from the list
· Rule Name: Fn_ledger_stat_cleanup
At times, customers source some of their processed cash flow result data (not payment schedules) from third-party information providers or from internal systems. The cash flow data loader provides a way to load externally sourced cash flows into an ALM process and have these aggregate into the ALM result output tables. These external cash flows can be loaded into the STG_ACCOUNT_CASH_FLOWS table and merged with OFSAA-generated results by executing the Cash Flow Loader.
· Data can be aggregated at the Product / Organization Unit / Currency level or given at the account level. The behavior of the Cash Flow Loader is controlled by setup entries in the tables SETUP_MASTER and FSI_CASH_FLOW_LOADER_SETUP.
· ALM generates results in both base and consolidated currency. The same capability is available via the Cash Flow Loader: cash flows in both base currency and consolidated currency can be loaded. The loader does not perform currency conversion or consolidation; this is expected to be done at the source or during the data load to staging. The behavior is controlled by the currency type column. If only base currency data is given, then only RES_DTL and FSI_O_RESULT_MASTER are populated. If consolidated data is also given, then CONS_DTL and FSI_O_CONSOLIDATED_MASTER are populated as well.
· The cash flows in stage are mapped to a process and scenario, which must be configured in ALM. Currently, the loader supports only deterministic processes; stochastic processes are not supported.
· There is no filter on the instrument type code in the cash flow loader package. However, it validates the stage-supplied instrument type codes against the OOB-provided instrument type codes, so that only valid records get processed.
· If irregular cash flows are used in the STG_PAYMENT_SCHEDULE table (AMRT Type 801), then cash flows are required to be loaded from the last payment date until the maturity date so that the first interest payment is populated accurately. The same logic is followed while loading STG_ACCOUNT_CASH_FLOW as well. STG_ACCOUNT_CASH_FLOW can have only one instrument type code to distinguish between the cash flows loaded from different types of accounts.
· STG_ACCOUNT_CASH_FLOW: This table is used to store the cash flows generated by different sources for loading purposes. This is a staging table into which ALM expects cash flow data to be loaded from the back office or bank. There is no dependency on the FSI_D tables for loading this table.
Data is expected as per following details:
Column name |
Data expectation |
Extraction Date (fic_mis_date) |
MIS date for which the given data is valid, also called As of Date |
Cash Flow Date (d_cash_flow_date) |
Calendar date on which cash flow or other event occurs |
Id Number (n_account_id) |
This is equivalent of ID_NUMBER in EPM processing tables (ex: FSI_D_LOAN_CONTRACTS) and is used to map cash flows with their corresponding Instrument record. If aggregated cash flows are loaded then this column can be defaulted to -1 |
Identity Code (n_acct_data_identity_cd) |
This is equivalent of IDENTITY_CODE in EPM processing tables (ex: FSI_D_LOAN_CONTRACTS) and stores as of date in number (YYYYMMDD) format |
Cash Flow Amount (n_cash_flow_amount) |
This column stores the cash flow or other amount, depending on financial element, on event date |
Cash Flow Sequence (n_cash_flow_sequence) |
Sequence in which event occurs is mentioned here. It can be a running number and is used to identify order in which an event occurs if there are multiple events on same date. |
Currency Type Code (n_currency_type_cd) |
This column decides whether the given cash flows are for Base (Natural) currency or consolidated (Reporting) currency. Based on it loader will move to either RES_DTL_XXX or CONS_DTL_XXX table. Expected values are: ‘1’ for base/natural (also called entered) currency and ‘2’ for consolidated/reporting (also called functional) currency. Corresponding reference tables are FSI_CURRENCY_TYPE_CD and FSI_CURRENCY_TYPE_MLS |
|
· If you have selected Consolidate to Reporting Currency in the ALM Process, then the following cases are possible with respect to the Consolidation Flag, n_currency_type_cd = 1, and n_currency_type_cd = 2.
· Case 1: If the Consolidation Flag is OFF and records have N_Currency_Type_Cd = 1, then the process executes successfully and only RES_DTL_XX is populated.
· Case 2: If the Consolidation Flag is OFF and records have both N_Currency_Type_Cd = 1 and N_Currency_Type_Cd = 2, then the process executes successfully; only RES_DTL_XX is populated for records with N_Currency_Type_Cd = 1, and records with N_Currency_Type_Cd = 2 are ignored.
· Case 3: If the Consolidation Flag is ON and records have only N_Currency_Type_Cd = 1, then the loader fails because it expects records with N_Currency_Type_Cd = 2 when CONSOLIDATED_OUTPUT_FLG is ON.
· Case 4: If the Consolidation Flag is ON and records have N_Currency_Type_Cd = 1 and N_Currency_Type_Cd = 2, then a "No Data in the instrument table for the given FIC MIS DATE. Loading data to fsi_o_consolidated_master Failed" error is logged in FSI_MESSAGE_LOG. You need to load data if the Consolidation Flag is ON and N_Currency_Type_Cd = 2. Here, N_Currency_Type_Cd = 2 signifies that the records are of consolidation type and are meant for CONS_DTL processing. Similarly, N_Currency_Type_Cd = 1 indicates non-consolidated records in stage meant for RES_DTL processing.
· Note: If there are records for n_currency_type_cd = 1 but no records for n_currency_type_cd = 2, the Cash Flow Loader cannot use the same set of records loaded for n_currency_type_cd = 1 and convert them for consolidation. |
Instrument Type Code (n_instrument_type_cd) |
This identifies the type of instrument that is, loan, deposit and so on. for which data is being loaded. Corresponding reference tables are FSI_INSTRUMENT_TYPE_CD and FSI_INSTRUMENT_TYPE_MLS |
Scenario Number (n_scenario_no) |
An ALM process can have multiple forecast rate scenarios. This column indicates the scenario for which data has been loaded. It is used by the loader to map data to the corresponding scenario of the ALM process. Reference tables are FSI_CASH_FLOW_LOADER_SETUP (scenario_num) and DIM_FCST_RATES_SCENARIO (n_scenario_num). The loader takes the ALM Process ID as input and then checks the corresponding scenario numbers in the ALM metadata tables for validation |
Account / Contract Code (v_account_number) |
This column stores the alphanumeric unique account or contract number for which data is being loaded. This is generally the unique identifier from operational source systems. The corresponding reference table is DIM_ACCOUNT (v_account_number). If aggregated cash flows are loaded, this column can be defaulted to -1 |
Cash Flow Type (v_cash_flow_type) |
Indicates whether the cash flow is an inflow or an outflow. Expected values are 'I' for inflow and 'O' for outflow. Note that this column is not used by the cash flow loader and hence does not impact any result in ALM. It is used by the Liquidity Risk Management (LRM) application and is a mandatory column in the table. |
Currency Code (v_ccy_code) |
The three-letter ISO currency code in which the cash flow amount is denominated must be given in this column. |
Common Coa Code (v_common_coa_code) |
Common Chart of Account code of the account number for which data is loaded must be given here. Corresponding reference table is DIM_COMMON_COA_B (common_coa_code) |
Data Origin (v_data_origin) |
The code of the source system from which data is obtained is expected here. The corresponding reference table is DIM_DATA_ORIGIN (v_data_source_code) |
Financial Element Code (v_financial_element_code) |
This indicates the financial element, that is, the nature of the amount loaded. The corresponding reference table is DIM_FINANCIAL_ELEMENTS_B (financial_elem_code) |
Gl Account Code (v_gl_account_code) |
General Ledger Account code of the account number for which data is loaded must be given here. Corresponding reference table is DIM_GENERAL_LEDGER_B (gl_account_code) |
Lv Code (v_lv_code) |
Legal Entity code of the account number for which data is loaded must be given here. Corresponding reference table is DIM_LEGAL_ENTITY_B (legal_entity_code) |
Organization Unit Code (v_org_unit_code) |
The organization or business unit code of the account number for which data is loaded must be given here. The corresponding reference table is DIM_ORG_UNIT_B (org_unit_code) |
Product Code (v_prod_code) |
Product code of the account number for which data is loaded must be given here. Corresponding reference table is DIM_PRODUCTS_B (product_code) |
· Output tables: Aggregated Cash Flows will be populated in the following output tables
§ RES_DTL_XX
§ CONS_DTL_XX
§ FSI_O_RESULT_MASTER
§ FSI_O_CONSOLIDATED_MASTER
XX denotes the Process ID.
· SETUP_MASTER: This table is used in the case of instrument cash flows. For instrument cash flows, the entry against V_COMPONENT_VALUE in the SETUP_MASTER table should be either 0 or 1, indicating that ID numbers or account numbers, respectively, are provided.
· FSI_ALM_DETERMINISTIC_PROCESS: This table is used for loading cash flows into the CONSOLIDATED tables. To populate the consolidated tables, the CONSOLIDATED_OUTPUT_FLG should be 1 in the FSI_ALM_DETERMINISTIC_PROCESS table against the cash flow process IDs.
· FSI_CASH_FLOW_LOADER_SETUP: This table holds all the process IDs for the cash flow loader. Only those processes that have status 'N' in the FSI_CASH_FLOW_LOADER_SETUP table will be executed. Such processes must already exist in the system.
· FSI_M_USER_ACTIVE_TIME_BUCKETS: For cash flow loader, user should be mapped to an active time bucket in the FSI_M_USER_ACTIVE_TIME_BUCKETS table.
· TIME BUCKETS: The following tables will store the time bucket details:
§ FSI_TIME_BUCKET_MASTER
§ FSI_M_LR_IRR_BUCKETS
§ FSI_LR_IRR_BUCKETS_AUX
§ FSI_TIME_BKT_ISB
§ FSI_TIME_BKT_LR_IRR_DATES
· Check if the Batch Run ID, MIS date, or user name is null. If yes, write a message in FSI_MESSAGE_LOG and exit.
· Check if the given user name exists in the FSI_M_USER_ACTIVE_TIME_BUCKETS table. If the user does not exist, write an error message in FSI_MESSAGE_LOG and exit.
· The time bucket mapped to the user in the FSI_M_USER_ACTIVE_TIME_BUCKETS table should be present in the FSI_INCOME_SIMULATION_BUCKETS table. If the time bucket is not present, write an error message in FSI_MESSAGE_LOG and exit.
· Only financial elements corresponding to Income Simulation Buckets (ISB), Liquidity risk (LR) and Interest Rate Risk (IRR) will be processed.
· If process id is given as parameter, then run the loader program for the given process id. If process id is not given, then the loader program will be run for all the process ids mapped in the set up table with status 'N'.
· Verify that the given process is present in FSI_ALM_DETERMINISTIC_PROCESS table and FSI_M_ALM_PROCESS table. If not present, then write error message in fsi_message table and exit.
· Check if the given process id mapped in fsi_cash_flow_loader_setup table is of status 'N'.
· The cash flow date of each record must fall within the valid time bucket dates in the FSI_TIME_BKT_ISB and/or FSI_TIME_BKT_LR_IRR_DATES tables. If this check fails, the user is informed of the improper records through an error message (see Exception 9).
· Check whether the setup table is mapped correctly:
Check if the given process ID mapped to a scenario is present in the STG_ACCOUNT_CASH_FLOWS table.
Check if every scenario mapped in the setup table with a functional currency value also has its base currency present.
· Check if CONSOLIDATED_OUTPUT_FLG in the FSI_ALM_DETERMINISTIC_PROCESS table is 1. If yes, write to the consolidated tables.
· Verify that all the dimension IDs given in the STG_ACCOUNT_CASH_FLOWS table are valid and present in the respective dimension tables.
· In the case of instrument level cash flows, the following conditions should be satisfied to proceed:
In the SETUP_MASTER table, for V_COMPONENT_CODE = 123, the V_COMPONENT_VALUE should be either 0 or 1, indicating that the ID number or the account number, respectively, is given in the STG_ACCOUNT_CASH_FLOWS table.
The identity code and the ID number/account number should be present in the STG_ACCOUNT_CASH_FLOWS table.
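The pre-run checks above can be verified manually with queries along the following lines. This is a sketch: V_COMPONENT_CODE and V_COMPONENT_VALUE come from the SETUP_MASTER description above, but the process ID column name in FSI_CASH_FLOW_LOADER_SETUP is an assumption to be verified against your schema.

```sql
-- List the processes the loader will pick up (status 'N');
-- the process id column name is assumed here.
SELECT process_id, status
  FROM fsi_cash_flow_loader_setup
 WHERE status = 'N';

-- Check how instrument-level identifiers are flagged:
-- 0 = ID numbers, 1 = account numbers in STG_ACCOUNT_CASH_FLOWS.
SELECT v_component_value
  FROM setup_master
 WHERE v_component_code = '123';
```

If the second query returns no row, the instrument-level cash flow conditions are not satisfied and the loader cannot proceed for instrument cash flows.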
The user can execute this Cashflow Loader from either SQL*Plus or from within a PL/SQL block or from the Batch Maintenance window within OFSAAI framework.
The Cash Flow Loader can be executed directly from SQL*Plus. The user must log in to the database using the schema user ID and password. The procedure requires four parameters:
· As of Date (mis_date)
· User Name: Should be present in fsi_m_user_active_time_buckets table
· Process id
· Batch Execution Identifier (batch_run_id)
declare
result number := 0;
begin
result := fn_cash_flow_loader(batch_run_id => :batch_run_id,
mis_date => :mis_date,
p_user_name => :p_user_name,
p_process_sys_id => :p_process_sys_id);
if result = 0 then
dbms_output.put_line('Cash Flow Loader Failed');
else
dbms_output.put_line('Cash Flow Loader Successfully completed');
end if;
end;
/
where
· BATCH_RUN_ID is any string to identify the executed batch.
· mis_date in the format YYYYMMDD.
· P_USER_NAME: The user name present in FSI_M_USER_ACTIVE_TIME_BUCKETS table.
· P_PROCESS_SYS_ID can be null or can have value to process specific process id.
· Case 1. When Process id is null:
declare
result number := 0;
begin
result := fn_cash_flow_loader('INFODOM_CASH_FLOW_LOADER',
'20100419',
'ALMUSER', NULL);
if result = 0 then
dbms_output.put_line('Cash Flow Loader Failed');
else
dbms_output.put_line('Cash Flow Loader Successfully completed');
end if;
end;
/
· Case 2. When Process id is not null:
declare
result number := 0;
begin
result := fn_cash_flow_loader('INFODOM_CASH_FLOW_LOADER',
'20100419',
'ALMUSER', 120003);
if result = 0 then
dbms_output.put_line('Cash Flow Loader Failed');
else
dbms_output.put_line('Cash Flow Loader Successfully completed');
end if;
end;
/
SETUP_MASTER Sample Data
For instrument cash flows, the V_COMPONENT_VALUE should be either 1 or 0. If the value is '1', the identity code and account number should be populated in the STG_ACCOUNT_CASH_FLOWS table. If the value is '0', the identity code and ID number should be populated in the STG_ACCOUNT_CASH_FLOWS table.
FSI_CASH_FLOW_LOADER_SETUP Sample Data
NOTE |
Only process IDs with status 'N' are processed when the loader program executes. After successful execution of the loader program, the value in the STATUS column of FSI_CASH_FLOW_LOADER_SETUP is changed to 'Y' for the process ID. |
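To re-run the loader for a process whose status has already been set to 'Y', the STATUS column must be reset to 'N' first. A minimal sketch follows; the process ID column name is an assumption and should be verified against your FSI_CASH_FLOW_LOADER_SETUP definition.

```sql
-- Re-enable process 120003 for another loader run (process id column name assumed).
UPDATE fsi_cash_flow_loader_setup
   SET status = 'N'
 WHERE process_id = 120003;
COMMIT;
```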
FSI_M_ALM_PROCESS Sample Data
Continuation...
STG_ACCOUNT_CASH_FLOWS Sample Data
Instrument Cashflow Data
Continuation...
Aggregate Data Sample
Continuation...
To execute the Cash Flow Loader from OFSAAI Batch Maintenance, a seeded Batch is provided.
Execution Steps
1. Select <INFODOM>_CASH_FLOW_LOADER as the Batch ID and Cash Flow Loader is the description of the batch.
2. The batch has a single task. Edit the task.
3. If the user wants to load all the process ids given in the FSI_CASH_FLOW_LOADER_SETUP table for the given as of date, then Process_id parameter should be null.
4. Specify the following parameters:
§ Data store Type: Select appropriate data store from list
§ Data store Name: Select appropriate name from the list
§ IP address: Select the IP address from the list
§ Rule Name: CashFlowLoader
5. If the user wants to process the specific process id mentioned in the FSI_CASH_FLOW_LOADER_SETUP table for the given as of date, then the process_id parameter should be given.
6. Specify the following parameters:
§ Data store Type: Select appropriate data store from list
§ Data store Name: Select appropriate name from the list
§ IP address: Select the IP address from the list
§ Rule Name: CashFlowLoader
7. Save the batch
8. Execute the Batch defined for the required As of Date.
All the exceptions will be logged in FSI_MESSAGE_LOG table. Cash Flow Loader program can raise the following exceptions:
· Exception 1
Load fails: Cannot pass null for the parameter As of Date, User name, Batch run id
As of date, User Name and Batch run id cannot be passed as null.
· Exception 2
Load Fails: User name does not exist in the Table fsi_m_user_active_time_buckets
User name in the Parameter should be mapped with an active time bucket in the fsi_m_user_active_time_buckets table.
· Exception 3
Load Fails: Data for the selected As of date does not exist in stg_account_cash_flows Table
Stage Account Cash Flow does not have data for the given As Of Date.
· Exception 4
Load Fails: Time bucket sys id is not present in fsi_income_simulation_buckets
Active time bucket mapped to the user name should be present in the Fsi_Income_Simulation_Bucket table.
· Exception 5
Load Fails: dates calculation failed
Date Calculation for Fsi_Time_Bkt_Isb and Fsi_Time_Bkt_Lr_Irr_Dates failed for the given time bucket sys id.
· Exception 6
Load Fails: The process id process_sys_id does not exist in the Table fsi_m_alm_process
Process id which is either passed as parameter or picked up from the Fsi_Cash_Flow_loader_Setup table should be present in the fsi_m_alm_process table.
· Exception 7
Load Fails. The process id process_sys_id does not exist in the Table Fsi_alm_deterministic_process
Process id which is either passed as parameter or picked up from the Fsi_Cash_Flow_loader_Setup should be present in the Fsi_alm_deterministic_process table.
· Exception 8
Load Fails. The process id process_sys_id does not exists in the table fsi_cash_flow_loader_setup or the status is not set to N for the process
The process ID, whether passed as a parameter or picked up from the FSI_CASH_FLOW_LOADER_SETUP table, either does not exist in that table or its status is not 'N'.
· Exception 9
Date-check failed: Certain cash-flow dates in 'STG_ACCOUNT_CASH_FLOWS' for the current process are out of range of the buckets defined in the FSI_TIME_BKT_ISB_DATES/FSI_TIME_BKT_LR_IRR_DATES table(s). For details, run query: <<QUERY>>
This error is generated when the cash-flow dates in 'STG_ACCOUNT_CASH_FLOWS' are out of the time-bucket date ranges defined in either FSI_TIME_BKT_ISB_DATES or FSI_TIME_BKT_LR_IRR_DATES.
In the actual error message (logged in FSI_MESSAGE_LOG), <<QUERY>> is replaced by a SQL query that the user can copy and execute on the schema in which the batch was run. This provides the user with an output of the problematic records from the staging area.
· Exception 10
Load Fail: The Data for the N_SCENARIO_NO mapped to the process process_sys_id does not exist in the STG_ACCOUNT_CASH_FLOWS table
Scenario Number mismatch for the Fsi_Cash_Flow_Loader and Stg_Account_Cash_Flow
· Exception 11
Load Fail: Does not have base currency
Base currency should be present in Stg_Account_Cash_Flow.
· Exception 12
CONSOLIDATED_OUTPUT_FLG in fsi_alm_deterministic_process table is 1 but no data for n_currency_type_cd in STG_ACCOUNT_CASH_FLOWS table
Consolidated Flag for the process id is set to 1 but n_currency_type_cd in Stg_Account_Cash_Flow is not set.
· Exception 13
Load Fail: For the N_SCENARIO_NO mapped in the set up table, the dimension code given is incorrect
Dimensions present in Stg_Account_Cash_Flow are not present in the corresponding dimension tables.
· Exception 14
All the account numbers are not present in the STG_ACCOUNT_CASH_FLOWS table
Records in Stg_Account_Cash_Flow do not have account number populated.
· Exception 15
All the Identity codes are not present in the STG_ACCOUNT_CASH_FLOWS table
Records in Stg_Account_Cash_Flow do not have identity code populated.
· Exception 16
All the Id Numbers are not present in the STG_ACCOUNT_CASH_FLOWS table
Records in Stg_Account_Cash_Flow do not have identity number populated.
· Exception 17
Instrument type code given for the process l_process_sys_id is wrong
Instrument code present in the Stage table is not mapped in FSI_INSTRUMENT_TYPE_MLS
· Exception 18
No Data in the instrument table for the given FIC MIS DATE. Loading data to FSI_O_RESULT_MASTER failed
Instrument table corresponding to instrument code in Stg_Account_Cash_Flow does not have data for the given As of date.
This function populates the FSI_M_PROD_TRANSFER_RATE table from the FSI_PM_GENERATED_INSTRMTS table for a particular effective date.
After executing this procedure, you should query FSI_M_PROD_TRANSFER_RATE table.
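For example, a quick verification after execution might look like the following sketch; any filter on the effective date would need the actual column name from your data model, so only a row count is shown here.

```sql
-- Verify that the procedure produced transfer-rate rows.
SELECT COUNT(*)
  FROM fsi_m_prod_transfer_rate;
```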
Executing the POPULATE_PM_TRANS_RATE_TABLE (earlier known as POPULATE_TPOL_TRANS_RATE) Procedure
You can execute this procedure either from SQL*Plus or from within a PL/SQL block or from Batch Maintenance window within OFSAAI framework.
To run the procedure from SQL*Plus, login to SQL*Plus as the Schema Owner.
The procedure requires the following 6 parameters:
· Batch Id (Batch_Id): can be used to see the log of the procedure executed.
· Misdate (Mis_date): the date for which batch is run.
· Run Id (p_v_run_id): Unique Run ID for the run.
· Process Id (p_v_process_id): Unique Process ID for the batch.
· Run Execution Id (p_v_run_execution_id): Unique Run Execution Id for the Run.
· Run skey (p_n_run_skey): Unique run skey generated by the run.
The syntax for calling the procedure is:
Declare
output number;
Begin
Output:= POPULATE_PM_TRANS_RATE_TABLE (Batch_Id varchar2,
Mis_date varchar2,
p_v_run_id varchar2,
p_v_process_id varchar2,
p_v_run_execution_id varchar2,
p_n_run_skey varchar2);
End;
Mis_date should be passed as 'YYYYMMDD' format.
An example of running the function from SQL*Plus is as follows:
SQL> var output number;
SQL> execute :output := POPULATE_PM_TRANS_RATE_TABLE('Batch_Id',
'20100131', '$RUNID=1306182237482', '$PHID=1228363751510', '$EXEID=RQEXE016', '$RUNSK=99');
To execute the stored procedure from within a PL/SQL block or procedure, see the example that follows.
SQL> declare
output number;
begin
Output:= POPULATE_PM_TRANS_RATE_TABLE ('Batch_Id','Mis_date',
'p_v_run_id','p_v_process_id','p_v_run_execution_id','p_n_run_skey');
End;
/
To execute the procedure from OFSAAI Batch Maintenance, create a new Batch with the Task as TRANSFORM DATA and specify the following parameters for the task:
· Datastore Type : Select appropriate datastore from the list
· Datastore Name : Select appropriate name from the list
· IP address : Select the IP address from the list
· Rule Name : POPULATE_PM_TRANS_RATE_TABLE
BATCHID and MISDATE will be passed explicitly in Batch Maintenance
ALM_BI_TRANSFORMATION data definition transforms the Asset Liability Management (ALM) processing results of an executed ALM process to ALMBI fact tables.
This internally calls PL/SQL function FN_ALM_BI_TRANSFORMATION.
function FN_ALM_BI_TRANSFORMATION(p_batch_run_id varchar2,
p_as_of_date varchar2,
PID number,
p_re_run_flag char)
Where the parameters are,
· p_batch_run_id: It is the batch run id. Batch Run ID value is passed from the Batch execution UI. Therefore, it is not required to define it as a parameter value in Batch Maintenance.
· p_as_of_date: This parameter value is passed from the Batch execution UI. Therefore, it is not required to define it as a parameter value in Batch Maintenance.
· PID: Pass the ALM Process Sys ID for which the transformation has to be done.
· p_re_run_flag: This parameter value determines whether the transformation for the ALM process is for the first time or not.
Possible values are 'Y' or 'N'
Where
'Y': Yes (This means that the transformation was already done and the user is trying to redo the transformation once again for the ALM process).
'N': No (This means that the user is executing the transformation for the first time for the ALM process).
|
The values for parameters PID and p_re_run_flag have to be entered in the Parameter List during the batch definition. |
1. If the user is trying to do transformation of ALM process 200009 for the first time, then the values that must be entered in the Parameter List are 200009, 'N'.
2. If the user is trying to do transformation of ALM process 200011, for which he had already done the transformation, then the values that must be entered in the Parameter List are 200011, 'Y'.
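The two cases above can also be invoked directly from a PL/SQL block. This is a sketch only: it assumes FN_ALM_BI_TRANSFORMATION returns a number with 1 indicating success, which should be verified against your installation.

```sql
declare
result number := 0;
begin
-- First-time transformation of ALM process 200009, so re-run flag is 'N'.
result := fn_alm_bi_transformation('Batch1', '20100419', 200009, 'N');
-- Return convention (1 = success) is an assumption; check FSI_MESSAGE_LOG as well.
if result = 1 then
dbms_output.put_line('ALM BI Transformation completed');
else
dbms_output.put_line('ALM BI Transformation failed');
end if;
end;
/
```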
Hierarchy Flattening Transformation is used to move the hierarchy data from the parent child storage structure in EPM AMHM (Attribute, Member and Hierarchy Management) model to a level based storage structure in OFSAA BI applications. In EPM AMHM model, hierarchy data for any hierarchy created on seeded or user defined dimensions using the AMHM is stored within hierarchy tables of respective dimensions. This is moved to the REV_HIER_FLATTENED table in OFSAA BI applications after flattening by the Hierarchy flattening process.
batch_hierTransformation is a seeded Data Transformation program installed as part of the OFSAA BI applications installers.
NOTE |
Refer to Support Note 1586342.1, if Hierarchy Filter is not reflecting correctly after making the changes to underlying Hierarchy. |
You can execute this procedure from SQL Plus/PLSQL/Batch Maintenance window within OFSAAI framework.
· Using SQL Plus/PLSQL
function rev_batchHierFlatten(batch_run_id varchar2,
mis_date varchar2,
pDimensionId varchar2,
pHierarchyId varchar2)
§ Function Name: rev_batchHierFlatten
§ Parameters: batch_run_id, mis_date, pDimensionId, pHierarchyId
Where the parameters are,
— batch_run_id: It is the batch run id. Batch Run ID value is passed from the Batch execution UI. Therefore, it is not required to define it as a parameter value in Batch Maintenance.
— mis_date: This parameter value is passed from the Batch execution UI. Therefore, it is not required to define it as a parameter value in Batch Maintenance. Follow the date format, YYYYMMDD
— pDimensionId: Enter the dimension ID. To find the dimension ID, execute the following query in the database and use the value in the dimension_id column for the dimension name/description to be processed:
Select b.dimension_id,t.dimension_name,t.description from rev_dimensions_b b inner join rev_dimensions_tl t on b.dimension_id = t.dimension_id and t.dimension_name like '<dimension name>'
Replace <dimension name> in the preceding query with the Dimension Name you find in the UI (Financial Service Application >Master Maintenance > Dimension Management) for the dimension on which the Hierarchy you want to flatten is configured.
— pHierarchyId: Enter Hierarchy id. If all the hierarchies belonging to a dimension are to be processed then, provide NULL as the parameter value. Else, provide the System Identifier of the hierarchy that needs to be transformed.
Execute the following query in database if only a single hierarchy is to be processed and use the value in hierarchy_id column as parameter for the hierarchy to be processed:
select b.object_definition_id , short_desc,long_desc from fsi_m_object_definition_b b inner join fsi_m_object_definition_tl t on b.object_definition_id = t.object_definition_id and b.id_type = 5
— If all the hierarchies for GL Account dimension must be processed, the parameter list should be given as follows (where '2' is the dimension id for the seeded dimension GL Account):
'2',null
— If a particular hierarchy with code 1000018112 must be processed (you can obtain this code by executing the preceding query in the database), the parameter list should be given as follows:
'2', '1000018112'
SQL Example
SQL> var fn_return_val number;
SQL> execute :fn_return_val := rev_batchHierFlatten('Batch1', '20091231', '2', '1000018112');
SQL> print fn_return_val
PLSQL Example
DECLARE
fn_return_val number := null;
BEGIN
fn_return_val := rev_batchHierFlatten('Batch1',
'20091231',
'2',
'1000018112');
IF fn_return_val = 1 THEN
dbms_output.put_line('Execution status of batchHierFlatten is '
|| fn_return_val || ' --Successful');
ELSIF fn_return_val = 0 THEN
dbms_output.put_line('Execution status of batchHierFlatten is '
|| fn_return_val || ' --FAILURE');
END IF;
EXCEPTION
WHEN OTHERS THEN
dbms_output.put_line('Execution status of batchHierFlatten is '
|| SQLCODE || '-' || SQLERRM);
END;
/
On successful execution of the rev_batchHierFlatten function in the database, the value returned will be 1 or 0: 1 indicates successful execution and 0 indicates failure. This function is present in the Atomic Schema.
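After a successful run, the flattened hierarchy rows can be inspected in REV_HIER_FLATTENED. The table name comes from the Hierarchy Flattening description above; its column layout is not documented here, so only a row count is sketched.

```sql
-- Confirm that the flattening process produced rows in the target table.
SELECT COUNT(*)
  FROM rev_hier_flattened;
```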
· Using OFSAAI Batch Maintenance
To execute the procedure from OFSAAI Batch Maintenance, run the following batch and specify the following parameters:
§ Datastore Type: Select appropriate datastore from the list
§ Datastore Name: Select appropriate name from the list
§ IP address: Select the IP address from the list
§ Rule Name: batch_hierTransformation
§ Parameter List: Dimension ID, Hierarchy ID
DIM_DATES_POPULATION is a seeded Data Transformation which is installed as part of the OFSAA BI applications installers. Time dimension population transformation is used to populate the dim_dates table with values between two dates specified by the user.
NOTE |
During data transformation, the data will be loaded into FISCAL columns by reading the start date/end date information from DIM_FINANCIAL_YEAR table. Users can enter data manually into DIM_FINANCIAL_YEAR table. |
FSI_LEDGER_STAT_TRM is a seeded Data Transformation which is installed as part of the OFSAA BI applications installers. Fact Ledger Population transformation is used to populate the FCT_LEDGER_STAT table from the Profitability LEDGER_STAT table. Database function LEDGER_STAT_TRM is called by the function FSI_LEDGER_STAT_TRM.
Financial Element Dimension Population involves populating the custom financial elements created into the DIM_FINANCIAL_ELEMENT table from the DIM_FINANCIAL_ELEMENTS_B table.
Topics:
· Tables Used by the Financial_Elem_Update Transformation
· Executing the Financial_Elem_Update Transformation
· Checking the Execution Status
Complete all the post-install steps mentioned in the Oracle Financial Services Analytical Applications Infrastructure (OFSAAI) Installation and Configuration Guide and the Oracle Financial Services Profitability Management User Guide.
1. Application User must be mapped to a role that has seeded batch execution function (BATPRO).
2. Seeded and Custom Financial Elements are required to be available in DIM_FINANCIAL_ELEMENTS_B, DIM_FINANCIAL_ELEMENTS_TL tables.
3. Before executing a batch, check if the following servers are running on the application server. (For more information on how to check whether the services are running and how to start them if they are not, refer to any OFSAA BI User Guide, for example, the Oracle Financial Services Analytical Applications Infrastructure User Guide.)
§ Iccserver
§ Router
§ AM Server
§ Messageserver
§ Olapdataserver
4. Batches will have to be created for executing the function.
DIM_FINANCIAL_ELEMENT: This table stores the seeded and custom Financial Elements.
For more details on viewing the structure of the tables, see OFSAA EPM Erwin Data Model.
To execute the function from OFSAAI Information Command Center (ICC) frame work, create a batch by performing the following steps:
NOTE |
For a more comprehensive coverage of configuration and execution of a batch, see Oracle Financial Services Analytical Applications Infrastructure User Guide. |
1. From the Home menu, select Operations, then select Batch Maintenance.
2. Click New Batch + and enter the Batch Name and description.
3. Click Save.
4. Select the Batch you have created in the earlier step by clicking on the checkbox in the Batch Name container.
5. Click New Task +.
6. Enter the Task ID and Description.
7. Select Transform Data, from the components list.
8. Select the following from the Dynamic Parameters List and then click Save:
§ Datastore Type: Select appropriate datastore from the list
§ Datastore Name: Select appropriate name from the list
§ IP address: Select the IP address from the list
§ Rule Name: Select Financial_Elem_Update from the list of all available transformations. (This is a seeded Data Transformation which is installed as part of the OFSAA BI applications installers. If you don't see this in the list, contact Oracle support)
§ Parameter List: OFSAAI Application User Name (See the following for details on Parameter list).
Application User Name: This is the OFSAAI application user name which the transformation uses for inserting in DIM_FINANCIAL_ELEMENT table.
Sample parameter for this task is 'APPUSER'.
9. Execute the batch.
The function can also be executed directly on the database through SQL*Plus. Details are:
§ Function Name: fn_dim_financial_elem_update
§ Parameters: pBatch_Id, pas_of_date, appuser_name
§ Sample parameter values: 'Batch1', '20091231', 'APPUSER'
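Putting those details together, the function can be called from a PL/SQL block as in the following sketch. The numeric return type is an assumption; check the FSI_MESSAGE_LOG table for the actual outcome of the run.

```sql
declare
result number := 0;
begin
-- Call the transformation with the sample parameter values shown above.
result := fn_dim_financial_elem_update('Batch1', '20091231', 'APPUSER');
dbms_output.put_line('fn_dim_financial_elem_update returned: ' || result);
end;
/
```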
The status of execution can be monitored using the Batch Monitor screen.
NOTE |
For a more comprehensive coverage of configuration & execution of a batch, see Oracle Financial Services Analytical Applications Infrastructure User Guide. |
The status messages in batch monitor are:
· N: Not Started
· O: On Going
· F: Failure
· S: Success
The Event Log window in Batch Monitor provides logs for execution, with the top row being the most recent. Any error during execution is listed here. Even if the status in Batch Monitor is Successful, it is advisable to go through the Event Log and re-check for errors. The execution log can be accessed on the application server in the directory $FIC_DB_HOME/log/date. The file name will contain the batch execution ID.
The database level operations log can be accessed by querying the FSI_MESSAGE_LOG table. The batch run id column can be filtered for identifying the relevant log.
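For example, the relevant log entries can be pulled with a query along these lines; the batch run ID column is mentioned above, but its exact name (and the sample batch run ID value) are assumptions to verify against your FSI_MESSAGE_LOG definition.

```sql
-- Retrieve the operations log for one batch run (column name and value assumed).
SELECT *
  FROM fsi_message_log
 WHERE batch_run_id = 'Batch1';
```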
Check the .profile file in the installation home if you are not able to find the paths mentioned earlier.
The following messages will be available in the FSI_MESSAGE_LOG table after executing the batch:
Starting to update DIM_FINANCIAL_ELEMENT
Please provide application user name (In case OFSAAI application user name is not passed as a parameter).
Successfully Completed.
After successful execution of the batch, user can verify custom financial element present in DIM_FINANCIAL_ELEMENT table.
The Payment Pattern Loader provides the ability to load bulk payment pattern definitions through a back-end procedure. The loader reads the stage table data, performs data quality checks on it, and, if the stage data is valid, loads it into the FSI_PAYMENT_PATTERN and FSI_PAYMENT_PATTERN_EVENT tables.
The following stage table is used to input the payment pattern:
Table Name: STG_PAYMENT_PATTERN
Column Name |
Column Datatype |
Column Null Option |
Column Is PK |
Column Comment |
V_AMRT_TYPE |
VARCHAR2(5) |
NOT NULL |
Yes |
Amortization code between 1000 and 69999. Patterns within this range will be considered for payment processing |
N_EVENT_ID |
NUMBER(5,0) |
NOT NULL |
Yes |
Event Identity Number |
N_SPLIT_ID |
NUMBER(5,0) |
NOT NULL |
Yes |
Holds the number of patterns within a split pattern |
V_PATTERN_TYPE |
VARCHAR2(40) |
NOT NULL |
Yes |
List of values could be Absolute, Relative, Split |
V_TERM_TYPE |
VARCHAR2(40) |
NOT NULL |
Yes |
List of values could be Principal and Interest, Principal Only, Interest Only, Level Principal, Final Principal & Interest, Other |
V_AMRT_TYPE_DESC |
VARCHAR2(255) |
NOT NULL |
No |
Alpha numeric value |
N_PCT_VALUE |
NUMBER(8,4) |
NULL |
No |
Percentage applied to each pattern in case of split pattern type |
V_PAYMENT_EVENT_MONTH |
VARCHAR2(20) |
NULL |
No |
Month in which payment event should occur |
N_PAYMENT_EVENT_DAY |
NUMBER(2) |
NULL |
No |
Number of Days Payment type |
N_PAYMENT_EVENT_FREQ |
NUMBER(5,0) |
NULL |
No |
Number of times payment event should occur |
V_PAYMENT_EVENT_FREQ_MULT |
VARCHAR2(40) |
NULL |
No |
List of values could be Days, Months, Years |
N_PAYMENT_EVENT_REPEAT_VALUE |
NUMBER(5,0) |
NULL |
No |
Holds number of times payment frequency should repeat |
N_AMOUNT |
NUMBER(14,2) |
NULL |
No |
Amount |
V_AMOUNT_TYPE |
VARCHAR2(40) |
NULL |
No |
List of values could be % of original payment, % of current payment, absolute value |
V_PAYMENT_TYPE |
VARCHAR2(30) |
NULL |
No |
List of values could be Conventional, Level principal, Non-amortizing |
The loader program performs the following data quality checks:
1. The following values are checked against the relevant lookup tables as mentioned:
§ Pattern Type: FSI_PATTERN_TYPE_MLS
§ Cash Flow Type: FSI_PAYMENT_TYPE_MLS (should accept only 100 - Principal and Interest, and 300 - Interest Only)
§ Month: FSI_MONTHS_MLS
§ Multiplier: FSI_MULTIPLIER_MLS
§ Payment Method: FSI_AMOUNT_TYPE_MLS (For Conventional accept only % of Original Payment, % of Current Payment and Absolute value)
§ Payment Type: FSI_PMT_PATTERN_TYPE_MLS
2. While defining any pattern type, such as relative or absolute, the MONTH and DAY combination should be unique.
3. The MONTH and DAY pair should form a valid month and day combination, such as 31 days for January and 28 days for February (leap years are not considered), and so on.
4. If the cash flow value is Principal and Interest, then N_AMOUNT cannot be blank. If it is Interest Only, then V_PAYMENT_TYPE and N_AMOUNT should be blank.
5. When the payment type is Non-amortizing and the payment pattern is Relative, N_PAYMENT_EVENT_FREQ and N_PAYMENT_EVENT_REPEAT_VALUE should have values ranging from 1 to 9999.
6. One split pattern can have any number of definitions; however, the sum of N_PCT_VALUE across all the definitions should be 100%, and all the payment patterns in the split should be defined.
7. The following fields should have these validations in place:
§ Day: positive integer from 1 to 31, depending on the month
§ Percentage: positive integer or decimal number
§ Frequency: positive integer from 1 to 9999
§ Repeat: positive integer from 1 to 9999
§ Value: integer from 0 to 9999999999
8. For each payment pattern and payment type combinations fields relevant to that would be populated by the user remaining columns should be populated with default values:
§ Payment Pattern: Absolute
§ Payment Type: Conventional / Level Principal
§ Columns populated with user values: Code, Description, Pattern Type, Payment Type, Month, Day, Cash Flow Type, Payment Method, Value, Percentage (in case of Split pattern type)
§ Payment Type: Non-amortizing
§ Columns populated with user values: Code, Description, Pattern Type, Payment Type, Month, Day, Percentage (in case of Split pattern type)
§ Payment Pattern: Relative
§ Payment Type: Conventional / Level Principal
§ Columns populated with user values: Code, Description, Pattern Type, Payment Type, Frequency, Multiplier, Repeat, Cash Flow Type, Payment Method, Value, Percentage (in case of Split pattern type)
§ Payment Type: Non-amortizing
§ Columns populated with user values: Code, Description, Pattern Type, Payment Type, Frequency, Multiplier, Repeat, Percentage (in case of Split pattern type).
The loader program defaults the values for each column when the values provided by the user are not relevant to the pattern and payment type they defined.
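As an illustration of checks 2 through 4 above, a minimal sketch of the MONTH/DAY and cash flow validations might look as follows. This is a hypothetical Python sketch only; the actual validations run inside the loader's PL/SQL code, and the function name, message strings, and row shape here are assumptions:

```python
# Hypothetical sketch of the stage-table checks described above; the real
# validations are performed by the Payment Pattern Loader in PL/SQL.
DAYS_IN_MONTH = {1: 31, 2: 28, 3: 31, 4: 30, 5: 31, 6: 30,
                 7: 31, 8: 31, 9: 30, 10: 31, 11: 30, 12: 31}

def validate_absolute_row(month, day, cash_flow_type, amount):
    """Return the list of violations for one absolute-pattern row."""
    errors = []
    # MONTH/DAY must form a valid calendar pair (leap years not considered).
    if month not in DAYS_IN_MONTH or not 1 <= day <= DAYS_IN_MONTH[month]:
        errors.append("invalid MONTH/DAY combination")
    # 100 (Principal and Interest) requires an amount; 300 (Interest Only)
    # requires N_AMOUNT to be blank.
    if cash_flow_type == 100 and amount is None:
        errors.append("N_AMOUNT cannot be blank for Principal and Interest")
    if cash_flow_type == 300 and amount is not None:
        errors.append("N_AMOUNT must be blank for Interest Only")
    return errors
```

For example, a February 30 row would be flagged for the invalid MONTH/DAY pair regardless of its cash flow type.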
There are two ways to execute the Payment Pattern Loader procedure:
· Running Procedure Using SQL*Plus
To run the procedure from SQL*Plus, log in to SQL*Plus as the Schema Owner:
SQLPLUS > declare
result number;
begin
result := fn_paymentpattern ('Payment_Pattern_20121212_1','20121212');
end;
/
§ BATCH_RUN_ID is any string to identify the executed batch.
§ AS_OF_DATE is in the format YYYYMMDD
· Payment Pattern Loader Procedure Using OFSAAI Batch Maintenance
To execute Payment Pattern Loader from OFSAAI Batch Maintenance, a seeded Batch is provided.
The batch parameters are:
§ Datastore Type: Select the appropriate datastore from list.
§ Datastore Name: Select the appropriate name from the list.
§ IP address: Select the IP address from the list.
§ Rule Name: fn_paymentpattern
Below is the list of error messages that can be viewed in the View Log from the UI, or in the FSI_MESSAGE_LOG table from the back end by filtering on the given batch ID. On successful completion of each task, messages are written to the log table.
In the event of failure, following are the list of errors that may occur during the execution:
· Exception 1: PATTERN TYPE IS NOT MATCHING THE LIST OF VALUES
· Exception 2: TERM TYPE IS NOT MATCHING THE LIST OF VALUES
· Exception 3: PAYMENT MULTIPLIER FREQ IS NOT MATCHING THE LIST OF VALUES
· Exception 4: EVENT MONTH IS NOT MATCHING THE LIST OF VALUES
· Exception 5: PMT PATTERN IS NOT MATCHING THE LIST OF VALUES
· Exception 6: AMOUNT TYPE IS NOT MATCHING THE LIST OF VALUES
· Exception 7: v_amrt_type_cd range between 1000 to 69999
· Exception 8: n_payment_event_freq_cd and n_payment_event_repeat_value should have values range between 1 to 9999
· Exception 9: N_PCT_VALUE POSITIVE INTEGER OR DECIMAL NUMBER
· Exception 10: N_SPLIT_ID POSITIVE NUMBER
· Exception 11: If Term Type is PRINCIPAL AND INTEREST, then it should range from 1 to 999999999
· Exception 12: If Term Type is INTEREST ONLY, then AMOUNT and AMOUNT TYPE CD should be blank
· Exception 13: ERR while deleting stg_payment_pattern unwanted records
· Exception 14: Month and Day pair should have valid combination. Like January 31 days
· Exception 15: Pct value should be 100 in case of split pattern for other patterns it should be =0
· Exception 16: Error during percentage check
· Exception 17: Error during inserting
This loader provides the ability to load user-defined GAP Limits through a back-end procedure.
For more information, see the ALM User Guide on OHC.
The Loader uses the following staging and target tables:
· STG_ALM_GAP_LIMIT_DTL — This staging table contains preliminary user-provided data that will subsequently undergo data quality checks.
· FSI_ALM_GAP_LIMIT_DTL — The loader copies limit bucket-sets into this table after quality checks; only bucket-sets that pass the quality checks are populated in this table.
There are two ways to execute the Gap Limit Loader procedure:
· Running Procedure Using SQL*Plus
To run the function from SQL*Plus, log in to SQL*Plus as the Schema Owner. The loader requires two parameters:
§ Batch Execution Name
§ As Of Date
Syntax:
fn_load_fsi_alm_gap_limits (batch_run_id IN VARCHAR2, as_of_date IN VARCHAR2)
For example:
SQLPLUS > declare
result number;
begin
result := fn_load_fsi_alm_gap_limits ('INFODOM_20100405','20100405');
end;
/
· Gap Limit Loader Procedure Using OFSAAI Batch Maintenance
To execute the Gap Limit Loader from OFSAAI Batch Maintenance, a seeded Batch <INFODOM>_GAP_LIMITS_LOADER is provided.
The batch parameters are:
§ Datastore Type: Select the appropriate datastore from list.
§ Datastore Name: Select the appropriate name from the list.
§ IP address: Select the IP address from the list.
§ Rule Name: fn_load_fsi_alm_gap_limits
§ Parameter List: None
Below is the list of error messages that can be viewed in the View Log from the UI, or in the FSI_MESSAGE_LOG table from the back end by filtering on the given batch ID. On successful completion of each task, messages are written to the log table.
In the event of failure, following are the list of errors that may occur during the execution:
· Exception 1: ALM GAP Limit Loader exited but no records were inserted into FSI table.
All GAP limit bucket-sets in the STG table failed data quality checks and/or there was some internal error
· Exception 2: Errors recorded in internal memory but could not be logged in FSI_MESSAGE_LOG. Exiting.
Error messages could not be logged
· Exception 3: Could not insert records into internal memory. It may be a count mismatch b/w consolidated_records and pk_iter.
Internal processing of limit bucket-sets failed during execution, please contact support
· Exception 4: No Records found in STG Table for As_Of_Date: <DATE>
STG table had no records for the selected date of execution of utility
· Exception 5: Error in prc_load_fsi_alm_gap_limits: <ERROR MESSAGE>
Unexpected error in execution
· Exception 6: Invalid Legal_Entity_Code: <CODE>
Check if the legal entity code matches with valid legal entities in DIM_LEGAL_ENTITY_B (leaf nodes only and should be enabled)
· Exception 7: Invalid Org_Unit_Code: <CODE>
Check if the organisation unit code matches with valid codes in DIM_ORG_UNIT_B(leaf nodes only and should be enabled)
· Exception 8: Invalid Currency: <CCY>
Please ensure that the currency code for a bucket set is a valid ISO code
· Exception 9: Invalid Currency_Type_Code: <CCY_Type>
Check if the currency type is valid and present in FSI_CURRENCY_TYPE_MLS
· Exception 10: Invalid Time_Bucket_Name: <BKT_NAME>
Check if the bucket name provided matches valid entries in FSI_TIME_BUCKET_MASTER
· Exception 11: Invalid Bucket_Number: <NUM> for Time_Bucket_Name: <BKT_NAME>
Ensure the bucket number in DIM_RESULT_BUCKET corresponds to the bucket name in the row; names are case insensitive
· Exception 12: Invalid Start_Date_Index<INDEX> for Time_Bucket_Name: <BKT_NAME>
Ensure the start date index in FSI_LR_IRR_BUCKETS_AUX matches the given bucket name
· Exception 13: Invalid Forecast_Rate_Rule_Name: <FCST_RULE_NAME>
Ensure the forecast rate rule applied corresponds to those present in FSI_M_OBJECT_DEFINITION_TL; Names are case insensitive
· Exception 14: Invalid Scenario_Name: <SCEN_NAME> for Forecast_rate_rule_name: <FCST_RULE_NAME>
The scenario applied should correspond to the forecast rate rule being applied
· Exception 15: Invalid Repricing_GAP_Measure: <MEASURE>
Repricing gap measure should be one of 'NET REPRICE GAP', 'CUMULATIVE REPRICE GAP'; case insensitive
· Exception 16: Eff_End_Date[<DATE>] must be later than or same as AS_OF_DATE[<EXECUTION_DATE>]
Execution date must always be less than the effective end date of a limit bucket-set
· Exception 17: Eff_End_Date[<DATE>] must be later than Eff_Start_Date[<DATE>]
The effective end date of a limit-bucket set must be later than its start date
· Exception 18: Invalid Limit_Method: <LIMIT_METHOD>
Bucket limit method must be one of 'ABSOLUTE' or 'RELATIVE'; case insensitive
· Exception 19: GAP_Limits cannot be negative
Self-explanatory
· Exception 20: GAP lwr_limit must be lesser than upr_limit
Self-explanatory
· Exception 21: Bucket Continuity Constraint: Current bucket's lower_limit must be previous bucket's upper_limit+1
Self-explanatory
· Exception 22: Bucket Limit_Method Mismatch: All buckets in a set must follow the first bucket's limit method
Self-explanatory
· Exception 23: For Limit Method: RELATIVE, first bucket's lower_limit must be 0
Self-explanatory
· Exception 24: For Limit Method: RELATIVE, last bucket's upper_limit must be 100
Self-explanatory
· Exception 25: For Limit Method: RELATIVE, lower_limit must be b/w [0,upper_limit)
The lower limit of a bucket in a bucket-set must start from the previous bucket's upper limit+1, or be 0 if it is the first bucket in the bucket-set.
· Exception 26: For Limit Method: RELATIVE, upper_limit must be b/w (lower_limit,100]
The upper limit of a bucket must be greater than its lower limit, or be exactly 100 if it is the last bucket in the set.
· Exception 27: Record will be rejected due to error(s) in row(s) indicated by Bkt_num(s): <BAD_BKT_NUMS>
The row itself passes data quality checks but will still be rejected due to errors elsewhere in its limit bucket-set
· Exception 28: Error in Data Quality Validator : dq_validator.
DQ validation function failed
· Exception 29: Error in bulk_logging, erroneous rows could not be recorded.
Bulk logging of error messages to FSI_MESSAGE_LOG failed; transactions to the FSI_* table will be rolled back
· Exception 30: No good records were found in the STG Table.
All GAP limit bucket-sets in the STG table failed data quality checks
· Exception 31: Error in target_insert_update: <ENGINE GENERATED ERROR MESSAGE>.
Possible reason: two or more identical bucket sets with typographical differences are present (that is, one bucket has v_scenario_name = A_Lim_Bucket and another has v_scenario_name = a_lim_bucket). It is advisable to avoid stray spaces in strings and to keep everything in all-caps
· Exception 32: Error in target_insert_update
Function for inserting good records to FSI_* table failed
· Exception 33: Could not cleanup old records for batch run id: <BATCH_RUN_ID>
Previous error messages in FSI_MESSAGE_LOG for the same BATCH_RUN_ID could not be deleted; the execution may have still gone through
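The bucket-set constraints behind Exceptions 19 through 26 can be sketched as follows. This is an illustrative Python sketch only; the actual checks are performed inside fn_load_fsi_alm_gap_limits, and the function name, data shape, and message strings here are assumptions:

```python
# Illustrative check of the bucket-set rules described above; the real
# validation runs in PL/SQL inside fn_load_fsi_alm_gap_limits.
def validate_bucket_set(buckets, limit_method):
    """buckets: list of (lower_limit, upper_limit) pairs in bucket order."""
    errors = []
    for lower, upper in buckets:
        if lower < 0 or upper < 0:
            errors.append("GAP_Limits cannot be negative")
        if lower >= upper:
            errors.append("lwr_limit must be lesser than upr_limit")
    # Continuity: each bucket's lower limit must be the previous upper limit + 1.
    for (prev_lo, prev_up), (lo, up) in zip(buckets, buckets[1:]):
        if lo != prev_up + 1:
            errors.append("bucket continuity constraint violated")
    if limit_method == "RELATIVE" and buckets:
        if buckets[0][0] != 0:
            errors.append("first bucket's lower_limit must be 0")
        if buckets[-1][1] != 100:
            errors.append("last bucket's upper_limit must be 100")
    return errors
```

A set such as [(0, 40), (41, 100)] with method RELATIVE passes all of these checks; introducing a gap between 40 and 45, for example, violates the continuity constraint.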
As per the standardized approach of IRRBB, the loss in economic value of equity is calculated for each currency with material exposure. A material exposure is defined as one accounting for more than 5% of either banking book assets or liabilities. The utility moves data from FSI_D_<INSTRUMENT TABLE> to FCT_ALM_SIGNIFICANT_CURRENCY.
For more information, see the ALM User Guide.
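The materiality test described above can be illustrated with a minimal sketch. Python is used only for illustration; in the product this computation is performed by fn_signf_currency_loader over the FSI_D_* instrument tables, and the function name and data shape here are assumptions:

```python
# Simplified illustration of the 5% materiality test; the actual computation
# is performed by fn_signf_currency_loader against the FSI_D_* tables.
def material_currencies(exposures, threshold=0.05):
    """exposures: {ccy: (assets, liabilities)}. Returns currencies whose
    share of total banking-book assets OR liabilities exceeds threshold."""
    total_assets = sum(a for a, _ in exposures.values())
    total_liabs = sum(l for _, l in exposures.values())
    material = []
    for ccy, (assets, liabs) in exposures.items():
        if (total_assets and assets / total_assets > threshold) or \
           (total_liabs and liabs / total_liabs > threshold):
            material.append(ccy)
    return sorted(material)
```

Note that a currency qualifies if it crosses the threshold on either side of the balance sheet, which mirrors the "assets or liabilities" wording above.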
· FSI_D_CREDIT_CARDS
· FSI_D_MUTUAL_FUNDS
· FSI_D_RETIREMENT_ACCOUNTS
· FSI_D_TERM_DEPOSITS
· FSI_D_ASSET_BACK_SEC
· FSI_D_BORROWINGS
· FSI_D_INVESTMENTS
· FSI_D_ANNUITY_CONTRACTS
· FSI_D_CASA
· FSI_D_LOAN_CONTRACTS
· FSI_D_CREDIT_LINES
· FSI_D_GUARANTEES
· FSI_D_MERCHANT_CARDS
· FSI_D_MORTGAGES
· FSI_D_TRUSTS
· FSI_D_CAPFLOORS
· FSI_D_MM_CONTRACTS
· FSI_D_LEASES
· FSI_D_OTHER_SERVICES
· FSI_D_LEDGER_STAT_INSTRUMENT
· FSI_D_LOAN_COMMITMENTS
· FSI_D_FUTURES
· FSI_D_SWAPS
There are two ways to execute the Material Currency Loader procedure:
· Running Procedure Using SQL*Plus
To run the procedure from SQL*Plus, log in to SQL*Plus as the Schema Owner:
For example:
SQLPLUS > declare
result number;
begin
result := fn_signf_currency_loader ('ALMUSER','Y',.05,'>','USD');
end;
/
Here,
§ ALMUSER is the User Name for which the task is to be executed.
§ Set the Off-Balance Sheet Flag to 'Y' to enable off-balance sheet instruments, 'N' to disable them.
§ .05 is the Material Currency Threshold. .05 is the threshold defined by the BASEL norms, but the parameter is configurable in [0,1], limits inclusive.
§ > is the Comparison Operator; valid choices are >, >=, <, and <=.
§ USD is the Reporting Currency ISO code.
· Material Currency Loader Procedure Using OFSAAI Batch Maintenance
To execute Material Currency Loader from OFSAAI Batch Maintenance, a seeded Batch <INFODOM>_MATERIAL_CURR_IDENTIFICATION is provided.
The batch parameters are:
§ Datastore Type: Select the appropriate datastore from list.
§ Datastore Name: Select the appropriate name from the list.
§ IP address: Select the IP address from the list.
§ Rule Name: fn_signf_currency_loader
§ Parameter List: User Name, Off-Balance Sheet Flag, Material Currency Threshold, Comparison Operator, and Reporting Currency Code (for a description of the configurable parameters, see above)
During the course of execution, certain exception messages are logged in FSI_MESSAGE_LOG. These can be viewed from the UI or from the back-end by filtering FSI_MESSAGE_LOG for the current ‘batch_run_id’. On successful completion of the task, a message indicating success is logged. In the event of execution failure or intermediate errors, messages from the following list can be logged:
· Exception 1: Table '<INSTRUMENT_TABLE_NAME>' not found in current schema. This table will be ignored.
The table may have been removed or currently does not exist in the schema
· Exception 2: Error during inserting asset records from '<INSTRUMENT_TABLE_NAME>': <ENGINE_GENERATED_MESSAGE>. Table will be ignored.
The columns types of the table may have changed or columns may have been removed
· Exception 3: Error in utility/task 'ALM Material Currency Identification'. Please check if the username is valid. In rare cases, it may be an internal DB error.
The username provided either does not exist or the user's product dimension preference is not one of PRODUCT, GENERAL_LEDGER, or COMMON COA. Very rarely, it could be an internal mapping error
· Exception 4: Error in utility/task 'ALM Material Currency Identification'; off-balance-sheet flag must be 'Y' or 'N'.
Illegal user-input parameter
· Exception 5: Error in utility/task 'ALM Material Currency Identification' during insert/merge operation on '<INSTRUMENT_TABLE_NAME>'. OPERATION ABORTED.
Internal error during merging of liability records with asset records in the TMP_* table (the TMP table stores intermediate records before final consolidation into the FCT_* table)
· Exception 6: Error in utility/task 'ALM Material Currency Identification'; Please ensure that 1. material threshold is between [0,1]; 2. a valid comparison operator has been provided; 3. Reporting currency is a valid ISO code.
Illegal user-input parameter(s)
· Exception 7: Error in utility/task 'ALM Material Currency Identification'; Insert operation on FCT_ALM_SIGNIFICANT_CURRENCY failed, OPERATION ABORTED. Probable reason: foreign key violation in FCT_* table on column 'N_ENTITY_SKEY'.
Internal error during insertion of final records into the result table due to a foreign key violation
· Exception 8: Error in utility/task 'ALM Material Currency Identification'; Insert/Update operation on DIM_DATES failed; OPERATION ABORTED.
Internal error because the chosen date of execution does not exist in an internal table of indexed dates and associated information (DIM_DATES)
· Exception 9: Unhandled exception in utility 'ALM Material Currency Identification'; FCT_ALM_SIGNIFICANT_CURRENCY will be returned to its initial state. Error: <ENGINE_GENERATED_ERROR>
Unexpected internal error during execution; it may be a primary key violation during insertion of records into the FCT_* table.
· Exception 10: Could not cleanup old records for batch run id: <BATCH_RUN_ID>
Previous error messages in FSI_MESSAGE_LOG for the same BATCH_RUN_ID could not be deleted; the utility's execution may have still gone through
The Behaviour Pattern Loader provides the ability to load bulk Behaviour Pattern definitions through a back-end procedure. This loader reads input data from the STG_BEHAVIOUR_PATTERN_NRP table, performs data quality checks on it, and loads the definitions into the FSI_BEHAVIOUR_PATTERN_MASTER and FSI_BEHAVIOUR_PATTERN_DETAIL tables based on the following conditions:
· New BP: If the pattern code in the stage table is not already present in the FSI table, the data is inserted after quality checks and the necessary transformations.
· Existing BP: If the pattern code in the stage table is already present in the FSI table, the 'created date' values are compared. If the date in the stage table is higher (that is, more recent) than that in the FSI table, the definition is overwritten; otherwise it is skipped and an appropriate message is logged.
The utility is called from a Batch named Behaviour Pattern Loader. It is designed to be executed for a single AS OF DATE only; that is, it fetches the data from the STAGE table for the given MIS date and pushes the records to the DTL/MASTER tables accordingly.
The structure of the Stage table is as follows:
Table Name: STG_BEHAVIOUR_PATTERN_NRP:
Column Name | Logical Name | Data Type | Null Allowed? | PK | Column Comments
FIC_MIS_DATE | Extraction Date | DATE | No | Yes | Date on which the behaviour pattern definition was created. Normally indicates the calendar date from which it is valid.
F_REPLICATING_PORTFOLIO_FLG | Replicating Portfolio Flag | CHAR(1) | Yes | No | Indicates whether the behaviour pattern definition is for a replicating portfolio (FTP use case) or not. List of values: Y for Replicating Portfolio and N for Non-replicating Portfolio.
N_PATTERN_CD | Pattern Code | NUMBER(5) | No | Yes | Code assigned to the behaviour pattern definition. This must be a number between 70000 and 99999.
N_PATTERN_PERCENTAGE | Pattern Percentage | NUMBER(22,6) | Yes | No | Stores the percentage of current balance that is used as cash flow on the event date. Within one pattern code, the sum of percentages must not exceed 100.
N_SEQUENCE_NUMBER | Sequence Number | NUMBER(3) | No | Yes | Within one pattern, multiple tenors can be defined. The sequence denotes the order of each tenor.
N_PATTERN_TENOR | Pattern Tenor | NUMBER(5) | Yes | No | The tenor specified in the behaviour pattern definition, used to decide the cash flow event. It must be read in conjunction with the Tenor Unit.
V_PATTERN_SUBTYPE_CD | Behaviour Sub Type Display Code | VARCHAR2(5) | Yes | No | Indicates the sub-type of behaviour for which the pattern is defined. Expected values: if the Behaviour Type Display Code is Non Maturing (NM), CR for Core and VL for Volatile; if Non Performing (NP), SS for Substandard, DF for Doubtful, and L for Loss; if Devolvement and Recovery (DR), SD for Sight Devolvement, SR for Sight Recovery, UD for Usance Devolvement, UR for Usance Recovery, U for Usance, and S for Sight.
V_CREATED_BY | Created By | VARCHAR2(20) | Yes | No | Identifier for the user or model that created the behaviour pattern definition. It can also denote the system from which the definition is sourced.
V_PATTERN_DESCRIPTION | Pattern Description | VARCHAR2(255) | Yes | No | Description for the behaviour pattern definition given by the user.
V_PATTERN_NAME | Pattern Name | VARCHAR2(30) | Yes | No | Name of the behaviour pattern definition given by the user.
V_PATTERN_TYPE_CD | Behaviour Type Display Code | VARCHAR2(40) | Yes | No | Indicates the type of behaviour for which the pattern is defined. Expected values: NM for Non Maturing, NP for Non Performing, and DR for Devolvement and Recovery.
V_PATTERN_TENOR_UNIT | Pattern Tenor Unit | VARCHAR2(1) | Yes | No | Indicates the unit in which the Tenor is specified. List of values: D for Days, M for Months, and Y for Years.
The following checks are performed on the intermediate data populated by the user in the STAGE table, and the listed values are checked against the relevant look-up tables:
· Pattern Code should be between 70000 and 99999.
· BP Pattern Name:
§ If it is a new pattern code, then name should not be already used by another BP.
§ If it is an existing pattern code, the name should be the same as that of the BP existing in the FSI table. If the names are different, the one from the FSI table is retained.
· Behaviour Type Display Code should be present in table FSI_BEHAVIOUR_TYPE_CD, column BEHAVIOUR_TYPE_DISPLAY_CD.
· Behaviour Sub Type Display Code should be present in table FSI_BEHAVIOUR_SUB_TYPE_CD, column BEHAVIOUR_SUB_TYPE_DISPLAY_CD.
· Tenor should be a valid number.
· Tenor unit should be present in table FSI_MULTIPLIER_CD, column MULTIPLIER_CD.
· Percentage: for one pattern code, the sum of percentages across all Behaviour Type Display Codes must not exceed 100.
· Sequence: for one pattern code, the sequence number must not repeat.
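As a minimal sketch, the per-row checks above might look like the following. This is illustrative Python only; the real look-ups go against FSI_BEHAVIOUR_TYPE_CD, FSI_BEHAVIOUR_SUB_TYPE_CD, and FSI_MULTIPLIER_CD, and the function name and message strings here are assumptions:

```python
# Hedged sketch of the stage-table checks listed above; the actual
# validations run inside the Behaviour Pattern Loader in PL/SQL.
VALID_SUBTYPES = {"NM": {"CR", "VL"},
                  "NP": {"SS", "DF", "L"},
                  "DR": {"SD", "SR", "UD", "UR", "U", "S"}}
VALID_TENOR_UNITS = {"D", "M", "Y"}

def validate_bp_row(pattern_cd, bp_type, sub_type, tenor, tenor_unit):
    """Return the list of violations for one stage-table row."""
    errors = []
    if not 70000 <= pattern_cd <= 99999:
        errors.append("Pattern Code should be between 70000 and 99999")
    if bp_type not in VALID_SUBTYPES:
        errors.append("invalid Behaviour Type Display Code")
    elif sub_type not in VALID_SUBTYPES[bp_type]:
        errors.append("invalid Behaviour Sub Type for given type")
    if not (isinstance(tenor, (int, float)) and tenor > 0):
        errors.append("Tenor should be a valid number")
    if tenor_unit not in VALID_TENOR_UNITS:
        errors.append("invalid Tenor Unit")
    return errors
```

For example, sub-type CR is valid only under behaviour type NM, so a row pairing it with NP would be rejected.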
· Executing the Procedure Using SQL*Plus
To run the procedure from SQL*Plus, log in to SQL*Plus as the Schema Owner:
SQLPLUS > declare
result number;
begin
result := fn_Behaviour_pattern_loader ('Behaviour_Pattern_20121212_1','20121212');
end;
/
§ BATCH_RUN_ID is any string to identify the executed batch.
§ AS_OF_DATE is in the format YYYYMMDD
· Behaviour Pattern Loader Procedure Using OFSAAI Batch Maintenance
To execute Behaviour Pattern Loader from OFSAAI Batch Maintenance, a seeded Batch is provided.
The batch parameters are:
§ Datastore Type: Select the appropriate datastore from list.
§ Datastore Name: Select the appropriate name from the list.
§ IP address: Select the IP address from the list.
§ Rule Name: fn_behaviourpattern
Below is a list of error messages generated during execution. They can be viewed in the View Log from the UI or in the FSI_MESSAGE_LOG table after filtering for the given batch_run_id. Appropriate messages are also logged on successful completion or failure of the utility, as the case may be.
Following error messages may be logged during execution:
· Exception 1: No records found in STG_BEHAVIOUR_PATTERN_NRP for <DATE>
STG_* table has no records for the chosen execution date
· Exception 2: Error in Wrapper_bp_loader: <ENGINE_GENERATED_ERROR_MESSAGE>
Execution of the utility failed due to some unexpected internal error
· Exception 3: Issue in look_up_tbl procedure: <ENGINE_GENERATED_ERROR_MESSAGE>
Procedure to look-up values against matching tables failed; this is an internal error
· Exception 4: Issue in procedure: key_val_look_up: <ENGINE_GENERATED_ERROR_MESSAGE>
Procedure to look-up key-value pairs against matching tables failed; this is an internal error
· Exception 5: ROW # <ROWNUM>: N_PATTERN_CD out of range.
N_PATTERN_CD is outside the valid range of 70000 to 99999
· Exception 6: ROW # <ROWNUM>: N_TENOR is not a valid number (must be >0).
Self-explanatory
· Exception 7: ROW # <ROWNUM>: BEHAVIOUR TYPE DOES NOT MATCH VALID BEHAVIOUR TYPES
Behaviour Pattern Type for a definition must be one of NM, NP or DR
· Exception 8: ROW # <ROWNUM>: TENOR UNIT DOES NOT MATCH VALID TENOR UNITS.
Tenor Units must be one of D, M or Y
· Exception 9: ROW # <ROWNUM>: BEHAVIOUR SUB-TYPE EITHER DOES NOT EXIST OR IS INCORRECT FOR GIVEN BEHAVIOUR TYPE.
Behaviour Sub-Type in a given pattern set must be as follows: <BEHAVIOUR TYPE>::[VALID SUB-TYPES] ::: NM[CR,VL], NP[SS,DF,L] & DR[SD,SR,UD,UR,U,S]
· Exception 10: ROW # <ROWNUM>: Warning: Replicating_Portfolio_Flag was NOT NULL but neither Y nor N; Will be replaced with N
Self-explanatory
· Exception 11: ROW # <ROWNUM>: New pattern code but name is already in use for PATTERN_CD: <PATTERN_CD_NAME>. Please note names are NOT case sensitive.
The pattern will be rejected as two pattern code definitions cannot have the same name
· Exception 12: ROW # <ROWNUM>: Warning: PATTERN_CODE already exists in FSI_BEHAVIOUR_PATTERN_MASTER with name: <NAME>. This name will be retained. Only the newer(date) of these two records will be kept.
If a pattern code present in the STG_* table already exists in the FSI_*_MASTER table, the existing name from FSI_*_MASTER is retained, and only the newer of the two definitions (by date) is kept.
· Exception 13: Error in Dq_validator: <ENGINE_GENERATED_MESSAGE>
Function for validating STG_* table records failed; this is an internal error
· Exception 14: Definitions with negative N_PATTERN_PERCENTAGE. This PATTERN_CD will be ignored.
Some pattern definitions in the STG_* table had negative percentage values
· Exception 15: Pattern codes with N_PATTERN_PERCENTAGE violation found (>100). These will be ignored.
The percentage allocation across behaviour pattern types in a pattern definition in the STG_* table must not exceed 100
· Exception 16: No violations of N_PATTERN_PERCENTAGE found for any pattern code & behaviour type combination.
The definitions that remained after data quality checks had no anomalies in their pattern percentages
· Exception 17: Error in procedure percentage_chk: <ENGINE_GENERATED_MESSAGE>
The percentage checker failed; this is an internal error
· Exception 18: Duplicate older Behaviour Pattern Definition in STG_TABLE. It will be ignored.
The pattern definitions in STG_* table will be ignored if they are older than those in MASTER table
· Exception 19: Date comparison check passed; either no clashes found between STG_BEHAVIOUR_PATTERN_NRP records and FSI_BEHAVIOUR_PATTERN_MASTER records or all definitions in Stage Table were older.
Self-explanatory
· Exception 20: Error in flg_older: <ENGINE_GENERATED_MESSAGE>
Procedure for verification of date-clashes between new records in STG_* and those already present in FSI_* MASTER failed; this is an internal error
· Exception 21: Error During Merge Operation in FSI_BEHAVIOUR_PATTERN_MASTER: <ENGINE_GENERATED_MESSAGE>
Records could not be inserted into FSI_BEHAVIOUR_PATTERN_MASTER; this is either an internal error or a Primary Key violation
· Exception 22: Error During Merge Operation in FSI_BEHAVIOUR_PATTERN_DETAIL: <ENGINE_GENERATED_MESSAGE>
Records could not be inserted into FSI_BEHAVIOUR_PATTERN_DETAIL; this is either an internal error or a Foreign Key violation
· Exception 23: Could not clean-up old records for batch run id: <BATCH_RUN_ID>
Previous error messages in FSI_MESSAGE_LOG for the same BATCH_RUN_ID could not be deleted; the execution may have still gone through