4       Regulatory Reporting Solution Data Flow

This chapter provides an understanding of the data flow. It explains what happens within the data flow and how various processing aspects are integrated with the overall data flow.

Topics:

·        Data Preparation

·        Overview of OFS REG REP RBI User Interface 

·        Mapping of Results to Line Items in Reporting

·        Mapping Metadata

·        AgileREPORTER: Submission

Data Preparation

This section explains the input data preparation from OFSAA.

Topics:

·        Assumptions for Data Preparation

·        RBI Run Chart

·        Reclassification of Standard Dimensions

·        Mappers for Reclassification of Reg Dimensions

·        Configuring Setup Tables for Standard Set of Values

·        Run or Execution Expectations

·        Consolidation

·        Projection Data

·        Data Flow from Source Systems to Staging Area

·        Data Flow from Staging to Results Area

·        Data Flow from Staging to Processing Area

·        Data Flow from Processing to Results Area

·        Guidelines for Data Loading to Result Area Tables in Data Foundation for Regulatory Reporting Implementations

·        FSDF Entity Information

·        Fact Tables or Entities

Assumptions for Data Preparation

The following assumptions must be considered before data preparation:

1.     REG REP is a reporting solution that uses data from underlying fact tables directly for reporting. The end-user is expected to prepare the load for the required data in the reporting area accordingly. Although the solution has a thin processing layer to reclassify data to regulatory dimensions and bands, all processing measures are expected to come from the respective applications and be provided as required.

2.     It is integrated with the results area of the respective processing application, and any change in the underlying processing can disturb the REG REP data sourcing.

3.     Baseline and stress data must be populated with appropriate codes. Inaccurate mappings lead to inaccurate results. For details, see Relationship between Run and Stress.

4.     For the consolidation dimension (which has values such as Actual, Budget, and Forecast), all historical data is expected to be tagged as Actual so that vintage data can be reported, as per report requirements. For projection data, only one set of data is expected to be stored for a given Run and projection period (quarter or year).

5.     For all reporting requirements that need cash flows, the integration package expects bucketed cash flows as input (that is, a time bucket and a cash flow amount are expected for each cash flow).

6.     The MPIN_ACC_PROD_REG_PROD and MPIN_ACC_PARTY_REG_PARTY mapper tables must be populated (see Mappers for Reclassification of Reg Dimensions).

RBI Run Chart

Oracle Financial Services Regulatory Reporting for Reserve Bank of India Lombard Risk Integration Pack provides the RBI Run Chart listing the tasks required for the population of data for RBI reports. It covers the following tasks:

·        Setup Table Population

·        Stage Dimension Load

·        Seeded Dimension Data Population

·        Common Data Population

·        Common Tasks like Exchange Rate Population

·        RBI Specific Data Population and Transformation

·        Derived Entity Refresh

Download the OFS REG REP RBI 8.1.1.0.0 Run Chart from My Oracle Support (MOS).

Reclassification of Standard Dimensions

This section provides information about the Standard Dimension tables in the Regulatory Reporting for Reserve Bank of India Lombard Risk Integration Pack (OFS REG REP RBI) application and step-by-step instructions for maintaining them.

Topics:

·        Overview of Reclassification of Standard Dimensions 

·        Overview of Reclassification of Standard Dimensions Population 

·        Dimension Data Expectations through SCD 

·        Overview of Mappers for Reclassification of Standard Dimensions 

·        Maintenance of Mapper for Reclassification of Standard Dimensions 

·        Loading Mapper Maintenance through Backend 

·        Usage of Mapper Tables in Data Flow and Reports 

Overview of Reclassification of Standard Dimensions

There are certain Standard Dimensions in OFS REG REP RBI that are pre-populated with a standard set of values. These values are used by downstream applications for various reporting requirements. There are equivalent customer-specific dimension tables that are populated using the Slowly Changing Dimension (SCD) process. The user-specific values must be reclassified to the standard values because reporting expects this standard set of values. The reclassification is done using the out-of-the-box Mapper Definitions under the Mapper Maintenance screen.

Overview of Reclassification of Standard Dimensions Population

These are the out-of-the-box reclassifications from User Specific Dimensions to Standard Dimensions available in OFS REG REP RBI.

Table 6: Reclassification of Standard Dimensions Population

User Specific Dimension                          Standard Dimension
DIM_BALANCE_CATEGORY (Balance Category)          DIM_STD_BALANCE_CATEGORY (Standard Balance Category)
DIM_CREDIT_LINE_PURPOSE (Credit Line Purpose)    DIM_STD_CREDIT_LINE_PURPOSE (Standard Credit Line Purpose)
DIM_CREDIT_LINE_TYPE (Credit Line Type)          DIM_STD_CREDIT_LINE_TYPE (Standard Credit Line Type)
DIM_IRC (Interest Rate Curve)                    DIM_STANDARD_IRC (Standard Interest Rate Curve)
DIM_LOB (Line of Business)                       DIM_STANDARD_LOB (Standard Line of Business)
DIM_MITIGANT_TYPE (Mitigant Type)                DIM_STD_MITIGANT_TYPE (Standard Mitigant Type)
DIM_PARTY_TYPE (Party Type)                      DIM_STANDARD_PARTY_TYPE (Standard Party Type)
DIM_PRODUCT (Product)                            DIM_STANDARD_PRODUCT_TYPE (Standard Product Type)
DIM_GL_ACCOUNT (General Ledger)                  DIM_STD_GL_TYPE (Standard General Ledger Type)
DIM_VEHICLE_TYPE (Vehicle Type)                  DIM_STD_VEHICLE_TYPE (Standard Vehicle Type)
DIM_WRITE_OFF_REASONS (Write Off Reasons)        DIM_STD_WRITE_OFF_REASONS (Standard Write Off Reasons)
DIM_RECOVERY_TYPE (Recovery Type)                DIM_STD_RECOVERY_TYPE (Standard Recovery Type)

Dimension Data Expectations through SCD

By default, all standard dimensions will be pre-populated with seeded data. It is mandatory to have data in user-specific dimensions and then maintain the reclassifications. Therefore, you must execute the SCDs and then map the reclassification codes under Mapper Maintenance.

NOTE:   

For the BSR II report to create the expected page instances on Branch BSR Code, the same branch codes must be populated in stg_geography_master.v_branch_code_part1.

Overview of Mappers for Reclassification of Standard Dimensions

These are the out-of-the-box mappers available in OFS REG REP RBI for the standard dimension reclassifications:

·        MAP_PROD_CODE_STD_PROD_TYPE: Mapper for Product Code to Standard Product Code

·        MAP_PARTY_TYP_STD_PARTY_TYP: Mapper for Party Type Code to Standard Party Type Code

·        MAP_CRDLN_TYP_STD_CRDLN_TYP: Mapper for Credit Line Type to Standard Credit Line Type

·        MAP_DIM_IRC_STD_IRC: Mapper for Interest Rate Code to Standard Interest Rate Code

·        MAP_DIM_LOB_STD_LOB: Mapper for Line of Business Code to Standard Line of Business Code

·        MAP_BAL_CAT_STD_BAL_CAT: Mapper for Balance Category to Standard Balance Category

·        MAP_CRDLN_PUR_STD_CRDLN_PUR: Mapper for Credit Line Purpose to Standard Credit Line Purpose

·        MAP_MITG_TYP_STD_MITGN_TYP: Mapper for Mitigant Type to Standard Mitigant Type

·        MAP_CREDIT_SCR_MDL_REG_MDL: Mapper for Credit Score Model to Reg Credit Score Model

·        MAP_DIM_GL_ACCT_STD_GL_TYPE: Mapper for General Ledger Account to Standard General Ledger Account Type

·        MAP_GL_CODE_REP_LINE: Mapper for GL Code to Repline Code

·        MAP_RECVR_TYP_STD_RECVR_TYP: Mapper for Common Recovery Type to Standard Recovery Type

·        MAP_VEHCL_TYP_STD_VEHCL_TYP: Mapper for Vehicle Type to Standard Vehicle Type

·        MAP_WRTOFF_STD_WRTOFF_REASN: Mapper for Write Off Reasons to Standard Write Off Reasons

Maintenance of Mapper for Reclassification of Standard Dimensions

The mappers can be maintained in OFSAAI as follows.

1.     Log in to the OFSAAI application.

2.     Navigate to Financial Services Data Foundation, then select Unified Analytical Metadata, then select Business Metadata Management, and then select Map Maintenance.

Figure 30: Map Maintenance page

This illustration shows the navigation to the Map Maintenance page: select Financial Services Data Foundation, then Unified Analytical Metadata, then Business Metadata Management, and then Map Maintenance.

3.     For illustration, the Mapper for Mitigant Type to Standard Mitigant Type is selected here. Click Mapper Maintenance.

Figure 31: Map Maintenance Search Results page

This illustration shows the example of Mapper for Mitigant Type to Standard Mitigant Type in the map maintenance.

4.     OFS REG REP RBI maps the OTH and MSG values ready-to-use for this mapper. The remaining mappings can be maintained by the user according to user-specific values.

Figure 32: Mapper Maintenance Edit Page

This illustration shows the Map Maintenance Edit details.

Prerequisites for Mapper Maintenance

1.     Log in to the OFSAAI application.

2.     Navigate to Financial Services Data Foundation, then select Unified Analytical Metadata, then select Business Metadata Management, and then select Save Metadata. Load all the required user-specific dimensions using SCD.

3.     To resave these hierarchies, select them and click Save:

§       HCMDF001 - Hier - Map Common Product

§       HCMDF002 - Hier - Map Common Standard Product Type

§       HCMDF003 - Hier - Map Common Party Type

§       HCMDF004 - Hier - Map Common Standard Party Type

§       HCMDF005 - Hier - Map Common Interest Rate Curve

§       HCMDF006 - Hier - Map Common Standard Interest Rate Curve

§       HCMDF007 - Hier - Map Common Line of Business

§       HCMDF008 - Hier - Map Common Standard Line of Business

§       HCMDF009 - Hier - Map Common Credit Line Type

§       HCMDF010 - Hier - Map Common Standard Credit Line Type

§       HCMDF011 - Hier - Map Common Credit Line Purpose

§       HCMDF012 - Hier - Map Common Standard Credit Line Purpose

§       HCMDF013 - Hier - Map Common Mitigant Type

§       HCMDF014 - Hier - Map Common Standard Mitigant Type

§       HCMDF015 - Hier - Map Common Balance Category

§       HCMDF016 - Hier - Map Common Standard Balance Category

§       HCMDF017 - Hier - Map Common General Ledger Code

§       HCMDF018 - Hier - Map Common Standard General Ledger Type

§       HCMDF019 - Hier - Map Common Vehicle Type

§       HCMDF020 - Hier - Map Common Standard Vehicle Type 

§       HCMDF021 - Hier - Map Common Write Off Reasons

§       HCMDF022 - Hier - Map Common Standard Write Off Reasons

§       HCMDF023 - Hier - Map Common Recovery Type

§       HCMDF024 - Hier - Map Common Standard Recovery Type

Figure 33: Metadata Resave Page

This illustration shows the Metadata Resave screen with the list of available hierarchies. You can select the hierarchies and click Save.

Possible Mapping Combinations

One User Specific Dimension table in the source can be mapped only to one Standard Dimension table. One to Many or Many to Many mapping leads to errors in the T2T as the records are duplicated. From the illustration, the possible combinations for the Mitigant Type to Standard Mitigant Type mapping are One to One and Many to One mapping.

·        One to One Mapping: You can map one Mitigant Type data model to one Standard Mitigant Type data model using the Mapper Maintenance screen. Here, you must select one value in the Mitigant Type data model and one value in the Standard Mitigant Type data model.

·        Many to One Mapping: You can map many values in the Mitigant Type data model to one value in the Standard Mitigant Type data model using the Mapper Maintenance screen.

To conduct One to One or Many to One mapping:

1.     Login to OFSAAI application.

2.     Navigate to Financial Services Data Foundation, then select Unified Analytical Metadata, then select Business Metadata Management, and then select Map Maintenance.

Figure 34: Map Maintenance Page

This illustration shows the navigation to the Map Maintenance page: select Financial Services Data Foundation, then Unified Analytical Metadata, then Business Metadata Management, and then Map Maintenance.

3.     Click the +Add icon to create a new map; otherwise, select an existing map. For illustration, the Mapper for Party Type Code to Standard Party Type Code value is selected. Click the Mapper Maintenance icon.

4.     The Mapper Maintenance window opens (in this illustration, the Map - Mapper for Party Type Code to Standard Party Type Code window opens). To conduct One to One or Many to One mapping, in the Member Combinations section, click Add.

Figure 35: Map Maintenance Add Page

This illustration shows the map maintenance addition details.

5.     The Add Mappings pop-up window opens. In this illustration:

§       To map One to One, select one value in the Hier - Map Common Mitigant Type data model and one value in the Hier - Map Common Standard Mitigant Type data model, and click Go. Repeat this step for each One to One data model mapping, and then click Save.

In this illustration, MSG - Missing is mapped to AFC - Auto Financing Company.

Figure 36: One to One Mapping Page

This illustration shows the one to one mapping of data model.

§       To map Many to One, select multiple (two in this illustration) values in the Hier - Map Common Mitigant Type data model and one value in the Hier - Map Common Standard Mitigant Type data model, and then click Go. Click Save.

In this illustration, MSG-Missing and OTH-Others are mapped to the AFC-Auto Financing Company.

Figure 37: Many to One Mapping Page

This illustration shows the many to one mapping of data model.

6.     An acknowledgment is displayed: Confirm Save?

7.     Click Yes to confirm and save the data. In the Mapper Maintenance window, in the Mapped Combinations and Mapped Members sections, you can see the newly created mapping.

Loading Mapper Maintenance through Backend

Load each physical table in the Atomic Schema with the V_MAP_ID mentioned against each mapper, where V_MEMBER_1 is the customer-specific dimension's member code and V_MEMBER_2 is the standard dimension's member code.

This is the list of Mapper Physical Tables and required details:

Table 7: Mapper Physical Tables

PHYSICAL TABLE                 V_MAP_ID
MAP_MITG_TYP_STD_MITGN_TYP     1514359498413
MAP_DIM_IRC_STD_IRC            1511442223838
MAP_PROD_CODE_STD_PROD_TYPE    1511441227779
MAP_DIM_LOB_STD_LOB            1511442482993
MAP_CRDLN_PUR_STD_CRDLN_PUR    1511528494678
MAP_PARTY_TYP_STD_PARTY_TYP    1511441945154
MAP_BAL_CAT_STD_BAL_CAT        1514359600480
MAP_CRDLN_TYP_STD_CRDLN_TYP    1511527713328
MAP_CREDIT_SCR_MDL_REG_MDL     1497513837744
MAP_DIM_GL_ACCT_STD_GL_TYPE    1523447233065
MAP_GL_CODE_REP_LINE           1494610765133
MAP_RECVR_TYP_STD_RECVR_TYP    1524045220417
MAP_VEHCL_TYP_STD_VEHCL_TYP    1524044256132
MAP_WRTOFF_STD_WRTOFF_REASN    1524044617123
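The following is a minimal SQL sketch of such a backend load, assuming the mapper tables expose only the three documented columns (the shipped tables may carry additional columns, and the member codes used here are hypothetical):

-- Reclassify a customer-specific Mitigant Type member code (V_MEMBER_1) to a
-- Standard Mitigant Type member code (V_MEMBER_2), using the V_MAP_ID for
-- MAP_MITG_TYP_STD_MITGN_TYP from Table 7.
-- 'MTG001' and 'OTH' are hypothetical member codes; verify the full column
-- list of the shipped table before loading.
INSERT INTO MAP_MITG_TYP_STD_MITGN_TYP (V_MAP_ID, V_MEMBER_1, V_MEMBER_2)
VALUES ('1514359498413', 'MTG001', 'OTH');

COMMIT;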

Usage of Mapper Tables in Data Flow and Reports

The mapper maintenance output is always physically stored in underlying tables. These tables are registered in OFSAA as objects. Therefore, these tables can be used, without any restrictions, in any metadata that requires reclassification. The OFS REG REP RBI data flows (T2Ts) use this information to populate the Standard Dimension surrogate keys in the results area tables.

Mappers for Reclassification of Reg Dimensions

The following mapper tables must be configured as a prerequisite for the LR v7 report (V_MEMBER_1 is the customer-specific dimension's member code and V_MEMBER_2 is the Reg dimension's member code):

·        MPIN_ACC_PARTY_REG_PARTY: Mapper for Party to Reg Party

·        MPIN_ACC_PROD_REG_DEPOSIT: Mapper for Product to Reg Deposit

·        MPIN_ACC_PROD_REG_PROD: Mapper for Product to Reg Product

Configuring Setup Tables for Standard Set of Values

The following setup configurations are required before executing the RBI Regulatory Reporting Run.

SETUP_MASTER Table

The SETUP_MASTER table in the Atomic Schema must be modified with the required values for RBI.

Table 8: SETUP_MASTER Table

V_COMPONENT_CODE: DEFAULT_FINANCIAL_ELEMENT
V_COMPONENT_DESC: Default Financial Element
V_COMPONENT_VALUE: DEFAULT
Description: Component Value to be updated according to the values used in STG_GL_DATA.V_FINANCIAL_ELEMENT_CODE. This is used for the Fact Management Reporting T2T.

V_COMPONENT_CODE: DEFAULT_FX_RATE_SRC
V_COMPONENT_DESC: Default FX Rate Source
V_COMPONENT_VALUE: DEFAULT
Description: Component Value to be updated according to the values used in STG_EXCHANGE_RATE_HIST.V_RATE_DATA_ORIGIN. This is used for calculating the Reporting Currency.

V_COMPONENT_CODE: DEFAULT_MARKET_CENTER
V_COMPONENT_DESC: Market Center Identifier
V_COMPONENT_VALUE: DEFAULT
Description: Component Value to be updated according to the values used in STG_INSTRUMENT_MARKET_PRICES.V_MKT_CENTER_ID. This is used for calculating the Instrument Close Price.

V_COMPONENT_CODE: RBI_DEFAULT_PD_MODEL
V_COMPONENT_DESC: PD Model for RBI Regulatory Reporting
V_COMPONENT_VALUE: DEFAULT
Description: Component Value to be updated according to the values used in STG_PD_MODEL_MASTER.V_PD_MODEL_CODE. This is used for calculating the PD Model Band Skey.
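For example, a minimal sketch of such an update for the FX rate source, where 'BLOOMBERG' stands in as a hypothetical value that must match what is actually used in STG_EXCHANGE_RATE_HIST.V_RATE_DATA_ORIGIN:

-- Point the default FX rate source at the rate data origin used in staging.
-- 'BLOOMBERG' is a hypothetical example value, not a shipped default.
UPDATE SETUP_MASTER
   SET V_COMPONENT_VALUE = 'BLOOMBERG'
 WHERE V_COMPONENT_CODE = 'DEFAULT_FX_RATE_SRC';

COMMIT;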

Run or Execution Expectations

Run refers to execution. It is assumed that different periods, different combinations of parameters, and different data require different executions. From a reporting perspective, as required by regulators, the RRDF application requires data for the following executions:

1.     Current Data or Execution

a.     Reporting month-end data

b.     Projection Data

2.     Historical (trend or vintage) Data

a.     Yearly

b.     Quarterly

3.     Stressed Data

For the DSBROR, RBSTR3, SLR2, CRR, SLR, and FORMVIII reports, the SETUP_MASTER table should be updated as follows:

1.     DSBROR: This report is expected to display Domestic and Overseas data separately. In such cases, data is expected separately at each legal entity level within the organization structure. Domestic data is populated in the report as data for the legal entities within India, and Overseas data as data for the legal entities outside India.

2.     RBSTR3: This report is expected to display the Year-To-Date (YTD) balance, reported quarterly.

3.     SLR2, CRR, SLR, FORMVIII: The CURRENT_MIS_DATE value is expected to be updated in the SETUP_MASTER table.

Run execution is expected to be carried out for the fortnightly reports SLR2 and CRR for the set of dates mentioned below.

The FIC MIS DATE is referred to as T in the following table.

Table 9: Run Execution Dates for Fortnightly Reports

Run Execution Date    Sample Date
T-13                  12-Mar-16
T-12                  13-Mar-16
T-11                  14-Mar-16
T-10                  15-Mar-16
T-9                   16-Mar-16
T-8                   17-Mar-16
T-7                   18-Mar-16
T-6                   19-Mar-16
T-5                   20-Mar-16
T-4                   21-Mar-16
T-3                   22-Mar-16
T-2                   23-Mar-16
T-1                   24-Mar-16
T                     25-Mar-16
T-28                  26-Feb-16

 

Populate the following tables before executing the reports in the Reporter Portal, and after populating data in the OFSAA results tables through a scheduled batch:

·        SETUP_MASTER: The following parameters must be updated before every regulatory reporting Run.

Table 10: Setup Master Table for Run Execution

V_COMPONENT_CODE         V_COMPONENT_DESC         V_COMPONENT_VALUE (Sample Value)
CURRENT_QUARTER_NAME     Current Quarter Name     2014-Q2
PREVIOUS_YEAR            Previous Year            2013-2014
PREVIOUS_QUARTER_NAME    Previous Quarter Name    2014-Q1
CURRENT_YEAR             Current Year             2014-2015
CURRENT_MIS_DATE         Current MIS Date         2014-06-30
DEFAULT_GAAP             DEFAULT_GAAP             INGAAP

 

NOTE:   

For the LR v7 report, the SETUP_MASTER table should be updated with the DEFAULT_GAAP value INGAAP before executing the Account Dimension SCD (<INFODOM>_REG_RBI_ACCOUNT_SCD: this batch loads the Account Dimension from the Product Processor tables for RBI Regulatory Reporting).

 

·        FCT_REG_RUN_LEGAL_ENTITY_MAP: Because an organization has a hierarchical structure and reporting can happen for an entity at any level in the hierarchy, the applicable reporting entity must be provided in this table as part of every regulatory reporting Run.

Consolidation

Consolidation is handled as part of the Financial Services Data Foundation (FSDF). Consolidation in FSDF refers to the elimination of intracompany transactions, that is, any kind of transaction between two parties or entities that are part of the reporting organizational hierarchy for a given execution. When only one legal entity is involved in an execution, it is called a SOLO entity, as opposed to the CONSOLIDATED entity described earlier.

It is expected that in the staging area, the customer loads the data from the source system and then uses consolidation logic to arrive at the consolidated output for results.

·        The scope of consolidation is the list of entities that participate in the consolidation.

·        The Legal Entity structure is looked up through the ORGANIZATION STRUCTURE DIMENSION, which stores a parent-child relationship. This is stored only once.

·        While moving the data, a Legal Entity can move its related entities to the processing or reporting area.

·        Once the legal structure is finalized, this structure stores only one parent-child relationship.

Figure 40: Consolidation

This illustration shows the transactions between two parties or entities which are part of the reporting organizational hierarchy for a given execution

·        The transaction/exposure between SUB 1 and SUB 2 must be eliminated when reporting for Parent A.

·        The transaction/exposure between SUB 1 and SUB 3 must not be eliminated when reporting for Parent A.

·        It is the customer for banking products and the issuer for traded securities that are considered for the intracompany elimination.

Consider the following example:

Table 11: Account Summary Table

FSDF AREA                     ENTITY CODE    ACCOUNT NUMBER    CUSTOMER    ISSUER
STAGE LOAN CONTRACTS          SUB 1          ACCOUNT 1         SUB 2       -
STAGE LOAN CONTRACTS          SUB 1          ACCOUNT 2         PARTY 1     -
STAGE INVESTMENT CONTRACTS    SUB 1          ACCOUNT 3         PARTY 1     SUB 2
FCT COMMON ACCOUNT SUMMARY    SUB 1          ACCOUNT 2         PARTY 1     -
FSI INTRA COMPANY ACCOUNT     SUB 1          ACCOUNT 1         SUB 2       -
FSI INTRA COMPANY ACCOUNT     SUB 1          ACCOUNT 3         PARTY 1     SUB 2

 

As shown in the preceding table, the intracompany Accounts 1 and 3 are moved to the FSI INTRA COMPANY ACCOUNT table, while Account 2 is moved to the FCT COMMON ACCOUNT SUMMARY table. Run-enabled tables contain records specific to the selected legal entity and consolidation type. A sketch of this routing follows.
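The following SQL sketch illustrates the routing rule for banking products; every physical name in it is hypothetical (the guide uses only logical names such as STAGE LOAN CONTRACTS and FSI INTRA COMPANY ACCOUNT), so treat it as a description of the logic rather than shipped objects:

-- An account whose counterparty (customer for banking products, issuer for
-- traded securities) is itself an entity inside the reporting organization
-- hierarchy is routed to the intra-company table for elimination.
INSERT INTO fsi_intra_company_account_ex (entity_code, account_number, counterparty_code)
SELECT s.entity_code, s.account_number, s.customer_code
  FROM stage_loan_contracts_ex s
 WHERE s.customer_code IN (SELECT child_entity_code
                             FROM org_structure_ex);  -- entities in consolidation scope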

Consolidation is also linked to the multiple hierarchies that banking organizations have. Multiple hierarchies refer to different groupings of group entities under different parents for given regulatory requirements.

The hierarchy structure is thus the primary input to the consolidation process. Depending on whether you have multiple hierarchies or not, there are two data flows.

Consolidation with Multiple Organization Structure Hierarchy:

1.     You load Organization Structure Hierarchy to STAGE ORG STRUCTURE MASTER table, which is moved to the ORG STRUCTURE DIMENSION using the SCD component.

2.     Execution specific organization structure hierarchies along with parent and child entity codes are populated in STAGE LEGAL ENTITY HIERARCHY INTERFACE table, which is moved to LEGAL ENTITY HIERARCHIES DIMENSION using SCD component.

3.     The execution-specific consolidation percentage is loaded in the STAGE ENTITY CONSOLIDATION PERCENTAGE table, where the child entity code, the parent entity code, and the consolidation percentage are populated. This is moved to the FACT ENTITY CONSOLIDATION PERCENTAGE table using a Table-to-Table transformation. In the FSDF 804 release, this feature is not yet supported.

4.     The STAGE LEGAL ENTITY HIERARCHY is used for the Consolidation process and not the one from ORGANIZATION STRUCTURE DIMENSION.

Figure 42: Consolidation with Multiple Organization Structure Hierarchy

This illustration shows the multiple type of consolidation of the elimination of accounts with other entities.

5.     If you do not have multiple hierarchies, the STAGE LEGAL ENTITY HIERARCHY used for the consolidation process can be populated from the ORG STRUCTURE DIMENSION instead.

Figure 43: Consolidation without Multiple Organization Structure Hierarchy

This illustration shows the solo type of consolidation of the elimination of accounts with other entities.

NOTE:   

A Solo Run does not require any consolidation or elimination of accounts with other entities.

 

Additional Data Preparations to handle Consolidation

The entity FCT_REG_RUN_LEGAL_ENTITY_MAP is used once you select the REPORTING ENTITY in AgileREPORTER. This table is populated as part of the RBI Run Execution.

Table 12: Data Preparation for Consolidation

RUN TYPE        FIC MIS DATE    REPORTING ENTITY    RUN EXECUTION
SOLO            20160330        LE1                 12
SOLO            20160330        LE2                 14
CONSOLIDATED    20160330        LE1                 16
CONSOLIDATED    20160330        LE2                 16
CONSOLIDATED    20160330        LE3                 16

 

For a solo Run, only one reporting entity is expected to be included, whereas a consolidated Run includes all entities involved in the execution. This entity provides the flexibility to select one REPORTING ENTITY in AgileREPORTER and select the relevant data for that particular execution based on whether it is consolidated or solo.

Relationship between Run and Stress

In the OFS REG REP RBI application, for example, the BSR II Annual report picks up reporting data based on the Reporting Run that populates the underlying fact table(s). Reporting Run is a flag, which must be marked as 'Y' in the DIM_RUN table so that the OBIEE reporting layer selects that particular run execution.

In this application, a Run comprises:

1.     Baseline Run: The Bank Holding Company (BHC) may have multiple runs. The run used for reporting is marked with Reporting Flag = Y. This is the Baseline run for a given reporting date. It is referred to as Baseline because the values that it represents are not stressed, and the BHC may use these base values for stressing them according to various scenarios. A history of such runs accumulated over time provides historical runs.

2.     Stress Run: Stress runs hold data, which are stressed by a certain percentage/basis point over the Baseline figures. The BHC expects these figures to reflect the business/risk position under predetermined business scenarios/economic conditions.

3.     Identification of Baseline and Stress run occurs from STRESS DIMENSION.

In this application, the required stress runs are tagged to a Baseline run. If the BHC performs stress runs, the relevant runs which are intended for reporting are identified and tagged with a reporting Baseline run using the V_RUN_ID in the DIM_RUN.

DIM_RUN stores n_run_skey / v_execution_id, which are execution-specific for every run definition (v_run_id). Therefore, the run definition can remain constant over time, and different executions provide different outputs due to underlying data changes.

DIM_STRESS conveys the stress definition. Additionally, it links the original run definition (v_run_id) and the stressed run ID (v_stressed_run_id). Refer to the DIM_RUN table to get the expected run execution of these run definitions for a particular date (n_mis_date_skey). A query sketch follows.
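In this sketch, n_run_skey, v_execution_id, v_run_id, and n_mis_date_skey are the columns named above, while the reporting-flag column name (f_reporting_flag) and the date key value are assumptions to verify against your data model:

-- Find the run execution tagged for reporting on a given MIS date.
SELECT n_run_skey, v_execution_id, v_run_id
  FROM DIM_RUN
 WHERE f_reporting_flag = 'Y'        -- assumed column name for the Reporting Flag
   AND n_mis_date_skey = 20160330;   -- hypothetical date surrogate key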

The same fact table stores both the Baseline data and the Stressed data, uniquely identified through Scenario codes (and Run Skeys).

Refer to the Business Metadata.xls present in the installer package for details on different Fact Tables used for related reports.

Projection Data

The following points provide information on the projection data:

1.     The Baseline run also populates the projected data.

2.     This application requires projected data at two levels - Quarterly and Annual.

3.     The DIM_CONSOLIDATION table is used to identify the projections. It contains the codes for projected quarters and years as required by the templates.

4.     In the Fact tables, projection data is referred to with the respective Consolidation codes (Scenario code for FCT_MGMT_REPORTING). The BHC must populate the data accordingly.

5.     In the following example, FQ1 means Financial Quarter 1, FY1 means Financial Year 1, and so on.

Table 13: Projection Data Example 1

Consolidation Code    Consolidation Description    Reporting Line    Scenario    EOP Balance
100                   Actual                       100               BSL         426,367
400                   FQ1                          100               BSL         608,618
401                   FQ2                          100               BSL         870,502
402                   FQ3                          100               BSL         567,736
403                   FQ4                          100               BSL         846,196
404                   FQ5                          100               BSL         775,027
410                   FY1                          100               BSL         470,092
411                   FY2                          100               BSL         473,880
412                   FY3                          100               BSL         942,034
413                   FY4                          100               BSL         497,889
414                   FY5                          100               BSL         807,813

 

NOTE:   

·        For Movement measures, data is not carried from one reporting period to another; for example, Profit or Loss. While General Ledger balances such as loan outstanding are carried forward from one year to another, profit and loss are period-specific.

·        Therefore, unlike the End of Period (EoP) balance, movement values for quarter actuals must be derived for reporting. For historical data, net sales for quarter 3 is the difference between the sales figures as of the end of quarters 2 and 3. You do not need to provide this difference as a download. Movement data for actuals is identified through different runs and the respective values are summed up.

·        Only those records whose corresponding runs fall between the fiscal month start date and end date of the reporting quarter are selected for summation. Each Run has an associated date, and runs can be performed daily. Assuming that runs are performed daily in a given quarter (90 days), REG REP sums up the data points across all 90 days to arrive at the quarter-end movement figure.

 

Table 14: Projection Data Example 2

Code    Projected Period    Reporting Line    Scenario    Run ID      Date         Projected Amount    Movement
100     Actual              100               BSL         RUNID001    10-Oct-13    300,000             900,000
100     Actual              100               BSL         RUNID002    15-Nov-13    100,000
100     Actual              100               BSL         RUNID003    20-Nov-13    300,000
100     Actual              100               BSL         RUNID004    30-Dec-13    200,000
400     FQ1                 100               BSL         --          --           --                  608,618
401     FQ2                 100               BSL         --          --           --                  870,503
402     FQ3                 100               BSL         --          --           --                  567,736
410     FY1                 100               BSL         --          --           --                  470,093
411     FY2                 100               BSL         --          --           --                  473,881
412     FY3                 100               BSL         --          --           --                  942,035

 

However, when the projection of net sales for quarter 2 of the next year is to be performed, no derivation is required. The projection data for that quarter can be directly downloaded into the respective Fact table(s) for reporting.
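The movement derivation in Table 14 can be pictured as the following SQL sketch, with hypothetical table and column names (this is not the shipped FCT_MGMT_REPORTING definition):

-- Sum the period-specific amounts of all runs whose dates fall inside the
-- reporting quarter: 300,000 + 100,000 + 300,000 + 200,000 = 900,000.
SELECT reporting_line,
       SUM(projected_amount) AS quarter_movement
  FROM fct_movement_ex
 WHERE run_date BETWEEN DATE '2013-10-01' AND DATE '2013-12-31'
 GROUP BY reporting_line;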

Data Flow from Source Systems to Staging Area

The staging area is populated with data from various data sources, such as GL data, Account data, Customer data, Trading data, Currency data, and Master data. Refer to the Data Integration Hub (DIH) User Guide in the OHC Documentation Library for details. DIH enables loading the data from the source systems to the OFSAA staging tables through logical interfaces known as Application Data Interfaces (ADI). DIH provides a set of User Interfaces (UI) that are used to define and maintain External Data Descriptors (EDD) and Application Data Interfaces, and to map the EDDs and ADIs through connectors.

Data Flow from Staging to Results Area

This section details the pass-through data, transformed data, and classification.

NOTE:   

Data flow from Staging to Results Area is available only for LR/STL, BSRII, and Leverage Ratio.

Pass-Through Data

Pass-through data refers to the static data that is pre-processed and flows to the results area directly. The Common Staging Area (CSA) model represents the data entry point into the FSDF. CSA provides a simplified, unified data sourcing area for inputs required by analytical applications and engines. It consists of over 400 tables and nearly 9000 columns organized into distinct subjects.

The staging area is a physical data model, which is deployed and managed using the Analytical Application Infrastructure. The staging area data model is designed to allow efficient data loading for analytics. It thus has crucial differences from a general-purpose repository of operational or transactional data across a bank.

The staging area acts as the single source of data and contains unified data requirements for various banking areas such as Loans and Losses, Off-balance Sheet products, Securities, Derivatives, Capital Data, Management Ledger and General Ledger. A common example of this category includes various monetary amounts, dates, and so on.

Data Flow from Staging to Processing Area

The staging area of the FSDF serves as a container for analytical processing from sourcing to consumption. Such processing is usually delivered in the form of discrete units called analytical applications, spanning different analytical use cases ranging from Finance to Risk to Compliance.

These applications consist of custom-built computational engines and numerical libraries and can execute processes on the data that range from simple aggregations to complex, multi-step stochastic processes such as Monte-Carlo simulation.

Analytical applications place varying demands on the data infrastructure in terms of volumes and speed, and therefore, place different demands on the data architecture. In practice, the normalized (3NF) design favored for Enterprise Data warehouses often fails to be efficient or performant when it comes to analytical processing across a wide range of use cases.

Therefore, the OFSDF recognizes the need for distinct application-specific working stores, separate from the staging and reporting area. For example, the OFSAA Asset and Liability Management (ALM) Application has a distinct set of ALM-specific tables, as does the Market Risk solution.

NOTE:   

The structure of these processing area stores is decided by the actual analytical application and engine used. The OFSAA suite of applications is organized this way, with each application managing a specific set of tables/schemas within the processing area.

 

The processing area tables/schemas are not part of the OFSDF. This is because OFSDF is intended to be an open platform. Other analytical applications and engines can equally provision data out of OFSDF by mapping their input requirements appropriately to the OFSDF staging area model.

Data Flow from Processing to Results Area

This step is similar to the Data Flow from Staging to Results Area. It involves either passing data through from processing to results or loading directly to results (refer to Section 4.1.13). This is mostly due to processing measures such as Fair Value, Risk-Weighted Assets, and so on.

Guidelines for Data Loading to Result Area Tables in Data Foundation for Regulatory Reporting Implementations

Regulatory reports make use of data available across several fact tables in the OFSAA Data Foundation model. These result tables are loaded either from the raw data sourced from source systems via out-of-the-box T2Ts or from the processed data output of various OFSAA applications.

For example, Fact LRM Account Summary (FCT_LRM_ACCOUNT_SUMMARY) stores the liquidity risk attributes and metrics computed by the OFSAA LRM application, and Fact Loan Loss Forecasting and Provision Account Summary (FCT_LLFP_ACCOUNT_SUMMARY) stores the attributes and measures computed by the OFSAA LLFP application.

However, there can be several implementation use cases in the regulatory reporting space where the customer may not have licensed any OFSAA application and therefore must put in additional custom effort to design an ETL process to load the required data elements into the respective fact tables referenced by the report. The following guidelines can be considered when designing a data flow for such a use case:

·        Consistent Usage of Run Identifier

Most of the fact tables used in regulatory reporting are Run-enabled and have a composite primary key that includes the Run Identifier, which enables the same snapshot of data to be loaded multiple times into the target fact table for any given execution date. All the out-of-the-box processes that impact data used in regulatory reports are executed as part of an integrated Run to ensure that the Run Identifier is consistent across fact tables. Since the reporting is done on an integrated schema, the custom data flow design must keep this integrity intact. This essentially means that the custom ETL processes designed to load data directly into the fact tables must be able to leverage the Run Identifier generated by the Run Engine during execution. Run Identifier information is available in the DIM_RUN table.

·        Correct Dimensional Lookup Configuration

Dimensional Identifiers are typically part of referential integrity constraints with the fact table so the custom ETL processes must ensure that lookups retrieve valid surrogate keys for a given value of the business key. The intermediate staging structure must ensure all the business keys are persisted correctly and the lookup condition is designed on the correct dimension table.

For example, FCT_LRM_ACCOUNT_SUMMARY.n_asset_level_skey references DIM_ASSET_LEVEL.n_asset_level_skey. The business key (v_asset_level_code) must be sourced and persisted to ensure that correct values are populated in the target column, that is, FCT_LRM_ACCOUNT_SUMMARY.n_asset_level_skey, as shown in the sketch below.
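A sketch of that lookup during a custom load, where DIM_ASSET_LEVEL, n_asset_level_skey, and v_asset_level_code come from the example above, and the interim staging structure (stg_lrm_interim_ex) is hypothetical:

-- Resolve the surrogate key from the business key persisted in the interim
-- staging structure before loading FCT_LRM_ACCOUNT_SUMMARY.
SELECT s.account_number,
       d.n_asset_level_skey
  FROM stg_lrm_interim_ex s
  JOIN DIM_ASSET_LEVEL d
    ON d.v_asset_level_code = s.v_asset_level_code;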

·        Data Loading Guidelines for handling Negative or Credit Balances

To handle Negative Balances in Regulatory Reporting, there are two primary sources of the negative balances:

a.     Natural asset negative balances from the system of records

b.     Adjustment entries or Plug entries.

The reporting requirement is to show genuine asset negative balances as liabilities, whereas adjustment entries should be aggregated into the same heading (assets or liabilities) as they are loaded. RBI uses the General Ledger Type from the General Ledger Account dimension. Primarily, the following two General Ledger Type codes are used for this purpose:

a.     ASSET

b.     LIABILITY

The General Ledger is available in every contract or product processor table as the General Ledger code. The following products are considered for the treatment of negative balances:

a.     Loans and Cards

i.       Loans are reported under the Assets category in the Balance Sheet. There are cases when the customer makes an excess payment towards the loan account, which turns the end-of-period account balance into a credit (negative) balance.

ii.     When an excess payment is made, the account no longer falls under the Asset category; it becomes a liability for the financial institution and must be reported as non-interest-bearing demand deposits in the respective line items.

iii.   To avoid reporting the excess payment as an asset, you must assign the account a General Ledger code with V_GL_TYPE_CODE = 'LIAB' (see the sketch after this list).

iv.   When a loan's regulatory reclassification is assigned a GL code with V_GL_TYPE_CODE = 'LIAB', it is excluded from reporting in all asset line items and is added to the respective liability line items.

v.     Accounts created for Adjustment or Plug entries must have a General Ledger code with V_GL_TYPE_CODE = 'AST'. This adds to the same asset line item, resulting in an addition to or reduction of the overall reported amount for a given line item based on the sign of the end-of-period (EOP) balance.

vi.   Accounts created for Adjustment or Plug entries for excess payments must have a General Ledger code with V_GL_TYPE_CODE = 'LIAB'. This adds to the same liability line item, resulting in an addition to or reduction of the overall reported amount for a given line item based on the sign of the End of Period (EOP) balance.
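As a sketch of the classification driver, the following query lists GL codes by treatment; V_GL_TYPE_CODE and DIM_GL_ACCOUNT come from this guide, while v_gl_account_code is an assumed column name to verify against your model:

-- Review which General Ledger codes drive asset ('AST') versus liability
-- ('LIAB') treatment for negative-balance reporting.
SELECT v_gl_account_code, v_gl_type_code
  FROM DIM_GL_ACCOUNT
 WHERE v_gl_type_code IN ('AST', 'LIAB');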

Data Mapping (T2T)

Data Mapping refers to the process of retrieving unstructured data from data sources for further data processing, storage, or migration. This feature is commonly known as the RDBMS source to RDBMS target (T2T) framework in the OFSAA world and can be leveraged when source data is available in an Oracle database. Dimensional lookups must be handled via the T2T's join conditions and expressions. Refer to the OFS AAI User Guide for more details on configuring a T2T.

Data File Mapping (Flat File to RDBMS Target - F2T)

If the source data is available in file structures, the OFSAA F2T component can be used to bring the data into the OFSAA ecosystem. As lookups cannot be configured in an F2T, this component must be used in conjunction with the T2T component, that is, data is first loaded from the file to an interim staging structure using the F2T component, followed by a data load to the target result area table using the T2T component. This is the least recommended approach as it needs an interim table structure in the data model and involves multiple data hops that add overhead.

See the Oracle Financial Services Analytical Applications Infrastructure User Guide on OHC for more details on configuring an F2T.

FSDF Entity Information

The FSDF entity information is given in the Dimension Tables and Data Elements documents available on the MOS page.

OFS Regulatory Reporting for Reserve Bank of India - Dimension Tables <release version>

Fact Tables or Entities

For all tables with the data flow type tagged as Processing, it is recommended that you map data directly to the result area if the processing application is not part of the OFSAA product suite. For example, Basel computations, RWA numbers, and Capital Ratio are taken from the processing area, which is populated by OFSAA or other Basel applications.

For processed tables, you can look for the following options:

·        OFSAA Data Integration Hub (DIH) product

·        Flat File

·        Table-to-Table Transformation with the source being processing application

Overview of OFS REG REP RBI User Interface

This section provides details to log in to the OFSAA application, view report summary, view schedule summary, view cells, and map data schedules. It includes:

Logging in to OFS REG REP RBI UI

After the applications are installed and configured, to access the OFS REG REP RBI UI you must log in to the OFSAAI environment using the OFSAAI login page.

NOTE:   

The built-in security system ensures that you are permitted to access windows and actions based on your authorization only.

 

To access the OFS REG REP RBI UI, follow these steps:

1.     Enter the OFSAAI URL in your browser. The OFSAAI login page is displayed.

Figure 45: OFSAAI Log In

This illustration shows the login page for the OFSAAI application. You can select the required Language, enter valid User ID and Password, and then click Login.

2.     Select the desired language from the Language drop-down list.

3.     Enter your User ID and Password. When you log into OFSAAI, the OFSAA Applications page is displayed.

Figure 46: OFSAA Applications Screen

This illustration shows the OFSAAI applications page with the FSDF and OFS REG REP RBI applications in it.

4.     Select the Financial Services Data Foundation. The FSDF landing page is displayed.

Figure 47: Financial Services Data Foundation Landing Page

This illustration shows the FSDF Landing page from OFSAAI applications login.

5.     Alternatively, select Oracle Financial Services Regulatory Reporting for Reserve Bank of India. The Oracle Financial Services Regulatory Reporting for Reserve Bank of India landing page is displayed.

Figure 48: Oracle Financial Services Regulatory Reporting for Reserve Bank of India Page

This illustration shows the RBI Landing page from OFSAAI applications login.

6.     Select the Navigation Menu icon in the OFS REG REP RBI UI to access the following windows:

a.     Home

b.     Data Elements

c.     Metadata Management

i.       Dataset

ii.     Build Hierarchy

iii.   Measure

iv.   Business Processor

v.     Derived Entity

vi.   Reports

d.     Metadata Browser

e.     Process

i.       Process Modeller

ii.     Process Monitor

f.       Process Execution Summary

Mapping of Line Items to Reporting Requirements of Lombard Risk

Figure 49 explains the flow of data between OFSAA and AgileREPORTER.

Figure 49: Data Flow between OFSAA and AgileREPORTER

This illustration shows that the OFSAA provides the data to AgileREPORTER in the form of derived entities.

OFSAA provides the data to AgileREPORTER in the form of derived entities. A derived entity is an existing OFSAA higher-order metadata object that can be physicalized as a materialized view in the database (a hypothetical sketch follows). Derived entities store aggregated data from the base fact entities specified in the dataset and have the necessary dimensions and measures. The dimension and measure combinations stored within the derived entity are mapped to cells within the report. This mapping is maintained within the dimensional mapping template. The Decision Process within AgileREPORTER reads the derived entities and the dimension mapping information to derive the data for reporting. Derived entities are created based on measures, hierarchies, and datasets.
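As an illustration only, a derived entity physicalized as a materialized view might look like the following; every name here is hypothetical rather than a shipped derived entity:

-- Aggregate a fact table by the dimensions mapped to report cells, so the
-- reporting layer reads pre-summarized data.
CREATE MATERIALIZED VIEW dme_example_summary
BUILD IMMEDIATE
REFRESH COMPLETE ON DEMAND
AS
SELECT n_run_skey,
       n_product_skey,
       SUM(n_eop_bal) AS n_eop_bal
  FROM fct_example_account_summary
 GROUP BY n_run_skey, n_product_skey;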

Figure 50: Decision Process in AgileREPORTER

This illustration shows the Decision Process within AgileREPORTER reads the derived entities and dimension mapping information to derive the data for reporting. Derived entities are created based on measures, hierarchies, and datasets.

Each regulatory report contains numerous schedules. Each schedule contains various cells that must be reported. Each cell (or box) is uniquely identified by a cell reference (or box identifier). OFSAA and Lombard Risk provide a unique cell reference for each cell.

Each cell is mapped to a set of dimensions and measures within OFSAA. A group of cells within a schedule can have similar mappings (such as the same dimensions but different member codes). Such groups are identified to create logical sections within the schedule, and a derived entity is created for each of these logical sections.

The dataset associated with the derived entity provides data for the specific derived entity. Data such as measures, in a derived entity are aggregated based on dimensions that are included in the derived entity, even though the fact entities in the dataset contain complete details of the data.

Some of the cells in a schedule can be derived as per the logic provided by the regulator. A derivation could be an expression built using values from other cells. Examples of derivations are ratios, node-level rollups, and direct references to cells in other schedules within the report. These derivations are performed within the Lombard Risk Reporter Portal. OFSAA provides data only for the cells that are not derived.

The Decision Process within the Lombard Risk Reporter Portal uses the dimension mapping template to interpret the data present in the derived entity. The decision process creates form data by reading the information from the derived entity and derives the data that the Lombard Risk Reporter Portal uses to display report data.

Refer to the excel sheet for the list of Reporting Lines used across all the RBI returns.

NOTE:   

Metadata for data transformation is available as part of the data warehouse configuration pack provided out-of-the-box (pre-configured) from OFSAA. You need not perform any mapping for the reports. However, this information can be useful for maintenance or extensions when the out-of-the-box pack is not available.

Mapping Metadata

The list of reports with the corresponding Mapping Metadata information is present in the Hierarchy Measure Linkages document available on the My Oracle Support page.

AgileREPORTER: Submission

AgileREPORTER is a web-based regulatory reporting tool provided by Lombard Risk. It provides the necessary features to address the e-filing workflow, validation, and submission process, and supports reports (called forms or returns) for various jurisdictions. AgileREPORTER provides a reliable and efficient infrastructure to compile, generate, and submit regulatory reports.

The Lombard Risk Reporter Portal stores data related to forms or returns in its schema. The Lombard Risk application supports loading data into its schema in the following ways:

·        Cell References File hand-off: It is used when data providers compute all the information required for reports and pass the data that is required for each cell in the report.

·        Base Data hand-off: It is used when data providers pass base data to the Lombard Risk application and expect computations that are required for each cell to be performed within the Lombard Risk application.

However, for OFSAA, the Lombard Risk Reporter Portal supports a dimensional mapping-based approach. In this approach, the data hand-off is based on dimensions and measures, similar to the pattern of information storage in OFSAA. The decision table mapping process within the Lombard Risk Reporter Portal maps the dimensions and measures to cell references.