Post-Data Warehouse Load Validations

Validate the following scenarios after the data warehouse load is complete.

Data Replication

# Validation Script or Steps to Validate

K01

Validate that the initial replication sync job executed successfully.

Run the following query in the target database as the replication user (that is, CCB1REP) for the configured context and validate that no result is displayed.

select * from cdc_sync_log where task='SYNC' and iteration=(select max(iteration) from ccb1rep.cdc_sync_log c where model_cd like 'CCB1%') and completed_on is null;
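Most of the checks in this section (and the sections that follow) pass when the query returns no rows. Under that convention the checks are easy to script; the sketch below assumes the python-oracledb driver, and the user, password, and DSN are placeholders:

```python
# Sketch of a "no result should be displayed" check harness.
# Assumption: the python-oracledb driver; connection details are placeholders.
try:
    import oracledb  # only needed to actually connect to the target database
except ImportError:
    oracledb = None

def rows_ok(rows):
    """A check of this kind passes when the query returned no rows."""
    return len(rows) == 0

def run_check(conn, name, sql):
    """Run one validation query and print PASS/FAIL."""
    with conn.cursor() as cur:
        cur.execute(sql)
        rows = cur.fetchall()
    print(f"{name}: {'PASS' if rows_ok(rows) else f'FAIL ({len(rows)} rows)'}")

def main():
    # Example wiring; requires a reachable target database.
    conn = oracledb.connect(user="CCB1REP", password="changeme",
                            dsn="target-db/PDB1")  # placeholders
    run_check(conn, "K01",
              "select * from cdc_sync_log where task='SYNC' "
              "and iteration=(select max(iteration) from ccb1rep.cdc_sync_log c "
              "where model_cd like 'CCB1%') and completed_on is null")
```

The same run_check call covers K02 through K06 and the later no-rows checks; only the check name and SQL string change.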

K02

Validate that all replication tables have the same data as the source, except for a few additional journal columns used for ETL purposes.

Run the following query in the target database as an MDADM user and validate that no result is displayed.

select * from b1_table_sync where rows_extracted<>rows_loaded and context_cd='CCB1' and rows_skipped is null;

K03

Validate that a primary key exists for all replicated tables and that the primary key constraint is enabled.

Run the following query in the target database as an MDADM user and validate that no result is displayed.

select s.tbl_name
  from b1_source_table s, b1_prod_instance i
 where s.prod_flg=i.prod_flg
   and i.context_cd='CCB1'
   and s.base_replicate_flg='Y'
   and not exists (select * from all_constraints c
                    where c.constraint_type='P'
                      and owner='CCB1REP'
                      and status='ENABLED'
                      and s.tbl_name=c.table_name);

K04

Validate that an index on the slicing timestamp column exists for all replicated tables.

Run the following query in the target database as an MDADM user and validate that no result is displayed.

select s.tbl_name
  from b1_source_table s, b1_prod_instance i
 where s.prod_flg=i.prod_flg
   and i.context_cd='CCB1'
   and s.base_replicate_flg='Y'
   and not exists (select 1 from all_ind_columns ind
                    where index_owner='CCB1REP'
                      and s.tbl_name=ind.table_name
                      and ind.column_name='JRN_SLICING_TS');

K05

Validate that the required functional replication indexes are created.

A few functional indexes are created on replication schema tables by the initial sync job. These indexes improve the performance of target ETL jobs.

Run the following query in the target database as an MDADM user and validate that no result is displayed.

select * from b1_replication_index_vw vw where vw.context_cd='CCB1' and not exists (select 1 from all_indexes ai where ai.owner=vw.schema_name and ai.index_name=vw.index_name);

K06

Validate that the temporary import schema is dropped after the initial data sync to the replication schema.

Run the following query in the target database as an MDADM user and validate that no result is displayed.

select * from all_users where username like '%IMP';

K07

Validate that the B1_RUN_ALL job is active and running at a regular time interval.

Run the following query in the target database as an MDADM user and validate the following:

select sess_name,sysdate,sess_beg,sess_end,sess_status from oua_work.snp_session where sess_name='B1_RUN_ALL' order by sysdate desc;

1. Execute the query and monitor the result multiple times.

2. Based on the time interval set for the scheduling agent 'B1_RUN_ALL', a new entry should be inserted into this table with a session status of R.
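The two monitoring steps above boil down to taking the query result twice, separated by at least the agent's interval, and confirming that a new row with status R appeared. A sketch of that comparison (the (sess_beg, sess_status) row shape is an illustrative assumption):

```python
# Sketch: detect that the B1_RUN_ALL scheduler inserted a new session
# between two snapshots of the monitoring query.
# Assumed row shape: (sess_beg, sess_status) tuples.

def find_new_rows(before, after):
    """Rows present in the second snapshot but not in the first."""
    prior = set(before)
    return [row for row in after if row not in prior]

def scheduler_active(before, after):
    """Passes when at least one newly inserted session has status 'R'."""
    return any(status == 'R' for _, status in find_new_rows(before, after))
```

Taking the snapshots more than one scheduling interval apart guards against sampling the same run twice.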

This concludes the initial data replication. The next step is to enable target ETL jobs to load Dimensions, Facts, and Materialized Views.

Initial Load

Perform these checks after the initial source data is loaded into the replication schema and the target data warehouse.

# Validation Script or Steps to Validate

K08

Ensure that all one-time load bucket dimensions are loaded successfully and contain data.

Run the following query in the target database as an MDADM user and validate the following:

select 'select count(1) from dwadm.'||t.entity_name ||';' Bucket_Dim_Row_Count from b1_target_entity t, b1_config_vw v, b1_prod_instance i where t.target_entity_id=v.target_entity_id and v.context_cd=i.context_cd and t.sched_type_flg = 'INT' and i.context_cd='CCB1';

1. The above query displays SQL queries that return the row count of each one-time load dimension for the configured context.

2. Run these SQL queries and validate that the count is greater than 2.
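The two steps above amount to checking that every generated count is greater than 2. A minimal sketch of that check, assuming the counts have already been collected into a dict keyed by dimension name:

```python
# Sketch: validate row counts collected from the generated
# "select count(1) from dwadm.<dimension>" queries (K08, step 1).

def failing_bucket_dims(counts):
    """Return the dimensions whose row count is not greater than 2."""
    return {name: n for name, n in counts.items() if n <= 2}
```

An empty result means every one-time load bucket dimension passed.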

K09

Ensure that all SCD type 1 dimensions, except the one-time load bucket dimensions, are loaded up to the checkpoint time.

Run the following query in the target database as an MDADM user and validate that no result is displayed.

select t.entity_name, c.last_sync_dttm, chk.Check_Point_Time
  from b1_target_entity t, b1_job_config c, b1_prod_instance i,
       (select max(chk.last_update_ts) Check_Point_Time
          from b1_checkpoint chk, b1_prod_instance pi
         where substr(chk.group_name,1,4)=pi.context_cd
           and pi.context_cd='CCB1') chk
 where t.target_entity_id=c.target_entity_id
   and c.prod_flg=i.prod_flg
   and c.instance_num=i.instance_num
   and t.sched_type_flg='INC'
   and t.entity_type_flg='SCD1'
   and i.context_cd='CCB1'
   and c.last_sync_dttm < chk.Check_Point_Time;

K10

Ensure that all SCD type 2 dimensions are loaded up to the checkpoint time.

Run the following query in the target database as an MDADM user and validate that no result is displayed.

select t.entity_name, c.last_sync_dttm, chk.Check_Point_Time
  from b1_target_entity t, b1_job_config c, b1_prod_instance i,
       (select max(chk.last_update_ts) Check_Point_Time
          from b1_checkpoint chk, b1_prod_instance pi
         where substr(chk.group_name,1,4)=pi.context_cd
           and pi.context_cd='CCB1') chk
 where t.target_entity_id=c.target_entity_id
   and c.prod_flg=i.prod_flg
   and c.instance_num=i.instance_num
   and t.sched_type_flg='INC'
   and t.entity_type_flg='SCD2'
   and i.context_cd='CCB1'
   and c.last_sync_dttm < chk.Check_Point_Time;

K11

Ensure that all accumulation facts are loaded up to the checkpoint time.

Run the following query in the target database as an MDADM user and validate that no result is displayed.

select t.entity_name, c.last_sync_dttm, chk.Check_Point_Time
  from b1_target_entity t, b1_job_config c, b1_prod_instance i,
       (select max(chk.last_update_ts) Check_Point_Time
          from b1_checkpoint chk, b1_prod_instance pi
         where substr(chk.group_name,1,4)=pi.context_cd
           and pi.context_cd='CCB1') chk
 where t.target_entity_id=c.target_entity_id
   and c.prod_flg=i.prod_flg
   and c.instance_num=i.instance_num
   and t.sched_type_flg='INC'
   and t.entity_type_flg='ACCU'
   and i.context_cd='CCB1'
   and c.last_sync_dttm < chk.Check_Point_Time;

K12

Ensure that all snapshot facts are loaded up to the point where the system date is greater than the snapshot end date.

Run the following query in the target database as an MDADM user and validate that the last sync time is the same as the checkpoint time.

select t.entity_name, c.slice_dur_type_flg, c.last_sync_dttm,
       case when c.slice_dur_type_flg='M'  then trunc(chk.Check_Point_Time,'Month')
            when c.slice_dur_type_flg='W'  then trunc(chk.Check_Point_Time,'WW')
            when c.slice_dur_type_flg='D'  then trunc(chk.Check_Point_Time)
            when c.slice_dur_type_flg='HH' then trunc(chk.Check_Point_Time,'HH')
            else c.last_sync_dttm end as Check_Point_Time
  from b1_target_entity t, b1_job_config c, b1_prod_instance i,
       (select max(chk.last_update_ts) Check_Point_Time
          from b1_checkpoint chk, b1_prod_instance pi
         where substr(chk.group_name,1,4)=pi.context_cd
           and pi.context_cd='CCB1') chk
 where t.target_entity_id=c.target_entity_id
   and c.prod_flg=i.prod_flg
   and c.instance_num=i.instance_num
   and t.sched_type_flg='INC'
   and t.entity_type_flg='SNAP'
   and i.context_cd='CCB1';
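The CASE expression in this query derives the expected checkpoint from the slice duration flag. For reference, its logic can be mirrored in a small helper; note that Oracle's trunc(ts,'WW') snaps to the same day of the week as January 1, while the sketch below approximates it with the ISO week start, which is an assumption:

```python
from datetime import datetime, timedelta

def expected_checkpoint(slice_flag, checkpoint, last_sync):
    """Mirror of the K12 CASE expression: truncate the checkpoint per the
    slice duration flag (M=month, W=week, D=day, HH=hour); any other flag
    falls back to the entity's own last sync time."""
    if slice_flag == 'M':
        return checkpoint.replace(day=1, hour=0, minute=0, second=0, microsecond=0)
    if slice_flag == 'W':
        # Approximation: ISO week start (Monday); Oracle's 'WW' truncates
        # to the day-of-week of January 1 instead.
        monday = checkpoint - timedelta(days=checkpoint.weekday())
        return monday.replace(hour=0, minute=0, second=0, microsecond=0)
    if slice_flag == 'D':
        return checkpoint.replace(hour=0, minute=0, second=0, microsecond=0)
    if slice_flag == 'HH':
        return checkpoint.replace(minute=0, second=0, microsecond=0)
    return last_sync
```

A snapshot fact passes when its last sync time equals the value this logic produces for its flag.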

K13

Ensure that all materialized views (if any exist) are created.

Run the following query in the target database as an MDADM user and validate that no result is displayed.

select j.prod_flg, t.entity_name from b1_config_vw j, b1_target_entity t where t.target_entity_id=j.target_entity_id and t.entity_type_flg='M-VW' and j.context_cd='CCB1' and not exists (select mview_name from all_mviews mv where mv.mview_name=t.entity_name and mv.owner='DWADM');

This concludes the validation of the initial data warehouse load. The next step is to start the GoldenGate replicat processes to sync incremental data into the replication schema and load it into the data warehouse.

Incremental Load

# Validation Script or Steps to Validate

K14

Validate that all GoldenGate processes are running in both the source and target without any lag.

Open the GoldenGate command-line interface (ggsci) in both the source and target and run the following command:

ggsci> info all

Validate that the GoldenGate Capture, Pump, and Replicat processes are in the RUNNING state.

K15

Validate that no ETL jobs are in error state.

Run the following query in the target database as an MDADM user and validate that no result is displayed.

select * from b1_jobs_vw where status_flg='E';

K16

Validate that all SCD type 1 dimensions, except the one-time load bucket dimensions, are loaded up to the checkpoint time after incremental transactions are created in the source.

Run the SQL query given for scenario K09 and validate that no records are displayed.

K17

Validate that all SCD type 2 dimensions are loaded up to the checkpoint time after incremental transactions are created in the source.

Run the SQL query given for scenario K10 and validate that no records are displayed.

K18

Validate that all accumulation facts are loaded up to the checkpoint time after incremental transactions are created in the source.

Run the SQL query given for scenario K11 and validate that no records are displayed.

K19

Validate that all snapshot facts are loaded up to the point where the system date is greater than the snapshot end date.

Run the SQL query given for scenario K12 and validate that no records are displayed.

K20

Validate that all materialized views (if any exist) are refreshed up to the time to which all of their dependent dimensions and facts are loaded.

Run the following query in the target database as an MDADM user and validate that no result is displayed.

select t.entity_name, c.last_sync_dttm, chk.Check_Point_Time
  from b1_target_entity t, b1_job_config c, b1_prod_instance i,
       (select max(chk.last_update_ts) Check_Point_Time
          from b1_checkpoint chk, b1_prod_instance pi
         where substr(chk.group_name,1,4)=pi.context_cd
           and pi.context_cd='CCB1') chk
 where t.target_entity_id=c.target_entity_id
   and c.prod_flg=i.prod_flg
   and c.instance_num=i.instance_num
   and t.sched_type_flg='INC'
   and t.entity_type_flg='M-VW'
   and i.context_cd='CCB1'
   and c.last_sync_dttm < chk.Check_Point_Time;

This concludes the validation of incremental data loading into the target data warehouse. The next step is to validate all pre-built analytics.

Analytics

# Validation Script or Steps to Validate

K21

Validate that no error is displayed for out-of-the-box analytics.

To validate this scenario:

1. Log in to Oracle Analytics Server using the analytics URL.

2. Open each dashboard for the configured source product.

3. Go to each page in the dashboard.

4. Ensure that the analytics load successfully without any errors.

K22

Validate that all maps display data.

To validate this scenario:

1. Go to Administration → Manage Map Data.

2. Ensure that Layers, Background Maps, and Images were imported successfully.

K23

Validate that drill back to the source application works.

To validate this scenario:

1. Log in to Oracle Utilities Analytics Server using the analytics URL.

2. Click the drill back link from any analytics that support drill back.

3. Validate that it navigates to the source application.

This concludes the validation of all components of the Oracle Utilities Analytics Warehouse product and the loading of initial and incremental data into the Data Warehouse. After all these validations, the next step is to monitor the health of the product regularly so that issues can be identified early and resolved quickly without impacting the business.
