Leading Practices

Customizations

Changes or modifications to Retail Insights delivered code, or development of new code, are considered customizations. Retail Insights does not support custom code developed by clients unless the issue related to the customization can be recreated using Retail Insights delivered objects. Customizations are also not supported in the cloud service model, so the following sections apply only to on-premise installations. The following recommendations will help you maintain Retail Insights code:

  • Naming convention: use a clear, consistent naming convention when customizing Retail Insights delivered code or building new code in the Retail Insights environment.

    This strategy makes custom code easy to identify and also helps when merging a retailer's Retail Insights repository with future releases of the Retail Insights repository. Customizations to Retail Insights provided ODI scripts or the Oracle Analytics repository can be lost if the customized code reuses object or script names that Retail Insights already uses.

  • As a best practice, keep all documentation up to date to capture any changes or new code developed at a site. For example, if a table structure has been customized, create or update a custom Data Model Guide with these changes.

  • While customizing the RPD, do not make changes directly to the original shipped RPD. Make a copy of the original RPD and apply your changes to the copy, which becomes the modified version. This is useful when applying patches from future releases of Retail Insights through the Oracle Analytics merge utility. For more details, refer to the Managing Oracle BI Repository Files chapter of the Oracle Analytics Metadata Repository Builder's Guide.

ODI Best Practices

For customizations to existing ODI code or while creating new ODI code, refer to the ODI Best Practices Guide included with your product code.

Oracle Analytics Best Practices

  • Create aliases for objects in the physical layer to improve usability.

  • Do not design the business layer model as a snowflake model.

  • Any level key based on identifier (ident) columns must be set to non-drillable.

  • In the presentation layer, fact folders (presentation tables) must contain only metrics and dimension folders (presentation tables) must contain only attributes.

  • For a development environment, it is recommended to use a multi-user environment. For more information on setting up a multi-user environment, refer to the Completing Setup and Managing Oracle BI Repository Files chapter of the Oracle Analytics Server Administration Guide.

Batch Schedule Best Practices

The following best practices are recommended for Retail Insights:

Automation

The batch schedule should be automated as per the Oracle Retail Insights Operations Guide. Any manual intervention should be avoided. POM callbacks and external dependencies can be configured to link the RI batch with an on-premise scheduling system.
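As an illustration, the decision an on-premise scheduler makes when it receives a batch-completion callback can be sketched as follows. The payload shape, job name, and actions below are hypothetical; consult the Oracle Retail Insights Operations Guide for the actual POM callback contract.

```python
import json

# Hypothetical callback payload and actions -- the real POM callback
# fields differ; this only illustrates the linking idea.
def handle_batch_callback(payload: str) -> str:
    """Decide what the on-premise scheduler does next after a
    batch-completion callback from the cloud scheduler."""
    event = json.loads(payload)
    if event.get("status") == "COMPLETED":
        # Release the external dependency so downstream jobs can start.
        return f"release-dependency:{event['job']}"
    if event.get("status") == "FAILED":
        return f"alert-operator:{event['job']}"
    return "ignore"

print(handle_batch_callback('{"job": "W_RTL_SLS_F_LOAD", "status": "COMPLETED"}'))
# release-dependency:W_RTL_SLS_F_LOAD
```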

Recoverability

Set up the batch schedule so that the batch can resume from the point where it failed. In the cloud, this is managed automatically as part of the POM batch scheduler.
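In an on-premise installation, this resume-from-failure behavior can be approximated with a checkpoint file that records completed steps, as in this minimal sketch (step names are hypothetical; in the cloud, POM provides this behavior):

```python
import json
import os

# Hypothetical batch steps, in execution order.
STEPS = ["extract", "load_staging", "load_facts", "aggregate"]

def run_batch(checkpoint_file, runner):
    """Run each step once; on restart, skip steps already recorded
    as completed in the checkpoint file."""
    done = []
    if os.path.exists(checkpoint_file):
        with open(checkpoint_file) as f:
            done = json.load(f)
    for step in STEPS:
        if step in done:
            continue  # resume: skip steps that succeeded before the failure
        runner(step)  # may raise; completed steps stay recorded below
        done.append(step)
        with open(checkpoint_file, "w") as f:
            json.dump(done, f)
```

On a rerun after a failure, only the remaining steps execute, which is the behavior described above.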

Retail Insights Loading Batch Execution Catch-Up

Loading batch (SIL) execution catch-up can be achieved by backing up staging table data. The following scenarios explain when users can benefit from this. Note that this approach is considered a customization of Retail Insights programs and is not supported.

  • Catch-up: When Retail Insights is not ready for implementation, users can use history data stored in the staging backup tables (explained later in this section) to catch up on data loading once the system is implemented or becomes available.

  • Database downtime: When the Retail Insights database systems are down, users can use history data stored in the staging backup tables (explained later in this section) to load the data once the system becomes available.

The following steps show how the loading batch execution catch-up solution works:

  1. Create Retail Insights staging tables for each corresponding staging table using the DDL for the staging tables provided. For more information, see the Oracle Retail Insights Administration Guide.

  2. Set up the Retail Insights ODI Universal Adapter programs so they are ready to be executed against source files provided by the source system and to load into the staging tables created in the previous step.

  3. Create a one-to-one backup table for each staging table. This backup table uses the same DDL as the staging table, with one additional field (load_date) that can be mapped to the source system business date. This date is used as a filter when the backup data is ready to be moved back to the staging table.

  4. Execute the Universal Adapter program to populate the staging tables created in the first step.

  5. Move staging table data into backup staging table with the correct business date. By default, Retail Insights does not provide the backup table DDL or backup data population scripts.

  6. Repeat the process of executing the Universal Adapter program and backing up to the staging backup tables periodically (daily, every two days, weekly, and so on) until the Retail Insights systems are available. Note that the staging table contains only the current business day's data, while the backup staging table contains data for every business date on which the program was executed.

  7. Once the Retail Insights systems become available, move the backup staging table data to the staging tables, one day at a time. This can be done by using 'load_date' as a filter on the source backup staging table data.

  8. Once the data for one business date is moved to the staging table, the SIL programs and corresponding PLP programs must be executed to load the data into the final data warehouse tables.

  9. Repeat the process of moving data from the backup staging tables to the staging tables and executing the SIL and PLP programs until the data for all business dates has been loaded into the fact and dimension tables.
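The backup-and-replay flow above can be sketched end to end. The following uses SQLite purely for illustration; the real staging DDL, table and column names, and the SIL/PLP programs are Retail Insights artifacts, and all names below are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Hypothetical staging table and its one-to-one backup with load_date (step 3).
conn.execute("CREATE TABLE w_rtl_sls_fs (item TEXT, qty INTEGER)")
conn.execute("CREATE TABLE w_rtl_sls_fs_bkp (item TEXT, qty INTEGER, load_date TEXT)")

def backup_staging(load_date):
    # Step 5: copy the current day's staging rows into the backup table,
    # stamping them with the business date, then clear staging.
    conn.execute(
        "INSERT INTO w_rtl_sls_fs_bkp SELECT item, qty, ? FROM w_rtl_sls_fs",
        (load_date,))
    conn.execute("DELETE FROM w_rtl_sls_fs")

def restore_staging(load_date):
    # Step 7: move one business date back into the staging table,
    # using load_date as the filter.
    conn.execute("DELETE FROM w_rtl_sls_fs")
    conn.execute(
        "INSERT INTO w_rtl_sls_fs SELECT item, qty FROM w_rtl_sls_fs_bkp "
        "WHERE load_date = ?", (load_date,))

# Simulate two business days arriving while the warehouse is unavailable (step 6).
for day, rows in [("2024-01-01", [("A", 5)]), ("2024-01-02", [("A", 7), ("B", 2)])]:
    conn.executemany("INSERT INTO w_rtl_sls_fs VALUES (?, ?)", rows)
    backup_staging(day)

# Once the system is available, replay one day at a time (steps 7-9);
# the SIL and PLP programs would run after each restore.
restore_staging("2024-01-02")
print(conn.execute("SELECT COUNT(*) FROM w_rtl_sls_fs").fetchone()[0])  # 2
```

Note that, as stated above, Retail Insights does not provide the backup table DDL or population scripts; this sketch only illustrates the mechanics.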

High Availability

Depending on your specific requirements, and to improve performance, a reporting mirror (an exact copy of the existing data warehouse) can be created. With this approach, one database is used for ETL processes while the second database instance is used by users to run their reports. There are several ways of creating a database mirror (database-level, operating-system-level, and hardware-level solutions). Consult your IT resources or database administrator to evaluate the available options. If this approach is adopted, you must run your queries from the reporting mirror, not from the core data warehouse. Take the following into consideration:

  • Consider this approach for large data warehouse implementations.

  • Creating data marts can be a good option when implementing mirroring.

  • A user notification mechanism should be built to notify users after the data has been refreshed on the mirror.
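A minimal sketch of such a notification hook, with hypothetical names, could look like this:

```python
import datetime

def refresh_notice(refreshed_at: datetime.datetime, users):
    """Build the message sent to each subscribed user once the mirror
    refresh completes. The message format and user list are illustrative."""
    msg = f"Reporting mirror data refreshed as of {refreshed_at:%Y-%m-%d %H:%M}"
    return [(user, msg) for user in users]
```

In practice the delivery channel (email, dashboard banner, and so on) is site-specific.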

Advantages

  • High availability of the data warehouse. While the batch is running, users access the mirror; the only downtime is when data is copied from the core data warehouse to the mirror.

  • There are no conflicts between user queries and the ETL batch schedule.

Disadvantages

  • Storage requirements are increased.

  • Additional database maintenance is required.

Batch Efficiency

Revisit batch timings periodically to identify candidates for performance improvement.