7 RPAS Integration

This chapter covers integrating information across multiple RPAS domains.

Integrating User Dictionaries

While user dictionaries cannot be shared across domains, they can be copied. This process involves exporting the users from one domain into either a users.xml or a users.db file. The users.xml file is easy to write or edit manually. However, because this file is plain text, it cannot be used to store password information. When you import a users.xml file into a domain, you must specify a temporary password that applies to all admin users and another that applies to all non-admin accounts. These passwords automatically expire after their first use. Fusion Client deployments can skip this step by specifying the -noPassword option.

Since this approach is not automation-friendly, an administrator can pre-generate the temporary passwords by converting the users.xml file to a users.db format. This process prompts for the passwords, then hashes them and stores them in an RPAS database. An automation process can then be set up to accept the users.db file without prompting the user for anything. This step should be used only for Classic Client deployments.
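
Because the users.db file produced by this conversion contains hashed temporary passwords, an automation process should treat it as sensitive. The following Python sketch is only an illustration of that precaution, using an assumed staging path: it checks the file's permissions before the (release-specific) import utility is invoked.

    import os
    import stat
    import sys

    # Assumed staging location for the converted user dictionary (hypothetical path).
    USERS_DB = "/staging/users.db"

    # users.db holds hashed temporary passwords, so refuse to hand it to the import
    # step if group or other users can read or modify it.
    mode = os.stat(USERS_DB).st_mode
    if mode & (stat.S_IRGRP | stat.S_IWGRP | stat.S_IROTH | stat.S_IWOTH):
        sys.exit("users.db permissions are too open; restrict access to the domain administrator")

    print("users.db permissions are acceptable; continue with the automated import")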

Integrating Hierarchy and Dimension Data

Hierarchy information is not automatically kept in sync across domains. It can be synchronized manually by exporting and importing hierarchy files. Domains with non-conforming hierarchies can still be synchronized by using filterHier from a master file to remove the non-conforming dimensions.

Files created by exportHier and filterHier inherit the user's default file permissions (umask). The file loaded by loadHier requires only read permission, but the domain's input directory and the "processed" directory under it require write permission because loadHier moves the data files once it completes.
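
A wrapper script can make these permission requirements explicit. The Python sketch below is illustrative only, with assumed domain and file paths: it tightens the umask that exported files would inherit from processes launched by the script, and verifies the access that loadHier needs before the load is attempted.

    import os

    TARGET_DOMAIN = "/domains/target"     # assumed target domain path
    HIER_FILE = "/staging/prod.dat"       # assumed hierarchy file produced by exportHier/filterHier

    # Files created by exportHier and filterHier inherit the umask, so restrict it
    # for any export utilities launched from this process (owner read/write only).
    os.umask(0o077)

    input_dir = os.path.join(TARGET_DOMAIN, "input")
    processed_dir = os.path.join(input_dir, "processed")

    # loadHier needs only read access to the data file, but it moves the file into
    # the processed directory when it completes, so both directories must be writable.
    if not os.access(HIER_FILE, os.R_OK):
        raise PermissionError(f"{HIER_FILE} must be readable by the user running loadHier")
    for path in (input_dir, processed_dir):
        if not os.access(path, os.W_OK):
            raise PermissionError(f"{path} must be writable by the user running loadHier")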

Integrating Measure Data

The RPAS platform stores data within an embedded BTree database located within the domain on the file system. As such, it is necessary to manage the integration of the data within an RPAS domain with other domains or with outside systems through a set of data import and export operations. The primary operations used for this are the loadHier and loadMeasure utilities for importing data and the exportHier and exportMeasure utilities for exporting data.

The RPAS platform supports importing data from and exporting data to text files. These files provide an efficient method of moving large amounts of data into or out of an RPAS domain.

Because of the use of files in the load and export process, users must be aware of conventions regarding the files used for the process and how they interact with file system security. In order for RPAS utilities to import data, that data must be contained within appropriately formatted flat files staged to the input directory of the domain.

The user executing the utilities must have read and write privileges on the files used by the process. The names of the file resources used for these processes must conform to the standards defined for each utility.
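
As a minimal sketch of these conventions, the Python fragment below stages a flat file into a domain's input directory and checks the privileges described above. The file name, extension, and domain path are assumptions for illustration; the actual naming standards and load options are defined in the Administration Guides.

    import os
    import shutil

    DOMAIN = "/domains/mfp"               # assumed domain path
    MEASURE_FILE = "/staging/rsal.ovr"    # assumed flat file name; real names must follow the utility's standards

    input_dir = os.path.join(DOMAIN, "input")

    # The user running the load must have read and write privileges on the files involved.
    if not os.access(MEASURE_FILE, os.R_OK):
        raise PermissionError("the measure file must be readable by the loading user")
    if not os.access(input_dir, os.W_OK):
        raise PermissionError("the domain input directory must be writable by the loading user")

    # Stage the flat file in the input directory, where the load utilities expect it,
    # and keep its permissions restrictive (owner read/write, group read).
    staged = shutil.copy(MEASURE_FILE, input_dir)
    os.chmod(staged, 0o640)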

For more information on the data loading process, see the following documentation:

  • For the Fusion Client, see Chapter 10: Data Management in the Oracle Retail Predictive Application Server Administration Guide for the Fusion Client.

  • For the Classic Client, see Chapter 8: Data Management in the Oracle Retail Predictive Application Server Administration Guide for the Classic Client.

Transfer Data Utility

transferData is a regular command-line utility; execution rights to it should be granted only to the system administrator.

transferData requires both read and write access to both the source and destination domains. transferData acquires locks on the source domain; therefore, online operations on the source are affected while it runs.
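
The sketch below illustrates both points under assumed install and domain paths: it limits execute permission on the transferData binary to its administrative owner and confirms read and write access to both domains before a transfer is scheduled. It deliberately stops short of invoking transferData, whose arguments are release-specific.

    import os
    import stat

    TRANSFER_DATA = "/rpas/bin/transferData"   # assumed install location of the utility
    SOURCE_DOMAIN = "/domains/source"          # assumed source domain path
    DEST_DOMAIN = "/domains/destination"       # assumed destination domain path

    # Only the administrative owner keeps read/write/execute permission on the utility.
    os.chmod(TRANSFER_DATA, stat.S_IRWXU)

    # transferData needs read and write access to both domains, and it locks the
    # source domain while it runs, so check access up front and schedule accordingly.
    for domain in (SOURCE_DOMAIN, DEST_DOMAIN):
        if not (os.access(domain, os.R_OK) and os.access(domain, os.W_OK)):
            raise PermissionError(f"read/write access to {domain} is required for transferData")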

For more information on the transfer data facility, see the following documentation:

  • For the Fusion Client, see Chapter 9: Hierarchy Management in the Oracle Retail Predictive Application Server Administration Guide for the Fusion Client.

  • For the Classic Client, see Chapter 7: Hierarchy Management in the Oracle Retail Predictive Application Server Administration Guide for the Classic Client.

ODBC/JDBC Driver

The RPAS ODBC/JDBC Driver provides a SQL interface to the Oracle RPAS Embedded Database (OREDB) which includes both domain data and workbook data.

This driver presents OREDB as a read-only relational database to ODBC and JDBC client applications for reporting or integration purposes.

The ODBC/JDBC Driver requires authentication with an RPAS user name and password and supports the same position-level security as the regular RPAS Server. SSL can be configured to protect the driver's network communications.
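
Because the driver presents OREDB through standard ODBC and JDBC interfaces, any compliant client can query it once a data source is configured. The Python sketch below uses the pyodbc module with an assumed DSN name, RPAS credentials, and a placeholder table name; the rows returned are subject to the same position-level security that the RPAS Server enforces.

    import pyodbc

    # Assumed DSN configured for the RPAS ODBC driver; UID/PWD are RPAS credentials.
    conn = pyodbc.connect("DSN=RPAS_OREDB;UID=rpas_user;PWD=example_password")
    cursor = conn.cursor()

    # OREDB is read-only, so only SELECT statements are meaningful. The table name
    # below is a placeholder for an actual OREDB view of domain or workbook data.
    cursor.execute("SELECT * FROM example_measure_view")
    for row in cursor.fetchmany(10):
        print(row)

    conn.close()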

ODI

Oracle Data Integrator (ODI) provides a declarative design approach for defining data transformation and integration processes, resulting in faster and simpler development and maintenance. Based on its ELT (Extract, Load, and Transform) architecture, as opposed to the traditional ETL architecture, ODI guarantees the highest possible level of performance for the execution of data transformation and validation processes.

ODI helps with data integration and sharing among heterogeneous hardware platforms and software systems: specifically, data integration between relational databases (such as Oracle DBMS) and RPAS-based applications, including data transfer between an RDBMS and RPAS domains and data transfer and sharing across multiple RPAS domains. ODI is built on several components, all working together around a centralized metadata repository. Among these components are graphical modules that ODI users interact with directly and run-time components (ODI Agents) that run on source and target systems.

General Considerations (applies to all integrations)

  • ODI logs into RPAS domains using a user name and password set up by RPAS's usermgr utility.

  • ODI connects to an RPAS domain using a JDBC protocol through the ODBC data service provided by RPAS. Information about RPAS's ODBC data service can be found in the Oracle Retail Predictive Application Server Administration Guide in section "RPAS ODBC/JDBC Driver".

  • RPAS domains are read-only to ODI; ODI cannot modify domains except by running the RPAS server's loadMeasure utility.

  • When ODI reads data from an RPAS domain, it uses the domain's security, including user-level and dimension-level security settings.

  • ODI keeps the following data in Oracle tables:

    • Configuration information, including domain paths and usernames/passwords.

    • Activity logs, including time of data transfer, names of measures, number of records transferred, and error messages.

RPAS Application to RPAS Application Integration Considerations

  • ODI creates an OVR or RPL file in the "input" folder of the receiving RPAS domain and then runs the RPAS server's loadMeasure utility.

  • Application-specific information (for RPAS-to-RPAS integration) is in the RPAS Apps ODI Implementation Guide.

RPAS Domain to Oracle DBMS Considerations

  • ODI logs into Oracle using an Oracle schema/password pair and uses the security settings corresponding to that schema.

  • ODI can read from and write to the Oracle tables, although the current integration, MFP-to-RA, only writes to Oracle tables.

  • More information about MFP-to-RA integration is in the Oracle Retail Analytics Installation Guide.

RETL

RETL is an Oracle program; the name is an acronym for Retail Extract, Transform, and Load. It is also called "rfx". RETL is used to transform data from one system's format to another. It needs two sets of XML schema files: one set describes the format of the incoming data, and the other describes the format into which the data will be transformed. A RETL data transform is typically invoked from within a shell script.

RETL requires both read and write access to both the source and destination domains.

The integration between RMS and RPAS/RDF is accomplished by one system exporting data as flat files, transforming the exported file format to match the target system, and then loading the transformed data files into the target system. RETL performs this transformation from one system's format to the other.
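
Although the transform is normally driven from a shell script, the Python sketch below shows equivalent access checks and an invocation. The domain paths, flow file name, and the rfx command-line option shown are assumptions for illustration, not the documented RETL interface.

    import os
    import subprocess

    SOURCE_DOMAIN = "/domains/rms_export"      # assumed location of the exported flat files
    DEST_DOMAIN = "/domains/rdf"               # assumed RDF domain path
    FLOW_FILE = "/retl/flows/rms_to_rdf.xml"   # assumed flow file referencing the two schema sets

    # RETL requires read and write access to both the source and destination domains.
    for path in (SOURCE_DOMAIN, DEST_DOMAIN):
        if not (os.access(path, os.R_OK) and os.access(path, os.W_OK)):
            raise PermissionError(f"read/write access to {path} is required for RETL")

    # Invoke the RETL (rfx) transform; "-f <flow file>" is assumed here for illustration.
    subprocess.run(["rfx", "-f", FLOW_FILE], check=True)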

The RMS to RDF integration is composed of these shell scripts, the schema files, and the RETL program.