Oracle® Retail Pricing Operations Guide
Release 22.1.301.0
F60048-03

4 Backend System Administration and Configuration

This chapter of the operations guide is intended for administrators who provide support and monitor the running system.

Supported Environments

See the Oracle Retail Price Management Installation Guide for information about requirements for the following:

  • RDBMS operating system

  • RDBMS version

  • Middle tier server operating system

  • Middle tier

  • Compiler

Exception Handling

The two primary types of exceptions within the Pricing system are the following:

  • System exceptions

    System exceptions, such as a lost server connection or a database failure, can bring the system to a halt.

  • Business exceptions

    A business exception indicates that a business rule has been violated; for example, a user tries to approve a price change that would cause a negative retail. Most exceptions that arise in the system are business exceptions.

Logging Configuration

Logging within Pricing utilizes the ADF built-in logging framework to log system messages and exceptions. This framework is embedded in the application code to allow for configurable logging to suit the needs of the retailer.

Note that batch client programs log messages and errors to a log file configured in batch_logging.properties, while server logging uses the standard WebLogic logging infrastructure.

ADF Logging

The ADF logger is a wrapper class around the standard Java logger class that adds ADF convenience methods; all other Java logger methods remain available as well. The following logging levels are possible:

  • SEVERE (highest value)

  • WARNING

  • INFO

  • CONFIG

  • FINE

  • FINER

  • FINEST (lowest value)


Note:

In a production environment, the logging level should be set to SEVERE or WARNING so that system performance is not adversely impacted.
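The level ordering above is that of java.util.logging, which the ADF logger wraps. A minimal sketch of level-based filtering using plain java.util.logging (the LevelDemo class name and logger name are illustrative, not part of the product):

```java
import java.util.logging.ConsoleHandler;
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.Logger;

public class LevelDemo {
    public static void main(String[] args) {
        Logger logger = Logger.getLogger("com.example.pricing");
        logger.setUseParentHandlers(false);

        // Production-style setting: only WARNING and above reach the handler.
        Handler handler = new ConsoleHandler();
        handler.setLevel(Level.WARNING);
        logger.setLevel(Level.WARNING);
        logger.addHandler(handler);

        logger.severe("lost connection to server");   // logged
        logger.warning("retrying connection");        // logged
        logger.info("routine status message");        // filtered out
        logger.fine("detailed trace message");        // filtered out

        // The numeric ordering behind the level list above:
        // SEVERE is the highest value, FINEST the lowest.
        System.out.println(Level.SEVERE.intValue() > Level.WARNING.intValue());
        System.out.println(Level.FINE.intValue() > Level.FINEST.intValue());
    }
}
```

Setting the level on both the logger and the handler matters: each filters independently, so the more restrictive of the two wins.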

Batch Client Logging

The Pricing batch client Java programs write error messages and warnings to a log file configured in batch_logging.properties. The logging mechanism is based on the java.util.logging FileHandler API.

By default, the log file is created in the logs folder under the user home directory (%h), with the name batch_log appended with a unique number (%u). See the description of the batch_logging.properties file below for more details.

Batch_logging Properties

The batch_logging.properties file holds all of the information relevant to logging for batch clients.

Table 4-1

handlers

    A comma-delimited list of handler class names that are added to the root logger. The default handlers are java.util.logging.FileHandler and java.util.logging.ConsoleHandler (with a default level of INFO).

.level

    Sets the default log level for all loggers. The default log level is INFO.

java.util.logging.FileHandler.pattern

    The log file name pattern. The default is %h/../logs/batch_log%u.log, which means that the file is named batch_log%u.log, where:

    %h is the value of the "user.home" system property

    %u is a unique number used to resolve conflicts between simultaneous Java processes

java.util.logging.FileHandler.limit

    The maximum size of each log file, in bytes. If this is 0, there is no limit. The default is 1000000 (approximately 1 MB). Logs larger than this limit roll over to the next log file.

java.util.logging.FileHandler.count

    The number of log files to use in the log file rotation. The default is 365 (which produces a maximum of 365 log files).

java.util.logging.FileHandler.level

    Sets the log level for all FileHandler instances. The default log level is FINEST.

java.util.logging.ConsoleHandler.level

    Sets the log level for all ConsoleHandler instances. The default log level is FINEST.

java.util.logging.FileHandler.append

    Specifies whether the FileHandler should append to any existing files. The default is true.
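Putting the defaults from Table 4-1 together, a batch_logging.properties file that simply restates the documented default values would look like the following:

```
handlers=java.util.logging.FileHandler, java.util.logging.ConsoleHandler
.level=INFO
java.util.logging.FileHandler.pattern=%h/../logs/batch_log%u.log
java.util.logging.FileHandler.limit=1000000
java.util.logging.FileHandler.count=365
java.util.logging.FileHandler.level=FINEST
java.util.logging.ConsoleHandler.level=FINEST
java.util.logging.FileHandler.append=true
```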


Configurable GTTCapture

The conflict checking engine within Pricing makes extensive use of Global Temporary Tables (GTTs), which provides a performance gain but means that transactional data is lost when the process completes. This makes issues in the conflict checking engine that involve GTT data difficult to research and recreate.

A configuration within Pricing allows for capturing this GTT data while processing through the conflict checking engine in an autonomous fashion so that the data is available for review after the process has completed. Data can be captured from the following set of tables:

  • RPM_FUTURE_RETAIL_GTT

  • RPM_PROMO_ITEM_LOC_EXPL_GTT

  • RPM_CUST_SEGMENT_PROMO_FR_GTT

  • RPM_CLEARANCE_GTT

  • RPM_FR_ITEM_LOC_EXPL_GTT

The system is designed to capture data from any of these GTTs based on configuration. Data can be captured from one or more of these tables during conflict checking, starting at a configurable point and, optionally, beyond it. There are five options for the starting point when capturing GTT data:

  • GTT Initial Population

  • Merge Price Event into Timelines

  • Roll Forward

  • Payload Population

  • Future Retail Purge

The system also allows GTT data to be captured either for a specific user or for any user. When a user ID is specified, it must match the user defined in LDAP, including case.

All configuration is handled through direct updates to the RPM_CONFIG_GTT_CAPTURE table. It is possible to set up all the necessary configuration (starting point, specific user, whether to capture data beyond the start point, and which tables to capture data from) and still disable the capture altogether by setting the ENABLE_GTT_CAPTURE field to 'N'. Once the GTT capture configuration is established and enabled on the RPM_CONFIG_GTT_CAPTURE table, nothing more needs to be done other than to process a price event through conflict checking.
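Because configuration is by direct table update, disabling capture might look like the following sketch. Only the ENABLE_GTT_CAPTURE column is documented here; consult the RPM_CONFIG_GTT_CAPTURE table definition for the remaining configuration columns (starting point, user, capture tables) before updating them.

```sql
-- Disable GTT capture entirely; set to 'Y' to re-enable.
-- (Column name taken from the text above; the other configuration
-- columns are not documented here and are not shown.)
UPDATE rpm_config_gtt_capture
   SET enable_gtt_capture = 'N';
COMMIT;
```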

When the system captures data from the GTT tables, it always captures all data in the specified tables at the starting point; if capture beyond the starting point is enabled, it then captures only updated or newly created data for each subsequent statement. In that scenario the evolution of a record, and the impact of each statement, is readily available for review and troubleshooting.

A batch process (PurgeGttCaptureBatch.sh) purges all data captured from the GTT tables so that only pertinent data is in place at any given time. This purge process does not have to run before capturing GTT data in conflict checking; however, capturing this data is expected to produce a large volume of data in many scenarios. Purging this data before running conflict checking again makes the newly captured data easier to examine.