Oracle® Retail Predictive Application Server Cloud Edition Administration Guide
Release 22.2.401.0
F72005-01

19 Logging Framework

The logging framework improves the existing log output to provide substantially more data. It also provides functionality to convert the server logs to json logs, which can then be consumed by modern cloud-based logging frameworks such as Kibana.

Log File and Format

Each server log has both a regular log file with extension .log and an accompanying metadata file. For the task log, the metadata file is task_result.json. For the user's log, the metadata file is the log file with an extra .json extension.

The json file includes the following entries (an example file appears after this list):

  • taskId. This is derived from the task data. The taskId is optional; the logs in the user's directory do not contain it.

  • sessionId. The session ID for the log. Optional.

  • userId. The user ID of the user who submits the task or starts the session.

  • source. This can be derived from the task data. For logs in the user directory, the source is convoserver.

  • status. The status of the task or session. Values are success, failed, and aborted.
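
For example, a task's metadata file (task_result.json) might contain entries similar to the following. The values are illustrative only, and the exact contents depend on the task:

    {
      "taskId": "task_1234",
      "sessionId": "session_5678",
      "userId": "adm_user",
      "source": "wbbatch",
      "status": "success"
    }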

In the log file, in addition to the log message, each log line must have the following components:

  • TimeStamp. The timestamp is in the format 2021-08-30T14:56:55.357Z.

  • Severity. The log level. It is the first character after the opening < in each log line: D is for Debug, P is for Profile, U is for Audit, I is for Information, W is for Warning, E is for Error, N is for None, and B is for Basic. The log level can be set manually as a PDS property for all users using the OAT task. See the task List/Set/Unset PDS environment variable in Chapter 5, "Online Administration Tools" for setting and unsetting the log level.

  • Role. The intended role (audience) for the log. This entry is important for errors and exceptions. In the log file, the role is logged as r:X, where X is D, S, C, or U: D indicates Developer, S indicates Support, C indicates Customer, and U indicates User. The single-character value keeps the log line as short as possible.

  • Operation. Optional. In the log file, the operation is logged as op:XXXX. If omitted, it defaults to a question mark (?), which usually indicates that there was no valid operation at the time of logging (for example, logging generated during program start or exit). The valid Operations include Workbook Build, Open, Calc, Refresh, Commit, Custom Menu, Batch (Generic), Batch Calc, Batch Load Dim, and Batch Load Fact.

If the log represents an exception or error, the following information is also logged; the log line markers described here are illustrated after this list.

  • MsgCode. This field is optional and is for error logs only. It is displayed in the format [Component]-[Sub-component]-[Number] (for example, UIS-LOGN-0017). The MsgCode is attached to the end of the translated message with a prefix of msgcode:.

  • InstanceID. In addition to the error code, each exception has a unique UUID ErrorInstance code generated. This is used to identify and locate the error in the logs. The instance ID is attached to the end of the translated message with the prefix instanceid:.
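
For reference, these are the markers to look for when scanning a log line. The role, operation, and instance ID values below are illustrative only, and the position of each marker within the full log line depends on the release:

    <E                       severity (the first character after the opening <); E is Error
    r:S                      role; S is Support
    op:Calc                  operation
    msgcode:UIS-LOGN-0017    message code, appended to the end of the translated message
    instanceid:123e4567-e89b-12d3-a456-426614174000    error instance UUID, appended to the end of the translated message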

Access to Logs

The user can access the log files using the Task Status Dashboard, shown in Figure 19-1.

If the user highlights a particular task, the buttons Download Log and Watch Live Log will be enabled. Click Download Log to start a download process that will download the complete task log. Click Watch Live Log to see a pop-up showing the beginning of the log, as shown in Figure 19-2.

The window displays the first section of the task's log. Click Next to display the next N lines of logs. Use First and Last to show the current first or last section of the log.

If the task has not yet finished, the user can enable the Auto-Refresh slide control; the display will then automatically refresh every few seconds to show the latest logs.

logAggregator Utility

The logAggregator utility translates the plain RPAS logs into json strings that can be consumed by ELK. The utility runs in the background on a timer. Every n seconds, it wakes up and checks the output directories in the RDM for any updated log files.

If any log files have been updated since the last time it checked, the utility converts the newly generated logs into json line by line, injecting extra json fields as it sees fit, and writes the json objects to the output console for ELK to ingest.

logAggregator Syntax / Usage

logAggregator -pds $RDM_PATH -interval $INTV

The interval argument specifies how long the utility waits before it wakes up again. If the interval is too long, it may force the utility to process too much data at once. If the interval is too short, it may impact overall performance.
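
For example, the following command starts the utility against an RDM and wakes it up every 30 seconds; the path is illustrative only, and the interval is assumed to be expressed in seconds, as described above:

    logAggregator -pds /u01/retail/rdm -interval 30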

The utility must be started when the container starts, and it must run until the container stops.

Watch Log Directories on Wake up

Once the utility wakes up, it monitors the RDM output directory for updated log files. For each log file it finds, it also looks for the json metadata file. It starts to process the log file once the metadata is provided, and stops processing once the status entry in the metadata indicates that the process has completed successfully or has failed. Once logAggregator determines that the logging process is complete and has processed all the log lines in the log file, it removes the log file from the disk.
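
The behavior described above can be pictured with the following conceptual sketch. This is not the actual logAggregator implementation; the metadata file naming, the way metadata fields are merged into each output record, and the function itself are simplifying assumptions made for illustration only:

    import json
    import os
    import time

    def aggregate(rdm_output_dir, interval_seconds):
        # Conceptual sketch only; not the real logAggregator.
        offsets = {}  # bytes already emitted for each log file
        while True:
            for name in os.listdir(rdm_output_dir):
                if not name.endswith(".log"):
                    continue
                log_path = os.path.join(rdm_output_dir, name)
                meta_path = log_path + ".json"  # simplified metadata naming
                if not os.path.exists(meta_path):
                    continue  # wait until the metadata file is provided
                with open(meta_path) as meta_file:
                    meta = json.load(meta_file)
                with open(log_path) as log_file:
                    log_file.seek(offsets.get(log_path, 0))
                    while True:
                        line = log_file.readline()
                        if not line:
                            break
                        # Emit each newly written log line as a json object.
                        print(json.dumps({**meta, "msg": line.rstrip()}))
                    offsets[log_path] = log_file.tell()
                if meta.get("status") in ("success", "failed", "aborted"):
                    # Logging is finished and all new lines were emitted; remove the file.
                    os.remove(log_path)
                    offsets.pop(log_path, None)
            time.sleep(interval_seconds)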

Generate Json Log Entries

The logAggregator utility translates each log line into a json object, based on the information provided in the log and the metadata. Once the json object is generated, the logAggregator will print the json to standard output for ELK.

Json Log Format

The logAggregator utility translates each log line into a json object with the following entries (an example object appears after this list):

  • id. The transaction ID that identifies the origin of the log. This can be the taskId if the log is for a task, or sessionId if it is for a user session.

  • userid. The login user or the user who started the task.

  • timestamp. The timestamp of the log line, in the required format.

  • source. The source of the log. This can be a Classname (Java/C++) or a binary name (convoserver, mace, wbbatch, loadDimData, loadFactData, registerApp, and so on).

  • severity. The log level.

  • role. The roles to which the log is targeted.

  • msgcode. For error logging. A message code in the format [Component]-[Sub-component]-[Number] (for example, UIS-LOGN-0017, which indicates the login sub-component of the UI Server).

  • instanceid. For error logging. A unique UUID for each instance of a logged message.

  • msg. The actual log message, which can contain the log message as well as an exception trace.

  • operation. The type of operation that produced the log (for example, WBBuild, Calc, Commit, Refresh, CustomMenu action, Batch, and so on). This helps the user identify the functional area, which cannot be identified by the source field alone, as most RPASCE logs are generated by the convoserver utility.
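
For example, a single error log line from a workbook calculation might be emitted as a json object similar to the following. All values are illustrative only:

    {
      "id": "task_1234",
      "userid": "adm_user",
      "timestamp": "2021-08-30T14:56:55.357Z",
      "source": "convoserver",
      "severity": "E",
      "role": "S",
      "msgcode": "UIS-LOGN-0017",
      "instanceid": "123e4567-e89b-12d3-a456-426614174000",
      "msg": "Example error message, possibly followed by an exception trace",
      "operation": "Calc"
    }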