Logging to a Central Database

Introduction

Oracle Health Insurance applications and the runtime technology stack support various logging capabilities.

Examples of the logs that are currently available:

  • Security Log: to track (failed) login attempts.

  • Protected Health Information (PHI) log: to track which health insurance-payer employee accesses what member-related information.

  • Dynamic Logic Log: to track Dynamic Logic execution.

  • Application Log: contains informational messages, warnings, and errors. This log contains all messages that the Oracle Health Insurance application logs and that are not present in the Security log, the PHI log, or the Dynamic Logic log.

  • JVM GC Log: garbage collection statistics.

  • Server Output Logs: technical information about the state of specific JVM processes that the application server writes.

All Oracle Health Insurance applications use the Logback framework for logging. Logback supports a wide variety of log appenders; the best known is probably the file appender, which each JVM that runs the application uses to write log files to disk. To get more detailed diagnostic data from the system, adjust the logging level for specific loggers (for example, per package or even per class) in the Logback configuration.
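
For example, a per-package level adjustment in the Logback configuration file might look like the following sketch (the package name is illustrative):

<!-- Raise the level for one package to DEBUG; other loggers keep their configured level -->
<logger name="com.oracle.healthinsurance" level="DEBUG"/>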

The Oracle Health Insurance Logback Database Appender

When each JVM writes data to separate log files, it is relatively hard to get a complete overview. As an alternative, Oracle Health Insurance applications can stream the following Oracle Health Insurance application logs into database tables:

Table 1. The Logback Database Appender

Log                  Database Table            HTTP API Resource
Application log      log_application_events    generic/logapplicationevents
PHI log              log_phi_events            generic/logphievents
Dynamic Logic log    log_dylo_events           generic/logdynamiclogicevents

Access or query logged messages using the HTTP API resource that the last column specifies.
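
For example, a GET request like the following lists application log events (the host and port are illustrative and match the samples later in this document):

GET http://localhost:27041/api/generic/logapplicationevents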

For efficiency reasons, Oracle Health Insurance applications come with a specific Logback Database Appender for the Application and Dynamic Logic log. These loggers initially write to a file-backed buffer queue. The Oracle Health Insurance Database Appender (eventually) persists log events from the file-backed queue to the database.

For PHI access, Oracle Health Insurance applications persist the PHI log event in the database as part of the request, provided that system property ohi.logging.target has the value database.
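
The property follows standard Java system property conventions, for example as a JVM argument; how it is supplied in practice depends on the application server setup:

-Dohi.logging.target=database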

Starting with the 3.21.2.0.0 release of Oracle Health Insurance applications, the Oracle Health Insurance Logback Database Appender is no longer used for logging PHI access.

The Oracle Health Insurance Database Appender assumes that log messages are formatted as JSON documents with a specific structure. It uses the well-known Logstash Logback Encoder for JSON encoding. The Oracle Health Insurance Database Appender applies the following default configuration settings for the encoder:

<encoder class="net.logstash.logback.encoder.LogstashEncoder">
  <providers>
    <mdc/>
    <context/>
    <nestedField>
      <fieldName>markers</fieldName>
      <providers>
        <logstashMarkers/>
      </providers>
    </nestedField>
    <nestedField>
      <fieldName>arguments</fieldName>
      <providers>
        <arguments/>
      </providers>
    </nestedField>
    <pattern>
      { "ohiLevel": "%ohiLevel" }
    </pattern>
    <fieldNames>
      <timestamp>timestamp</timestamp>
    </fieldNames>
  </providers>
</encoder>

When adjusting the default settings, keep the defaults that Oracle Health Insurance applications rely upon. The following example adds a specific configuration for stack traces:

<encoder class="net.logstash.logback.encoder.LogstashEncoder">
  <providers>
    <mdc/>
    <context/>
    <nestedField>
      <fieldName>markers</fieldName>
      <providers>
        <logstashMarkers/>
      </providers>
    </nestedField>
    <nestedField>
      <fieldName>arguments</fieldName>
      <providers>
        <arguments/>
      </providers>
    </nestedField>
    <pattern>
      { "ohiLevel": "%ohiLevel" }
    </pattern>
    <fieldNames>
      <timestamp>timestamp</timestamp>
    </fieldNames>
    <stackTrace>
      <throwableConverter class="net.logstash.logback.stacktrace.ShortenedThrowableConverter">
        <maxDepthPerThrowable>30</maxDepthPerThrowable>
        <maxLength>2048</maxLength>
        <shortenedClassNameLength>20</shortenedClassNameLength>
        <exclude>^sun\.reflect\..*\.invoke</exclude>
        <exclude>^net\.sf\.cglib\.proxy\.MethodProxy\.invoke</exclude>
        <rootCauseFirst>true</rootCauseFirst>
      </throwableConverter>
    </stackTrace>
  </providers>
</encoder>

The following table lists other configuration options for the Oracle Health Insurance Database Appender:

Table 2. Configuration Options for Database Appender

connectionSource (required)

For the Oracle Health Insurance Database Appender to work, it needs access to a database. Make sure to only use a JNDI Connection Source.

Without a connection source, the application does not start: it throws an IllegalStateException with the message "OhiAppender cannot function without a connection source".

logType (required)

The currently supported log types are application and dylo. Without a log type, the application does not start: it throws an IllegalStateException with the message "OhiAppender cannot function without a specific log type".

bufferDir (required)

The Database Appender buffers log messages in a file-backed queue. The system writes the queue files in the specified directory.

If there is no value specified for bufferDir, the application uses the value of system property java.io.tmpdir. If the application cannot write to the directory, the application does not start: it throws an IllegalStateException with the message "Cannot write to path <path>".

drainInterval (optional)

By default, the file-backed queue drains every five milliseconds. Override this value if desired. The minimum is one millisecond.

Specifying a drain interval of less than one millisecond prevents the application from starting: it throws an IllegalStateException with the message "Invalid timeout to drain queue specified: <specified_time_interval>".

drainBatchSize (optional)

By default, messages from the file-backed queue persist in batches with a maximum size of 2000. Override this value if desired. The minimum batch size is one (note that such a value is extremely inefficient).

If the specified batch size is smaller than one, the application does not start: it throws an IllegalStateException with the message "Invalid batch size for draining queue specified: <specified_time_interval>".

rollCycle (optional)

The files for the file-backed queue roll over periodically. To limit disk space usage, the system automatically removes files that are no longer in use. This parameter specifies the roll cycle. The default value is HOURLY, which allows storing 256 million entries per hour (disk space permitting). The alternative value is DAILY (four billion entries per day).

Contact the Oracle Health Insurance development team in the unusual case where the buffer needs to accommodate larger amounts of log entries.

duplicateDuration (optional)

Setting a value for this parameter filters duplicate events with level ERROR. This prevents the system from logging many occurrences of the same ERROR in a short time, which may flood its logging capability.

The application considers an ERROR event a duplicate if an event for the same logger and with the same (raw) message was logged within the last duplicateDuration milliseconds.

This only applies to ERROR messages in the Application log. It is disabled by default.

For example, setting this value to 300000 milliseconds logs an error message once per thread and suppresses the same error message in that thread for the next five minutes.
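
For illustration, the following sketch combines the required elements (see the full samples below) with the optional tuning parameters; the values are illustrative and assume that the options are set as nested elements, like the required ones:

<appender name="applicationAppender"
 class="com.oracle.healthinsurance.loggingsupport.appender.impl.OhiAppender">
    <connectionSource class="ch.qos.logback.core.db.JNDIConnectionSource">
        <jndiLocation>jdbc/policiesUserOhiApplicationDS</jndiLocation>
    </connectionSource>
    <logType>application</logType>
    <bufferDir>/writable/log/storage/buffer/application</bufferDir>
    <!-- Optional parameters with illustrative values -->
    <drainInterval>10</drainInterval>
    <drainBatchSize>1000</drainBatchSize>
    <rollCycle>DAILY</rollCycle>
    <duplicateDuration>300000</duplicateDuration>
</appender>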

The log type parameter for the Appender drives:

  • Proper filtering of log messages. For example, to ensure that PHI log messages do not show up in the Application log.

  • Storing of log messages in the database.

The following paragraphs list the Logback configuration for each of these logs.

Please note that:

  • The Logback configuration samples below omit the encoder, which implies the use of the Oracle Health Insurance default settings for it.

  • The buffer directories in the samples use imaginary locations under /writable/log/storage/buffer/…. Make sure to configure a writable buffer directory. If the directory is not writable, the application logs an IllegalStateException in the managed server log.

Logging Dynamic Logic Execution

Sample configuration for Oracle Health Insurance Database Appender for Dynamic Logic execution logging:

<configuration debug="true" scan="true" scanPeriod="60 seconds">
 <contextListener class="ch.qos.logback.classic.jul.LevelChangePropagator"/>

 <conversionRule conversionWord="ohiLevel"
  converterClass="com.oracle.healthinsurance.utils.logging.logback.OhiLogLevelConverter"/>

 <appender name="dyloAppender"
  class="com.oracle.healthinsurance.loggingsupport.appender.impl.OhiAppender">
     <connectionSource class="ch.qos.logback.core.db.JNDIConnectionSource">
         <jndiLocation>jdbc/policiesUserOhiApplicationDS</jndiLocation>
     </connectionSource>
     <logType>dylo</logType>
     <bufferDir>/writable/log/storage/buffer/dylo</bufferDir>
 </appender>

 <root level="info">
     <appender-ref ref="dyloAppender" />
 </root>
</configuration>

Note that all Dynamic Logic scripts are in package namespace ohi.dynamiclogic. To log any statements from that package, add the following logger:

<logger name="ohi.dynamiclogic" level="trace"/>

Application Log

Sample configuration for Oracle Health Insurance Database Appender for remaining log messages (that must not end up in the PHI log, or the Dynamic Logic log):

<configuration debug="true" scan="true" scanPeriod="60 seconds">
 <contextListener class="ch.qos.logback.classic.jul.LevelChangePropagator"/>

 <conversionRule conversionWord="ohiLevel"
  converterClass="com.oracle.healthinsurance.utils.logging.logback.OhiLogLevelConverter"/>

 <appender name="applicationAppender"
  class="com.oracle.healthinsurance.loggingsupport.appender.impl.OhiAppender">
     <connectionSource class="ch.qos.logback.core.db.JNDIConnectionSource">
         <jndiLocation>jdbc/policiesUserOhiApplicationDS</jndiLocation>
     </connectionSource>
     <logType>application</logType>
     <bufferDir>/writable/log/storage/buffer/application</bufferDir>
 </appender>

 <root level="info">
     <appender-ref ref="applicationAppender" />
 </root>
</configuration>

How to Track Oracle Health Insurance Appender Status

The Oracle Health Insurance Logback Database Appender can write messages about its state to the Oracle Health Insurance logs. Track the status of the Appender by using Logback status listeners and by redirecting the messages it writes to Standard Out and Standard Error to the server log.

Enable Logback Status Listeners for the Oracle Health Insurance Database Appender

Enable the Logback status listener by:

  • Setting the debug flag of the Logback configuration to true, as in the samples above.

  • Attaching a status listener as in the following example:

<configuration>
  <statusListener class="ch.qos.logback.core.status.OnConsoleStatusListener" />
  ...
</configuration>

Refer to the Logback documentation for a complete overview of capabilities. The Oracle Health Insurance Logback Database Appender writes Logback status messages during initialization and shutdown.

How to Track the Logger’s Status Using Logback

The Oracle Health Insurance Database Appender uses separate threads to write messages from the file-backed queue to the database. Within these threads, it writes diagnostic messages using Logback loggers. Log these messages about the Oracle Health Insurance Database Appender to a different appender. The following is a sample Logback configuration that tracks the Oracle Health Insurance Database Appender by writing its messages to a specific file appender:

<conversionRule conversionWord="ohiLevel"
 converterClass="com.oracle.healthinsurance.utils.logging.logback.OhiLogLevelConverter"/>

<appender name="fileAppender" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <encoder>
        <pattern>%d{ISO8601} %ohiLevel %logger{3} - %m%n</pattern>
    </encoder>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
        <fileNamePattern>target/ohiAppenderFile-%d{yyyy-MM-dd}.%i.log</fileNamePattern>
        <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
            <maxFileSize>100MB</maxFileSize>
        </timeBasedFileNamingAndTriggeringPolicy>
    </rollingPolicy>
</appender>

<logger name="com.oracle.healthinsurance.loggingsupport.appender.impl.QueueFileListener"
 level="debug" additivity="false">
    <appender-ref ref="fileAppender"/>
</logger>
<logger name="com.oracle.healthinsurance.loggingsupport.appender.impl.OhiAppender"
 level="debug" additivity="false">
    <appender-ref ref="fileAppender"/>
</logger>
<logger name="com.oracle.healthinsurance.loggingsupport.appender.impl.LoggingEventBatchPersister"
 level="debug" additivity="false">
    <appender-ref ref="fileAppender"/>
</logger>

Access centrally stored log data using HTTP API resources. The following paragraphs provide details for these resources, along with a number of examples.

Steps to Retrieve Log Messages

Make sure users are granted the proper access to the HTTP API resources.

Retrieve the default representation for each resource from the metadata specification, or fetch a sample response by executing a GET request to any of the resources. For example, the default representation for application log events looks as follows:

{"id": "96558",
 "application": "authorizations",
 "applicationVersion": "3.18.1.0.0",
 "eventLevel": "ERROR",
 "eventLogger": "c.o.h.d.d.i.c.p.DataReplicationTargetSynchronizationTaskTypeProcessor",
 "eventMessage": "Failed to retrieve and / or store events, closing data replication extraction task",
 "eventThread": "[STANDBY] ExecuteThread: "18" for queue: "weblogic.kernel.Default (self-tuning)"",
 "hostName": "a_host",
 "instanceName": "authorizations_node1",
 "instanceType": "an_instance",
 "links": [
  {"href": "http://localhost:27041/api/generic/logapplicationevents/96558",
   "rel": "self"
  }
 ],
 "eventTimestamp": {
   "value": "2018-01-11T17:42:27.13+01:00"
  },
 "persistedTimestamp": {
   "value": "2018-01-11T17:42:31.304+01:00"
  }
}

Note that many attributes repeat for other logged messages as well.

When querying log messages, always filter on eventDate, as the system indexes that attribute. As the number of log messages grows, using this indexed access path keeps request response times acceptable.

Example: Steps to Check for Warnings and Errors in the Application Log

Use Query API capabilities to look for specific log messages. The following GET request returns errors or warnings in a specific hour on a specific day, ordered by the eventTimestamp of the messages:

/generic/logapplicationevents?q=eventLevel.in("WARN","ERROR") \
.and.eventDate.gt("2018-01-11T17:00:00").and.eventDate.lt("2018-01-11T18:00:00") \
&orderBy=eventTimestamp:asc

Alternatively, execute it using the following POST request, which limits the returned fields through a resource representation:

Accept: application/vnd.oracle.insurance.resource+json

"resourceRepresentation":
{
"fields":"eventMessage|eventTimestamp"
}

The result looks like the following example, a more concise version of the previous application log event sample:

{"id": "96558",
 "eventMessage": "Failed to retrieve and / or store events, closing data replication extraction task",  "links": [
{"href": "http://localhost:27041/api/generic/logapplicationevents/96558",
   "rel": "self"
  }
 ],
 "eventTimestamp": {
   "value": "2018-01-11T17:42:27.13+01:00"
  }
}

Note the difference between eventDate and eventTimestamp:

  • The precision for the eventDate is seconds. The system indexes eventDate specifically for executing such queries efficiently.

  • The precision for the eventTimestamp includes microseconds and time zone information.

How to Check Access to PHI Data

The application writes an entry to the PHI log when a user accesses an Insured Member's Personally Identifiable Information. This can be through a web service or via the application’s user interface.
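
A query for PHI log events might look like the following sketch, assuming that the generic/logphievents resource supports the same eventDate and eventTimestamp attributes as the application log resource:

/generic/logphievents?q=eventDate.gt("2018-01-11T00:00:00").and.eventDate.lt("2018-01-12T00:00:00") \
&orderBy=eventTimestamp:asc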

Steps to Follow Dynamic Logic Execution

Use the Dynamic Logic log to inspect Dynamic Logic executions.

For every Dynamic Logic execution, the system generates a unique execution ID that all messages logged during that execution carry.

Following is a sample message that the system logs for executing a Dynamic Logic script (edited for readability):

{
  "message" : "execution starts",
  "logger_name" : "ohi.dynamiclogic",
  "level" : "TRACE",
  "markers" : {
    "scriptexecutionid" : "da2e6719-e5d2-4112-85b9-4321929aeed9"
  }
}
{
  "message" : "Binding foo set to bar",
  "logger_name" : "ohi.dynamiclogic",
  "level" : "TRACE",
  "markers" : {
    "scriptexecutionid" : "da2e6719-e5d2-4112-85b9-4321929aeed9"
  }
}
{
  "message" : "result: true of type: class java.lang.Boolean",
  "logger_name" : "ohi.dynamiclogic",
  "level" : "TRACE",
  "markers" : {
    "scriptexecutionid" : "da2e6719-e5d2-4112-85b9-4321929aeed9"
  }
}
{
  "message" : "execution completed",
  "logger_name" : "ohi.dynamiclogic",
  "level" : "TRACE",
  "markers" : {
    "scriptexecutionid" : "da2e6719-e5d2-4112-85b9-4321929aeed9"
  }
}
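
To collect all messages for a single execution from the central database, a query like the following sketch could be used; the attribute name scriptExecutionId is hypothetical and assumes that the execution ID from the markers is exposed as a queryable attribute of generic/logdynamiclogicevents:

/generic/logdynamiclogicevents?q=scriptExecutionId.in("da2e6719-e5d2-4112-85b9-4321929aeed9") \
&orderBy=eventTimestamp:asc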

How to Bundle Log Messages in a Data File Set

Use HTTP API resource generic/logfilesetspecifications to specify parameters for bundling multiple messages in a Data File Set.

Bundling log messages is not available for log type phi.

The following table explains the attributes for a log file set specification:

Table 3. How to Bundle Log Messages in a Data File Set

logType (required; sample value: application or dylo)

The log type for which messages are bundled into a set.

startDateTime (required; sample value: 2018-01-12T00:00:00)

The application selects messages with an eventTimestamp between the specified startDateTime and endDateTime (see the next attribute) for inclusion in the data file set.

endDateTime (required; sample value: 2018-01-14T00:00:00)

The time between startDateTime and endDateTime must not exceed two days by default. Adjust system property ohi.logging.fileset.max.timespan to specify larger periods. Configure the database to accommodate the creation of data file sets that span a larger period.

logLevel (optional; sample value: ERROR)

The log level filter. Specifying:

ERROR results in the selection of messages with level ERROR only.

WARN results in the selection of messages with levels ERROR and WARN.

INFO results in the selection of messages with levels ERROR, WARN, and INFO.

DEBUG results in the selection of messages with levels ERROR, WARN, INFO, and DEBUG.

TRACE results in the selection of messages with levels ERROR, WARN, INFO, DEBUG, and TRACE.

logger (optional; sample value: healthinsurance.loggingsupport.components)

Filter for a specific logger.
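
For example, POST the following JSON payload (a sketch using the attributes above) to bundle the ERROR messages of the Application log for the specified period:

{"logType": "application"
,"startDateTime": "2018-01-12T00:00:00"
,"endDateTime": "2018-01-14T00:00:00"
,"logLevel": "ERROR"
}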

The resource allows creating a log file set specification only. It is impossible to change or delete existing log file set specifications.

After the application accepts the log file set specification request, it starts creating the Data File Set immediately. The application creates data files containing the messages in their original JSON format. A GET request on the returned link retrieves the resource location of the data file set. Note that creating the Data File Set may take some time, depending on the number of messages that need bundling and the workload of the system. The system returns an error message in case the creation of the Data File Set fails.

Oracle Health Insurance applications keep the log file set specification and the resulting Data File Set for two days. The daily purge routine removes any older Data File Sets and the log file set specifications for these.

How to Add Loggers and Set Log Levels

At startup, the application takes the initial Logback configuration from a configuration file. Make sure that a single copy of that configuration is available for all nodes that execute an instance of an Oracle Health Insurance application. Use HTTP API resource generic/loggers to manage loggers:

  • Get an overview of custom loggers by executing a GET request to generic/loggers.

  • Add a logger with a specific log level by executing a POST request to generic/loggers.

  • Update the level of an existing logger by executing a PUT request to generic/loggers/{id}.

  • Unset a logger by executing a DELETE request to generic/loggers/{id}. The logger then inherits the log level of its parent.

The application propagates any changes made via the generic/loggers API to all nodes that execute an instance of an Oracle Health Insurance application.

For example, to add a logger for package com.oracle.healthinsurance.datareplication.service with level DEBUG that is active for 120 minutes after its creation, send the following JSON payload:

{"logger": "com.oracle.healthinsurance.datareplication.service"
,"logLevel": "DEBUG"
,"duration": 120
}

The application automatically removes the logger after the specified 120 minutes. To create a durable logger that the application does not remove automatically, set the duration to 0. The default duration for a logger is one hour. Alternatively, when a logger is no longer needed, remove it by sending a DELETE request for it.
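
For example, the following payload (a sketch that follows the payload shape above) adds a durable logger; the package name and level are illustrative:

{"logger": "com.oracle.healthinsurance.datareplication.service"
,"logLevel": "INFO"
,"duration": 0
}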

The application does not add changes made through the generic/loggers API to the Logback configuration file. Instead, it re-activates durable loggers after restarting.

At startup, the application initializes the specified loggers in the following order:

  1. Loggers that are specified in the Logback configuration file.

  2. Loggers that are defined using the generic/loggers API. Note that customer-defined loggers may overwrite settings for loggers in the Logback configuration file.

The customer must maintain the customer-defined durable loggers.

Steps to Use the Diagnostic Context

Oracle Health Insurance applications log data in context. For example:

  • During Claims processing, the application logs messages in the context of the specific Claim, making it possible to gather all log messages for a specific Claim.

  • Track log messages that the application writes while processing web service requests using a trace ID provided in the request. If the application does not receive a trace ID, then it generates one and returns that in the response (see the sketch after this list).
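
As an illustration, a request that supplies a trace ID might look like the following sketch; the header name X-Trace-Id is hypothetical and only serves to show the mechanism:

GET /api/generic/logapplicationevents HTTP/1.1
Accept: application/json
X-Trace-Id: 7d9c2a1e-5b3f-4f6a-9e2d-0c8b1a2f3e4d    <-- hypothetical header name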

Steps to Purge Log Messages

When an Oracle Health Insurance application starts, it registers a daily job in the database for purging log messages. Frequently purging log messages helps limit the number of log messages that the database stores.

Use HTTP API resource generic/logeventretentionperiods to manage retention periods for the various log types (application, dylo, and phi):

  • Get an overview of specific retention periods by executing a GET request to generic/logeventretentionperiods.

  • Add a retention period for a specific log type by executing a POST request to generic/logeventretentionperiods.

  • Update the retention period for a log type by executing a PUT request to generic/logeventretentionperiods/{id}.

  • Remove a retention period for a log type by executing a DELETE request to generic/logeventretentionperiods/{id}.

For example, POST the following JSON payload for setting a retention period of 15 days for log type application:

{"logType": "application"
,"retentionPeriod": 15
}

By default, Oracle Health Insurance applications enforce a minimum retention period for PHI messages of 1825 days, approximately five years. Adjust system property ohi.logging.phi.min.retentionperiod to change that value.
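
For example, supplied as a JVM argument (a sketch; the value is in days):

-Dohi.logging.phi.min.retentionperiod=2555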

Use HTTP API resource generic/loglevelretentionperiods to override log type-specific retention periods on a per-log-level basis.

For example, POST the following JSON payload for setting a retention period of three days for the application event retention period and level DEBUG:

{"logLevel": "DEBUG"
,"retentionPeriod": 3
,"logEventRetentionPeriod": {
    "logType": "application"
 }
}

The retention period for a level must be between one day and the number of days for the specified event retention period.

For efficiency reasons, log messages are purged in sets or partitions: per day for the Application and Dynamic Logic logs, and per month for the PHI log. Because the system prohibits deleting the last set or partition, messages with an expired retention period may still be stored. Once new messages are logged in more recent sets or partitions, the older ones are purged.