Java SDK

The Oracle Communications Unified Assurance Java Software Development Kit (SDK) provides a Java microservice template and sample microservice. The template provides built-in capabilities so that you can focus on adding functionality while referring to the sample microservice for Unified Assurance design patterns.

Java SDK Overview

The Java microservice template provides the following built-in capabilities:

  • Logging

  • Configurations

  • Pulsar connectivity

  • MySQL connectivity

  • Neo4j connectivity

  • Instrumentation

  • Redundancy

The java-ms-example sample microservice provides an example of how to leverage the built-in capabilities of the Java microservice template. It uses the standard Unified Assurance microservices design patterns, with the following application stages:

  1. Initialization: The application enables the base capabilities, such as logging, configurations, internal metrics, and internal connections such as Pulsar and databases. If any of the initialization tasks fail, the application is brought down gracefully. The Java SDK provides the majority of the initialization tasks ready to use so that the application is ready for the next stage.

    You can choose the list of initialization tasks based on the application's needs, but logging and instrumentation are required at minimum. You can also enable a redundancy check and establish connections to external applications. See "About Redundancy" for further instructions.

  2. Data collection and processing: The application collects data, then transforms and enriches it to conform to the applicable schema. The application then publishes the collected and processed data to downstream and persistence applications, as configured in the application logic.

  3. Graceful shutdown: The application releases all resources and gracefully shuts down wherever possible.

Both the template and the sample microservice are included in the sdk-lib package, which is installed with Unified Assurance in the $A1BASEDIR/bin/Package directory. See "Installing the SDK" for information about installing the sdk-lib package.

Java Microservice Template Details

This section provides more details about the microservice template capabilities.

About Logging

The logging capability is handled by the Apache Log4j utility. The following files control logging configurations:

  • Layout.json: Defines the log message patterns. See "About Log Message Patterns".

  • log4j2.xml: Defines the loggers, log levels, and appenders.

When the application starts up, logging is automatically initialized at the level set in the LOG_LEVEL environment variable. If the variable is not set, the default of INFO is used.
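The fallback behavior described above can be sketched in plain Java. This is an illustrative example only; the method name resolveLogLevel is not part of the SDK.

```java
import java.util.Map;

public class LogLevelExample {
    static final String DEFAULT_LEVEL = "INFO";

    // Returns the configured level, falling back to INFO when unset or blank.
    static String resolveLogLevel(Map<String, String> env) {
        String level = env.get("LOG_LEVEL");
        return (level == null || level.isBlank()) ? DEFAULT_LEVEL : level;
    }

    public static void main(String[] args) {
        System.out.println(resolveLogLevel(System.getenv()));
    }
}
```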

About Log Message Patterns

Log message patterns for the application are defined in the Layout.json file, which contains the following:

{
   "@timestamp" : {
      "$resolver" : "timestamp",
      "pattern" : {
         "format" : "yyyy-MM-dd'T'HH:mm:ss.SSSSSS'Z'",
         "timeZone" : "UTC"
      }
   },
   "app" : "${env:APP_NAME}",
   "appId" : "${env:APP_ID}",
   "level" : {
      "$resolver" : "level",
      "field" : "name"
   },
   "message" : {
      "$resolver" : "message",
      "stringified" : true
   }
}

To maintain a consistent log format across all Unified Assurance microservices, Oracle recommends against changing the Layout.json file.

Changing the Log Level

By default, the log level is set to INFO at the root level. You can configure it by setting the LOG_LEVEL variable to one of the standard Log4j levels: OFF, FATAL, ERROR, WARN, INFO, DEBUG, TRACE, or ALL.

Using Different Log Levels by Java Package

You can optionally use different log levels for different Java library packages within the application. For example, you may need a finer degree of logging to troubleshoot an issue, but extensive logs from some packages create too much noise. To filter out the noise, you can set the log level for the noisy packages to a less detailed level, like ERROR, while setting the root log level, which applies to all other packages, to DEBUG.

You achieve this by adding individual loggers for different packages in log4j2.xml.

To add loggers to log4j2.xml:

  1. Open log4j2.xml in an editor.

  2. Inside the Loggers element, add a Root element for the root-level logger and a new Logger element for each package or group of packages you want to control individually:

       <Loggers>
         <Root level="${env:LOG_LEVEL}">
             <AppenderRef ref="ConsoleAppender"/>
         </Root>
         <Logger name="org.package1, org.package2" level="${env:PKG_LOG_LEVEL}">
             <AppenderRef ref="ConsoleAppender"/>
         </Logger>
         <Logger name="org.package3" level="${env:PKG3_LOG_LEVEL}">
             <AppenderRef ref="ConsoleAppender"/>
         </Logger>
       </Loggers>
    

    where:

    • org.package1, org.package2, and org.package3 are the Java packages you are configuring logs for, such as io.prometheus or org.neo4j.driver.

    • PKG_LOG_LEVEL and PKG3_LOG_LEVEL are the environment variables you will use to configure the log levels for the packages.

  3. Save and close the file.

  4. Open the values.yaml file for the application's Helm chart.

  5. Inside the ConfigData section of the file, add and set the new log variables. For example:

    PKG_LOG_LEVEL: DEBUG
    PKG3_LOG_LEVEL: ERROR
    
    This sets the log level for the packages defined in log4j2.xml.

About Configurations

The configurations capability controls the application's connections to internal and external systems and databases, including schemas and shards.

The application loads configurations from:

  • Environment variables set in the values.yaml file of the application's Helm chart.

  • The /config/Assure1.yaml file, which provides connectivity details for Unified Assurance databases.

Use the following guidelines when you add new configurations as environment variables:

!! note When the application loads the configurations from values.yaml, they are prefixed with app. For example, the STREAM_INPUT variable would be loaded as app.stream.input.
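The prefixing rule in the note can be sketched as a simple name transformation. The method name envKeyToProperty is illustrative; the SDK's actual implementation may differ.

```java
public class ConfigKeyExample {
    // Converts an environment-variable name such as STREAM_INPUT into the
    // property key the application sees, for example app.stream.input.
    static String envKeyToProperty(String envKey) {
        return "app." + envKey.toLowerCase().replace('_', '.');
    }

    public static void main(String[] args) {
        System.out.println(envKeyToProperty("STREAM_INPUT")); // app.stream.input
    }
}
```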

The following example shows the configuration to stream data from an external database using TLS certificates. STREAM_INPUT configures the connection to the Event schema in shard 2 of the db2.example.com database using the xyz username. The variables to use for TLS certificates are identified by using the localdb prefix. Each of the certificate variables follows the proper naming and prefixing conventions. They use LOCALDB to identify the connection context and TLS for grouping.

STREAM_INPUT: "mysql://xyz@db2.example.com/Event/2?tls=localdb"

LOCALDB_TLS_CA: "/path_to_ca"

LOCALDB_TLS_CERT: "/path_to_cert"

LOCALDB_TLS_KEY: "/path_to_key"

Within the application, you can use the confProperties Java function to get specific sets of configuration properties using their prefix.
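The exact signature of confProperties is not shown here. The following stdlib-only sketch illustrates the idea behind a prefix-based lookup; the class and method names are illustrative, not part of the SDK.

```java
import java.util.Map;
import java.util.TreeMap;

public class PrefixLookupExample {
    // Returns the entries of props whose keys start with the given prefix,
    // mirroring the prefix-based selection that confProperties provides.
    static Map<String, String> byPrefix(Map<String, String> props, String prefix) {
        Map<String, String> result = new TreeMap<>();
        for (Map.Entry<String, String> e : props.entrySet()) {
            if (e.getKey().startsWith(prefix)) {
                result.put(e.getKey(), e.getValue());
            }
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, String> props = Map.of(
            "app.localdb.tls.ca", "/path_to_ca",
            "app.localdb.tls.cert", "/path_to_cert",
            "app.stream.input", "mysql://xyz@db2.example.com/Event/2?tls=localdb");
        // Select only the LOCALDB TLS settings.
        System.out.println(byPrefix(props, "app.localdb.tls"));
    }
}
```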

About Pulsar Connectivity

The Pulsar connectivity capability connects the application to the Pulsar microservice running in the Unified Assurance cluster and creates a Pulsar consumer and producer based on the topic provided for the application.

The Java MS Example microservice shows an example of how to stream input data from one topic and produce data to another topic.

About MySQL Connectivity

The MySQL connectivity capability connects the application to the Assure1 and Event MySQL databases for both failover and single server connections. The application reads the connectivity information from the /config/Assure1.yaml file.

To connect to a specific database, pass the configuration properties for that database to the MySQL Connector. The Java MS Example microservice shows an example of the MySQL URI format that you can parse to connect to a specific database.
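The SDK's own parsing helper is not reproduced here, but the URI format shown in the configuration example can be decomposed with the JDK's java.net.URI, as in this sketch (the DbTarget record and parse method are illustrative):

```java
import java.net.URI;

public class MysqlUriExample {
    // Illustrative holder for the connection details encoded in the URI.
    record DbTarget(String user, String host, String schema, String shard, String query) {}

    // Parses a URI such as mysql://xyz@db2.example.com/Event/2?tls=localdb,
    // where the path holds the schema name and shard number.
    static DbTarget parse(String uri) {
        URI u = URI.create(uri);
        String[] path = u.getPath().split("/"); // ["", "Event", "2"]
        return new DbTarget(u.getUserInfo(), u.getHost(), path[1], path[2], u.getQuery());
    }

    public static void main(String[] args) {
        DbTarget t = parse("mysql://xyz@db2.example.com/Event/2?tls=localdb");
        System.out.println(t.schema() + " shard " + t.shard() + " on " + t.host());
    }
}
```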

About Neo4j Connectivity

The Neo4j connectivity capability connects the application to the Neo4j Graph database for both failover and single server connections. The application reads the connectivity information from the /config/Assure1.yaml file.

A Neo4j watchdog service continuously monitors the application in failover mode and switches between primary and secondary database instances.

Neo4j Connectivity Variables

This table shows the variables that you can configure for Neo4j:

Variable | Description | Default
GRAPH_DB_CONNECTION_TIMEOUT | In a single server installation, the number of seconds to wait for a connection before timing out. | 60
GRAPH_DB_FAILOVER_CONNECTION_TIMEOUT | In a failover installation, the number of seconds to wait for a connection before timing out. | 10
GRAPH_DB_HEALTH_CHECK_DELAY | The number of seconds the watchdog service waits before checking the database health. | 30
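The watchdog behavior described above can be sketched with the JDK scheduler. This is an assumption-laden illustration, not the SDK's implementation: the class, the activeInstance decision, and the startWatchdog helper are all hypothetical.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.BooleanSupplier;

public class Neo4jWatchdogExample {
    // Chooses the database instance to use based on the primary's health.
    static String activeInstance(boolean primaryHealthy) {
        return primaryHealthy ? "primary" : "secondary";
    }

    // Schedules the recurring health check, honoring the delay configured by
    // GRAPH_DB_HEALTH_CHECK_DELAY (default 30 seconds).
    static ScheduledExecutorService startWatchdog(BooleanSupplier primaryHealth, long delaySeconds) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleWithFixedDelay(
            () -> System.out.println("Active instance: " + activeInstance(primaryHealth.getAsBoolean())),
            delaySeconds, delaySeconds, TimeUnit.SECONDS);
        return scheduler;
    }

    public static void main(String[] args) {
        ScheduledExecutorService s = startWatchdog(() -> true, 30);
        System.out.println(activeInstance(true));
        s.shutdownNow();
    }
}
```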

About Instrumentation

The instrumentation capability gathers application-internal performance metrics using Prometheus. This is standard for Unified Assurance microservices: each microservice exposes a /metrics endpoint, and Prometheus scrapes the metrics and adds them to the Metric database. See "Prometheus Metrics Processor" in Unified Assurance Implementation Guide for more information about the Prometheus microservice.

!! note Although the default Prometheus port is 9090, Unified Assurance microservices use port 9092 for Prometheus. For consistency, Oracle recommends using port 9092 for Prometheus in your Java microservices.

Instrumentation Variables

This table shows the variables that you can configure for instrumentation:

Variable | Description | Default
PROM_PORT | The Prometheus port where the HTTP server is running the /metrics endpoint. | 9092
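The template presumably uses the Prometheus Java client library; the following stdlib-only sketch just illustrates the convention of serving a /metrics endpoint on port 9092, with a helper that renders one counter in the Prometheus text exposition format. All names here are illustrative.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class MetricsEndpointExample {
    // Renders one counter in the Prometheus text exposition format.
    static String renderCounter(String name, String help, double value) {
        return "# HELP " + name + " " + help + "\n"
             + "# TYPE " + name + " counter\n"
             + name + " " + value + "\n";
    }

    public static void main(String[] args) throws Exception {
        // Serve /metrics on port 9092, matching the Unified Assurance convention.
        HttpServer server = HttpServer.create(new InetSocketAddress(9092), 0);
        server.createContext("/metrics", exchange -> {
            byte[] body = renderCounter("events_processed_total",
                    "Events processed by this microservice.", 42).getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
    }
}
```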

About Redundancy

The redundancy capability provides application failover. A redundancy poller runs continuously in a secondary pod to monitor the Prometheus /metrics endpoint for the status of pods in the primary cluster.

When the poller invokes the Prometheus /metrics endpoint, one of the following happens:

  • The endpoint returns a success response, indicating that the primary pod is active. The secondary pod remains in, or returns to, standby.

  • The endpoint returns an error, which counts toward the failover threshold. When the threshold is reached, the secondary pod takes over processing.

For information about application failover and redundancy in Unified Assurance microservices generally, see Cross-Data Center Redundancy in Unified Assurance Concepts.

Redundancy Variables

This table shows the variables that you can configure for redundancy:

Variable | Description | Default
REDUNDANCY_POLL_PERIOD | The number of seconds the poller waits before invoking the /metrics endpoint. | 5
REDUNDANCY_FAILOVER_THRESHOLD | The number of consecutive failed poller attempts before failing over to the secondary pod. | 4
REDUNDANCY_FALLBACK_THRESHOLD | The number of consecutive successful responses before falling back to the primary pod. | 1
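The threshold logic in the table can be sketched as a small state machine. This is an illustrative sketch of the counting behavior, not the SDK's Redundancy class, and all names are hypothetical.

```java
public class RedundancyPollerExample {
    // Tracks consecutive poll results against the failover and fallback
    // thresholds (defaults 4 and 1, matching the variables above).
    private final int failoverThreshold;
    private final int fallbackThreshold;
    private int consecutiveFailures = 0;
    private int consecutiveSuccesses = 0;
    private boolean secondaryActive = false;

    RedundancyPollerExample(int failoverThreshold, int fallbackThreshold) {
        this.failoverThreshold = failoverThreshold;
        this.fallbackThreshold = fallbackThreshold;
    }

    // Records one poll of the primary's /metrics endpoint and returns
    // whether the secondary pod should currently be processing events.
    boolean recordPoll(boolean success) {
        if (success) {
            consecutiveSuccesses++;
            consecutiveFailures = 0;
            if (secondaryActive && consecutiveSuccesses >= fallbackThreshold) {
                secondaryActive = false; // fall back to the primary pod
            }
        } else {
            consecutiveFailures++;
            consecutiveSuccesses = 0;
            if (!secondaryActive && consecutiveFailures >= failoverThreshold) {
                secondaryActive = true; // fail over to the secondary pod
            }
        }
        return secondaryActive;
    }
}
```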

Redundancy Check Location

Depending on your application, you can implement the redundancy check at different stages:

Redundancy Use Cases

You can use the redundancy capability in the following use cases:

  • Failover to the secondary pod and fallback to the primary pod.

  • Network connectivity issues between the primary and secondary clusters.

Failover to Secondary and Fallback to Primary

When the primary pod is evicted due to resource unavailability:

  1. Failover to secondary: Kubernetes tries to restart the pod within milliseconds.

    • If resources are available, the primary pod stays up and starts receiving the events.

    • If resources are not available, the pod status is set to pending and the secondary pod automatically starts collecting the events.

  2. Fallback to primary:

    1. The primary pod comes back up and, as part of initialization, the /metrics endpoint becomes active, so the redundancy poller gets a success response.

    2. The poller notifies the secondary pod to stop processing the events.

    3. Event collection falls back to the primary pod and the secondary pod goes back to standby position.

Network Connectivity Issues Between Primary and Secondary

When the primary pod is up and running but there is a connectivity issue between clusters, the /metrics endpoint returns an error. The redundancy capability assumes that the primary pod is unavailable, so it fails over to the secondary pod, which starts processing the events. Both pods then process the events simultaneously, resulting in duplicate events.

In this scenario, an administrator must intervene manually to resolve the connectivity issue and clear the duplicate events.

Building a Java Microservice Using the SDK

When building a Java microservice using the SDK, keep the following points in mind:

The basic steps for building a Java microservice using the Java SDK are:

  1. Build the code base:

    1. Install the sdk-lib and sdk-img packages as described in "Installing the SDK."

    2. Set the following environment variables, using appropriate values for your application:

      export APPNAME=<application name>
      export PACKAGENAME=<application's main package name>
      export MAINCLASSNAME=<application's main class name>
      export APPVERSION=v6.0.4-1
      export CHARTVERSION=1.0.0
      export DEVDIR=/tmp/$APPNAME
      export A1BASEDIR=/opt/assure1
      export PRESWEBFQDN=<primary presentation server name>
      
    3. As the assure1 user, run the following commands:

      cp -a $A1BASEDIR/distrib/sdk/microservice/java-ms-template $DEVDIR
      cd $DEVDIR
      
      find . -type f -exec sed -i -e "s/javatemplate/$PACKAGENAME/g" {} \;
      find . -type f -exec sed -i -e "s/java-ms-template/$APPNAME/g" {} \;
      find . -type f -exec sed -i -e "s/JavaTemplate/$MAINCLASSNAME/g" {} \;
      
      cd src/main/java/oracle/communications/unifiedassurance
      mv javatemplate $PACKAGENAME
      cd $PACKAGENAME
      mv JavaTemplate.java $MAINCLASSNAME.java
      cd $DEVDIR
      
    4. Make sure the application-dependent initializations are handled in the appropriate locations.

    5. Implement the application functionality and corresponding configurations.

    6. Configure the graceful shutdown hook to release or close all resources that your application uses. For example, the following excerpt from the GracefulShutdownService.java file in both the template and the example microservice closes Prometheus resources:

      ...
      
      public class GracefulShutdownService {
          private static final Logger LOGGER = LoggerFactory.getLogger(GracefulShutdownService.class);
      
          public static void closeApplicationResources() {
              LOGGER.info("Releasing application resources");
              // Application resources to be released
              PrometheusService.getInstance().closePrometheusMetricsEndpoint();
              Pulsar.getInstance().closePulsar();
          }
      }
      
      You can add the resources that your microservice uses to the list.

    7. Update Docker and Helm charts as needed for your implemented functionality.

  2. Build the Docker image by running the following command:

    a1docker build -t assure1/$APPNAME:$APPVERSION --build-arg WEBFQDN=$PRESWEBFQDN .    
    
  3. Package the Helm chart by running the following command:

    a1helm package helm --app-version $APPVERSION --version $CHARTVERSION
    
  4. Install the Docker image and test it on your local machine as follows:

    1. Tag the local Docker image with the PRESWEBFQDN repository:

      a1docker tag assure1/$APPNAME:$APPVERSION $PRESWEBFQDN/assure1/$APPNAME:$APPVERSION
      
    2. Push the image to the PRESWEBFQDN repository:

      a1docker push $PRESWEBFQDN/assure1/$APPNAME:$APPVERSION
      
    3. Remove the local copy:

      a1docker rmi assure1/$APPNAME:$APPVERSION
      
    4. Copy the microservice Helm chart to the Unified Assurance chart museum:

      cp $APPNAME-$CHARTVERSION.tgz $A1BASEDIR/var/chartmuseum
      
    5. Set the file permissions:

      chmod 644 $A1BASEDIR/var/chartmuseum/$APPNAME-$CHARTVERSION.tgz
      
    6. Update the Helm repository:

      a1helm repo update
      
    7. Deploy the Helm chart to your preferred Unified Assurance namespace, for example, a1-zone1-pri:

      a1helm install $APPNAME assure1/$APPNAME -n <target namespace> --set global.imageRegistry=$PRESWEBFQDN
      

Java SDK Reference Information

This section contains reference information about the library dependencies, SDK classes, and Helm charts used by the Java SDK. When developing your application, it is a best practice to remove any of these that you don't need.

Java Microservice Template Dependency Libraries

The Java Microservice Template includes the following dependency libraries, set in the build.gradle file for the application:

Java Microservice Template SDK Classes

This table shows the SDK classes of the Java microservice template capabilities:

Capability | SDK Class
Configurations | configuration - ConfigHandler
Pulsar connectivity | streams - Pulsar
MySQL connectivity | persistence - MySqlConnector.java
Neo4j connectivity | persistence - Neo4jConnector.java
Instrumentation | instrumentation - PrometheusService.java
Redundancy | redundancy - Redundancy.java

Java MS Example Helm Charts

You use Helm charts to manage your Java microservices. The sample microservice contains reference Helm charts that you can modify for your needs to deploy your application and handle application variables, redundancy, and KEDA auto-scaling.

The following sample charts are available in the helm directory: