Develop the Solution

Each part of this solution is implemented in Java and uses Maven to retrieve the dependencies it needs, as defined by its POM file. Each part also includes a simple shell script that runs the application by invoking Maven to compile and execute the code.

Before you can run the script, you need to modify the Environment class in each case to define the appropriate connection details; for example, the Queue OCID, deployment region, and so on. For the function and the microservice, because the code runs within a container, some additional steps are required. The readme in the repository describes the steps needed to wrap the JAR within a container. Before deployment, the executable artefacts for both the function and the OKE-hosted microservice are stored in a Container Registry (OCIR).

Tailor the Producer Code

The producer implementation (com.demo.samples.basic.QueueProducer) is very simple, consisting of a main method and two additional methods that help produce the message content. The main method builds the connection and transmission objects and then enters an infinite loop of creating new messages and sending them. To tailor the payload, you only need to modify the prepareMessage method. Currently, this method creates a simple message containing a GUID and takes advantage of the fact that the OCI Queue API accepts up to twenty messages in a single call.
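The batching behavior described above can be sketched in plain Java. This is not the sample's actual code: the real QueueProducer uses the OCI SDK's PutMessages types, whereas here the payloads are plain strings and the class and method names are illustrative, so only the batching logic is shown.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.UUID;

// Sketch of a prepareMessage-style helper: builds a batch of up to 20
// payloads, matching the Queue API's per-call limit. Each payload carries
// a GUID, as in the sample producer.
public class BatchSketch {
    static final int MAX_BATCH = 20; // OCI Queue accepts up to 20 messages per call

    static List<String> prepareMessages(int count) {
        List<String> batch = new ArrayList<>();
        for (int i = 0; i < Math.min(count, MAX_BATCH); i++) {
            batch.add("{\"id\":\"" + UUID.randomUUID() + "\"}");
        }
        return batch;
    }

    public static void main(String[] args) {
        // Asking for more than 20 still yields a single full batch of 20
        System.out.println(prepareMessages(25).size());
    }
}
```

Capping the batch in the helper keeps the main loop free of any knowledge of the service limit.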

Tailor the Consumer Code

The consumer (com.demo.consumer.QueueConsumer) takes its configuration from environment variables passed through from the OKE configuration, which makes it very easy to point the consumer at a different Queue. The bulk of the work is done in the main method which, once it has a queue connection, executes the request for messages. A helper method called prepareGetMessageRequest creates the message request itself. This method identifies the specific queue and sets both the duration the request will wait for a response (allowing you to configure long polling) and the maximum number of messages (up to 20) that can be returned.
Once the messages are retrieved, the processMessage method then processes them.
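The values prepareGetMessageRequest assembles can be sketched as a small value class. The field names here are illustrative; the actual code builds an OCI SDK request object rather than this hypothetical type.

```java
// Sketch of the three values a get-messages request carries: the queue
// OCID, a wait time that enables long polling, and the per-call limit.
public class GetRequestSketch {
    final String queueId;
    final int timeoutInSeconds; // a positive value makes the call long-poll
    final int limit;            // at most 20 messages per call

    GetRequestSketch(String queueId, int timeoutInSeconds, int limit) {
        this.queueId = queueId;
        this.timeoutInSeconds = timeoutInSeconds;
        this.limit = Math.min(limit, 20); // the service caps a batch at 20
    }

    public static void main(String[] args) {
        // Placeholder OCID; a real value comes from the environment variables
        GetRequestSketch r =
            new GetRequestSketch("ocid1.queue.oc1..example", 30, 50);
        System.out.println(r.limit);
    }
}
```

Long polling (a non-zero wait) reduces empty responses and API traffic when the queue is quiet, which matters here because the consumer polls in a loop.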

Note:

In this playbook, processing is simulated by putting the thread to sleep; a real application would spend that time doing actual work on the message.
Because the processMessage method applies a thread sleep to simulate a backend processing workload, you will be able to see the scaling mechanism work. Once all received messages are processed, the Queue is told to delete them.
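The consume-then-delete flow in the note can be sketched as follows. Method and type names are illustrative, not the sample's actual signatures: messages are "processed" with a Thread.sleep, and only afterwards are their receipts handed back for deletion.

```java
import java.util.List;

// Sketch of the pattern: process every received message (simulated work),
// then return the receipts so the caller can issue the Queue delete call.
public class ProcessSketch {
    static List<String> processMessages(List<String> receipts, long workMillis)
            throws InterruptedException {
        for (String receipt : receipts) {
            Thread.sleep(workMillis); // simulated backend load drives scaling
        }
        return receipts; // receipts to pass to the Queue's delete API
    }

    public static void main(String[] args) throws InterruptedException {
        List<String> done = processMessages(List.of("r1", "r2"), 10);
        System.out.println(done.size());
    }
}
```

Deleting only after successful processing means a crashed consumer simply lets the messages become visible again, rather than losing them.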

Tailor the Queue Length Function Code

The Queue length function contains a class called QueueLength (in package com.example.fn) which is implemented in a manner compliant to how an OCI Function needs to work. This in turn uses a separate class, GetStats, which uses environment variables injected by the OCI Function’s configuration to connect to the Queue and requests the statistics. The results are taken from the REST response and returned in a JSON structure.
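The JSON structure the function returns might look like the sketch below. The field name "queueLength" is an assumption for illustration; whatever name the function actually emits must match the JSON path configured in the KEDA trigger.

```java
// Sketch of the JSON the queue-length function could return to KEDA.
// A real implementation would populate the value from the Queue stats
// REST response rather than a parameter.
public class StatsSketch {
    static String toJson(long visibleMessages) {
        return "{\"queueLength\":" + visibleMessages + "}";
    }

    public static void main(String[] args) {
        System.out.println(toJson(42));
    }
}
```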

Given the simplicity of the function, and the fact that the scaling decision is made outside it, you have little need to modify this code.

Configuring Settings to Control Scaling

Beyond configuring the Producer, Consumer, and Function parameters so the solution can connect to a Queue, you need to configure the settings that control the scaling implemented using KEDA. You can see these in so-object.yaml, which needs to be applied to OKE using kubectl (all commands are provided in the relevant readme file in the repository).

The configuration describes how frequently KEDA needs to trigger the call to the API Gateway, the boundaries on how many target service instances are allowed, and the name of the target. It also includes the trigger definition, which specifies the URL to invoke to get the current demand, the threshold at which instances scale up or down, and the path into the returned JSON object that KEDA uses to make the decision.
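A sketch of the kind of ScaledObject so-object.yaml defines is shown below. The names, URL, and valueLocation path are placeholders; the real values come from the repository's file and your own API Gateway deployment.

```yaml
# Illustrative KEDA ScaledObject; all values are placeholders.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: queue-consumer-scaler
spec:
  scaleTargetRef:
    name: queue-consumer          # name of the target deployment to scale
  pollingInterval: 30             # how often KEDA calls the API Gateway (seconds)
  minReplicaCount: 1              # boundaries on instance count
  maxReplicaCount: 10
  triggers:
    - type: metrics-api
      metadata:
        url: "https://<gateway-host>/queue/stats"  # endpoint returning current demand
        valueLocation: "queueLength"               # path into the returned JSON
        targetValue: "5"                           # threshold per replica
```

KEDA's metrics-api scaler polls the URL, extracts the value at valueLocation, and scales the target so that each replica handles roughly targetValue units of demand.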