The architecture diagram shows the deployment of the OCI services, including the Kubernetes worker nodes, and the data flows that implement this playbook. It contains an OCI region and external internet users.

The OCI region comprises a virtual cloud network (VCN) that spans three fault domains. The region also contains an API gateway and instances of OKE, OCI Functions, and OCI Queue. Access to the VCN is controlled by an internet gateway and a service gateway.

The VCN within the region contains a private subnet, which itself contains an OKE cluster. This subnet also spans the three fault domains. The OKE cluster within the subnet likewise spans the fault domains and extends beyond the subnet to contain the aforementioned OKE instance. Within the OKE cluster, two of the fault domains contain worker nodes.

Outside the region, the internet component contains a message listener and internet user groups.

Data flow is indicated by the numbers in the preceding diagram, which represent this event sequence:
  1. The locally hosted producer puts messages into the OCI Queue.
  2. The OCI consumer instance(s) retrieve messages from the queue. Within the code, the consumption rate is deliberately constrained by a delay. This ensures that the producer generates messages faster than a single consumer can remove them from the queue, so the scaling mechanisms are exercised.
  3. Periodically, a Kubernetes scheduled job enables KEDA to invoke the published API to get the number of messages on the queue.
  4. The API Gateway routes the request to an instance of the OCI Function.
  5. The OCI Function interrogates the OCI Queue.
  6. The response is returned, which results in KEDA triggering an increase or decrease in the number of microservice instances.
Additionally, this implementation allows the user to interrogate the queue depth at any time; this flow is labeled a in the diagram.
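The interplay between the throttled consumer (step 2) and the depth-driven scaling decision (steps 3 through 6) can be sketched in miniature. This is an illustrative simulation only: the tick rates, threshold, and `simulate` helper are assumptions for demonstration, standing in for the real OCI Queue, the OCI Function that reports depth, and KEDA's scaler.

```python
from collections import deque

# Illustrative rates (assumptions, not from the playbook): the producer
# enqueues faster than a single delay-throttled consumer can dequeue, so
# the backlog grows and a KEDA-style scaler reacts to the reported depth.
PRODUCE_PER_TICK = 5   # messages the producer adds per tick
CONSUME_PER_TICK = 1   # messages one throttled consumer removes per tick
SCALE_THRESHOLD = 10   # depth above which the scaler would add replicas

def simulate(ticks: int, consumers: int = 1) -> tuple[int, str]:
    """Run the producer/consumer loop; return (queue depth, scaler verdict)."""
    q: deque = deque()
    for t in range(ticks):
        for i in range(PRODUCE_PER_TICK):              # step 1: producer enqueues
            q.append(f"msg-{t}-{i}")
        for _ in range(consumers * CONSUME_PER_TICK):  # step 2: throttled consume
            if q:
                q.popleft()
    depth = len(q)                                     # steps 3-5: depth query
    verdict = "scale out" if depth > SCALE_THRESHOLD else "steady"  # step 6
    return depth, verdict

print(simulate(5))               # → (20, 'scale out'): one consumer falls behind
print(simulate(5, consumers=5))  # → (0, 'steady'): enough replicas keep pace
```

In the real deployment, the depth query is served by the OCI Function behind the API Gateway rather than a local `len()` call, and KEDA adjusts the consumer replica count instead of the `consumers` parameter.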