Platform Layer Overview

In the Oracle Private Cloud Appliance architecture, the platform layer provides the standardized infrastructure for on-premises cloud services. It controls the hardware layer, enables the services layer, and establishes a secure interface that allows those layers to interact in a uniform and centralized manner. Common components and features of the infrastructure services layer are built into the platform, which simplifies and accelerates the deployment of those microservices.

Fundamental Services

To fulfill its core role of providing the base infrastructure to deploy on-premises cloud services, the platform relies on a set of fundamental internal services of its own. This section describes their function.

Hardware Management

When a system is initialized, low-level platform components orchestrate the provisioning of the management node cluster and the compute nodes. During this process, all nodes, including the controllers of the ZFS Storage Appliance, are connected to the required administration and data networks. When additional compute nodes are installed at a later stage, the same provisioning mechanism integrates the new node into the global system configuration. Additional disk trays are also automatically integrated by the storage controllers.

The first step in managing the hardware is to create an inventory of the rack's components. The inventory is a dedicated database that contains specifications and configuration details for the components installed in the rack. It maintains a history of all components that were ever presented to the system, and is updated continuously with the latest information captured from the active system components.
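
The inventory schema is internal and not published here; purely to make the idea concrete, a component record might look like the hypothetical sketch below. All field names are assumptions, not the appliance's actual schema.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    # Hypothetical sketch of an inventory record; the appliance's actual
    # schema is internal and may differ.
    @dataclass
    class ComponentRecord:
        component_id: str       # e.g. the serial number of a compute node
        component_type: str     # "compute_node", "zfs_controller", "switch", ...
        rack_slot: int          # physical position in the rack
        firmware_version: str
        active: bool = True     # False once the component is removed
        last_seen: datetime = field(
            default_factory=lambda: datetime.now(timezone.utc))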

The services layer and several system components need the hardware inventory details so they can interact with the hardware. For example, a component upgrade or service deployment process needs to send instructions to the hardware layer and receive responses. Similarly, when you create a compute instance, a series of operations needs to be performed at the level of the compute nodes, network components and storage appliance to bring up the instance and its associated network and storage resources.

All the instructions intended for the hardware layer are centralized in a hardware management service, which acts as a gateway to the hardware layer. The hardware management service uses the dedicated and highly secured platform API to execute the required commands on the hardware components: server ILOMs, ZFS storage controllers, and so on. This API runs directly on the management node operating system. It is separated from the container orchestration environment where microservices are deployed.
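
The platform API itself is internal, so its paths and payloads are not shown here. Purely as a sketch of the gateway pattern, a client would send hardware instructions to the hardware management service rather than to the ILOMs directly; the endpoint, path, payload, and certificate location below are all assumptions.

    import requests

    # Hypothetical gateway endpoint on the management node cluster.
    GATEWAY = "https://mgmt-vip.example.internal/hwmgmt/v1"

    def power_cycle_node(node_id: str, token: str) -> dict:
        """Ask the hardware management service to power-cycle a compute
        node; the service, not the caller, talks to the node's ILOM."""
        resp = requests.post(
            f"{GATEWAY}/nodes/{node_id}/actions",
            json={"action": "power_cycle"},
            headers={"Authorization": f"Bearer {token}"},
            timeout=30,
            verify="/etc/pki/internal-ca.pem",  # assumed internal CA bundle
        )
        resp.raise_for_status()
        return resp.json()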

Service Deployment

Oracle Private Cloud Appliance follows a granular, service-based development model. Functionality is logically divided into separate microservices, which exist across the architectural layers and represent a vertical view of the system. Services have internal as well as externalized functions, and they interact with each other in different layers.

These microservices are deployed in Kubernetes containers. The container runtime environment and the container registry are hosted on the three-node management cluster. Oracle Cloud Native Environment provides the basis for container orchestration, which includes the automated deployment, configuration, and startup of the microservice containers. By design, every microservice runs as multiple instances spread across different Kubernetes nodes and pods. Besides high availability, the Kubernetes design also offers load balancing between the instances of each microservice.
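
As a sketch of how that replica spread can be observed, the standard Kubernetes Python client can list the pods of one microservice together with the nodes they landed on. The namespace and label selector below are placeholders, not the appliance's real names.

    from kubernetes import client, config

    # Load credentials from the local kubeconfig (requires cluster access).
    config.load_kube_config()
    v1 = client.CoreV1Api()

    pods = v1.list_namespaced_pod(
        namespace="platform-services",              # hypothetical namespace
        label_selector="app=example-microservice",  # hypothetical label
    )
    for pod in pods.items:
        # For high availability, replicas should land on different nodes.
        print(pod.metadata.name, "->", pod.spec.node_name, pod.status.phase)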

Containerization simplifies service upgrades and functional enhancements. The services are tightly integrated but not monolithic, so they can be upgraded individually, provided that compatibility requirements are met. A new version of a microservice is published to the container registry and automatically propagated to the Kubernetes nodes and pods.
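
In Kubernetes terms, such an individual upgrade amounts to pointing the microservice's Deployment at the new image in the registry; the rolling-update strategy then replaces pods gradually while the remaining replicas keep serving. The appliance automates this step, but the underlying mechanism can be sketched as below; the names, namespace, and registry path are hypothetical.

    from kubernetes import client, config

    config.load_kube_config()
    apps = client.AppsV1Api()

    # Patch the Deployment to the new image version in the registry.
    apps.patch_namespaced_deployment(
        name="example-microservice",
        namespace="platform-services",
        body={"spec": {"template": {"spec": {"containers": [
            {"name": "example-microservice",
             "image": "registry.internal:5000/example-microservice:2.1.0"}
        ]}}}},
    )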

Common Service Components

Some components and operational mechanisms are required by many or all services, so it is more efficient to build them into the platform and let services consume them when they are deployed on top of it. These common components add a set of essential features to each service built on top of the platform, thus simplifying service development and deployment.

  • Message Transport

    All components and services are connected to a common transport layer. It is a message broker that allows components to send and receive messages written in a standardized format. This message transport service is deployed as a cluster of three instances for high availability and throughput, and uses TLS for authentication and traffic encryption.
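
    The broker implementation is not named here. Purely as an illustration of the pattern (a clustered broker, TLS for both authentication and encryption), the sketch below uses a Kafka-style producer with mutual TLS; the broker addresses, topic, and certificate paths are assumptions.

        import json
        from kafka import KafkaProducer

        # Illustrative only: hosts, topic, and certificate paths are
        # hypothetical, and the appliance's actual broker is not specified.
        producer = KafkaProducer(
            bootstrap_servers=["broker1:9093", "broker2:9093", "broker3:9093"],
            security_protocol="SSL",             # TLS encrypts the traffic...
            ssl_cafile="/etc/pki/ca.pem",
            ssl_certfile="/etc/pki/client.pem",  # ...and the client certificate
            ssl_keyfile="/etc/pki/client.key",   # authenticates the sender
            value_serializer=lambda m: json.dumps(m).encode("utf-8"),
        )
        producer.send("platform.events",
                      {"component": "hwmgmt", "event": "node_online"})
        producer.flush()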

  • Secret Service

Secrets used programmatically throughout the system, such as login credentials and certificates, are centrally managed by the secret service. All components and services are clients of the secret service: after successful authentication, a client receives a token that it presents with every operation it attempts to execute. Policies defined within the secret service determine which operations a client is authorized to perform. Secrets are not stored statically; they have a limited lifespan and are dynamically created and managed.

    During system initialization, the secret service is unsealed and prepared for use. It is deployed as an active/standby cluster on the management nodes, within a container at the platform layer, but outside of the Kubernetes microservices environment. This allows the secret service to offer its functionality to the platform layer at startup, before the microservices are available. All platform components and microservices must establish their trust relationship with the secret service before they are authorized to execute any operations.
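
    The workflow described above (unsealing, token-based access, policy checks, short-lived secrets) matches the HashiCorp Vault model, so a Vault-style client is a reasonable way to picture it; whether the appliance actually uses Vault is not stated here. A minimal sketch with the hvac library, with the address, role credentials, and secret path as placeholders:

        import hvac

        # Hypothetical secret service address and CA bundle.
        client = hvac.Client(url="https://secret-service.internal:8200",
                             verify="/etc/pki/internal-ca.pem")

        # Authenticate first; hvac stores the returned token on the client,
        # and server-side policies decide which operations it may perform.
        client.auth.approle.login(role_id="example-role-id",
                                  secret_id="example-secret-id")

        # Read a short-lived secret that the service creates and rotates.
        secret = client.secrets.kv.v2.read_secret_version(
            path="db/app-credentials")
        print(secret["data"]["data"])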

  • Logging

    The platform provides unified logging across the entire system. For this purpose, all services and components integrate with the Fluentd data collector. Fluentd collects data from a pre-configured set of log files and stores it in a central location. Logs are captured from system components, the platform layer and the microservices environment, and made available through the Loki log aggregation system for traceability and analysis.
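
    Because Loki exposes a standard HTTP query API, the aggregated logs can also be pulled programmatically. A minimal sketch, with the host and LogQL selector as assumptions:

        import time
        import requests

        LOKI = "https://loki.internal:3100"  # hypothetical host

        resp = requests.get(
            f"{LOKI}/loki/api/v1/query_range",
            params={
                # LogQL: platform logs whose lines contain "error".
                "query": '{job="platform"} |= "error"',
                "start": int((time.time() - 3600) * 1e9),  # last hour (ns)
                "end": int(time.time() * 1e9),
                "limit": 100,
            },
            timeout=30,
        )
        resp.raise_for_status()
        for stream in resp.json()["data"]["result"]:
            for ts, line in stream["values"]:
                print(ts, line)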

  • Monitoring

For monitoring purposes, the platform relies on Prometheus to collect metric data. Because Prometheus is deployed inside the Kubernetes environment, it has direct access to the metrics of the microservices. Components outside Kubernetes, such as hardware components and compute instances, provide their metric data to Prometheus through the internal network and the load balancer. The management nodes and Kubernetes itself can communicate directly with Prometheus.
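
    Prometheus likewise exposes a standard HTTP query API, so metric data can be retrieved with a simple instant query; the host below is a placeholder.

        import requests

        PROM = "https://prometheus.internal:9090"  # hypothetical host

        resp = requests.get(
            f"{PROM}/api/v1/query",
            params={"query": "up"},  # PromQL: 1 per healthy scrape target
            timeout=30,
        )
        resp.raise_for_status()
        for sample in resp.json()["data"]["result"]:
            print(sample["metric"].get("instance"), "=", sample["value"][1])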

  • Analytics

Logging and monitoring data are intended for infrastructure administrators, who can consult the data through the Service Web UI, where a number of built-in queries for health and performance parameters are visualized on a dashboard. Alerts are sent when a key threshold is exceeded, so that appropriate countermeasures can be taken.

  • Database

    All services and components store data in a common, central database. It is a MySQL cluster database with instances deployed across the three management nodes and running on bare metal. Availability, load balancing, data synchronization and clustering are all controlled by internal components of the MySQL cluster. For optimum performance, data storage is provided by LUNs on the ZFS Storage Appliance, directly attached to each of the management nodes. Access to the database is strictly controlled by the secret service.
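
    From a service's point of view, using the cluster looks like any MySQL connection, with credentials obtained from the secret service at runtime rather than stored statically. A minimal sketch with mysql-connector-python; the host, schema, and stand-in credentials are assumptions:

        import mysql.connector

        # In practice these values would be fetched from the secret service;
        # the literals here only stand in for that lookup.
        creds = {"user": "example_svc", "password": "dynamically-issued"}

        cnx = mysql.connector.connect(
            host="db-vip.internal",      # hypothetical cluster address
            port=3306,
            database="example_service",  # hypothetical schema
            **creds,
        )
        cur = cnx.cursor()
        cur.execute("SELECT VERSION()")
        print(cur.fetchone())
        cnx.close()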

  • Load Balancing

    The management nodes form a cluster of three active nodes, meaning they are all capable of simultaneously receiving inbound connections. The ingress traffic is controlled by a statically configured load balancer that listens on a floating IP address and distributes traffic across the three management nodes. An instance of the load balancer runs on each of the management nodes.

    In a similar way, all containerized microservices run as multiple pods within the container orchestration environment on the management node cluster. Kubernetes provides the load balancing for the ingress traffic to the microservices.
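
    One way to picture the ingress path: a single floating IP fronts all three management nodes, so a basic reachability probe against that address exercises whichever load balancer instance currently answers. The address and port below are placeholders.

        import socket

        # Hypothetical floating IP and service port of the load balancer.
        VIP, PORT = "203.0.113.10", 443

        def vip_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
            """Return True if a TCP connection to the load balancer succeeds."""
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    return True
            except OSError:
                return False

        print("load balancer reachable:", vip_reachable(VIP, PORT))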