Understanding Microservices

Overview

Microservices are the latest addition to the Unified Assurance Hyperscale Architecture. The central idea behind microservices is that some types of applications become easier to build and maintain when they are broken down into smaller, composable pieces that work together. Each component is continuously developed and separately maintained, and the application is the sum of its constituent components. This is in contrast to traditional monolithic or service-oriented applications, which are developed as one piece.

Applications built as a set of modular components are easier to understand, easier to test, and, most importantly, easier to maintain over the life of the application. This approach enables organizations to achieve much higher agility and vastly reduce the time it takes to get working improvements into production.

Characteristics of Microservices

Description of illustration characteristics-of-microservices.png

Technology Components

Unified Assurance uses best-of-breed components to implement microservices in its solution. Each of these components was carefully selected based on its industry adoption, flexibility of integration, and ease of use.

Docker

Docker provides an improved user experience for creating and sharing container images and, as a result, saw greater adoption than other container implementations. Containers are a natural fit for microservices, matching the desire for lightweight and nimble components that can be easily managed and dynamically replaced. Unlike virtual machines, containers are designed to be pared down to the minimal pieces needed to run the one thing the container is designed to do, rather than packing multiple functions into the same virtual or physical machine. The ease of development that Docker and similar tools provide helps make rapid development and testing of services possible. The Docker daemon runs on all internal presentation servers and all servers installed with Cluster roles.
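As an illustration of how pared down such an image can be, the following Dockerfile sketch builds a hypothetical single-purpose service image (the base image choice and binary name are illustrative assumptions, not part of Unified Assurance):

```dockerfile
# Minimal single-purpose image: one small base layer, one binary, one command.
FROM alpine:3.19

# Copy only the one artifact this container exists to run (hypothetical name).
COPY event-collector /usr/local/bin/event-collector

# Run as an unprivileged user rather than root.
RUN adduser -D -H svc
USER svc

ENTRYPOINT ["/usr/local/bin/event-collector"]
```

Because the image contains nothing but the runtime and the one binary, it starts quickly and can be replaced dynamically without affecting anything else on the host.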

Docker Registry

The Docker Registry is a stateless, highly scalable web server that stores and distributes Docker images within each Unified Assurance instance. Docker Registry runs as a standalone Docker container on each Unified Assurance internal presentation server. The Registry runs behind the Unified Assurance web server, which acts as a reverse proxy and secures the Registry with TLS client certificate authentication. Docker images are pushed into the Registry during the installation and update of specific Unified Assurance image packages. Images are pulled down by each Docker daemon on behalf of a Kubernetes cluster.
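The push and pull flow can be sketched with standard Docker commands. The hostname and image name below are hypothetical; the real endpoint is the reverse proxy on your internal presentation server:

```shell
# Tag a locally built image for the internal Registry, then push it.
docker tag event-collector:1.0 presentation.example.com/event-collector:1.0
docker push presentation.example.com/event-collector:1.0

# A cluster node later pulls the same image on behalf of Kubernetes:
docker pull presentation.example.com/event-collector:1.0
```

Because the Registry sits behind a reverse proxy that requires TLS client certificate authentication, each Docker daemon must trust the proxy's CA and present a client certificate, typically placed under /etc/docker/certs.d/&lt;registry-host&gt;/.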

Kubernetes

Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available. Kubernetes provides a framework to run distributed systems resiliently. It takes care of scaling and failover for your application, provides deployment patterns, and more.

Rancher Kubernetes Engine (RKE) is a CNCF-certified Kubernetes distribution that runs entirely within Docker containers. It works on bare-metal and virtualized servers. RKE solves the problem of installation complexity, a common issue in the Kubernetes community. With RKE, the installation and operation of Kubernetes is both simplified and easily automated, and it’s entirely independent of the operating system and platform you’re running.
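RKE is driven by a single declarative file, cluster.yml, which lists the servers and the roles each one runs. A minimal evaluation-only sketch, with a hypothetical address and SSH user, looks like this:

```yaml
# Minimal RKE cluster.yml sketch (address and user are hypothetical).
# A single node carrying all three roles is suitable only for evaluation.
nodes:
  - address: 10.0.0.10
    user: docker-user
    role: [controlplane, etcd, worker]
```

Running `rke up` against this file installs the Kubernetes components as Docker containers on the listed node and writes out a kubeconfig file for cluster access.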

Helm

Helm helps you manage Kubernetes applications. Helm is a package manager for Kubernetes that allows developers and operators to more easily package, configure, and deploy applications and services onto Kubernetes clusters. Helm Charts help you define, install, and upgrade even the most complex Kubernetes applications.
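The package-manager workflow centers on a few commands. The chart, release, and namespace names below are hypothetical examples:

```shell
# Install a chart as a named release into a namespace.
helm install event-pipeline ./event-pipeline-chart --namespace a1-zone1-pri

# Upgrade the release in place, overriding a chart value.
helm upgrade event-pipeline ./event-pipeline-chart --set replicaCount=3

# Roll back to a previous revision if the upgrade misbehaves.
helm rollback event-pipeline 1
```

Each release is versioned by Helm, which is what makes upgrades and rollbacks of even complex applications a single command.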

ChartMuseum

ChartMuseum is a stateless, highly scalable web server that stores and distributes Helm charts within each Unified Assurance instance. ChartMuseum runs as a standalone Docker container on each Unified Assurance internal presentation server. This chart repository runs behind the Unified Assurance web server, which acts as a reverse proxy and secures the repository with TLS client certificate authentication. Helm charts are pushed into the repository during the installation and update of specific Unified Assurance image packages. Charts are pulled down by each Helm client for deployment into a Kubernetes cluster.
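ChartMuseum exposes a simple HTTP API for publishing charts, and Helm clients consume it like any chart repository. The URL, certificate file names, and chart name below are hypothetical; the real endpoint is the TLS-client-certificate-protected reverse proxy on the internal presentation server:

```shell
# Publish a packaged chart to ChartMuseum's upload API.
curl --cert client.crt --key client.key \
     --data-binary "@event-pipeline-0.1.0.tgz" \
     https://presentation.example.com/api/charts

# Register the repository with a Helm client and refresh its index.
helm repo add ua https://presentation.example.com \
     --cert-file client.crt --key-file client.key
helm repo update
```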

Hyperscale Clusters

Unified Assurance runs stateless microservices for the new collection and processing application tiers in Kubernetes clusters. The existing Service Oriented Architecture (SOA) collection and processing application tiers will exist outside of these clusters. One or more clusters can exist per Unified Assurance instance, but all servers of each cluster must reside in the same data center or availability zone.

Cluster and Namespace Examples


Description of illustration cluster-and-namespace-examples.png

Cluster Roles

Each cluster requires at least one primary server running the Kubernetes control plane and etcd stateful configuration stores. Production clusters should be set up with at least three primaries. The role known as Cluster.Master provides the definition to deploy the Kubernetes primary applications on the desired servers.

The role known as Cluster.Worker provides the definition to run any additional Kubernetes workloads. The Cluster.Worker role can be defined on the same server as Cluster.Master if the server has sufficient resources.

Namespaces

Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual clusters are called namespaces. Unified Assurance has an opinionated view of namespaces. Each cluster has a monitoring namespace to check the health of the cluster and containers, a messaging namespace to run the message bus for microservice pipelines, and one or more namespaces per device zone. The zoned namespaces isolate application discovery and polling to specific Unified Assurance device zones. Additionally, each zone is separated into a primary and secondary namespace to provide cross-cluster failover.
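The namespace layout described above can be sketched with standard kubectl commands. The a1-monitoring and a1-messaging names come from this document; the zone namespace names are hypothetical examples:

```shell
# Per-cluster namespaces for cluster services:
kubectl create namespace a1-monitoring   # health of the cluster and containers
kubectl create namespace a1-messaging    # Pulsar message bus

# Per-device-zone namespaces, split into primary and secondary
# to support cross-cluster failover (names hypothetical):
kubectl create namespace a1-zone1-pri
kubectl create namespace a1-zone1-sec
```

Workloads deployed into a zoned namespace only discover and poll devices in that Unified Assurance device zone, which is what provides the isolation between zones.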

Monitoring

The health of the Kubernetes cluster components and the containers in each cluster is monitored by applications in the a1-monitoring namespace. Performance metrics are pulled and stored locally in each cluster, then moved to long-term storage in the Unified Assurance Metric database. The collected metric KPIs not only provide analytics coupled with alerting thresholds, they can also be used by the Kubernetes Horizontal Pod Autoscaler to dynamically increase the number of replicas in Pod deployments as needed.
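As a sketch of how the Horizontal Pod Autoscaler consumes such metrics, the manifest below scales a hypothetical deployment on CPU utilization (the deployment and namespace names are illustrative assumptions):

```yaml
# HPA sketch: keep between 1 and 5 replicas of a hypothetical deployment,
# targeting 75% average CPU utilization across its Pods.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: event-collector
  namespace: a1-zone1-pri
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: event-collector
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 75
```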

Pulsar Message Bus

Apache Pulsar is a multi-tenant, high-performance solution for server-to-server messaging. It provides very low publish and end-to-end latency and guarantees message delivery with persistent message storage. Pulsar provides the backbone for Unified Assurance microservice pipelines and runs in the a1-messaging namespace.
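Pulsar's decoupled publish/subscribe model can be exercised with its bundled command-line client. The broker service URL and topic name below are hypothetical:

```shell
# Subscribe to a topic; -n 0 means consume messages indefinitely.
pulsar-client --url pulsar://pulsar-broker.a1-messaging:6650 \
    consume persistent://public/default/events -s my-subscription -n 0 &

# Publish a message to the same topic from a separate process.
pulsar-client --url pulsar://pulsar-broker.a1-messaging:6650 \
    produce persistent://public/default/events -m "raw event payload"
```

Because the topic is persistent, the message is written to durable storage before delivery is acknowledged, which is what guarantees delivery to subscribers even across restarts.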

Microservice Pipelines

Microservices are simple, single-purpose applications or system components that work in unison via a lightweight communication mechanism. That communication mechanism is very frequently a publish/subscribe messaging system, which has become a core enabling technology behind microservice architectures and a key reason for their rising popularity. A central principle of publish/subscribe systems is decoupled communication: producers do not know who subscribes, and consumers do not know who publishes. This decoupling makes it easy to add new listeners or new publishers without disrupting existing processes.

The first Unified Assurance pipeline available is the Event pipeline. In its most basic setup, the pipeline connects event collectors to the FCOM processor for normalization, and finally to the Event Sink for storage in the Event database.

Event Pipeline

Description of illustration event-pipeline.png

High Availability and Redundancy

Kubernetes high availability is about setting up Kubernetes, along with its supporting components, so that there is no single point of failure. A single-primary cluster can easily fail, while a multi-primary cluster uses multiple primary nodes, each of which has access to the same worker nodes. In a single-primary cluster, critical components like the API server and controller manager reside only on the one primary node, and if it fails, you cannot create more services, pods, and so on. In a Kubernetes HA environment, however, these components are replicated on multiple primaries (usually three), and if any primary fails, the remaining primaries keep the cluster up and running.
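In RKE terms, this multi-primary layout is expressed in cluster.yml by giving the controlplane and etcd roles to several nodes. The addresses and user below are hypothetical:

```yaml
# HA node layout sketch for RKE's cluster.yml: three primaries replicate the
# control plane and etcd, so losing any one of them leaves the cluster running.
nodes:
  - { address: 10.0.0.11, user: docker-user, role: [controlplane, etcd] }
  - { address: 10.0.0.12, user: docker-user, role: [controlplane, etcd] }
  - { address: 10.0.0.13, user: docker-user, role: [controlplane, etcd] }
  - { address: 10.0.0.21, user: docker-user, role: [worker] }
  - { address: 10.0.0.22, user: docker-user, role: [worker] }
```

Three etcd members are the usual choice because etcd needs a majority quorum: with three members, any one can fail and the remaining two still form a majority.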

Kubernetes nodes pool together their resources to form a more powerful machine. When you deploy programs onto the cluster, it intelligently distributes the work to the individual nodes for you. If any nodes are added, removed, or fail, the cluster shifts work around as necessary.

Deploying Microservices

A Kubernetes cluster must be set up on one or more servers before microservices can be deployed. Refer to Microservice Cluster Setup for instructions.