Microservice Cluster Setup

Creating Kubernetes clusters is the first step in deploying microservices. Unified Assurance minimizes the complexity of creating and maintaining Kubernetes clusters. Rancher Kubernetes Engine (RKE) is a command-line tool for deploying Kubernetes. The Unified Assurance clusterctl application is a frontend to RKE and provides the configuration necessary for the opinionated setup.

Review the architecture and components described in Understanding Microservices before creating a cluster.

Dependencies

Commands that must be run as root require a properly initialized root environment. Log in directly as root, or switch to root using a login shell, before running them.

Roles

The Cluster.Master and Cluster.Worker roles must be installed on one or more servers. For a single-server development system, the recommendation is to install both roles. On production systems, each data center or availability zone should have at least three servers with the Cluster.Master role. These servers can also have the Cluster.Worker role, depending on the resources available. The Package command used to install these roles must be run as root.

Set Up SSH Keys

Each server in a Unified Assurance instance needs an SSH key so that the assure1 user can access the other servers. This step is not needed on the primary presentation server. The CreateSSLCertificate command should be run as assure1.

$A1BASEDIR/bin/CreateSSLCertificate --Type SSH
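After the key is generated and distributed, you can confirm that passwordless access works between servers. The hostname cluster-pri2.example.com below is only a placeholder for any peer server in the instance:

```shell
# Verify passwordless SSH to a peer server as the assure1 user.
# cluster-pri2.example.com is a placeholder; substitute a real peer hostname.
ssh -o BatchMode=yes assure1@cluster-pri2.example.com hostname
```

BatchMode=yes makes ssh fail immediately instead of prompting for a password, so a non-zero exit code indicates the key exchange is incomplete.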

Creating Clusters

The clusterctl command-line application provides the interface for creating, updating, and removing clusters. It determines which servers belong in each cluster based on their Cluster.* roles and whether those servers are already associated with an existing cluster. The clusterctl command must be run as root.

$A1BASEDIR/bin/cluster/clusterctl create

Note:

For redundant clusters across data centers, take care if all the roles are installed before cluster creation. By default, clusterctl pulls in all available servers and creates a single cluster. To define two separate clusters, specify the hosts explicitly.

  1. Primary cluster:
$A1BASEDIR/bin/cluster/clusterctl create --host cluster-pri1.example.com --host cluster-pri2.example.com --host cluster-pri3.example.com
  2. Redundant cluster:
$A1BASEDIR/bin/cluster/clusterctl create --host cluster-sec1.example.com --host cluster-sec2.example.com --host cluster-sec3.example.com --secondary

Update the Helm Repository

The Helm Repository should be updated on at least one server in a cluster, usually one of the primaries.

su - assure1
export WEBFQDN=<Primary Presentation Web FQDN> 
a1helm repo update 
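To confirm the repository refresh, you can list the charts it serves. Because a1helm is a frontend to Helm, the standard search subcommand is assumed to pass through; the repository name assure1 is an assumption here, not a documented value:

```shell
# List charts available after the repository update.
# "assure1" is an assumed repository name; adjust to match your configuration.
a1helm search repo assure1
```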

Install Helm Packages

Helm packages are installed as releases and can have unique names. By default, the convention is to name the release the same as the Helm chart. Each install needs to define the location of the Docker Registry and the namespace to install the release. Additional configuration can be set during install depending on the options provided for each chart.
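As a sketch of the conventions above, the following installs a release named after its chart, pointing the install at the Docker Registry and a target namespace. The chart name some-service, the namespace a1-zone1-pri, and the --set key for the registry are illustrative assumptions, not fixed values; check the options documented for each chart:

```shell
# Illustrative install; chart name, namespace, and registry key are placeholders.
export WEBFQDN=<Primary Presentation Web FQDN>
a1helm install some-service assure1/some-service \
  --namespace a1-zone1-pri \
  --set global.imageRegistry=$WEBFQDN
```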

Helpful Troubleshooting

Helm deployments and the associated Kubernetes pods, services, and other components can fail to initialize or crash unexpectedly. Here are a few helpful commands to aid in troubleshooting these issues.

Note:

These commands must be run as the assure1 user:

su - assure1
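Assuming kubectl and Helm access are configured for the assure1 user, the standard commands below cover the most common checks; the pod and namespace names are placeholders:

```shell
# List all pods in every namespace, including those that failed to start.
kubectl get pods --all-namespaces

# Show status detail and recent events for a failing pod (names are placeholders).
kubectl describe pod <pod-name> -n <namespace>

# Show logs from a crashing container; --previous shows the last crashed instance.
kubectl logs <pod-name> -n <namespace> --previous

# List Helm releases and their deployment status across all namespaces.
helm list --all-namespaces
```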