Microservice Cluster Setup

Creating Kubernetes clusters is the first step in deploying microservices. Unified Assurance minimizes the complexity of creating and maintaining Kubernetes clusters. Rancher Kubernetes Engine (RKE) is a command-line tool for deploying Kubernetes. The Unified Assurance clusterctl application is a frontend to RKE and provides the configuration necessary for the opinionated setup.

Review the architecture and components described in Understanding Microservices before creating a cluster.

Dependencies

When running commands as root, one of the following must be done: log in to the server directly as the root user, or switch to the root user from an account with sudo privileges.
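For example, to switch to the root user from an account with sudo privileges:

sudo su -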

Roles

The Cluster.Master and Cluster.Worker roles must be installed on one or more servers. For a single-server development system, the recommendation is to install both roles. On production systems, each data center or availability zone should have at least three servers with the Cluster.Master role. These servers can also have the Cluster.Worker role, depending on the resources available. The Package command used to install these roles must be run as root.
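As an illustration, roles are installed using the Package command. The exact invocation below is an assumption based on a typical install syntax; confirm it against the documentation for your release:

$A1BASEDIR/bin/Package install Cluster.Master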

Setup SSH Keys

Each server in a Unified Assurance instance needs an SSH key so that the assure1 user can access the other servers. This step is not needed on the primary presentation server. The CreateSSLCertificate command should be run as assure1.

$A1BASEDIR/bin/CreateSSLCertificate --Type SSH

Creating Clusters

The clusterctl command line application provides the interface for creating, updating, and removing clusters. It determines which servers belong to each cluster based on their Cluster.* roles and whether those servers are already associated with an existing cluster. The clusterctl command must be run as root.

In the command below, replace the <Cluster Name> placeholder with a name relevant to the servers being added to the cluster, for example primary or secondary:

$A1BASEDIR/bin/cluster/clusterctl create <Cluster Name>

Note:

For redundant clusters across data centers, care must be taken if all the roles are installed before cluster creation. By default, clusterctl pulls in all available servers and creates a single cluster. To define two separate clusters, specify the hosts explicitly:

  1. Primary cluster:

    $A1BASEDIR/bin/cluster/clusterctl create <Cluster Name> --host cluster-pri1.example.com --host cluster-pri2.example.com --host cluster-pri3.example.com
    
  2. Redundant cluster:

    $A1BASEDIR/bin/cluster/clusterctl create <Cluster Name> --host cluster-sec1.example.com --host cluster-sec2.example.com --host cluster-sec3.example.com --secondary
    

Setup Redundancy

Run the clusterctl command with the join option to combine the clusters into a redundant pairing. The clusterctl command must be run as root.

$A1BASEDIR/bin/cluster/clusterctl join --primaryCluster <PrimaryHostFQDN> --secondaryCluster <SecondaryHostFQDN>

Remove Redundancy

Run the clusterctl command with the detach option to remove the redundant pairing in a cluster. The clusterctl command must be run as root.

$A1BASEDIR/bin/cluster/clusterctl detach

Update the Helm Repository

The Helm repository should be updated on at least one server in a cluster, usually one of the primary servers.

su - assure1
export WEBFQDN=<Primary Presentation Web FQDN> 
a1helm repo update 

Install Helm Packages

Helm packages are installed as releases and can have unique names. By default, the convention is to name the release after the Helm chart it installs. Each installation must define the location of the Docker Registry and the namespace in which to install the release. Additional configuration can be set during installation, depending on the options provided by each chart.
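The following is a minimal sketch of an installation, run as assure1. The release and chart name event-collector and the namespace a1-zone1-pri are illustrative assumptions, and global.imageRegistry is assumed to be the chart option that points releases at the Docker Registry; substitute the chart, namespace, and options appropriate for your deployment:

su - assure1
export WEBFQDN=<Primary Presentation Web FQDN>
a1helm install event-collector assure1/event-collector --namespace a1-zone1-pri --set global.imageRegistry=$WEBFQDN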

Customizing the Cluster Configuration File

Some installations may need to customize the configuration file that is used when creating clusters.

Creating a New Cluster

When creating a new cluster, customize the configuration by editing the template file found at the following location:

$A1BASEDIR/etc/rke/cluster-tmpl.yml

Clusters can then be created as described above.

Updating an Existing Cluster

For a server with clusters already running, update the cluster.yml file, then apply the change with the clusterctl upgrade command. The configuration file is at the following location:

$A1BASEDIR/etc/rke/cluster.yml

Once the file has been changed, the following command must be run to upgrade the clusters with the new configurations:

Note:

This command must be run as the root user:

$A1BASEDIR/bin/cluster/clusterctl upgrade

Example: Changing the File Size Limit Used by the Vision Ingress Controller

One example is to change the maximum body size for the ingress controller. In the relevant configuration file, find the ingress section. In the options definition of that section, edit or add the following line to change the maximum size allowed:

proxy-body-size: 15m
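For context, the following is a minimal sketch of where this option sits in an RKE cluster configuration file; the nginx provider shown is the RKE default, and all other settings are omitted:

ingress:
  provider: nginx
  options:
    proxy-body-size: 15m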

Then run the upgrade command above when upgrading existing clusters, or follow the documentation above to create new clusters.

Helpful Troubleshooting

Helm deployments and the associated Kubernetes pods, services, and other components can fail to initialize or crash unexpectedly. The following commands can aid in troubleshooting these issues.

Note:

These commands must be run as the assure1 user:

su - assure1
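For example, the following standard Helm and kubectl commands inspect release status, pod state, and container logs. Replace the <Namespace> and <Pod Name> placeholders with values from your deployment; if your installation wraps kubectl the way it wraps Helm with a1helm, use that wrapper instead:

# List all releases and their status across all namespaces.
a1helm list -A

# List the pods in a namespace and check their readiness and status.
kubectl get pods -n <Namespace>

# Show detailed state and recent events for a failing pod.
kubectl describe pod <Pod Name> -n <Namespace>

# Print the logs from a pod's containers.
kubectl logs <Pod Name> -n <Namespace>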