Microservice Cluster Setup
Creating Kubernetes clusters is the first step in deploying microservices. Unified Assurance minimizes the complexity of creating and maintaining Kubernetes clusters. Rancher Kubernetes Engine (RKE) is a command-line tool for deploying Kubernetes. The Unified Assurance clusterctl application is a frontend to RKE and provides the configuration necessary for the opinionated setup.
Review the architecture and components described in Understanding Microservices before creating a cluster.
Dependencies
When running commands as root, do one of the following:
- Export the LD_LIBRARY_PATH:
export LD_LIBRARY_PATH=$A1BASEDIR/lib
- Source the .bashrc file:
source $A1BASEDIR/.bashrc
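The two options above can be combined into a small guard in the root shell. This is a sketch that assumes $A1BASEDIR is already set:

```shell
# Export the library path only if it is not already set.
# Assumes $A1BASEDIR points at the Unified Assurance base directory.
if [ -z "${LD_LIBRARY_PATH:-}" ]; then
  export LD_LIBRARY_PATH="$A1BASEDIR/lib"
fi
```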
Roles
The Cluster.Master and Cluster.Worker roles must be installed on one or more servers. For a single-server development system, the recommendation is to install both roles. On production systems, each data center or availability zone should have at least three servers with the Cluster.Master role. These servers can also have the Cluster.Worker role, depending on the resources available. The Package commands below must be run as root.
- Install both the Cluster.Master and Cluster.Worker roles by using the Cluster meta role:
$A1BASEDIR/bin/Package install-role Cluster
- Install only the Cluster.Master role:
$A1BASEDIR/bin/Package install-role Cluster.Master
- Install only the Cluster.Worker role:
$A1BASEDIR/bin/Package install-role Cluster.Worker
Note:
These roles can be added during installation of a new server by specifying them to SetupWizard.
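On a multi-server system the Package command has to be run on every server. A loop like the following can generate the per-host commands; the host names are placeholders, and running Package over SSH as root is an assumption about your access setup, so the commands are printed for review rather than executed:

```shell
# Placeholder host list -- replace with your own servers.
hosts="cluster1.example.com cluster2.example.com cluster3.example.com"

n=0
for h in $hosts; do
  # Print one install command per host; remove 'echo' to run it over SSH.
  echo "ssh root@${h} \$A1BASEDIR/bin/Package install-role Cluster"
  n=$((n+1))
done
```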
Setup SSH Keys
Each server in a Unified Assurance instance needs an SSH key so that the assure1 user can access the other servers. This step is not needed on the primary presentation server. The CreateSSLCertificate command below should be run as assure1.
$A1BASEDIR/bin/CreateSSLCertificate --Type SSH
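Once the key is in place, passwordless access can be spot-checked from any server. This sketch prints the check commands rather than running them, and the host names are placeholders for your own servers:

```shell
# Placeholder host list -- replace with the servers in this instance.
hosts="cluster-pri1.example.com cluster-pri2.example.com cluster-pri3.example.com"

checked=0
for h in $hosts; do
  # BatchMode makes ssh fail instead of prompting if key auth is not working.
  echo "ssh -o BatchMode=yes assure1@${h} hostname"
  checked=$((checked+1))
done
```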
Creating Clusters
The clusterctl command-line application provides the interface for creating, updating, and removing clusters. It determines the servers in each cluster from their Cluster.* roles and from whether those servers are already associated with an existing cluster. The clusterctl command must be run as root.
$A1BASEDIR/bin/cluster/clusterctl create
Note:
For redundant clusters across data centers, take care if all the roles are installed before cluster creation. By default, clusterctl pulls in all available servers and creates a single cluster. To define two separate clusters, specify the hosts explicitly.
- Primary cluster:
$A1BASEDIR/bin/cluster/clusterctl create --host cluster-pri1.example.com --host cluster-pri2.example.com --host cluster-pri3.example.com
- Redundant cluster:
$A1BASEDIR/bin/cluster/clusterctl create --host cluster-sec1.example.com --host cluster-sec2.example.com --host cluster-sec3.example.com --secondary
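When the host lists are long, the two invocations above can be scripted. This sketch builds and prints the commands from the example host names so they can be reviewed before running them as root:

```shell
# Example host names from the invocations above -- replace with your own.
PRI_HOSTS="cluster-pri1.example.com cluster-pri2.example.com cluster-pri3.example.com"
SEC_HOSTS="cluster-sec1.example.com cluster-sec2.example.com cluster-sec3.example.com"

# Build the repeated --host arguments for each cluster.
pri_args=""
for h in $PRI_HOSTS; do pri_args="$pri_args --host $h"; done
sec_args=""
for h in $SEC_HOSTS; do sec_args="$sec_args --host $h"; done

# Print the final commands; remove 'echo' to execute them.
echo "\$A1BASEDIR/bin/cluster/clusterctl create$pri_args"
echo "\$A1BASEDIR/bin/cluster/clusterctl create$sec_args --secondary"
```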
Update the Helm Repository
The Helm Repository should be updated on at least one server in a cluster, usually one of the primaries.
su - assure1
export WEBFQDN=<Primary Presentation Web FQDN>
a1helm repo update
Install Helm Packages
Helm packages are installed as releases, which can have unique names. By convention, a release is named the same as its Helm chart. Each installation must define the location of the Docker registry and the namespace to install the release into. Additional configuration can be set during installation, depending on the options each chart provides.
- The Unified Assurance Trap Collector microservice documentation has the information needed to deploy the service.
- An example event collection and processing pipeline includes multiple microservices:
  - The Unified Assurance Trap Collector microservice receives traps.
  - The Unified Assurance FCOM Processor microservice processes the traps.
  - The Unified Assurance Event Sink microservice inserts the traps into the database.
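As an illustration, installing the three microservices for that pipeline could look like the following. The chart names, the assure1/ repository prefix, the namespace, and the registry option are assumptions here; check each microservice's documentation for the exact chart name and supported values. The commands are printed for review rather than executed:

```shell
ns="a1-zone1-pri"                               # hypothetical namespace
registry="--set global.imageRegistry=\$WEBFQDN"  # assumed registry option

installed=0
for chart in trap-collector fcom-processor event-sink; do
  # By convention the release name matches the chart name.
  echo "a1helm install $chart assure1/$chart -n $ns $registry"
  installed=$((installed+1))
done
```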
Helpful Troubleshooting
Helm deployments and their associated Kubernetes pods, services, and other components can fail to initialize or crash unexpectedly. The following commands help troubleshoot these issues.
Note:
These commands must be run as the assure1 user:
su - assure1
- Look at all running pods:
a1k get pods --all-namespaces
- Describe a pod to get events if it fails to start:
a1k describe pod <Pod Name> -n <Namespace>
Note:
The <Pod Name> and <Namespace> values are available in the output of the get pods command above.
- Get and tail logs of a running pod:
a1k logs <Pod Name> -n <Namespace> -f
Note:
The <Pod Name> and <Namespace> values are available in the output of the get pods command above.
- Uninstall a microservice:
a1helm uninstall <Release Name> -n <Namespace>
Note:
The <Release Name> and <Namespace> values are available in the output of the following command:
a1helm list --all-namespaces
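For a pod stuck in a crash loop, the describe and logs steps above are usually run together. This sketch builds the commands from placeholder values and prints them for review; the --previous flag (standard kubectl behavior, which a1k wraps) shows logs from the last crashed container:

```shell
pod="example-pod"        # placeholder -- take from 'a1k get pods --all-namespaces'
ns="example-namespace"   # placeholder

describe_cmd="a1k describe pod $pod -n $ns"
# --previous shows logs from the previous (crashed) container instance.
logs_cmd="a1k logs $pod -n $ns --previous"

echo "$describe_cmd"
echo "$logs_cmd"
```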