Use Terraform to Deploy Multiple Kubernetes Clusters across different OCI Regions using OKE and Create a Full Mesh Network using RPC

Introduction

In this tutorial, we will explain how to create multiple Kubernetes clusters using Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE) and deploy these clusters in three different countries (regions). To speed up the deployment and to deploy the Kubernetes clusters consistently with the fewest configuration mistakes, we will use Terraform and some custom bash scripts.

image

We have also manually deployed single clusters using the quick create method and the custom create method.

This tutorial is an update based on this documentation: Deploying multiple Kubernetes cluster on Oracle Cloud.

Objectives

Task 1: Determine the Topology (Star vs Mesh)

We are building these Kubernetes clusters to deploy a container-based application across all regions. To allow communication between these Kubernetes clusters, we need some form of network connectivity. For now, this is out of the scope of this tutorial, but we need to make some architectural decisions upfront. One of these decisions is to determine whether we want to allow direct communication between all regions, or whether we want to use one region as the hub for all communication and the others as spokes.

Star Topology: The star topology allows communication between the regions using one single hub region. So if the Kubernetes cluster in San Jose wants to communicate with the Kubernetes cluster in Dubai, it will use Amsterdam as a transit hub.

image

Mesh Topology: The mesh topology allows direct communication to and from all regions (Kubernetes clusters). So if the Kubernetes cluster in San Jose wants to communicate with the Kubernetes cluster in Dubai, it can communicate directly.

image

In this tutorial, we are going to build a mesh topology, and this connectivity will be established using Dynamic Routing Gateways (DRGs) and Remote Peering Connections (RPCs).

Task 2: Prepare your Environment for Authentication and Run Terraform Scripts

Before using Terraform, we need to prepare our environment. To use Terraform, open a terminal. In this tutorial, we are using the macOS Terminal application.

image

  1. Run the following command to verify that Terraform is installed and added to your path, and to check which version is installed.

    Last login: Thu Apr  4 08:50:38 on ttys000
    iwhooge@iwhooge-mac ~ % terraform -v
    zsh: command not found: terraform
    iwhooge@iwhooge-mac ~ %
    
  2. You can see that the command is not found. This means that either Terraform is not installed or it is not added to the path variable.

image

As you can see, Terraform is not installed, so we need to install it. Note that installing Terraform is not the only step; multiple steps are required to deploy the Terraform application and to prepare the environment for our full end-to-end scripting solution that deploys three Kubernetes clusters in three different regions.

The following image provides guidance on the necessary tasks to follow.

image

Task 2.1: Install Homebrew

Terraform can be installed using different methods. In this tutorial, we will install Terraform using Homebrew.

Homebrew is a package manager for macOS (and Linux) that can be used to install applications and their required dependencies, similar to apt or yum.
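
A minimal sketch of this step, assuming Homebrew is not yet installed on your machine: the official install script downloads and sets up Homebrew, and brew --version confirms that it is on your path.

    # Download and run the official Homebrew install script
    /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

    # Verify that Homebrew is installed and added to your path
    brew --version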

Task 2.2: Use Homebrew to Install Terraform

In this task, we will use Homebrew to install the Terraform package.
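
A minimal sketch of the installation, assuming you use the HashiCorp Homebrew tap (installing the plain terraform formula also works):

    # Add the HashiCorp tap and install Terraform from it
    brew tap hashicorp/tap
    brew install hashicorp/tap/terraform

    # Verify the installation by checking the version again
    terraform -v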

Task 2.3: Create Local RSA Keys for OCI Authentication

To allow authentication with OCI using an API key, we need to generate a new private and public key for this purpose only.

  1. Run the following command to change the directory to your home directory.

    iwhooge@iwhooge-mac ~ % cd ~/
    
  2. Run the following command to verify that you are in your home directory.

    iwhooge@iwhooge-mac ~ % pwd
    
  3. Verify that your home directory is correct.

    /Users/iwhooge
    
  4. Run the following command to create a new directory that will contain the information to authenticate with OCI.

    iwhooge@iwhooge-mac ~ % mkdir .oci
    
  5. Run the following command to generate a private RSA key.

    iwhooge@iwhooge-mac ~ % openssl genrsa -out ~/.oci/4-4-2023-rsa-key.pem 2048
    Generating RSA private key, 2048 bit long modulus
    .........................................................................................................................................+++++
    ......+++++
    e is 65537 (0x10001)
    
  6. Run the following command to restrict the permissions of the private key file so that only your user can read and write it.

    iwhooge@iwhooge-mac ~ % chmod 600 ~/.oci/4-4-2023-rsa-key.pem
    
  7. Run the following command to generate a public RSA key from the private key.

    iwhooge@iwhooge-mac ~ % openssl rsa -pubout -in ~/.oci/4-4-2023-rsa-key.pem -out ~/.oci/4-4-2023-rsa-key-public.pem
    
  8. Verify that the key writing is completed.

    writing RSA key
    
  9. Run the following command to look at the content of the private RSA key.

    iwhooge@iwhooge-mac ~ % cat ~/.oci/4-4-2023-rsa-key.pem
    
  10. Verify the content of the private RSA key.

    -----BEGIN RSA PRIVATE KEY-----
    MIIEpQIBAAKCAQEA52+LJ+gp3MAJGtXTeQ/dmqq6Xh1zufK0yurLt/0w/DuxqEsL
    RT7x+Znz6EOVLx34Ul27QnHk7bhXaDCuwInnaOTOiS97AnLuFM08tvFksglnJssA
    JsszfTzUMNf0w4wtuLsJ5oRaPbVUa01TIm6HdwKAloIKYSn6z8gcvfLLItkyvPRo
    XXX
    w3yip+Yxr1YN3LjpDbZk4WTagKWoVQzp5nrfZlyU7ToZcMpUn/fIUsI=
    -----END RSA PRIVATE KEY-----
    
  11. Run the following command to look at the content of the public RSA key.

    iwhooge@iwhooge-mac ~ % cat ~/.oci/4-4-2023-rsa-key-public.pem
    
  12. Verify the content of the public RSA key.

    -----BEGIN PUBLIC KEY-----
    MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA52+LJ+gp3MAJGtXTeQ/d
    XXX
    mtHVtjLM1ftjYlaRSG5Xl/xdKMC8LH0bxpy3XXzLmDrYCP3LrhrIG8Xmuzsji6Hw
    TQIDAQAB
    -----END PUBLIC KEY-----
    iwhooge@iwhooge-mac ~ %
    

image

Task 2.4: Generate Local SSH Keys for Bastion Host Authentication

We also need to create a local SSH key pair to authenticate with the bastion host. This key pair is separate from the RSA key pair we created for authentication with the OCI Console (API).

  1. Run the following command to change the directory to your SSH directory.

    iwhooge@iwhooge-mac ~ % cd ~/.ssh/
    
  2. Run the following command to verify that you have a public and private SSH key that can be used.

    iwhooge@iwhooge-mac .ssh % ls -l -a
    
  3. Note that we do not have any SSH key pair. In this tutorial, we will generate a new SSH key pair.

    total 16
    drwx------   4 iwhooge  staff   128 Feb  8 12:48 .
    drwxr-x---+ 30 iwhooge  staff   960 Apr  4 11:03 ..
    -rw-------@  1 iwhooge  staff  2614 Feb 28 11:49 known_hosts
    -rw-------@  1 iwhooge  staff  1773 Feb  8 12:48 known_hosts.old
    
  4. Run the following command to generate a new SSH key pair.

    iwhooge@iwhooge-mac .ssh % ssh-keygen -t rsa
    Generating public/private rsa key pair.
    Enter file in which to save the key (/Users/iwhooge/.ssh/id_rsa):
    
  5. Leave the passphrase empty and press ENTER.

    Enter passphrase (empty for no passphrase):
    
  6. Leave the passphrase empty and press ENTER.

    Enter same passphrase again:
    
  7. Note that the new SSH key pair is saved in the provided locations.

    Your identification has been saved in /Users/iwhooge/.ssh/id_rsa
    Your public key has been saved in /Users/iwhooge/.ssh/id_rsa.pub
    The key fingerprint is:
    SHA256:2E7jD5Cvt0C3pArp+u5Q3BWDBDwfbtxp5T6eez75DPc iwhooge@iwhooge-mac
    The key's randomart image is:
    +---[RSA 3072]----+
    | ..o..o          |
    |  o o  o.        |
    |   = o.+         |
    | . .=.++.        |
    |  o...=.S        |
    | . . . Xoo       |
    |. o   o.*o...    |
    | o . . o+o++ .   |
    |.== . ..o=ooo E  |
    +----[SHA256]-----+
    iwhooge@iwhooge-mac .ssh %
    

image

Task 2.5: Create an API Key in the OCI Console and Add the Public Key to your OCI Account

Task 2.6: Collect the Required Information for your OCI Environment

We need to collect some information for our Terraform files to authenticate with OCI using the API. Most of the information is already provided in the API authentication configuration file created in Task 2.5.

You also need the OCID of the compartment in which you will deploy your Kubernetes clusters.
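
For reference, you can inspect the API authentication configuration file created in Task 2.5. The output below is a sketch in which the OCIDs and fingerprint are placeholders, and the key_file path assumes the RSA key created in Task 2.3.

    iwhooge@iwhooge-mac ~ % cat ~/.oci/config
    [DEFAULT]
    user=ocid1.user.oc1..<your_user_ocid>
    fingerprint=<your_api_key_fingerprint>
    tenancy=ocid1.tenancy.oc1..<your_tenancy_ocid>
    region=eu-amsterdam-1
    key_file=~/.oci/4-4-2023-rsa-key.pem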

Task 3: Create Terraform Scripts and Files

We have completed the preparation on our local machine, including setting up Terraform, the RSA and SSH keys, configuring the OCI environment (API), and gathering all the essential information needed to authenticate Terraform with OCI. Now we can create the Terraform scripts and files.

First, we need to verify that we are subscribed to the regions in which we are deploying our Kubernetes clusters. If we deploy to a region that we are not subscribed to, we will get an authentication error and the deployment will fail.

For this tutorial, we are using the following three regions for deployment: Amsterdam, San Jose, and Dubai.
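
As an optional check, assuming you also have the OCI CLI installed and configured with the same ~/.oci/config file, you can list the regions your tenancy is subscribed to; Amsterdam, San Jose, and Dubai should appear as eu-amsterdam-1, us-sanjose-1, and me-dubai-1.

    # List all regions the tenancy is subscribed to
    # (replace the tenancy OCID with your own)
    oci iam region-subscription list --tenancy-id ocid1.tenancy.oc1..<your_tenancy_ocid>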

Task 4: Run Terraform to Deploy the OKE Clusters along with the necessary Resources (VCN, Subnets, DRGs, RPCs and so on)

We have the Terraform scripts in place with the correct parameters. Now, execute the scripts to build our environment consisting of three Kubernetes clusters in three different regions.
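
A minimal sketch of the standard Terraform workflow for this task, run from the directory that contains the Terraform scripts; the exact output and confirmation prompts depend on your configuration.

    # Initialize the working directory and download the OCI provider
    terraform init

    # Review the resources that will be created (VCNs, subnets, OKE clusters, DRGs, RPCs)
    terraform plan

    # Apply the plan and create the resources in all three regions
    terraform apply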

The following image illustrates how you set up the SSH connection to the bastion host and, from the bastion host, to the operator host.

image

Now, we have deployed three Kubernetes clusters in the different regions. Take a look at the deployed resources from a high level in the OCI Console.

Task 5: Establish RPC Connections

Establish the connections between the various RPC attachments. Let us first review these in the different regions.

Collect All the RPC OCIDs

Repeat the process for all remote peering connections in all regions and on all DRGs, and save their OCIDs.
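
As an optional shortcut, assuming the OCI CLI is configured, you can also list the remote peering connections and their OCIDs per region from the command line; the compartment OCID below is a placeholder.

    # List the remote peering connections in the Amsterdam region
    oci network remote-peering-connection list \
      --compartment-id ocid1.compartment.oc1..<your_compartment_ocid> \
      --region eu-amsterdam-1

    # Repeat with --region us-sanjose-1 and --region me-dubai-1 for the other regions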

Now, we have collected the following remote peering connection OCIDs:

Create the RPC Peerings

Configure the peering from C1 to C2 and C3. This will automatically configure the peering for C1 on the C2 and C3 sides. Likewise, configuring the peering from C2 to C3 will automatically configure the peering for C2 on the C3 side, so no peering needs to be initiated from C3 itself.

Configure the C1 Peerings (Amsterdam).

Configure the C2 Peering (San Jose).
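
For comparison, a sketch of establishing one of these peerings with the OCI CLI instead of the OCI Console, assuming the RPC OCIDs collected earlier; the command only needs to be run from one side of each peering.

    # Connect the Amsterdam (C1) RPC to the San Jose (C2) RPC
    oci network remote-peering-connection connect \
      --remote-peering-connection-id <c1_rpc_ocid> \
      --peer-id <c2_rpc_ocid> \
      --peer-region-name us-sanjose-1 \
      --region eu-amsterdam-1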

Task 6: Use the Network Visualizer to Verify the RPC Connections

Perform an additional check to ensure that the RPCs have been configured correctly using the Network Visualizer.

  1. Click the hamburger menu in the upper left corner.
  2. Click Networking.
  3. Click Network Visualizer.

image

  1. Make sure you are connected to the Amsterdam region.
  2. Note that the Amsterdam region is c1.
  3. Note the connections from Amsterdam to San Jose and Dubai.

image

  1. Make sure you are connected to the San Jose region.
  2. Note that the San Jose region is c2.
  3. Note the connections from San Jose to Amsterdam and Dubai.

image

  1. Make sure you are connected to the Dubai region.
  2. Note that the Dubai region is c3.
  3. Note the connections from Dubai to Amsterdam and San Jose.

image

Task 7: Use the Bastion and Operator Hosts to Verify Connectivity

We have created the Kubernetes clusters in all three regions and connected the regions using RPC. We can now use the operator host to verify that the operator can manage the Kubernetes clusters.

  1. Run the following command (which was provided in the Terraform output after the deployment completed).

    Last login: Fri Apr  5 09:10:01 on ttys000
    iwhooge@iwhooge-mac ~ % ssh -o ProxyCommand='ssh -W %h:%p -i ~/.ssh/id_rsa opc@143.47.183.243' -i ~/.ssh/id_rsa opc@10.1.0.12
    Activate the web console with: systemctl enable --now cockpit.socket
    Last login: Fri Apr  5 07:34:13 2024 from 10.1.0.2
    [opc@o-tmcntm ~]$
    
  2. Run the following command, which iterates with a for loop through each Kubernetes cluster (c1, c2, and c3) and retrieves the status of the worker nodes.

    [opc@o-tmcntm ~]$ for c in c1 c2 c3; do
    >   kubectx $c
    >   kubectl get nodes
    > done
    Switched to context "c1".
    NAME           STATUS   ROLES   AGE   VERSION
    10.1.113.144   Ready    node    76m   v1.28.2
    10.1.125.54    Ready    node    76m   v1.28.2
    Switched to context "c2".
    NAME          STATUS   ROLES   AGE   VERSION
    10.2.65.174   Ready    node    78m   v1.28.2
    10.2.98.54    Ready    node    78m   v1.28.2
    Switched to context "c3".
    NAME           STATUS   ROLES   AGE   VERSION
    10.3.118.212   Ready    node    73m   v1.28.2
    10.3.127.119   Ready    node    73m   v1.28.2
    [opc@o-tmcntm ~]$
    

    Run the following command in the terminal after you connect to the operator host.

    for c in c1 c2 c3; do
      kubectx $c
      kubectl get nodes
    done
    
  3. Note the output of all nodes for all the Kubernetes clusters that were deployed using the Terraform script.

image

Run the kubectl get all -n kube-system command with the for loop.

[opc@o-tmcntm ~]$ for c in c1 c2 c3; do
>   kubectx $c
>   kubectl get all -n kube-system
> done
Switched to context "c1".
NAME                                       READY   STATUS    RESTARTS       AGE
pod/coredns-844b4886f-8b4k6                1/1     Running   0              118m
pod/coredns-844b4886f-g8gbm                1/1     Running   0              122m
pod/csi-oci-node-5xzdg                     1/1     Running   0              119m
pod/csi-oci-node-nsdg4                     1/1     Running   1 (118m ago)   119m
pod/kube-dns-autoscaler-74f78468bf-l9644   1/1     Running   0              122m
pod/kube-flannel-ds-5hsp7                  1/1     Running   0              119m
pod/kube-flannel-ds-wk7xl                  1/1     Running   0              119m
pod/kube-proxy-gpvv2                       1/1     Running   0              119m
pod/kube-proxy-vgtf7                       1/1     Running   0              119m
pod/proxymux-client-nt59j                  1/1     Running   0              119m
pod/proxymux-client-slk9j                  1/1     Running   0              119m

NAME               TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
service/kube-dns   ClusterIP   10.101.5.5   <none>        53/UDP,53/TCP,9153/TCP   122m

NAME                                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                                 AGE
daemonset.apps/csi-oci-node               2         2         2       2            2           <none>                                        122m
daemonset.apps/kube-flannel-ds            2         2         2       2            2           <none>                                        122m
daemonset.apps/kube-proxy                 2         2         2       2            2           beta.kubernetes.io/os=linux                   122m
daemonset.apps/node-termination-handler   0         0         0       0            0           oci.oraclecloud.com/oke-is-preemptible=true   122m
daemonset.apps/nvidia-gpu-device-plugin   0         0         0       0            0           <none>                                        122m
daemonset.apps/proxymux-client            2         2         2       2            2           node.info.ds_proxymux_client=true             122m

NAME                                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/coredns               2/2     2            2           122m
deployment.apps/kube-dns-autoscaler   1/1     1            1           122m

NAME                                             DESIRED   CURRENT   READY   AGE
replicaset.apps/coredns-844b4886f                2         2         2       122m
replicaset.apps/kube-dns-autoscaler-74f78468bf   1         1         1       122m
Switched to context "c2".
NAME                                       READY   STATUS    RESTARTS       AGE
pod/coredns-84bd9cd884-4fqvr               1/1     Running   0              120m
pod/coredns-84bd9cd884-lmgz2               1/1     Running   0              124m
pod/csi-oci-node-4zl9l                     1/1     Running   0              122m
pod/csi-oci-node-xjzfd                     1/1     Running   1 (120m ago)   122m
pod/kube-dns-autoscaler-59575f8674-m6j2z   1/1     Running   0              124m
pod/kube-flannel-ds-llhhq                  1/1     Running   0              122m
pod/kube-flannel-ds-sm6fg                  1/1     Running   0              122m
pod/kube-proxy-7ppw8                       1/1     Running   0              122m
pod/kube-proxy-vqfgb                       1/1     Running   0              122m
pod/proxymux-client-cnkph                  1/1     Running   0              122m
pod/proxymux-client-k5k6n                  1/1     Running   0              122m

NAME               TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
service/kube-dns   ClusterIP   10.102.5.5   <none>        53/UDP,53/TCP,9153/TCP   124m

NAME                                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                                 AGE
daemonset.apps/csi-oci-node               2         2         2       2            2           <none>                                        124m
daemonset.apps/kube-flannel-ds            2         2         2       2            2           <none>                                        124m
daemonset.apps/kube-proxy                 2         2         2       2            2           beta.kubernetes.io/os=linux                   124m
daemonset.apps/node-termination-handler   0         0         0       0            0           oci.oraclecloud.com/oke-is-preemptible=true   124m
daemonset.apps/nvidia-gpu-device-plugin   0         0         0       0            0           <none>                                        124m
daemonset.apps/proxymux-client            2         2         2       2            2           node.info.ds_proxymux_client=true             124m

NAME                                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/coredns               2/2     2            2           124m
deployment.apps/kube-dns-autoscaler   1/1     1            1           124m

NAME                                             DESIRED   CURRENT   READY   AGE
replicaset.apps/coredns-84bd9cd884               2         2         2       124m
replicaset.apps/kube-dns-autoscaler-59575f8674   1         1         1       124m
Switched to context "c3".
NAME                                       READY   STATUS    RESTARTS   AGE
pod/coredns-56c7ffc89c-jt85k               1/1     Running   0          115m
pod/coredns-56c7ffc89c-lsqcg               1/1     Running   0          121m
pod/csi-oci-node-gfswn                     1/1     Running   0          116m
pod/csi-oci-node-xpwbp                     1/1     Running   0          116m
pod/kube-dns-autoscaler-6b69bf765c-fxjvc   1/1     Running   0          121m
pod/kube-flannel-ds-2sqbk                  1/1     Running   0          116m
pod/kube-flannel-ds-l7sdz                  1/1     Running   0          116m
pod/kube-proxy-4qcmb                       1/1     Running   0          116m
pod/kube-proxy-zcrk4                       1/1     Running   0          116m
pod/proxymux-client-4lgg7                  1/1     Running   0          116m
pod/proxymux-client-zbcrg                  1/1     Running   0          116m

NAME               TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
service/kube-dns   ClusterIP   10.103.5.5   <none>        53/UDP,53/TCP,9153/TCP   121m

NAME                                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                                 AGE
daemonset.apps/csi-oci-node               2         2         2       2            2           <none>                                        122m
daemonset.apps/kube-flannel-ds            2         2         2       2            2           <none>                                        121m
daemonset.apps/kube-proxy                 2         2         2       2            2           beta.kubernetes.io/os=linux                   121m
daemonset.apps/node-termination-handler   0         0         0       0            0           oci.oraclecloud.com/oke-is-preemptible=true   121m
daemonset.apps/nvidia-gpu-device-plugin   0         0         0       0            0           <none>                                        122m
daemonset.apps/proxymux-client            2         2         2       2            2           node.info.ds_proxymux_client=true             122m

NAME                                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/coredns               2/2     2            2           121m
deployment.apps/kube-dns-autoscaler   1/1     1            1           121m

NAME                                             DESIRED   CURRENT   READY   AGE
replicaset.apps/coredns-56c7ffc89c               2         2         2       121m
replicaset.apps/kube-dns-autoscaler-6b69bf765c   1         1         1       121m
[opc@o-tmcntm ~]$

Task 8: Delete the OKE Clusters using Terraform

Since we used Terraform for our deployment, we can also use Terraform to delete the complete deployment.
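
A minimal sketch of the teardown, run from the same directory that contains the Terraform configuration and state; review the list of resources carefully before confirming, because this removes all clusters and network resources in all three regions.

    # Destroy all resources managed by this Terraform configuration
    terraform destroy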

Acknowledgments

More Learning Resources

Explore other labs on docs.oracle.com/learn or access more free learning content on the Oracle Learning YouTube channel. Additionally, visit education.oracle.com/learning-explorer to become an Oracle Learning Explorer.

For product documentation, visit Oracle Help Center.