Note:
- This tutorial requires access to Oracle Cloud. To sign up for a free account, see Get started with Oracle Cloud Infrastructure Free Tier.
- It uses example values for Oracle Cloud Infrastructure credentials, tenancy, and compartments. When completing your lab, substitute these values with ones specific to your cloud environment.
Use Terraform to Deploy Multiple Kubernetes Clusters across different OCI Regions using OKE and Create a Full Mesh Network using RPC
Introduction
In this tutorial, we will explain how to create multiple Kubernetes clusters using Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE) and deploy these clusters in three different countries (regions). To speed up the deployment, and to deploy the Kubernetes clusters consistently with the fewest configuration mistakes, we will use Terraform and some custom bash scripts.
We have also manually deployed single clusters using the quick create method and the custom create method.
This tutorial is an update based on this documentation: Deploying multiple Kubernetes cluster on Oracle Cloud.
Objectives
- Prepare our local computer with the necessary tools for Terraform actions in the Oracle Cloud Infrastructure (OCI) environment.
- Configure the OCI environment to authenticate requests from our local computer for Terraform executions.
- Create Terraform and shell scripts to deploy three Kubernetes clusters on OKE across different regions.
- Ensure inter-cluster communication by configuring Remote Peering Connections (RPC) on the Dynamic Routing Gateways (DRG).
Task 1: Determine the Topology (Star vs Mesh)
We are building these Kubernetes clusters to deploy a container-based application that spans all regions. To allow communication between these Kubernetes clusters, we need some form of network connectivity. The application deployment itself is out of the scope of this tutorial, but we need to make some architectural decisions upfront. One of these decisions is to determine whether we want to allow direct communication between all regions, or use one region as the hub for all communication and the others as spokes.
Star Topology: The star topology allows communication between the regions through one single hub region. So if the Kubernetes cluster in San Jose wants to communicate with the Kubernetes cluster in Dubai, it will use Amsterdam as a transit hub.
Mesh Topology: The mesh topology allows direct communication to and from all regions (Kubernetes clusters). So if the Kubernetes cluster in San Jose wants to communicate with the Kubernetes cluster in Dubai, it can communicate directly.
In this tutorial, we are going to build a mesh topology, and this connectivity will be done using DRG and RPC.
Task 2: Prepare your Environment for Authentication and Run Terraform Scripts
Before using Terraform, we need to prepare our environment. Open a terminal; in this tutorial, we are using the macOS Terminal application.
-
Run the following command to verify whether Terraform is installed, whether it is added to your path, and which version is installed.
Last login: Thu Apr 4 08:50:38 on ttys000 iwhooge@iwhooge-mac ~ % terraform -v zsh: command not found: terraform iwhooge@iwhooge-mac ~ %
-
The command is not found, which means that either Terraform is not installed or it is not added to the path variable.
Since Terraform is not installed, we need to install it. Note that installing Terraform is not the only step; multiple steps are required to deploy the Terraform application and to prepare the environment for our full end-to-end scripting solution that deploys three Kubernetes clusters in three different regions.
The following image provides guidance on the necessary tasks to follow.
Task 2.1: Install Homebrew
Terraform can be installed using different methods. In this tutorial, we will install Terraform using Homebrew.
Homebrew is a package manager for macOS (and Linux) that can be used to install applications and their required dependencies, similar to apt or yum.
-
Install Homebrew.
-
Run the following command to install Homebrew.
iwhooge@iwhooge-mac ~ % /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)" ==> Checking for `sudo` access (which may request your password)... Password: ==> This script will install: /opt/homebrew/bin/brew /opt/homebrew/share/doc/homebrew /opt/homebrew/share/man/man1/brew.1 /opt/homebrew/share/zsh/site-functions/_brew /opt/homebrew/etc/bash_completion.d/brew /opt/homebrew ==> The following new directories will be created: /opt/homebrew/Caskroom Press RETURN/ENTER to continue or any other key to abort: ==> /usr/bin/sudo /bin/mkdir -p /opt/homebrew/Caskroom ==> /usr/bin/sudo /bin/chmod ug=rwx /opt/homebrew/Caskroom ==> /usr/bin/sudo /usr/sbin/chown iwhooge /opt/homebrew/Caskroom ==> /usr/bin/sudo /usr/bin/chgrp admin /opt/homebrew/Caskroom ==> /usr/bin/sudo /usr/sbin/chown -R iwhooge:admin /opt/homebrew ==> Downloading and installing Homebrew... remote: Enumerating objects: 8902, done. remote: Counting objects: 100% (4704/4704), done. remote: Compressing objects: 100% (931/931), done. remote: Total 8902 (delta 3862), reused 4508 (delta 3719), pack-reused 4198 Receiving objects: 100% (8902/8902), 4.72 MiB | 11.67 MiB/s, done. Resolving deltas: 100% (5474/5474), completed with 597 local objects. From https://github.com/Homebrew/brew * [new branch] analytics_command_run_test_bot -> origin/analytics_command_run_test_bot * [new branch] brew_runtime_error_restore -> origin/brew_runtime_error_restore * [new branch] bump_skip_repology -> origin/bump_skip_repology * [new branch] bye-byebug -> origin/bye-byebug * [new branch] dependabot/bundler/Library/Homebrew/json_schemer-2.2.1 -> origin/dependabot/bundler/Library/Homebrew/json_schemer-2.2.1 * [new branch] load-internal-cask-json-v3 -> origin/load-internal-cask-json-v3 392cc15a7d..2fe08b139e master -> origin/master * [new branch] neon-proxy-5201 -> origin/neon-proxy-5201 * [new branch] strict-parser -> origin/strict-parser * [new tag] 4.2.10 -> 4.2.10 * [new tag] 4.2.11 -> 4.2.11 * [new tag] 4.2.12 -> 4.2.12 * [new tag] 4.2.13 -> 4.2.13 * [new tag] 4.2.15 -> 4.2.15 * [new tag] 4.2.16 -> 4.2.16 * [new tag] 4.2.7 -> 4.2.7 * [new tag] 4.2.8 -> 4.2.8 * [new tag] 4.2.9 -> 4.2.9 remote: Enumerating objects: 15, done. remote: Counting objects: 100% (9/9), done. remote: Total 15 (delta 9), reused 9 (delta 9), pack-reused 6 Unpacking objects: 100% (15/15), 2.23 KiB | 104.00 KiB/s, done. From https://github.com/Homebrew/brew * [new tag] 4.2.14 -> 4.2.14 Reset branch 'stable' ==> Updating Homebrew... Updated 2 taps (homebrew/core and homebrew/cask). ==> Installation successful! ==> Homebrew has enabled anonymous aggregate formulae and cask analytics. Read the analytics documentation (and how to opt-out) here: https://docs.brew.sh/Analytics No analytics data has been sent yet (nor will any be during this install run). ==> Homebrew is run entirely by unpaid volunteers. Please consider donating: https://github.com/Homebrew/brew#donations ==> Next steps: - Run these two commands in your terminal to add Homebrew to your PATH: (echo; echo 'eval "$(/opt/homebrew/bin/brew shellenv)"') >> /Users/iwhooge/.zprofile eval "$(/opt/homebrew/bin/brew shellenv)" - Run brew help to get started - Further documentation: https://docs.brew.sh iwhooge@iwhooge-mac ~ %
-
Press RETURN/ENTER to continue the installation.
- Note that the installation is successfully completed.
- Copy the additional commands to add Homebrew to your path variable.
-
Run the following command to add Homebrew to your path variable.
iwhooge@iwhooge-mac ~ % (echo; echo 'eval "$(/opt/homebrew/bin/brew shellenv)"') >> /Users/iwhooge/.zprofile eval "$(/opt/homebrew/bin/brew shellenv)"
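If you want to confirm that Homebrew is now available on your path before continuing, you can run the following command (the exact version number in the output will differ in your environment).
iwhooge@iwhooge-mac ~ % brew --version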
Task 2.2: Use Homebrew to Install Terraform
In this task, we will use Homebrew to install the Terraform package.
-
Run the following command to install the Terraform package.
iwhooge@iwhooge-mac ~ % brew install terraform ==> Downloading https://ghcr.io/v2/homebrew/core/terraform/manifests/1.5.7 ######################################################################### 100.0% ==> Fetching terraform ==> Downloading https://ghcr.io/v2/homebrew/core/terraform/blobs/sha256:f43afa7c ######################################################################### 100.0% ==> Pouring terraform--1.5.7.arm64_sonoma.bottle.tar.gz 🍺 /opt/homebrew/Cellar/terraform/1.5.7: 6 files, 69.7MB ==> Running `brew cleanup terraform`... Disable this behavior by setting HOMEBREW_NO_INSTALL_CLEANUP. Hide these hints with HOMEBREW_NO_ENV_HINTS (see `man brew`). iwhooge@iwhooge-mac ~ %
-
Run the following command to verify that Terraform is installed and to check its version.
iwhooge@iwhooge-mac ~ % terraform -v Terraform v1.5.7 on darwin_arm64 Your version of Terraform is out of date! The latest version is 1.7.5. You can update by downloading from https://www.terraform.io/downloads.html iwhooge@iwhooge-mac ~ %
-
Note that Homebrew installed Terraform version 1.5.7. -
Note that this is an old version. To upgrade the Terraform version, see Install Terraform.
-
To upgrade Terraform, we need to add the HashiCorp tap (repository) to Homebrew. Run the following command.
iwhooge@iwhooge-mac ~ % brew tap hashicorp/tap
-
Run the following command to install Terraform from the Hashicorp repository.
iwhooge@iwhooge-mac ~ % brew install hashicorp/tap/terraform terraform 1.5.7 is already installed but outdated (so it will be upgraded). ==> Fetching hashicorp/tap/terraform ==> Downloading https://releases.hashicorp.com/terraform/1.7.5/terraform_1.7.5_d ######################################################################### 100.0% ==> Upgrading hashicorp/tap/terraform 1.5.7 -> 1.7.5 🍺 /opt/homebrew/Cellar/terraform/1.7.5: 3 files, 88.7MB, built in 4 seconds ==> Running `brew cleanup terraform`... Disable this behaviour by setting HOMEBREW_NO_INSTALL_CLEANUP. Hide these hints with HOMEBREW_NO_ENV_HINTS (see `man brew`). Removing: /opt/homebrew/Cellar/terraform/1.5.7... (6 files, 69.7MB) Removing: /Users/iwhooge/Library/Caches/Homebrew/terraform_bottle_manifest--1.5.7... (9KB) Removing: /Users/iwhooge/Library/Caches/Homebrew/terraform--1.5.7... (19.6MB) iwhooge@iwhooge-mac ~ %
-
Note that Terraform is upgraded from version 1.5.7 to 1.7.5.
-
Run the following command to verify that the Terraform version is the latest one.
iwhooge@iwhooge-mac ~ % terraform -v Terraform v1.7.5 on darwin_arm64 iwhooge@iwhooge-mac ~ %
-
Note that the new version is
1.7.5
.
Task 2.3: Create Local RSA Keys for OCI Authentication
To allow authentication with OCI using an API key, we need to generate a new private and public key for this purpose only.
-
Run the following command to change the directory to your home directory.
iwhooge@iwhooge-mac ~ % cd ~/
-
Run the following command to verify that you are in your home directory.
iwhooge@iwhooge-mac ~ % pwd
-
Verify that your home directory is correct.
/Users/iwhooge
-
Run the following command to create a new directory that will contain the information to authenticate with OCI.
iwhooge@iwhooge-mac ~ % mkdir .oci
-
Run the following command to generate a private RSA key.
iwhooge@iwhooge-mac ~ % openssl genrsa -out ~/.oci/4-4-2023-rsa-key.pem 2048 Generating RSA private key, 2048 bit long modulus .........................................................................................................................................+++++ ......+++++ e is 65537 (0x10001)
-
Run the following command to restrict the permissions of the private key file so that only you can read and write it.
iwhooge@iwhooge-mac ~ % chmod 600 ~/.oci/4-4-2023-rsa-key.pem
-
Run the following command to generate a public RSA key from the private key.
iwhooge@iwhooge-mac ~ % openssl rsa -pubout -in ~/.oci/4-4-2023-rsa-key.pem -out ~/.oci/4-4-2023-rsa-key-public.pem
-
Verify that the key writing is completed.
writing RSA key
-
Run the following command to look at the content of the private RSA key.
iwhooge@iwhooge-mac ~ % cat ~/.oci/4-4-2023-rsa-key.pem
-
Verify the content of the private RSA key.
-----BEGIN RSA PRIVATE KEY----- MIIEpQIBAAKCAQEA52+LJ+gp3MAJGtXTeQ/dmqq6Xh1zufK0yurLt/0w/DuxqEsL RT7x+Znz6EOVLx34Ul27QnHk7bhXaDCuwInnaOTOiS97AnLuFM08tvFksglnJssA JsszfTzUMNf0w4wtuLsJ5oRaPbVUa01TIm6HdwKAloIKYSn6z8gcvfLLItkyvPRo XXX w3yip+Yxr1YN3LjpDbZk4WTagKWoVQzp5nrfZlyU7ToZcMpUn/fIUsI= -----END RSA PRIVATE KEY-----
-
Run the following command to look at the content of the public RSA key.
iwhooge@iwhooge-mac ~ % cat ~/.oci/4-4-2023-rsa-key-public.pem
-
Verify the content of the public RSA key.
-----BEGIN PUBLIC KEY----- MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA52+LJ+gp3MAJGtXTeQ/d XXX mtHVtjLM1ftjYlaRSG5Xl/xdKMC8LH0bxpy3XXzLmDrYCP3LrhrIG8Xmuzsji6Hw TQIDAQAB -----END PUBLIC KEY----- iwhooge@iwhooge-mac ~ %
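Optionally, you can already compute the fingerprint of this key pair locally, so that you can compare it with the fingerprint that the OCI Console will show for the API key in Task 2.5. A minimal sketch using openssl, assuming the key path created above.
iwhooge@iwhooge-mac ~ % openssl rsa -pubout -outform DER -in ~/.oci/4-4-2023-rsa-key.pem | openssl md5 -c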
Task 2.4: Generate Local SSH Keys for Bastion Host Authentication
We also need to create local SSH keys to authenticate with the bastion host. This is a different key from the API key that we use to authenticate with OCI.
-
Run the following command to change the directory to your SSH directory.
iwhooge@iwhooge-mac ~ % cd ~/.ssh/
-
Run the following command to verify that you have a public and private SSH key that can be used.
iwhooge@iwhooge-mac .ssh % ls -l -a
-
Note that we do not have any SSH key pair. In this tutorial, we will generate a new SSH key pair.
total 16 drwx------ 4 iwhooge staff 128 Feb 8 12:48 . drwxr-x---+ 30 iwhooge staff 960 Apr 4 11:03 .. -rw-------@ 1 iwhooge staff 2614 Feb 28 11:49 known_hosts -rw-------@ 1 iwhooge staff 1773 Feb 8 12:48 known_hosts.old
-
Run the following command to generate a new SSH key pair.
iwhooge@iwhooge-mac .ssh % ssh-keygen -t rsa Generating public/private rsa key pair. Enter file in which to save the key (/Users/iwhooge/.ssh/id_rsa):
-
Leave the passphrase empty and press ENTER.
Enter passphrase (empty for no passphrase):
-
Leave the passphrase empty and press ENTER.
Enter same passphrase again:
-
Note that the new SSH key pair is saved in the provided locations.
Your identification has been saved in /Users/iwhooge/.ssh/id_rsa Your public key has been saved in /Users/iwhooge/.ssh/id_rsa.pub The key fingerprint is: SHA256:2E7jD5Cvt0C3pArp+u5Q3BWDBDwfbtxp5T6eez75DPc iwhooge@iwhooge-mac The key's randomart image is: +---[RSA 3072]----+ | ..o..o | | o o o. | | = o.+ | | . .=.++. | | o...=.S | | . . . Xoo | |. o o.*o... | | o . . o+o++ . | |.== . ..o=ooo E | +----[SHA256]-----+ iwhooge@iwhooge-mac .ssh %
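The public part of this key pair (~/.ssh/id_rsa.pub) is the value that will later be referenced by the ssh_public_key_path parameter in the Terraform variables. You can display it with the following command.
iwhooge@iwhooge-mac .ssh % cat ~/.ssh/id_rsa.pub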
Task 2.5: Create an API Key in the OCI Console and Add the Public Key to your OCI Account
-
We have created the RSA keys in Task 2.3. We can now use that public RSA key to create an API key in the OCI Console for OCI authentication.
- Click profile.
- Select My Profile.
-
Scroll down.
- Select API Keys.
- Click Add API key.
- Select Paste in the public key.
- Paste the public key created in Task 2.3.
- Click Add.
- Note the path and the file where the generated API authentication configuration needs to be pasted.
- Note the API key fingerprint for the API key you just created.
- Note the API authentication configuration.
- Click Copy.
- Click Close.
-
Paste the API authentication configuration in a temporary text file.
[DEFAULT] user=ocid1.user.oc1..aaaaaaaavgrXXX23aq fingerprint=30:XXX:ba:ee tenancy=ocid1.tenancy.oc1..aaaaaaaabh2XXXvq region=eu-frankfurt-1 key_file=<path to your private keyfile> # TODO
-
Update the last line of the API authentication configuration and add the correct path of your private key file created in Task 2.3.
[DEFAULT] user=ocid1.user.oc1..aaaaaaaavgxxxXX23aq fingerprint=30:XXX:ba:ee tenancy=ocid1.tenancy.oc1..aaaaaaaabh2XXXvq region=eu-frankfurt-1 key_file=~/.oci/4-4-2023-rsa-key.pem
-
Create an OCI API authentication configuration file.
iwhooge@iwhooge-mac ~ % nano ~/.oci/config iwhooge@iwhooge-mac ~ %
-
Paste the API authentication configuration into the file. Press CTRL + X to exit the file. -
Enter Y (Yes) to save the file.
-
Confirm the file that you want to use to save the API authentication configuration.
-
When the file is saved successfully you will return to the terminal prompt.
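Optionally, if you also have the OCI CLI installed (it is not required for the rest of this tutorial), you can use it as a quick sanity check that the configuration file and API key work, for example by listing the region subscriptions of your tenancy.
iwhooge@iwhooge-mac ~ % oci iam region-subscription list --config-file ~/.oci/config --profile DEFAULT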
Task 2.6: Collect the Required Information for your OCI Environment
We need to collect some information for our Terraform files for OCI authentication using the API. Most of the information is already provided in the API authentication configuration file created in Task 2.5.
-
Save the following information for later use.
Tenancy OCID: ocid1.tenancy.oc1..aaaaaaaabh2XXXvq
User OCID: ocid1.user.oc1..aaaaaaaavgrXXX23aq
Fingerprint: 30:XXX:ba:ee
Region: eu-frankfurt-1
Private Key Path: ~/.oci/4-4-2023-rsa-key.pem
Compartment OCID: ocid1.compartment.oc1..aaaaaaaabgXXXnuq
-
The only required value that is not in the API authentication configuration file is the compartment OCID.
-
To get the compartment OCID, navigate to Identity, Compartments, Compartment Details.
-
You can see the compartment OCID.
-
Click Copy to copy the compartment OCID. Save this for later use.
-
This is the compartment in which you will deploy your Kubernetes clusters.
Task 3: Create Terraform Scripts and Files
We have completed the preparation on our local machine, including setting up Terraform, the RSA and SSH keys, configuring the OCI environment (API), and gathering all the essential information needed to authenticate Terraform with OCI. Now, let us create the Terraform scripts and files.
First, we need to verify that we are subscribed to the regions in which we are deploying our Kubernetes clusters. If we deploy to a non-subscribed region, we will get an authentication error and the deployment will fail.
For this tutorial, we are using the following three regions for deployment: Amsterdam, San Jose, and Dubai.
-
Click region selection menu.
- Scroll down.
- Click Manage Regions.
- Scroll down.
- Click arrow to see the next 10 items.
- Note that we are subscribed to Amsterdam.
- Click arrow to see the next 10 items.
- Note that we are subscribed to Dubai.
- Note that we are subscribed to San Jose.
- Click arrow to see the next 10 items.
-
Note that there are a few regions that we are not subscribed to. For example, if we wanted to deploy one of our Kubernetes clusters to Bogota we first need to subscribe to the Bogota region.
-
The following image illustrates what we are trying to achieve with Terraform.
- We are using the remote computer with Terraform.
- This remote computer will authenticate with OCI.
- After authentication, we will use Terraform to deploy the following three Kubernetes clusters using OKE.
- c1: Amsterdam
- c2: San Jose
- c3: Dubai
We use the aliases c1, c2, and c3 to name the components in OCI, so that the clusters are easier to recognize by name instead of by a uniquely generated name.
-
Run the following command to make sure you are in your home directory.
iwhooge@iwhooge-mac ~ % pwd /Users/iwhooge
-
Run the following command to create a new directory named terraform-multi-oke and a scripts directory inside the terraform-multi-oke directory.
iwhooge@iwhooge-mac ~ % mkdir terraform-multi-oke iwhooge@iwhooge-mac ~ % mkdir terraform-multi-oke/scripts
-
Run the following command to verify that the terraform-multi-oke directory is created.
iwhooge@iwhooge-mac ~ % ls -l total 0 drwx------@ 5 iwhooge staff 160 Jan 2 06:25 Applications drwx------+ 4 iwhooge staff 128 Mar 27 08:15 Desktop drwx------@ 10 iwhooge staff 320 Mar 29 08:39 Documents drwx------@ 90 iwhooge staff 2880 Apr 3 14:16 Downloads drwx------@ 93 iwhooge staff 2976 Mar 16 15:49 Library drwx------ 5 iwhooge staff 160 Feb 14 08:18 Movies drwx------+ 4 iwhooge staff 128 Feb 21 20:00 Music drwxr-xr-x@ 6 iwhooge staff 192 Feb 9 08:36 Oracle Content drwx------+ 7 iwhooge staff 224 Feb 28 12:03 Pictures drwxr-xr-x+ 4 iwhooge staff 128 Dec 30 16:31 Public drwxr-xr-x 2 iwhooge staff 64 Apr 4 12:39 terraform-multi-oke
-
Run the following command to change the path to the new terraform-multi-oke directory, and make sure that the directory is empty.
iwhooge@iwhooge-mac ~ % cd terraform-multi-oke iwhooge@iwhooge-mac terraform-multi-oke % ls -l total 0 iwhooge@iwhooge-mac terraform-multi-oke %
-
Create files inside the terraform-multi-oke and terraform-multi-oke/scripts directories. The files and folder structure should look like this.
iwhooge@iwhooge-mac terraform-multi-oke % tree
.
├── c1.tf
├── c2.tf
├── c3.tf
├── contexts.tf
├── locals.tf
├── outputs.tf
├── providers.tf
├── scripts
│   ├── cloud-init.sh
│   ├── generate_kubeconfig.template.sh
│   ├── kubeconfig_set_credentials.template.sh
│   ├── set_alias.template.sh
│   └── token_helper.template.sh
├── templates.tf
├── terraform.tfstate
├── terraform.tfstate.backup
├── terraform.tfvars
├── variables.tf
└── versions.tf
Note: You can also download the files from the GitHub repository: oci-oke-terraform.
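If you are not cloning the GitHub repository, the empty file skeleton can be created from the terminal as shown in the following sketch; the file contents are added in the next steps, and the terraform.tfstate and terraform.tfstate.backup files are generated automatically by Terraform, so you do not need to create them yourself.
iwhooge@iwhooge-mac terraform-multi-oke % touch c1.tf c2.tf c3.tf contexts.tf locals.tf outputs.tf providers.tf templates.tf terraform.tfvars variables.tf versions.tf
iwhooge@iwhooge-mac terraform-multi-oke % touch scripts/cloud-init.sh scripts/generate_kubeconfig.template.sh scripts/kubeconfig_set_credentials.template.sh scripts/set_alias.template.sh scripts/token_helper.template.sh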
-
Update
terraform.tfvars
file with the parameters collected in Task 2.6.
# =====================================================================
# START - UPDATE THIS SECTION WITH OWN PARAMETERS
# provider
api_fingerprint      = "<use your own API fingerprint>"
api_private_key_path = "<use your own OCI RSA private key path>"
home_region          = "<use your own home region>" # Use short form e.g. Ashburn from location column https://docs.oracle.com/en-us/iaas/Content/General/Concepts/regions.htm
tenancy_id           = "<use your own Tenancy OCID>"
user_id              = "<use your own User OCID>"
compartment_id       = "<use your own Compartment OCID>"
# ssh
ssh_private_key_path = "<use your own SSH private key path>"
ssh_public_key_path  = "<use your own SSH public key path>"
# END - UPDATE THIS SECTION WITH OWN PARAMETERS
# =====================================================================
-
If you want to deploy fewer or more Kubernetes clusters, or want to change the regions, you can do so by altering the regions in the terraform.tfvars, contexts.tf, and providers.tf files. Find c1, c2, and c3 and make the changes.
-
terraform.tfvars
(Add or remove clusters here; when you are adding clusters, make sure to use unique CIDR blocks).
clusters = {
  c1 = { region = "amsterdam", vcn = "10.1.0.0/16", pods = "10.201.0.0/16", services = "10.101.0.0/16", enabled = true }
  c2 = { region = "bogota", vcn = "10.2.0.0/16", pods = "10.202.0.0/16", services = "10.102.0.0/16", enabled = true }
  c3 = { region = "sanjose", vcn = "10.3.0.0/16", pods = "10.203.0.0/16", services = "10.103.0.0/16", enabled = true }
}
-
contexts.tf
(Add or remove clusters in the depends_on parameter).
resource "null_resource" "set_contexts" {
  depends_on = [module.c1, module.c2, module.c3]
  for_each   = local.all_cluster_ids
  connection {
    host                = local.operator_ip
    private_key         = file(var.ssh_private_key_path)
    timeout             = "40m"
    type                = "ssh"
    user                = "opc"
    bastion_host        = local.bastion_ip
    bastion_user        = "opc"
    bastion_private_key = file(var.ssh_private_key_path)
  }
-
providers.tf
(Add or remove clusters as a provider, make sure you alter the region and alias parameters).
provider "oci" {
  fingerprint         = var.api_fingerprint
  private_key_path    = var.api_private_key_path
  region              = lookup(local.regions, var.home_region)
  tenancy_ocid        = var.tenancy_id
  user_ocid           = var.user_id
  alias               = "home"
  ignore_defined_tags = ["Oracle-Tags.CreatedBy", "Oracle-Tags.CreatedOn"]
}

provider "oci" {
  fingerprint         = var.api_fingerprint
  private_key_path    = var.api_private_key_path
  region              = lookup(local.regions, lookup(lookup(var.clusters, "c1"), "region"))
  tenancy_ocid        = var.tenancy_id
  user_ocid           = var.user_id
  alias               = "c1"
  ignore_defined_tags = ["Oracle-Tags.CreatedBy", "Oracle-Tags.CreatedOn"]
}

provider "oci" {
  fingerprint         = var.api_fingerprint
  private_key_path    = var.api_private_key_path
  region              = lookup(local.regions, lookup(lookup(var.clusters, "c2"), "region"))
  tenancy_ocid        = var.tenancy_id
  user_ocid           = var.user_id
  alias               = "c2"
  ignore_defined_tags = ["Oracle-Tags.CreatedBy", "Oracle-Tags.CreatedOn"]
}

provider "oci" {
  fingerprint         = var.api_fingerprint
  private_key_path    = var.api_private_key_path
  region              = lookup(local.regions, lookup(lookup(var.clusters, "c3"), "region"))
  tenancy_ocid        = var.tenancy_id
  user_ocid           = var.user_id
  alias               = "c3"
  ignore_defined_tags = ["Oracle-Tags.CreatedBy", "Oracle-Tags.CreatedOn"]
}
Note: You can also download the files from the GitHub repository: oci-oke-terraform.
Task 4: Run Terraform to Deploy the OKE Clusters along with the Necessary Resources (VCNs, Subnets, DRGs, RPCs, and so on)
We have the Terraform scripts in place with the correct parameters. Now, let us execute the scripts and build our environment consisting of three Kubernetes clusters in three different regions.
-
Run the following command to change the directory to the
terraform-multi-oke
directory.
Last login: Fri Apr 5 09:01:47 on ttys001 iwhooge@iwhooge-mac ~ % cd terraform-multi-oke
-
Run the
terraform init
command to initialize Terraform and download the Terraform modules required to deploy the Terraform scripts. Make sure that Terraform has been initialized successfully.
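A representative terraform init run is sketched below; the exact provider and module versions that Terraform downloads depend on your versions.tf file, and the output is abbreviated.
iwhooge@iwhooge-mac terraform-multi-oke % terraform init
Initializing the backend...
Initializing modules...
Initializing provider plugins...
...
Terraform has been successfully initialized!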
-
Run the
terraform plan
command to perform a pre-check that your Terraform code is valid and to verify what will be deployed (this is not the actual deployment yet). -
Note that Terraform will add 229 new resources in OCI, and these objects are all related to the three Kubernetes clusters we are planning to deploy.
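A representative plan summary is sketched below; the output is abbreviated, but the summary line should report the same number of resources to add as noted above.
iwhooge@iwhooge-mac terraform-multi-oke % terraform plan
...
Plan: 229 to add, 0 to change, 0 to destroy.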
-
Run the
terraform apply
command to apply the configuration and deploy the three Kubernetes clusters. -
Enter yes to approve the deployment. It will take around 30 minutes for the Terraform script to finish.
- Note that the apply is completed and 229 new resources are added.
- Copy the SSH command output to access the bastion and operator hosts for Kubernetes cluster management tasks.
-
The following image illustrates the current deployment with Terraform.
-
Run the SSH command to log in to the Kubernetes operator host.
iwhooge@iwhooge-mac terraform-multi-oke % ssh -o ProxyCommand='ssh -W %h:%p -i ~/.ssh/id_rsa opc@143.47.183.243' -i ~/.ssh/id_rsa opc@10.1.0.12 The authenticity of host '143.47.183.243 (143.47.183.243)' can't be established. ED25519 key fingerprint is SHA256:hMVDzms+n0nEmsh/rTe0Y/MLSSSk6OKMSipoVlQyNfU. This key is not known by any other names.
-
Enter yes to continue for the bastion host.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes Warning: Permanently added '143.47.183.243' (ED25519) to the list of known hosts. The authenticity of host '10.1.0.12 (<no hostip for proxy command>)' can't be established. ED25519 key fingerprint is SHA256:AIUmsHHGONNxuJsnCDDSyPCrJyoJPKYgdODX3qGe0Tw. This key is not known by any other names.
-
Enter yes to continue again for the operator host.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes Warning: Permanently added '10.1.0.12' (ED25519) to the list of known hosts. Activate the web console with: systemctl enable --now cockpit.socket Last login: Fri Apr 5 07:31:38 2024 from 10.1.0.2
-
Note that you are now logged in to the operator.
-
Run the following command to verify the deployed and running Kubernetes clusters from the operator host.
[opc@o-tmcntm ~]$ kubectx c1 c2 c3 [opc@o-tmcntm ~]$
The following image illustrates from where you are setting up the SSH connection to the bastion host and from the bastion host to the operator host.
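If you connect to the operator host frequently, you can optionally store this two-hop connection in your SSH configuration instead of typing the full ProxyCommand each time. The following is a minimal sketch assuming the bastion public IP and operator private IP from the Terraform output shown earlier; the host aliases oke-bastion and oke-operator are arbitrary names used only for illustration.
iwhooge@iwhooge-mac ~ % cat >> ~/.ssh/config <<'EOF'
Host oke-bastion
  HostName 143.47.183.243
  User opc
  IdentityFile ~/.ssh/id_rsa
Host oke-operator
  HostName 10.1.0.12
  User opc
  IdentityFile ~/.ssh/id_rsa
  ProxyJump oke-bastion
EOF
iwhooge@iwhooge-mac ~ % ssh oke-operator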
Now we have deployed three Kubernetes clusters in three different regions. Let us take a look at the deployed resources from a high level in the OCI Console.
-
OCI Console Verification (Amsterdam).
- Select Amsterdam as the Region.
- Navigate to Networking and VCN.
- Review that the c1 VCN is created here.
- Navigate to Developer Services and Kubernetes Clusters (OKE).
- Review that the c1 Kubernetes cluster is created here.
- Navigate to Compute and Instances.
- Review that the bastion host and the two worker nodes belonging to the c1 Kubernetes cluster are created here.
- Navigate to Networking, Customer Connectivity and Dynamic Routing Gateway.
- Review that the DRG is created here.
- Navigate to Identity and Policies.
- Review that three identity policies are created here.
-
OCI Console Verification (San Jose).
- Select San Jose as the Region.
- Navigate to Networking and VCN.
- Review that the c2 VCN is created here.
- Navigate to Developer Services and Kubernetes Clusters (OKE).
- Review that the c2 Kubernetes cluster is created here.
- Navigate to Compute and Instances.
- Review that the two worker nodes belonging to the c2 Kubernetes cluster are created here.
- Navigate to Networking, Customer Connectivity and Dynamic Routing Gateway.
- Review that the DRG is created here.
- Navigate to Identity and Policies.
- Review that three identity policies are created here.
-
OCI Console Verification (Dubai)
- Select Dubai as the region.
- Navigate to Networking and VCN.
- Review that the c3 VCN is created here.
- Navigate to Developer Services and Kubernetes Clusters (OKE).
- Review that the c3 Kubernetes cluster is created here.
- Navigate to Compute and Instances.
- Review that the two worker nodes belonging to the c3 Kubernetes cluster are created here.
- Navigate to Networking, Customer Connectivity and Dynamic Routing Gateway.
- Review that the DRG is created here.
- Navigate to Identity and Policies.
- Review that three identity policies are created here.
Task 5: Establish RPC Connections
Establish the connections between the various RPC attachments. Let us first review these in the different regions.
-
Remote Peering connection attachments in Amsterdam.
Make sure you are connected to the Amsterdam region.
- Navigate to Networking, Customer Connectivity, Dynamic Routing Gateways and c1.
- Click Remote peering connection attachments.
- Note that there are two remote peering connection attachments configured.
- Note that both remote peering connection attachments are new and not peered.
-
Remote Peering connection attachments in San Jose.
Make sure you are connected to the San Jose region.
- Navigate to Networking, Customer Connectivity, Dynamic Routing Gateways and c2.
- Click Remote peering connection attachments.
- Note that there are two remote peering connection attachments configured.
- Note that both remote peering connection attachments are new and not peered.
-
Remote Peering connection attachments in Dubai.
Make sure you are connected to the Dubai region.
- Navigate to Networking, Customer Connectivity, Dynamic Routing Gateways and c3.
- Click Remote peering connection attachments.
- Note that there are two remote peering connection attachments configured.
- Note that both remote peering connection attachments are new and not peered.
Collect All the RPC OCIDs
-
To configure the RPC peering connections between all regions, we need to collect the OCIDs of these RPC peering connection attachments.
- Make sure you are connected to the Amsterdam region.
- Navigate to Networking, Customer Connectivity, Dynamic Routing Gateways and c1.
- Click Remote peering connection attachments.
- Click the remote peering connection (
rpc-to-c2
).
-
Click Show.
-
Click Copy.
Repeat the process for all remote peering connections on all DRGs in all regions and save the OCIDs.
Now, we have collected the following remote peering connection OCIDs:
-
c1 DRG RPC
Local RPC | Local RPC OCID | Remote RPC
C1: rpc-to-c2 | ocid1.remotepeeringconnection.oc1.eu-amsterdam-1.aaaaaxxxxxxuxfq | C2: rpc-to-c1
C1: rpc-to-c3 | ocid1.remotepeeringconnection.oc1.eu-amsterdam-1.aaaaaaaxxxxxXs4ya | C3: rpc-to-c1
-
c2 DRG RPC
Local RPC | Local RPC OCID | Remote RPC
C2: rpc-to-c1 | ocid1.remotepeeringconnection.oc1.us-sanjose-1.aaaaaaaxxxxXXXvmya | C1: rpc-to-c2
C2: rpc-to-c3 | ocid1.remotepeeringconnection.oc1.us-sanjose-1.aaaaaaaaxxxxXXen2a | C3: rpc-to-c2
-
c3 DRG RPC
Local RPC | Local RPC OCID | Remote RPC
C3: rpc-to-c1 | ocid1.remotepeeringconnection.oc1.me-dubai-1.aaaaaaaapxxxXXXcosq | C1: rpc-to-c3
C3: rpc-to-c2 | ocid1.remotepeeringconnection.oc1.me-dubai-1.aaaaaaaaxxxpXXXs5tq | C2: rpc-to-c3
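Collecting these OCIDs from the OCI Console works fine; if you prefer the command line and have the OCI CLI installed, an equivalent lookup per region could look like the following sketch (the compartment OCID is the one collected in Task 2.6, and the command is repeated with --region us-sanjose-1 and --region me-dubai-1 for the other clusters).
iwhooge@iwhooge-mac ~ % oci network remote-peering-connection list --compartment-id ocid1.compartment.oc1..aaaaaaaabgXXXnuq --region eu-amsterdam-1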
Create the RPC Peerings
Configure the peering from C1 to C2 and C3. This automatically configures the C2 and C3 sides of those connections. Then configure the peering from C2 to C3, which automatically configures the C3 side.
Configure the C1 Peerings (Amsterdam).
-
The following image shows what RPCs we are configuring.
- Make sure you are connected to the Amsterdam region.
- Navigate to Networking, Customer Connectivity, Dynamic Routing Gateways and c1.
- Click Remote peering connection attachments.
- Click the first remote peering connection attachment (
rpc-to-c2
). Here you will configure the connection towards San Jose.
-
Click Establish Connection.
- Select San Jose region.
- Enter RPC OCID of the San Jose side that is created for c1 (Amsterdam).
- Click Establish Connection.
- The peering status will change to Pending, and it will take a minute to complete.
- Click the second remote peering connection attachment (
rpc-to-c3
). Here you will configure the connection towards Dubai.
-
Click Establish Connection.
- Select Dubai region.
- Enter RPC OCID of the Dubai side that is created for c1 (Amsterdam).
- Click Establish Connection.
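For reference, the same Establish Connection action can also be performed with the OCI CLI instead of the OCI Console. The following sketch peers c1's rpc-to-c2 with c2's rpc-to-c1, using the OCIDs from the tables above; verify the parameters against your CLI version before relying on it.
iwhooge@iwhooge-mac ~ % oci network remote-peering-connection connect --remote-peering-connection-id ocid1.remotepeeringconnection.oc1.eu-amsterdam-1.aaaaaxxxxxxuxfq --peer-id ocid1.remotepeeringconnection.oc1.us-sanjose-1.aaaaaaaxxxxXXXvmya --peer-region-name us-sanjose-1 --region eu-amsterdam-1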
Configure the C2 Peering (San Jose).
-
The following image shows what RPCs we are configuring.
- The peering status will change to Pending, and it will take a minute to complete.
- Click the regions menu and switch from Amsterdam to San Jose region.
- Select the San Jose region.
- Navigate to Networking, Customer Connectivity, Dynamic Routing Gateways and c2.
- Click Remote peering connection attachments.
- Note that the connection between Amsterdam and San Jose is now peered. This is done from the Amsterdam side.
- Note that the peering status from San Jose (c2) to Dubai (c3) is still new.
- Click the second remote peering connection attachment (
rpc-to-c3
). Here you will configure the connection towards Dubai.
-
Click Establish Connection.
- Select the Dubai region.
- Enter RPC OCID of the Dubai side that is created for c2 (San Jose).
- Click Establish Connection.
-
The peering status will change to Pending, and it will take a minute to complete.
The following image illustrates the full mesh RPC peering which we have done.
-
Verify that the connections are peered.
- Make sure you are connected to the Amsterdam region.
- Navigate to Networking, Customer Connectivity, Dynamic Routing Gateways and c1.
- Click Remote peering connection attachments.
- Note that both remote peering connection attachments have the peered status.
- Make sure you are connected to the San Jose region.
- Navigate to Networking, Customer Connectivity, Dynamic Routing Gateways and c2.
- Click Remote peering connection attachments.
- Note that both remote peering connection attachments have the peered status.
- Make sure you are connected to the Dubai region.
- Navigate to Networking, Customer Connectivity, Dynamic Routing Gateways and c3.
- Click Remote peering connection attachments.
- Note that both remote peering connection attachments have the peered status.
Task 6: Use the Network Visualizer to Verify the RPC Connections
Perform an additional check with Network Visualizer to ensure that the RPC connections have been configured correctly.
- Click the hamburger menu in the upper left corner.
- Click Networking.
- Click Network Visualizer.
- Make sure you are connected to the Amsterdam region.
- Note that the Amsterdam region is c1.
- Note the connections from Amsterdam to San Jose and Dubai.
- Make sure you are connected to the San Jose region.
- Note that the San Jose region is c2.
- Note the connections from San Jose to Amsterdam and Dubai.
- Make sure you are connected to the Dubai region.
- Note that the Dubai region is c3.
- Note the connections from Dubai to Amsterdam and San Jose.
Task 7: Use the Bastion and Operator Hosts to Verify Connectivity
We have created the Kubernetes clusters in all three regions and connected the regions using RPC. We can now use the operator host to verify that it can manage all the Kubernetes clusters.
-
Run the following command (which was provided in the output after the completion of the
terraform apply
command).
Last login: Fri Apr 5 09:10:01 on ttys000 iwhooge@iwhooge-mac ~ % ssh -o ProxyCommand='ssh -W %h:%p -i ~/.ssh/id_rsa opc@143.47.183.243' -i ~/.ssh/id_rsa opc@10.1.0.12 Activate the web console with: systemctl enable --now cockpit.socket Last login: Fri Apr 5 07:34:13 2024 from 10.1.0.2 [opc@o-tmcntm ~]$
-
Run the following command, which iterates with a for loop through each Kubernetes cluster (c1, c2, and c3) and retrieves the status of the worker nodes.
[opc@o-tmcntm ~]$ for c in c1 c2 c3; do > kubectx $c > kubectl get nodes > done Switched to context "c1". NAME STATUS ROLES AGE VERSION 10.1.113.144 Ready node 76m v1.28.2 10.1.125.54 Ready node 76m v1.28.2 Switched to context "c2". NAME STATUS ROLES AGE VERSION 10.2.65.174 Ready node 78m v1.28.2 10.2.98.54 Ready node 78m v1.28.2 Switched to context "c3". NAME STATUS ROLES AGE VERSION 10.3.118.212 Ready node 73m v1.28.2 10.3.127.119 Ready node 73m v1.28.2 [opc@o-tmcntm ~]$
Run the following command in the terminal after you connect to the operator host.
for c in c1 c2 c3; do kubectx $c kubectl get nodes done
-
Note the output of all nodes for all the Kubernetes clusters that were deployed using the Terraform script.
Run the kubectl get all -n kube-system
command with the for loop.
[opc@o-tmcntm ~]$ for c in c1 c2 c3; do
> kubectx $c
> kubectl get all -n kube-system
> done
Switched to context "c1".
NAME READY STATUS RESTARTS AGE
pod/coredns-844b4886f-8b4k6 1/1 Running 0 118m
pod/coredns-844b4886f-g8gbm 1/1 Running 0 122m
pod/csi-oci-node-5xzdg 1/1 Running 0 119m
pod/csi-oci-node-nsdg4 1/1 Running 1 (118m ago) 119m
pod/kube-dns-autoscaler-74f78468bf-l9644 1/1 Running 0 122m
pod/kube-flannel-ds-5hsp7 1/1 Running 0 119m
pod/kube-flannel-ds-wk7xl 1/1 Running 0 119m
pod/kube-proxy-gpvv2 1/1 Running 0 119m
pod/kube-proxy-vgtf7 1/1 Running 0 119m
pod/proxymux-client-nt59j 1/1 Running 0 119m
pod/proxymux-client-slk9j 1/1 Running 0 119m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kube-dns ClusterIP 10.101.5.5 <none> 53/UDP,53/TCP,9153/TCP 122m
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/csi-oci-node 2 2 2 2 2 <none> 122m
daemonset.apps/kube-flannel-ds 2 2 2 2 2 <none> 122m
daemonset.apps/kube-proxy 2 2 2 2 2 beta.kubernetes.io/os=linux 122m
daemonset.apps/node-termination-handler 0 0 0 0 0 oci.oraclecloud.com/oke-is-preemptible=true 122m
daemonset.apps/nvidia-gpu-device-plugin 0 0 0 0 0 <none> 122m
daemonset.apps/proxymux-client 2 2 2 2 2 node.info.ds_proxymux_client=true 122m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/coredns 2/2 2 2 122m
deployment.apps/kube-dns-autoscaler 1/1 1 1 122m
NAME DESIRED CURRENT READY AGE
replicaset.apps/coredns-844b4886f 2 2 2 122m
replicaset.apps/kube-dns-autoscaler-74f78468bf 1 1 1 122m
Switched to context "c2".
NAME READY STATUS RESTARTS AGE
pod/coredns-84bd9cd884-4fqvr 1/1 Running 0 120m
pod/coredns-84bd9cd884-lmgz2 1/1 Running 0 124m
pod/csi-oci-node-4zl9l 1/1 Running 0 122m
pod/csi-oci-node-xjzfd 1/1 Running 1 (120m ago) 122m
pod/kube-dns-autoscaler-59575f8674-m6j2z 1/1 Running 0 124m
pod/kube-flannel-ds-llhhq 1/1 Running 0 122m
pod/kube-flannel-ds-sm6fg 1/1 Running 0 122m
pod/kube-proxy-7ppw8 1/1 Running 0 122m
pod/kube-proxy-vqfgb 1/1 Running 0 122m
pod/proxymux-client-cnkph 1/1 Running 0 122m
pod/proxymux-client-k5k6n 1/1 Running 0 122m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kube-dns ClusterIP 10.102.5.5 <none> 53/UDP,53/TCP,9153/TCP 124m
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/csi-oci-node 2 2 2 2 2 <none> 124m
daemonset.apps/kube-flannel-ds 2 2 2 2 2 <none> 124m
daemonset.apps/kube-proxy 2 2 2 2 2 beta.kubernetes.io/os=linux 124m
daemonset.apps/node-termination-handler 0 0 0 0 0 oci.oraclecloud.com/oke-is-preemptible=true 124m
daemonset.apps/nvidia-gpu-device-plugin 0 0 0 0 0 <none> 124m
daemonset.apps/proxymux-client 2 2 2 2 2 node.info.ds_proxymux_client=true 124m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/coredns 2/2 2 2 124m
deployment.apps/kube-dns-autoscaler 1/1 1 1 124m
NAME DESIRED CURRENT READY AGE
replicaset.apps/coredns-84bd9cd884 2 2 2 124m
replicaset.apps/kube-dns-autoscaler-59575f8674 1 1 1 124m
Switched to context "c3".
NAME READY STATUS RESTARTS AGE
pod/coredns-56c7ffc89c-jt85k 1/1 Running 0 115m
pod/coredns-56c7ffc89c-lsqcg 1/1 Running 0 121m
pod/csi-oci-node-gfswn 1/1 Running 0 116m
pod/csi-oci-node-xpwbp 1/1 Running 0 116m
pod/kube-dns-autoscaler-6b69bf765c-fxjvc 1/1 Running 0 121m
pod/kube-flannel-ds-2sqbk 1/1 Running 0 116m
pod/kube-flannel-ds-l7sdz 1/1 Running 0 116m
pod/kube-proxy-4qcmb 1/1 Running 0 116m
pod/kube-proxy-zcrk4 1/1 Running 0 116m
pod/proxymux-client-4lgg7 1/1 Running 0 116m
pod/proxymux-client-zbcrg 1/1 Running 0 116m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kube-dns ClusterIP 10.103.5.5 <none> 53/UDP,53/TCP,9153/TCP 121m
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/csi-oci-node 2 2 2 2 2 <none> 122m
daemonset.apps/kube-flannel-ds 2 2 2 2 2 <none> 121m
daemonset.apps/kube-proxy 2 2 2 2 2 beta.kubernetes.io/os=linux 121m
daemonset.apps/node-termination-handler 0 0 0 0 0 oci.oraclecloud.com/oke-is-preemptible=true 121m
daemonset.apps/nvidia-gpu-device-plugin 0 0 0 0 0 <none> 122m
daemonset.apps/proxymux-client 2 2 2 2 2 node.info.ds_proxymux_client=true 122m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/coredns 2/2 2 2 121m
deployment.apps/kube-dns-autoscaler 1/1 1 1 121m
NAME DESIRED CURRENT READY AGE
replicaset.apps/coredns-56c7ffc89c 2 2 2 121m
replicaset.apps/kube-dns-autoscaler-6b69bf765c 1 1 1 121m
[opc@o-tmcntm ~]$
Task 8: Delete the OKE Clusters using Terraform
We have used Terraform for our deployment, so we can also use Terraform to delete the complete deployment.
-
Run the
terraform destroy
command to delete all resources that are related to the three Kubernetes clusters. -
Enter yes to approve the delete process. It will take a few minutes to finish.
-
Note that the destroy is completed and all 229 resources are destroyed.
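A representative run is sketched below; the output is abbreviated, but the summary line should report the same number of destroyed resources.
iwhooge@iwhooge-mac terraform-multi-oke % terraform destroy
...
Destroy complete! Resources: 229 destroyed.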
Acknowledgments
-
Author - Iwan Hoogendoorn (OCI Network Specialist)
-
Contributor - Ali Mukadam
More Learning Resources
Explore other labs on docs.oracle.com/learn or access more free learning content on the Oracle Learning YouTube channel. Additionally, visit education.oracle.com/learning-explorer to become an Oracle Learning Explorer.
For product documentation, visit Oracle Help Center.