Note:
- This tutorial requires access to Oracle Cloud. To sign up for a free account, see Get started with Oracle Cloud Infrastructure Free Tier.
- It uses example values for Oracle Cloud Infrastructure credentials, tenancy, and compartments. When completing your lab, substitute these values with ones specific to your cloud environment.
Deploy a Kubernetes Cluster with Terraform using Oracle Cloud Infrastructure Kubernetes Engine
Introduction
Deploying Kubernetes with Terraform on Oracle Cloud Infrastructure Kubernetes Engine (OCI Kubernetes Engine or OKE) offers a streamlined and scalable approach to managing containerized applications in the cloud. OKE, an Oracle Cloud Infrastructure managed Kubernetes service, simplifies the deployment, management, and scaling of Kubernetes clusters.
By using Terraform, an Infrastructure as Code (IaC) tool, you can automate the provisioning and configuration of OKE clusters, ensuring consistency and efficiency. This combination allows for repeatable deployments, infrastructure versioning, and easy updates, making it ideal for cloud-native and DevOps-focused teams looking to leverage the Oracle Cloud Infrastructure ecosystem.
In this tutorial, we are going to deploy a specific Kubernetes architecture on OKE using Terraform.
We are going to deploy the following components:
- Internet Gateway
- NAT Gateway
- Service Gateway
- 7 x Subnets (Private and Public)
- 2 x Node Pools
- 4 x Worker Nodes (Instances)
- Bastion (Instance)
- Operator (Instance)
Objectives
- Deploy a Kubernetes cluster with Terraform using OKE.
Prerequisites
- To deploy objects on OCI using Terraform you first need to prepare your environment for authenticating and running Terraform scripts. For more information, see Task 2: Prepare your Environment for Authentication and Run Terraform Scripts.
Task 1: Clone the Repository with the Terraform Scripts
- Clone the `terraform-oci-oke` repository from here: terraform-oci-oke.
  - At the time of writing this tutorial the latest version is `5.1.8`.
  - You can either download the zip file with the full repository or clone the repository using the `git clone` command.
  - Click the version to access the 5.1.8 branch.
  - Note that you are in the 5.1.8 branch.
  - Click the docs folder.
- Click the src folder.
- Click the SUMMARY.md file.
- Click Getting started.
  - Note that you are still in the 5.1.8 branch; other branches may contain different documentation steps depending on the code version.
  - Note that you are on the Getting started page, and this page will do exactly what the title says.

  In the following output you can see how we use the Getting started page to create this tutorial.
- Run the `git clone` command to clone the repository.

  ```shell
  iwhooge@iwhooge-mac ~ % git clone https://github.com/oracle-terraform-modules/terraform-oci-oke.git tfoke
  ```
- Run the following command to change into the repository folder.
- Run the following command to list the contents of the folder.
- You can see all the files of the repository.
- Create a `providers.tf` file inside this directory with the following content. The second provider block, aliased `home`, targets the tenancy's home region, which is where OCI IAM resources must be created.

  providers.tf:

  ```hcl
  provider "oci" {
    fingerprint      = var.api_fingerprint
    private_key_path = var.api_private_key_path
    region           = var.region
    tenancy_ocid     = var.tenancy_id
    user_ocid        = var.user_id
  }

  provider "oci" {
    fingerprint      = var.api_fingerprint
    private_key_path = var.api_private_key_path
    region           = var.home_region
    tenancy_ocid     = var.tenancy_id
    user_ocid        = var.user_id
    alias            = "home"
  }
  ```
- Run the following command to list the folder content.
- Note that the `providers.tf` file is created.
- Run the following command to initialize Terraform and upgrade the required modules.

  ```shell
  iwhooge@iwhooge-mac tfoke % terraform init --upgrade
  ```

- Note that the message Terraform has been successfully initialized! is displayed.
- Make sure you have the following information available to create the `terraform.tfvars` file. This information can be retrieved using the steps mentioned here: Task 2: Prepare your Environment for Authentication and Run Terraform Scripts.

  - Tenancy OCID: ocid1.tenancy.oc1..aaaaaaaaXXX
  - User OCID: ocid1.user.oc1..aaaaaaaaXXX
  - Fingerprint: 30:XXX
  - Region: me-abudhabi-1
  - Private Key Path: ~/.oci/4-4-2023-rsa-key.pem
  - Compartment OCID: ocid1.compartment.oc1..aaaaaaaaXXX
- We need to create a `terraform.tfvars` file inside this directory with the following content.

  terraform.tfvars:

  ```hcl
  # Identity and access parameters
  api_fingerprint      = "30:XXX"
  api_private_key_path = "~/.oci/4-4-2023-rsa-key.pem"
  home_region          = "me-abudhabi-1"
  region               = "me-abudhabi-1"
  tenancy_id           = "ocid1.tenancy.oc1..aaaaaaaaXXX"
  user_id              = "ocid1.user.oc1..aaaaaaaaXXX"

  # general oci parameters
  compartment_id = "ocid1.compartment.oc1..aaaaaaaaXXX"
  timezone       = "Australia/Sydney"

  # ssh keys
  ssh_private_key_path = "~/.ssh/id_rsa"
  ssh_public_key_path  = "~/.ssh/id_rsa.pub"

  # networking
  create_vcn               = true
  assign_dns               = true
  lockdown_default_seclist = true
  vcn_cidrs                = ["10.0.0.0/16"]
  vcn_dns_label            = "oke"
  vcn_name                 = "oke"

  # Subnets
  subnets = {
    bastion  = { newbits = 13, netnum = 0, dns_label = "bastion", create = "always" }
    operator = { newbits = 13, netnum = 1, dns_label = "operator", create = "always" }
    cp       = { newbits = 13, netnum = 2, dns_label = "cp", create = "always" }
    int_lb   = { newbits = 11, netnum = 16, dns_label = "ilb", create = "always" }
    pub_lb   = { newbits = 11, netnum = 17, dns_label = "plb", create = "always" }
    workers  = { newbits = 2, netnum = 1, dns_label = "workers", create = "always" }
    pods     = { newbits = 2, netnum = 2, dns_label = "pods", create = "always" }
  }

  # bastion
  create_bastion        = true
  bastion_allowed_cidrs = ["0.0.0.0/0"]
  bastion_user          = "opc"

  # operator
  create_operator      = true
  operator_install_k9s = true

  # iam
  create_iam_operator_policy = "always"
  create_iam_resources       = true
  create_iam_tag_namespace   = false // true/false
  create_iam_defined_tags    = false // true/false
  tag_namespace              = "oke"
  use_defined_tags           = false // true/false

  # cluster
  create_cluster     = true
  cluster_name       = "oke"
  cni_type           = "flannel"
  kubernetes_version = "v1.29.1"
  pods_cidr          = "10.244.0.0/16"
  services_cidr      = "10.96.0.0/16"

  # Worker pool defaults
  worker_pool_size = 0
  worker_pool_mode = "node-pool"

  # Worker defaults
  await_node_readiness = "none"

  worker_pools = {
    np1 = {
      shape              = "VM.Standard.E4.Flex",
      ocpus              = 2,
      memory             = 32,
      size               = 1,
      boot_volume_size   = 50,
      kubernetes_version = "v1.29.1"
    }
    np2 = {
      shape              = "VM.Standard.E4.Flex",
      ocpus              = 2,
      memory             = 32,
      size               = 3,
      boot_volume_size   = 150,
      kubernetes_version = "v1.30.1"
    }
  }

  # Security
  allow_node_port_access       = false
  allow_worker_internet_access = true
  allow_worker_ssh_access      = true
  control_plane_allowed_cidrs  = ["0.0.0.0/0"]
  control_plane_is_public      = false
  load_balancers               = "both"
  preferred_load_balancer      = "public"
  ```
- Run the following command to list the folder content.
- Note that the `terraform.tfvars` file is created.
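Each entry in the subnets map is a (newbits, netnum) pair that the module feeds to Terraform's cidrsubnet() function against the VCN CIDR 10.0.0.0/16. As a quick sanity check, the following illustrative Python sketch (not part of the tutorial's tooling) reproduces the same math with the standard ipaddress module; the resulting CIDRs match the subnet CIDRs shown later in the Terraform outputs.

```python
import ipaddress

def cidrsubnet(prefix: str, newbits: int, netnum: int) -> str:
    """Approximate Terraform's cidrsubnet(): take subnet number `netnum`
    out of `prefix` after extending its mask by `newbits` bits."""
    network = ipaddress.ip_network(prefix)
    return str(list(network.subnets(prefixlen_diff=newbits))[netnum])

# The (newbits, netnum) pairs from the subnets map in terraform.tfvars,
# carved out of vcn_cidrs = ["10.0.0.0/16"]:
subnet_layout = {
    "bastion":  (13, 0),   # -> 10.0.0.0/29
    "operator": (13, 1),   # -> 10.0.0.8/29
    "cp":       (13, 2),   # -> 10.0.0.16/29
    "int_lb":   (11, 16),  # -> 10.0.2.0/27
    "pub_lb":   (11, 17),  # -> 10.0.2.32/27
    "workers":  (2, 1),    # -> 10.0.64.0/18
    "pods":     (2, 2),    # -> 10.0.128.0/18
}

for name, (newbits, netnum) in subnet_layout.items():
    print(f"{name:8s} -> {cidrsubnet('10.0.0.0/16', newbits, netnum)}")
```

This makes it easier to predict, before running terraform plan, which address ranges the seven subnets will occupy.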
Task 2: Run Terraform Apply and Create an OKE Cluster with the Necessary Resources (VCN, Subnets, and so on)
- Run the following command to plan the Kubernetes cluster deployment on OKE using Terraform.

  ```shell
  iwhooge@iwhooge-mac tfoke % terraform plan
  ```

- Note that this Terraform code will deploy 77 objects.
- Run the `terraform apply` command.
- Run the following command to apply the Kubernetes cluster deployment on OKE using Terraform.

  ```shell
  iwhooge@iwhooge-mac tfoke % terraform apply
  ```

- Note that this Terraform code will deploy 77 objects.
- Enter yes to continue the deployment.
- When the deployment is finished, you will see a message that the apply is completed.
- Note the output that has been provided with useful information you might need for your reference.

  ```
  Outputs:

  apiserver_private_host = "10.0.0.22"
  availability_domains = tomap({
    "1" = "cAtJ:ME-ABUDHABI-1-AD-1"
  })
  bastion_id = "ocid1.instance.oc1.me-abudhabi-1.anqxkljrpkxxxxxx5chephxjadkxqa2ekxksb5gokj4q"
  bastion_nsg_id = "ocid1.networksecuritygroup.oc1.me-abudhabi-1.aaaaaxxxxxxxxxxxy5qm22odw7by37h77ki6cosoqd7pzwq"
  bastion_public_ip = "129.151.149.237"
  bastion_subnet_cidr = "10.0.0.0/29"
  bastion_subnet_id = "ocid1.subnet.oc1.me-abudhabi-1.aaaaaaaxxxxxxxxxxxxxxxxmyzp6f2czibfr6534evfkt7m2a"
  cluster_endpoints = tomap({
    "kubernetes" = ""
    "private_endpoint" = "10.0.0.22:6443"
    "public_endpoint" = ""
    "vcn_hostname_endpoint" = "cbyedhyevbq.cp.oke.oraclevcn.com:6443"
  })
  cluster_id = "ocid1.cluster.oc1.me-abudhabi-1.aaaaaaaxxxxxxxxxxxxx57gz5q26hmpzkbcbyedhyevbq"
  control_plane_nsg_id = "ocid1.networksecuritygroup.oc1.me-abudhabi-1.aaaaaaaavxvgoewcgxxxxxxxxxxxxx6psycdz6zz5gaf6re4kcxa"
  control_plane_subnet_cidr = "10.0.0.16/29"
  control_plane_subnet_id = "ocid1.subnet.oc1.me-abudhabi-1.aaaaaaaajvvpdmuixxxxxxxxxxxjkpx77hqu3v2s4qilnp56a"
  dynamic_group_ids = tolist([
    "ocid1.dynamicgroup.oc1..aaaaaaaafsikvwgexxxxxxxxx5tx2u4c2s2eic4sslwyabva",
  ])
  ig_route_table_id = "ocid1.routetable.oc1.me-abudhabi-1.aaaaaaaalxxxxxxxxxxxxxxcyzrl2af4ihrkhcjttu2w7aq"
  int_lb_nsg_id = "ocid1.networksecuritygroup.oc1.me-abudhabi-1.aaaaaaaanwppaafc7fyiwcoyhjxxxxxxxxxxxxxxxxx6a6752l7547gxg7ea"
  int_lb_subnet_cidr = "10.0.2.0/27"
  int_lb_subnet_id = "ocid1.subnet.oc1.me-abudhabi-1.aaaaaaaawo5xmcwwbzk3gxxxxxxxxxxxxxxxxxxjjfllbvsdauaq"
  lpg_all_attributes = {}
  nat_route_table_id = "ocid1.routetable.oc1.me-abudhabi-1.aaaaaaaapqn3uqtszdcxxxxxxxxxxxxxxxxxliwveow232xgffigq"
  operator_id = "ocid1.instance.oc1.me-abudhabi-1.anqxkljrpkbrwsac5exxxxxxxxxxxxxxxxxxxxxxxrfdxsjdfzmq56jja"
  operator_nsg_id = "ocid1.networksecuritygroup.oc1.me-abudhabi-1.aaaaaaaao5kh3z3eaf4zhbtxixxxxxxxxxxxxxxxxxxxxxevyds3cbrjqzthv5ja"
  operator_private_ip = "10.0.0.14"
  operator_subnet_cidr = "10.0.0.8/29"
  operator_subnet_id = "ocid1.subnet.oc1.me-abudhabi-1.aaaaaaaa6vnirazbox2edmtnxrzhxxxxxxxxxxxxxxxxxxxishk7556iw6zyq"
  pod_subnet_cidr = "10.0.128.0/18"
  pod_subnet_id = "ocid1.subnet.oc1.me-abudhabi-1.aaaaaaaasfnwg3lnyphk5h275r46ksi3xxxxxxxxxxxxxxxxxxxxxxxqz25iwkcinxa"
  policy_statements = tolist([
    "Allow dynamic-group oke-operator-lccwyk to MANAGE clusters in compartment id ocid1.compartment.oc1..aaaaaaaa323ijv54zwkwbz2hhr2nnqoywlpxxxxxxxxxxxxxxxxxxxxxxxtecokaq4h4a",
  ])
  pub_lb_nsg_id = "ocid1.networksecuritygroup.oc1.me-abudhabi-1.aaaaaaaae5stez5u26g6x75dy5nf72mfcixxxxxxxxxxxxxxxxxxxxxxxx2nwkwv5nkaa"
  pub_lb_subnet_cidr = "10.0.2.32/27"
  pub_lb_subnet_id = "ocid1.subnet.oc1.me-abudhabi-1.aaaaaaaagr5j2l6tu3rmroejg6t4nrixxxxxxxxxx6oar6xcirvq"
  ssh_to_bastion = "ssh -i ~/.ssh/id_rsa opc@129.xxx.xxx.xxx"
  ssh_to_operator = "ssh -o ProxyCommand='ssh -W %h:%p -i ~/.ssh/id_rsa opc@129.xxx.xxx.xxx' -i ~/.ssh/id_rsa opc@10.0.0.14"
  state_id = "lccwyk"
  vcn_id = "ocid1.vcn.oc1.me-abudhabi-1.amaaaaaapkbrwsaatndqrw3xq7e7krqfxxxxxxxxxxxxxxxpywz4qvhida"
  worker_nsg_id = "ocid1.networksecuritygroup.oc1.me-abudhabi-1.aaaaaaaazbpfavygiv4xy3khfi7cxunixxxxxxxxxxxxxxxxjvsm6jqtja"
  worker_pool_ids = {
    "np1" = "ocid1.nodepool.oc1.me-abudhabi-1.aaaaaaaauo57ekyoiiaif25gq7uwrpxxxxxxxxxxxxxxpsnygeam7t2wa"
    "np2" = "ocid1.nodepool.oc1.me-abudhabi-1.aaaaaaaa7423cp6zyntxwxokol3kmdxxxxxxxxxxxxx3wnayv2xq6nyq"
  }
  worker_pool_ips = {
    "np1" = {
      "ocid1.instance.oc1.me-abudhabi-1.anqxkljrpkbrwsac3t3smjsyxjgen4zqjpxxxxxxxxxxxxx663igu7cznla" = "10.0.90.37"
    }
    "np2" = {
      "ocid1.instance.oc1.me-abudhabi-1.anqxkljrpkbrwsac2eayropnzgazrxxxxxxxxxxxxxxxxpiykrq4r5pbga" = "10.0.125.238"
      "ocid1.instance.oc1.me-abudhabi-1.anqxkljrpkbrwsace4aup47ukjcedxxxxxxxxxxxxxxxxxx2hdxbsyiitva" = "10.0.92.136"
      "ocid1.instance.oc1.me-abudhabi-1.anqxkljrpkbrwsacojjri4b7qsaxxxxxxxxxxxxxxxxxnarf75ejv7a2a" = "10.0.111.157"
    }
  }
  worker_subnet_cidr = "10.0.64.0/18"
  worker_subnet_id = "ocid1.subnet.oc1.me-abudhabi-1.aaaaaaaarrnazbrmwbiyl3ljmdxhgmxxxxxxxxxxxxxxxxxxxv3wtgmiucnsa"
  iwhooge@iwhooge-mac tfoke %
  ```
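The `worker_pool_ips` output is a map of node pool name to (instance OCID, private IP) pairs, so it can be used to verify that the node counts match the `size` values in terraform.tfvars (np1 = 1, np2 = 3, for 4 worker nodes in total). As an illustrative sketch, not part of the tutorial's tooling, the Python snippet below counts nodes per pool from a sample with shortened placeholder OCIDs; in a real run you could capture the live value with `terraform output -json worker_pool_ips`.

```python
import json

# Sample shaped like the `worker_pool_ips` Terraform output above
# (instance OCIDs shortened to placeholders).
worker_pool_ips_json = """
{
  "np1": {"ocid1.instance...aaa": "10.0.90.37"},
  "np2": {
    "ocid1.instance...bbb": "10.0.125.238",
    "ocid1.instance...ccc": "10.0.92.136",
    "ocid1.instance...ddd": "10.0.111.157"
  }
}
"""

worker_pool_ips = json.loads(worker_pool_ips_json)

# Count worker nodes per pool and in total.
counts = {pool: len(nodes) for pool, nodes in worker_pool_ips.items()}
print(counts)                # {'np1': 1, 'np2': 3}
print(sum(counts.values()))  # 4
```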
Task 3: Confirm Terraform Deployment in the OCI Console
Navigate to the OCI Console and confirm the following Terraform deployments.
- OCI Kubernetes Engine Cluster
  - Go to the OCI Console.
  - Navigate to Developer Services and click Kubernetes Clusters (OKE).
  - Click the oke Kubernetes cluster created in Task 2.
  - Scroll down.
  - Click Node pools.
  - Click the np1 node pool.
  - Scroll down.
  - Note that there is one worker node in the np1 node pool.
  - Go to the previous page and click the np2 node pool.
  - Scroll down.
  - Note that there are three worker nodes in the np2 node pool.
- Instances
  - Go to the OCI Console.
  - Navigate to Compute and click Instances.
  - Review the four worker nodes of the Kubernetes cluster.
  - Review the operator of the Kubernetes cluster.
  - Review the bastion host of the Kubernetes cluster.
- Virtual Cloud Network
  - Go to the OCI Console, navigate to Networking, Virtual cloud networks and click the oke VCN.
  - Click Subnets and you can see all seven subnets of the Kubernetes cluster.
- The following image illustrates what we have created so far with the Terraform script.
Task 4: Use Bastion and Operator to Check the Connectivity
When the Terraform deployment is completed, you will find commands in the output to connect to your Kubernetes environment.
- Run the following command to connect to the bastion host.

  ```shell
  ssh_to_bastion = "ssh -i ~/.ssh/id_rsa opc@129.xxx.xxx.xxx"
  ```
- Run the following command to connect to the Kubernetes operator through the bastion host.

  ```shell
  ssh_to_operator = "ssh -o ProxyCommand='ssh -W %h:%p -i ~/.ssh/id_rsa opc@129.xxx.xxx.xxx' -i ~/.ssh/id_rsa opc@10.0.0.14"
  ```
- We will manage the Kubernetes cluster from the operator. Enter yes twice and run the `kubectl get nodes` command to get the Kubernetes worker nodes.
- To make the SSH connections easier, add the following additional entries to your `~/.ssh/config` file.

  ```
  Host bastion47
    HostName 129.xxx.xxx.xxx
    User opc
    IdentityFile ~/.ssh/id_rsa
    UserKnownHostsFile /dev/null
    StrictHostKeyChecking no
    TCPKeepAlive yes
    ServerAliveInterval 50

  Host operator47
    HostName 10.0.0.14
    User opc
    IdentityFile ~/.ssh/id_rsa
    ProxyJump bastion47
    UserKnownHostsFile /dev/null
    StrictHostKeyChecking no
    TCPKeepAlive yes
    ServerAliveInterval 50
  ```
After adding this content to the `~/.ssh/config` file, you will be able to use simple names in the SSH commands.

```shell
iwhooge@iwhooge-mac tfoke % ssh operator47
```
Task 5: Delete the Kubernetes Cluster using Terraform
- Run the `terraform destroy` command to delete the Kubernetes cluster in OKE.

  ```shell
  iwhooge@iwhooge-mac tfoke % terraform destroy
  ```
- Enter yes.
- When the destroy process is finished, you will see a message that the destroy is completed.
Next Steps
Deploying a Kubernetes cluster on OCI Kubernetes Engine using Terraform provides an efficient, automated, and scalable solution for managing containerized applications in the cloud.
By leveraging Terraform’s IaC capabilities, you ensure that your Kubernetes clusters are deployed consistently and can be easily maintained or updated over time.
This integration streamlines the process, allowing for better version control, automated scaling, and a repeatable infrastructure setup. Whether you are managing a single cluster or scaling across environments, this approach empowers teams to manage their Kubernetes workloads with reliability and ease in Oracle Cloud Infrastructure.
Acknowledgments
- Author - Iwan Hoogendoorn (OCI Network Specialist)
More Learning Resources
Explore other labs on docs.oracle.com/learn or access more free learning content on the Oracle Learning YouTube channel. Additionally, visit education.oracle.com/learning-explorer to become an Oracle Learning Explorer.
For product documentation, visit Oracle Help Center.
Deploy a Kubernetes Cluster with Terraform using Oracle Cloud Infrastructure Kubernetes Engine
G18096-01
October 2024