Note:
- This tutorial requires access to Oracle Cloud. To sign up for a free account, see Get started with Oracle Cloud Infrastructure Free Tier.
- It uses example values for Oracle Cloud Infrastructure credentials, tenancy, and compartments. When completing your lab, substitute these values with ones specific to your cloud environment.
Leverage SSH Tunneling with Oracle Cloud Infrastructure Kubernetes Engine for Secure Application Development
Introduction
When I got SSH tunneling with OKE working with Ali Mukadam’s help, I called it “magic.”
He responded to me with the following message:
“You called it magic, others call it Science. Where I am they are one and the same.”
The original quote is from a Thor movie:
“Your ancestors called it magic, but you call it science. I come from a land where they are one and the same.”
So what is this magic?
In modern application development, securing connections between local and cloud-based resources is essential, especially when working with Oracle Cloud Infrastructure Kubernetes Engine (OCI Kubernetes Engine or OKE). SSH tunneling offers a simple yet powerful way to securely connect to OKE clusters, enabling developers to manage and interact with resources without exposing them to the public internet. This tutorial explores how to set up SSH tunneling with OKE and how developers can integrate this approach into their workflow for enhanced security and efficiency. From initial configuration to best practices, we will cover everything you need to leverage SSH tunneling effectively in your OKE-based applications.
The following image illustrates the full traffic flow of SSH tunneling for two different applications.
Objectives
- Leverage SSH tunneling with OKE for secure application development.
Task 1: Deploy Kubernetes Cluster on OKE (with a Bastion and Operator Instance)
Make sure you have a deployed Kubernetes cluster on OKE.
To deploy a Kubernetes cluster on OKE, use one of the following methods:

- Deploy a Kubernetes Cluster with Terraform using Oracle Cloud Infrastructure Kubernetes Engine: Deploy a single Kubernetes cluster on OKE using Terraform.
- Use Terraform to Deploy Multiple Kubernetes Clusters across different OCI Regions using OKE and Create a Full Mesh Network using RPC: Deploy multiple Kubernetes clusters across multiple regions on OKE using Terraform.
- Task 1: Create a New Kubernetes Cluster and Verify the Components: Deploy a Kubernetes cluster on OKE using the Quick Create mode.
- Task 1: Deploy a Kubernetes Cluster using OKE: Deploy a Kubernetes cluster on OKE using the Custom Create mode.
In this tutorial, we will use Deploy a Kubernetes Cluster with Terraform using Oracle Cloud Infrastructure Kubernetes Engine as the base Kubernetes cluster on OKE, and explain how to use an SSH tunnel to access a container-based application deployed on OKE through localhost.
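Under the hood, this is standard SSH local port forwarding: the SSH client opens a listener on a local port and relays every connection through the encrypted session to a destination that is reachable from the SSH server. A generic sketch of the pattern follows; the host names here are placeholders, not values from this tutorial.

  # Listen on local port 8080 and relay, via jump-host, to port 80 on a
  # host that jump-host can reach (placeholder names).
  ssh -L 8080:internal-host:80 opc@jump-host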
Let us quickly review the OCI OKE environment to set the stage.
- Virtual Cloud Network (VCN)

  - Log in to the OCI Console and navigate to Networking, Virtual Cloud Networks.
  - Review the VCN named oke.
  - Click the oke VCN.

- Subnets

  - Go to the VCN details page.
  - Click Subnets.
  - Review the deployed subnets.

- Gateways

  - Go to the VCN details page.
  - Click Internet Gateways and review the created internet gateway.
  - Click NAT Gateways and review the created NAT gateway.
  - Click Service Gateways and review the created service gateway.
  - Click Security Lists and review the created security lists.

- Node Pools

  - Navigate to Developer Services, Containers & Artifacts.
  - Click Kubernetes Clusters (OKE).
  - Click the oke cluster.
  - Click Node Pools and review the node pools.

- Instances

  - Navigate to Compute, Instances.
  - Review the Kubernetes worker node deployments.
  - Review the bastion host deployment.
  - Review the Kubernetes operator deployment.

The following image illustrates a full overview of our starting point for the remainder of this tutorial.

The following image illustrates a simplified view of the previous figure. We will use this simplified figure in the rest of this tutorial.
Task 2: Deploy an NGINX Web Server on the Kubernetes Cluster
The operator cannot be accessed directly from the internet; we have to go through the bastion host.
- In this tutorial, we are using an SSH script provided by Ali Mukadam to connect to the operator with a single SSH command. The script and connection method are provided here: Task 4: Use Bastion and Operator to Check the Connectivity. You will need this script later in this tutorial, so make sure you use it.

- Set up an SSH session to the Kubernetes operator.

- Review all the active worker nodes with the following command.

  kubectl get nodes
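For reference, the connection made by this script can also be expressed as a single ad-hoc OpenSSH command using ProxyJump (available in OpenSSH 7.3 and later). A minimal sketch with placeholder addresses; substitute your own bastion public IP and operator private IP.

  # Hop through the bastion to reach the operator on its private IP.
  ssh -J opc@<bastion-public-ip> opc@<operator-private-ip>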
- To create a sample NGINX application that is running inside a container, create a YAML file named modified2_nginx_ext_lb.yaml with the following code on the operator. The YAML file contains the code to create the NGINX web server application with 3 replicas, and will also create a service of type load balancer.

  modified2_nginx_ext_lb.yaml:

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: my-nginx
    labels:
      app: nginx
  spec:
    replicas: 3
    selector:
      matchLabels:
        app: nginx
    template:
      metadata:
        labels:
          app: nginx
      spec:
        containers:
        - name: nginx
          image: nginx:latest
          ports:
          - containerPort: 80
  ---
  apiVersion: v1
  kind: Service
  metadata:
    name: my-nginx-svc
    labels:
      app: nginx
    annotations:
      oci.oraclecloud.com/load-balancer-type: "lb"
      service.beta.kubernetes.io/oci-load-balancer-internal: "true"
      service.beta.kubernetes.io/oci-load-balancer-subnet1: "ocid1.subnet.oc1.me-abudhabi-1.aaaaaaaaguwakvc6jxxxxxxxxxxxxxxxxxxxu7rixvdf5urvpxldhya"
      service.beta.kubernetes.io/oci-load-balancer-shape: "flexible"
      service.beta.kubernetes.io/oci-load-balancer-shape-flex-min: "50"
      service.beta.kubernetes.io/oci-load-balancer-shape-flex-max: "100"
  spec:
    type: LoadBalancer
    ports:
    - port: 80
    selector:
      app: nginx
- We want to make this application accessible internally, so we create the service of type load balancer attached to the private load balancer subnet.

  To assign the service of type load balancer to a private load balancer subnet, you need the subnet OCID of the private load balancer subnet, and you need to add the following code in the annotations section.

  annotations:
    oci.oraclecloud.com/load-balancer-type: "lb"
    service.beta.kubernetes.io/oci-load-balancer-internal: "true"
    service.beta.kubernetes.io/oci-load-balancer-subnet1: "ocid1.subnet.oc1.me-abudhabi-1.aaaaaaaaguwakvcxxxxxxxxxxxxxxxxxxxxxxxxxxxxixvdf5urvpxldhya"
    service.beta.kubernetes.io/oci-load-balancer-shape: "flexible"
    service.beta.kubernetes.io/oci-load-balancer-shape-flex-min: "50"
    service.beta.kubernetes.io/oci-load-balancer-shape-flex-max: "100"
- To get the subnet OCID of the private load balancer subnet, click the internal load balancer subnet in the OCI Console.

  Click Show and Copy to get the full private load balancer subnet OCID. Use this OCID in the annotations section.
- To deploy the NGINX application and the service of type load balancer, run the following commands:

  - Create the YAML file on the operator.

    nano modified2_nginx_ext_lb.yaml

  - Deploy the NGINX application with the service of type load balancer.

    kubectl apply -f modified2_nginx_ext_lb.yaml

  - Verify that the NGINX application was deployed successfully (not shown in the image).

    kubectl get pods

  - Verify that the service of type load balancer was deployed successfully.

    kubectl get svc

  Notice that the service of type load balancer was deployed successfully.
- When we look at the internal load balancer subnet, we can see that the CIDR block for this subnet is 10.0.2.0/27. The new service of type load balancer has the IP address 10.0.2.3.
- To verify the load balancer object in the OCI Console, navigate to Networking, Load Balancers and click Load Balancer.

The following image illustrates the deployment we have done so far. Notice that the load balancer is added.
Test the New Pod/Application
Approach 1: From a temporary pod

To test whether the newly deployed NGINX application is working with the service of type load balancer, we can do an internal connectivity test using a temporary pod.

There are multiple ways to test connectivity to the application; one way would be to open a browser and check that you can access the webpage. When we do not have a browser available, we can instead do a quick test by deploying a temporary pod.

To create a temporary pod and use it for connectivity tests, see Task 3: Deploy a Sample Web Application and Service.
- Run the following commands:

  - Get the IP address of the internal load balancer service.

    kubectl get svc

  - Deploy a sample pod to test the web application connectivity.

    kubectl run --rm -i -t --image=alpine test-$RANDOM -- sh

  - Test connectivity to the web server using wget.

    wget -qO- http://<ip-of-internal-lb-service>

  Notice the HTML code that the web server returns, confirming that the web server and the connectivity through the internal load balancer service are working.

- Run the following command to exit the temporary pod.

  exit

  Notice that the pod is deleted immediately after we close the command-line interface.

- The following image illustrates the deployment we have done so far. Notice that the temporarily deployed pod connects to the IP of the service of type load balancer to test the connectivity.
Approach 2: From your local computer

- Run the following command to test connectivity from your local computer to the NGINX application with the service of type load balancer.

  iwhooge@iwhooge-mac ~ % wget -qO- <ip-of-internal-lb-service>

  As you will notice, this currently does not work, because the service of type load balancer has an internal IP address that is only reachable inside the Kubernetes environment.

- Run the following command to try to access the NGINX application using the local IP address with a custom port 8080.

  iwhooge@iwhooge-mac ~ % wget -qO- 127.0.0.1:8080
  iwhooge@iwhooge-mac ~ %

  For now, this does not work either, but we will use the same command later in this tutorial after we have set up the SSH tunnel.

The following image illustrates the deployment we have done so far. Notice that the tunneled connection to the local IP address is not working.
Task 3: Create an SSH Config Script with Localhost Entries
To allow the SSH tunnel to work, we need to add the following entry in our SSH config file located in the /Users/iwhooge/.ssh folder.
- Run the nano /Users/iwhooge/.ssh/config command to edit the config file.

- Add the following line in the Host operator47 section.

  LocalForward 8080 127.0.0.1:8080

- The output of the SSH config file:

  iwhooge@iwhooge-mac .ssh % pwd
  /Users/iwhooge/.ssh
  iwhooge@iwhooge-mac .ssh % more config
  Host bastion47
    HostName 129.xxx.xxx.xxx
    user opc
    IdentityFile ~/.ssh/id_rsa
    UserKnownHostsFile /dev/null
    StrictHostKeyChecking=no
    TCPKeepAlive=yes
    ServerAliveInterval=50

  Host operator47
    HostName 10.0.0.11
    user opc
    IdentityFile ~/.ssh/id_rsa
    ProxyJump bastion47
    UserKnownHostsFile /dev/null
    StrictHostKeyChecking=no
    TCPKeepAlive=yes
    ServerAliveInterval=50
    LocalForward 8080 127.0.0.1:8080
  iwhooge@iwhooge-mac .ssh %
- Notice that the LocalForward entry is added to the SSH config file.
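If you prefer not to keep the forwarding in the config file, the same tunnel can be requested per session with the -L flag. A minimal sketch using the operator47 alias defined above:

  # One-off equivalent of the LocalForward entry in the config file.
  ssh -L 8080:127.0.0.1:8080 operator47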
Task 4: Set up the SSH Tunnel and Connect to the NGINX Web Server using Localhost
- If you are connected to the operator with SSH, disconnect that session.

- Reconnect to the operator using the script again.

  iwhooge@iwhooge-mac ~ % ssh operator47

- Run the following command to get the IP address of the internal load balancer service.

  [opc@o-sqrtga ~]$ kubectl get svc
- Run the following command on the operator (SSH window) to set up the SSH tunnel and forward all traffic going to localhost port 8080 to port 80 of the service of type load balancer. The service of type load balancer will then forward the traffic to the NGINX application. Note that k is an alias for kubectl on the operator host.

  [opc@o-sqrtga ~]$ k port-forward svc/my-nginx-svc 8080:80

  Notice the forwarding messages in the SSH window confirming that localhost port 8080 is forwarded to port 80.

  Forwarding from 127.0.0.1:8080 -> 80
  Forwarding from [::1]:8080 -> 80
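By default, kubectl port-forward binds to 127.0.0.1 on the machine where it runs, here the operator; the LocalForward entry then extends that listener to your local computer. To verify the operator-side listener on its own first, a quick sketch, assuming curl is available on the operator and run in a second SSH session while port-forward is active:

  # On the operator, in a second session:
  curl -s http://127.0.0.1:8080 | head -n 5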
- Test the connectivity from your local computer and verify that connecting to the local IP address (127.0.0.1) on port 8080 allows you to reach the NGINX application inside the OKE environment.

  - Open a new terminal and run the following command to test the connectivity.

    iwhooge@iwhooge-mac ~ % wget -qO- 127.0.0.1:8080

  - Notice that you get the following output in the terminal of your local computer, which means it is working.

    iwhooge@iwhooge-mac ~ % wget -qO- 127.0.0.1:8080
    <!DOCTYPE html>
    <html>
    <head>
    <title>Welcome to nginx!</title>
    <style>
    html { color-scheme: light dark; }
    body { width: 35em; margin: 0 auto;
    font-family: Tahoma, Verdana, Arial, sans-serif; }
    </style>
    </head>
    <body>
    <h1>Welcome to nginx!</h1>
    <p>If you see this page, the nginx web server is successfully installed and
    working. Further configuration is required.</p>
    <p>For online documentation and support please refer to
    <a href="http://nginx.org/">nginx.org</a>.<br/>
    Commercial support is available at
    <a href="http://nginx.com/">nginx.com</a>.</p>
    <p><em>Thank you for using nginx.</em></p>
    </body>
    </html>
    iwhooge@iwhooge-mac ~ %
- In the operator SSH window, notice that the output has changed and a new line is added: Handling connection for 8080.

- A quick test using a web browser shows the following output.

- The following image illustrates the deployment we have done so far. Notice that the tunneled connection to the local IP address is working.
Task 5: Deploy a MySQL Database Service on the Kubernetes Cluster
Now that we can reach the NGINX application through the SSH tunnel, let us add a MySQL database service running inside the OKE environment.
- To set up a MySQL database service inside a Kubernetes environment, you need to create:

  - A secret for password protection.
  - A persistent volume and a persistent volume claim for database storage.
  - The MySQL database service with a service of type load balancer.
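As you create each of these objects in the steps that follow, you can confirm that it exists before moving on. A minimal set of checks, assuming the object names used in this tutorial:

  kubectl get secret mysql-secret
  kubectl get pv mysql-pv-volume
  kubectl get pvc mysql-pv-claim
  kubectl get deployment mysql
  kubectl get svc my-mysql-svc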
- Run the following commands:

  - Create the password for the MySQL database service.

    nano mysql-secret.yaml

    Copy the following YAML code in mysql-secret.yaml.

    apiVersion: v1
    kind: Secret
    metadata:
      name: mysql-secret
    type: kubernetes.io/basic-auth
    stringData:
      password: Or@cle1

  - Apply the YAML code.

    kubectl apply -f mysql-secret.yaml

  - Create the storage for the MySQL database service.

    nano mysql-storage.yaml

    Copy the following YAML code in mysql-storage.yaml.

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: mysql-pv-volume
      labels:
        type: local
    spec:
      storageClassName: manual
      capacity:
        storage: 20Gi
      accessModes:
        - ReadWriteOnce
      hostPath:
        path: "/mnt/data"
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: mysql-pv-claim
    spec:
      storageClassName: manual
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 20Gi

  - Apply the YAML code.

    kubectl apply -f mysql-storage.yaml

  - Create the MySQL database service and the service of type load balancer.

    nano mysql-deployment.yaml

    Copy the following YAML code in mysql-deployment.yaml.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: mysql
    spec:
      selector:
        matchLabels:
          app: mysql
      strategy:
        type: Recreate
      template:
        metadata:
          labels:
            app: mysql
        spec:
          containers:
          - image: mysql:latest
            name: mysql
            env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-secret
                  key: password
            ports:
            - containerPort: 3306
              name: mysql
            volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
          volumes:
          - name: mysql-persistent-storage
            persistentVolumeClaim:
              claimName: mysql-pv-claim
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: my-mysql-svc
      labels:
        app: mysql
      annotations:
        oci.oraclecloud.com/load-balancer-type: "lb"
        service.beta.kubernetes.io/oci-load-balancer-internal: "true"
        service.beta.kubernetes.io/oci-load-balancer-subnet1: "ocid1.subnet.oc1.me-abudhabi-1.aaaaaaaaguwakvc6xxxxxxxxxxxxxxxxxxxxxx2rseu7rixvdf5urvpxldhya"
        service.beta.kubernetes.io/oci-load-balancer-shape: "flexible"
        service.beta.kubernetes.io/oci-load-balancer-shape-flex-min: "50"
        service.beta.kubernetes.io/oci-load-balancer-shape-flex-max: "100"
    spec:
      type: LoadBalancer
      ports:
      - port: 3306
      selector:
        app: mysql

  - Apply the YAML code.

    kubectl apply -f mysql-deployment.yaml

  - Verify that the MySQL database service has been deployed successfully.

    kubectl get pod

    Note that the MySQL database service has been deployed successfully.

  - Verify that the service of type load balancer has been deployed successfully.

    kubectl get svc

    Note that the service of type load balancer has been deployed successfully.
- To verify the load balancer object in the OCI Console, navigate to Networking, Load Balancers and click the load balancer.

- To access the terminal console of the MySQL database service, we can use the kubectl exec command together with the localhost SSH tunnel.

  - Run the following command to access the terminal console from the operator.

    kubectl exec --stdin --tty mysql-74f8bf98c5-bl8vv -- /bin/bash

  - Run the following command to access the MySQL database service console.

    mysql -p

  - Enter the password you specified in the mysql-secret.yaml file, and notice the welcome message of the MySQL database service.

  - Run the following SQL query to see a list of all MySQL databases inside the database service.

    SHOW DATABASES;

  We can now access the MySQL database service management console from within the Kubernetes environment.
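Note that the pod name suffix (74f8bf98c5-bl8vv) is generated and will differ in your cluster. A sketch that avoids looking it up, by letting kubectl resolve the deployment to one of its pods:

  # Open a MySQL console directly in a pod of the mysql deployment.
  kubectl exec -it deployment/mysql -- mysql -p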
- The following image illustrates the deployment we have done so far. Notice that the MySQL service with the service of type load balancer is deployed.
Task 6: Add Additional Localhost Entries Inside the SSH Config Script
Add additional localhost entries inside the SSH config script to access the new MySQL database service.
- To allow the SSH tunnel to work for the MySQL database service, we need to add the following entry in our SSH config file located in the /Users/iwhooge/.ssh folder.

- Run the nano /Users/iwhooge/.ssh/config command to edit the config file.

- Add the following line in the Host operator47 section.

  LocalForward 8306 127.0.0.1:8306

- Output of the SSH config file:

  Host bastion47
    HostName 129.xxx.xxx.xxx
    user opc
    IdentityFile ~/.ssh/id_rsa
    UserKnownHostsFile /dev/null
    StrictHostKeyChecking=no
    TCPKeepAlive=yes
    ServerAliveInterval=50

  Host operator47
    HostName 10.0.0.11
    user opc
    IdentityFile ~/.ssh/id_rsa
    ProxyJump bastion47
    UserKnownHostsFile /dev/null
    StrictHostKeyChecking=no
    TCPKeepAlive=yes
    ServerAliveInterval=50
    LocalForward 8080 127.0.0.1:8080
    LocalForward 8306 127.0.0.1:8306
- Note that the LocalForward entry is added to the SSH config file.
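Because both LocalForward lines live in the same Host operator47 block, the next SSH session you open to the operator will create both local listeners at once. Once reconnected, you can confirm the listeners from the local computer. A minimal sketch, assuming lsof is available (it ships with macOS):

  # Both ports should show an ssh process listening on 127.0.0.1.
  lsof -nP -iTCP:8080 -sTCP:LISTEN
  lsof -nP -iTCP:8306 -sTCP:LISTEN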
Task 7: Set up the SSH Tunnel and Connect to the MySQL Database using Localhost
- To test the connection to the MySQL database service from the local computer, you need to download and install MySQL Workbench on the local computer.

- Open a new terminal to the operator using the script again. Leave the other terminal open.

  iwhooge@iwhooge-mac ~ % ssh operator47

- Run the following command on the operator SSH window to set up the SSH tunnel and forward all traffic going to localhost port 8306 to port 3306 of the service of type load balancer. The service of type load balancer will then forward the traffic to the MySQL database service.

  [opc@o-sqrtga ~]$ k port-forward svc/my-mysql-svc 8306:3306

  Notice the forwarding messages in the SSH window confirming that localhost port 8306 is forwarded to port 3306.

  Forwarding from 127.0.0.1:8306 -> 3306
  Forwarding from [::1]:8306 -> 3306
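Before configuring MySQL Workbench, you can sanity-check the tunnel from your local computer. A sketch, assuming a mysql command-line client is installed locally; enter the password from mysql-secret.yaml when prompted.

  # Connect as root through the local end of the tunnel.
  mysql -h 127.0.0.1 -P 8306 -u root -p -e "SELECT VERSION();"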
- Now that the MySQL Workbench application is installed and the SSH session and tunnel are established, open the MySQL Workbench application on your local computer.

- Click + to add a new MySQL connection.

- In Setup New Connection, enter the following information.

  - Connection Name: Enter a name.
  - Hostname: Enter the IP address 127.0.0.1 (localhost, as we are tunneling the traffic).
  - Port: Enter the port 8306, the port that we use for the local tunnel forwarding for the MySQL database service.
  - Click Test Connection.
  - Password: Enter the password you specified in the mysql-secret.yaml file.
  - Click OK.

- Click Continue Anyway to disregard the connection warning. This warning is shown because the MySQL Workbench application version and the deployed MySQL database service version might not be compatible.

  - Notice the successful connection message.
  - Click OK.
  - Click OK to save the MySQL connection.
- Click the saved MySQL connection to open the session.

  - Notice the Please stand by... message.
  - Click Continue Anyway to disregard the connection warning.

- Run the following SQL query to see a list of all MySQL databases inside the database service.

  SHOW DATABASES;

  - Click the lightning icon to execute the query.
  - Notice the output listing all MySQL databases inside the MySQL database service.
- In the operator SSH window, notice that the output has changed and new lines are added: Handling connection for 8306.

  There are multiple entries because we have made multiple connections, one each for:

  - The connection test.
  - The actual connection.
  - The SQL query.
  - The additional test we did earlier.
- We can now open multiple SSH sessions to the operator and run multiple tunnel commands for different applications simultaneously. Notice the following windows:

  - The SSH terminal with the tunnel command for the MySQL database service.
  - The MySQL Workbench connection from the local computer to the MySQL database service using the localhost IP address 127.0.0.1.
  - The SSH terminal with the tunnel command for the NGINX application.
  - The Safari browser connection from the local computer to the NGINX application using the localhost IP address 127.0.0.1.
The following image illustrates the deployment we have done so far. Notice that the tunneled connections to the local IP address work for the NGINX application and the MySQL database service simultaneously, using multiple SSH sessions and SSH tunnels.
Task 8: Clean up all Applications and Services
- Run the following commands to clean up the deployed NGINX application and the associated service.

  kubectl get pods
  kubectl delete service my-nginx-svc -n default
  kubectl get pods
  kubectl get svc
  kubectl delete deployment my-nginx --namespace default
  kubectl get svc

- Run the following commands to clean up the deployed MySQL database service and the associated service, storage, and password secret. Note that the service of type load balancer is named my-mysql-svc, so it must be deleted by that name.

  kubectl delete deployment mysql
  kubectl delete svc my-mysql-svc
  kubectl delete pvc mysql-pv-claim
  kubectl delete pv mysql-pv-volume
  kubectl delete secret mysql-secret
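To confirm that the environment is clean, you can list the relevant object types in one command; only default objects, such as the kubernetes service, should remain.

  # Verify that the tutorial objects are gone.
  kubectl get deployments,svc,pvc,pv,secrets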
- The following image illustrates the end state: you have a clean environment again and can start over.
Next Steps
Securing access to OKE clusters is a critical step in modern application development, and SSH tunneling provides a robust and straightforward solution. By implementing the steps in this tutorial, developers can safeguard their resources, streamline their workflows, and maintain control over sensitive connections for multiple applications. Integrating SSH tunneling into your OKE setup not only enhances security but also minimizes the risks associated with exposing resources to the public internet. With these practices in place, you can confidently make use of your OKE clusters and focus on building scalable, secure, and efficient applications.
Acknowledgments
- Author - Iwan Hoogendoorn (OCI Network Specialist)
More Learning Resources
Explore other labs on docs.oracle.com/learn or access more free learning content on the Oracle Learning YouTube channel. Additionally, visit education.oracle.com/learning-explorer to become an Oracle Learning Explorer.
For product documentation, visit Oracle Help Center.