Use Ingress-Nginx Controller with Oracle Cloud Native Environment

Introduction

Ingress is a Kubernetes API object that manages external access to a cluster’s services. The Ingress-Nginx Controller uses NGINX as a reverse proxy and load balancer that can load-balance Websocket, gRPC, TCP, and UDP applications.
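
As an optional check once the cluster is running and kubectl is configured (both are covered later in this tutorial), you can confirm that the cluster serves the stable Ingress API before installing a controller:

    kubectl api-resources --api-group=networking.k8s.io

The output includes the ingresses and ingressclasses resources served by the networking.k8s.io/v1 API group.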

Objectives

In this tutorial, you will learn:

  - Install the Ingress-Nginx Controller module with olcnectl
  - Create Services and an Ingress resource that routes requests by path
  - Verify the routing through the Ingress Controller's external IP address

Prerequisites

Deploy Oracle Cloud Native Environment

Note: If running in your own tenancy, read the linux-virt-labs GitHub project README.md and complete the prerequisites before deploying the lab environment.

  1. Open a terminal on the Luna Desktop.

  2. Clone the linux-virt-labs GitHub project.

    git clone https://github.com/oracle-devrel/linux-virt-labs.git
    
  3. Change into the working directory.

    cd linux-virt-labs/ocne
    
  4. Install the required collections.

    ansible-galaxy collection install -r requirements.yml
    
  5. Update the Oracle Cloud Native Environment repository versions.

    cat << EOF | tee repos.yml > /dev/null
    ol8_enable_repo: "ol8_olcne19"
    ol8_disable_repo: "ol8_olcne12 ol8_olcne13 ol8_olcne14 ol8_olcne15 ol8_olcne16 ol8_olcne17 ol8_olcne18"
    ol9_enable_repo: "ol9_olcne19"
    ol9_disable_repo: "ol9_olcne17 ol9_olcne18"
    EOF
    
    
  6. Deploy the lab environment.

    ansible-playbook create_instance.yml -e localhost_python_interpreter="/usr/bin/python3.6" -e "@repos.yml" -e use_oci_ccm=true -e use_ingress_lb=true
    

    The free lab environment requires the extra variable localhost_python_interpreter, which sets ansible_python_interpreter for plays running on localhost. This variable is needed because the environment installs the RPM package for the Oracle Cloud Infrastructure SDK for Python, which places its modules under python3.6.

    Important: Wait for the playbook to run successfully and reach the pause task. At this stage of the playbook, the installation of Oracle Cloud Native Environment is complete, and the instances are ready. Take note of the previous play, which prints the public and private IP addresses of the nodes it deploys and any other deployment information needed while running the lab.
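
    If you prefer to keep the extra variables in a file instead of on the command line, Ansible also accepts additional "@file" arguments to -e. A minimal sketch, assuming the same variable names shown above (the instance-vars.yml file name is arbitrary):

    cat << EOF | tee instance-vars.yml > /dev/null
    localhost_python_interpreter: /usr/bin/python3.6
    use_oci_ccm: true
    use_ingress_lb: true
    EOF

    ansible-playbook create_instance.yml -e "@repos.yml" -e "@instance-vars.yml"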

Confirm the Number of Nodes

It helps to know the number and names of nodes in your Kubernetes cluster.

  1. Open a terminal and connect via SSH to the ocne-operator node.

    ssh oracle@<ip_address_of_node>
    
  2. Set up the kubectl command on the operator node.

    mkdir -p $HOME/.kube; \
    ssh ocne-control-01 "sudo cat /etc/kubernetes/admin.conf" > $HOME/.kube/config; \
    sudo chown $(id -u):$(id -g) $HOME/.kube/config; \
    export KUBECONFIG=$HOME/.kube/config; \
    echo 'export KUBECONFIG=$HOME/.kube/config' >> $HOME/.bashrc
    
    
  3. List the nodes in the cluster.

    kubectl get nodes
    

    The output shows the control plane and worker nodes in a Ready state along with their current Kubernetes version.
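
    For more detail, you can optionally add the wide output format, which also shows each node's internal IP address, OS image, kernel version, and container runtime:

    kubectl get nodes -o wide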

Install the Ingress-Nginx Controller

  1. Create the module.

    olcnectl module create \
    --environment-name myenvironment \
    --module ingress-nginx \
    --name myingress-nginx \
    --ingress-nginx-kubernetes-module mycluster 
    
    
  2. Install the module.

    olcnectl module install \
    --environment-name myenvironment \
    --name myingress-nginx
    
    

    The installation takes a few minutes to complete and returns you to the shell prompt when done.
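
    The module deploys its Kubernetes objects into the ingress-nginx namespace, which you verify in the next section. To watch the controller pods start while the installation runs, you could optionally run this from another terminal on the operator node:

    kubectl get pods --namespace ingress-nginx --watch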

Verify the Ingress-Nginx Controller Module Deployment

  1. Verify the module deployed.

    olcnectl module instances --environment-name myenvironment
    

    Example Output:

    INSTANCE            	MODULE       	STATE    
    myingress-nginx     	ingress-nginx	installed
    ocne-control-01:8090	node         	installed
    ocne-worker-01:8090 	node         	installed
    ocne-worker-02:8090 	node         	installed
    mycluster           	kubernetes   	installed
    myoci               	oci-ccm      	installed
    
  2. Verify the deployment is running.

    kubectl get deployments --namespace ingress-nginx
    

    Example Output:

    NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
    myingress-nginx-controller   1/1     1            1           95s
    
  3. Verify the service is running.

    kubectl get service --namespace ingress-nginx
    

    Example Output:

    NAME                                   TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)                      AGE
    myingress-nginx-controller             LoadBalancer   10.107.245.248   138.2.173.141   80:31113/TCP,443:31534/TCP   100s
    myingress-nginx-controller-admission   ClusterIP      10.98.19.84      <none>          443/TCP                      100s
    

    The Ingress-Nginx Controller listens on an external IP address and exposes ports 80 and 443.

  4. Review the settings in the ConfigMap.

    kubectl describe configmaps --namespace ingress-nginx myingress-nginx-controller
    

    Example Output:

    Name:         myingress-nginx-controller
    Namespace:    ingress-nginx
    Labels:       app.kubernetes.io/component=controller
                  app.kubernetes.io/instance=myingress-nginx
                  app.kubernetes.io/managed-by=Helm
                  app.kubernetes.io/name=ingress-nginx
                  app.kubernetes.io/part-of=ingress-nginx
                  app.kubernetes.io/version=1.9.6
                  helm.sh/chart=ingress-nginx-4.9.1
    Annotations:  meta.helm.sh/release-name: myingress-nginx
                  meta.helm.sh/release-namespace: ingress-nginx
    
    Data
    ====
    allow-snippet-annotations:
    ----
    false
    
    BinaryData
    ====
    
    Events:
      Type    Reason  Age   From                      Message
      ----    ------  ----  ----                      -------
      Normal  CREATE  98s   nginx-ingress-controller  ConfigMap ingress-nginx/myingress-nginx-controller
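
    The module manages this ConfigMap through Helm, so a later module update may revert manual edits. For a quick experiment, though, you can patch a supported ingress-nginx option directly; a minimal sketch, assuming you want to enable the use-forwarded-headers setting (the controller reloads its configuration when the ConfigMap changes):

    kubectl patch configmap myingress-nginx-controller \
      --namespace ingress-nginx \
      --type merge \
      --patch '{"data":{"use-forwarded-headers":"true"}}'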
    

Use the Ingress

Test the ingress by creating two services and using the Ingress-Nginx Controller to demonstrate how it routes requests to the correct deployment. We'll use the http-echo container as the web application because it lets each service return a slightly different response.

  1. Create the first Pod.

    cat << EOF | tee coffee.yaml > /dev/null
    kind: Pod
    apiVersion: v1
    metadata:
      name: coffee-app
      labels:
        app: coffee
    spec:
      containers:
        - name: coffee-app
          image: hashicorp/http-echo
          args:
            - "-text=coffee"

    ---

    kind: Service
    apiVersion: v1
    metadata:
      name: coffee-service
    spec:
      selector:
        app: coffee
      ports:
        - port: 5678 # Default port for image

    EOF

  2. Create the second Pod.

    cat << EOF | tee tea.yaml > /dev/null
    kind: Pod
    apiVersion: v1
    metadata:
      name: tea-app
      labels:
        app: tea
    spec:
      containers:
        - name: tea-app
          image: hashicorp/http-echo
          args:
            - "-text=tea"

    ---

    kind: Service
    apiVersion: v1
    metadata:
      name: tea-service
    spec:
      selector:
        app: tea
      ports:
        - port: 5678 # Default port for image

    EOF

  3. Create the resources.

    kubectl apply -f coffee.yaml
    kubectl apply -f tea.yaml
    
    
  4. Create the Ingress definition file.

    Next, create an Ingress definition that routes incoming requests for the /coffee and /tea paths to the matching service. (A host-based variant of this Ingress is sketched at the end of this section.)

    cat << EOF | tee ingress.yaml > /dev/null
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: demo-ingress
    spec:
      ingressClassName: nginx
      rules:
      - http:
          paths:
            - path: /coffee
              pathType: Prefix
              backend:
                service:
                  name: coffee-service
                  port:
                    number: 5678
            - path: /tea
              pathType: Prefix
              backend:
                service:
                  name: tea-service
                  port:
                    number: 5678
    
    EOF
    
    
  5. Create the Ingress.

    kubectl create -f ingress.yaml
    
  6. Verify the creation of the Ingress.

    watch kubectl get ingress demo-ingress
    

    Wait for the IP address of the Ingress to appear. Then, exit the watch command using Ctrl-C.

  7. Assign the Ingress load balancer IP address to a variable.

    INGRESS=$(kubectl get ingress demo-ingress -o jsonpath="{.status.loadBalancer.ingress[0].ip}")
    
  8. Verify the Ingress.

    Test that everything works as expected. First, test the coffee service:

    curl -kL http://$INGRESS/coffee
    

    Next, test the tea service:

    curl -kL http://$INGRESS/tea
    

    Finally, test what happens when you request a path with no matching rule:

    curl -kL http://$INGRESS/biscuit
    

    You will get a 404 error message because no Ingress rule maps the /biscuit path to a backend service.
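
The demo-ingress rules match on path only. The same Ingress API also supports host-based routing; the following is a minimal sketch using the hypothetical hostname cafe.example.com that serves the coffee service only for requests carrying that host:

    cat << EOF | kubectl apply -f -
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: demo-ingress-host
    spec:
      ingressClassName: nginx
      rules:
      - host: cafe.example.com
        http:
          paths:
            - path: /
              pathType: Prefix
              backend:
                service:
                  name: coffee-service
                  port:
                    number: 5678
    EOF

You can test it without DNS by overriding the Host header: curl -H "Host: cafe.example.com" http://$INGRESS/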

Summary

A Kubernetes Ingress provides a robust way to expose services running on your Oracle Cloud Native Environment cluster to your users. Rules you define in the Ingress resource determine how the controller routes HTTP and HTTPS traffic. An Ingress does not handle protocols other than HTTP and HTTPS; to expose such a service, use a Service of type LoadBalancer or NodePort instead.
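
For example, a TCP-only application could be exposed directly with a Service of type LoadBalancer; a minimal sketch with a hypothetical app label and port:

    cat << EOF | kubectl apply -f -
    apiVersion: v1
    kind: Service
    metadata:
      name: tcp-echo-lb
    spec:
      type: LoadBalancer
      selector:
        app: tcp-echo
      ports:
        - name: tcp
          protocol: TCP
          port: 9000
          targetPort: 9000
    EOF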

For More Information

More Learning Resources

Explore other labs on docs.oracle.com/learn or access more free learning content on the Oracle Learning YouTube channel. Additionally, visit education.oracle.com/learning-explorer to become an Oracle Learning Explorer.

For product documentation, visit Oracle Help Center.