Example: Setting Up an Ingress Controller on a Cluster
You can set up different open source ingress controllers on clusters you have created with Container Engine for Kubernetes.
This topic explains how to set up an example ingress controller, along with corresponding access control, on an existing cluster. It then describes how to use the ingress controller with an example hello-world backend, and how to verify that the ingress controller is working as expected.
Example Components
The example includes an ingress controller and a hello-world backend.
Ingress Controller Components
The ingress controller comprises:
- An ingress controller deployment called nginx-ingress-controller. The deployment deploys an image that contains the binary for the ingress controller and Nginx. The binary manipulates and reloads the /etc/nginx/nginx.conf configuration file when an ingress is created in Kubernetes. Nginx upstreams point to services that match specified selectors.
- An ingress controller service called ingress-nginx. The service exposes the ingress controller deployment as a LoadBalancer type service. Because Container Engine for Kubernetes uses an Oracle Cloud Infrastructure integration/cloud-provider, a load balancer is dynamically created with the correct nodes configured as a backend set.
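Because the load balancer is created by the cloud provider integration, its properties can typically be influenced by adding Oracle Cloud Infrastructure service annotations to the ingress controller service. The following is a minimal sketch, not part of the example itself; it assumes the oci-load-balancer-shape annotation is supported by your cluster version (check the Container Engine for Kubernetes documentation before using it):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  annotations:
    # Assumed annotation: requests a 100Mbps load balancer shape
    # instead of the default when the service is created.
    service.beta.kubernetes.io/oci-load-balancer-shape: "100Mbps"

The full service definition used by this example is shown in Setting Up the Example Ingress Controller below.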
Backend Components
The hello-world backend comprises:
- A backend deployment called docker-hello-world. The deployment handles default routes for health checks and 404 responses, using a stock hello-world image that serves the minimum required routes for a default backend.
- A backend service called docker-hello-world-svc. The service exposes the backend deployment for consumption by the ingress controller deployment.
Setting Up the Example Ingress Controller
In this section, you create the access rules for ingress. You then create the example ingress controller components, and confirm they are running.
Creating the Access Rules for the Ingress Controller
- If you haven't already done so, follow the steps to set up the cluster's kubeconfig configuration file and (if necessary) set the KUBECONFIG environment variable to point to the file. Note that you must set up your own kubeconfig file. You cannot access a cluster using a kubeconfig file that a different user set up. See Setting Up Cluster Access.
- If your Oracle Cloud Infrastructure user is a tenancy administrator, skip the next step and go straight to Creating the Service Account and the Ingress Controller.
- If your Oracle Cloud Infrastructure user is not a tenancy administrator, in a terminal window, grant the user the Kubernetes RBAC cluster-admin clusterrole on the cluster by entering:

kubectl create clusterrolebinding <my-cluster-admin-binding> --clusterrole=cluster-admin --user=<user-OCID>

where:

- <my-cluster-admin-binding> is a string of your choice to be used as the name for the binding between the user and the Kubernetes RBAC cluster-admin clusterrole. For example, jdoe_clst_adm
- <user-OCID> is the user's OCID (obtained from the Console). For example, ocid1.user.oc1..aaaaa...zutq (abbreviated for readability).

For example:

kubectl create clusterrolebinding jdoe_clst_adm --clusterrole=cluster-admin --user=ocid1.user.oc1..aaaaa...zutq
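To confirm the binding has taken effect, you can optionally run the following standard kubectl commands as the user that was just bound (the binding name is the example one from above):

# Verify the binding exists.
kubectl get clusterrolebinding jdoe_clst_adm
# Check whether the current user can perform any action in any namespace;
# cluster-admin should return "yes".
kubectl auth can-i '*' '*' --all-namespaces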
Creating the Service Account and the Ingress Controller
- Run the following command to create the nginx-ingress-controller ingress controller deployment, along with the Kubernetes RBAC roles and bindings:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/mandatory.yaml
- Create and save the file cloud-generic.yaml containing the following code to define the ingress-nginx ingress controller service as a load balancer service:

kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https
- Using the file you just saved, create the ingress-nginx ingress controller service by running the following command:

kubectl apply -f cloud-generic.yaml
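Before verifying the service, you can optionally confirm that the ingress controller deployment itself has rolled out successfully. The following is a standard kubectl command; the deployment and namespace names are the ones created by mandatory.yaml above:

# Blocks until the deployment reports all replicas available.
kubectl rollout status deployment/nginx-ingress-controller -n ingress-nginx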
Verifying the ingress-nginx Ingress Controller Service is Running as a Load Balancer Service
- View the list of running services by entering:

kubectl get svc -n ingress-nginx

The output from the above command shows the services that are running:

NAME            TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx   LoadBalancer   10.96.229.38   <pending>     80:30756/TCP,443:30118/TCP   1h

The EXTERNAL-IP for the ingress-nginx ingress controller service is shown as <pending> until the load balancer has been fully created in Oracle Cloud Infrastructure.
- Repeat the kubectl get svc command until an EXTERNAL-IP is shown for the ingress-nginx ingress controller service:

kubectl get svc -n ingress-nginx

The output from the above command shows the EXTERNAL-IP for the ingress-nginx ingress controller service:

NAME            TYPE           CLUSTER-IP     EXTERNAL-IP       PORT(S)                      AGE
ingress-nginx   LoadBalancer   10.96.229.38   129.146.214.219   80:30756/TCP,443:30118/TCP   1h
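If the EXTERNAL-IP remains <pending> for an extended period, inspecting the service can help diagnose load balancer provisioning. This is a standard kubectl command; the exact event messages depend on your tenancy and cluster:

# The Events section shows progress (or errors) reported by the
# cloud provider while it provisions the load balancer.
kubectl describe svc ingress-nginx -n ingress-nginx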
Creating the TLS Secret
A TLS secret is used for SSL termination on the ingress controller.
- Generate a new key and self-signed certificate. For example, by entering:

openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=nginxsvc/O=nginxsvc"

To generate the secret for this example, a self-signed certificate is used. While this is acceptable for testing, for production, use a certificate signed by a Certificate Authority.

Note
Under Windows, you may need to replace "/CN=nginxsvc/O=nginxsvc" with "//CN=nginxsvc\O=nginxsvc". For example, this is necessary if you run the openssl command from a Git Bash shell.

- Create the TLS secret by entering:

kubectl create secret tls tls-secret --key tls.key --cert tls.crt
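To confirm the secret was created with the expected keys, you can optionally inspect it. This is a standard kubectl command; describe shows the key names without printing the base64-encoded contents:

# The Data section should list tls.crt and tls.key with their sizes in bytes.
kubectl describe secret tls-secret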
Setting Up the Example Backend
In this section, you define a hello-world backend service and deployment.
Creating the docker-hello-world Service Definition
- Create the file hello-world-ingress.yaml containing the following code. This code uses a publicly available hello-world image from Docker Hub. You can substitute another image of your choice that can be run in a similar manner.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: docker-hello-world
  labels:
    app: docker-hello-world
spec:
  selector:
    matchLabels:
      app: docker-hello-world
  replicas: 3
  template:
    metadata:
      labels:
        app: docker-hello-world
    spec:
      containers:
      - name: docker-hello-world
        image: scottsbaldwin/docker-hello-world:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: docker-hello-world-svc
spec:
  selector:
    app: docker-hello-world
  ports:
    - port: 8088
      targetPort: 80
  type: ClusterIP

Note the docker-hello-world service's type is ClusterIP, rather than LoadBalancer, because this service will be proxied by the ingress-nginx ingress controller service. The docker-hello-world service does not need direct public access. Instead, public access is routed from the load balancer to the ingress controller, and from the ingress controller to the upstream service.

- Create the new hello-world deployment and service on nodes in the cluster by running the following command:

kubectl create -f hello-world-ingress.yaml
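You can optionally confirm that the three backend replicas are running and that the service resolves to them. These are standard kubectl commands; the pod names will differ in your cluster:

# Three docker-hello-world pods should show a Running status.
kubectl get pods -l app=docker-hello-world
# The Endpoints line should list three pod IP addresses on port 80.
kubectl describe svc docker-hello-world-svc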
Using the Example Ingress Controller to Access the Example Backend
In this section, you create an ingress to access the backend using the ingress controller.
Creating the Ingress Resource
- Create the file ingress.yaml and populate it with this code:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: hello-world-ing
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - secretName: tls-secret
  rules:
  - http:
      paths:
      - backend:
          serviceName: docker-hello-world-svc
          servicePort: 8088
- Create the resource by entering:

kubectl create -f ingress.yaml
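To confirm the ingress was created and routes to the backend service, you can optionally inspect it with a standard kubectl command:

# The output should show docker-hello-world-svc:8088 as the backend
# and tls-secret under TLS.
kubectl describe ingress hello-world-ing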
Verifying that the Example Components are Working as Expected
In this section, you confirm that all of the example components have been successfully created and are operating as expected. The docker-hello-world-svc service should be running as a ClusterIP service, and the ingress-nginx service should be running as a LoadBalancer service. Requests sent to the ingress controller should be routed to nodes in the cluster.
Obtaining the External IP Address of the Load Balancer
To confirm the ingress-nginx service is running as a LoadBalancer service, obtain its external IP address by entering:
kubectl get svc --all-namespaces
The output from the above command shows the services that are running:
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default docker-hello-world-svc ClusterIP 10.96.83.247 <none> 8088/TCP 16s
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 1h
ingress-nginx ingress-nginx LoadBalancer 10.96.229.38 129.146.214.219 80:30756/TCP,443:30118/TCP 5m
kube-system kube-dns ClusterIP 10.96.5.5 <none> 53/UDP,53/TCP 1h
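For convenience in the cURL steps that follow, you can capture the external IP address in a shell variable. This is a minimal sketch assuming a bash-like shell; EXTERNAL_IP is a variable name chosen for this example:

# Extract the load balancer's public IP address from the service status.
EXTERNAL_IP=$(kubectl get svc ingress-nginx -n ingress-nginx \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo "$EXTERNAL_IP"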
Sending cURL Requests to the Load Balancer
- Use the external IP address of the ingress-nginx service (for example, 129.146.214.219) to send an http request using cURL by entering:

curl -I http://129.146.214.219

Example output from the above command:

HTTP/1.1 301 Moved Permanently
Via: 1.1 10.68.69.10 (McAfee Web Gateway 7.6.2.10.0.23236)
Date: Thu, 07 Sep 2017 15:20:16 GMT
Server: nginx/1.13.2
Location: https://129.146.214.219/
Content-Type: text/html
Content-Length: 185
Proxy-Connection: Keep-Alive
Strict-Transport-Security: max-age=15724800; includeSubDomains;

The 301 response and the Location header show that http traffic is being redirected to https.
- Either cURL against the https URL, or add the -L option to automatically follow the Location header. The -k option instructs cURL not to verify the SSL certificate. For example, enter:

curl -ikL http://129.146.214.219

Example output from the above command:

HTTP/1.1 301 Moved Permanently
Via: 1.1 10.68.69.10 (McAfee Web Gateway 7.6.2.10.0.23236)
Date: Thu, 07 Sep 2017 15:22:29 GMT
Server: nginx/1.13.2
Location: https://129.146.214.219/
Content-Type: text/html
Content-Length: 185
Proxy-Connection: Keep-Alive
Strict-Transport-Security: max-age=15724800; includeSubDomains;

HTTP/1.0 200 Connection established

HTTP/1.1 200 OK
Server: nginx/1.13.2
Date: Thu, 07 Sep 2017 15:22:30 GMT
Content-Type: text/html
Content-Length: 71
Connection: keep-alive
Last-Modified: Thu, 07 Sep 2017 15:17:24 GMT
ETag: "59b16304-47"
Accept-Ranges: bytes
Strict-Transport-Security: max-age=15724800; includeSubDomains;

<h1>Hello webhook world from: docker-hello-world-1732906117-0ztkm</h1>

The last line of the output shows the HTML that is returned from the pod whose hostname is docker-hello-world-1732906117-0ztkm.

- Issue the cURL request several times to see the hostname in the HTML output change, demonstrating that load balancing is occurring:

$ curl -k https://129.146.214.219
<h1>Hello webhook world from: docker-hello-world-1732906117-6115l</h1>
$ curl -k https://129.146.214.219
<h1>Hello webhook world from: docker-hello-world-1732906117-7r89v</h1>
$ curl -k https://129.146.214.219
<h1>Hello webhook world from: docker-hello-world-1732906117-0ztkm</h1>
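The same check can be scripted. The following is a minimal sketch, assuming a bash-like shell and the hypothetical EXTERNAL_IP variable captured in Obtaining the External IP Address of the Load Balancer above:

# Send five requests; the responding pod's hostname should vary
# across the three docker-hello-world replicas.
for i in 1 2 3 4 5; do
  curl -sk "https://$EXTERNAL_IP"
done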
Inspecting nginx.conf
The nginx-ingress-controller ingress controller deployment manipulates the nginx.conf file in the pod within which it is running.
- Find the name of the pod running the nginx-ingress-controller ingress controller deployment by entering:

kubectl get po -n ingress-nginx

The output from the above command shows the name of the pod running the nginx-ingress-controller ingress controller:

NAME                                       READY     STATUS    RESTARTS   AGE
nginx-ingress-controller-110676328-h86xg   1/1       Running   0          1h
- Use the name of the pod running the nginx-ingress-controller ingress controller deployment to show the contents of nginx.conf by entering the following kubectl exec command:

kubectl exec -n ingress-nginx -it nginx-ingress-controller-110676328-h86xg -- cat /etc/nginx/nginx.conf
- Look for proxy_pass in the output. There will be one occurrence for the default backend, and another that looks similar to:

proxy_pass http://upstream_balancer;

This shows that Nginx is proxying requests to an upstream called upstream_balancer.

- Locate the upstream definition in the output. It will look similar to:

upstream upstream_balancer {
        server 0.0.0.1:1234; # placeholder
        balancer_by_lua_block {
                tcp_udp_balancer.balance()
        }
}

The upstream is proxied via Lua: the server entry is only a placeholder, and the actual backend endpoints are selected at runtime by the Lua balancer code.
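To jump straight to these directives instead of paging through the whole file, you can filter the output. This is a minimal sketch; the pod name is the example one from above:

# Print each proxy_pass directive and the upstream definition, with line numbers.
kubectl exec -n ingress-nginx nginx-ingress-controller-110676328-h86xg -- \
  grep -n -e 'proxy_pass' -e 'upstream upstream_balancer' /etc/nginx/nginx.conf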