Note:
- This tutorial is available in an Oracle-provided free lab environment.
- It uses example values for Oracle Cloud Infrastructure credentials, tenancy, and compartments. When completing your lab, substitute these values with ones specific to your cloud environment.
Use and Configure CoreDNS with Oracle Cloud Native Environment
Introduction
Domain Name System (DNS) provides a way to translate hostnames to IP addresses for systems located anywhere on your network or the Internet. CoreDNS provides the same DNS service within your Kubernetes cluster, ensuring that all deployments on your Kubernetes cluster have a reliable communication mechanism between the pods and services they use. CoreDNS resolves requests for hostnames to IP addresses within the Oracle Cloud Native Environment cluster.
Objectives
In this tutorial, you will learn:
- How to configure and use CoreDNS
- Where to locate the CoreDNS configuration files and how to alter them
Prerequisites
- Minimum of a 3-node Oracle Cloud Native Environment cluster:
  - Operator node
  - Kubernetes control plane node
  - Kubernetes worker node
- Each system should have Oracle Linux installed and configured with:
  - An Oracle user account (used during the installation) with sudo access
  - Key-based SSH, also known as password-less SSH, between the hosts
  - Installation of Oracle Cloud Native Environment
Deploy Oracle Cloud Native Environment
Note: If running in your own tenancy, read the linux-virt-labs
GitHub project README.md and complete the prerequisites before deploying the lab environment.
- Open a terminal on the Luna Desktop.
- Clone the linux-virt-labs GitHub project.
git clone https://github.com/oracle-devrel/linux-virt-labs.git
- Change into the working directory.
cd linux-virt-labs/ocne
- Install the required collections.
ansible-galaxy collection install -r requirements.yml
- Deploy the lab environment.
ansible-playbook create_instance.yml -e localhost_python_interpreter="/usr/bin/python3.6"
The free lab environment requires the extra variable localhost_python_interpreter, which sets ansible_python_interpreter for plays running on localhost. This variable is needed because the environment installs the RPM package for the Oracle Cloud Infrastructure SDK for Python, located under the python3.6 modules.

Important: Wait for the playbook to run successfully and reach the pause task. At this stage of the playbook, the installation of Oracle Cloud Native Environment is complete, and the instances are ready. Take note of the previous play, which prints the public and private IP addresses of the nodes it deploys and any other deployment information needed while running the lab.
Confirm the Number of Nodes
It helps to know the number and names of nodes in your Kubernetes cluster.
- Open a terminal and connect via ssh to the ocne-control-01 node.
ssh oracle@<ip_address_of_node>
- List the nodes in the cluster.
kubectl get nodes
The output shows the control plane and worker nodes in a Ready state along with their current Kubernetes version.
How To Configure CoreDNS
Knowing how to configure CoreDNS and how to change that configuration helps you understand how DNS works within your Kubernetes cluster.
IMPORTANT: Part of the changes we’ll make in this tutorial is to modify the kube-dns Service provided by CoreDNS. The kube-dns Service spec.clusterIP field is immutable. Kubernetes protects this field to prevent changes that may disrupt a working cluster. You must remove and add this resource if you need to change an immutable field, thus risking an outage in the cluster.
Change the CIDR Block in the ClusterConfiguration
The ClusterConfiguration includes various options that affect the configuration of individual components, such as kube-apiserver, kube-scheduler, kube-controller-manager, CoreDNS, etcd, and kube-proxy. Changes to the configuration must be reflected on node components manually. Updating a file in /etc/kubernetes/manifests informs the kubelet to restart the static Pod for the corresponding component. Kubernetes documentation recommends making these changes one node at a time to leave the cluster without downtime.
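Because a typo in kube-apiserver.yaml can take the API server down, it can help to rehearse the edit on a scratch copy first. The sketch below is hypothetical: the file path and flag value come from this tutorial's steps, while the scratch file itself is a made-up stand-in for the live manifest.

```shell
# Rehearse the manifest edit on a throwaway copy before touching
# /etc/kubernetes/manifests/kube-apiserver.yaml on a real node.
scratch=$(mktemp)
printf -- '    - --service-cluster-ip-range=10.96.0.0/12\n' > "$scratch"

# The same substitution the tutorial later applies to the live manifest.
sed -i 's/10.96.0.0/100.96.0.0/g' "$scratch"

# Inspect the result before repeating the edit for real.
grep -- 'service-cluster-ip-range' "$scratch"
```

Once the output shows the flag value you expect, you can apply the identical sed command to the real manifest with confidence.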
- Review the currently assigned CIDR block.
sudo cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep service-cluster-ip-range
Example Output:
- --service-cluster-ip-range=10.96.0.0/12
- Update the CIDR range.
sudo sed -i "s/10.96.0.0/100.96.0.0/g" /etc/kubernetes/manifests/kube-apiserver.yaml
Updating the file causes the API server to restart automatically, and it takes 2-3 minutes to become available again for any new kubectl commands. You can monitor when the API server becomes available with watch kubectl get nodes.
- Confirm the updated setting.
sudo cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep service-cluster-ip-range
Example Output:
- --service-cluster-ip-range=100.96.0.0/12
The IP address now reflects the change to 100.96.0.0/12.
Update the Cluster DNS Service's IP Address
- Confirm the current IP address used by the cluster DNS Service.
kubectl -n kube-system get service
Example Output:
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   1h12m
Note: If you see a message similar to The connection to the server 10.0.0.52:6443 was refused - did you specify the right host or port?, it means that the API Server has not restarted yet. Retry the command until it succeeds.
- Get the complete resource spec for the Service.
kubectl -n kube-system get svc kube-dns -o yaml > kube-dns-svc.yaml
- Create a patch file containing the changes.
cat << EOF | tee patch-kube-dns-svc.yaml > /dev/null
spec:
  clusterIP: 100.96.0.10
  clusterIPs:
  - 100.96.0.10
EOF
- Apply the patch to the local spec file.
kubectl patch -f kube-dns-svc.yaml --local=true --patch-file patch-kube-dns-svc.yaml -o yaml > kube-dns-svc-patched.yaml && mv kube-dns-svc-patched.yaml kube-dns-svc.yaml
Writing the patched output to a temporary file and then moving it into place avoids truncating kube-dns-svc.yaml while kubectl is still reading it.
- Force replace the Service.
This action causes kubectl to remove and then re-create the Service.
kubectl replace --force -f kube-dns-svc.yaml
- Confirm the new IP address is in use.
kubectl -n kube-system get service
Example Output:
NAME       TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   100.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   4m11s
Apply Configuration Changes
Update Kubelet Configuration
The kubelet is a core component of Kubernetes, responsible for managing and coordinating Pods and nodes. Its duties include Pod deployment, resource management, and health monitoring, all of which contribute to a cluster's operational stability. By handling communication between the control plane and the nodes, continuously monitoring containers, and performing automated recovery, the kubelet ensures efficient resource allocation and improves cluster resilience.
- Get the current Kubelet YAML definition.
sudo cat /var/lib/kubelet/config.yaml
- Change the value for clusterDNS.
sudo sed -i "s/10.96.0.10/100.96.0.10/g" /var/lib/kubelet/config.yaml
- Update and replace the Kubelet ConfigMap.
kubectl -n kube-system get cm kubelet-config -o yaml | sed "s/10.96.0.10/100.96.0.10/g" | kubectl replace -f -
- Verify the changes to the Kubelet ConfigMap.
kubectl -n kube-system get configmap kubelet-config -o yaml
Update Cluster Configuration
- Update and replace the Kubeadm ConfigMap.
kubectl -n kube-system get cm kubeadm-config -o yaml | sed "s/10.96.0.0/100.96.0.0/g" | kubectl replace -f -
- Verify the changes to the Kubeadm ConfigMap.
kubectl -n kube-system get configmap kubeadm-config -o yaml
Reload the Kubelet Daemon and Restart the Kubelet Service
The Kubelet process executes as a daemon on this node. Reload the configuration for it to take effect.
- Update the Kubelet Service.
sudo kubeadm upgrade node phase kubelet-config
Example Output:
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[upgrade] Backing up kubelet config file to /etc/kubernetes/tmp/kubeadm-kubelet-config3792300054/config.yaml
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.
- Restart the Daemon and Kubelet Services.
sudo systemctl daemon-reload; sudo systemctl restart kubelet
This command produces no output. If you do not see any error messages, the kubelet service restarted successfully.
- Update the Kubelet Configuration on the remaining nodes.
for host in ocne-worker-01 ocne-worker-02
do
  printf "====== $host ======\n\n"
  ssh $host \
    'sudo sed -i "s/10.96.0.10/100.96.0.10/g" /var/lib/kubelet/config.yaml'
done
- Restart the Kubelet Service on those nodes.
for host in ocne-worker-01 ocne-worker-02
do
  printf "====== $host ======\n\n"
  ssh $host \
    "sudo systemctl daemon-reload; sudo systemctl restart kubelet"
done
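The per-node loop above can be rehearsed locally by substituting echo for ssh, so you can verify the exact command text before running it against real hosts. This is a dry-run sketch; the host names are the ones used in this lab.

```shell
# Dry run: print, rather than execute, the remote command for each node.
for host in ocne-worker-01 ocne-worker-02
do
  printf "====== %s ======\n\n" "$host"
  echo ssh "$host" \
    "sudo systemctl daemon-reload; sudo systemctl restart kubelet"
done
```

Once the printed commands look correct, replace `echo ssh` with `ssh` to run them for real.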
Confirm that the Configuration Change is Working
You have updated the cluster nodes to use the new CoreDNS network CIDR definition. Next, you will confirm that a new deployment returns the correct DNS IP address and can resolve an external website.
- Deploy a new Pod.
kubectl run netshoot --image=nicolaka/netshoot --command -- sleep 3600
- Check the status of the pod deployment.
kubectl get pods
kubectl get pods
Keep checking until the netshoot pod reports a Running status.
- Confirm local DNS works.
kubectl exec -it netshoot -- nslookup kubernetes.default
Example Output:
[oracle@ocne-control-01 ~]$ kubectl exec -it netshoot -- nslookup kubernetes.default
Server:    100.96.0.10
Address:   100.96.0.10#53

Name:   kubernetes.default.svc.cluster.local
Address: 10.96.0.1
NOTE: If you see the following output, it means the netshoot container is still deploying. Retry the command until it works.
[oracle@ocne-control-01 ~]$ kubectl exec -it netshoot -- nslookup kubernetes.default
error: unable to upgrade connection: container not found ("netshoot")
The kubernetes.default.svc.cluster.local Service still reports its address in the 10.96.0.0/12 range. This address retention is because the Service keeps its existing ClusterIP until you delete and recreate the Service.
- Confirm the Kubernetes Cluster DNS configuration is updated.
kubectl exec -it netshoot -- cat /etc/resolv.conf
Example Output:
[oracle@ocne-control-01 ~]$ kubectl exec -it netshoot -- cat /etc/resolv.conf
search default.svc.cluster.local svc.cluster.local cluster.local vcn.oraclevcn.com lv.vcn.oraclevcn.com
nameserver 100.96.0.10
options ndots:5
- Confirm external DNS lookup works.
kubectl exec -it netshoot -- nslookup example.com
Example Output:
[oracle@ocne-control-01 ~]$ kubectl exec -it netshoot -- nslookup example.com
Server:    100.96.0.10
Address:   100.96.0.10#53

Non-authoritative answer:
Name:   example.com
Address: 93.184.215.14
Name:   example.com
Address: 2606:2800:21f:cb07:6820:80da:af6b:8b2c
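The mixed addresses in the nslookup output above (a 100.96.0.10 DNS server answering with a 10.96.0.1 Service address) come down to CIDR membership. The hedged sketch below checks /12 membership with plain shell arithmetic; the function name is made up for illustration, and the addresses are the ones from this tutorial.

```shell
# Sketch: test whether an IPv4 address falls inside a /12 block using
# shell arithmetic only. A /12 covers the first octet plus the top four
# bits of the second octet, so 10.96.0.0/12 spans 10.96.0.0-10.111.255.255.
in_slash12() {  # usage: in_slash12 <address> <network-address>
  a1=${1%%.*}; rest=${1#*.}; a2=${rest%%.*}
  n1=${2%%.*}; rest=${2#*.}; n2=${rest%%.*}
  # Compare the first octet and the top four bits (mask 240) of the second.
  [ "$a1" -eq "$n1" ] && [ $((a2 & 240)) -eq $((n2 & 240)) ]
}
in_slash12 10.96.0.1 10.96.0.0    && echo "10.96.0.1 is in 10.96.0.0/12"
in_slash12 100.96.0.10 100.96.0.0 && echo "100.96.0.10 is in 100.96.0.0/12"
```

So kubernetes.default (10.96.0.1) still sits in the old range, while the new kube-dns address (100.96.0.10) sits in the new one.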
Troubleshooting Strategies
If the CoreDNS service is not working as expected, the following steps will help to identify the underlying problem. If you need to update anything, wait for the CoreDNS process to restart, and then repeat the steps in the last section to confirm the Kubernetes DNS service is running.
- Check the CoreDNS logs for any errors.
kubectl logs --namespace=kube-system -l k8s-app=kube-dns
Example Output (showing healthy logs):
[oracle@ocne-control-01 ~]$ kubectl logs --namespace=kube-system -l k8s-app=kube-dns
.:53
[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
CoreDNS-1.10.1
linux/amd64, go1.21.7, 055b2c3
.:53
[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
CoreDNS-1.10.1
linux/amd64, go1.21.7, 055b2c3
- Confirm the Kubernetes DNS service is running.
kubectl get service --namespace=kube-system
Example Output:
[oracle@ocne-control-01 ~]$ kubectl get service --namespace=kube-system
NAME       TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   100.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   1h
- Verify the DNS endpoints are exposed.
A Kubernetes service references an endpoint resource so that the service has a record of the internal IPs of Pods with which to communicate. Endpoints consist of an IP address and port (one pair per Pod) that the service manages itself, but you can manage them manually if necessary.
kubectl get endpoints kube-dns --namespace=kube-system
Example Output:
[oracle@ocne-control-01 ~]$ kubectl get endpoints kube-dns --namespace=kube-system
NAME       ENDPOINTS                                               AGE
kube-dns   10.244.1.2:53,10.244.1.3:53,10.244.1.2:53 + 3 more...   1h1m
Information: An endpoint is the dynamically assigned IP address and port defined with a Service deployment (one endpoint per pod the service routes traffic to). If no endpoints are output, check out the debugging Services documentation.
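The ENDPOINTS column packs several address:port pairs into one comma-separated string. A quick way to read it one pair per line is a simple `tr` sketch; the sample string below is illustrative, modeled on the output above (the 9153 metrics port is an assumed example).

```shell
# Split a kubectl ENDPOINTS-style string into one address:port pair per line.
endpoints='10.244.1.2:53,10.244.1.3:53,10.244.1.2:9153'
echo "$endpoints" | tr ',' '\n'
```

Each line is one Pod address and port that the Service can route DNS traffic to.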
- Get the current ClusterRole.
CoreDNS needs to be able to list the service and endpoint resources properly to resolve names.
kubectl describe clusterrole system:coredns -n kube-system
Example Output:
[oracle@ocne-control-01 ~]$ kubectl describe clusterrole system:coredns -n kube-system
Name:         system:coredns
Labels:       <none>
Annotations:  <none>
PolicyRule:
  Resources                        Non-Resource URLs  Resource Names  Verbs
  ---------                        -----------------  --------------  -----
  endpoints                        []                 []              [list watch]
  namespaces                       []                 []              [list watch]
  pods                             []                 []              [list watch]
  services                         []                 []              [list watch]
  endpointslices.discovery.k8s.io  []                 []              [list watch]
If any expected permission is missing, edit the ClusterRole to add it. The kubectl edit clusterrole system:coredns -n kube-system command opens the ClusterRole in an editor so you can add any missing permissions.
Use CoreDNS
Oracle Cloud Native Environment automatically provisions the internal services Kubernetes uses so they are up and running when the cluster starts. The following steps illustrate how CoreDNS resolves requests so that deployments can work together as needed.
Scale CoreDNS
- Confirm the number of CoreDNS Pods running.
kubectl get pods --namespace kube-system -l k8s-app=kube-dns
Example Output:
[oracle@ocne-control-01 ~]$ kubectl get pods --namespace kube-system -l k8s-app=kube-dns
NAME                       READY   STATUS    RESTARTS   AGE
coredns-676b47d668-b8pw2   1/1     Running   0          1h19m
coredns-676b47d668-xrmzf   1/1     Running   0          1h19m
This output shows two CoreDNS replica Pods defined in the default deployment. If you see no CoreDNS Pods running, or the STATUS column does not report Running, CoreDNS is not running in your cluster.
- View the CoreDNS Deployment.
kubectl -n kube-system get deploy
Example Output:
[oracle@ocne-control-01 ~]$ kubectl -n kube-system get deployment
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
coredns   2/2     2            2           1h21m
- Scale CoreDNS to three Pod replicas.
Because CoreDNS is a deployment, it can scale up or down as required, allowing you, as the administrator, to ensure DNS resolution within the Kubernetes cluster remains performant.
kubectl -n kube-system scale deploy coredns --replicas 3
Example Output:
[oracle@ocne-control-01 ~]$ kubectl -n kube-system scale deploy coredns --replicas 3
deployment.apps/coredns scaled
- Requery the number of CoreDNS Pod replicas running.
kubectl get pods --namespace kube-system -l k8s-app=kube-dns
Example Output:
[oracle@ocne-control-01 ~]$ kubectl get pods --namespace kube-system -l k8s-app=kube-dns
NAME                       READY   STATUS    RESTARTS   AGE
coredns-676b47d668-57tzz   1/1     Running   0          5s
coredns-676b47d668-b8pw2   1/1     Running   0          1h19m
coredns-676b47d668-xrmzf   1/1     Running   0          1h19m
NOTE: If the new Pod shows a STATUS of ContainerCreating, repeat the command a few more times until the STATUS shows as Running.

There are now three CoreDNS replica Pods running, demonstrating a simple way to boost the performance of DNS queries within your Kubernetes cluster.
How Pods Communicate using CoreDNS
Application Pods deployed into a Kubernetes cluster need to be able to communicate with each other. This section demonstrates how CoreDNS enables that communication by showing how to access Pods in different namespaces. The following steps deploy an Apache web server into the default namespace, create a new namespace, and then deploy an Nginx web server into the newly created namespace. You will then open a shell in the Nginx Pod and execute a command against the Apache Pod to show how CoreDNS uses hostnames to resolve requests.
- Deploy an Apache web server to the default namespace.
kubectl create deploy apache --image httpd
- Expose the deployment.
kubectl expose deploy apache --name apache-svc --port 80
- Confirm the Apache Pod is running.
kubectl -n default get pods
Example Output:
[oracle@ocne-control-01 ~]$ kubectl -n default get pods
NAME                     READY   STATUS    RESTARTS        AGE
apache-f9489c7dc-s6b5j   1/1     Running   0               38s
netshoot                 1/1     Running   1 (9m33s ago)   69m
Note: The output shows the newly deployed Apache web server and the previously deployed netshoot container.
- Create a new namespace.
kubectl create namespace mytest
- Confirm the namespace exists.
kubectl get namespace
Example Output:
[oracle@ocne-control-01 ~]$ kubectl get namespace
NAME                   STATUS   AGE
default                Active   82m
kube-node-lease        Active   82m
kube-public            Active   82m
kube-system            Active   82m
kubernetes-dashboard   Active   81m
mytest                 Active   13s
ocne-modules           Active   81m
- Deploy a new Nginx web server into the namespace.
kubectl --namespace mytest create deploy nginx --image nginx
- Expose the deployment.
kubectl --namespace mytest expose deploy nginx --name nginx-svc --port 80
- Confirm the Nginx Pod is running.
kubectl -n mytest get pods
Example Output:
[oracle@ocne-control-01 ~]$ kubectl -n mytest get pods
NAME                    READY   STATUS    RESTARTS   AGE
nginx-9498d8f59-cmsrk   1/1     Running   0          32s
- Exec into the Nginx Container.
kubectl exec -it -n mytest nginx-<USE-THE NAME IN THE LAST OUTPUT> -- /bin/bash
Note: You need to use the name of the Nginx Pod returned in the previous output.
- Use cURL to access the previously deployed Apache web server deployment.
curl apache-svc
Example Output:
root@nginx-7854ff8877-vwwc7:/# curl apache-svc
curl: (6) Could not resolve host: apache-svc
Does this mean that CoreDNS is not working correctly? No, the actual issue is that CoreDNS looks in the deployment's namespace by default, which is mytest in this specific case. If you want to access a service in a different namespace, include that service's namespace in the cURL request.
- Retry the request with a properly formed service name.
curl apache-svc.default
Example Output:
root@nginx-7854ff8877-vwwc7:/# curl apache-svc.default
<html><body><h1>It works!</h1></body></html>
This request worked because you deployed the Apache web server's Service (apache-svc) into the default namespace. Therefore, you needed to add .default to the cURL request for it to succeed.

More Information: CoreDNS searches based on the /etc/resolv.conf contents of the deployed Pod. Stay within the container shell and run this:
cat /etc/resolv.conf
Example Output:
root@nginx-7854ff8877-vwwc7:/# cat /etc/resolv.conf
search mytest.svc.cluster.local svc.cluster.local cluster.local vcn.oraclevcn.com lv.vcn.oraclevcn.com
nameserver 100.96.0.10
options ndots:5
Notice that the values after the line starting with search tell CoreDNS which domains to search, starting with mytest.svc.cluster.local. This setting is why you can look up deployments within a namespace without including the namespace's name. The next search value listed is anything in the svc.cluster.local domain, which explains why you only had to include the deployment name and its namespace when using cURL. You could always use the Fully Qualified Domain Name (FQDN), such as apache-svc.default.svc.cluster.local, but that requires much more typing. For more information about how CoreDNS handles DNS name resolution for Kubernetes Services and Pods, look at the upstream documentation.
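The resolver's use of the search list can be sketched as a loop: each search domain is appended to the short name, in order, until a lookup succeeds. The function below only prints the candidate FQDNs it would try; the function name is illustrative, and the domains come from the resolv.conf example above.

```shell
# Print the candidate FQDNs a resolver would try for a short name,
# given the search domains from the Pod's /etc/resolv.conf.
expand_name() {  # usage: expand_name <short-name>
  for domain in mytest.svc.cluster.local svc.cluster.local cluster.local
  do
    echo "$1.$domain"
  done
}
expand_name apache-svc.default
```

The second candidate, apache-svc.default.svc.cluster.local, is the one that actually resolves, which is why `curl apache-svc.default` succeeds from the mytest namespace.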
Summary
Kubernetes uses CoreDNS to provide DNS services and Service Discovery within the Kubernetes cluster. Hopefully, you now better understand how DNS within a Kubernetes cluster works and how it can help you manage your application deployments.
For More Information
- Oracle Cloud Native Environment Documentation
- Oracle Cloud Native Environment Track
- Oracle Linux Training Station
- Reconfiguring a kubeadm cluster
More Learning Resources
Explore other labs on docs.oracle.com/learn or access more free learning content on the Oracle Learning YouTube channel. Additionally, visit education.oracle.com/learning-explorer to become an Oracle Learning Explorer.
For product documentation, visit Oracle Help Center.
F99957-01
June 2024