2.3 Setting Up the Master Node

Before you begin, ensure you have satisfied the requirements in Section 2.2.5, “Oracle Container Registry Requirements”. Then on the host that you are configuring as the master node, install the kubeadm package and its dependencies:

# yum install kubeadm kubelet kubectl

As root, run kubeadm-setup.sh up to add the host as a master node:

# kubeadm-setup.sh up
Checking kubelet and kubectl RPM ...
Starting to initialize master node ...
Checking if env is ready ... 
Checking whether docker can pull busybox image ...
Checking access to container-registry.oracle.com/kubernetes...
Trying to pull repository container-registry.oracle.com/kube-proxy ...
v1.12.5: Pulling from container-registry.oracle.com/kube-proxy
Digest: sha256:9f57fd95dc9c5918591930b2316474d10aca262b5c89bba588f45c1b96ba6f8b
Status: Image is up to date for container-registry.oracle.com/kube-proxy:v1.12.5
Checking whether docker can run container ...
Checking firewalld settings ...
Checking iptables default rule ...
Checking br_netfilter module ...
Checking sysctl variables ...
Enabling kubelet ...
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service 
to /etc/systemd/system/kubelet.service.
Check successful, ready to run 'up' command ...
Waiting for kubeadm to setup master cluster...
Please wait ...
\ - 80% completed
Waiting for the control plane to become ready ...
100% completed
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds created

Installing kubernetes-dashboard ...

secret/kubernetes-dashboard-certs created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created
Enabling kubectl-proxy.service ...
Starting kubectl-proxy.service ...


Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You can now join any number of machines by running the following on each node
as root:

  export KUBE_REPO_PREFIX=container-registry.oracle.com/kubernetes && kubeadm-setup.sh join \
 --token 8tipwo.tst0nvf7wcaqjcj0 --discovery-token-ca-cert-hash \

If you do not specify a network range, the script uses its default network range to configure the internal network used for pod interaction within the cluster. To specify an alternative network range, run the script with the --pod-network-cidr option, providing the range in CIDR notation. For example, to set the network to use the 10.244.0.0/16 range:

# kubeadm-setup.sh up --pod-network-cidr 10.244.0.0/16
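Before rerunning the setup with a custom range, it can help to sanity-check the CIDR string for typos. The following is an illustrative bash helper; is_valid_cidr is a hypothetical name and is not part of kubeadm-setup.sh:

```shell
# Hypothetical helper: validate an IPv4 CIDR string such as the value
# passed to --pod-network-cidr. Not part of kubeadm-setup.sh.
is_valid_cidr() {
  local cidr="$1" ip prefix octet
  case "$cidr" in
    */*) ip=${cidr%/*}; prefix=${cidr#*/} ;;
    *)   return 1 ;;            # no "/" separator at all
  esac
  # Prefix length must be a number between 0 and 32.
  echo "$prefix" | grep -Eq '^[0-9]{1,2}$' || return 1
  [ "$prefix" -le 32 ] || return 1
  # Exactly four dot-separated octets, each between 0 and 255.
  echo "$ip" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}$' || return 1
  for octet in ${ip//./ }; do
    [ "$octet" -le 255 ] || return 1
  done
  return 0
}

is_valid_cidr "10.244.0.0/16" && echo "10.244.0.0/16 is a valid CIDR"
```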

The kubeadm-setup.sh script checks whether the host meets all of the requirements before it sets up the master node. If a requirement is not met, an error message is displayed, along with the recommended fix. You should fix the errors before running the script again.
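The transcript above shows some of these checks (the br_netfilter module and sysctl variables). As a rough illustration of the kind of test involved, and not the script's actual code, the two sketched helpers below capture the bridge-netfilter requirements; they take the system state as arguments so they can be exercised anywhere:

```shell
# Illustrative sketches of two preflight checks, written as functions
# that take the relevant system state as arguments. These are not
# kubeadm-setup.sh's actual implementation.
br_netfilter_loaded() {
  # $1: the output of `lsmod`
  echo "$1" | grep -q '^br_netfilter'
}

bridge_sysctl_ok() {
  # $1: the contents of /proc/sys/net/bridge/bridge-nf-call-iptables
  [ "$1" = "1" ]
}

# On a real host, the checks would be driven like this:
#   br_netfilter_loaded "$(lsmod)" || echo "load br_netfilter first"
#   bridge_sysctl_ok "$(cat /proc/sys/net/bridge/bridge-nf-call-iptables)" ||
#     echo "set net.bridge.bridge-nf-call-iptables=1"
```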

The systemd service for the kubelet is automatically enabled on the host, so that the kubelet, and therefore the master node, starts whenever the system boots.

The output of the kubeadm-setup.sh script provides the command for adding worker nodes to the cluster. Take note of this command for later use. The token that is shown in the command is only valid for 24 hours. See Section 2.4, “Setting Up a Worker Node” for more details about tokens.
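The token in the join command follows kubeadm's bootstrap-token format, [a-z0-9]{6}.[a-z0-9]{16}. A quick format check before pasting a saved token into a join command can catch copy errors; is_valid_token below is a hypothetical helper, not part of the product:

```shell
# Hypothetical helper: check that a saved string matches the kubeadm
# bootstrap-token format ([a-z0-9]{6}.[a-z0-9]{16}) before using it in
# a join command. This does not verify that the token is still valid on
# the master (tokens expire after 24 hours).
is_valid_token() {
  echo "$1" | grep -Eq '^[a-z0-9]{6}\.[a-z0-9]{16}$'
}

# Token taken from the transcript above, for illustration only.
if is_valid_token "8tipwo.tst0nvf7wcaqjcj0"; then
  echo "token format OK"
else
  echo "token format invalid"
fi
```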

Preparing to Use Kubernetes as a Regular User

To use the Kubernetes cluster as a regular user, perform the following steps on the master node:

  1. Create the .kube subdirectory in your home directory:

    $ mkdir -p $HOME/.kube
  2. Create a copy of the Kubernetes admin.conf file in the .kube directory:

    $ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  3. Change the ownership of the file to match your regular user profile:

    $ sudo chown $(id -u):$(id -g) $HOME/.kube/config
  4. Export the path to the file for the KUBECONFIG environment variable:

    $ export KUBECONFIG=$HOME/.kube/config

    You cannot use the kubectl command unless this environment variable points to the correct configuration file. Export the KUBECONFIG variable on each subsequent login so that the kubectl and kubeadm commands use the correct admin.conf file; otherwise, these commands might not behave as expected after a reboot or a new login.

    For example, append the export line to your .bashrc:

    $ echo 'export KUBECONFIG=$HOME/.kube/config' >> $HOME/.bashrc
  5. Verify that you can use the kubectl command.

    Kubernetes runs many of the services that manage cluster configuration as Docker containers, each running in a Kubernetes pod. You can view these pods by running the following command on the master node:

    $ kubectl get pods -n kube-system
    NAME                                        READY   STATUS    RESTARTS   AGE
    coredns-6c77847dcf-77grm                    1/1     Running   0          5m16s
    coredns-6c77847dcf-vtk8k                    1/1     Running   0          5m16s
    etcd-master.example.com                     1/1     Running   0          4m26s
    kube-apiserver-master.example.com           1/1     Running   0          4m46s
    kube-controller-manager-master.example.com  1/1     Running   0          4m31s
    kube-flannel-ds-glwgx                       1/1     Running   0          5m13s
    kube-proxy-tv2mj                            1/1     Running   0          5m16s
    kube-scheduler-master.example.com           1/1     Running   0          4m32s
    kubernetes-dashboard-64458f66b6-q8dzh       1/1     Running   0          5m13s
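The four kubeconfig steps above can be collected into one helper. The sketch below factors them into a function that takes the source file and target home directory as arguments, so it can be tried safely against a scratch directory; on a real master node the copy and chown require sudo exactly as shown in steps 2 and 3, and setup_kubeconfig is a hypothetical name:

```shell
# Hypothetical helper collecting steps 1-4 above. It deliberately takes
# the admin.conf path and target home directory as arguments; on a real
# master node you would pass /etc/kubernetes/admin.conf and $HOME, and
# run the copy (and the chown of step 3) with sudo as shown above.
setup_kubeconfig() {
  local src="$1" home="$2"
  mkdir -p "$home/.kube"                          # step 1
  cp "$src" "$home/.kube/config"                  # step 2 (sudo cp -i on a real node)
  # Step 3 (chown) is only needed when the copy is performed as root.
  grep -q 'KUBECONFIG=\$HOME/.kube/config' "$home/.bashrc" 2>/dev/null ||
    echo 'export KUBECONFIG=$HOME/.kube/config' >> "$home/.bashrc"  # step 4
  echo "$home/.kube/config"
}

# Safe demonstration against a throwaway directory:
demo=$(mktemp -d)
echo "dummy-kubeconfig" > "$demo/admin.conf"
setup_kubeconfig "$demo/admin.conf" "$demo/home"
```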