Oracle® Communications OC-CNE Installation Guide
Release 1.0
F16979-01

OCCNE Kubernetes Installer

These procedures provide the steps required to install the Kubernetes image onto all hosts via the Bastion Host using an occne/k8s_install container. Once completed, the configure procedure can be run.

Prerequisites

  1. All host servers where this VM is created are captured in OCCNE Inventory File Preparation.
  2. The host names, IP addresses, and network information assigned to this VM are captured in the OCCNE 1.0 Installation PreFlight Checklist.
  3. The Cluster Inventory File and SSH keys are present in the <cluster_name> folder in the /var/occne directory (see the example listing after this list).
  4. A docker image for 'k8s_install' must be available in the docker registry accessible by the Bastion Host. See OCCNE 1.0 - Installation Procedure.
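
The listing below is an illustrative sketch, not part of the procedure, of what the /var/occne/<cluster_name> directory is expected to contain. The hosts.ini inventory file and the .ssh/occne_id_rsa key are referenced later in this procedure; any other file names shown are assumptions and may differ in your deployment.

$ ls -a /var/occne/<cluster_name>/
.  ..  hosts.ini  .ssh
$ ls /var/occne/<cluster_name>/.ssh/
occne_id_rsa  occne_id_rsa.pub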

Limitations and Expectations

Steps to Perform OCCNE Kubernetes Installer

All steps are executable from an SSH application (such as PuTTY) on a laptop with access to the hosts via the Management Interface.

Table 3-14 Procedure to install OCCNE Kubernetes

Step # Procedure Description

1

Initial Configuration on the Bastion Host to Support the Kubernetes Install
  1. Log into the Bastion Host using the IP supplied from the OCCNE 1.0 Installation PreFlight Checklist: Complete VM IP Table.
  2. Verify that the entries in the hosts.ini file for occne_private_registry, occne_private_registry_address, occne_private_registry_port, and occne_k8s_binary_repo are correct. These fields must reflect the new Bastion Host IP and the repository names correctly (an illustrative example follows).
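
The snippet below is a hypothetical example of how these hosts.ini entries might look. The registry address mirrors the docker example later in this procedure; the registry name, port, and binary repository URL are placeholder assumptions and must be replaced with the values for your Bastion Host.

# Illustrative hosts.ini values only - replace with your deployment's actual values
occne_private_registry=bastion-registry
occne_private_registry_address=10.75.200.217
occne_private_registry_port=5000
occne_k8s_binary_repo=http://10.75.200.217/repos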

2

Execute the Kubernetes Install on the Hosts from the Bastion Host

Note:

The cluster_name field is derived from the hosts.ini file field: occne_cluster_name.

The <image_name>:<image_tag> represents the image in the Bastion Host docker image registry, as set up in the procedure OCCNE Configuration of the Bastion Host.
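
As an optional check that is not part of the original procedure, the standard Docker Registry v2 API can be queried from the Bastion Host to confirm the image and tag are present before starting; the registry address and the output shown are illustrative.

$ curl http://<docker_registry>/v2/_catalog
{"repositories":["k8s_install"]}
$ curl http://<docker_registry>/v2/k8s_install/tags/list
{"name":"k8s_install","tags":["1.0.1"]}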

Create a file named repo_remove.yaml in the /var/occne/<cluster_name> directory with the following content:
- hosts: k8s-cluster
  tasks:
  - name: Clean artifact path
    file:
      state: absent
      path: "/etc/yum.repos.d/docker.repo"

Start the k8s_install container with a bash shell:

$ docker run --rm -it --network host --cap-add=NET_ADMIN -v /var/occne/<cluster_name>/:/host -v /var/occne/:/var/occne:rw -e "OCCNEARGS=<k8s_args>" <docker_registry>/<image_name>:<image_tag> bash

For example:

$ docker run --rm -it --network host --cap-add=NET_ADMIN -v /var/occne/rainbow.lab.us.oracle.com/:/host -v /var/occne/:/var/occne:rw -e OCCNEARGS="-vv" 10.75.200.217:5000/k8s_install:1.0.1 bash

Run the following commands within the bash shell of the container:

$ sed -i /kubespray/cluster.yml -re '47,57d'

$ sed -i /kubespray/roles/container-engine/docker/templates/rh_docker.repo.j2 -re '10,17d'
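
If you want to review what will be removed, the affected line ranges can optionally be previewed with sed -n before running the sed -i commands above (this check is not part of the original procedure):

$ sed -n '47,57p' /kubespray/cluster.yml
$ sed -n '10,17p' /kubespray/roles/container-engine/docker/templates/rh_docker.repo.j2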

Run the following command:

$ ansible-playbook -i /kubespray/inventory/occne/hosts.ini --become --become-user=root --private-key /host/.ssh/occne_id_rsa /occne/kubespray/cluster.yml -vvvvv

For example, to run the repo_remove.yaml playbook created above:

$ ansible-playbook -i /kubespray/inventory/occne/hosts.ini --become --become-user=root --private-key /host/.ssh/occne_id_rsa /var/occne/rainbow.lab.us.oracle.com/repo_remove.yaml

%% Run the exit command below to exit the bash shell of the container

$ exit

3

Update the $PATH Environment Variable to access the kubectl command from the kubectl.sh script

On the Bastion Host, edit the /root/.bash_profile file and update the PATH variable in that file.

%% On the Bastion Host, edit the file /root/.bash_profile.
 
# .bash_profile
 
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi
 
# User specific environment and startup programs
 
PATH=$PATH:$HOME/bin
 
export PATH
 
 
%% Update the PATH variable as follows:
 
PATH=$PATH:$HOME/bin:/var/occne/<cluster_name>/artifacts
 
%% Save the file and source the .bash_profile file:
 
source /root/.bash_profile
%% Execute the following to verify the $PATH has been updated.
echo $PATH
/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin:/root/bin:/var/occne/rainbow.lab.us.oracle.com/artifacts
%% Make sure the permissions on the /var/occne/rainbow.lab.us.oracle.com/artifacts/kubectl.sh and /var/occne/rainbow.lab.us.oracle.com/artifacts/kubectl files are set correctly:
-rwxr-xr-x. 1 root root 248122280 May 30 18:23 kubectl
-rwxr-xr-x. 1 root root       112 May 30 18:44 kubectl.sh

%% If not, run the following command:
chmod +x kubectl
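
As an optional sanity check that is not part of the original procedure, verify that the kubectl wrapper is found on the updated PATH and can reach the cluster; the node name, status values, and version shown below are placeholders.

$ which kubectl.sh
/var/occne/<cluster_name>/artifacts/kubectl.sh
$ kubectl.sh get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-node-1   Ready    master   10m   v1.x.x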

4

Run Kubernetes Cluster Tests

To verify the Kubernetes installation, run the /test/cluster_test script provided in the k8s_install container using the docker command; an illustrative invocation follows.
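
The command below is an illustrative sketch of such an invocation. It assumes the /test/cluster_test script is run with the same volume mounts, registry, and image naming used for the install run earlier in this procedure; the exact arguments may differ in your environment.

$ docker run --rm -it --network host --cap-add=NET_ADMIN -v /var/occne/<cluster_name>/:/host -v /var/occne/:/var/occne:rw <docker_registry>/<image_name>:<image_tag> /test/cluster_test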