Oracle® Communications OC-CNE Installation Guide
Release 1.0
F16979-01

Configuration of the Bastion Host

Introduction

This procedure details the steps necessary to configure the Bastion Host on RMS2 during the initial installation. This VM is used for host provisioning, for the MySQL cluster, and for installing the hosts with Kubernetes and the common services.

Prerequisites

  1. Procedure OCCNE Installation of the Bastion Host has been completed.
  2. All host servers on which this VM is created are captured in OCCNE Inventory File Preparation.
  3. The host names, IP addresses, and network information assigned to this VM are captured in the OCCNE 1.0 Installation PreFlight Checklist.
  4. The YUM repository mirror is set up and accessible from the Bastion Host.
  5. An HTTP server hosting the Kubernetes binaries and Helm charts is set up at an address accessible from the Bastion Host.
  6. A Docker registry is set up at an address reachable from the Bastion Host.
  7. This document assumes that an Apache HTTP server (created as part of the mirror creation) exists outside the Bastion Host and serves the YUM mirror, Helm charts, and Kubernetes binaries. (Your setup may differ, so the directories from which static content is copied to the Bastion Host must be verified before starting the rsync procedure; see the sketch after this list.)
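A minimal verification sketch for prerequisite 7, assuming the mirror's Apache document root is /var/www/html (both that path and <mirror_server_address> are placeholders for your environment):

# List what the mirror serves over HTTP
$ curl http://<mirror_server_address>/
# Or inspect the document root directly over SSH
$ ssh <login-username>@<mirror_server_address> 'ls -l /var/www/html'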

Limitations and Expectations

All steps are executed from an SSH application (for example, PuTTY) on a laptop with access to the Management Interface.

References

  1. https://docs.docker.com/registry/deploying/
  2. https://computingforgeeks.com/how-to-configure-ntp-server-using-chrony-on-rhel-8/

Procedure

This procedure details the steps required to configure the existing Bastion Host (Management VM).

Table 3-10 Procedure to configure Bastion Host

1. Create the /var/occne/<cluster_name> directory on the Bastion Host
Create the directory using the occne_cluster_name variable contained in the hosts.ini file.
$ mkdir /var/occne
$ mkdir /var/occne/<cluster_name>
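If the hosts.ini file is already present locally, the directory name can be derived from it instead of typed by hand. A minimal sketch, assuming an entry of the form occne_cluster_name='<cluster_name>' (adjust the pattern to the quoting actually used in your file):

# Extract the cluster name from hosts.ini and create the directory
$ CLUSTER_NAME=$(grep -oP "occne_cluster_name='?\K[^'\s]+" hosts.ini)
$ mkdir -p /var/occne/${CLUSTER_NAME}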
2. Copy the hosts.ini file to the /var/occne/<cluster_name> directory

Copy the hosts.ini file (created using procedure: OCCNE Inventory File Preparation) into the /var/occne/<cluster_name>/ directory from RMS1. This procedure assumes the same hosts.ini file is being used here as was used to install the OS onto RMS2 from RMS1. If not, the hosts.ini file must be retrieved from the Utility USB mounted on RMS2 and copied from RMS2 to the Bastion Host.

This hosts.ini file defines each host to the OS Installer Container running the os-install image downloaded from the repo.
$ scp root@172.16.3.4:/var/occne/<cluster_name>/hosts.ini /var/occne/<cluster_name>/hosts.ini
The current sample hosts.ini file requires a trailing "/" on the occne_helm_images_repo entry. Using vim (or vi), edit the hosts.ini file and append the "/" to the occne_helm_images_repo entry:
occne_helm_images_repo='bastion-1:5000' -> occne_helm_images_repo='bastion-1:5000/'
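A non-interactive alternative to the manual edit (a sketch that assumes the value is exactly bastion-1:5000; verify the result with the grep):

# Append the trailing "/" and confirm the change
$ sed -i "s|occne_helm_images_repo='bastion-1:5000'|occne_helm_images_repo='bastion-1:5000/'|" /var/occne/<cluster_name>/hosts.ini
$ grep occne_helm_images_repo /var/occne/<cluster_name>/hosts.ini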
3. Check and Disable Firewall

Check the status of the firewall. If it is active, stop and disable it.
$ systemctl status firewalld
 
$ systemctl stop firewalld
$ systemctl disable firewalld
 
To verify:
$ systemctl status firewalld
4. Set up Binaries, Helm Charts, and Docker Registry on the Bastion Host VM
  1. Create the local YUM repo mirror file in /etc/yum.repos.d and add the docker repo mirror. Follow procedure: OCCNE Artifact Acquisition and Hosting.
  2. Disable the public repo
    $ mv /etc/yum.repos.d/public-yum-ol7.repo /etc/yum.repos.d/public-yum-ol7.repo.disabled

    Install the necessary packages from the YUM mirror on the Bastion Host:

    $ yum install rsync
    $ yum install createrepo  yum-utils
    $ yum install docker-ce-18.06.1.ce-3.el7.x86_64
    $ yum install nfs-utils
    $ yum install httpd
    $ yum install chrony -y

    Install curl with HTTP2 support.

    Download the curl source tarball onto a server accessible from the Bastion Host:

    $ mkdir curltar
    $ cd curltar
    $ wget https://curl.haxx.se/download/curl-7.63.0.tar.gz  --no-check-certificate
    Log in to the Bastion Host and run the following commands.
    Create a temporary directory on the Bastion Host. It does not matter where this directory is created, but it must have read/write/execute privileges.
    $ mkdir /var/occne/<cluster_name>/tmp
    $ yum install -y nghttp2
    $ rsync -avzh <login-username>@<IP address of server with curl tar>:curltar /var/occne/<cluster_name>/tmp
    $ cd /var/occne/<cluster_name>/tmp/curltar
    $ tar xzf curl-7.63.0.tar.gz
    $ rm -f curl-7.63.0.tar.gz
    $ cd curl-7.63.0
    $ ./configure --with-nghttp2 --prefix=/usr/local --with-ssl
    $ make && sudo make install
    $ sudo ldconfig
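    A quick sanity check (a sketch, not part of the original procedure) to confirm the packages installed from the mirror and that the rebuilt curl lists HTTP2 among its features:

    # Query the installed packages, then check curl's feature list
    $ rpm -q rsync createrepo yum-utils docker-ce nfs-utils httpd chrony
    $ /usr/local/bin/curl -V | grep -i http2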
  3. Copy the YUM mirror contents from the remote server where the YUM mirror is deployed. This can be done in the following way:

    Get the IP address of the YUM mirror from the YUM repo file.

    Start an Apache HTTP server on the Bastion Host:

    $ systemctl start httpd
    $ systemctl enable httpd
    $ systemctl status httpd
  4. Retrieve the latest RPMs from the YUM mirror to /var/www/html/yum on the Bastion Host using reposync.

    Run the following reposync commands to get the latest packages on the Bastion Host (an equivalent loop form is sketched at the end of this sub-step):

    $ reposync -g -l -d -m --repoid=local_ol7_x86_64_addons --newest-only --download-metadata --download_path=/var/www/html/yum/OracleLinux/OL7/
    $ reposync -g -l -d -m --repoid=local_ol7_x86_64_UEKR5 --newest-only --download-metadata --download_path=/var/www/html/yum/OracleLinux/OL7/
    $ reposync -g -l -d -m --repoid=local_ol7_x86_64_developer --newest-only --download-metadata --download_path=/var/www/html/yum/OracleLinux/OL7/
    $ reposync -g -l -d -m --repoid=local_ol7_x86_64_developer_EPEL --newest-only --download-metadata --download_path=/var/www/html/yum/OracleLinux/OL7/
    $ reposync -g -l -d -m --repoid=local_ol7_x86_64_ksplice --newest-only --download-metadata --download_path=/var/www/html/yum/OracleLinux/OL7/
    $ reposync -g -l -d -m --repoid=local_ol7_x86_64_latest --newest-only --download-metadata --download_path=/var/www/html/yum/OracleLinux/OL7/

    After the above commands complete, the directory structure with all the repo IDs is visible in /var/www/html/yum/OracleLinux/OL7/. Rename the repositories in the OL7/ directory:

    Note: download_path can be changed according to the folder structure required. Change the names of the copied folders to match the base URL.

    $ cd /var/www/html/yum/OracleLinux/OL7/
    $ mv local_ol7_x86_64_addons addons
    $ mv local_ol7_x86_64_UEKR5 UEKR5
    $ mv local_ol7_x86_64_developer developer
    $ mv local_ol7_x86_64_developer_EPEL developer_EPEL
    $ mv local_ol7_x86_64_ksplice ksplice
    $ mv local_ol7_x86_64_latest latest
    Run the following createrepo commands to create repo data for each repository channel on the Bastion Host YUM mirror:
    $ createrepo -v /var/www/html/yum/OracleLinux/OL7/addons
    $ createrepo -v /var/www/html/yum/OracleLinux/OL7/UEKR5
    $ createrepo -v /var/www/html/yum/OracleLinux/OL7/developer
    $ createrepo -v /var/www/html/yum/OracleLinux/OL7/developer_EPEL
    $ createrepo -v /var/www/html/yum/OracleLinux/OL7/ksplice
    $ createrepo -v /var/www/html/yum/OracleLinux/OL7/latest
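    The six repository channels differ only in name, so the reposync, rename, and createrepo commands above can equivalently be written as one loop (a sketch using the same repo IDs and paths as above):

    # One pass per channel: sync, rename to match the base URL, regenerate repo data
    $ cd /var/www/html/yum/OracleLinux/OL7/
    $ for repo in addons UEKR5 developer developer_EPEL ksplice latest; do
        reposync -g -l -d -m --repoid=local_ol7_x86_64_${repo} --newest-only \
          --download-metadata --download_path=/var/www/html/yum/OracleLinux/OL7/
        mv local_ol7_x86_64_${repo} ${repo}
        createrepo -v /var/www/html/yum/OracleLinux/OL7/${repo}
      done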
  5. Get docker-ce and the GPG key from the mirror.

    Execute the following rsync command:

    $ rsync -avzh <login-username>@<IP address of repo server>:<centos folder directory path>  /var/www/html/yum/
  6. Copy the Kubernetes binaries and Helm binaries/charts to the Bastion Host if they are hosted on a different server.

    Because the Kubernetes binaries were staged outside the Bastion Host as part of the artifact and repository setup procedure, the Kubernetes/Helm binaries and Helm charts have to be copied to the Bastion Host using rsync. The example below copies the contents of a folder to the static content folder of the HTTP server on the Bastion Host:

    $ rsync -avzh <login-username>@<IP address of repo server>:<copy from directory address>  /var/www/html

    Note: The destination above is an example directory for an Apache server; if another HTTP server is running, the directory may differ.
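    Optionally confirm the Bastion Host now serves the copied content (a sketch; the repodata path below is an example and must match your actual layout under /var/www/html):

    $ curl -I http://<bastion_host_address>/yum/OracleLinux/OL7/latest/repodata/repomd.xml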

  7. Set up and initialize Helm on the Bastion Host.
    Download the Helm tarball onto a server accessible from the Bastion Host:
    $ mkdir helmtar
    $ cd helmtar
    $ wget https://storage.googleapis.com/kubernetes-helm/helm-v2.9.1-linux-amd64.tar.gz
    Log in to the Bastion Host and run the following commands.
    Create a temporary directory on the Bastion Host. It does not matter where this directory is created, but it must have read/write/execute privileges.
    $ mkdir /var/occne/<cluster_name>/tmp1
    $ rsync -avzh <login-username>@<IP address of repo server>:helmtar /var/occne/<cluster_name>/tmp1
    $ cd /var/occne/<cluster_name>/tmp1/helmtar
    $ tar -xvf helm-v2.9.1-linux-amd64.tar.gz
    $ rm -f helm-v2.9.1-linux-amd64.tar.gz
    $ mv linux-amd64 helm
    $ cd helm
     
    # Run the following command in the charts directory of the http server on bastion host to create index.yaml file so that helm chart can be initialized
    $ helm repo index <path_to_helm_charts_directory_bastion_host>
    # initialize helm
    $ ./helm init --client-only --stable-repo-url <bastion_host_occne_helm_stable_repo_url>
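    A quick verification sketch using standard Helm v2 client commands (<chart_name> is a placeholder):

    # Confirm the client initialized and the stable repo points at the Bastion Host
    $ ./helm version --client
    $ ./helm repo list
    $ ./helm search <chart_name>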
5. Create a Docker registry on the Bastion Host
  1. Pull the registry image from the Docker registry onto the Bastion Host to run a registry locally.
    Add the server registry IP and port to the /etc/docker/daemon.json file. Create the file if it does not exist:
    {
      "insecure-registries" : ["<server_docker_registry_address>:<port>"]
    }
  2. Start the docker daemon:
    $ systemctl daemon-reload
    $ systemctl restart docker
    $ systemctl enable docker
     
    Verify docker is running:
    $ ps -elf | grep docker
    $ systemctl status docker
    While creating the docker registry on a server outside the Bastion Host, no tag is added to the registry image, and the image is not added to the docker registry repository of that server. Manually tag the registry image and push it as one of the repositories on the docker registry server:
    $ docker tag registry:<tag> <docker_registry_address>:<port>/registry:<tag>
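    A sketch to confirm the daemon picked up the insecure-registry entry configured in sub-step 1 (the grep context width is arbitrary):
    $ docker info | grep -A 2 'Insecure Registries'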
  3. Push the tagged registry image to the docker registry repository on the server accessible from the Bastion Host:
    $ docker push <docker_registry_address>:<port>/registry:<tag>
  4. Log in to the Bastion Host and pull the registry image from the customer registry set up on the server outside the Bastion Host:
    $ docker pull --all-tags <docker_registry_address>:<port>/registry
  5. Run Docker registry on Bastion Host
    $ docker run -d -p 5000:5000 --restart=always --name registry registry:<tag>

    This runs the docker registry local to Bastion host on port 5000.
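    A quick check that the local registry answers (a sketch using the standard registry v2 catalog endpoint):
    $ curl http://localhost:5000/v2/_catalog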

  6. Get the docker images from the docker registry into the Bastion Host docker registry.
    Pull all the docker images listed in Docker Repository Requirements into the local Bastion Host repository:
    $ docker pull --all-tags <docker_registry_address>:<port>/<image_names_from_attached_list>

    Note: If the following error is encountered while pulling images from the internal docker registry: "net/http: request canceled (Client.Timeout exceeded while awaiting headers)", edit http-proxy.conf and add the docker registry address to the NO_PROXY environment variable.

  7. Tag Images
    $ docker tag <docker_registry_address>:<port>/<imagename>:<tag> <bastion_host_docker_registry_address>:<port>/<image_names_from_attached_list>
    
    Example:
    $ docker tag 10.75.207.133:5000/jaegertracing/jaeger-collector:1.9.0 10.75.216.125:5000/jaegertracing/jaeger-collector
    
  8. Push the images to the local Docker registry created on the Bastion Host.
    Update the daemon.json file in the /etc/docker directory so that the insecure-registries list also includes the Bastion Host registry:
    {
      "insecure-registries" : ["<bastion_host_docker_registry_address>:<port>"]
    }
     
    Restart docker:
     
    $ systemctl daemon-reload
    $ systemctl restart docker
    $ systemctl enable docker
     
    To verify:
    $ systemctl status docker
    $ docker push <bastion_host_docker_registry_address>:<port>/<image_names_from_attached_list>
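    Optionally confirm the pushed images are listed by the Bastion Host registry (a sketch using the standard registry v2 endpoints; <image_name> is a placeholder):
    $ curl http://<bastion_host_docker_registry_address>:<port>/v2/_catalog
    $ curl http://<bastion_host_docker_registry_address>:<port>/v2/<image_name>/tags/list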
6. Set up NFS on the Bastion Host

Run the following commands:
$ echo '/var/occne 172.16.3.100/24(ro,no_root_squash)' >> /etc/exports
$ systemctl start nfs-server
$ systemctl enable nfs-server
 
Verify nfs is running:
$ ps -elf | grep nfs
$ systemctl status nfs-server
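Optionally confirm the export is visible (a sketch; showmount ships with the nfs-utils package installed earlier):
$ showmount -e localhost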
7. Set up the Bastion Host to clock off the ToR Switch

The ToR switch acts as the NTP source for all hosts.

Update the chrony.conf file with the source NTP server by adding the VIP address of the ToR switch (from OCCNE 1.0 Installation PreFlight Checklist: Complete OA and Switch IP Switch Table) as the NTP source.
$ vim /etc/chrony.conf
 
Add the following line at the end of the file:
server 172.16.3.1

chrony was installed in the first step of this procedure. Enable the service.

$ systemctl enable --now chronyd
$ systemctl status chronyd


Execute the chronyc sources -v command to display the current status of NTP on the Bastion Host. The source state (S) field should show '*', indicating NTP sync.

$ chronyc sources -v
210 Number of sources = 1
 
  .-- Source mode  '^' = server, '=' = peer, '#' = local clock.
 / .- Source state '*' = current synced, '+' = combined , '-' = not combined,
| /   '?' = unreachable, 'x' = time may be in error, '~' = time too variable.
||                                                 .- xxxx [ yyyy ] +/- zzzz
||      Reachability register (octal) -.           |  xxxx = adjusted offset,
||      Log2(Polling interval) --.      |          |  yyyy = measured offset,
||                                \     |          |  zzzz = estimated error.
||                                 |    |           \
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* 172.16.3.1                    4   9   377   381  -1617ns[  +18us] +/-   89ms

Edit the /var/occne/<cluster_name>/hosts.ini file to include the ToR Switch IP as the NTP server host.

$ vim /var/occne/<cluster_name>/hosts.ini
 
Change the field: ntp_server='<ToR Switch IP>'
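To verify the change (the value shown assumes the ToR switch IP used earlier in this step):
$ grep ntp_server /var/occne/<cluster_name>/hosts.ini
ntp_server='172.16.3.1'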