A Artifact Acquisition and Hosting
Introduction
The OCCNE deployment containers require access to a number of resources that are usually downloaded from the internet. For cases where the target system is isolated from the internet, locally available repositories may be used. These repositories require provisioning with the proper files and versions, and some of the cluster configuration needs to be updated to allow the installation containers to locate these local repositories.
- YUM Repository Configuration is needed to hold a mirror of a number of OL7 repositories, as well as the version of docker-ce that is required by OCCNE's Kubernetes deployment
- HTTP Repository Configuration is needed to hold Kubernetes binaries and Helm charts
- Docker Image Registry Configuration is needed to hold the proper Docker images to support the containers that run Kubernetes and the common services that Kubernetes will manage
- A copy of the Oracle Linux ISO. See Oracle Linux 7.5 Download Instructions for OS installation.
- A copy of the MySQL NDB archive for database nodes.
YUM Repository Configuration
Introduction
To perform an installation without the system needing access to the internet, a local YUM mirror must be made of the OL7 'latest', 'epel', and 'addons' repositories used by the OS installation.
A repository file must be created to reference this local YUM mirror and placed on the necessary machines (those which run the OCCNE installation Docker instances).
Prerequisites
- Local YUM mirror repository for the OL7 'latest', 'epel', and 'addons' repositories. Directions here: https://www.oracle.com/technical-resources/articles/it-infrastructure/unbreakable-linux-network.html
- Subscribe to the following channels while creating the YUM mirror from ULN:
[ol7_x86_64_UEKR5] [ol7_x86_64_ksplice] [ol7_x86_64_latest] [ol7_x86_64_addons] [ol7_x86_64_developer]
References
Oracle YUM mirroring directions:
https://www.oracle.com/technetwork/articles/servers-storage-admin/yum-repo-setup-1659167.html
Procedure Steps
Table A-1 Procedure to configure OCCNE YUM Repository
Step # | Procedure | Description |
---|---|---|
1. | Create OL7 repository mirror repo file | Below is an example of a repository file providing the details of a mirror with the necessary repositories. This repository file would be placed on the OCCNE Bootstrap machine that sets up the OCCNE Bastion Host (see the installation procedure for directions on the exact locations). |
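As a sketch, such a mirror repository file (for example, /etc/yum.repos.d/ol7-mirror.repo) might look like the following. The host name 'yum-mirror.example.com' and the path layout are assumptions and must match your local mirror; the channel names follow the ULN channels listed above.

```ini
# Hypothetical /etc/yum.repos.d/ol7-mirror.repo pointing at a local mirror.
# Host name, port, and paths are placeholders; adjust to your environment.
[local_ol7_latest]
name=Oracle Linux 7 Latest (local mirror)
baseurl=http://yum-mirror.example.com/yum/ol7_x86_64_latest/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
enabled=1

[local_ol7_addons]
name=Oracle Linux 7 Addons (local mirror)
baseurl=http://yum-mirror.example.com/yum/ol7_x86_64_addons/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
enabled=1

[local_ol7_epel]
name=Oracle Linux 7 EPEL (local mirror)
baseurl=http://yum-mirror.example.com/yum/ol7_x86_64_developer_EPEL/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
enabled=1
```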
HTTP Repository Configuration
Introduction
To perform an installation without the system needing access to the internet, a local HTTP repository must be created and provisioned with the necessary files. These files are used to provide the binaries for Kubernetes installation, as well as the Helm charts used during Common Services installation.
Prerequisites
- Docker is set up and docker commands can be run on the target system.
- An HTTP server that is reachable by the target system, for example Nginx running in a Docker container:
$ docker run --name mynginx1 -p <port>:<port> -d nginx
More information on configuring and installing Nginx using Docker can be found here: https://docs.nginx.com/nginx/admin-guide/installing-nginx/installing-nginx-docker/
OR
Use the html directory of the Apache HTTP server created while setting up the YUM mirror to perform the tasks listed below. Note: Create new directories for the Kubernetes binaries and Helm charts in the html folder.
Procedure Steps
Table A-2 Steps to configure OCCNE HTTP Repository
Steps | Procedure | Description |
---|---|---|
1. | Retrieve Kubernetes binaries | The Kubernetes installer requires access to an HTTP server from which it can download the proper version of a set of binary files. To provision an internal HTTP repository, obtain these files from the internet and place them at a known location on the internal HTTP server. The following command retrieves the proper binaries and places them in a directory named 'binaries' under the directory specified on the command line. This 'binaries' directory then needs to be placed on the HTTP server where it can be served, with its URL identified in the cluster's hosts.ini inventory file (see below). Example: |
2. | Retrieve Helm binaries and charts | The Configuration installer requires access to an HTTP server from which it can download the proper version of a set of Helm charts for the common services. To provision an internal HTTP repository, obtain these charts from the internet and place them at a known location on the internal HTTP server using the command below. Example: |
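The staging layout implied by the two steps above can be sketched as follows. WEB_ROOT and the placeholder file names are assumptions standing in for the real web root and artifacts; the real 'binaries' directory and chart files come from the retrieval commands in the table.

```shell
# Sketch of the HTTP repository layout (paths and names are assumptions).
# The 'binaries' directory produced for Kubernetes and a 'helm' directory
# for the charts are placed under the web root served by Nginx/Apache.
WEB_ROOT="${WEB_ROOT:-/tmp/occne_http_repo}"

mkdir -p "${WEB_ROOT}/binaries" "${WEB_ROOT}/helm"

# Placeholder files standing in for the real artifacts:
touch "${WEB_ROOT}/binaries/kubeadm" \
      "${WEB_ROOT}/helm/occne-charts.tgz"

# Once the HTTP server is running, these paths would be reachable as:
#   http://<server>:<port>/binaries/...
#   http://<server>:<port>/helm/...
ls -R "${WEB_ROOT}"
```

The hosts.ini inventory (see the end of this appendix) then points the installers at these URLs.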
Docker Image Registry Configuration
Introduction
To perform an installation without the system needing access to the internet, a local Docker registry must be created and provisioned with the necessary docker images. These docker images are used to populate the Kubernetes pods once Kubernetes is installed, as well as to provide the services installed during Common Services installation.
Prerequisites
Docker images for the OCCNE release must be pulled to the executing system.
- Docker is installed and docker commands can be run.
- Make sure the docker registry is running:
$ docker ps
- If not, create a local docker registry accessible by the target of the installation:
$ docker run -d -p <port>:<port> --restart=always --name <registryname> registry:2
(For more directions refer to: https://docs.docker.com/registry/deploying/)
Provision the registry with the necessary images
On the repo server that can reach the internet AND reach the registry, populate the registry with the following images:
Run the following commands on the repo server to generate the bastion, k8s_install, and configure dependencies:
First retrieve the docker registry image which will be used by the bastion-host to serve up docker images to the rest of the cluster:
docker pull registry:2
docker tag registry:2 <registryaddress>:<port>/registry:2
docker push <registryaddress>:<port>/registry:2
Then retrieve the lists of required docker images from each container:
$ docker run --rm -it -v /var/occne/<cluster>/:/host occne/<configure_install_image_name>:<1.4.x_tag> /getdeps/getdeps
$ docker run --rm -it -v /var/occne/<cluster>/:/host occne/<k8s_install_image_name>:<1.4.x_tag> /getdeps/getdeps
Example:
$ docker run --rm -it -v /var/occne/rainbow/:/host occne/configure:1.4.0 /getdeps/getdeps
$ docker run --rm -it -v /var/occne/rainbow/:/host occne/k8s_install:1.4.0 /getdeps/getdeps
Once the above command is successfully executed, go to the /var/occne/<cluster>/artifacts directory, verify that the retrieve_docker.sh script and k8s_docker_images.txt file are present in the directory, and execute:
$ sh /var/occne/<cluster>/artifacts/retrieve_docker.sh docker.io <registryaddress>:<port> < /var/occne/<cluster>/artifacts/k8s_docker_images.txt
Once the above command is successfully executed, go to the /var/occne/<cluster>/artifacts directory, verify that the retrieve_docker.sh script and config_docker_images.txt file are present in the directory, and execute:
$ sh /var/occne/<cluster>/artifacts/retrieve_docker.sh docker.io <registryaddress>:<port> < /var/occne/<cluster>/artifacts/config_docker_images.txt
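The retrieve_docker.sh script is generated by the installer containers. Conceptually, mirroring an image list into the local registry is a pull/tag/push loop like the sketch below; 'mirror_images' is an illustrative name, not part of OCCNE, and the docker commands are only echoed here so the sketch runs without a registry (drop the echoes to perform real work).

```shell
# Conceptual sketch only -- NOT the generated retrieve_docker.sh.
# Reads image names from stdin and emits the docker commands that would
# mirror each one from a source registry into the local registry.
mirror_images() {
  src="$1"   # e.g. docker.io
  dst="$2"   # e.g. <registryaddress>:<port>
  while read -r image; do
    echo "docker pull ${src}/${image}"
    echo "docker tag ${src}/${image} ${dst}/${image}"
    echo "docker push ${dst}/${image}"
  done
}

# Usage: printf 'nginx\nregistry:2\n' | mirror_images docker.io winterfell:5002
```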
Verify the list of repositories in the docker registry
Access endpoint <registryaddress>:<port>/v2/_catalog using a browser
or
$ curl http://<registryaddress>:<port>/v2/_catalog
{"repositories":["coredns/coredns","docker.elastic.co/elasticsearch/elasticsearch-oss","docker.elastic.co/kibana/kibana-oss","gcr.io/google-containers/fluentd-elasticsearch","gcr.io/google-containers/kube-apiserver","gcr.io/google-containers/kube-controller-manager","gcr.io/google-containers/kube-proxy","gcr.io/google-containers/kube-scheduler","gcr.io/google-containers/pause","gcr.io/google_containers/cluster-proportional-autoscaler-amd64","gcr.io/google_containers/metrics-server-amd64","gcr.io/google_containers/pause-amd64","gcr.io/kubernetes-helm/tiller","grafana/grafana","jaegertracing/jaeger-agent","jaegertracing/jaeger-collector","jaegertracing/jaeger-query","jimmidyson/configmap-reload","justwatch/elasticsearch_exporter","k8s.gcr.io/addon-resizer","lachlanevenson/k8s-helm","metallb/controller","metallb/speaker","nginx","prom/alertmanager","prom/prometheus","prom/pushgateway","quay.io/calico/cni","quay.io/calico/ctl","quay.io/calico/kube-controllers","quay.io/calico/node","quay.io/coreos/etcd","quay.io/coreos/kube-state-metrics","quay.io/external_storage/local-volume-provisioner","quay.io/jetstack/cert-manager-controller","quay.io/pires/docker-elasticsearch-curator","quay.io/prometheus/node-exporter"]}
Set hosts.ini variables
...
[occne:vars]
...
occne_private_registry=winterfell
occne_private_registry_address='10.75.216.114'
occne_private_registry_port=5002
occne_helm_images_repo='winterfell:5002'
...