Oracle Linux OS Installer

This procedure provides the steps required to install the OL7 image onto all hosts from the Bastion Host using an occne/provision container. Once complete, every host has the rpm updates and tools required to run the k8-install procedure.

Prerequisites:
  1. All procedures in OCCNE Installation of the Bastion Host are complete.
  2. The Utility USB is available containing the necessary files as per: Installation PreFlight checklist : Miscellaneous Files.

Limitations and Expectations

All steps are executed from a laptop connected through an SSH application (such as PuTTY) to the Bastion Host, which is reachable via the Management Interface.

References

https://docs.ansible.com/ansible/latest/user_guide/intro_patterns.html

Table 3-11 Procedure to run the auto OS-installer container

Step # Procedure Description
1.

Initial Configuration on the Bastion Host to Support the OS Install This step creates the required directories and copies all supporting files to the appropriate locations on the Bastion Host so that the OS Install Container can successfully install OL7 onto each host.

Note: The cluster_name field is derived from the hosts.ini file field: occne_cluster_name.

  1. Log into the Bastion Host using the IP supplied from: Installation PreFlight Checklist : Complete VM IP Table
  2. Create the directories needed on the Bastion Host.
    $ mkdir /var/occne
    $ mkdir /var/occne/<cluster_name>
    $ mkdir /var/occne/<cluster_name>/yum.repos.d
  3. Copy the hosts.ini file (created using procedure: Inventory File Preparation) from RMS1 into the /var/occne/<cluster_name>/ directory. This procedure assumes the same hosts.ini file is being used here as was used to install the OS onto RMS2 from RMS1. If not, retrieve the hosts.ini file from the Utility USB mounted on RMS2 and copy it from RMS2 to the Bastion Host.

    This hosts.ini file defines each host to the OS Installer (Provision) Container running the provision image downloaded from the repository.

    $ scp root@172.16.3.4:/var/occne/<cluster_name>/hosts.ini /var/occne/<cluster_name>/hosts.ini
  4. Update the repository fields in the hosts.ini file to reflect the changes from procedure: OCCNE Configuration of the Bastion Host. The fields listed must reflect the new Bastion Host IP (172.16.3.100) and the names of the repositories.

    $ vim /var/occne/<cluster_name>/hosts.ini
     
    Update the following fields with the new values from the configuration of the Bastion Host.
    ntp_server
    occne_private_registry
    occne_private_registry_address
    occne_private_registry_port
    occne_k8s_binary_repo
    occne_helm_stable_repo_url
    occne_helm_images_repo
    docker_rh_repo_base_url
    docker_rh_repo_gpgkey
    
    Example (the IP addresses shown are illustrative; use the values appropriate for your Bastion Host):
    ntp_server='172.16.3.1'
    occne_private_registry=registry
    occne_private_registry_address='10.75.207.133'
    occne_private_registry_port=5000
    occne_k8s_binary_repo='http://10.75.207.133/binaries/'
    occne_helm_stable_repo_url='http://10.75.207.133/helm/'
    occne_helm_images_repo='10.75.207.133:5000/'
    docker_rh_repo_base_url=http://10.75.207.133/yum/centos/7/updates/x86_64/
    docker_rh_repo_gpgkey=http://10.75.207.133/yum/centos/RPM-GPG-CENTOS
    
    Comment out the fields under the ilo network configuration, management network configuration, and signalling network configuration for mysql ndb replication sections in hosts.ini.
     
     
    Keep only the values for ilo_vlanid, mgmnt_vlan_id, and signal_vlan_id, and comment out all other variables in those sections (see the illustrative sketch below).
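     
    The following is a minimal sketch of how the edited sections might look after commenting. Only the ilo_vlanid, mgmnt_vlan_id, and signal_vlan_id values are kept; the commented variable names shown here are placeholders and will differ in the actual hosts.ini.
     
    # ilo network configuration
    ilo_vlanid=2
    #ilo_subnet_ipv4='192.168.20.0'
    #ilo_netmask='255.255.255.0'
     
    # management network configuration
    mgmnt_vlan_id=4
    #mgmnt_subnet_ipv4='172.16.3.0'
    #mgmnt_netmask='255.255.255.0'
     
    # signalling network configuration for mysql ndb replication
    signal_vlan_id=200
    #signal_subnet_ipv4='172.16.5.0'
    #signal_netmask='255.255.255.0'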
2.

Copy the OL7 ISO to the Bastion Host

The ISO file is normally retrieved from a customer site-specific repository, which is reachable because the ToR switch configurations were completed in procedure: OCCNE Configure Top of Rack 93180YC-EX Switches. For this procedure the file has already been copied to the /var/occne directory on RMS2 and can be copied to the same directory on the Bastion Host.

Copy from RMS2, the OL7 ISO file to the /var/occne directory. The example below uses OracleLinux-7.5-x86_64-disc1.iso.

Note: If the user copies this ISO from their laptop then they must use an application like WinSCP pointing to the Management Interface IP.

$ scp root@172.16.3.5:/var/occne/OracleLinux-7.5-x86_64-disc1.iso /var/occne/OracleLinux-7.5-x86_64-disc1.iso
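 
Optionally, verify that the ISO transferred intact by comparing checksums on RMS2 and the Bastion Host (this assumes sha256sum is available on both hosts; the two values must match):
 
$ ssh root@172.16.3.5 sha256sum /var/occne/OracleLinux-7.5-x86_64-disc1.iso
$ sha256sum /var/occne/OracleLinux-7.5-x86_64-disc1.iso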
3.

Set up the Boot Loader on the Bastion Host

Execute the following commands:

Note: After the files have been copied, the ISO can be unmounted, if desired, using the command: umount /mnt.

$ mkdir -p /var/occne/pxelinux
$ mount -t iso9660 -o loop /var/occne/OracleLinux-7.5-x86_64-disc1.iso /mnt
$ cp /mnt/isolinux/initrd.img /var/occne/pxelinux
$ cp /mnt/isolinux/vmlinuz /var/occne/pxelinux
4.

Verify and Set the PXE Configuration File Permissions on the Bastion Host Each file copied in the step above must be readable and writable; the commands below open the permissions fully.
$ chmod 777 /var/occne/pxelinux
$ chmod 777 /var/occne/pxelinux/vmlinuz
$ chmod 777 /var/occne/pxelinux/initrd.img
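 
To confirm the permissions were applied, list the directory contents; each file should show full (rwxrwxrwx) permissions:
 
$ ls -l /var/occne/pxelinux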
5.

Copy and Update .repo files
  1. The customer-specific .repo files on the Bastion Host must be copied to the /var/occne/<cluster_name>/yum.repos.d directory and updated to reflect the URL of the Bastion Host. These files are transferred to the /etc/yum.repos.d directory on each host by ansible after the host has been installed, but before the actual yum update is performed.
    $ cp /etc/yum.repos.d/*.repo /var/occne/<cluster_name>/yum.repos.d/.
  2. Edit each .repo file in the /var/occne/<cluster_name>/yum.repos.d directory and update the baseurl IP of the repo to reflect the IP of the bastion_host.
    $ vim /var/occne/<cluster_name>/yum.repos.d/<repo_name>.repo
     
    Example:
     
    [local_ol7_x86_64_UEKR5]
    name=Unbreakable Enterprise Kernel Release 5 for Oracle Linux 7 (x86_64)
    baseurl=http://10.75.155.195/yum/OracleLinux/OL7/UEKR5/
    gpgcheck=1
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY
    enabled=1
    proxy=_none_
     
    Change the IP address in the baseurl (10.75.155.195 in this example) to the bastion host IP: 172.16.3.100.
     
    The path portion of the URL may also need to change depending on how the customer repositories are configured; that cannot be specified in this procedure. See the optional sed sketch below.
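     
    As an optional alternative to editing each file in vim, a single sed command can rewrite the baseurl IP in every copied .repo file. This is a sketch only; replace 10.75.155.195 (the IP from the example above) with the IP actually present in your .repo files, then confirm the result with grep.
     
    $ sed -i 's/10\.75\.155\.195/172.16.3.100/g' /var/occne/<cluster_name>/yum.repos.d/*.repo
    $ grep baseurl /var/occne/<cluster_name>/yum.repos.d/*.repo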
6.

Execute the OS Install on the Hosts from the Bastion Host

This step requires executing docker run with different Ansible tag arguments.

Note: <image_name>:<image_tag> represents the provision image in the docker image registry accessible from the Bastion Host.

Run the docker commands below. Include the -it option so the command runs interactively; if bash is appended to the end of the command, the container instead starts a bash shell and the user prompt runs within the container.
docker run -it --rm --network host --cap-add=NET_ADMIN -v /var/occne/<cluster_name>/:/host -v /var/occne/:/var/occne:rw -e "OCCNEARGS=--skip-tags datastore,vms_provision,yum_configure" <image_name>:<image_tag>
 
docker run -it --rm --network host --cap-add=NET_ADMIN -v /var/occne/<cluster_name>/:/host -v /var/occne/:/var/occne:rw -e "OCCNEARGS=--tags yum_configure" <image_name>:<image_tag>
 
Example:
 
docker run -it --rm --network host --cap-add=NET_ADMIN -v /var/occne/rainbow.lab.us.oracle.com/:/host -v /var/occne/:/var/occne:rw -e "OCCNEARGS=--skip-tags datastore,vms_provision,yum_configure" 10.75.200.217:5000/occne/provision:1.2.0
 
docker run -it --rm --network host --cap-add=NET_ADMIN -v /var/occne/rainbow.lab.us.oracle.com/:/host -v /var/occne/:/var/occne:rw -e "OCCNEARGS=--tags yum_configure" 10.75.200.217:5000/occne/provision:1.2.0
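 
Optionally, before running the commands, confirm that the provision image and tag are present in the registry. This sketch assumes the registry from the example above (10.75.200.217:5000) and that it exposes the standard docker registry v2 HTTP API; the response should list the expected tag (1.2.0 in this example).
 
$ curl http://10.75.200.217:5000/v2/occne/provision/tags/list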
7.

Update Network Configuration on the Master Nodes Execute the following steps on each Master Node.
  1. cd /etc/sysconfig/network-scripts
  2. Edit file ifcfg-team0 (using vi).
  3. Comment out the field BRIDGE using the "#" char.
  4. Save and exit out of ifcfg-team0.
  5. Edit file ifcfg-teambr0.
  6. Capture the following lines from the ifcfg-teambr0 file and then comment these out:
    1. IPADDR=<address>
    2. GATEWAY=<address>
    3. DNS1=<address>
    4. PREFIX=<number>
  7. Edit ifcfg-team0 and insert the captured lines at the end of the ifcfg-team0 file (see the sketch after this list for an illustration of the result).
  8. Once done, restart the network on all the master nodes by executing the command: service network restart.
  9. Check that all the master nodes are reachable from the bastion host using the command: ssh -i /var/occne/<cluster_name>/.ssh/occne_id_rsa admusr@172.16.3.x
  10. Edit the file /etc/ssh/sshd_config and set the value of UseDNS to no.
  11. Save and restart the ssh service by executing the command: service sshd restart
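 
For illustration, a minimal sketch of the edited files on a master node. The IPADDR, GATEWAY, DNS1, and PREFIX values shown are examples only; use the actual values captured from your ifcfg-teambr0.
 
    # /etc/sysconfig/network-scripts/ifcfg-teambr0 (captured lines commented out)
    #IPADDR=172.16.3.11
    #GATEWAY=172.16.3.1
    #DNS1=172.16.3.1
    #PREFIX=24
     
    # /etc/sysconfig/network-scripts/ifcfg-team0 (BRIDGE commented out, captured lines appended)
    #BRIDGE=teambr0
    IPADDR=172.16.3.11
    GATEWAY=172.16.3.1
    DNS1=172.16.3.1
    PREFIX=24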
8.

Update Network Configuration on the Worker Nodes Execute the following steps on each Worker Node.
  1. cd /etc/sysconfig/network-scripts
  2. Edit file ifcfg-team0 and uncomment the GATEWAY field.
  3. Save and exit the file.
  4. Restart the network by executing the command: service network restart
  5. Check that all the worker nodes are reachable from the bastion host using the command: ssh -i /var/occne/<cluster_name>/.ssh/occne_id_rsa admusr@172.16.3.x
  6. Edit the /etc/ssh/sshd_config file and set the value of UseDNS to no (see the sketch after this list).
  7. Save and restart the ssh service by executing the command: service sshd restart
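 
A minimal sketch of the UseDNS change (it applies equally to the master nodes in the previous step). It assumes the file contains the default commented #UseDNS line shipped with OL7; if the line is absent, add UseDNS no manually.
 
    $ sudo sed -i 's/^#\?UseDNS.*/UseDNS no/' /etc/ssh/sshd_config
    $ grep UseDNS /etc/ssh/sshd_config
    UseDNS no
    $ sudo service sshd restart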
9.

Re-instantiate the management link bridge on RMS1
  1. Run the following commands on RMS1 host OS:
    $ sudo su
    $ nmcli con add con-name mgmtBridge type bridge ifname mgmtBridge
    $ nmcli con add type bridge-slave ifname eno2 master mgmtBridge
    $ nmcli con add type bridge-slave ifname eno3 master mgmtBridge
    $ nmcli con mod mgmtBridge ipv4.method manual ipv4.addresses 192.168.2.11/24
    $ nmcli con up mgmtBridge 
  2. Verify access to the ToR switches management ports.
    $ ping 192.168.2.1
    $ ping 192.168.2.2
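  3. Optionally, confirm the bridge and its slave connections are up using standard NetworkManager and iproute2 commands (a sketch only; the bridge-slave connection names may differ slightly on your system):
    $ nmcli con show --active
    $ ip addr show mgmtBridge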