Oracle Linux OS Installer
This procedure provides the steps required to install the OL7 image onto all hosts, via the Bastion Host, using an occne/provision container. Once complete, all hosts include the rpm updates and tools necessary to run the k8-install procedure.
Prerequisites
- All procedures in OCCNE Installation of the Bastion Host are complete.
- The Utility USB containing the necessary files is available, as per: Installation PreFlight checklist : Miscellaneous Files.
Limitations and Expectations
All steps are executable from an SSH application (such as PuTTY) on a laptop with access via the Management Interface.
References
https://docs.ansible.com/ansible/latest/user_guide/intro_patterns.html
Table 3-11 Procedure to run the auto OS-installer container
Step 1: Initial Configuration on the Bastion Host to Support the OS Install

Create directories and copy all supporting files to the appropriate directories on the Bastion Host so that the OS Install Container successfully installs OL7 onto each host.

Note: The cluster_name field is derived from the hosts.ini file field: occne_cluster_name.
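The source does not list the exact commands for this step. The following is a minimal sketch, assuming the per-cluster directory lives under /var/occne/<cluster_name> (consistent with the volume mounts used later in this procedure) and that hosts.ini is among the supporting files; the function name and example values are illustrative, not from the source.

```shell
# setup_cluster_dir: create the per-cluster directory on the Bastion Host
# and copy the hosts.ini supporting file into it (a sketch, not the
# official OCCNE tooling).
#   $1 = base directory (normally /var/occne)
#   $2 = cluster name (the occne_cluster_name value from hosts.ini)
#   $3 = path to the hosts.ini file being copied
setup_cluster_dir() {
    mkdir -p "$1/$2"
    cp "$3" "$1/$2/hosts.ini"
}

# Example invocation (values are placeholders):
# setup_cluster_dir /var/occne rainbow.lab.us.oracle.com ./hosts.ini
```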
Step 2: Copy the OL7 ISO to the Bastion Host

The ISO file is normally accessible from a customer site-specific repository. It is reachable because the ToR switch configurations were completed in procedure: OCCNE Configure Top of Rack 93180YC-EX Switches. For this procedure the file has already been copied to the /var/occne directory on RMS2 and can be copied to the same directory on the Bastion Host. Copy the OL7 ISO file from RMS2 to the /var/occne directory. The example below uses OracleLinux-7.5-x86_64-disc1.iso.

Note: If the user copies this ISO from their laptop, they must use an application like WinSCP pointing to the Management Interface IP.

$ scp root@172.16.3.5:/var/occne/OracleLinux-7.5-x86_64-disc1.iso /var/occne/OracleLinux-7.5-x86_64-disc1.iso
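After the copy completes, it is worth confirming the image was not truncated in transit. A minimal sketch of a checksum comparison follows; the function name is illustrative, and the expected checksum must be substituted with the value published for the ISO actually in use.

```shell
# verify_iso: compare a file's SHA-256 digest against a known-good value
# (a sketch; not part of the official OCCNE tooling).
#   $1 = path to the ISO file
#   $2 = expected SHA-256 checksum (from the ISO's publisher)
verify_iso() {
    actual=$(sha256sum "$1" | awk '{print $1}')
    if [ "$actual" = "$2" ]; then
        echo "ISO checksum OK"
    else
        echo "ISO checksum MISMATCH: $actual" >&2
        return 1
    fi
}

# Example (checksum value is a placeholder, not the real OL7.5 checksum):
# verify_iso /var/occne/OracleLinux-7.5-x86_64-disc1.iso <known-good-sha256>
```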
Step 3: Set up the Boot Loader on the Bastion Host

Execute the following commands:

$ mkdir -p /var/occne/pxelinux
$ mount -t iso9660 -o loop /var/occne/OracleLinux-7.5-x86_64-disc1.iso /mnt
$ cp /mnt/isolinux/initrd.img /var/occne/pxelinux
$ cp /mnt/isolinux/vmlinuz /var/occne/pxelinux

Note: The ISO can be unmounted after the files have been copied, if the user wishes, using the command:

$ umount /mnt
Step 4: Verify and Set the PXE Configuration File Permissions on the Bastion Host

Each file configured in the step above must be open for read and write permissions.

$ chmod 777 /var/occne/pxelinux
$ chmod 777 /var/occne/pxelinux/vmlinuz
$ chmod 777 /var/occne/pxelinux/initrd.img
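The resulting permissions can be confirmed before moving on. A small sketch of such a check (the function name is illustrative; the paths to check are the ones touched by the chmod commands above):

```shell
# check_mode: verify a file has the expected octal permission mode
# (a sketch, not part of the official OCCNE tooling).
#   $1 = path to check
#   $2 = expected octal mode, e.g. 777
check_mode() {
    actual=$(stat -c '%a' "$1")
    if [ "$actual" = "$2" ]; then
        echo "$1: OK ($actual)"
    else
        echo "$1: unexpected mode $actual (wanted $2)" >&2
        return 1
    fi
}

# Example, matching this step:
# check_mode /var/occne/pxelinux 777
# check_mode /var/occne/pxelinux/vmlinuz 777
# check_mode /var/occne/pxelinux/initrd.img 777
```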
Step 5: Copy and Update .repo files
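The source does not include the description or file contents for this step. Purely as an illustrative sketch of the yum .repo file format involved, a repository definition pointing at a site-local mirror typically looks like the following; the section name, host, and path are placeholders, not the actual OCCNE values.

```
[local-ol7-latest]
name=Oracle Linux 7 Latest (site-local mirror)
baseurl=http://<site-repo-host>/yum/OracleLinux/OL7/latest/x86_64/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
gpgcheck=1
enabled=1
```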
Step 6: Execute the OS Install on the Hosts from the Bastion Host

This step requires executing docker run for four different Ansible tags.

Note: The <image_name>:<image_tag> represents the images in the docker image registry accessible by the Bastion Host.

Run the docker command below to create a container running bash. This command must include the -it option and the bash executable at the end of the command. After this command executes, the user prompt runs within the container.

$ docker run --rm --network host --cap-add=NET_ADMIN -v /var/occne/<cluster_name>/:/host -v /var/occne/:/var/occne:rw -e "OCCNEARGS=--skip-tags datastore,vms_provision,yum_configure" <image_name>:<image_tag>

$ docker run --rm --network host --cap-add=NET_ADMIN -v /var/occne/<cluster_name>/:/host -v /var/occne/:/var/occne:rw -e "OCCNEARGS=--tags yum_configure" <image_name>:<image_tag>

Example:

$ docker run -it --rm --network host --cap-add=NET_ADMIN -v /var/occne/rainbow.lab.us.oracle.com/:/host -v /var/occne/:/var/occne:rw -e "OCCNEARGS=--skip-tags datastore,vms_provision,yum_configure" 10.75.200.217:5000/occne/provision:1.2.0

$ docker run -it --rm --network host --cap-add=NET_ADMIN -v /var/occne/rainbow.lab.us.oracle.com/:/host -v /var/occne/:/var/occne:rw -e "OCCNEARGS=--tags yum_configure" 10.75.200.217:5000/occne/provision:1.2.0
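Because the two runs differ only in the OCCNEARGS value, they can be assembled by a small helper that prints the command line for review before execution. This is a sketch, not part of the official tooling; the function name and example values are illustrative.

```shell
# build_provision_cmd: assemble the docker run command used in this step
# and print it (rather than executing it), so it can be reviewed first.
#   $1 = cluster name (occne_cluster_name from hosts.ini)
#   $2 = image, e.g. <registry>/occne/provision:<tag>
#   $3 = OCCNEARGS value, e.g. "--tags yum_configure"
build_provision_cmd() {
    printf 'docker run --rm --network host --cap-add=NET_ADMIN '
    printf -- '-v /var/occne/%s/:/host -v /var/occne/:/var/occne:rw ' "$1"
    printf -- '-e "OCCNEARGS=%s" %s\n' "$3" "$2"
}

# Example (values are illustrative):
# build_provision_cmd rainbow.lab.us.oracle.com \
#     10.75.200.217:5000/occne/provision:1.2.0 \
#     "--skip-tags datastore,vms_provision,yum_configure"
```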
Step 7: Update Network configuration on Master node

Execute the following steps on the Master Nodes.
Step 8: Update Network configuration on Worker node

Execute the following steps on the Worker Nodes.
Step 9: Re-instantiate the management link bridge on RMS1