Oracle® Communications OC-CNE Installation Guide
Release 1.0
F16979-01

Install Host OS onto RMS2 from the Installer Bootstrap Host (RMS1)

Introduction

These procedures provide the steps required to install the OL7 image onto RMS2 from the Installer Bootstrap Host using an occne/os_install container. Once completed, RMS2 includes all of the rpm updates and tools necessary to install the Bastion Host.

Prerequisites

Limitations and Expectations

All steps are executed from a laptop connected via an SSH application (such as PuTTY) to the Management Interface.

Procedures

Table 3-8 Procedure to install the OL7 image onto the RMS2 via the installer bootstrap host

Step # Procedure Description
1.

Copy the Necessary Files from the Utility USB to Support the OS Install This procedure copies all supporting files from the Utility USB to the appropriate directories so that the OS Install Container can successfully install OL7 onto RMS2.

Note: The cluster_name field is derived from the occne_cluster_name field in the hosts.ini file.

  1. Create the directories needed on the Installer Bootstrap Host.
    $ mkdir /var/occne
    $ mkdir /var/occne/<cluster_name>
    $ mkdir /var/occne/<cluster_name>/yum.repos.d
    
  2. Mount the Utility USB.

    Note: Instructions for mounting a USB in Linux are at: OCCNE Installation of Oracle Linux 7.5 on Bootstrap Host : Install Additional Packages. Only follow steps 1-4 to mount the USB.

  3. Copy the hosts.ini file (created using procedure: OCCNE Inventory File Preparation) into the /var/occne/<cluster_name>/ directory. This hosts.ini file defines RMS2 to the OS Installer Container running the os-install image downloaded from the repo.
    $ cp /media/usb/hosts.ini /var/occne/<cluster_name>/hosts.ini
  4. Update the hosts.ini file to include the ToR host_net (vlan3) VIP for NTP clock synchronization. Use the ToR VIP address, as defined in procedure: OCCNE 1.0 Installation PreFlight Checklist : Complete OA and Switch IP Switch Table, as the NTP source.
    $ vim /var/occne/<cluster_name>/hosts.ini
    
    Update the ntp_server field with the VIP address.
  5. Copy the customer-specific ol7-mirror.repo and docker-ce-stable.repo files from the Utility USB to the Installer Bootstrap Host.

    These are the .repo files created by the customer that provide access to the onsite (within their network) repositories needed to complete the full deployment of OCCNE 1.0 and to install docker-ce onto the Installer Bootstrap Host.
    $ cp /media/usb/ol7-mirror.repo /var/occne/<cluster_name>/yum.repos.d/ol7-mirror.repo
    $ cp /media/usb/ol7-mirror.repo /etc/yum.repos.d/ol7-mirror.repo
    $ cp /media/usb/docker-ce-stable.repo /etc/yum.repos.d/docker-ce-stable.repo
    
  6. If still enabled from procedure: OCCNE Installation of Oracle Linux 7.5 on Bootstrap Host, disable the /etc/yum.repos.d/Media.repo file.

    $ mv /etc/yum.repos.d/Media.repo /etc/yum.repos.d/Media.repo.disable
  7. Copy the updated version of the kickstart configuration file to /var/occne/<cluster_name> directory.
    $ cp /media/usb/occne-ks.cfg.j2.new /var/occne/<cluster_name>/occne-ks.cfg.j2.new
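The directory setup and file copies in the sub-steps above can be sketched as one script. This is a minimal sketch only: it runs in temporary directories standing in for /var/occne and the mounted Utility USB, and the cluster name `rainbow` is an example value taken from the later examples in this procedure.

```shell
#!/bin/sh
# Sketch of step 1: stage the OS-install support files for a cluster.
# BASE stands in for /var/occne and SRC for /media/usb in this sketch;
# CLUSTER comes from the occne_cluster_name field in hosts.ini.
set -e
BASE=$(mktemp -d)
SRC=$(mktemp -d)
CLUSTER=rainbow

# Stand-ins for the files that would come from the Utility USB.
touch "$SRC/hosts.ini" "$SRC/ol7-mirror.repo" "$SRC/occne-ks.cfg.j2.new"

# Create the cluster directory tree and copy the support files.
mkdir -p "$BASE/$CLUSTER/yum.repos.d"
cp "$SRC/hosts.ini"           "$BASE/$CLUSTER/hosts.ini"
cp "$SRC/ol7-mirror.repo"     "$BASE/$CLUSTER/yum.repos.d/ol7-mirror.repo"
cp "$SRC/occne-ks.cfg.j2.new" "$BASE/$CLUSTER/occne-ks.cfg.j2.new"

ls "$BASE/$CLUSTER"
```

On the real Installer Bootstrap Host the same copies run against /var/occne and /media/usb directly, as shown in the sub-steps above.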
2.

Copy the OL7 ISO to the Installer Bootstrap Host

The ISO file should be accessible from a customer site-specific repository; it is reachable because the ToR switch configurations were completed in procedure: OCCNE Configure Top of Rack 93180YC-EX Switches.

From RMS1, copy the OL7 ISO file to the /var/occne directory. The example below uses OracleLinux-7.5-x86_64-disc1.iso. Note: If the user copies this ISO from their laptop, they must use an application like WinSCP pointing to the Management Interface IP.

$ scp <usr>@<site_specific_address>:/<path_to_iso>/OracleLinux-7.5-x86_64-disc1.iso /var/occne/OracleLinux-7.5-x86_64-disc1.iso
3.

Install Docker onto the Installer Bootstrap Host
Use YUM to install docker-ce onto the Installer Bootstrap Host. YUM uses the existing <customer_specific_repo_file>.repo in the /etc/yum.repos.d directory.
$ yum install docker-ce-18.06.1.ce-3.el7.x86_64
4.

Set up access to the Docker Registry on the Installer Bootstrap Host
  1. Add an entry to the /etc/hosts file on the Installer Bootstrap Host to map the docker registry name, using the hosts.ini fields occne_private_registry and occne_private_registry_address from OCCNE Inventory File Preparation.

    <occne_private_registry_address> <occne_private_registry>

    Example: 10.75.200.217 reg-1

  2. Create the /etc/docker/daemon.json file on the Installer Bootstrap Host. Add an entry for the insecure-registries for the docker registry.
    $ mkdir /etc/docker
    $ vi /etc/docker/daemon.json
    Enter the following:

    {
      "insecure-registries": ["<occne_private_registry>:<occne_private_registry_port>"]
    }

    Example:

    $ cat /etc/docker/daemon.json
    {
      "insecure-registries": ["reg-1:5000"]
    }
    
    To verify:

    $ ping <occne_private_registry>

    Example:

    # ping reg-1
    PING reg-1 (10.75.200.217) 56(84) bytes of data.
    64 bytes from reg-1 (10.75.200.217): icmp_seq=1 ttl=61 time=0.248 ms
    64 bytes from reg-1 (10.75.200.217): icmp_seq=2 ttl=61 time=0.221 ms
    64 bytes from reg-1 (10.75.200.217): icmp_seq=3 ttl=61 time=0.239 ms
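The registry name mapping and daemon.json from the two sub-steps above can be generated from the inventory values. A minimal sketch, writing into a temporary directory standing in for /etc; the registry name, address, and port are the example values used above.

```shell
#!/bin/sh
# Sketch: generate the docker registry name mapping and the
# insecure-registries entry. ETC is a sandbox stand-in for /etc.
set -e
ETC=$(mktemp -d)
REG_NAME=reg-1           # occne_private_registry (example)
REG_ADDR=10.75.200.217   # occne_private_registry_address (example)
REG_PORT=5000            # occne_private_registry_port (example)

# Name mapping that would go in /etc/hosts.
echo "$REG_ADDR $REG_NAME" >> "$ETC/hosts"

# /etc/docker/daemon.json with the insecure registry entry.
mkdir -p "$ETC/docker"
cat > "$ETC/docker/daemon.json" <<EOF
{
  "insecure-registries": ["$REG_NAME:$REG_PORT"]
}
EOF

cat "$ETC/docker/daemon.json"
```

On the host, the same content goes into /etc/hosts and /etc/docker/daemon.json as shown in the sub-steps above.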
  3. Create the docker service http-proxy.conf file.
    $ mkdir -p /etc/systemd/system/docker.service.d/
    
    $ vi /etc/systemd/system/docker.service.d/http-proxy.conf
    
    Add the following:

    [Service]
    Environment="NO_PROXY=<occne_private_registry_address>,<occne_private_registry>,127.0.0.1,localhost"

    Example:

    [Service]
    Environment="NO_PROXY=10.75.200.217,reg-1,127.0.0.1,localhost"
    
  4. Start the docker daemon
    $ systemctl daemon-reload
    $ systemctl restart docker
    $ systemctl enable docker
     
    Verify docker is running:
    $ ps -elf | grep docker
    $ systemctl status docker
    
5.

Set up NFS on the Installer Bootstrap Host

Run the following commands (assumes nfs-utils has already been installed in procedure: OCCNE Installation of Oracle Linux 7.5 on Bootstrap Host : Install Additional Packages).

Note: The IP address used in the echo command is the Platform VLAN IP Address (VLAN 3) of the Bootstrap Host (RMS 1) as given in: OCCNE 1.0 Installation PreFlight Checklist : Complete Site Survey Host Table.

$ echo '/var/occne 172.16.3.4/24(ro,no_root_squash)' >> /etc/exports
$ systemctl start nfs-server
$ systemctl enable nfs-server
Verify NFS is running:
$ ps -elf | grep nfs
$ systemctl status nfs-server
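The export line above can be built from the VLAN 3 address rather than typed literally, which avoids the easy-to-miss spacing errors in shell redirection. A minimal sketch, using a temporary file in place of /etc/exports:

```shell
#!/bin/sh
# Sketch: append the /var/occne export for the platform VLAN network.
# EXPORTS is a sandbox stand-in for /etc/exports; RMS1_VLAN3 is the
# Bootstrap Host platform VLAN (VLAN 3) address from the site survey.
set -e
EXPORTS=$(mktemp)
RMS1_VLAN3=172.16.3.4

# ro = read-only export; no_root_squash keeps remote root as root.
echo "/var/occne $RMS1_VLAN3/24(ro,no_root_squash)" >> "$EXPORTS"
cat "$EXPORTS"
```

After editing the real /etc/exports, the nfs-server service is started and enabled as shown above.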
6.

Set up the Boot Loader on the Installer Bootstrap Host
Execute the following commands:
$ mkdir -p /var/occne/pxelinux
$ mount -t iso9660 -o loop /var/occne/OracleLinux-7.5-x86_64-disc1.iso /mnt
$ cp /mnt/isolinux/initrd.img /var/occne/pxelinux
$ cp /mnt/isolinux/vmlinuz /var/occne/pxelinux
7.

Verify and Set the PXE Configuration File Permissions on the Installer Bootstrap Host
Each file configured in the step above must be readable and writable.
$ chmod 777 /var/occne/pxelinux
$ chmod 777 /var/occne/pxelinux/vmlinuz
$ chmod 777 /var/occne/pxelinux/initrd.img
8.

Disable DHCP and TFTP on the Installer Bootstrap Host
The TFTP and DHCP services on the Installer Bootstrap Host may still be running. These services must be disabled.
$ systemctl stop dhcpd
$ systemctl disable dhcpd
$ systemctl stop tftp
$ systemctl disable tftp
9.

Disable SELINUX
SELINUX must be set to permissive mode. A reboot of the system is required for the change to take effect. Use the getenforce command to determine the current SELINUX status.
$ getenforce
Enforcing
If the output of this command displays Enforcing, change the mode to permissive by editing the /etc/selinux/config file.
$ vi /etc/selinux/config
Change the SELINUX variable to permissive: SELINUX=permissive
Save the file.
Reboot the system: reboot
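The edit above can also be made non-interactively with sed instead of vi. A minimal sketch, operating on a temporary copy of the config file with example contents; on the host the target is /etc/selinux/config and a reboot is still required afterwards.

```shell
#!/bin/sh
# Sketch: set SELINUX=permissive non-interactively.
# CFG is a sandbox copy standing in for /etc/selinux/config.
set -e
CFG=$(mktemp)
printf 'SELINUXTYPE=targeted\nSELINUX=enforcing\n' > "$CFG"  # example contents

# Rewrite only the SELINUX= line (SELINUXTYPE= is left untouched).
sed -i 's/^SELINUX=.*/SELINUX=permissive/' "$CFG"
grep '^SELINUX=' "$CFG"
```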
10.

Execute the OS Install on RMS2 from the Installer Bootstrap Host

This step requires executing docker run for four different Ansible tags.

Note: The initial OS install is performed from the OS install container running bash because the new kickstart configuration file must be copied over the existing configuration prior to executing the Ansible playbook.

  1. Run the docker command below to start the OS install container with a bash shell. The OS install and the subsequent commands (steps) that set up the environment for yum repository support, datastore, and security are then executed from this container, in the order listed.
    $ docker run -it --rm --network host --cap-add=NET_ADMIN -v /var/occne/<cluster_name>/:/host -v /var/occne/:/var/occne:rw <image_name>:<image_tag> bash
     
    Example:
     
    $ docker run -it --rm --network host --cap-add=NET_ADMIN -v /var/occne/rainbow/:/host -v /var/occne/:/var/occne:rw reg-1:5000/os_install:1.0.1 bash
    
  2. From the container, copy the /var/occne/occne-ks.cfg.j2.new file (which is mounted to the /host directory on the container) over the existing /install/os-install/roles/pxe_config/templates/ks/occne-ks.cfg.j2 file.
    $ cp /host/occne-ks.cfg.j2.new /install/roles/pxe_config/templates/ks/occne-ks.cfg.j2
  3. Install the OS onto RMS2 using the ansible command indicated below. This command can take up to 30 minutes to complete.
    $ ansible-playbook -i /host/hosts.ini --become --become-user=root --private-key /host/.ssh/occne_id_rsa /install/os-install.yaml --limit <RMS2 db node from hosts.ini file>,localhost --skip-tags "ol7_hardening,datastore,yum_update"
     
    Example:
     
    ansible-playbook -i /host/hosts.ini --become --become-user=root --private-key /host/.ssh/occne_id_rsa /install/os-install.yaml --limit db-2.rainbow.lab.us.oracle.com,localhost --skip-tags "ol7_hardening,datastore,yum_update"
    
  4. Configure db-2 management interface.

    The <vlan_4_ip_address> is from OCCNE 1.0 Installation PreFlight Checklist : Complete Site Survey Host IP Table.

    The <ToRswitch_CNEManagementNet_VIP> is from OCCNE 1.0 Installation PreFlight Checklist : ToR and Enclosure Switches Variables Table (Switch Specific).

    $ scp /tmp/ifcfg-* root@<db-2 host_net address>:/tmp
    $ ssh root@<db-2 host_net address>
     
    $ sudo su
    $ cd /etc/sysconfig/network-scripts/
     
    $ cp /tmp/ifcfg-vlan ifcfg-team0.4
    $ sed -i 's/{BRIDGE_NAME}/vlan4-br/g' ifcfg-team0.4
    $ sed -i 's/{PHY_DEV}/team0/g' ifcfg-team0.4
    $ sed -i 's/{VLAN_ID}/4/g' ifcfg-team0.4
    $ sed -i 's/{IF_NAME}/team0.4/g' ifcfg-team0.4
    $ echo "BRIDGE=vlan4-br" >> ifcfg-team0.4
     
    $ cp /tmp/ifcfg-bridge ifcfg-vlan4-br
    $ sed -i 's/{BRIDGE_NAME}/vlan4-br/g' ifcfg-vlan4-br
    $ sed -i 's/DEFROUTE=no/DEFROUTE=yes/g' ifcfg-vlan4-br
    $ sed -i 's/{IP_ADDR}/<vlan_4_ip_address>/g' ifcfg-vlan4-br
    $ sed -i 's/{PREFIX_LEN}/29/g' ifcfg-vlan4-br
    $ echo "GATEWAY=<ToRswitch_CNEManagementNet_VIP>" >> ifcfg-vlan4-br
     
    $ service network restart
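The sed-based template fill in the sub-step above can be exercised end-to-end in a sandbox. This is a sketch only: the template contents are minimal stand-ins for the real /tmp/ifcfg-* files, and the IP address and gateway are hypothetical example values, not site values.

```shell
#!/bin/sh
# Sketch of sub-step 4: fill the ifcfg templates for the vlan4 bridge.
# Runs entirely in a temporary directory; addresses are examples only.
set -e
DIR=$(mktemp -d)
cd "$DIR"
VLAN4_IP=10.75.200.50   # stand-in for <vlan_4_ip_address>
GW=10.75.200.1          # stand-in for <ToRswitch_CNEManagementNet_VIP>

# Minimal stand-ins for the templates copied over from RMS1.
printf 'DEVICE={IF_NAME}\nVLAN=yes\nPHYSDEV={PHY_DEV}\n' > ifcfg-vlan
printf 'DEVICE={BRIDGE_NAME}\nIPADDR={IP_ADDR}\nPREFIX={PREFIX_LEN}\nDEFROUTE=no\n' > ifcfg-bridge

# VLAN interface: bind team0.4 to the physical team and its bridge.
cp ifcfg-vlan ifcfg-team0.4
sed -i 's/{PHY_DEV}/team0/g; s/{IF_NAME}/team0.4/g' ifcfg-team0.4
echo "BRIDGE=vlan4-br" >> ifcfg-team0.4

# Bridge interface: fill name, address, prefix, and default route.
cp ifcfg-bridge ifcfg-vlan4-br
sed -i "s/{BRIDGE_NAME}/vlan4-br/g; s/DEFROUTE=no/DEFROUTE=yes/g; s/{IP_ADDR}/$VLAN4_IP/g; s/{PREFIX_LEN}/29/g" ifcfg-vlan4-br
echo "GATEWAY=$GW" >> ifcfg-vlan4-br

cat ifcfg-team0.4 ifcfg-vlan4-br
```

On db-2 the same substitutions run against the real templates in /etc/sysconfig/network-scripts/, followed by the network restart shown above.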
  5. Execute yum-update using docker.

    This step disables any existing .repo files in the /etc/yum.repos.d directory on RMS2 after the OS install. It then copies any .repo files from the /var/occne/<cluster_name>/yum.repos.d directory into /etc/yum.repos.d and sets up the customer repo access.

    $ docker run --rm --network host --cap-add=NET_ADMIN -v /var/occne/<cluster_name>/:/host -v /var/occne/:/var/occne:rw -e "OCCNEARGS=--limit <RMS2 db node from hosts.ini file>,localhost --tags yum_update" <image_name>:<image_tag>
     
    Example:
     
    $ docker run -it --rm --network host --cap-add=NET_ADMIN -v /var/occne/rainbow/:/host -v /var/occne/:/var/occne:rw -e "OCCNEARGS=--limit db-2.rainbow.lab.us.oracle.com,localhost --tags yum_update" reg-1:5000/os_install:1.0.1
    
  6. Check the /etc/yum.repos.d directory on RMS2 for non-disabled repo files. The only file that should be enabled is the customer-specific .repo file that was staged in the /var/occne/<cluster_name>/yum.repos.d directory on RMS1. Any other file that is not disabled must be renamed to <filename>.repo.disabled.
    $ cd /etc/yum.repos.d
    $ ls
     
    Check for any files other than the customer specific .repo file that are not listed as disabled. If any exist, disable them using the following command:
     
    $ mv <filename>.repo <filename>.repo.disabled
  7. Execute datastore using docker.
    $ docker run --rm --network host --cap-add=NET_ADMIN -v /var/occne/<cluster_name>/:/host -v /var/occne/:/var/occne:rw -e "OCCNEARGS=--limit <RMS2 db node from hosts.ini file>,localhost --tags datastore" <image_name>:<image_tag>
     
    Example:
     
    $ docker run -it --rm --network host --cap-add=NET_ADMIN -v /var/occne/rainbow/:/host -v /var/occne/:/var/occne:rw -e "OCCNEARGS=--limit db-2.rainbow.lab.us.oracle.com,localhost --tags datastore" reg-1:5000/os_install:1.0.1
  8. Execute the OL7 hardening using docker.

    Note: The two extra-vars included in the command are not used in the context of this command but need to be there to set the values to something other than an empty string.

    $ docker run --rm --network host --cap-add=NET_ADMIN -v /var/occne/<cluster_name>/:/host -v /var/occne/:/var/occne:rw -e "OCCNEARGS=--limit <RMS2 db node from hosts.ini file>,localhost --tags ol7_hardening --extra-vars ansible_env=172.16.3.4 --extra-vars http_proxy=172.16.3.4" <image_name>:<image_tag>
     
    Example:
     
    $ docker run -it --rm --network host --cap-add=NET_ADMIN -v /var/occne/rainbow/:/host -v /var/occne/:/var/occne:rw -e "OCCNEARGS=--limit db-2.rainbow.lab.us.oracle.com,localhost --tags ol7_hardening --extra-vars ansible_env=172.16.3.4 --extra-vars http_proxy=172.16.3.4" reg-1:5000/os_install:1.0.1
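The tag-driven stages in this step differ only in the --tags value and any extra-vars. The pattern can be sketched as a dry-run helper that assembles and echoes each docker run command rather than executing it; the cluster name, node name, and image are example values from this procedure, and quoting is flattened by echo, so this is for inspection only.

```shell
#!/bin/sh
# Dry-run sketch: assemble the docker run commands for the yum_update,
# datastore, and ol7_hardening stages. Commands are echoed, not run.
set -e
CLUSTER=rainbow                          # example cluster name
NODE=db-2.rainbow.lab.us.oracle.com      # example RMS2 db node
IMAGE=reg-1:5000/os_install:1.0.1        # example image

run_stage() {
  tags=$1
  extra=$2
  echo docker run --rm --network host --cap-add=NET_ADMIN \
    -v "/var/occne/$CLUSTER/:/host" -v /var/occne/:/var/occne:rw \
    -e "OCCNEARGS=--limit $NODE,localhost --tags $tags $extra" "$IMAGE"
}

# The three tagged stages, in the required order.
run_stage yum_update
run_stage datastore
run_stage ol7_hardening "--extra-vars ansible_env=172.16.3.4 --extra-vars http_proxy=172.16.3.4"
```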