Bastion Host Installation

This section outlines the use of the Installer Bootstrap Host to provision db-2/RMS2 with an operating system and configure it to fulfill the role of Database Host. After the Bastion Host is created, it is used to complete the installation of OCCNE.

Provision Second Database Host (RMS2) from Installer Bootstrap Host (RMS1)

Table 2-11 Terminology used in Procedure

Name Description
bastion_full_name This is the full name of the Bastion Host as defined in the hosts.ini file.

Example: bastion-2.rainbow.us.labs.oracle.com

bastion_kvm_host_full_name This is the full name of the KVM server (usually RMS2/db-2) that hosts the Bastion Host VM.

Example: db-2.rainbow.us.labs.oracle.com

bastion_kvm_host_ip_address This is the IPv4 ansible_host IP address of the server (usually RMS2/db-2) that hosts the Bastion Host VM.

Example: 172.16.3.5

bastion_short_name This is the name of the Bastion Host derived from the bastion_full_name up to the first ".".

Example: bastion-2

bastion_ip_address This is the internal IPv4 "ansible_host" address of the Bastion Host as defined within the hosts.ini file.

Example: 172.16.3.100 for bastion-2 on db-2

cluster_full_name This is the name of the cluster as defined in the hosts.ini file field: occne_cluster_name.

Example: rainbow.us.labs.oracle.com

cluster_short_name This is the short name of the cluster derived from the cluster_full_name up to the first ".".

Example: rainbow

Note:

The Bootstrap Host must be set up to use root/<customer_specific_root_password> as the credentials to access it. Setting that user/password is part of the instructions at: Installation of Oracle Linux 7.x on Bootstrap Host.

Table 2-12 Bastion Installation

Step # Procedure Description
1.

Copy the Necessary Files from the Utility USB to Support the OS Install
  1. Log in to the Bootstrap Host using the root credentials configured during OS installation of the Bootstrap Host.
  2. Create the directories needed on the Installer Bootstrap Host.
    $ mkdir -p /var/occne/cluster/<cluster_short_name>/yum.repos.d
  3. Mount the Utility USB.

    Note: Instructions for mounting a USB in Linux are at: Installation of Oracle Linux 7.x on Bootstrap Host.

  4. Copy the hosts.ini file (created using procedure: OCCNE Inventory File Preparation) into the /var/occne/cluster/<cluster_short_name>/ directory.

    This hosts.ini file defines the Bastion KVM Host to the Provision Container running the provision image downloaded from the repo.

    $ cp /<path_to_usb>/hosts.ini /var/occne/cluster/<cluster_short_name>/hosts.ini
     
    Example:
    $ cp /media/usb/hosts.ini /var/occne/cluster/rainbow/hosts.ini
  5. Edit the /var/occne/cluster/<cluster_short_name>/hosts.ini file to include the ToR host_net (vlan3) VIP for NTP clock synchronization. Use the ToR VIP address (ToRswitch_Platform_VIP) as defined in procedure: Installation PreFlight Checklist : Complete OA and Switch IP Table as the NTP source, and set the ntp_server field to that VIP address. Also update the occne_repo_host_address field to point to the internal address of this Bootstrap Host; this address is used for PXE booting and for accessing the NFS share on the Installer Bootstrap Host (db-1/RMS1). An illustrative hosts.ini fragment appears at the end of this step.
    Example (from hosts.sample.ini):
     
    ntp_server='172.16.3.1'
    occne_repo_host_address='172.16.3.4'
  6. Copy the customer specific .repo file from the Utility USB to the Installer Bootstrap Host.

    This is the .repo file created by the customer that provides access to the onsite (within their network) yum repositories needed to complete the full deployment of OCCNE 1.3 onto the Installer Bootstrap Host. It needs to be in two places, one for the local system, and one for the target systems.

    $ cp /<path_to_usb>/<customer_specific_repo>.repo /var/occne/cluster/<cluster_short_name>/yum.repos.d/.
    $ cp -r /var/occne/cluster/<cluster_short_name>/yum.repos.d /var/occne/.
    $ echo "reposdir=/var/occne/yum.repos.d" >> /etc/yum.conf
     
    Example:
    $ cp /media/usb/ol7-mirror.repo /var/occne/cluster/rainbow/yum.repos.d/
    $ cp -r /var/occne/cluster/rainbow/yum.repos.d /var/occne/
    $ echo "reposdir=/var/occne/yum.repos.d" >> /etc/yum.conf
2.

Set up the /etc/hosts file for the Central Repo and Verify Access
  1. Add an entry to the /etc/hosts file on the Installer Bootstrap Host to provide a name mapping for the Customer central repository.
    $ vi /etc/hosts
     
    Example:
    10.75.200.217 rainbow-reg
    
    To Verify:
    $ ping <central_repo_name>
     
    Example:
    # ping rainbow-reg
    PING rainbow-reg (10.75.200.217) 56(84) bytes of data.
    64 bytes from rainbow-reg (10.75.200.217): icmp_seq=1 ttl=61 time=0.248 ms
    64 bytes from rainbow-reg (10.75.200.217): icmp_seq=2 ttl=61 time=0.221 ms
    64 bytes from rainbow-reg (10.75.200.217): icmp_seq=3 ttl=61 time=0.239 ms
  2. To verify repo access, execute the following command:
    $ yum repolist
    
    Example:
    $ yum repolist
    Loaded plugins: ulninfo
    repo id                repo name                                                                   status
    !UEKR5/x86_64          Unbreakable Enterprise Kernel Release 5 for Oracle Linux 7 (x86_64)             80
    !addons/x86_64         Oracle Linux 7 Addons (x86_64)                                                  91
    !developer/x86_64      Packages for creating test and development environments for Oracle Linux 7     226
    !developer_EPEL/x86_64 Packages for creating test and development environments for Oracle Linux 7  13,246
    !ksplice/x86_64        Ksplice for Oracle Linux 7 (x86_64)                                            393
    !latest/x86_64         Oracle Linux 7 Latest (x86_64)                                               5,401
    repolist: 19,437
3.

Copy the OL7 ISO to the Installer Bootstrap Host

The ISO file must be accessible from a customer site-specific repository. This file should be reachable because the ToR switch configurations were completed in procedure: Configure Top of Rack 93180YC-EX Switches.

Copy the OL7 ISO file to the /var/occne directory. The example below uses OracleLinux-7.5-x86_64-disc1.iso. If this file was copied to the Utility USB, it can be copied from there into the same directory on the Bootstrap Host.

Note: If the user copies this ISO from their laptop then they must use an application like WinSCP pointing to the Management Interface IP.

$ scp <usr>@<site_specific_address>:/<path_to_iso>/OracleLinux-7.5-x86_64-disc1.iso /var/occne/OracleLinux-7.5-x86_64-disc1.iso
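Optionally, verify the integrity of the copied ISO before using it. This is a sketch that assumes a SHA-256 checksum is published alongside the ISO in the site repository; <expected_checksum> is a placeholder, not a real value:

$ sha256sum /var/occne/OracleLinux-7.5-x86_64-disc1.iso
<expected_checksum>  /var/occne/OracleLinux-7.5-x86_64-disc1.iso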
4.

Install Packages onto the Installer Bootstrap Host

Use YUM to install the necessary packages onto the Installer Bootstrap Host.
$ yum install docker-engine nfs-utils ansible
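As an optional check that the packages installed cleanly, a standard rpm query can be run; the output lists the installed versions:

$ rpm -q docker-engine nfs-utils ansible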
5.

Set up access to the Docker Registry on the Installer Bootstrap Host
  1. Copy the docker registry certificate to two places on the Bootstrap Host.

    Note: Obtaining the docker registry certificate (<source>) is not covered in this procedure. The user can use the instructions at reference 1 to understand this more thoroughly.

    $ mkdir -p /var/occne/certificates
    $ cp <source>.crt /var/occne/certificates/<occne_private_registry>:<occne_private_registry_port>.crt
    $ mkdir -p /etc/docker/certs.d/<occne_private_registry>:<occne_private_registry_port>
    $ cp <source>.crt /etc/docker/certs.d/<occne_private_registry>:<occne_private_registry_port>/ca.crt
     
    Example:
    $ mkdir -p /var/occne/certificates
    $ cp <source>.crt /var/occne/certificates/rainbow-reg:5000.crt
    $ mkdir -p /etc/docker/certs.d/rainbow-reg:5000
    $ cp <source>.crt /etc/docker/certs.d/rainbow-reg:5000/ca.crt 
  2. Start the docker daemon.
    $ systemctl daemon-reload
    $ systemctl restart docker
    $ systemctl enable docker
      
    Verify docker is running:
    $ ps -elf | grep docker
    $ systemctl status docker
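    As an optional sanity check that the certificate and registry access are working, the standard Docker Registry v2 API catalog endpoint can be queried. The placeholders match those used above; the JSON output shown is illustrative:

    $ curl --cacert /var/occne/certificates/<occne_private_registry>:<occne_private_registry_port>.crt https://<occne_private_registry>:<occne_private_registry_port>/v2/_catalog
    {"repositories":["occne/provision"]}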
6.

Setup NFS on the Installer Bootstrap Host

Run the following commands using sudo (assumes nfs-utils has already been installed in procedure: Installation of Oracle Linux 7.x on Bootstrap Host : Install Additional Packages).

Note: The IP address used in the echo command is the Platform VLAN IP Address (VLAN 3) of the Bootstrap Host (RMS1) as given in: Installation PreFlight Checklist : Site Survey Host Table.
$ echo '/var/occne 172.16.3.4/24(ro,no_root_squash)' >> /etc/exports
$ systemctl start nfs-server
$ systemctl enable nfs-server
  
Verify nfs is running:
$ ps -elf | grep nfs
$ systemctl status nfs-server
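To confirm the export is active, the following checks can be used (both utilities are part of nfs-utils; output is illustrative):

$ exportfs -v
$ showmount -e localhost
Export list for localhost:
/var/occne 172.16.3.4/24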
7.

Set up the Boot Loader on the Installer Bootstrap Host

Execute the following commands:
$ mkdir -p /var/occne/pxelinux
$ mount -t iso9660 -o loop /var/occne/OracleLinux-7.5-x86_64-disc1.iso /mnt
$ cp /mnt/isolinux/initrd.img /var/occne/pxelinux
$ cp /mnt/isolinux/vmlinuz /var/occne/pxelinux
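Optionally confirm that the kernel and initial ramdisk were copied, and release the ISO mount afterwards (the umount is optional cleanup, not part of the original procedure):

$ ls -l /var/occne/pxelinux/initrd.img /var/occne/pxelinux/vmlinuz
$ umount /mnt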
8.

Verify and Set the PXE Configuration File Permissions on the Installer Bootstrap Host

Each file configured in the step above must have read and write permissions.
$ chmod -R 777 /var/occne/pxelinux
9.

Disable DHCP and TFTP on the Installer Bootstrap Host

The TFTP and DHCP services may still be running on the Installer Bootstrap Host. These services must be disabled.
$ systemctl stop dhcpd
$ systemctl disable dhcpd
$ systemctl stop tftp
$ systemctl disable tftp
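To confirm both services are stopped and will not start on boot (illustrative; systemctl reports one state per unit):

$ systemctl is-active dhcpd tftp
$ systemctl is-enabled dhcpd tftp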
10.

Disable SELINUX

Set SELINUX to permissive mode. A reboot of the system is required for the change to take effect. The getenforce command is used to determine the current SELINUX mode.
$ getenforce
Enforcing
  
If the output of this command displays "Enforcing", change the mode to "permissive" by editing the /etc/selinux/config file.
  
$ vi /etc/selinux/config
  
Change the SELINUX variable to permissive: SELINUX=permissive
Save the file.
  
Reboot the system: reboot
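As an alternative to editing the file by hand, the same change can be scripted. This sketch assumes the file still contains the default SELINUX=enforcing line; after the reboot, getenforce should report Permissive:

$ sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config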
11.

Generate the SSH private and public keys on Bootstrap Host.

This command generates a private and public key pair for the cluster. These keys are passed to the Bastion Host and used to communicate with other nodes from that Bastion Host. The public key is passed to each node on OS install. Do not supply a passphrase; the -N "" option in the command below supplies an empty one.

Note: The private key (occne_id_rsa) must be copied to a server that is going to access the Bastion Host, because the Bootstrap Host is repaved later in the installation. This key is used later in the procedure to access the Bastion Host after it has been created (see the sketch after the commands below).

Execute the following commands on the Bootstrap Host:
$ mkdir -m 0700 /var/occne/cluster/<cluster_short_name>/.ssh
$ ssh-keygen -b 4096 -t rsa -C "occne installer key" -f "/var/occne/cluster/<cluster_short_name>/.ssh/occne_id_rsa" -q -N ""
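Per the note above, preserve the private key off-box before the Bootstrap Host is repaved. A minimal sketch follows; the <remote_user>@<remote_host> destination is a hypothetical placeholder for a server that will later access the Bastion Host:

$ ls -l /var/occne/cluster/<cluster_short_name>/.ssh/
$ scp /var/occne/cluster/<cluster_short_name>/.ssh/occne_id_rsa <remote_user>@<remote_host>:~/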
12.

Execute the OS Install and Bastion VM Creation on Bastion KVM Host (RMS2) from the Installer Bootstrap Host
  1. Run the docker commands below to perform the OS install and Bastion Host VM creation on the Bastion KVM Host (RMS2):
    $ docker run --rm --network host --cap-add=NET_ADMIN -v /var/occne/cluster/<cluster_short_name>/:/host -v /var/occne/:/var/occne:rw -e "OCCNEARGS=--limit=<bastion_full_name>,<bastion_kvm_host_full_name>,localhost" <image_name>:<image_tag>
      
      
    Example:
      
    $ docker run -it --rm --network host --cap-add=NET_ADMIN -v /var/occne/cluster/rainbow/:/host -v /var/occne/:/var/occne:rw -e "OCCNEARGS=--limit=bastion-2.rainbow.us.labs.oracle.com,db-2.rainbow.us.labs.oracle.com,localhost" winterfell:5000/occne/provision:1.3.0
    
  2. Verify that the Bastion Host VM is installed by logging into RMS2/db-2 and issuing the following command. The <ansible_host> field (which is an IPv4 address) is taken from the db-2 entry in the hosts.ini file (host_hp_gen_x group).

    Note: This command is optional. If a failure had actually occurred, the docker run command would have reported errors.

    $ ssh -i /var/occne/cluster/<cluster_short_name>/.ssh/occne_id_rsa admusr@<ansible_host>
     
    $ sudo virsh list
      
    Example:
    $ ssh -i /var/occne/cluster/rainbow/.ssh/occne_id_rsa admusr@10.75.148.6
    $ sudo virsh list
     
     Id    Name                           State
    ----------------------------------------------------
     11    bastion-2.rainbow.us.labs.oracle.com   running
  3. Log in to the Bastion Host from the Bootstrap Host as admusr, using the generated key from the /var/occne/cluster/<cluster_short_name> directory, to confirm the VM is set up correctly. The <oam_host> field (which is an IPv4 address) is derived from the bastion-2 entry in the hosts.ini file (host_kernel_virtual group).

    Note: This command is optional. If a failure had actually occurred, the docker run command would have reported errors.

    $ ssh -i /var/occne/cluster/<cluster_short_name>/.ssh/occne_id_rsa admusr@<oam_host>
     
    Example:
    $ ssh -i /var/occne/cluster/rainbow/.ssh/occne_id_rsa admusr@10.75.148.5
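    Once logged in, a quick check confirms the session is on the Bastion VM rather than the KVM host (output shown is illustrative):

    $ hostname
    bastion-2.rainbow.us.labs.oracle.com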