Install and Configure Red Hat OpenShift 4.13 on Oracle Cloud VMware Solution using Assisted Installer

For our demo implementation, we are using Oracle Cloud VMware Solution with Standard Shapes to deploy Red Hat OpenShift 4.13.

However, the procedure remains the same for Oracle Cloud VMware Solution with Dense I/O Shapes. We deployed the Red Hat OpenShift Container Platform into the SDDC using a Red Hat Cloud subscription and followed the Assisted Installer workflow for Red Hat OpenShift on the SDDC.

Before You Begin

Before you begin installing Red Hat OpenShift 4.13 on Oracle Cloud VMware Solution, complete the following prerequisites:

  • Oracle Cloud VMware Solution environment with a minimum of 3 nodes for a production Red Hat OpenShift implementation.
  • NSX-T overlay segment with DHCP and internet access enabled.
  • Red Hat Cloud subscription to perform the initial Assisted Installer steps.
  • OCI Block Volumes dedicated per Red Hat OpenShift VM, if Oracle Cloud VMware Solution is deployed using Standard Shapes.
  • DNS server for name resolution.
  • Administrative privileges on Oracle Cloud VMware Solution vCenter Server.

Setup Details

We used the following setup for our demo implementation.

  • Oracle Cloud VMware Solution version 7.0.3 with Standard Shape deployment.
  • Dedicated block volumes per Red Hat OpenShift infrastructure VM, presented as datastores in the SDDC (applicable only to Standard Shapes; with Dense I/O Shapes, a single vSAN datastore is used).
  • NSX-T Overlay segment of CIDR 10.60.10.0/24.
  • DNS Server with the domain name ocp.local deployed as an OCI compute instance.

Install Red Hat OpenShift 4.13

The following steps describe the Red Hat SaaS Assisted Installer workflow. You can follow your own choice of Red Hat OpenShift installation method instead.

  1. Log in to https://console.redhat.com/ with a registered username. For first-time users, create an account.
  2. Click OpenShift, Clusters, and then click Create cluster.
  3. Select Datacenter as the cluster type and then select vSphere.
  4. Under Assisted Installer, click Create cluster.
  5. Complete the following details:
    1. Cluster name: the name of the cluster.
    2. Base Domain: DNS domain name for the name resolution.
    3. OpenShift version: We have used OpenShift version 4.13.4.
    4. CPU architecture: Leave the default value.
    5. Hosts’ network configuration: Select DHCP only.
    6. Encryption of installation disks: Leave the default value.
  6. On the Operators screen, click Next.
  7. Under Host Discovery, click Add hosts, and complete the following details:
    1. From the Provisioning type drop-down list, select Minimal image file – Download an ISO that fetches content on boot.
    2. In the SSH public key field, paste the public key that you will use to connect to the discovery hosts. If you need to generate a key pair, see the sketch after this list.
    3. Click Generate Discovery ISO.
    4. Once the ISO is ready to be downloaded, click Download Discovery ISO.
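    Most users generate the SSH key pair with ssh-keygen. If you prefer to script it, the following is a minimal sketch using the third-party Python cryptography package; the output file names are placeholders, and any OpenSSH-format public key works:

      # Sketch: generate an ed25519 key pair in OpenSSH format.
      # Requires the "cryptography" package (pip install cryptography).
      from cryptography.hazmat.primitives import serialization
      from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

      private_key = Ed25519PrivateKey.generate()

      # Private key in OpenSSH PEM format; keep this file secure.
      private_bytes = private_key.private_bytes(
          encoding=serialization.Encoding.PEM,
          format=serialization.PrivateFormat.OpenSSH,
          encryption_algorithm=serialization.NoEncryption(),
      )

      # Public key in the single-line "ssh-ed25519 AAAA..." form expected
      # by the SSH public key field.
      public_bytes = private_key.public_key().public_bytes(
          encoding=serialization.Encoding.OpenSSH,
          format=serialization.PublicFormat.OpenSSH,
      )

      with open("id_ed25519", "wb") as f:
          f.write(private_bytes)
      with open("id_ed25519.pub", "wb") as f:
          f.write(public_bytes + b"\n")
      print(public_bytes.decode())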
  8. Click the Minimum hardware requirements link to review the control plane and worker node specifications.
  9. Log in to the Oracle Cloud VMware Solution vCenter server and create the OpenShift infrastructure VMs.
  10. Upload the ISO downloaded in Step 7d to the vSphere datastore. You can choose any management datastore to store the ISO, or script the upload as shown below.
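    You can upload the ISO through the vSphere Client, or script it against vCenter's datastore file endpoint. The following is a minimal sketch using the Python requests package; the vCenter hostname, datacenter name, datastore name, and credentials are placeholders for your environment:

      # Sketch: upload the discovery ISO to a vSphere datastore via the
      # HTTPS /folder datastore file endpoint. All names are placeholders.
      import requests

      VCENTER = "vcenter.example.local"        # assumption: vCenter FQDN
      DATACENTER = "SDDC-Datacenter"           # assumption: datacenter name
      DATASTORE = "mgmt-datastore"             # assumption: management datastore
      ISO_PATH = "iso/discovery_image.iso"     # target path in the datastore

      url = f"https://{VCENTER}/folder/{ISO_PATH}"
      params = {"dcPath": DATACENTER, "dsName": DATASTORE}

      with open("discovery_image.iso", "rb") as iso:
          resp = requests.put(
              url,
              params=params,
              data=iso,
              auth=("administrator@vsphere.local", "password"),
              verify=False,  # only if vCenter uses a self-signed certificate
          )
      resp.raise_for_status()
      print("Upload complete:", resp.status_code)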
  11. Create 3 controller and 3 worker vanilla VMs according to the hardware specifications gathered in Step 8. Make sure to adhere to the following guidelines for all VMs when creating the Red Hat OpenShift infrastructure nodes (controller and worker VMs).
    • The hardware specification lists physical cores; translate this into vCPUs when creating a VM.
    • Create VMware vSphere DRS affinity and anti-affinity rules as applicable to provide the highest possible resiliency to the Red Hat OpenShift infrastructure nodes.
    • Select Red Hat as the guest operating system when creating a VM.
    • Connect each VM to the NSX overlay segment prepared for this installation, which has DHCP and internet services enabled.
    • Keep each VM in its dedicated datastore for an Oracle Cloud VMware Solution with Standard Shapes deployment. For Oracle Cloud VMware Solution with Dense I/O Shapes, a single vSAN datastore is used for all the Red Hat OpenShift VMs.
    • Adjust the volume performance units (VPUs) of the OCI Block Volumes according to the performance requirements of the cluster. It is recommended to use 30 VPUs per GB for each OCI block volume.
    • Attach the uploaded ISO to each controller and worker VM to bootstrap Red Hat CoreOS.
    • Make sure the VM boots from the attached ISO when powered on. If required, edit the VM boot options to force entry into the EFI setup screen on the next boot.
    • Under the VM Options tab for each controller and worker VM, go to the Advanced section. Click ADD CONFIGURATION PARAMS and set disk.EnableUUID to TRUE. This parameter is needed because the installation runs on virtual machines and the guests must expose consistent disk UUIDs for the vSphere storage integration. To apply it programmatically, see the sketch after this list.
    If everything is configured correctly and the bootstrap process completes successfully, the VMs (identified by their MAC addresses) will start appearing on the Red Hat SaaS console under Host Inventory with the Ready status.
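    Setting disk.EnableUUID by hand across six VMs is error-prone. The following is a sketch that applies the parameter with pyVmomi; the VM names, vCenter address, and credentials are assumptions for illustration:

      # Sketch: set disk.EnableUUID = TRUE on each OpenShift infrastructure
      # VM using pyVmomi (pip install pyvmomi). All names are placeholders.
      import ssl
      from pyVim.connect import SmartConnect, Disconnect
      from pyVmomi import vim

      VM_NAMES = ["ocp-master-1", "ocp-master-2", "ocp-master-3",
                  "ocp-worker-1", "ocp-worker-2", "ocp-worker-3"]

      ctx = ssl._create_unverified_context()  # only for self-signed certs
      si = SmartConnect(host="vcenter.example.local",
                        user="administrator@vsphere.local",
                        pwd="password", sslContext=ctx)
      try:
          content = si.RetrieveContent()
          view = content.viewManager.CreateContainerView(
              content.rootFolder, [vim.VirtualMachine], True)
          for vm in view.view:
              if vm.name in VM_NAMES:
                  spec = vim.vm.ConfigSpec(extraConfig=[
                      vim.option.OptionValue(key="disk.EnableUUID",
                                             value="TRUE")])
                  vm.ReconfigVM_Task(spec=spec)
                  print(f"Requested reconfigure of {vm.name}")
          view.Destroy()
      finally:
          Disconnect(si)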
  12. Match each MAC address that appears on the console to its controller or worker VM, and edit the hostname and role for each VM. Select the check box for the entry, click Action, and then click Change hostname. Click the Auto-assign drop-down under the Role column and update the role.
    Once all the servers are updated, the status should display Ready.
  13. On the Host discovery page, make sure to enable the Integrate with your virtualization platform option, because Red Hat CoreOS for OpenShift is managed by vSphere, and then click Next.
    Under the Storage section, you should see the Ready status for the OpenShift VMs.
  14. For the Networking section, complete the following details:
    • Network Management: Leave the default as Cluster-Managed Networking.
    • Networking Stack type: Leave the default as IPv4.
    • Network type: Leave the default selection as Open Virtual Networking (OVN).
    • Machine network: By default, the NSX overlay network that is assigned to the OpenShift VMs will be selected.
    • API IP: Provide a free IP address from the same machine network for the API URL. Make sure to create a DNS record according to the internal or external usage.
    • Ingress IP: Provide a free IP address from the same machine network for Ingress traffic. Make sure to create a DNS record according to internal or external usage.
    • Host SSH Public keys: Leave the default setting and click Next. Make sure the node status always shows Ready; if not, check the VMs for further troubleshooting. A quick sanity check for the VIPs is sketched after this list.
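    Before committing the API and Ingress IPs, you can confirm that they fall inside the machine network. The following is a minimal sketch using only the Python standard library; the CIDR matches the demo setup and the VIP values are assumptions:

      # Sketch: confirm the API and Ingress VIPs sit inside the machine
      # network CIDR used in this demo (10.60.10.0/24). VIPs are placeholders.
      import ipaddress

      machine_network = ipaddress.ip_network("10.60.10.0/24")
      vips = {"api": "10.60.10.10", "ingress": "10.60.10.11"}  # assumptions

      for name, addr in vips.items():
          ip = ipaddress.ip_address(addr)
          if ip not in machine_network:
              raise SystemExit(f"{name} VIP {ip} is outside {machine_network}")
          print(f"{name} VIP {ip} is inside {machine_network}")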
  15. Review the summary and click Install Cluster. Monitor the installation progress. It takes approximately 40 minutes to 1 hour to complete the setup.
  • Make sure to create all required DNS records to access the Web Console and API URLs. The required DNS record details can be found under the Not able to access the Web Console? link.
  • Download the kubeconfig file and save it, as it will be deleted from the console after 20 days.
  • Note the Web Console URL, Username, and Password. They will be required for configuring the VMware vSphere connection settings. A quick check of the DNS records and kubeconfig is sketched below.
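  Once the DNS records are created and the kubeconfig is downloaded, a quick sanity check can confirm both. The following sketch assumes the demo domain ocp.local, a hypothetical cluster name ocp01, and the official Python kubernetes client:

    # Sketch: verify the cluster DNS records resolve and the downloaded
    # kubeconfig works. "ocp01" and "ocp.local" follow the demo setup.
    import socket
    from kubernetes import client, config  # pip install kubernetes

    records = [
        "api.ocp01.ocp.local",                             # API VIP record
        "console-openshift-console.apps.ocp01.ocp.local",  # *.apps wildcard
    ]
    for name in records:
        print(name, "->", socket.gethostbyname(name))

    # Load the kubeconfig downloaded from the console and list the nodes.
    config.load_kube_config(config_file="kubeconfig")
    for node in client.CoreV1Api().list_node().items:
        print(node.metadata.name, node.status.node_info.kubelet_version)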

Configure VMware vSphere Connection Settings

Keep the Username and Password you noted on the Cluster installation summary page of the Red Hat OpenShift installation handy to configure the VMware vSphere connection settings.

  1. To modify the default settings, access the web console URL and log in using kubeadmin.
  2. Once you are logged in to the web console, you should see a green check mark status for the various services. The vSphere connection will show an invalid credentials warning.
  3. Complete the post-install VMware vSphere configuration and validation as described in this document: Modify vSphere configuration of OCP cluster that was installed using the Assisted Installer.

    Note:

    • This procedure is not applicable to any other installation method of the OCP cluster.
    • Make sure the vCenter IP address is reachable from the machine network that is selected (NSX overlay to vCenter VLAN).
    • Make sure the default datastore selected is not part of a Storage DRS cluster; the storage operator does not work with a vSphere Storage DRS cluster and will fail if the selected datastore is a member. Move the datastore out of the storage cluster or select one that is not part of any storage cluster.
  4. Click Monitored operators; the Operator Status should display Healthy. You can also verify operator health from the command line, as sketched below.
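    The same check can be scripted against the ClusterOperator API. The following is a sketch using the official Python kubernetes client; the kubeconfig path is an assumption:

      # Sketch: list ClusterOperators and flag any that are not Available
      # or are Degraded, mirroring the Monitored operators console view.
      from kubernetes import client, config  # pip install kubernetes

      config.load_kube_config(config_file="kubeconfig")  # assumed local path
      api = client.CustomObjectsApi()

      operators = api.list_cluster_custom_object(
          group="config.openshift.io", version="v1", plural="clusteroperators")

      for op in operators["items"]:
          conditions = {c["type"]: c["status"]
                        for c in op["status"]["conditions"]}
          healthy = (conditions.get("Available") == "True"
                     and conditions.get("Degraded") != "True")
          print(f"{op['metadata']['name']}: "
                f"{'Healthy' if healthy else 'CHECK'}")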