Install and Configure Red Hat OpenShift Data Foundation on Oracle Cloud Infrastructure

Introduction

Red Hat OpenShift Data Foundation is a fully integrated, software-defined storage solution designed to provide scalable, persistent storage for containerized applications running on the Red Hat OpenShift Container Platform. It simplifies the management of storage across Kubernetes environments by providing block, file, and object storage options through unified interfaces.

When deployed on Oracle Cloud Infrastructure (OCI), Red Hat OpenShift Data Foundation leverages OCI’s high-performance, low-latency infrastructure to provide reliable, highly available storage for modern workloads. Red Hat OpenShift Data Foundation uses OCI Block Volumes and integrates seamlessly with the Red Hat OpenShift platform to deliver data durability, fault tolerance, and high availability.

ODF High Level Architecture

This tutorial will walk you through the process of setting up Red Hat OpenShift Data Foundation on Oracle Cloud Infrastructure.

By the end of this tutorial, you will have a solid understanding of how to implement Red Hat OpenShift Data Foundation on Oracle Cloud Infrastructure and optimize it for your containerized workloads.

The following image illustrates the workflow.

ODF Installation Steps

Objectives

Install and configure Red Hat OpenShift Data Foundation on Oracle Cloud Infrastructure.

Prerequisites

Task 1: Create OpenShift Cluster with Data Foundation

This task provides the steps to create a Red Hat OpenShift cluster with Data Foundation using the Assisted Installer.

  1. Log in to the Red Hat Hybrid Cloud Console with a registered username. If you are a new user, create an account.

  2. Click OpenShift, Clusters, and Create cluster.

  3. Select Interactive.

  4. In Cluster details, enter the following information and click Next.

    • Cluster name: Enter the name of the cluster.
    • Base Domain: Enter the DNS domain name used for name resolution.
    • OpenShift version: We have used OpenShift version 4.17.0. Select version 4.13 or later.
    • CPU architecture: Keep the default value (x86_64).
    • Select Oracle Cloud Infrastructure (Requires a customer manifest).

    OCI Platform Integration

  5. In Cluster details, select Install OpenShift Data Foundation and click Next.

    ODF Selection

  6. In Host Discovery, click Add hosts and follow the steps:

    1. From the Provisioning type drop-down menu, select Minimal image file.

    2. Download an ISO that fetches content on boot.

    3. In SSH public key, enter the key value.

    4. Click Generate Discovery ISO.

    5. Once the ISO is ready for download, click Download Discovery ISO.

  7. Log in to the OCI Console with the required privileges to interact with OCI Object Storage and perform the following steps to obtain a Pre-Authenticated Request (PAR) URL.

    1. Navigate to Storage and Buckets.

    2. Create a bucket or use an existing bucket.

    3. Upload the ISO generated from Task 1.6.

    4. Create a Pre-Authenticated Request (PAR) URL and save it.

    Pre-Authenticated URL
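As an alternative to the Console, the upload and PAR creation can be sketched with the OCI CLI; the bucket name, file name, and expiry time below are illustrative placeholders, not values from this tutorial:

```shell
# Upload the discovery ISO to an existing bucket (bucket name is a placeholder).
oci os object put \
  --bucket-name openshift-discovery \
  --file ./discovery_image.iso \
  --name discovery_image.iso

# Create a read-only Pre-Authenticated Request (PAR) for the uploaded object.
oci os preauth-request create \
  --bucket-name openshift-discovery \
  --name discovery-iso-par \
  --object-name discovery_image.iso \
  --access-type ObjectRead \
  --time-expires "2025-12-31T00:00:00Z"
```

The create command returns an access-uri; prefix it with your regional Object Storage endpoint (https://objectstorage.<region>.oraclecloud.com) to form the full PAR URL used later in the Terraform stack.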

Task 2: Create OCI Resources for OpenShift

This task creates the necessary OCI resources for OpenShift, which include the control plane and compute VMs/BMs, block storage, DNS zones, and load balancers.

  1. Download the GitHub repository oci-openshift zip bundle.

  2. Log in to the OCI Console and navigate to Developer Services, Resource Manager, Stacks and click Create Stack.

  3. Upload the zip file, enter the required information and click Next.

    OCI Quickstart Terraform Stack

  4. In Configure variable, enter the following information.

    • cluster_name: Enter the exact name from Task 1.4.
    • compartment_ocid: This is auto-populated but change the compartment ID if needed. This is where the OpenShift cluster resources will be deployed.
    • compute_boot_size: The size of the boot volume of each compute node in GBs.
    • compute_boot_volume_vpus_per_gb: The number of volume performance units (VPUs) that will be applied to this volume per GB of each compute node. It is recommended to keep the default value.
    • compute_count: The number of compute nodes in the cluster (worker nodes).
    • compute_memory: The amount of memory available for the shape of each compute node, in GBs. The minimum memory required for Red Hat OpenShift Data Foundation cluster is 27 GB. Update the value.
    • compute_ocpu: The number of OCPUs available for the shape of each compute node. The minimum OCPU required for Red Hat OpenShift Data Foundation cluster is 10. Update the value.
    • compute_shape: Compute shape of the compute nodes. The default shape is VM.Standard.E4.Flex.

    Compute VM Specification

    • control_plane_boot_size: The size of the boot volume of each control_plane node in GBs.
    • control_plane_boot_volume_vpus_per_gb: The number of VPUs that will be applied to this volume per GB of each control_plane node. Keep the default value.
    • control_plane_count: The number of control_plane nodes in the cluster.
    • control_plane_memory: The amount of memory available for the shape of each control_plane node, in GBs.
    • control_plane_ocpu: The number of OCPUs available for the shape of each control_plane node.
    • control_plane_shape: Compute shape of the control_plane nodes.
    • enable_private_dns: Select this option if OpenShift will use private DNS. Deselect it if OpenShift will be integrated with public DNS.
    • load_balancer_shape_details_maximum_bandwidth_in_mbps: Bandwidth in Mbps that determines the maximum bandwidth.
    • load_balancer_shape_details_minimum_bandwidth_in_mbps: Bandwidth in Mbps that determines the total pre-provisioned bandwidth.
    • openshift_image_source_uri: Enter the Pre-Authenticated URL created in Task 1.7.
    • private_cidr: The IPv4 CIDR block for the private subnet of your OpenShift cluster.
    • region: Select OCI region.
    • tenancy_ocid: This is auto-populated. Keep the default value.
    • vcn_cidr: The IPv4 CIDR blocks for the VCN of your OpenShift cluster.
    • vcn_dns_label: A DNS label for the VCN.
    • zone_dns: Enter the base domain supplied on the Create Cluster page.

    zone_dns
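For reference, a minimal set of variable values might look like the following sketch; every OCID, name, and size here is an illustrative placeholder and must match your own Task 1 inputs:

```hcl
cluster_name               = "ocp-odf-demo"
compartment_ocid           = "ocid1.compartment.oc1..<unique_id>"
compute_count              = 3
compute_memory             = 32   # at least 27 GB is required for ODF
compute_ocpu               = 10   # minimum OCPU count for ODF
compute_shape              = "VM.Standard.E4.Flex"
control_plane_count        = 3
openshift_image_source_uri = "https://objectstorage.<region>.oraclecloud.com/p/<par_token>/n/<namespace>/b/<bucket>/o/discovery_image.iso"
zone_dns                   = "example.com"
```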

  5. Click Run Apply and monitor the progress of the stack.

Task 3: Create Additional Storage for Red Hat OpenShift Data Foundation

This task provides instructions to create the additional OCI Block Storage required for the Red Hat OpenShift Data Foundation storage architecture.

  1. Go to the OCI Console and navigate to Storage and Block Volumes.

  2. For multi-AD regions, create an OCI Block Volume in each availability domain (AD) based on the worker node placement. For a single-AD region, create the volumes in the default AD. Use the same block volume size for all worker nodes and configure VPUs that meet your storage demands.

  3. Attach the block volumes to the respective worker nodes.

    Compute nodes and Availability Domain

    Block volumes and Availability Domain
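The create-and-attach steps above can be sketched with the OCI CLI; all OCIDs, the AD name, and the sizes below are placeholders, and the attachment type is an assumption you should match to your instance configuration:

```shell
# Create a block volume in a specific AD (all OCIDs and values are placeholders).
oci bv volume create \
  --compartment-id "ocid1.compartment.oc1..<unique_id>" \
  --availability-domain "Uocm:PHX-AD-1" \
  --display-name "odf-data-volume-1" \
  --size-in-gbs 512 \
  --vpus-per-gb 10

# Attach the volume to the worker node instance in the same AD.
oci compute volume-attachment attach \
  --type paravirtualized \
  --instance-id "ocid1.instance.oc1..<unique_id>" \
  --volume-id "ocid1.volume.oc1..<unique_id>"
```

Repeat the pair of commands for each worker node, keeping the volume size identical across nodes.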

Task 4: Continue the Cluster Installation Process

In this task, we will continue the cluster creation task started in Red Hat Hybrid Cloud Console.

  1. Log in to the ongoing cluster creation wizard in Red Hat Hybrid Cloud Console.

  2. You will notice all the compute and control plane VMs appearing in the Host discovery section.

  3. Select the compute nodes and change the Role to Worker.

  4. Select the control plane nodes and change the Role to Control Plane node.

  5. Confirm that all node statuses show Ready, then click Next.

    Nodes host discovery

  6. In the Storage section, the compute nodes will show their ODF Usage status.

    Storage view

  7. Click Next and keep the default values in the Networking section.

  8. In the Custom manifests section, follow the steps:

    1. Go to the OCI Console and open the stack Job details.

    2. From the Outputs section, copy the value of oci_ccm_config and paste it in your Integrated Development Environment (IDE).

      Stack output

    3. You will need to capture the compartment ID, VCN ID, subnet ID, and security list IDs from the output.

    4. Extract the zip file which was downloaded in Task 2. Find and update the oci-ccm.yml, oci-csi.yml, and other machineconfig files.

      oci-ccm-output

    5. Under the oci-ccm-04-cloud-controller-manager-config.yaml section, update the oci-ccm.yml file values.

      oci-ccm.yml

    6. Under the oci-csi-01-config.yaml section, update the oci-csi.yml file values.

      oci-csi.yml

    7. Upload the manifest files updated in the previous steps, along with the machineconfig files without any modification.

      Manifests
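For orientation, the values to update in oci-ccm.yml generally follow the upstream oci-cloud-controller-manager configuration layout; the snippet below is a hedged sketch with placeholder OCIDs that you replace using the oci_ccm_config stack output, not the exact contents of the generated file:

```yaml
# Illustrative oci-ccm.yml values; substitute each OCID with the
# corresponding value captured from the oci_ccm_config stack output.
useInstancePrincipals: true
compartment: ocid1.compartment.oc1..<unique_id>
vcn: ocid1.vcn.oc1..<unique_id>
loadBalancer:
  subnet1: ocid1.subnet.oc1..<unique_id>
  securityListManagementMode: Frequent
  securityLists:
    ocid1.subnet.oc1..<unique_id>: ocid1.securitylist.oc1..<unique_id>
```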

  9. Review the details and create the cluster.

  10. Once the installation is successful, obtain the OpenShift Web Console URL and kubeadmin credentials.

    OpenShift console
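With those credentials you can also verify the cluster from a terminal; the API URL and password below are placeholders built from your cluster name and base domain:

```shell
# Log in with the kubeadmin credentials obtained from the wizard.
oc login https://api.<cluster_name>.<base_domain>:6443 \
  -u kubeadmin -p '<kubeadmin_password>'

# Confirm that all control plane and worker nodes report Ready.
oc get nodes
```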

Task 5: Validate the OpenShift StorageClasses

  1. Log in to the OpenShift Console using kubeadmin credentials.

  2. Validate the OpenShift StorageClasses.

    storage classes

    You can create PersistentVolumeClaims from any of the StorageClasses created by the Red Hat OpenShift Data Foundation operator and use them with your containerized applications.
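As a sketch, a claim against one of the ODF StorageClasses might look like the following; ocs-storagecluster-ceph-rbd is the name typically created by the operator for block storage, but confirm the exact StorageClass names shown in your console:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  # Confirm the StorageClass name in your cluster before using it.
  storageClassName: ocs-storagecluster-ceph-rbd
  resources:
    requests:
      storage: 50Gi
```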

Next Steps

Deploying Red Hat OpenShift Data Foundation on Oracle Cloud Infrastructure (OCI) delivers a scalable, resilient, and high-performance storage solution for containerized workloads. Red Hat OpenShift Data Foundation ensures robust data protection and high availability, offering a reliable software-defined storage platform that efficiently supports your applications.

Additionally, Red Hat OpenShift Data Foundation enables applications to directly consume block, file, and object storage through PersistentVolumeClaims and StorageClasses, abstracting away the underlying storage complexity while delivering seamless access to the various storage types.

Acknowledgments

More Learning Resources

Explore other labs on docs.oracle.com/learn or access more free learning content on the Oracle Learning YouTube channel. Additionally, visit education.oracle.com/learning-explorer to become an Oracle Learning Explorer.

For product documentation, visit Oracle Help Center.