OCI Resources Needed for Using the Agent-based Installer

Learn about the resources you need to create in your OCI tenancy for the installation of an OpenShift Container Platform cluster using the Agent-based Installer.

The Agent-based Installer workflow begins in the Red Hat Hybrid Cloud Console, where you download the openshift-install binary used to create the discovery ISO image needed to provision Compute instances in OCI. After you create the discovery image, the workflow moves to the OCI Console for resource provisioning.

When using the Agent-based Installer, you need the resources discussed in Prerequisites in this documentation, plus the OCI infrastructure resources described in this topic.

Optionally, you can create most of these resources before you begin. The exception is the Compute service custom image, which depends on the discovery ISO image. You need the following resources:

  • Compartment
  • VCN
  • Load balancers
  • DNS records for the load balancers
  • Tag namespace and defined tags for the cluster's compute nodes
  • IAM dynamic group for the cluster's compute nodes
  • IAM policies for the dynamic group
  • Custom image for Compute instance provisioning, created from the OpenShift discovery ISO image

Compartment

Compartments enable you to organize and isolate cloud resources. We recommend creating a new compartment for the OpenShift cluster. See Creating a Compartment for instructions.
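
As a sketch only, the compartment can also be created with the OCI Terraform provider. The names below are illustrative, not prescribed:

```terraform
variable "parent_compartment_ocid" {}

resource "oci_identity_compartment" "openshift" {
  compartment_id = var.parent_compartment_ocid   # parent compartment or tenancy OCID
  name           = "openshift-cluster"           # illustrative name
  description    = "Compartment for OpenShift Container Platform resources"
}
```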

VCN and Networking Resources

OpenShift compute nodes, load balancers, and other resources use an OCI Virtual Cloud Network (VCN) to connect. See Creating a VCN for instructions on creating a VCN.

You need IAM permissions to manage VCNs and the related Networking resources in the virtual-network-family Aggregate Resource-Type. See Managing Access to Resources for details. Note that Networking permissions are discussed in the Core Services section.

Optionally, you can use network security groups (NSGs) in your VCN to control access. See Network Security Groups for details on using NSGs to control network traffic and access. Note that the NSG must be in the same compartment as the other OpenShift infrastructure resources.

See the OpenShift on OCI Terraform script in GitHub for VCN and subnet configuration details. Search the file for the following resources: oci_core_vcn, oci_core_internet_gateway, oci_core_nat_gateway, oci_core_route_table, oci_core_subnet, and oci_core_network_security_group.
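
A minimal, hypothetical Terraform sketch of a VCN with an internet gateway and one public subnet follows. The CIDR blocks and display names are placeholders; match them to your own network plan:

```terraform
variable "compartment_ocid" {}

resource "oci_core_vcn" "openshift" {
  compartment_id = var.compartment_ocid
  cidr_blocks    = ["10.0.0.0/16"]   # illustrative CIDR
  display_name   = "openshift-vcn"
  dns_label      = "openshift"
}

resource "oci_core_internet_gateway" "public" {
  compartment_id = var.compartment_ocid
  vcn_id         = oci_core_vcn.openshift.id
  display_name   = "openshift-igw"
}

resource "oci_core_subnet" "public" {
  compartment_id = var.compartment_ocid
  vcn_id         = oci_core_vcn.openshift.id
  cidr_block     = "10.0.0.0/24"     # illustrative CIDR
  display_name   = "openshift-public"
}
```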

Load Balancers

The Red Hat OpenShift cluster requires two load balancers: one for internal network traffic and one for external traffic. See Creating a Load Balancer for instructions. See the OpenShift on OCI Terraform script in GitHub for load balancer configuration details. Search the file for the following resources: oci_load_balancer_load_balancer, oci_load_balancer_backend_set, and oci_load_balancer_listener.

Internal Load Balancer

Use the following information to configure the load balancer used for internal traffic:
Port Back-end machines (pool members) Description
6443 Bootstrap and control plane Kubernetes API server
22623 Bootstrap and control plane Machine config server
22624 Bootstrap and control plane Machine config server

External Load Balancer

Use the following information to configure the load balancer used for external traffic:

Port Back-end machines (pool members) Description
80 The machines that run the Ingress Controller pods (compute nodes, by default) HTTP traffic
443 The machines that run the Ingress Controller pods (compute nodes, by default) HTTPS traffic
6443 Bootstrap and control plane Kubernetes API server
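
The tables above map to the Terraform resources named earlier. A minimal, hypothetical sketch for the external load balancer with one listener (port 443) might look like this; each remaining table row would get its own backend set and listener:

```terraform
variable "compartment_ocid" {}
variable "public_subnet_ocid" {}

resource "oci_load_balancer_load_balancer" "external" {
  compartment_id = var.compartment_ocid
  display_name   = "openshift-external"
  shape          = "flexible"
  subnet_ids     = [var.public_subnet_ocid]
  is_private     = false   # the internal load balancer would set this to true

  shape_details {
    minimum_bandwidth_in_mbps = 10    # illustrative sizing
    maximum_bandwidth_in_mbps = 100
  }
}

# One backend set per row of the table; port 443 is shown here.
resource "oci_load_balancer_backend_set" "ingress_https" {
  load_balancer_id = oci_load_balancer_load_balancer.external.id
  name             = "ingress-https"
  policy           = "LEAST_CONNECTIONS"

  health_checker {
    protocol = "TCP"
    port     = 443
  }
}

resource "oci_load_balancer_listener" "ingress_https" {
  load_balancer_id         = oci_load_balancer_load_balancer.external.id
  name                     = "ingress-https"
  default_backend_set_name = oci_load_balancer_backend_set.ingress_https.name
  port                     = 443
  protocol                 = "TCP"
}
```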

DNS Records

Create DNS records for routing internal and external OpenShift network traffic. See Zones for instructions on creating and managing DNS zones. Each DNS record must resolve to the IP address of the corresponding load balancer: the public IP address for external records and the private IP address for internal records.

See the OpenShift on OCI Terraform script in GitHub for DNS configuration details. Search the file for the following resources: oci_dns_zone and oci_dns_rrset.

Kubernetes API: api.<cluster_name>.<base_domain>. (use the external load balancer IP)

A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by clients external to the cluster, and from all the nodes within the cluster.

Kubernetes API: api-int.<cluster_name>.<base_domain>. (use the internal load balancer IP)

A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. These records must be resolvable from all the nodes within the cluster.

Routes: *.apps.<cluster_name>.<base_domain>. (use the external load balancer IP)

A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods, which run on the compute machines by default. These records must be resolvable by clients external to the cluster, and from all the nodes within the cluster.

For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console.
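
As a sketch, the records above can also be managed with the oci_dns_zone and oci_dns_rrset Terraform resources. The zone name is a placeholder for <cluster_name>.<base_domain>, and only the api record is shown:

```terraform
variable "compartment_ocid" {}
variable "external_lb_ip" {}

resource "oci_dns_zone" "cluster" {
  compartment_id = var.compartment_ocid
  name           = "mycluster.example.com"   # <cluster_name>.<base_domain>, illustrative
  zone_type      = "PRIMARY"
}

# One rrset per record in the list above; the api A record is shown here.
resource "oci_dns_rrset" "api" {
  zone_name_or_id = oci_dns_zone.cluster.id
  domain          = "api.mycluster.example.com"
  rtype           = "A"

  items {
    domain = "api.mycluster.example.com"
    rtype  = "A"
    rdata  = var.external_lb_ip   # public IP of the external load balancer
    ttl    = 300
  }
}
```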

Defined Tags

Defined tags are required for grouping all master nodes (control plane nodes) and worker nodes.

Using the Tagging service, create the following Tagging resources in the compartment you're using to create the OpenShift cluster:

  • Tag namespace: openshift_tags
  • Defined tag name: instance_role
  • Defined tag values for instance_role:
    • master
    • worker

See Tags and Tag Namespace Concepts for instructions. See the OpenShift on OCI Terraform script in GitHub for tagging details. Search the file for the following resources: oci_identity_tag_namespace and oci_identity_tag.
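
A hypothetical Terraform sketch of the namespace and defined tag described above, with the tag restricted to the two expected values:

```terraform
variable "compartment_ocid" {}

resource "oci_identity_tag_namespace" "openshift" {
  compartment_id = var.compartment_ocid
  name           = "openshift_tags"
  description    = "Tags for OpenShift cluster nodes"
}

resource "oci_identity_tag" "instance_role" {
  tag_namespace_id = oci_identity_tag_namespace.openshift.id
  name             = "instance_role"
  description      = "Role of the instance in the OpenShift cluster"

  # Restrict the tag to the two values the cluster nodes use.
  validator {
    validator_type = "ENUM"
    values         = ["master", "worker"]
  }
}
```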

Dynamic Groups

Dynamic groups enable you to group Oracle Cloud Infrastructure compute instances as "principal" actors (similar to user groups). You create IAM policies (discussed in the following section) to define rules for dynamic groups. See Managing Dynamic Groups for instructions. See the OpenShift on OCI Terraform script in GitHub for dynamic group configuration details. Search the file for the following resource: oci_identity_dynamic_group.
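
As a sketch, a dynamic group selecting the control plane instances by their defined tag might look like the following. Dynamic groups are created at the tenancy level, and the matching rule below assumes the openshift_tags namespace and instance_role tag described earlier:

```terraform
variable "tenancy_ocid" {}
variable "compartment_ocid" {}

# Select instances in the cluster compartment tagged instance_role = master.
resource "oci_identity_dynamic_group" "master" {
  compartment_id = var.tenancy_ocid   # dynamic groups live in the tenancy
  name           = "master"
  description    = "OpenShift control plane instances"
  matching_rule  = "All {instance.compartment.id = '${var.compartment_ocid}', tag.openshift_tags.instance_role.value = 'master'}"
}
```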

Dynamic Group Policies

IAM policies are required for the master dynamic group. See Managing Dynamic Groups and How Policies Work for instructions. See the OpenShift on OCI Terraform script in GitHub for dynamic group policy details. Search the file for the following resource: oci_identity_policy.

The policy statements are formatted as follows:


      Allow dynamic-group master to manage volume-family in compartment id <compartment_ocid>
      Allow dynamic-group master to manage instance-family in compartment id <compartment_ocid>
      Allow dynamic-group master to manage security-lists in compartment id <compartment_ocid>
      Allow dynamic-group master to use virtual-network-family in compartment id <compartment_ocid>
      Allow dynamic-group master to manage load-balancers in compartment id <compartment_ocid>
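
The five statements above can be expressed as a single hypothetical oci_identity_policy resource:

```terraform
variable "compartment_ocid" {}

resource "oci_identity_policy" "master" {
  compartment_id = var.compartment_ocid
  name           = "openshift-master"   # illustrative name
  description    = "Policies for the OpenShift control plane dynamic group"

  statements = [
    "Allow dynamic-group master to manage volume-family in compartment id ${var.compartment_ocid}",
    "Allow dynamic-group master to manage instance-family in compartment id ${var.compartment_ocid}",
    "Allow dynamic-group master to manage security-lists in compartment id ${var.compartment_ocid}",
    "Allow dynamic-group master to use virtual-network-family in compartment id ${var.compartment_ocid}",
    "Allow dynamic-group master to manage load-balancers in compartment id ${var.compartment_ocid}",
  ]
}
```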

Custom Image for OpenShift Container Platform Instances

To create cluster nodes for OpenShift Container Platform using the Agent-based Installer, you need a Compute service custom image that contains the Red Hat software needed to run the cluster nodes. To create this image, you need to do the following:

  1. Create a discovery ISO image locally, using the openshift-install binary, available from the Red Hat Hybrid Cloud Console. See Creating configuration files for installing a cluster on OCI for instructions.
  2. Upload your discovery ISO image to OCI Object Storage. See Creating an Object Storage Bucket and Uploading an Object Storage Object to a Bucket for instructions.
  3. Create a custom image in the Compute service based on the discovery ISO. See Managing Custom Images for instructions.
Important

When creating your custom image, you must clear the 'BIOS' capability so that this option is not enabled for your image. See Configuring Image Capabilities for Custom Images in the Managing Custom Images documentation for details.
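
As a sketch under stated assumptions, the uploaded ISO object can be imported as a custom image with the oci_core_image Terraform resource. The bucket, namespace, and object names are placeholders, and clearing the BIOS capability is a separate step performed after import:

```terraform
variable "compartment_ocid" {}
variable "bucket_name" {}
variable "bucket_namespace" {}

# Import the uploaded discovery ISO object as a custom image.
# Clearing the BIOS capability, as described above, is done afterward
# in the Console (Configuring Image Capabilities for Custom Images).
resource "oci_core_image" "discovery" {
  compartment_id = var.compartment_ocid
  display_name   = "openshift-discovery"
  launch_mode    = "PARAVIRTUALIZED"

  image_source_details {
    source_type    = "objectStorageTuple"
    bucket_name    = var.bucket_name
    namespace_name = var.bucket_namespace
    object_name    = "agent.iso"   # name of the uploaded discovery ISO object
  }
}
```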

Agent Configuration Files

The Agent-based Installer requires two configuration files, agent-config.yaml and install-config.yaml, which you must edit before generating the discovery ISO image. See Creating configuration files for installing a cluster on OCI in the Red Hat documentation for details on creating and configuring these files.

After creating the agent-config.yaml and install-config.yaml files, save them locally. Your local directory structure must be as follows:

.
└── local_machine_work_directory/
    ├── agent-config.yaml
    ├── install-config.yaml
    └── openshift/
        ├── oci-csi.yml
        └── oci-ccm.yml
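
For orientation only, a minimal install-config.yaml for a cluster on the external OCI platform might look like the following. All values are placeholders, and the Red Hat documentation referenced above is authoritative for the required fields:

```yaml
apiVersion: v1
baseDomain: example.com          # your <base_domain>
metadata:
  name: mycluster                # your <cluster_name>
controlPlane:
  name: master
  replicas: 3
compute:
- name: worker
  replicas: 3
platform:
  external:
    platformName: oci            # identifies OCI as the external platform
    cloudControllerManager: External
pullSecret: '<pull_secret>'      # from the Red Hat Hybrid Cloud Console
sshKey: '<ssh_public_key>'
```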

Firewall Configuration

Ensure that your firewall is configured to grant access to the sites that OpenShift Container Platform requires. See Configuring your firewall for OpenShift Container Platform in the Red Hat documentation for details on setting up your firewall's allowlist for OpenShift Container Platform.