OCI Resources for the Agent-based Installer for Manual Installation

Manually create the required OCI resources to install an OpenShift Container Platform cluster using the Agent-based Installer.

Tip

If you're provisioning resources with Terraform, as described in Installing a Cluster with Agent-based Installer Using Terraform, you can skip this topic.

The Agent-based Installer workflow to configure the infrastructure begins in the Red Hat Hybrid Cloud Console, where you download the openshift-install binary used to create the discovery ISO image needed to provision Compute instances in OCI. See the OpenShift Installer and CLI Downloads page to download the binary.

Before you can generate the discovery ISO image (used to provision Compute instances in OCI), you need to provision the OCI infrastructure. That infrastructure supplies the OCIDs, VCN CIDR ranges, control plane and compute instance counts, and other inputs required to create the agent-config.yaml, install-config.yaml, and openshift/custom_manifest.yaml Agent-based installation files.

After generating the image, you switch to the OCI Console to provision the remaining infrastructure and continue with cluster deployment.

When using the Agent-based Installer, you need the resources discussed in the Prerequisites topic and the following infrastructure components:

  • Compartment
  • VCN
  • Load balancers
  • DNS records for the load balancers
  • Tag namespace and defined tags for the cluster's compute nodes
  • IAM dynamic group for the cluster's compute nodes
  • IAM policies for the dynamic group
  • Custom image for Compute instance provisioning, created from the OpenShift discovery ISO image.
Tip

You can create most of these resources before you begin, except for the custom Compute image, which requires the discovery ISO image.

Compartment

Compartments enable you to organize and isolate cloud resources. We recommend creating a new compartment for the OpenShift cluster. For more information, see Creating a Compartment.
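
If you prefer to manage the compartment with Terraform instead of the Console, a minimal sketch might look like the following. The compartment name, description, and variable names are illustrative assumptions.

resource "oci_identity_compartment" "openshift" {
  compartment_id = var.tenancy_ocid                 # parent compartment (here, the tenancy root)
  name           = "openshift-cluster"
  description    = "Resources for the OpenShift Container Platform cluster"
  enable_delete  = true                             # allows the compartment to be deleted on destroy
}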

VCN and Networking Resources

OpenShift compute nodes, load balancers, and other resources use an OCI Virtual Cloud Network (VCN) to connect. See Creating a VCN for instructions on creating a VCN.

You need IAM permissions to manage VCNs and the related Networking resources in the virtual-network-family Aggregate Resource-Type. See Managing Access to Resources for details. Note that Networking permissions are discussed in the Core Services section.

Optionally, you can use network security groups (NSGs) in your VCN to control access. See Network Security Groups for details on using NSGs to control network traffic and access. Note that the NSG must be in the same compartment as the other OpenShift infrastructure resources.

See the Terraform Defined Resources for OpenShift on OCI page on GitHub for VCN and subnet configuration details. For specific resource definitions, access the relevant folders in the shared_modules directory and search for the following resources: oci_core_vcn, oci_core_internet_gateway, oci_core_nat_gateway, oci_core_route_table, oci_core_subnet, and oci_core_network_security_group.
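
For reference, a minimal Terraform sketch of the core networking pieces might look like the following. The CIDR ranges, display names, and DNS labels are illustrative assumptions and should align with the values you later supply in the install-config.yaml and agent-config.yaml files.

resource "oci_core_vcn" "openshift" {
  compartment_id = var.compartment_ocid
  display_name   = "openshift-vcn"
  cidr_blocks    = ["10.0.0.0/16"]
  dns_label      = "openshiftvcn"
}

resource "oci_core_internet_gateway" "openshift" {
  compartment_id = var.compartment_ocid
  vcn_id         = oci_core_vcn.openshift.id
  display_name   = "openshift-igw"
}

resource "oci_core_subnet" "public" {
  compartment_id = var.compartment_ocid
  vcn_id         = oci_core_vcn.openshift.id
  cidr_block     = "10.0.0.0/20"
  display_name   = "openshift-public"
  dns_label      = "public"
}

The NAT gateway, route tables, and network security groups named above follow the same pattern in the shared_modules examples.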

Load Balancers

The Red Hat OpenShift cluster requires load balancers for internal network traffic and for external traffic: an internal load balancer, an API load balancer, and an Apps load balancer, described in the following sections. See Creating a Load Balancer for instructions. For high-level details on load balancer configurations, refer to the Terraform Defined Resources for OpenShift on OCI page on GitHub. For specific resource definitions, access the relevant folder in the shared_modules directory and search for the following resources: oci_load_balancer_load_balancer, oci_load_balancer_backend_set, and oci_load_balancer_listener. A minimal Terraform sketch follows the load balancer descriptions below.

Internal Load Balancer

Use the following information to configure the load balancer used for internal traffic:
Port     Back-end machines (pool members)     Description
6443     Bootstrap and control plane          Kubernetes API server
22623    Bootstrap and control plane          Machine config server
22624    Bootstrap and control plane          Machine config server

API Load Balancer

Used for Kubernetes API server traffic. It can be public or private. Use the following information to configure the API load balancer:
Port     Back-end machines (pool members)      Description
6443     Bootstrap and control plane nodes     Kubernetes API server traffic (HTTPS)
22624    Bootstrap and control plane nodes     Used by worker nodes to download ignition configs when joining the cluster.
Note

Port 22624 is required for adding new worker nodes to the cluster. Ensure it's open and routed correctly on the external (not internal) API load balancer.

Apps Load Balancer

Used for application ingress traffic (for example, user apps, OpenShift web console). It can be public or private. Use the following information to configure the Apps load balancer:
Port     Back-end machines (pool members)                                            Description
80       Nodes running Ingress Controller pods (usually compute or worker nodes)     HTTP traffic
443      Nodes running Ingress Controller pods (usually compute or worker nodes)     HTTPS traffic
Note

You can configure the load balancers to be either public or private depending on the networking and security requirements.
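
The following minimal Terraform sketch shows how the load balancer, backend set, and listener resources named earlier fit together, using the API listener on port 6443 as an example. The names, shape, bandwidth values, and subnet reference are illustrative assumptions.

resource "oci_load_balancer_load_balancer" "api" {
  compartment_id = var.compartment_ocid
  display_name   = "openshift-api"
  shape          = "flexible"
  subnet_ids     = [oci_core_subnet.public.id]
  is_private     = false   # set to true for the internal load balancer

  shape_details {
    minimum_bandwidth_in_mbps = 10
    maximum_bandwidth_in_mbps = 100
  }
}

resource "oci_load_balancer_backend_set" "api" {
  load_balancer_id = oci_load_balancer_load_balancer.api.id
  name             = "openshift-api"
  policy           = "LEAST_CONNECTIONS"

  health_checker {
    protocol = "TCP"
    port     = 6443
  }
}

resource "oci_load_balancer_listener" "api" {
  load_balancer_id         = oci_load_balancer_load_balancer.api.id
  name                     = "openshift-api"
  default_backend_set_name = oci_load_balancer_backend_set.api.name
  port                     = 6443
  protocol                 = "TCP"
}

Each port listed in the preceding tables requires its own backend set and listener on the appropriate load balancer.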

DNS Records

Create DNS records for routing internal and external OpenShift network traffic. Depending on your networking and security requirements, create either a public DNS zone, a private DNS zone, or both. A private DNS zone is only resolvable within Oracle networks (such as your VCN). A public DNS zone enables external access.

After creating the zones, add DNS records for the following hostnames:
  • api.<cluster_name>.<base_domain>
  • api-int.<cluster_name>.<base_domain>
  • *.apps.<cluster_name>.<base_domain>

Each DNS record must reference the IP address of the corresponding load balancer. If you create both public and private zones, the records in each zone must reference the same load balancers.

See Zones for instructions on creating and managing DNS zones.

For high-level DNS configuration details, see the Terraform Defined Resources for OpenShift on OCI page on GitHub. For specific resource definitions, access the relevant folder in the shared_modules directory and search for the following resources: oci_dns_zone and oci_dns_rrset.

The following list describes each required record, the load balancer it references, and how it must resolve:

  • Kubernetes API: api.<cluster_name>.<base_domain>. (use the API load balancer IP)

    A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by clients external to the cluster, and from all the nodes within the cluster.

  • Kubernetes API (internal): api-int.<cluster_name>.<base_domain>. (use the internal load balancer IP)

    A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. These records must be resolvable from all the nodes within the cluster.

  • Application Ingress: *.apps.<cluster_name>.<base_domain>. (use the Apps load balancer IP)

    A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods, which run on the compute machines by default. These records must be resolvable by clients external to the cluster, and from all the nodes within the cluster.

    For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console.
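
As a reference, the following minimal Terraform sketch creates a public zone and an A record for the external API endpoint. The zone name, TTL, and placeholder IP address are illustrative assumptions; repeat the record set for api-int (internal load balancer) and *.apps (Apps load balancer).

resource "oci_dns_zone" "cluster" {
  compartment_id = var.compartment_ocid
  name           = "<cluster_name>.<base_domain>"
  zone_type      = "PRIMARY"
}

resource "oci_dns_rrset" "api" {
  zone_name_or_id = oci_dns_zone.cluster.id
  domain          = "api.<cluster_name>.<base_domain>"
  rtype           = "A"

  items {
    domain = "api.<cluster_name>.<base_domain>"
    rtype  = "A"
    rdata  = "203.0.113.10"   # replace with the API load balancer IP address
    ttl    = 300
  }
}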

Defined Tags

Defined tags are required to group and identify all control plane and compute nodes.

Using the Tagging service, create two tag namespaces and define the required tags in the compartment you're using to create the OpenShift cluster:

  • Tag namespaces: openshift-tags and openshift-{cluster_name}
  • Defined tag names and values:
    • For openshift-tags: openshift-resource
    • For openshift-{cluster_name}:
      • instance-role: control_plane or compute (depending on the node type)
      • boot-volume-type: PARAVIRTUALIZED or ISCSI

These tags must be applied to all relevant resources during provisioning. For more information, see Resource Attribution Tags.

For more information, see Tags and Tag Namespace Concepts.

For high-level instructions specific to OpenShift on OCI, see the Terraform Defined Resources for OpenShift on OCI page on GitHub. For specific resource definitions, access the relevant folder in the shared_modules directory and search for the following resources: oci_identity_tag_namespace and oci_identity_tag.
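
A minimal Terraform sketch of the namespaces and defined tags described above might look like the following. The descriptions and variable names are illustrative assumptions.

resource "oci_identity_tag_namespace" "openshift_tags" {
  compartment_id = var.compartment_ocid
  name           = "openshift-tags"
  description    = "Tags shared by OpenShift cluster resources"
}

resource "oci_identity_tag" "openshift_resource" {
  tag_namespace_id = oci_identity_tag_namespace.openshift_tags.id
  name             = "openshift-resource"
  description      = "Identifies a resource that belongs to an OpenShift cluster"
}

resource "oci_identity_tag_namespace" "cluster" {
  compartment_id = var.compartment_ocid
  name           = "openshift-${var.cluster_name}"
  description    = "Tags specific to this OpenShift cluster"
}

resource "oci_identity_tag" "instance_role" {
  tag_namespace_id = oci_identity_tag_namespace.cluster.id
  name             = "instance-role"    # tag value per node: control_plane or compute
  description      = "Role of the cluster node"
}

resource "oci_identity_tag" "boot_volume_type" {
  tag_namespace_id = oci_identity_tag_namespace.cluster.id
  name             = "boot-volume-type" # tag value per node: PARAVIRTUALIZED or ISCSI
  description      = "Boot volume attachment type for the node"
}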

Dynamic Groups

Dynamic groups enable you to group Oracle Cloud Infrastructure (OCI) compute instances as "principal" actors (similar to user groups) for granting access through IAM policies.

You can create IAM policies (discussed in the following section) that reference these dynamic groups to control access to OCI resources. See Managing Dynamic Groups for instructions. For high-level details on dynamic group configuration, see the Terraform Defined Resources for OpenShift on OCI page on GitHub. For specific resource definitions, access the relevant folder in the shared_modules directory and search for the following resource: oci_identity_dynamic_group. A minimal Terraform sketch follows the list of values below.

Use the following values to define the dynamic group for control plane instances:
  • Dynamic group name:
    ${var.cluster_name}_control_plane_nodes
  • Compartment: Cluster compartment
  • Matching rule:
    all {
      instance.compartment.id = "${var.compartment_ocid}",
      tag.${var.op_openshift_tag_namespace}.${var.op_openshift_tag_instance_role}.value = "control_plane"
    }
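
Expressed as a Terraform resource, the dynamic group above might look like the following minimal sketch. The description and variable names are illustrative assumptions; note that in Terraform the oci_identity_dynamic_group resource takes the tenancy OCID as its compartment_id.

resource "oci_identity_dynamic_group" "openshift_control_plane_nodes" {
  compartment_id = var.tenancy_ocid
  name           = "${var.cluster_name}_control_plane_nodes"
  description    = "Control plane nodes for the ${var.cluster_name} OpenShift cluster"

  # Matching rule targeting tagged control plane instances in the cluster compartment.
  matching_rule = "all {instance.compartment.id = '${var.compartment_ocid}', tag.${var.op_openshift_tag_namespace}.${var.op_openshift_tag_instance_role}.value = 'control_plane'}"
}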

Dynamic Group Policies

The following IAM policies allow the OpenShift control plane dynamic group to access and manage OCI resources during cluster creation. See Managing Dynamic Groups and IAM Policies Overview for instructions. For high-level details on dynamic group policy configurations, see the Terraform Defined Resources for OpenShift on OCI page on GitHub. For specific resource definitions, access the relevant folder in the shared_modules directory and search for the following resource: oci_identity_policy. A Terraform sketch of the first policy follows the list below.

  • Control Plane Resource Access Policy: This policy allows the control plane nodes to manage core infrastructure resources. The dynamic group for control plane nodes is named ${var.cluster_name}_control_plane_nodes.
    • Policy Name: ${var.cluster_name}_control_plane_nodes
    • Compartment: Cluster compartment
    • Policy Statements:
      
      Allow dynamic-group ${oci_identity_dynamic_group.openshift_control_plane_nodes.name} to manage volume-family in compartment id ${var.compartment_ocid}
      
      Allow dynamic-group ${oci_identity_dynamic_group.openshift_control_plane_nodes.name} to manage instance-family in compartment id ${var.compartment_ocid}
      
      Allow dynamic-group ${oci_identity_dynamic_group.openshift_control_plane_nodes.name} to manage security-lists in compartment id ${var.compartment_ocid}
      
      Allow dynamic-group ${oci_identity_dynamic_group.openshift_control_plane_nodes.name} to use virtual-network-family in compartment id ${var.compartment_ocid}
      
      Allow dynamic-group ${oci_identity_dynamic_group.openshift_control_plane_nodes.name} to manage load-balancers in compartment id ${var.compartment_ocid}
      
      Allow dynamic-group ${oci_identity_dynamic_group.openshift_control_plane_nodes.name} to manage objects in compartment id ${var.compartment_ocid}
  • Cluster Resource Tagging Access Policy: This policy allows the control plane nodes to use the openshift-tags namespace to tag cluster resources.
    • Policy Name: ${var.cluster_name}_control_plane_nodes_tags
    • Compartment: Root compartment
    • Policy Statement:
      Allow dynamic-group ${oci_identity_dynamic_group.openshift_control_plane_nodes.name} to use tag-namespaces in tenancy
  • (Optional) Networking Access Policy: This policy is required only if networking components are in a different compartment than the cluster instances.
    • Policy Name: ${var.cluster_name}_control_plane_nodes_networking_access_policy

    • Compartment: Networking compartment
    • Policy Statements:
      
      Allow dynamic-group ${oci_identity_dynamic_group.openshift_control_plane_nodes.name} to manage security-lists in compartment id ${var.networking_compartment_ocid}
      
      Allow dynamic-group ${oci_identity_dynamic_group.openshift_control_plane_nodes.name} to manage virtual-network-family in compartment id ${var.networking_compartment_ocid}  
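
Expressed as a Terraform resource, the Control Plane Resource Access Policy above might look like the following minimal sketch; the description is an illustrative assumption, and the tagging and networking policies follow the same pattern.

resource "oci_identity_policy" "openshift_control_plane_nodes" {
  compartment_id = var.compartment_ocid
  name           = "${var.cluster_name}_control_plane_nodes"
  description    = "Allows the control plane nodes to manage cluster infrastructure"

  statements = [
    "Allow dynamic-group ${oci_identity_dynamic_group.openshift_control_plane_nodes.name} to manage volume-family in compartment id ${var.compartment_ocid}",
    "Allow dynamic-group ${oci_identity_dynamic_group.openshift_control_plane_nodes.name} to manage instance-family in compartment id ${var.compartment_ocid}",
    "Allow dynamic-group ${oci_identity_dynamic_group.openshift_control_plane_nodes.name} to manage security-lists in compartment id ${var.compartment_ocid}",
    "Allow dynamic-group ${oci_identity_dynamic_group.openshift_control_plane_nodes.name} to use virtual-network-family in compartment id ${var.compartment_ocid}",
    "Allow dynamic-group ${oci_identity_dynamic_group.openshift_control_plane_nodes.name} to manage load-balancers in compartment id ${var.compartment_ocid}",
    "Allow dynamic-group ${oci_identity_dynamic_group.openshift_control_plane_nodes.name} to manage objects in compartment id ${var.compartment_ocid}",
  ]
}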

Custom Image for OpenShift Container Platform Instances

To create cluster nodes for OpenShift Container Platform using the Agent-based Installer, you need a Compute service custom image that contains the Red Hat software needed to run the cluster nodes. To create this image, you need to do the following:

  1. Create a discovery ISO image locally, using the openshift-install binary, available from the Red Hat Hybrid Cloud Console. See Creating configuration files for installing a cluster on OCI (Red Hat documentation) for instructions.
  2. Upload your discovery ISO image to OCI Object Storage. See Creating an Object Storage Bucket and Uploading an Object Storage Object to a Bucket for instructions.
  3. Create a custom image in the Compute service based on the discovery ISO. See Managing Custom Images for instructions.
Important

When creating your custom image, you must clear the 'BIOS' capability so that this option is not enabled for your image. See Configuring Image Capabilities for Custom Images in the Managing Custom Images documentation for details.
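
A minimal Terraform sketch of steps 2 and 3 might look like the following. The bucket name, object name, launch mode, and variable names are illustrative assumptions, and the exact image import settings depend on your environment.

resource "oci_objectstorage_bucket" "openshift" {
  compartment_id = var.compartment_ocid
  namespace      = var.object_storage_namespace
  name           = "openshift-images"
}

resource "oci_objectstorage_object" "discovery_iso" {
  bucket    = oci_objectstorage_bucket.openshift.name
  namespace = var.object_storage_namespace
  object    = "agent.x86_64.iso"
  source    = "${path.module}/agent.x86_64.iso"   # discovery ISO generated by openshift-install
}

resource "oci_core_image" "openshift_discovery" {
  compartment_id = var.compartment_ocid
  display_name   = "openshift-discovery"
  launch_mode    = "PARAVIRTUALIZED"

  image_source_details {
    source_type    = "objectStorageTuple"
    namespace_name = var.object_storage_namespace
    bucket_name    = oci_objectstorage_bucket.openshift.name
    object_name    = oci_objectstorage_object.discovery_iso.object
  }

  # After the image is available, clear the BIOS capability as described in the Important note above.
}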

Agent Configuration Files

The Agent-based Installer requires two configuration files, agent-config.yaml and install-config.yaml, which you must edit before using the Agent-based Installer to generate the discovery ISO image. See Creating configuration files for installing a cluster on OCI (Red Hat documentation) for details.

After creating the agent-config.yaml and install-config.yaml files, save them locally. Your local directory structure must be as follows:

.
└── local_machine_work_directory/
    ├── agent-config.yaml
    ├── install-config.yaml
    └── openshift/
        └── manifest.yml

Firewall Configuration

Ensure that your firewall is configured to grant access to the sites that OpenShift Container Platform requires. See Configuring your firewall for OpenShift Container Platform (Red Hat documentation) for details on setting up your firewall's allowlist.

Set your firewall's allowlist to include the following URLs:
URL                  Port    Function
ghcr.io              443     Provides the container image for the Oracle Cloud Controller Manager (CCM) and Container Storage Interface (CSI).
registry.k8s.io      443     Provides the supporting Kubernetes container images for CCM and CSI.