Note:
- This tutorial requires access to Oracle Cloud. To sign up for a free account, see Get started with Oracle Cloud Infrastructure Free Tier.
- It uses example values for Oracle Cloud Infrastructure credentials, tenancy, and compartments. When completing your lab, substitute these values with ones specific to your cloud environment.
Accelerate Oracle Cloud Infrastructure Architect Professional Certification with Terraform
Introduction
Achieving Oracle Cloud Infrastructure (OCI) Architect Professional certification demands a deep understanding of Oracle Cloud and hands-on experience. This tutorial accelerates your journey by leveraging Infrastructure as Code (IaC) with Terraform.
Figure 1 below depicts the target OCI architecture to be deployed. The right side outlines a structured approach to the OCI Architect Professional Certification (2024) lab, divided into seven sequential steps. By leveraging pre-built Terraform code snippets tailored to the certification exam labs, repetitive tasks are automated, freeing up valuable time to concentrate on mastering essential OCI concepts. This approach not only accelerates lab completion but also fosters a deeper understanding of the platform.
Figure 1: OCI Architecture Professional Certification Lab 1 - Architecture and Manual vs Automated Process
Drawbacks of Manual Methods
Manual approaches to the OCI certification exam hands-on labs can be a significant hurdle. Some drawbacks are:
- Error Prone: Repetitive manual tasks increase the risk of human error, leading to typos, missed steps, and delays.
- Time Consuming: Complex manual workflows take longer to complete, hindering progress and limiting practice.
- Not Scalable: Manual steps require copying and pasting, introduce errors, and multiply the effort needed to correct them.
- Testing Bottleneck: Evaluating manual tasks is subjective, hindering accurate measurement of performance and readiness.
- Documentation Shortcomings: Dependence on informal human input leads to inconsistent practices and difficulty maintaining procedures.
- Beyond the Manual Pain:
  - Traditional methods create an all-or-nothing pressure. A single missed step can derail progress, adding unnecessary stress and obscuring true understanding.
  - These methods blur the line between essential skills and mandatory tasks. Valuable practice time is lost on tedious setup, hindering learning and mastery of OCI skills.
Master OCI Skills Faster: The Power of Automation
This tutorial shows how to accelerate OCI certification by automating tasks, focusing on core skills, and shortening practice time.
Key benefits:
- Focused Learning: Automate repetitive tasks, freeing up time to master core concepts and deepen understanding and knowledge retention.
- Practice On-Demand: Gain flexibility by choosing which part of the labs to automate, personalizing learning pace and accelerating readiness.
- Boost Efficiency: Reduce preparation time by up to 80% through automation.
Additional Benefits:
- Scalability and Cost-Effectiveness: Easily scale infrastructure up or down based on demand, resulting in cost savings.
- Reduced Errors: Eliminate human error and ensure infrastructure is always provisioned securely and correctly.
To expedite your OCI Architect Professional Certification (2024) journey, we will automate Lab 1: Oracle Cloud Infrastructure Architect Professional illustrated in Figure 1 using Terraform. This automated approach can be extended to other certification labs and real world cloud deployments.
Network Security Group (NSG) as Ingress Source to Another NSG instead of CIDR Block
This tutorial explores Network Security Groups in OCI and how they provide granular control over network traffic within a Virtual Cloud Network (VCN). Network Security Groups act as virtual firewalls, controlling network access to resources within a VCN, for example, OCI Compute instances, Kubernetes clusters, or databases. Network Security Groups offer greater flexibility than security lists by enabling control of traffic between resources rather than just between subnets.
Semi-Automated Hybrid Approach in 7 Steps
OCI Architect Professional Certification hands-on labs, while comprehensive, can be time-consuming. Consider Lab 1: Oracle Cloud Infrastructure Architect Professional, requiring at least 25 minutes to complete. By automating repetitive tasks like VCN, NSG, and virtual machine (VM) creation with Terraform, you can slash this time by up to 80%, focusing on core security concepts rather than tedious manual steps. The proposed semi-automated approach offers flexibility, allowing you to choose which tasks to automate.
Objectives
- Accelerate Oracle Cloud Infrastructure Architect Professional Certification preparation with Terraform.
Prerequisites
- Familiarity with Infrastructure as Code (IaC) principles and core Terraform features.
- Basic Terraform knowledge. Beginners should complete the Oracle Cloud Infrastructure Architect Professional course or a beginner guide such as Terraform for_each: A simple Tutorial with Examples.
- Use of OCI Cloud Shell, Oracle Resource Manager (ORM), or an IDE (for example, Visual Studio) with the Terraform plugin.
Task 1: Create a Virtual Cloud Network (VCN)
Manual Option:
Use the OCI VCN wizard to create core networking resources: VCN, internet gateway, route table, security list, public and private subnets. For more information, see Virtual Networking Quickstart.
1. Go to the OCI Console, navigate to Networking, Virtual Cloud Networks and create a new VCN.
2. Click VCN Information and note down the VCN, Private Subnet, and Public Subnet OCIDs (Figure 2).
3. Add the OCIDs to the input.auto.tfvars (or terraform.tfvars) file as depicted in the Deployment Options section.

By default, the create_vcn flag is off (manually create the VCN from the OCI Console). Collect the VCN OCID and subnet OCIDs and add them to the input.auto.tfvars file.
# Step 1a - Create VCN using VCN Wizard (create_vcn flag off)
create_vcn = false
# Copy the VCN OCID and the subnet (public, private) OCIDs from the console.
vcn_id = "REPLACE_CREATED_VCN_OCID_HERE"
private_subnet_id = "REPLACE_CREATED_PRIVATE_SUBNET_OCID_HERE"
public_subnet_id = "REPLACE_CREATED_PUBLIC_SUBNET_OCID_HERE"
Figure 2: Console VCN View to Collect VCN OCID, Public Subnet OCID, and Private Subnet OCID
Automated Option:
Create the VCN using the following steps.
1. To automatically create the VCN and all networking resources, set the create_vcn flag to true in the input.auto.tfvars file.
2. Specify the VCN, public subnet, and private subnet CIDR blocks, and the host name prefix.
# Step 1b - Create VCN using Terraform. Provide the CIDR Blocks for the
# VCN, Subnets and all other required input (host_name_prefix).
create_vcn = true

# Provide the VCN & Subnet CIDR Blocks as well as host name prefix
vcn_cidr_block            = "10.0.0.0/16"
public_subnet_cidr_block  = "10.0.0.0/24"
private_subnet_cidr_block = "10.0.1.0/24"
host_name_prefix          = "phxapl4"
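For reference, the VCN creation gated by the create_vcn flag can be expressed as conditionally created resources. The following is a minimal sketch consistent with the resource names referenced later in this tutorial (oci_core_vcn.this and oci_core_subnet.My-Public-Subnet); the display names are illustrative, and the downloadable module contains the complete version, including the internet gateway, route table, and private subnet.
resource "oci_core_vcn" "this" {
  # Create the VCN only when the create_vcn flag is on
  count          = var.create_vcn ? 1 : 0
  compartment_id = var.compartment_id
  cidr_block     = var.vcn_cidr_block
  display_name   = "${var.display_name_prefix}-TF-VCN"   # Illustrative display name
  dns_label      = var.host_name_prefix
}
resource "oci_core_subnet" "My-Public-Subnet" {
  count          = var.create_vcn ? 1 : 0
  compartment_id = var.compartment_id
  vcn_id         = oci_core_vcn.this[0].id
  cidr_block     = var.public_subnet_cidr_block
  display_name   = "${var.display_name_prefix}-TF-Public-Subnet"   # Illustrative display name
}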
Task 2: Create Two NSGs (NSG-01 and NSG-02)
Manual Option:
Create two NSGs (NSG-01 and NSG-02) using the following steps:
1. Select the VCN created in Task 1.
2. Under Resources, click Network Security Groups.
3. Click Create Network Security Group, enter the following information, and click Create.
   - Name: Enter a name. For example, <REGION-KEY>-AP-LAB01-NSG-01.
   - Create in Compartment: Select your working compartment.
4. Repeat steps 1 to 3 for the second NSG, named <REGION-KEY>-AP-LAB01-NSG-02.
Automated Option:
The following Terraform code creates two NSGs (NSG-01 and NSG-02).
resource "oci_core_network_security_group" "nsg-01" {
count = (var.create_vcn && var.create_nsg_1) ? 1 : 0
#Required
compartment_id = var.compartment_id
vcn_id = var.create_vcn ? oci_core_vcn.this.*.id[0] : var.vcn_id
display_name = "${var.display_name_prefix}-TF-NSG-01"
}
resource "oci_core_network_security_group" "nsg-02" {
count = (var.create_vcn && var.create_nsg_2) ? 1 : 0
#Required
compartment_id = var.compartment_id
vcn_id = var.create_vcn ? oci_core_vcn.this.*.id[0] : var.vcn_id
display_name = "${var.display_name_prefix}-TF-1-NSG-02"
}
To create the two NSGs, set the create_nsg_1 and create_nsg_2 flags to true.
# Step 2: Create two(2) empty Network Security Groups (NSG-01 & NSG-02).
create_nsg_1 = true
create_nsg_2 = true
Task 3: Launch Four VMs using Terraform and Run Internet Control Message Protocol (ICMP) Ping Test
This task assumes familiarity with launching VMs manually from the OCI Console. For more information, see Creating an Instance.
Note: We only cover the Terraform automation option to create four VMs (three in the public subnet and one in the private subnet).
Enable the create_vm_1_3 flag to instruct Terraform to create three VMs (VM-01, VM-02, and VM-03).
# Step 3a: Launch three(3) VMs(VM-01, VM-02, VM-03) in the public subnet.
create_vm_1_3 = true
To create three compute instances within the public subnet, we use the count meta-argument in the Terraform code below. This concise approach simplifies the creation process compared to the more complex for_each meta-argument. Setting count to 3 automatically generates instances indexed 0, 1, and 2, enhancing code readability and efficiency.
resource "oci_core_instance" "VM1-3" {
count = (var.create_vm_1_3) ? 3 : 0
availability_domain = "GqIF:PHX-AD-1"
compartment_id = var.compartment_id
create_vnic_details {
assign_ipv6ip = "false"
assign_private_dns_record = "true"
assign_public_ip = "true"
subnet_id = var.create_vcn ? oci_core_subnet.My-Public-Subnet.*.id[0] : var.public_subnet_id
#nsg_ids = (var.create_vcn && var.create_nsg_1) && (count.index == 2) ? [oci_core_network_security_group.nsg-1.*.id[0]] : []
nsg_ids = (var.automate_step_4 && var.create_nsg_1) ? (var.create_vcn ? [oci_core_network_security_group.nsg-1.*.id[0]] : [oci_core_network_security_group.nsg-01.*.id[0]]) : []
}
display_name = "${var.display_name_prefix}-VM-0${count.index + 1}"
metadata = {
"ssh_authorized_keys" = "${file(var.ssh_public_key)}"
}
shape = "VM.Standard.A1.Flex"
shape_config {
memory_in_gbs = "6"
ocpus = "1"
}
source_details {
source_id = var.amper_image_id
source_type = "image"
}
}
Note: To access the public VMs through SSH, generate your own SSH key pair. For more information, see Generate SSH keys.
Next, enable the create_vm_4 flag to instruct Terraform to create VM-04 within the private subnet.
# Step 3b: Launch the fourth VM (VM-04) in the private subnet.
create_vm_4 = true
This is the portion of Terraform code that creates the fourth instance (VM-04) in the private subnet.
resource "oci_core_instance" "vm-4" {
count = (var.create_vm_4) ? 1 : 0
availability_domain = "GqIF:PHX-AD-1"
compartment_id = var.compartment_id
create_vnic_details {
#assign_ipv6ip = "false"
assign_private_dns_record = "true"
assign_public_ip = "false"
subnet_id = var.create_vcn ? oci_core_subnet.My-Private-Subnet.*.id[0] : var.private_subnet_id
#nsg_ids = (var.create_vcn && var.create_nsg_1) ? [oci_core_network_security_group.nsg-2.*.id[0]] : []
nsg_ids = (var.automate_step_6 && var.create_nsg_2) ? (var.create_vcn ? [oci_core_network_security_group.nsg-2.*.id[0]] : [oci_core_network_security_group.nsg-02.*.id[0]]) : []
}
display_name = "${var.display_name_prefix}-VM-04"
shape = "VM.Standard.A1.Flex"
shape_config {
memory_in_gbs = "6"
ocpus = "1"
}
source_details {
source_id = var.amper_image_id
source_type = "image"
}
}
Manual ICMP Echo Testing: Observing the impact of NSGs on traffic.
1. Go to the OCI Console, navigate to Compute and Instances. All four instances will be listed.
2. Note down the public IP addresses of the three instances (VM-01, VM-02, VM-03).
3. From your computer, ping the public IP address of each VM instance.
Expected Results:
- All ping attempts will fail because the security list does not have an ICMP ingress rule.
- The default security rules do allow SSH access (port 22) to all public VMs.
Automated ICMP Echo Testing: This test is automated using the Terraform local-exec provisioner.
- We have defined the resource VM1-3, containing three instances that represent the three public VMs, and a variable, icmp_ping_count, to specify the number of pings.
- The following Terraform code automates ICMP echo tests to VM-02 from your local machine.

resource "null_resource" "icmp_ping_VM2_fromlocal" {
  depends_on = [oci_core_instance.VM1-3[1]]
  count      = (var.icmp_pingvm2_fromlocal) ? 1 : 0

  # Ping VM-02 from the local computer
  provisioner "local-exec" {
    command = "ping -c ${var.icmp_ping_count} ${oci_core_instance.VM1-3[1].public_ip}"
  }
}
Edit the input.auto.tfvars file to set the value of the icmp_ping_count variable. To enable pinging each of the three public VMs from your local computer, set the icmp_pingvm?_fromlocal flags to true (where 1, 2, or 3 represents the specific VM).
# ICMP ping from Local Computer (First Attempt)
icmp_pingvm1_fromlocal = true
icmp_pingvm2_fromlocal = true
icmp_pingvm3_fromlocal = true
Task 4: Configure ICMP Ingress Traffic and Attach vNICs to NSG-01
Manual Option:
The following sub-tasks are required to allow ICMP traffic to reach VM-03 through NSG-01.
Task 4.1: Add a New Ingress Rule in Networking
To update the first NSG (NSG-01), follow the steps outlined in Figure 3.
1. Select your VCN.
2. Under Network Security Groups, select NSG-01 and click Add Rules.
3. Enter the following information and click Add.
   - Source Type: Select CIDR.
   - Source CIDR: Enter 0.0.0.0/0.
   - IP Protocol: Select ICMP.
   - Type: Enter 8.
   - Code: Select All.
Figure 3: Add ingress rule to NSG-01 to allow ICMP Ping from the Internet
Task 4.2: Configure the Third VM (VM-03) to Point to NSG-01
To attach NSG-01 to the VM-03 virtual network interface (VNIC), follow the steps outlined in Figure 4.
1. Under Compute, click Instances, select VM-03, and click View Details.
2. Select the attached vNICs and click the primary vNIC.
3. Under vNIC Information, click the Edit link next to Network Security Groups and select NSG-01.
4. Click Save Changes.
Figure 4: Attach NSG-01 to VM-03 vNIC
Automated Option:
Go to the input.auto.tfvars file and turn the automate_step_4 flag on.
# Step 4: Add CIDR ingress rule in Network, attach VM-03 vNIC with NSG-01.
automate_step_4 = true
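For reference, the ingress rule this flag gates can be expressed with the oci_core_network_security_group_security_rule resource. The following is a minimal sketch assuming the NSG-01 resource from Task 2; the resource name nsg_01_icmp_from_internet is illustrative and may differ in the downloadable module, and the vNIC attachment itself is handled by the nsg_ids argument shown in Task 3.
resource "oci_core_network_security_group_security_rule" "nsg_01_icmp_from_internet" {
  # Illustrative resource name; created only when Step 4 is automated
  count = (var.automate_step_4 && var.create_nsg_1) ? 1 : 0

  network_security_group_id = oci_core_network_security_group.nsg-01[0].id
  direction                 = "INGRESS"
  protocol                  = "1"            # ICMP
  source_type               = "CIDR_BLOCK"
  source                    = "0.0.0.0/0"

  icmp_options {
    type = 8                                 # Echo request; code omitted to allow all codes
  }
}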
Notes:
- We strongly recommend performing configuration tasks manually while leveraging Terraform for automated resource provisioning.
- If opting for Terraform automation, remember to recreate (destroy and re-create) VMs to successfully attach NSG-01 to the VM-03 vNIC.
- Regardless of the chosen option, the lab mandates ICMP ping testing.
Manual Check: Perform a second ICMP echo test (ping) to the public IP addresses of VMs VM-01, VM-02, and VM-03 from your local computer.
Results:
- Pings to VM-01 and VM-02 will fail; only the third instance (VM-03) will respond successfully.
- VM-03 responds to the ping due to its vNIC’s attachment to NSG-01 (Figure 4), which allows inbound ICMP traffic (Figure 3).
Automated Check: Navigate to the input.auto.tfvars file and set all icmp_pingvm?_fromlocal flags to true (where 1, 2, or 3 represents the specific VM).
# ICMP ping from Local Computer (Second Attempt)
icmp_pingvm1_fromlocal = true
icmp_pingvm2_fromlocal = true
icmp_pingvm3_fromlocal = true
Task 5: Run ICMP Ping Before Nesting the Two NSGs
Manual Option:
Initially attempt to ping the private IP address of VM-04 from the three public VMs.
- SSH to all compute instances in the public subnet (VM-01, VM-02, VM-03).
- From each server, ping the private IP address of VM-04.
Expected Results: All ping attempts will fail. To enable connectivity, a nested NSG architecture is required.
Automated Option:
Set the icmp_test_from_vm? flag to true for each public VM (VM-01, VM-02, and VM-03), replacing ? with the specific VM number (1, 2, or 3) in each flag name.
# Step 5: SSH to VM-01, VM-02, VM-03 and ping VM-04 (First Attempt).
icmp_test_from_vm1 = true
icmp_test_from_vm2 = true
icmp_test_from_vm3 = true
This option leverages the Terraform remote-exec provisioner to automate the ICMP testing. You will need to specify the private SSH key that corresponds to the public key used during VM creation. The shell script ping_script.sh iterates through the number of ping attempts defined by icmp_ping_count.
resource "null_resource" "icmp_ping_vm4_from_vm1" {
depends_on = [oci_core_instance.VM1-3[0]]
count = (var.icmp_test_from_vm1) ? 1 : 0
connection {
agent = false
host = oci_core_instance.VM1-3[0].public_ip
user = "opc"
private_key = file(var.ssh_private_key)
}
# At this stage we assume that the ping_script.sh is copied under /home/opc
provisioner "remote-exec" {
inline = [
"echo \" PING PRIVATE IP ${oci_core_instance.vm-4[0].private_ip}\"",
"chmod +x ping_script.sh",
"export TARGET_IP=${oci_core_instance.vm-4[0].private_ip}",
"export PING_COUNT=${var.icmp_ping_count}",
"sh ping_script.sh",
]
}
}
Task 6: Configure Nested NSGs (NSG-01 and NSG-02)
Manual Option:
Configure the second NSG (NSG-02) with an ingress rule that specifies NSG-01 as the source, enabling ICMP traffic flow between the two NSGs.
Task 6.1: Add an Ingress Rule in Networking (Virtual Cloud Network)
1. Under your VCN (VCN-01), click Network Security Groups.
2. Select NSG-02 and click Add Rules.
3. Enter the following information and click Add.
   - Source Type: Select Network Security Group.
   - Source: Select NSG-01.
   - IP Protocol: Select ICMP.
   - Type: Enter 8.
   - Code: Select All.
Figure 5: Add NSG-01 as Source to NSG-02 Ingress Rule
Task 6.2: Configure the Fourth VM (VM-04) to Point to NSG-02
1. Under vNIC Information of VM-04 (Figure 6), click the Edit link next to Network Security Groups and select NSG-02.
2. Click Save Changes.
Figure 6: Attach the NSG-02 to VM-04 vNIC
Automated Option:
To automate this task, set automate_step_6 to true in the input.auto.tfvars file.
# Step 6: Add NSG-01 as ingress rule source, attach NSG-02 to VM04 vNIC.
automate_step_6 = true
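For reference, the nested-NSG rule this flag gates uses the same oci_core_network_security_group_security_rule resource as in Task 4, but with NSG-01 as the source instead of a CIDR block. The following is a minimal sketch assuming the NSG resources from Task 2; the resource name nsg_02_icmp_from_nsg_01 is illustrative.
resource "oci_core_network_security_group_security_rule" "nsg_02_icmp_from_nsg_01" {
  # Illustrative resource name; created only when Step 6 is automated
  count = (var.automate_step_6 && var.create_nsg_2) ? 1 : 0

  network_security_group_id = oci_core_network_security_group.nsg-02[0].id
  direction                 = "INGRESS"
  protocol                  = "1"                         # ICMP
  source_type               = "NETWORK_SECURITY_GROUP"
  source                    = oci_core_network_security_group.nsg-01[0].id

  icmp_options {
    type = 8                                              # Echo request
  }
}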
Task 7: Run the Final ICMP Echo Tests for the Nested NSGs
Manual End-to-End Testing:
Re-attempt pinging the private IP address of VM-04 from the public VMs.
1. SSH into the three instances in the public subnet (VM-01, VM-02, and VM-03).
2. From each one, ping the private IP address of VM-04. Only VM-03 succeeds, as illustrated in Figure 1.

Explanation: VM-04’s vNIC is now governed by the rules within NSG-02, in addition to those of the default security list. Because NSG-01 is configured as the ingress rule source for NSG-02, ICMP traffic is permitted only from resources attached to NSG-01, that is, from VM-03.
Automated End-to-End Testing:
Enable Terraform to automatically execute ICMP echo ping tests.
1. Set the icmp_test_from_vm? flag to true for each of the three VMs (VM-01, VM-02, and VM-03).
2. As depicted in the left portion of Figure 1, only VM-03 successfully pings VM-04.
# Step 7: SSH to VM-01, VM-02, VM-03 and ping VM-04 (Second Attempt).
icmp_test_from_vm1 = true
icmp_test_from_vm2 = true
icmp_test_from_vm3 = true
Figure 7 displays the results of multiple ICMP ping attempts performed after establishing an SSH connection to VM-03. The figure compares ping outcomes before and after implementing nested NSGs.
Figure 7: Ping results before and after nesting NSG-02 to NSG-01 and attaching vNICs to VMs
Note: When using Terraform, recreate VMs after switching from manual configuration to ensure proper linking of NSG-02 to the VM-04 vNIC.
Deployment Options
Option 1: Using Terraform Command Line Interface (CLI) (Community Edition)
Before running the Terraform commands to plan and deploy your infrastructure using the Terraform CLI, you need to update the provided Terraform configuration with your specific environment details, either on your local machine or remotely in OCI Cloud Shell. Download the complete Terraform source code from here: oci-blog-fast-tracking-apcertif-main.zip. The only file you need to customize for your environment is the input.auto.tfvars file (a similarly named file is terraform.tfvars). You can specify, for example, the OCIDs of the compute image used (amper_image_id) and of the compartment (compartment_id) where the lab resources will be created (modify the other default values only if needed). The package provides comprehensive instructions for setting up your environment, executing the labs, and understanding the networking and security concepts. It includes detailed guides, tips, and best practices to enhance your advanced OCI learning experience.
##########################################################################
# Terraform module: Nested NSGs - NSG-01 as Ingress source to NSG-02. #
# #
# Copyright (c) 2024 Oracle Author: Mahamat Hissein Guiagoussou #
##########################################################################
# Working Compartment
compartment_id = "REPLACE_WORKING_COMPARTMENT_OCID_HERE"
# Image OCID - https://docs.oracle.com/en-us/iaas/images/
amper_image_id = "REPLACE_INSTANCE_REGIONAL_IMAGE_OCID_HERE"
# Region based display name prefix
display_name_prefix = "AP-LAB01-1" # Replace with your own prefix
##########################################################################
# Step 1a - Create VCN using VCN Wizard (turn off the create_vcn flag), #
##########################################################################
create_vcn = false
vcn_id = "REPLACE_VCN_OCID_HERE"
private_subnet_id = "REPLACE_PRIVATE_SUBNET_OCID_HERE"
public_subnet_id = "REPLACE_PUBLIC_SUBNET_OCID_HERE"
##########################################################################
# Step 1b - Create VCN using Terraform. Provide the CIDR Blocks for the #
# VCN, Subnets and other required input (host_name_prefix). #
##########################################################################
vcn_cidr_block = "10.0.0.0/16"
public_subnet_cidr_block = "10.0.0.0/24"
private_subnet_cidr_block = "10.0.1.0/24"
host_name_prefix = "phxapl4"
##########################################################################
# Step 2: Create two(2) empty Network Security Groups: NSG-01 & NSG-02. #
##########################################################################
create_nsg_1 = false
create_nsg_2 = false
##########################################################################
# Step 3a: Launch three VMs(VM-01, VM-02, VM-03) in the public subnet. #
##########################################################################
create_vm_1_3 = false
##########################################################################
# Step 3b: Launch the fourth VM (VM-04) in the private subnet.           #
##########################################################################
create_vm_4 = false
# Shape Definition
shape_name = "VM.Standard.A1.Flex"
shape_memory_in_gbs = "6"
shape_numberof_ocpus = "1"
# Ping all public VM from Local Computer
icmp_pingvm1_fromlocal = false
icmp_pingvm2_fromlocal = false
icmp_pingvm3_fromlocal = false
# Compute Instance SSH keys
ssh_public_key = "~/cloudshellkey.pub"
ssh_private_key = "~/cloudshellkey"
# Ping VM-04 from Public VMs (VM-01, VM-02, and VM-03) via SSH
icmp_test_from_vm1 = false
icmp_test_from_vm2 = false
icmp_test_from_vm3 = false
##########################################################################
# Step 4: Add CIDR ingress rule in Network & Attach VM3 vNIC with NSG-01 #
##########################################################################
automate_step_4 = false
##########################################################################
# Step 5: SSH to VM-01, VM-02, VM-03 and ping VM-04 (First Attempt). #
##########################################################################
##########################################################################
# Step 6: Add NSG-01 as ingress rule source, Attach VM4 vNIC with NSG-02 #
##########################################################################
automate_step_6 = false
##########################################################################
# Step 7: SSH to VM-01, VM-02, VM-03 and ping VM-04 (Second Attempt). #
##########################################################################
# Number of times ping is executed
icmp_ping_count = "REPLACE_NUMBER_OF_PING_ATTEMPTS_HERE"
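After populating the input.auto.tfvars file, a typical Terraform CLI workflow for this lab looks like the following sketch. Run the commands from the directory that contains the downloaded configuration.
# Initialize the working directory and download the OCI provider plugin
terraform init
# Preview the resources that Terraform will create
terraform plan
# Provision the lab resources
terraform apply
# Remove the lab resources when you are done to avoid unnecessary costs
terraform destroy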
Option 2: Using Oracle Resource Manager (variable input samples)
Create an Oracle Resource Manager stack by defining variables (for example, amper_image_id, compartment_id), provisioning network resources (create_vcn, create_nsg_1/2), creating VMs (create_vm_1_3, create_vm_4, shape_name), enabling ICMP pings (icmp_pingvm1_fromlocal, icmp_test_from_vm1), and executing the plan to deploy the infrastructure.
Next Steps
Infrastructure as Code (IaC) principles, applied with Terraform, significantly enhance infrastructure management through accelerated deployment and improved security. For instance, manually configuring the nested NSGs in Lab 1 of the OCI Architect Professional Certification (2024), Become An OCI Architect Professional, typically consumes around 25 minutes.
By leveraging Terraform, we significantly reduced the time required to provision and configure complex OCI resources, demonstrating a substantial efficiency gain. This translates to measurable time and cost savings for OCI users managing complex network security configurations. Additionally, IaC promotes consistency and reduces the risk of human error, making it a valuable model for both learning and real-world customer implementations. To learn more about applying IaC with Terraform or other similar tools for your OCI automation needs, consider applying the learned principles in the remaining OCI Architect Professional Certification labs while exploring the OCI Reference Architecture and best practices.
Acknowledgments
- Author - Mahamat Hissein Guiagoussou (Master Principal Cloud Architect)
More Learning Resources
Explore other labs on docs.oracle.com/learn or access more free learning content on the Oracle Learning YouTube channel. Additionally, visit education.oracle.com/learning-explorer to become an Oracle Learning Explorer.
For product documentation, visit Oracle Help Center.
Accelerate Oracle Cloud Infrastructure Architect Professional Certification with Terraform
G13782-01
August 2024