Create a Multi-Tier Topology with IP Networks Using Terraform

Terraform is a third-party tool that you can use to create and manage your IaaS and PaaS resources on Oracle Cloud at Customer. This guide shows you how to use Terraform to launch and manage a multi-tier topology of Compute Classic instances attached to IP networks.

Scenario Overview

The application and the database that the application uses are hosted on instances attached to separate IP networks. Users outside Oracle Cloud have HTTPS access to the application instances. The topology also includes an admin instance that users outside the cloud can connect to using SSH. The admin instance can communicate with all the other instances in the topology.

Note:

The focus of this guide is the network configuration for instances attached to IP networks in a sample topology. The framework and the flow of the steps can be applied to other similar or more complex topologies. The steps for provisioning or configuring other resources (like storage) are not covered in this guide.

Compute Topology

The topology that you are going to build using the steps in this tutorial contains the following Compute Classic instances:

  • Two instances – appVM1 and appVM2 – for hosting a business application, attached to an IP network, each with a fixed public IP address.

  • Two instances – dbVM1 and dbVM2 – for hosting the database for the application. These instances are attached to a second IP network.

  • An admin instance – adminVM – that's attached to a third IP network and has a fixed public IP address.

Note:

You won't actually install any application or database. Instead, you'll simulate listeners on the required application and database ports using the nc utility. The goal of this section is to demonstrate the steps to configure the networking that's necessary for the traffic flow requirements described next.

Traffic Flow Requirements

Only the following traffic flows must be permitted in the topology that you'll build:

  • HTTPS requests from anywhere to the application instances

  • SSH connections from anywhere to the admin instance

  • All traffic from the admin instance to the application instances

  • All traffic from the admin instance to the database instances

  • TCP traffic from the application instances to port 1521 of the database instances

Network Resources Required for this Topology

  • Public IP address reservations for the application instances and for the admin instance

  • Three IP networks, one each for the application instances, the database instances, and the admin instance

  • An IP network exchange to connect the IP networks in the topology

  • Security protocols for SSH, HTTPS, and TCP/1521 traffic

  • ACLs that will contain the required security rules

  • vNICsets for the application instances, database instances, and the admin instance

  • Security rules to allow SSH connections to the admin instance, HTTPS traffic to the application instances, and TCP/1521 traffic to the database instances

Prerequisites

  1. If you are new to Terraform, learn the basics.
    At a minimum, read the brief introduction here: https://www.terraform.io/intro/index.html.
  2. Download and install Terraform on your local computer.
    Binary packages are available for several operating systems and processor architectures. For the instructions to download and install Terraform, go to https://www.terraform.io/intro/getting-started/install.html.
  3. Generate an SSH key pair. See Generate an SSH Key Pair. (A sample key-generation command is shown after this list.)
  4. Gather the required Oracle Cloud account information:
    • Your Oracle Cloud user name and password.
    • The service instance ID.
      1. Sign in to Oracle Cloud My Services.
      2. Locate the Compute Classic tile and click Compute Classic.
      3. Locate the Service Instance ID field, and note its value (example: 500099999).
    • The REST endpoint URL for Compute Classic.
      1. Sign in to Oracle Cloud My Services, using the My Services URL from the welcome email.
      2. Click the Menu icon near the upper left corner of the page.
      3. In the menu that appears, expand Services, and click Compute Classic. The Instances page of the Compute Classic web console is displayed.
      4. Click Site near the top of the page, and select the site whose REST endpoint URL you want to find.
      5. In the Site Selector dialog box, note the URL in the REST Endpoint field.
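
If you need to generate a new SSH key pair (prerequisite 3), a typical command on a Linux or macOS machine is shown in the following example. The key file name occKey is only an example; use any path and name you prefer.

[localmachine ~]$ ssh-keygen -t rsa -b 2048 -f ~/.ssh/occKey

This creates the private key ~/.ssh/occKey and the public key ~/.ssh/occKey.pub. Later, you'll paste the contents of the .pub file into the Terraform configuration and use the private key to connect to the VMs.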

Create the Required Resources Using Terraform

Define all the resources required for the multi-tier topology in a Terraform configuration and then apply the configuration.

Note:

The procedure described here shows how to define resources in a simple Terraform configuration. It does not use advanced Terraform features, such as variables and modules.
  1. On the computer where you installed Terraform, create a new directory.
  2. In the new directory, create an empty text file, name-of-your-choice.tf.

    This is a Terraform configuration. In this file, you define the following:

    • The parameters that Terraform must use to connect to your Oracle Cloud at Customer machine

    • The resources to be provisioned

    Important:

    The .tf extension is mandatory. When Terraform performs any operation, it loads all files with the .tf extension in the current directory.
  3. Open the text file in an editor of your choice.
  4. Add the following code to define the parameters that Terraform needs to connect to your account:
    provider "opc" {
      user            = "jack.smith@example.com"
      password        = "mypassword"
      identity_domain = "500099999"
      endpoint        = "https://compute.site99.ocm.rack100.example.com"
    }

    In this code:

    • Don’t change the provider line.

    • user and password: Replace with your Oracle Cloud credentials.

    • identity_domain: Replace with the service instance ID that you identified earlier.

    • endpoint: Replace with the REST endpoint URL of Compute Classic.
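
    As noted earlier, this guide doesn't use Terraform variables. For reference only, a minimal sketch of the same provider block with the credentials moved into input variables might look like the following; the variable names are illustrative and aren't part of this guide's configuration.

    # Hypothetical alternative using input variables (not used in this guide)
    variable "opc_user" {}
    variable "opc_password" {}
    variable "opc_identity_domain" {}
    variable "opc_endpoint" {}

    provider "opc" {
      user            = "${var.opc_user}"
      password        = "${var.opc_password}"
      identity_domain = "${var.opc_identity_domain}"
      endpoint        = "${var.opc_endpoint}"
    }

    You would then supply the values at run time (for example, through a terraform.tfvars file) instead of storing the password in the configuration file.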

  5. Add code for each resource that you want to create using Terraform.

    Note:

    When copying and editing the code, follow the instructions carefully.
    1. Add code for the ACLs:
      # Create the ACLs
      
      # For the admin VM
      resource "opc_compute_acl" "adminVM" {
        name = "adminVM"
      }
      # For the application VMs
      resource "opc_compute_acl" "appVMs" {
        name = "appVMs"
      }
      # For the database VMs
      resource "opc_compute_acl" "dbVMs" {
        name = "dbVMs"
      }
      In this code:
      • Don’t change the resource lines.

      • name: Replace with names of your choice, or leave the examples as is.

    2. Add code for an IP exchange:
      # Create an IP exchange
      resource "opc_compute_ip_network_exchange" "occIPX" {
        name = "occIPX"
      }
      
      In this code:
      • Don’t change the resource line.

      • name: Replace with a name of your choice, or leave the example as is.

    3. Add code for the IP networks:
      # Create the IP networks
      
      # For the admin VM
      resource "opc_compute_ip_network" "adminIPnetwork" {
        name              = "adminIPnetwork"
        ip_network_exchange = "${opc_compute_ip_network_exchange.occIPX.name}"
        ip_address_prefix = "172.16.1.0/24"
      }
      # For the application VMs
      resource "opc_compute_ip_network" "appIPnetwork" {
        name              = "appIPnetwork"
        ip_network_exchange = "${opc_compute_ip_network_exchange.occIPX.name}"
        ip_address_prefix = "10.50.1.0/24"
      }
      # For the database VMs
      resource "opc_compute_ip_network" "dbIPnetwork" {
        name              = "dbIPnetwork"
        ip_network_exchange = "${opc_compute_ip_network_exchange.occIPX.name}"
        ip_address_prefix = "192.168.1.0/24"
      }

      In this code:

      • Don't change the resource lines.

      • name: Replace with names of your choice, or leave the examples as is.

      • ip_network_exchange is a reference to the IP network exchange that you defined earlier. Don’t change these lines.

      • ip_address_prefix: Replace with address ranges of your choice in CIDR format, or leave the examples as is.

    4. Add code to reserve public IP addresses for the VMs:
      # Reserve public IP addresses
      
      # For the admin VM
      resource "opc_compute_ip_address_reservation" "ipResForAdminVM" {
        name            = "ipResForAdminVM"
        ip_address_pool = "public-ippool"
        lifecycle {
          prevent_destroy = true
        }
      }
      # For application VM 1
      resource "opc_compute_ip_address_reservation" "ipResForAppVM1" {
        name            = "ipResForAppVM1"
        ip_address_pool = "public-ippool"
        lifecycle {
          prevent_destroy = true
        }
      }
      # For application VM 2
      resource "opc_compute_ip_address_reservation" "ipResForAppVM2" {
        name            = "ipResForAppVM2"
        ip_address_pool = "public-ippool"
        lifecycle {
          prevent_destroy = true
        }
      }

      In this code:

      • Don't change the resource lines.

      • name: Replace with names of your choice, or leave the examples as is.

      • ip_address_pool: Don't change these lines.

      • lifecycle.prevent_destroy=true reduces the chance of accidentally deleting the resource. This setting is useful for resources that you want to retain for future use even after you delete the VM.

    5. Add code for the required security protocols:
      # Create security protocols
      
      # For HTTPS requests to the application VMs
      resource "opc_compute_security_protocol" "https" {
        name        = "https"
        dst_ports   = ["443"]
        ip_protocol = "tcp"
      }
      
      # For SSH connections
      resource "opc_compute_security_protocol" "ssh" {
        name        = "ssh"
        dst_ports   = ["22"]
        ip_protocol = "tcp"
      }
      
      # For TCP traffic from the application VMs to the database VMs
      resource "opc_compute_security_protocol" "tcp1521" {
        name        = "tcp1521"
        dst_ports   = ["1521"]
        ip_protocol = "tcp"
      }

      In this code:

      • Don't change the resource lines.

      • name: Replace with names of your choice, or leave the examples as is.

      • dst_ports: 443, 22, and 1521 are the ports that you need to open. Don't change these lines.

      • ip_protocol: TCP is the protocol for all the ports that you need to open. Don't change these lines.

    6. Add code to upload an SSH public key:
      # Specify an SSH public key
      resource "opc_compute_ssh_key" "adminSSHkey" {
        name = "occKey"
        key  = "ssh-rsa AAAAB3NzaC1yc2E..."
        lifecycle {
          prevent_destroy = true
        }
      }

      In this code:

      • Don't change the resource line.

      • name: Replace with a name of your choice, or leave the example as is.

      • key: Replace with the value of your SSH public key. Copy and paste the value exactly as in the public key file. Don't introduce any extra characters or lines.

      • lifecycle.prevent_destroy=true ensures that the resource is retained even when you delete the VM.

    7. Add code for the virtual NIC sets:
      # Create virtual NIC sets
      
      # For the admin VM
      resource "opc_compute_vnic_set" "adminVM" {
        name         = "adminVM"
        applied_acls = ["${opc_compute_acl.adminVM.name}"]
      }
      
      # For the application VMs
      resource "opc_compute_vnic_set" "appVMs" {
        name         = "appVMs"
        applied_acls = ["${opc_compute_acl.appVMs.name}"]
      }
      
      # For the database VMs
      resource "opc_compute_vnic_set" "dbVMs" {
        name         = "dbVMs"
        applied_acls = ["${opc_compute_acl.dbVMs.name}"]
      }

      In this code:

      • Don't change the resource lines.

      • name: Replace with names of your choice, or leave the examples as is.

      • applied_acls contains a reference to the corresponding ACL that you defined earlier. Don't change these lines.

    8. Add code for the following security rules:
      The rules you need are summarized below. For each rule, the list gives its purpose, followed by the suggested name, the type (flow direction), the ACL, the source and destination, and the security protocol:
      • SSH requests from any source to the admin VM: internet-to-adminVM, ingress, ACL adminVM, source: Any, destination: adminVM vNICset, protocol: ssh
      • All traffic from the admin VM to any destination: adminVM-to-any, egress, ACL adminVM, source: adminVM vNICset, destination: Any, protocol: Any
      • All traffic from the admin VM to the application VMs: adminVM-to-appVMs, ingress, ACL appVMs, source: adminVM vNICset, destination: appVMs vNICset, protocol: Any
      • HTTPS traffic from any source to port 443 of the application VMs: internet-to-appVMs, ingress, ACL appVMs, source: Any, destination: appVMs vNICset, protocol: https
      • TCP traffic from the application VMs to port 1521 of the DB VMs (egress rule): appVMs-to-dbVMs-egress, egress, ACL appVMs, source: appVMs vNICset, destination: dbVMs vNICset, protocol: tcp1521
      • TCP traffic from the application VMs to port 1521 of the DB VMs (ingress rule): appVMs-to-dbVMs-ingress, ingress, ACL dbVMs, source: appVMs vNICset, destination: dbVMs vNICset, protocol: tcp1521
      • All traffic from the admin VM to the DB VMs: adminVM-to-dbVMs, ingress, ACL dbVMs, source: adminVM vNICset, destination: dbVMs vNICset, protocol: Any

      # Create security rules
      
      # For SSH requests from any source to the admin VM 
      resource "opc_compute_security_rule" "internet-to-adminVM" {
        name               = "internet-to-adminVM"
        flow_direction     = "ingress"
        acl                = "${opc_compute_acl.adminVM.name}"
        security_protocols = ["${opc_compute_security_protocol.ssh.name}"]
        dst_vnic_set       = "${opc_compute_vnic_set.adminVM.name}"
      }
      
      # For all traffic from the admin VM to any destination
      resource "opc_compute_security_rule" "adminVM-to-any" {
        name               = "adminVM-to-any"
        flow_direction     = "egress"
        acl                = "${opc_compute_acl.adminVM.name}"
        src_vnic_set       = "${opc_compute_vnic_set.adminVM.name}"
      }
      
      # For all traffic from the admin VM to the application VMs 
      resource "opc_compute_security_rule" "adminVM-to-appVMs" {
        name               = "adminVM-to-appVMs"
        flow_direction     = "ingress"
        acl                = "${opc_compute_acl.appVMs.name}"
        src_vnic_set       = "${opc_compute_vnic_set.adminVM.name}"
        dst_vnic_set       = "${opc_compute_vnic_set.appVMs.name}"
      }
      
      # For HTTPS traffic from any source to port 443 of the application VMs 
      resource "opc_compute_security_rule" "internet-to-appVMs" {
        name               = "internet-to-appVMs"
        flow_direction     = "ingress"
        acl                = "${opc_compute_acl.appVMs.name}"
        security_protocols = ["${opc_compute_security_protocol.https.name}"]
        dst_vnic_set       = "${opc_compute_vnic_set.appVMs.name}"
      }
      
      # For TCP traffic from the application VMs to port 1521 of the DB VMs
      resource "opc_compute_security_rule" "appVMs-to-dbVMs-egress" {
        name               = "appVMs-to-dbVMs-egress"
        flow_direction     = "egress"
        acl                = "${opc_compute_acl.appVMs.name}"
        security_protocols = ["${opc_compute_security_protocol.tcp1521.name}"]
        src_vnic_set       = "${opc_compute_vnic_set.appVMs.name}"
        dst_vnic_set       = "${opc_compute_vnic_set.dbVMs.name}"
      }
      
      # For TCP traffic from the application VMs to port 1521 of the DB VMs 
      resource "opc_compute_security_rule" "appVMs-to-dbVMs-ingress" {
        name               = "appVMs-to-dbVMs-ingress"
        flow_direction     = "ingress"
        acl                = "${opc_compute_acl.dbVMs.name}"
        security_protocols = ["${opc_compute_security_protocol.tcp1521.name}"]
        src_vnic_set       = "${opc_compute_vnic_set.appVMs.name}"
        dst_vnic_set       = "${opc_compute_vnic_set.dbVMs.name}"
      }
      
      # For all traffic from the admin VM to the DB VMs 
      resource "opc_compute_security_rule" "adminVM-to-dbVMs" {
        name               = "adminVM-to-dbVMs"
        flow_direction     = "ingress"
        acl                = "${opc_compute_acl.dbVMs.name}"
        src_vnic_set       = "${opc_compute_vnic_set.adminVM.name}"
        dst_vnic_set       = "${opc_compute_vnic_set.dbVMs.name}"
      }

      In this code:

      • Don't change the resource lines.

      • name: Replace with names of your choice, or leave the examples as is.

      • flow_direction is the direction (to or from the VMs) in which the rules permit traffic. Don't change these lines.

      • acl is a reference to one of the ACLs that you defined earlier. Don't change these lines.

      • security_protocols are references to the protocols that you defined earlier. Don't change these lines.

      • src_vnic_set and dst_vnic_set are references to the appropriate vNICsets that you defined earlier. Don't change these lines.

    9. Add code to create persistent boot volumes for the VMs:
      # Create persistent boot volumes
      
      # For the admin VM
      resource "opc_compute_storage_volume" "adminVMbootVolume" {
        size = "20"
        name = "adminVMbootVolume"
        bootable = true
        image_list = "/oracle/public/OL_7.2_UEKR4_x86_64"
        image_list_entry = 1
        lifecycle {
          prevent_destroy = true
        }
      }
      
      # For application VM 1
      resource "opc_compute_storage_volume" "appVM1bootVolume" {
        size = "20"
        name = "appVM1bootVolume"
        bootable = true
        image_list = "/oracle/public/OL_7.2_UEKR4_x86_64"
        image_list_entry = 1
        lifecycle {
          prevent_destroy = true
        }
      }
      
      # For application VM 2
      resource "opc_compute_storage_volume" "appVM2bootVolume" {
        size = "20"
        name = "appVM2bootVolume"
        bootable = true
        image_list = "/oracle/public/OL_7.2_UEKR4_x86_64"
        image_list_entry = 1
        lifecycle {
          prevent_destroy = true
        }
      }
      
      # For database VM 1
      resource "opc_compute_storage_volume" "dbVM1bootVolume" {
        size = "20"
        name = "dbVM1bootVolume"
        bootable = true
        image_list = "/oracle/public/OL_7.2_UEKR4_x86_64"
        image_list_entry = 1
        lifecycle {
          prevent_destroy = true
        }
      }
      
      # For database VM 2
      resource "opc_compute_storage_volume" "dbVM2bootVolume" {
        size = "20"
        name = "dbVM2bootVolume"
        bootable = true
        image_list = "/oracle/public/OL_7.2_UEKR4_x86_64"
        image_list_entry = 1
        lifecycle {
          prevent_destroy = true
        }
      }
      In this code:
      • Don't change the resource lines.

      • size: Leave the sizes at 20 GB or enter a larger size.

      • name: Replace with names of your choice, or leave the examples as is.

      • bootable=true indicates a bootable volume. Don't change these lines.

      • image_list: Replace with the full name of the images that you want to use, or leave the examples as is.

      • image_list_entry=1 means that the first image in the image list must be used. Don't change these lines.

      • lifecycle.prevent_destroy=true ensures that the resource is retained even when you delete the VM.

    10. Add code for volumes for the data and applications that you may want to store:
      # Create data volumes
      
      # For the admin VM
      resource "opc_compute_storage_volume" "adminVMdataVolume" {
        name = "adminVMdataVolume"
        size = 10
        lifecycle {
          prevent_destroy = true
        }
      }
      
      # For application VM 1
      resource "opc_compute_storage_volume" "appVM1dataVolume" {
        name = "appVM1dataVolume"
        size = 10
        lifecycle {
          prevent_destroy = true
        }
      }
      
      # For application VM 2
      resource "opc_compute_storage_volume" "appVM2dataVolume" {
        name = "appVM2dataVolume"
        size = 10
        lifecycle {
          prevent_destroy = true
        }
      }
      
      # For database VM 1
      resource "opc_compute_storage_volume" "dbVM1dataVolume" {
        name = "dbVM1dataVolume"
        size = 10
        lifecycle {
          prevent_destroy = true
        }
      }
      
      # For database VM 2
      resource "opc_compute_storage_volume" "dbVM2dataVolume" {
        name = "dbVM2dataVolume"
        size = 10
        lifecycle {
          prevent_destroy = true
        }
      }
      In this code:
      • Don't change the resource lines.

      • name: Replace with names of your choice, or leave the examples as is.

      • size: Replace with sizes of your choice, in GB.

      • lifecycle.prevent_destroy=true ensures that the resource is retained even when you delete the VM.

    11. Add code for the admin VM:
      # Create the admin VM
      resource "opc_compute_instance" "adminVM" {
        name       = "adminVM"
        shape      = "oc3"
        ssh_keys   = ["${opc_compute_ssh_key.adminSSHkey.name}"]
        hostname   = "adminvm"
      
        storage {
          volume = "${opc_compute_storage_volume.adminVMbootVolume.name}"
          index  = 1
        }
      
        boot_order = [1]
      
        storage {
          volume = "${opc_compute_storage_volume.adminVMdataVolume.name}"
          index = 2
        }
      
        networking_info {
          index      = 0
          ip_network = "${opc_compute_ip_network.adminIPnetwork.name}"
          nat        = ["${opc_compute_ip_address_reservation.ipResForAdminVM.name}"]
          vnic_sets  = ["${opc_compute_vnic_set.adminVM.name}"]
        }
      }

      In this code:

      • Don't change the resource line.

      • name: Replace with a name of your choice, or leave the example as is.

      • shape: Replace with a shape of your choice, or leave the example as is.

      • ssh_keys contains a reference to the SSH public key that you specified earlier. Don't change this line.

      • hostname: Replace with a host name of your choice, or leave the example as is.

      • storage.volume: There are two of these fields referring to the boot and data volumes that you defined earlier. Don't change these lines.

      • storage.index indicates the disk number at which the volume must be attached to the VM. Don't change these lines.

      • boot_order=1 means that the volume attached at index #1 must be used to boot the VM. Don't change this line.

      • networking_info.index=0 means that this network definition is for eth0. Don't change this line.

      • networking_info.ip_network contains a reference to the IP network that you defined earlier. Don't change this line.

      • networking_info.nat contains a reference to the IP reservation that you defined earlier. Don't change this line.

      • networking_info.vnic_sets contains a reference to the vNICset that you defined earlier. Don't change this line.

    12. Add code for the application VMs:
      # Create application VM 1
      resource "opc_compute_instance" "appVM1" {
        name       = "appVM1"
        shape      = "oc3"
        ssh_keys   = ["${opc_compute_ssh_key.adminSSHkey.name}"]
        hostname   = "appvm1"
      
        storage {
          volume = "${opc_compute_storage_volume.appVM1bootVolume.name}"
          index  = 1
        }
      
        boot_order = [1]
      
        storage {
          volume = "${opc_compute_storage_volume.appVM1dataVolume.name}"
          index = 2
        }
      
        networking_info {
          index      = 0
          ip_network = "${opc_compute_ip_network.appIPnetwork.name}"
          nat        = ["${opc_compute_ip_address_reservation.ipResForAppVM1.name}"]
          vnic_sets  = ["${opc_compute_vnic_set.appVMs.name}"]
        }
      }
      
      # Create application VM 2
      resource "opc_compute_instance" "appVM2" {
        name       = "appVM2"
        shape      = "oc3"
        ssh_keys   = ["${opc_compute_ssh_key.adminSSHkey.name}"]
        hostname   = "appvm2"
      
        storage {
          volume = "${opc_compute_storage_volume.appVM2bootVolume.name}"
          index  = 1
        }
      
        boot_order = [1]
      
        storage {
          volume = "${opc_compute_storage_volume.appVM2dataVolume.name}"
          index = 2
        }
      
        networking_info {
          index      = 0
          ip_network = "${opc_compute_ip_network.appIPnetwork.name}"
          nat        = ["${opc_compute_ip_address_reservation.ipResForAppVM2.name}"]
          vnic_sets  = ["${opc_compute_vnic_set.appVMs.name}"]
        }
      }
      In this code:
      • Don't change the resource lines.

      • name: Replace with names of your choice, or leave the examples as is.

      • shape: Replace with shapes of your choice, or leave the examples as is.

      • ssh_keys contain references to the SSH public key that you specified earlier. Don't change these lines.

      • hostname: Replace with host names of your choice, or leave the examples as is.

      • storage.volume: These fields refer to the boot and data volumes that you defined earlier. Don't change these lines.

      • storage.index indicates the disk number at which the volumes must be attached to the VMs. Don't change these lines.

      • boot_order=1 means that the volumes attached at index #1 must be used to boot the VMs. Don't change these lines.

      • networking_info.index=0 means that the network definitions are for eth0. Don't change these lines.

      • networking_info.ip_network is a reference to the IP network that you defined earlier. Don't change these lines.

      • networking_info.nat is a reference to the IP reservation that you defined earlier for each VM. Don't change these lines.

      • networking_info.vnic_sets are references to the vNICsets that you defined earlier. Don't change these lines.

    13. Add code for the database VMs:
      # Create database VM 1
      resource "opc_compute_instance" "dbVM1" {
        name       = "dbVM1"
        shape      = "oc3"
        ssh_keys   = ["${opc_compute_ssh_key.adminSSHkey.name}"]
        hostname   = "dbvm1"
      
        storage {
          volume = "${opc_compute_storage_volume.dbVM1bootVolume.name}"
          index  = 1
        }
      
        boot_order = [1]
      
        storage {
          volume = "${opc_compute_storage_volume.dbVM1dataVolume.name}"
          index = 2
        }
      
        networking_info {
          index      = 0
          ip_network = "${opc_compute_ip_network.dbIPnetwork.name}"
          vnic_sets  = ["${opc_compute_vnic_set.dbVMs.name}"]
        }
      }
      
      # Create database VM 2
      resource "opc_compute_instance" "dbVM2" {
        name       = "dbVM2"
        shape      = "oc3"
        ssh_keys   = ["${opc_compute_ssh_key.adminSSHkey.name}"]
        hostname   = "dbvm2"
      
        storage {
          volume = "${opc_compute_storage_volume.dbVM2bootVolume.name}"
          index  = 1
        }
      
        boot_order = [1]
      
        storage {
          volume = "${opc_compute_storage_volume.dbVM2dataVolume.name}"
          index = 2
        }
      
        networking_info {
          index      = 0
          ip_network = "${opc_compute_ip_network.dbIPnetwork.name}"
          vnic_sets  = ["${opc_compute_vnic_set.dbVMs.name}"]
        }
      }
      In this code:
      • Don't change the resource lines.

      • name: Replace with names of your choice, or leave the examples as is.

      • shape: Replace with shapes of your choice, or leave the examples as is.

      • ssh_keys contain references to the SSH public key that you specified earlier. Don't change these lines.

      • hostname: Replace with host names of your choice, or leave the examples as is.

      • storage.volume: These fields refer to the boot and data volumes that you defined earlier. Don't change these lines.

      • storage.index indicates the disk number at which the volumes must be attached to the VMs. Don't change these lines.

      • boot_order=1 means that the volumes attached at index #1 must be used to boot the VMs. Don't change these lines.

      • networking_info.index=0 means that the network definitions are for eth0. Don't change these lines.

      • networking_info.ip_network is a reference to the IP network that you defined earlier. Don't change these lines.

      • networking_info.vnic_sets are references to the vNICsets that you defined earlier. Don't change these lines.

    14. Add code for Terraform to display the public and private IP addresses of the VMs after the configuration is applied:
      output "adminVM public IP address"{
        value = "${opc_compute_ip_address_reservation.ipResForAdminVM.ip_address}"
      }
      
      output "appVM1 public IP address"{
        value = "${opc_compute_ip_address_reservation.ipResForAppVM1.ip_address}"
      }
      
      output "appVM2 public IP address"{
        value = "${opc_compute_ip_address_reservation.ipResForAppVM2.ip_address}"
      }
      
      output "appVM1 private IP address"{
        value = "${opc_compute_instance.appVM1.ip_address}"
      }
      
      output "appVM2 private IP address"{
        value = "${opc_compute_instance.appVM2.ip_address}"
      }
      
      output "dbVM1 private IP address"{
        value = "${opc_compute_instance.dbVM1.ip_address}"
      }
      
      output "dbVM2 private IP address"{
        value = "${opc_compute_instance.dbVM2.ip_address}"
      }
      In this code:
      • output is the text label to be displayed before the IP address. Don’t change these lines.

      • value is a reference to each IP address to be displayed. Don’t change these lines.

  6. After adding all the required code, save the file.
  7. Initialize the directory containing the configuration.
    terraform init

    This command downloads the opc provider and sets up the current directory for use by Terraform.

  8. Verify that the syntax of the configuration has no errors.
    terraform validate

    If any syntactical errors exist, the output lists the errors.

  9. If errors exist, then reopen the configuration, fix the errors, save the file, and run terraform validate again.

    When no error exists, the command doesn't display any output.

    Tip:

    To debug problems from this point onward, you can enable logging.
    1. Configure the log level by setting the TF_LOG environment variable to TRACE, DEBUG, INFO, WARN or ERROR. The TRACE level is the most verbose.

    2. Set the log-file path by using the TF_LOG_PATH environment variable.
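
    For example, on a Linux or macOS machine you could set both environment variables in the shell before running Terraform:

    [localmachine ~]$ export TF_LOG=DEBUG
    [localmachine ~]$ export TF_LOG_PATH=./terraform.log

    To turn logging off again, unset TF_LOG.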

  10. Review the resources that you have defined.
    terraform plan

    Terraform displays all the actions that will be performed when you apply this configuration. It lists the resources that will be created and deleted and the attributes of each resource.

    At the end, Terraform summarizes the number of resources that will be added, destroyed, and changed when you apply the configuration.

    Plan: 39 to add, 0 to change, 0 to destroy.
  11. If you want to change anything, edit the configuration, validate it, and review the revised plan.
  12. After finalizing the configuration, create the resources defined in it.
    terraform apply
  13. At the Do you want to perform these actions prompt, enter yes.

    For each resource, Terraform shows the status of the operation and the time taken.

  14. Wait for a message as shown in the following example:
    Apply complete! Resources: 39 added, 0 changed, 0 destroyed.
    
    Outputs:
    
    adminVM public IP address = 203.0.113.2
    
    appVM1 private IP address = 10.50.1.2
    appVM1 public IP address = 203.0.113.3
    appVM2 private IP address = 10.50.1.3
    appVM2 public IP address = 203.0.113.4
    dbVM1 private IP address = 192.168.1.2
    dbVM2 private IP address = 192.168.1.3
    
  15. Note the IP addresses. You’ll need them to verify network access to the VMs.
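
    Tip:

    If you need to see these IP addresses again later, run the following command from the same directory. Terraform re-displays the output values from the saved state.
    terraform output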

(Optional) Verify Network Access to the VMs

Verify SSH Connections from Outside the Cloud to the Admin VM

Run the following command from your local machine:

[localmachine ~]$ ssh -i path-to-privateKeyFile opc@publicIPaddressOfAdminVM 

You should see the following prompt:

opc@adminvm

This confirms that SSH connections can be made from outside the cloud to the admin VM.

Verify SSH Connections from the Admin VM to the Database and Application VMs

  1. Copy the private SSH key file corresponding to the public key that you associated with your VMs from your local machine to the admin VM, by running the following command on your local machine:

    [localmachine ~]$ scp -i path-to-privateKeyFile path-to-privateKeyFile opc@publicIPaddressOfAdminVM:~/.ssh/privatekey 
  2. From your local machine, connect to the admin VM using SSH:

    [localmachine ~]$ ssh -i path-to-privateKeyFile opc@publicIPaddressOfAdminVM 
  3. From the admin VM, connect to each of the database and application VMs using SSH:

    [opc@adminvm]$ ssh -i ~/.ssh/privatekey opc@privateIPaddress 
  4. Depending on the VM you connect to, you should see one of the following prompts after the SSH connection is established.

    • opc@appvm1

    • opc@appvm2

    • opc@dbvm1

    • opc@dbvm2

Verify Connectivity from Outside the Cloud to Port 443 of the Application VMs

You can use the nc utility to simulate a listener on port 443 on one of the application VMs, and then run nc from any host outside the cloud to verify connectivity to the application VM.

Note:

The verification procedure described here is specific to VMs created using the Oracle-provided images for Oracle Linux 7.2 and 6.8.
  1. On your local host, download the nc package from http://yum.oracle.com/repo/OracleLinux/OL6/latest/x86_64/getPackage/nc-1.84-24.el6.x86_64.rpm.

  2. Copy nc-1.84-24.el6.x86_64.rpm from your local host to the admin VM.
    [localmachine ~]$ scp -i path-to-privateKeyFile path_to_nc-1.84-24.el6.x86_64.rpm opc@publicIPaddressOfAdminVM:~ 
  3. From your local machine, connect to the admin VM using SSH:
    [localmachine ~]$ ssh -i path-to-privateKeyFile opc@publicIPaddressOfAdminVM 
  4. Copy nc-1.84-24.el6.x86_64.rpm to one of the application VMs.
    [opc@adminvm]$ scp -i ~/.ssh/privatekey ~/nc-1.84-24.el6.x86_64.rpm opc@privateIPaddressOfAppVM1:~ 
  5. Connect to the application VM:
    [opc@adminvm]$ ssh -i ~/.ssh/privatekey opc@privateIPaddressOfAppVM1 
  6. On the application VM, install nc.
    [opc@appvm1]$ sudo rpm -i nc-1.84-24.el6.x86_64.rpm
  7. Configure the application VM to listen on port 443. Note that this step is just for verifying connections to port 443. In real-life scenarios, this step would be done when you configure your application on the VM to listen on port 443.
    [opc@appvm1]$ sudo nc -l 443
  8. From any host outside the cloud, run the following nc command to test whether you can connect to port 443 of the application VM:
    [localmachine ~]$ nc -v publicIPaddressOfAppVM1 443
    The following message is displayed:
    Connection to publicIPaddressOfAppVM1 443 port [tcp/https] succeeded!

    This message confirms that the application VM accepts connection requests on port 443.

  9. Press Ctrl + C to exit the nc process.

Verify Connectivity from the Application VMs to Port 1521 of the Database VMs

You can use the nc utility to simulate a listener on port 1521 on one of the database VMs, and then run nc from one of the application VMs to verify connectivity from the application tier to the database tier.

Note:

The verification procedure described here is specific to VMs created using the Oracle-provided images for Oracle Linux 7.2 and 6.8.
  1. From your local machine, connect to the admin VM using SSH:
    [localmachine ~]$ ssh -i path-to-privateKeyFile opc@publicIPaddressOfAdminVM 
  2. Copy nc-1.84-24.el6.x86_64.rpm to one of the database VMs.
    [opc@adminvm]$ scp -i ~/.ssh/privatekey ~/nc-1.84-24.el6.x86_64.rpm opc@privateIPaddressOfDBVM1:~ 
  3. Connect to the database VM:
    [opc@adminvm]$ ssh -i ~/.ssh/privatekey opc@privateIPaddressOfDBVM1 
  4. On the database VM, install nc.
    [opc@dbvm1]$ sudo rpm -i nc-1.84-24.el6.x86_64.rpm
  5. Configure the VM to listen on port 1521. Note that this step is just for verifying connections to port 1521. In real-life scenarios, this step would be done when you set up your database to listen on port 1521.
    [opc@dbvm1]$ nc -l 1521
  6. Leave the current terminal session open. Using a new terminal session, connect to the admin VM using SSH and, from there, connect to one of the application VMs.
    [localmachine ~]$ ssh -i path-to-privateKeyFile opc@publicIPaddressOfAdminVM
    [opc@adminvm]$ ssh -i ~/.ssh/privatekey opc@privateIPaddressOfAppVM1 
  7. From the application VM, run the following nc command to test whether you can connect to port 1521 of the database VM:
    [opc@appvm1 ~]$ nc -v privateIPaddressOfDBVM1 1521
    The following message is displayed:
    Connection to privateIPaddressOfDBVM1 1521 port [tcp/ncube-lm] succeeded!

    This message confirms that the database VM accepts connection requests received on port 1521 from the application VMs.

  8. Press Ctrl + C to exit the nc process.