Provision the Resources

You can provision the resources by using either Terraform or Terragrunt. If you use Terraform, you must apply the configurations in each directory, in a prescribed sequence. With Terragrunt, you can provision all the resources with a single command.

About Terraform State Files

Terraform stores state information to track your managed infrastructure resources, map the deployed resources to your configuration, track metadata, and improve performance for large infrastructure deployments.

By default, the terraform.tfstate file is stored on the local host. This default behavior is not optimal in IT environments where multiple users need to create and destroy the resources that are defined in a given configuration. To coordinate the deployment and management of resources in a multi-user environment, store the Terraform state files in Oracle Cloud Infrastructure Object Storage, and share the state files and lock files among all the users. See Using the Object Store for Terraform State Files.
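
If you store the state in Object Storage, one possible shape for the backend configuration is Terraform's S3-compatible backend. The following fragment is a hypothetical sketch: the bucket name, state key, region, and the <namespace> value in the endpoint are placeholders for your own values, and the exact option names depend on your Terraform version. Follow Using the Object Store for Terraform State Files for the authoritative setup.

```hcl
# Hypothetical sketch only: every value here is a placeholder.
# The S3-compatible credentials must be configured separately, as
# described in Using the Object Store for Terraform State Files.
terraform {
  backend "s3" {
    bucket = "terraform-states"                  # placeholder bucket name
    key    = "full-deployment/terraform.tfstate" # placeholder state key
    region = "us-ashburn-1"                      # placeholder region

    # Object Storage S3-compatibility endpoint for your namespace and region.
    endpoint = "https://<namespace>.compat.objectstorage.us-ashburn-1.oraclecloud.com"

    # Required because the bucket is not a real Amazon S3 bucket.
    skip_region_validation      = true
    skip_credentials_validation = true
    skip_metadata_api_check     = true
    force_path_style            = true
  }
}
```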

Provision the Resources Using Terragrunt

You can use Terragrunt to provision all the resources in the topology by using a single command. Internally, Terragrunt invokes Terraform commands and handles all the inter-resource dependencies defined in the configuration.

  1. Go to the examples/full-deployment directory.
  2. Initialize the Terraform modules by running the following command:
    make init
    The command initializes all the Terraform modules in the configuration by running terraform init in each directory.
  3. Provision the resources by running the following command:
    terragrunt apply-all
    Terragrunt invokes the terraform apply command for all the Terraform modules in the configuration, in a defined sequence. All the resources are deployed.

Provision the Resources Using Terraform

If you choose to provision the resources by using Terraform, then you must apply the Terraform configuration in each directory individually, in a prescribed sequence.

  1. Copy examples/full-deployment/terraform.tfvars to each of the following subdirectories under examples/full-deployment:
    common/compartments
    common/configuration
    management/access
    management/network
    management/server_attachment
    management/servers
    peering/network
    peering/routing
    tenant/network
    tenant/servers

    Go to the examples/full-deployment directory, and enter the following command to copy terraform.tfvars to all the required subdirectories. For the sake of readability, the command is shown on multiple lines with a backslash (\) at the end of each line. Copy all the lines, including the backslash character, and paste as a single command.

xargs -n 1 cp -v terraform.tfvars <<< "common/compartments/ \
    common/configuration/ management/access/ management/network/ \
    management/server_attachment/ management/servers/ peering/network/ \
    peering/routing/ tenant/network/ tenant/servers/"
  2. Go to the examples/full-deployment/common/configuration directory.
  3. Run the following commands:
    1. Initialize the configuration:
      terraform init
    2. Review the resources defined in the configuration:
      terraform plan
    3. Apply the configuration:
      terraform apply
    The configuration in the common/configuration directory calculates the number of tenant VCNs and peering VCNs required, the CIDR size of each VCN, and the mapping between the tenant VCNs and the peering VCNs. No resources are created when you apply this configuration; the results of the calculation are used when the VCNs and other networking resources are created.
  4. Run the terraform init, terraform plan, and terraform apply commands in the following directories under examples/full-deployment, in the prescribed execution order:
    common/compartments
    peering/network
    management/network
    tenant/network
    management/access
    peering/routing
    management/servers
    management/server_attachment
    tenant/servers

    After you run terraform apply in all the configuration directories in the prescribed order, the topology is fully deployed.
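
The steps above can be scripted. The following POSIX shell function is a hypothetical helper, not part of the example code: source it, then run provision_all from the examples/full-deployment directory. It copies terraform.tfvars to every configuration directory (step 1), then runs terraform init and terraform apply in each directory in the prescribed order (steps 2 through 4). The TF variable lets you substitute another command, such as echo, for a dry run; note that each terraform apply still prompts for confirmation.

```shell
# Hypothetical helper: source this file, then run provision_all from the
# examples/full-deployment directory. Set TF=echo first for a dry run.
provision_all() {
  TF="${TF:-terraform}"

  # Directories in the prescribed execution order (steps 2 through 4).
  DIRS="common/configuration common/compartments peering/network \
management/network tenant/network management/access peering/routing \
management/servers management/server_attachment tenant/servers"

  # Step 1: share the same input variables with every configuration.
  for d in $DIRS; do
    cp terraform.tfvars "$d/" || return 1
  done

  # Steps 2 through 4: initialize and apply each configuration, in order.
  for d in $DIRS; do
    ( cd "$d" && $TF init && $TF apply ) || return 1
  done
}
```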

Modify the Topology

To modify the topology, you must update the resource definitions in the appropriate Terraform configurations, and then apply the revised configuration. Identifying the resource definitions that need to be modified requires a thorough understanding of the example code, specifically the Terraform modules referenced in each directory and the inter-module dependencies.

The instructions to modify the topology are outside the scope of this solution.

Remove All the Resources

You can remove all the deployed resources easily by using either Terraform or Terragrunt.

  1. Go to the examples/full-deployment directory.
  2. Do one of the following:
    • If you have Terragrunt installed, then run the following command:

      terragrunt destroy-all

      Terragrunt invokes the terraform destroy command for all the Terraform modules in the configuration, in a defined sequence.

      If you attempt to use terragrunt destroy-all to clean up a failed or partial deployment, then the following error might occur:

      Error: Unsupported attribute
        on management_rte_attachment.tf line 8, in module "management_rte_attachement":
         8:     data.terraform_remote_state.peering_servers.outputs.routing_instance_1_ip_id,
          |----------------
          | data.terraform_remote_state.peering_servers.outputs is object with 3 attributes
      This object does not have an attribute named "routing_instance_1_ip_id".

      If this error occurs, then remove the resources by running terraform destroy in each configuration directory, as described next.

    • To remove the resources by using the Terraform CLI, run terraform destroy in each configuration directory under examples/full-deployment, in the following order.

      Note:

      Wait for the command to finish running in each directory before proceeding to the next directory.

      tenant/servers
      management/server_attachment
      management/servers
      peering/routing
      management/access
      tenant/network
      management/network
      peering/network
      common/compartments
      common/configuration
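
The Terraform-based cleanup can be scripted the same way. The following POSIX shell function is a hypothetical helper, not part of the example code: source it, then run destroy_all from the examples/full-deployment directory. It invokes terraform destroy in each directory in the order listed above, waiting for each command to finish before moving on; set TF=echo first to preview the sequence without running Terraform.

```shell
# Hypothetical helper: source this file, then run destroy_all from the
# examples/full-deployment directory. Set TF=echo first for a dry run.
destroy_all() {
  TF="${TF:-terraform}"

  # Directories in the prescribed destroy order (reverse of provisioning).
  DIRS="tenant/servers management/server_attachment management/servers \
peering/routing management/access tenant/network management/network \
peering/network common/compartments common/configuration"

  # Each destroy finishes before the next directory is processed.
  for d in $DIRS; do
    ( cd "$d" && $TF destroy ) || return 1
  done
}
```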