Provision the Secondary Infrastructure

You can use Terraform to quickly build out the secondary site, extracting information from the OCI primary site that you just built. Terraform simplifies network provisioning by duplicating the primary network topology at the secondary site; you can then use the OCI Console to provision the rest of the infrastructure.

Subscribe to a Secondary OCI Region

Create your DR replica by subscribing to a second region geographically separate from your target Oracle Cloud Infrastructure (OCI) primary region. This secondary region should support infrastructure resources similar to those in the primary region: for example, Oracle Exadata Database Service on Dedicated Infrastructure of the same or similar shape and number, compute instances of similar shapes and numbers, OCI File Storage in both regions, and so on.
  1. Log in to the OCI Console for your tenancy.
  2. Expand the main menu, then click Governance and Administration.
  3. Under Account Management, click Region Management.
    A list of all available regions appears. Regions that the tenancy is currently subscribed to show a status of Subscribed. A Subscribe button appears next to other regions.
  4. Click Subscribe for the region that will be your secondary site.
    For example, US West (Phoenix), region identifier: us-phoenix-1.

To switch between regions, use the region combo box on the top banner of the OCI Console.
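If the OCI CLI is configured for your tenancy, you can also confirm the subscription from the command line. This is a minimal sketch; the tenancy OCID is a placeholder that you supply.

  # List the regions the tenancy is subscribed to; the secondary region
  # (for example, us-phoenix-1) appears once the subscription completes.
  oci iam region-subscription list --tenancy-id <tenancy_OCID>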

Provision the Secondary Region Network Resources Using Terraform

You can use Terraform to quickly provision your network resources on the secondary region. Using Terraform to duplicate your network definition onto your secondary site simplifies the task and eliminates a significant potential for errors.

When you have a valid Terraform plan, the terraform apply command provisions all resources defined in the .tf files, substantially reducing provisioning time.

You can run these commands using the Terraform command-line interface or through OCI Resource Manager, the Terraform-based service in the OCI Console. Both approaches provide the same functionality.

Note:

The following are the high-level steps to use Terraform to create your network resources. See "Working with Terraform" in Explore More for an example of discovering a network configuration in one environment and recreating it in another.
  1. Run the Terraform resource discovery command to export all or selected resources at the primary region within the tenancy. Resource discovery exports objects from a specific compartment; in our case, psft-network-compartment. (A command-line sketch of the full workflow follows these steps.)
  2. Edit the Terraform files (.tf).
  3. Validate the Terraform plan against the secondary site region and resolve any errors.

    Note:

    When editing the VCN CIDR, choose a CIDR block that does not overlap with the primary VCN. For example, if the VCN in Ashburn is 10.0.0.0/16, then the Phoenix VCN could use 10.10.0.0/16, as chosen here.

  4. Run the Terraform apply command to provision the resources at the secondary region site.
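The following is a minimal command-line sketch of this workflow. It assumes the OCI Terraform provider's resource discovery feature and reuses the example compartment, region, and CIDR values from this architecture; flag names can vary between provider versions, so treat it as illustrative rather than definitive.

  # Export the network resources defined in psft-network-compartment at the primary region.
  terraform-provider-oci -command=export \
    -compartment_name=psft-network-compartment \
    -services=core \
    -output_path=./psft-network-export

  cd ./psft-network-export

  # Edit the generated .tf files: point the provider at us-phoenix-1 and change the
  # VCN CIDR to a non-overlapping block (for example, 10.10.0.0/16 when the primary is 10.0.0.0/16).

  terraform init        # download the OCI provider plugin
  terraform validate    # resolve syntax and reference errors in the edited files
  terraform plan        # review the resources that will be created in the secondary region
  terraform apply       # provision the network resources in us-phoenix-1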

Finish Provisioning the Secondary Region

Once the network is set up, you can use the OCI Console to provision the compute instances, the OCI File Storage, and Oracle Exadata Database Service on Dedicated Infrastructure for the secondary region. This is very similar to how you provisioned the primary infrastructure.

Provision Oracle Exadata Database Service on Dedicated Infrastructure

Use the Oracle Cloud Infrastructure (OCI) Console to provision your target environment.

This example architecture uses the following Oracle Exadata model and shape: Oracle Exadata Cloud X6-2 Quarter Rack with two compute nodes (domUs) and three storage cells. The availability domain is AD-2.

  • Use the OCI Console to create your Oracle Exadata Cloud Infrastructure resource.
    See Creating an Exadata Cloud Infrastructure Instance in Oracle Cloud Exadata Database Service on Dedicated Infrastructure for how to prepare and get started with an Exadata Cloud Infrastructure deployment, and for steps to complete and submit your request.
    Select the Oracle Exadata model and shape, and specify the availability domain. You can scale the compute and storage capacity up after provisioning, if needed.
After you submit the provisioning request, the new Oracle Exadata Cloud Infrastructure appears in the Exadata Infrastructure list with a status of Provisioning. Wait until the infrastructure provisioning has completed before proceeding.

Provision the VM Cluster in the Secondary Region

After your Oracle Exadata Database Service on Dedicated Infrastructure is successfully provisioned, you can provision the VM cluster onto the infrastructure.

  • Use the OCI Console to create a VM cluster instance.
    See To create a cloud VM cluster resource in Oracle Cloud Exadata Database Service on Dedicated Infrastructure for the steps.

    The architecture for this VM cluster uses the following for the secondary Phoenix region:

    Exadata VM Cluster Name: PHX-Exa-VMCluster-1
    Compartment: psft_exa_compartment
    Host name prefix: phxexadb
    Subnet for Oracle Exadata Database Service on Dedicated Infrastructure client network: exadb_private_subnet-ad1
    Subnet for Oracle Exadata Database Service on Dedicated Infrastructure backups: exadb-backup_private_subnet-ad1
    OCPU count: 22
    Grid Infrastructure version: 19c RU 19 (19.19.0.0.0)
    Database version: 19c RU 19 (19.19.0.0.0)
    Local storage for backup: No (backups are stored on region-local object storage)
    SPARSE ASM Disk Group: No for production; potentially yes for test databases

The Exadata VM Cluster is up, running, and accessible within a few hours, with the following components fully configured:

  • Two domU compute VM nodes
  • Oracle Clusterware and Oracle Grid Infrastructure
  • SCAN name with three IP addresses on the client subnet
  • SCAN and grid VIPs with their respective listeners
  • High redundancy ASM disk groups

    Disk Group Name  Redundancy  Total Size (MB)  Usable (MB)
    DATAC1           High        161,206,272      48,055,638
    RECOC1           High        53,747,712       16,376,564

Other small disk groups are created to support Oracle Advanced Cluster File System (Oracle ACFS).
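As a quick check, you can confirm the disk group layout from one of the VM cluster nodes. This is a minimal sketch, assuming you are logged in as the grid user with the ASM environment (ORACLE_HOME and ORACLE_SID) already set.

  # Lists each disk group with its redundancy, total size, and usable space
  # (DATAC1, RECOC1, and the small ACFS support disk groups).
  asmcmd lsdg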

Provision Compute Instances

The compute instances are your application and middle-tier servers. They host the PeopleSoft application servers and PeopleSoft Internet Architecture (PIA) web servers.

When provisioning compute instances, select the shape that best supports your workload. OCI provides several shapes to choose from, as well as a choice of Intel-based or AMD-based processors. Both Oracle Linux and Microsoft Windows are supported. When provisioning the application tier compute nodes, specify the compartment (psft-app-compartment) that will hold the compute instance resources and the subnet for the application tiers (app-private-subnet-ad1). The application servers will host:

  • Tuxedo application server domain
  • Tuxedo batch process server domain
  • Micro Focus COBOL compiler and run-time facility

You can provision and place the PIA web servers into the same compartment and use the same subnet as the application servers. They will host the following:

  • WebLogic Web servers to host the PIA servers
  • Coherence*Web cache servers (optional)

  • Provision your compute instances by following the steps in Working with Instances.

    We provisioned four compute instances for the PeopleSoft application and web tiers: two to host the application server and Process Scheduler, and two to host the PIA web server and Coherence*Web. The following entries describe these compute instances in the secondary Phoenix region; a CLI sketch for launching one of them appears after the list.

    phx-psft-hcm-app01: VM.Standard2.4 (4 OCPUs, 60 GB memory), 128 GB block storage, application tier, subnet app-private-subnet-ad1, hosting the Tuxedo application server and Process Scheduler
    phx-psft-hcm-app02: VM.Standard2.4 (4 OCPUs, 60 GB memory), 128 GB block storage, application tier, subnet app-private-subnet-ad1, hosting the Tuxedo application server and Process Scheduler
    phx-psft-hcm-web01: VM.Standard2.2 (2 OCPUs, 30 GB memory), 128 GB block storage, web tier, subnet app-private-subnet-ad1, hosting the WebLogic PIA server and Coherence*Web
    phx-psft-hcm-web02: VM.Standard2.2 (2 OCPUs, 30 GB memory), 128 GB block storage, web tier, subnet app-private-subnet-ad1, hosting the WebLogic PIA server and Coherence*Web
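If you prefer the OCI CLI to the Console, a launch request for one of these instances looks roughly like the following sketch. The OCIDs are placeholders, and the image and availability domain values are assumptions based on this example architecture.

  # Launch the first application tier instance in the Phoenix region.
  oci compute instance launch \
    --display-name phx-psft-hcm-app01 \
    --availability-domain <US-PHOENIX-AD1_name> \
    --compartment-id <psft-app-compartment_OCID> \
    --subnet-id <app-private-subnet-ad1_OCID> \
    --shape VM.Standard2.4 \
    --image-id <Oracle_Linux_image_OCID> \
    --assign-public-ip false \
    --boot-volume-size-in-gbs 128 \
    --ssh-authorized-keys-file ~/.ssh/id_rsa.pub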

Create OCI Compute Instances

Provision the compute instances in Oracle Cloud Infrastructure (OCI).

The configuration of our middle tier servers was simple and standard, with only the sizes of the boot, root, and swap file systems needing adjustment. At the time we provisioned ours, the default size of the boot volume was 46.6GB. This default size contains the basic required Linux file systems, including:

  • A /boot file system (200MB)
  • A root (/) file system (39GB)
  • A swap volume (8GB)

For both the application and web tier servers, we needed to increase the boot volume to 128GB, the root file system to 100GB, and the total swap size to 16GB.

  1. Open the navigation menu on the OCI Console.
  2. Click Compute, then click Instances.
  3. Click Create Instance, then enter a name for the instance.
    You can add or change the name later. The name doesn't need to be unique, because an Oracle Cloud Identifier (OCID) uniquely identifies the instance. Avoid entering confidential information.
  4. Select the compartment to create the instance in and complete the fields.
  5. Click Create.
    The provisioning process creates the compute instances.
  6. Increase the root partition and root file system sizes.
    See My Oracle Support document 2445549.1: How to Create a Linux instance with Custom Boot Volume and Extend the Root Partition in OCI to increase the root partition and then the root file system size by 61GB (from 39GB to 100GB).

    Note:

    When OCI provisions an instance with a larger custom boot volume, the root partition is still created at the default 39GB size; the remaining space on the paravirtualized boot volume stays unallocated until you extend the partition and file system.
  7. Add an 8GB swap partition.
    See My Oracle Support document 2475325.1: How to Increase Swap Memory on Linux OCI Instances to add an 8GB swap partition, resulting in a total of 16GB of swap space. A shell sketch of both adjustments follows these steps.
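The following is a condensed sketch of the two adjustments. It assumes an Oracle Linux image with an xfs root file system on partition 3 of /dev/sda and uses a swap file rather than a partition; follow the My Oracle Support documents above for the supported procedure and adjust the device names for your instance.

  # Extend the root partition into the unallocated boot volume space, then grow the file system.
  lsblk                               # confirm the boot volume and root partition layout
  sudo growpart /dev/sda 3            # extend partition 3 (the root partition) into the free space
  sudo xfs_growfs /                   # grow the xfs root file system to fill the partition

  # Add 8GB of swap, for a total of 16GB.
  sudo dd if=/dev/zero of=/.swapfile bs=1M count=8192
  sudo chmod 600 /.swapfile
  sudo mkswap /.swapfile
  sudo swapon /.swapfile
  echo '/.swapfile swap swap defaults 0 0' | sudo tee -a /etc/fstab
  free -m                             # verify the total swap space is now 16GB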

Provision OCI File Storage in the Secondary Region

Oracle Cloud Infrastructure File Storage provides the shared file systems for all application and PIA servers. These servers use NFS to mount the shared file systems. When you provision OCI File Storage from the OCI Console, ensure that the file storage is in the same availability domain as the application and PIA servers.
  1. Select Storage, then File Systems under File Storage in the OCI Console.
  2. Select the compartment where you want the file system to be placed.
    For example, psft-app-compartment.
  3. Click Create File System.
  4. Select File System for NFS.
  5. Click Edit Details under File System Information.
    1. Change the default name to a name of your choosing.
      This example uses the Phoenix region for the secondary. For example, PHX_PSFT_APP_INSTALL or PHX_PSFT_APP_INTERFACE.
    2. Change the availability domain to the availability domain where the compute instances are provisioned.
      For example, US-PHOENIX-AD1.
    3. Select the compartment where you want the file system.
      For example, psft-app-compartment.
    4. Select an encryption option.
      For example, Oracle Managed Keys.
  6. Click Edit Details under Export Information.
    1. Provide an export path.
      For example, /export/psftapp or /export/psftinterface.
    2. If required, select the check box for secure exports.
      See the information icon next to this option for details.
  7. Click Edit Details under Mount Target Information.
    1. Select either the Select an existing mount target or Create a new mount target option.
    2. Click Enable Compartment Selection.
      This enables you to select the compartment that the VCN and subnets reside in.
    3. Select the compartment that the mount target will be created in, or already exists in, from the Create in compartment drop-down list.
    4. Select the compartment that the VCN resides in, and then select the VCN, from the Virtual Cloud Network drop-down list.
    5. If you're creating a new mount target, then enter a name.
    6. If you're using an existing mount target, then select the subnet that the mount target was provisioned in from the Subnet drop-down list.
  8. Click Create.
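The same resources can also be created with the OCI CLI. The following sketch uses the example names from this architecture with placeholder OCIDs, and assumes a new mount target is being created; verify the parameters against your environment.

  # Create the file system in the application compartment and availability domain.
  oci fs file-system create \
    --display-name PHX_PSFT_APP_INSTALL \
    --availability-domain <US-PHOENIX-AD1_name> \
    --compartment-id <psft-app-compartment_OCID>

  # Create a mount target on the application subnet.
  oci fs mount-target create \
    --availability-domain <US-PHOENIX-AD1_name> \
    --compartment-id <psft-app-compartment_OCID> \
    --subnet-id <app-private-subnet-ad1_OCID>

  # Export the file system through the mount target's export set.
  oci fs export create \
    --export-set-id <mount_target_export_set_OCID> \
    --file-system-id <file_system_OCID> \
    --path /export/psftapp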

Find the Security Ingress and Egress Rules

Find the required security ingress and egress rules to add to the appropriate security lists and the commands you need to issue on each application and PeopleSoft Internet Architecture (PIA) server. After provisioning the file system, perform the following steps:

  1. Log in to the OCI Console.
  2. Select Storage, then File Systems under File Storage.
  3. Select the compartment that contains the file system.
  4. Select the name of the file system you provisioned.
  5. Click Export Target.
  6. Click Mount Commands.
    A window displays the ingress and egress rules and the commands used to mount the file system.
  7. Highlight and click Copy to copy the mount commands for use later.
  8. Edit the security list associated with the subnet that you'll use to mount OCI File Storage to add the ingress and egress rules.
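The copied commands look similar to the following sketch, which assumes a mount target IP address of 10.10.103.50 and the /export/psftapp export path used earlier; use the exact rules and commands shown in your Mount Commands window.

  # Security list rules required for NFS between the subnet and the mount target
  # (stateful): ingress TCP ports 111 and 2048-2050 and UDP ports 111 and 2048;
  # egress TCP source ports 111 and 2048-2050 and UDP source port 111.

  # On each application and PIA server, install the NFS client, create the mount
  # point, and mount the shared file system.
  sudo yum install -y nfs-utils
  sudo mkdir -p /mnt/psftapp
  sudo mount 10.10.103.50:/export/psftapp /mnt/psftapp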

Establish Remote VCN Peering

Remote VCN peering is the process of connecting two VCNs in different regions of the same tenancy. Peering allows the VCNs' resources to communicate securely using private IP addresses without routing the traffic over the internet or through your on-premises network.

The following are the requirements for establishing remote VCN peering:

  • A Dynamic Routing Gateway (DRG) must exist in each region.
  • Define the pairing between the VCNs in the regions by attaching a Remote Peering Connection (RPC) to each DRG.
  • Implement an explicit agreement as an OCI Identity and Access Management policy for each VCN agreeing to the peering relationship.
  • Add route table rules for each VCN to route traffic. The DRG has a route table specific for remote VCN peering that you can update.
  • Add security list ingress and egress rules to subnets that are allowed to have traffic between regions.

When establishing remote VCN peering, update the route tables in both regions so that traffic can traverse the peering connection. The following tables provide examples. The rows containing the target type of "Dynamic Routing Gateway" represent the rules that route traffic through that region's DRG to the DRG in the other region.

The following are the updated route tables in the Ashburn region for db-private-RT and app-private-RT:

db-private-RT:

  Destination                                  Target type              Target
  0.0.0.0/0                                    NAT Gateway              maa-ngw
  10.10.101.0/24                               Dynamic Routing Gateway  cloudmaa-vcn-DRG
  All IAD Services in Oracle Services Network  Service Gateway          Maa-Iad-sgw

app-private-RT:

  Destination     Target type              Target
  0.0.0.0/0       NAT Gateway              maa-ngw
  10.10.106.0/24  Dynamic Routing Gateway  cloudmaa-vcn-DRG

The following are the updated route tables in the Phoenix region for db-private-RT and app-private-RT:

db-private-RT:

  Destination                                  Target type              Target
  0.0.0.0/0                                    NAT Gateway              maa-ngw
  10.0.101.0/24                                Dynamic Routing Gateway  maacloud2-vcn-DRG
  All PHX Services in Oracle Services Network  Service Gateway          Maa-phx-sgw

app-private-RT:

  Destination    Target type              Target
  0.0.0.0/0      NAT Gateway              maa-ngw
  10.0.103.0/24  Dynamic Routing Gateway  maacloud2-vcn-DRG
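If you script these updates, note that the OCI CLI replaces the entire rule list when updating a route table, so include the existing rules along with the new DRG rule. The following is a sketch for the Ashburn app-private-RT table with placeholder OCIDs.

  # Replace the route rules on app-private-RT in Ashburn: keep the NAT gateway rule
  # and add the rule that sends Phoenix-bound traffic (10.10.106.0/24) to the DRG.
  oci network route-table update \
    --rt-id <app-private-RT_OCID> \
    --route-rules '[
      {"destination": "0.0.0.0/0",      "destinationType": "CIDR_BLOCK", "networkEntityId": "<maa-ngw_OCID>"},
      {"destination": "10.10.106.0/24", "destinationType": "CIDR_BLOCK", "networkEntityId": "<cloudmaa-vcn-DRG_OCID>"}
    ]' \
    --force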

Note:

To implement remote VCN peering for your environment, see Peering VCNs in different regions through a DRG.

The following is an overview of the required steps:

  1. Create the RPCs: Create an RPC for each VCN's DRG.
  2. Share information: The administrators share the basic required information.
  3. Establish the connection: Connect the two RPCs.
  4. Update route tables: Update each VCN's route tables to enable traffic between the peered VCNs.
  5. Update security rules: Update each VCN's security rules to enable traffic between the peered VCNs.
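A rough CLI sketch of steps 1 and 3 follows. The command names and flags are based on the OCI networking CLI and should be verified against the linked documentation; all OCIDs and names are placeholders.

  # Step 1: create an RPC on each region's DRG (run this in each region).
  oci network remote-peering-connection create \
    --compartment-id <psft-network-compartment_OCID> \
    --drg-id <DRG_OCID> \
    --display-name iad-to-phx-rpc

  # Step 3: from one region, connect its RPC to the peer RPC in the other region.
  oci network remote-peering-connection connect \
    --remote-peering-connection-id <local_RPC_OCID> \
    --peer-id <peer_RPC_OCID> \
    --peer-region-name us-phoenix-1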