Provision the Secondary Infrastructure
Subscribe to a Secondary OCI Region
To switch between regions, use the region combo box on the top banner of the OCI Console.
Provision the Secondary Region Network Resource Using Terraform
You can use Terraform to quickly provision your network resources on the secondary region. Using Terraform to duplicate your network definition onto your secondary site simplifies the task and eliminates a significant potential for errors.
When you have a valid Terraform plan, Terraform’s apply command provisions all resources defined in the .tf files, substantially reducing provisioning time.
You can run the command using the Terraform command line interface or the OCI Console’s Terraform interface. Both approaches provide the same functionality.
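For illustration, a minimal Terraform sketch of the kind of network definition involved might look like the following. The provider block, names, CIDRs, and variables are assumptions; your actual .tf files would contain the duplicated definition of your primary network.

```hcl
# Minimal sketch only: a secondary-region provider block, VCN, and private subnet.
# Names, CIDRs, and variables are illustrative assumptions.
provider "oci" {
  region = "us-phoenix-1" # the secondary region
}

resource "oci_core_vcn" "secondary_vcn" {
  compartment_id = var.network_compartment_ocid
  display_name   = "maacloud2-vcn"
  cidr_block     = "10.10.0.0/16"
}

resource "oci_core_subnet" "app_private_subnet" {
  compartment_id             = var.network_compartment_ocid
  vcn_id                     = oci_core_vcn.secondary_vcn.id
  display_name               = "app-private-subnet"
  cidr_block                 = "10.10.106.0/24"
  prohibit_public_ip_on_vnic = true # private subnet
}

# Validate with "terraform plan", then create the resources with "terraform apply".
```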
Note:
The following are the high-level steps to use Terraform to create your network resources. See "Working with Terraform" in Explore More for an example of discovering a network configuration on one environment and recreating it on another.
Finish Provisioning the Secondary Region
Once the network is set up, you can use the OCI Console to provision the compute instances, OCI File Storage, and Oracle Exadata Database Service on Dedicated Infrastructure for the secondary region, much as you provisioned the primary infrastructure.
Provision Oracle Exadata Database Service on Dedicated Infrastructure
Use the Oracle Cloud Infrastructure (OCI) Console to provision your target environment.
This example architecture uses the following Oracle Exadata model and shape: Oracle Exadata Cloud X6-2 Quarter Rack with two compute nodes (domUs) and three storage cells. The availability domain is AD-2.
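If you prefer to script this step as well, the infrastructure can be described with the OCI Terraform provider's cloud Exadata infrastructure resource. The sketch below is a hedged example only; the shape string for an X6-2 quarter rack and the variable names are assumptions to verify against the shapes available in your region.

```hcl
# Hedged sketch: cloud Exadata infrastructure in the secondary region's AD-2.
# The shape string and variables are assumptions; confirm the shapes available
# in your region before applying.
resource "oci_database_cloud_exadata_infrastructure" "secondary_exadata" {
  availability_domain = var.ad2_name # AD-2, per this example architecture
  compartment_id      = var.db_compartment_ocid
  display_name        = "psft-secondary-exadata"
  shape               = "Exadata.Quarter1.84" # assumed quarter-rack shape
}
```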
Provision the VM Cluster in the Secondary Region
After your Oracle Exadata Database Service on Dedicated Infrastructure is successfully provisioned, you can provision the VM cluster onto the infrastructure.
The Exadata VM cluster is completely up, running, and accessible within a few hours. The following components are fully configured:
- Two domU compute VM nodes
- Oracle Clusterware and Oracle Grid Infrastructure
- SCAN name with three IP addresses on the client subnet
- SCAN and grid VIPs with their respective listeners
- High redundancy ASM disk groups
Disk Group Name | Redundancy | Total Size (MB) | Usable (MB) |
---|---|---|---|
DATAC1 | High | 161,206,272 | 48,055,638 |
RECOC1 | High | 53,747,712 | 16,376,564 |
Other small disk groups are created to support Oracle Advanced Cluster File System (Oracle ACFS).
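For reference, the VM cluster itself can also be declared through the OCI Terraform provider. The sketch below is a hedged example in which the OCIDs, host name, Grid Infrastructure version, and core count are assumptions rather than the values used in this architecture.

```hcl
# Hedged sketch: a VM cluster on the cloud Exadata infrastructure.
# OCIDs, host name, version, and core count are illustrative assumptions.
resource "oci_database_cloud_vm_cluster" "secondary_vm_cluster" {
  compartment_id                  = var.db_compartment_ocid
  cloud_exadata_infrastructure_id = var.secondary_exadata_infrastructure_ocid
  display_name                    = "psft-secondary-vmcluster"
  hostname                        = "psftdb"
  cpu_core_count                  = 4
  gi_version                      = "19.0.0.0" # Oracle Grid Infrastructure release
  ssh_public_keys                 = [var.ssh_public_key]
  subnet_id                       = var.db_client_subnet_ocid # client subnet hosting SCAN and VIPs
  backup_subnet_id                = var.db_backup_subnet_ocid
}
```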
Provision Compute Instances
The compute instances are your application and middle-tier servers; they host the PeopleSoft application servers and the PeopleSoft Internet Architecture (PIA) web servers.
When provisioning compute instances, select the shape that best supports your workload. OCI provides several shapes to choose from, as well as a choice between Intel-based and AMD-based processors. Both Oracle Linux and Microsoft Windows are supported. When provisioning the application tier compute nodes, specify the compartment (psft-app-compartment) to hold the compute instance resources and specify the subnet for the application tiers (app-private-subnet-ad1). The application servers will host:
- Tuxedo application server domain
- Tuxedo batch process server domain
- Micro Focus COBOL compiler and run-time facility
You can provision the PIA web servers in the same compartment and subnet as the application servers. They will host the following:
- WebLogic Web servers to host the PIA servers
- Coherence*Web cache servers (optional)
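As a hedged point of reference, a single application tier instance could be declared in Terraform as follows. The compartment and subnet names come from this example; the shape, image OCID, and variable names are assumptions.

```hcl
# Hedged sketch: one application tier instance. The shape, image OCID, and
# variables are illustrative assumptions; the compartment and subnet names
# follow this example architecture.
resource "oci_core_instance" "psft_app_node1" {
  availability_domain = var.ad_name
  compartment_id      = var.psft_app_compartment_ocid # psft-app-compartment
  display_name        = "psft-app-node1"
  shape               = "VM.Standard2.8" # choose the shape that fits your workload

  create_vnic_details {
    subnet_id        = var.app_private_subnet_ad1_ocid # app-private-subnet-ad1
    assign_public_ip = false
  }

  source_details {
    source_type = "image"
    source_id   = var.oracle_linux_image_ocid
  }
}
```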
Create OCI Compute Instances
Provision the compute instances in Oracle Cloud Infrastructure (OCI).
The configuration of our middle tier servers was simple and standard, with only the sizes of the boot, root, and swap file systems needing adjustment. At the time we provisioned ours, the default size of the boot volume was 46.6GB. This default size contains the basic required Linux file systems, including:
- A /boot file system (200MB)
- A root (/) file system (39GB)
- A swap volume (8GB)
For both the application and web tier servers, we needed to increase the boot volume to 128GB, the root file system to 100GB, and the total swap size to 16GB.
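When scripting the instances instead of using the Console, the larger boot volume can be requested at creation time. The following hedged sketch shows a hypothetical PIA web tier instance with a 128GB boot volume; the root file system and swap are then resized inside the operating system, and the shape, image OCID, and variables are assumptions.

```hcl
# Hedged sketch: a PIA web tier instance provisioned with a 128GB boot volume.
# Root file system and swap sizes are adjusted afterward inside the OS.
# Shape, image OCID, and variables are illustrative assumptions.
resource "oci_core_instance" "psft_web_node1" {
  availability_domain = var.ad_name
  compartment_id      = var.psft_app_compartment_ocid # same compartment as the app tier
  display_name        = "psft-web-node1"
  shape               = "VM.Standard2.4"

  create_vnic_details {
    subnet_id        = var.app_private_subnet_ad1_ocid # same subnet as the app tier
    assign_public_ip = false
  }

  source_details {
    source_type             = "image"
    source_id               = var.oracle_linux_image_ocid
    boot_volume_size_in_gbs = 128 # larger boot volume, per the sizing above
  }
}
```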
Provision OCI File Storage in the Secondary Region
Find the Security Ingress and Egress Rules
Find the required security ingress and egress rules to add to the appropriate security lists, along with the commands you need to issue on each application and PeopleSoft Internet Architecture (PIA) server to mount the file system after it is provisioned.
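As a hedged sketch of the provisioning side, the file system, mount target, and export can be declared in Terraform as shown below. All names and variables are assumptions, and the security list requirements described above still have to be applied to the mount target's subnet.

```hcl
# Hedged sketch: File Storage resources in the secondary region. Names and
# variables are illustrative assumptions. The subnet's security lists must also
# allow the NFS/portmapper ports (commonly TCP 111 and 2048-2050, UDP 111 and
# 2048); verify the exact rules against the OCI File Storage prerequisites.
resource "oci_file_storage_file_system" "psft_shared_fs" {
  availability_domain = var.ad_name
  compartment_id      = var.psft_app_compartment_ocid
  display_name        = "psft-shared-fs"
}

resource "oci_file_storage_mount_target" "psft_mount_target" {
  availability_domain = var.ad_name
  compartment_id      = var.psft_app_compartment_ocid
  subnet_id           = var.app_private_subnet_ad1_ocid
  display_name        = "psft-mount-target"
}

resource "oci_file_storage_export" "psft_export" {
  export_set_id  = oci_file_storage_mount_target.psft_mount_target.export_set_id
  file_system_id = oci_file_storage_file_system.psft_shared_fs.id
  path           = "/psft_shared"
}
```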
Establish Remote VCN Peering
Remote VCN peering is the process of connecting two VCNs in different regions of the same tenancy. Peering allows the VCNs' resources to communicate securely using private IP addresses without routing the traffic over the internet or through your on-premises network.
The following are the requirements for establishing remote VCN peering:
- A Dynamic Routing Gateway (DRG) must exist in each region.
- Define the pairing between the VCNs in the regions by attaching a Remote Peering Connection (RPC) to each DRG.
- Implement an explicit agreement as an OCI Identity and Access Management policy for each VCN agreeing to the peering relationship.
- Add route table rules for each VCN to route traffic. The DRG has a route table specific for remote VCN peering that you can update.
- Add security list ingress and egress rules to subnets that are allowed to have traffic between regions.
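The DRG and RPC items in this list can be expressed directly in Terraform. The following hedged sketch covers one region's side of the peering; names, OCIDs, and regions are assumptions.

```hcl
# Hedged sketch: DRG, DRG attachment, and Remote Peering Connection (RPC) for
# the secondary region. Setting peer_id and peer_region_name on this RPC is
# what initiates the connection to the other region's RPC. Names, OCIDs, and
# regions are illustrative assumptions.
resource "oci_core_drg" "secondary_drg" {
  compartment_id = var.network_compartment_ocid
  display_name   = "maacloud2-vcn-DRG"
}

resource "oci_core_drg_attachment" "secondary_drg_attachment" {
  drg_id = oci_core_drg.secondary_drg.id
  vcn_id = var.secondary_vcn_ocid
}

resource "oci_core_remote_peering_connection" "secondary_rpc" {
  compartment_id   = var.network_compartment_ocid
  drg_id           = oci_core_drg.secondary_drg.id
  display_name     = "secondary-to-primary-rpc"
  peer_id          = var.primary_rpc_ocid # RPC created on the primary region's DRG
  peer_region_name = "us-ashburn-1"
}
```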
When establishing remote VCN peering, update the route tables in both regions so that traffic can traverse the peering connection. The following tables provide examples. The rows with the target type “Dynamic Routing Gateway” represent the rules that route traffic through that region’s DRG to the DRG in the other region.
The following are the updated route tables in the Ashburn region for db-private-RT and app-private-RT.
db-private-RT:
Destination | Target type | Target |
---|---|---|
0.0.0.0/0 | NAT Gateway | maa-ngw |
10.10.101.0/24 | Dynamic Routing Gateway | cloudmaa-vcn-DRG |
All IAD Services in Oracle Services Network | Service Gateway | Maa-Iad-sgw |
app-private-RT:
Destination | Target type | Target |
---|---|---|
0.0.0.0/0 | NAT Gateway | maa-ngw |
10.10.106.0/24 | Dynamic Routing Gateway | cloudmaa-vcn-DRG |
The following are the updated route tables in the Phoenix region for db-private-RT and app-private-RT.
db-private-RT:
Destination | Target type | Target |
---|---|---|
0.0.0.0/0 | NAT Gateway | maa-ngw |
10.0.101.0/24 | Dynamic Routing Gateway | maacloud2-vcn-DRG |
All PHX Services in Oracle Services Network | Service Gateway | Maa-phx-sgw |
app-private-RT:
Destination | Target type | Target |
---|---|---|
0.0.0.0/0 | NAT Gateway | maa-ngw |
10.0.103.0/24 | Dynamic Routing Gateway | maacloud2-vcn-DRG |
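As an illustration of the DRG rows in these tables, the cross-region rule from the Phoenix app-private-RT could be declared in Terraform as follows; the OCIDs and variable names are assumptions, and the NAT gateway rule is omitted for brevity.

```hcl
# Hedged sketch: the cross-region route rule from the Phoenix app-private-RT,
# sending traffic for the remote (Ashburn) application subnet to the local DRG.
# OCIDs and variables are illustrative assumptions; the NAT gateway rule shown
# in the table is omitted here for brevity.
resource "oci_core_route_table" "app_private_rt" {
  compartment_id = var.network_compartment_ocid
  vcn_id         = var.secondary_vcn_ocid
  display_name   = "app-private-RT"

  route_rules {
    destination       = "10.0.103.0/24" # remote application subnet
    destination_type  = "CIDR_BLOCK"
    network_entity_id = var.secondary_drg_ocid # maacloud2-vcn-DRG
  }
}
```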
Note:
To implement remote VCN peering for your environment, see Peering VCNs in different regions through a DRG.
The following is an overview of the required steps:
- Create the RPCs: Create an RPC for each VCN's DRG.
- Share information: The administrators share the basic required information.
- Establish the connection: Connect the two RPCs.
- Update route tables: Update each VCN's route tables to enable traffic between the peered VCNs.
- Update security rules: Update each VCN's security rules to enable traffic between the peered VCNs.
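To illustrate the last step, the following hedged sketch adds ingress and egress rules for traffic to and from a peered remote subnet; the CIDRs and the listener port are assumptions and should be narrowed to the ports your tiers actually use.

```hcl
# Hedged sketch: security list rules allowing traffic to and from the peered
# remote subnet. CIDRs and the database listener port are illustrative
# assumptions; restrict them to the ports your application actually uses.
resource "oci_core_security_list" "cross_region_rules" {
  compartment_id = var.network_compartment_ocid
  vcn_id         = var.secondary_vcn_ocid
  display_name   = "cross-region-security-list"

  ingress_security_rules {
    protocol = "6"             # TCP
    source   = "10.0.101.0/24" # remote database subnet
    tcp_options {
      min = 1521               # example: Oracle Net listener
      max = 1521
    }
  }

  egress_security_rules {
    protocol    = "6"
    destination = "10.0.101.0/24"
    tcp_options {
      min = 1521
      max = 1521
    }
  }
}
```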