Configure a Hybrid DR Solution for Oracle SOA Suite
Before You Begin
The Oracle WebLogic Server system used in this document is a high availability environment configured following the MAA standard best practices described in the Enterprise Deployment Guide for Oracle SOA Suite (EDG) for Fusion Middleware systems. Although not every deployment follows this exact topology in every detail, the EDG topology covers many different components and combinations, is used frequently in large deployments, and implements the high availability recommendations that should be applied before deploying a disaster recovery (DR) system. It is therefore considered the best reference example of a primary system for a hybrid DR solution for Oracle WebLogic Server. Before deploying a hybrid system, become familiar with the Oracle WebLogic Server high availability and enterprise deployment best practices for network, storage, and host setup:
- Oracle Fusion Middleware High Availability Guide
- Oracle Fusion Middleware Enterprise Deployment Guide for Oracle SOA Suite
- Oracle Fusion Middleware Enterprise Deployment Guides in MAA Best Practices - Oracle Fusion Middleware
Ensure you are also familiar with Oracle Cloud Infrastructure concepts and administration.
Architecture
The primary system is an Oracle SOA Suite system located on-premises. The standby system is created from scratch in an OCI tenancy that uses Oracle Cloud Infrastructure FastConnect (or Site-to-Site VPN) to connect with the on-premises environment.
The mid-tier on OCI is created by installing SOA on OCI virtual machines (VMs) and not an Oracle SOA Cloud Service or Oracle SOA Suite on Marketplace instance (there are restrictions in OS versions, OS users and directory structure that preclude Oracle SOA Cloud Service or Oracle SOA Suite on Marketplace instance from working properly as a standby for an on-premises system).
The database tier on the OCI site is an Oracle Real Application Clusters (Oracle RAC) DB System.

This architecture supports the following components in the primary, on-premises data center:
- On-premises network
This network is the local network used by your organization. It is one of the spokes of the topology.
- Load Balancer
Provides automated traffic distribution from a single entry point to multiple servers in the back end.
- Oracle SOA Suite
The SOA environment is configured following the standard Enterprise Deployment Guide for Oracle SOA Suite (EDG). This topology covers many different components that are often used in large deployments and implements high availability recommendations that should be applied before deploying a DR system.
- Oracle Real Application Clusters (Oracle RAC)
Oracle RAC enables you to run a single Oracle Database across multiple servers in order to maximize availability and enable horizontal scalability, while accessing shared storage. User sessions connecting to Oracle RAC instances can failover and safely replay changes during outages, without any changes to end user applications, hiding the impact of the outages from end users.
This architecture supports the following components in the secondary, stand-by environment on OCI:
- Region
An Oracle Cloud Infrastructure region is a localized geographic area that contains one or more data centers, called availability domains. Regions are independent of other regions, and vast distances can separate them (across countries or even continents).
- Virtual cloud network (VCN) and subnet
A VCN is a customizable, software-defined network that you set up in an Oracle Cloud Infrastructure region. Like traditional data center networks, VCNs give you complete control over your network environment. A VCN can have multiple non-overlapping CIDR blocks that you can change after you create the VCN. You can segment a VCN into subnets, which can be scoped to a region or to an availability domain. Each subnet consists of a contiguous range of addresses that don't overlap with the other subnets in the VCN. You can change the size of a subnet after creation. A subnet can be public or private.
Private subnets are generally recommended for security reasons, unless the resource must be accessible from the public internet (for example, a load balancer that is accessed from the public internet must be in a public subnet).
- Dynamic routing gateway (DRG)
The DRG is a virtual router that provides a path for private network traffic between VCNs in the same region, between a VCN and a network outside the region, such as a VCN in another Oracle Cloud Infrastructure region, an on-premises network, or a network in another cloud provider.
- FastConnect
Oracle Cloud Infrastructure FastConnect provides an easy way to create a dedicated, private connection between your data center and Oracle Cloud Infrastructure. FastConnect provides higher-bandwidth options and a more reliable networking experience when compared with internet-based connections.
- Security list
For each subnet, you can create security rules that specify the source, destination, and type of traffic that must be allowed in and out of the subnet.
- Route table
Virtual route tables contain rules to route traffic from subnets to destinations outside a VCN, typically through gateways.
- Load balancer
The Oracle Cloud Infrastructure Load Balancing service provides automated traffic distribution from a single entry point to multiple servers in the back end.
- Cloud Guard
You can use Oracle Cloud Guard to monitor and maintain the security of your resources in Oracle Cloud Infrastructure. Cloud Guard uses detector recipes that you can define to examine your resources for security weaknesses and to monitor operators and users for risky activities. When any misconfiguration or insecure activity is detected, Cloud Guard recommends corrective actions and assists with taking those actions, based on responder recipes that you can define.
- DB System
Oracle Database System is an Oracle Cloud Infrastructure (OCI) database service that enables you to build, scale, and manage full-featured Oracle databases. The DB System uses OCI Block Volumes storage instead of local storage and can run Oracle Real Application Clusters (Oracle RAC) to improve availability.
- Oracle Cloud Infrastructure File Storage service
Oracle Cloud Infrastructure File Storage service provides a durable, scalable, secure, enterprise-grade network file system. You can connect to a File Storage service file system from any bare metal, virtual machine, or container instance in your Virtual Cloud Network (VCN). The File Storage service supports the Network File System version 3.0 (NFSv3) protocol. The service supports the Network Lock Manager (NLM) protocol for file locking functionality.
Oracle Cloud Infrastructure File Storage employs 5-way replicated storage, located in different fault domains, to provide redundancy for resilient data protection. Data is protected with erasure encoding. The File Storage service is designed to meet the needs of applications and users that need an enterprise file system across a wide range of use cases.
- Oracle Cloud Infrastructure Block Volumes
Oracle Cloud Infrastructure Block Volumes service lets you dynamically provision and manage block storage volumes. You can create, attach, connect, and move volumes, as well as change volume performance, as needed, to meet your storage, performance, and application requirements. After you attach and connect a volume to an instance, you can use the volume like a regular hard drive.
- Oracle RAC DB System
Oracle Real Application Clusters (Oracle RAC) allow customers to run a single Oracle Database across multiple servers in order to maximize availability and enable horizontal scalability, while accessing shared storage. User sessions connecting to Oracle RAC instances can failover and safely replay changes during outages, without any changes to end user applications, hiding the impact of the outages from end users. The Oracle RAC high availability framework maintains service availability by storing the configuration information for each service in the Oracle Cluster Registry (OCR).
Oracle RAC DB System is on the database tier.
- Data Guard
Oracle Data Guard provides a comprehensive set of services that create, maintain, manage, and monitor one or more standby databases to enable production Oracle databases to remain available without interruption. Oracle Data Guard maintains these standby databases as copies of the production database. Then, if the production database becomes unavailable because of a planned or an unplanned outage, Oracle Data Guard can switch any standby database to the production role, minimizing the downtime associated with the outage.
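As a quick sanity check of this arrangement, you can query each database's current Data Guard role. The following is a hedged sketch, assuming SYSDBA access on the database node with OS authentication; it is illustrative, not part of the formal setup steps:

```shell
# Hedged sketch: run on a database node to confirm that database's
# Data Guard role. OS authentication ("/ as sysdba") is an assumption.
sqlplus -s / as sysdba <<'EOF'
SELECT db_unique_name, database_role, open_mode FROM v$database;
EOF
```

The primary reports PRIMARY with READ WRITE; a physical standby reports PHYSICAL STANDBY with MOUNTED (or READ ONLY WITH APPLY when Active Data Guard is in use).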
Topology Description
In the db-tier, the primary and secondary databases are configured with Oracle Data Guard. With Oracle Data Guard, all changes that are applied to the primary database are replicated on the secondary (standby) database.
The secondary mid-tier is installed on OCI virtual machines (VMs). The Oracle Fusion Middleware binaries and the SOA domain are a replica of the primary's binaries and SOA domain, using Oracle Cloud Infrastructure File Storage service as the network file system solution and Oracle Cloud Infrastructure Block Volumes as the node-private file system solution. The primary's host name aliases are also used as the listen addresses of the WebLogic servers in the secondary environment. In the secondary location, those host name aliases are resolved to the secondary hosts' IP addresses.
The web-tier in the primary data center follows the Enterprise Deployment Guide (EDG) model with two Oracle HTTP Server hosts and a load balancer (LBR). The same topology is deployed in the secondary system using an OCI Load Balancer and Oracle HTTP Server hosts installed on OCI compute instances. In the secondary system, you can alternatively implement the web layer with only an OCI Load Balancer, which provides most of the features required by the enterprise deployment topology. Both options, only OCI Load Balancer or OCI Load Balancer and Oracle HTTP Server hosts, are included in this document.
A unique front-end address is configured to access the applications running in the system. The virtual front-end name points to the IP of the Load Balancer on the primary site. In a switchover, the front-end name is updated in DNS to point to the IP of the OCI Load Balancer on the secondary site. It must always point to the Load Balancer IP of the site that has the primary role.
During normal business operation, the standby database is a physical standby. It is either in the mount state, or opened in read-only mode when Active Data Guard is used. The standby database receives and applies redo from the primary database, but cannot be opened in read-write mode. Oracle Data Guard automatically replicates the information that resides in the database to the secondary site, including Service-Oriented Architecture (SOA) schemas, Oracle Platform Security Services information, custom schemas, transaction logs (TLOGs), Java Database Connectivity (JDBC) persistent stores, and more.
During the DR setup and lifecycle validation steps described in this document, you can convert the standby database from a physical standby to a snapshot standby to validate the secondary site without performing a complete switchover. A database in snapshot standby mode is a fully updateable database. A snapshot standby database receives and archives, but does not apply, the redo data received from a primary database. All changes applied to a snapshot standby are discarded when it is converted into a physical standby.
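The snapshot standby round trip described above can be sketched with the Data Guard broker command line. This is illustrative only: 'soaoci' stands in for the standby DB_UNIQUE_NAME, and the connect string is an assumption:

```shell
# Open the standby as a fully updateable snapshot standby for validation:
dgmgrl sys@primary_tns "CONVERT DATABASE 'soaoci' TO SNAPSHOT STANDBY"
# ...validate the secondary site against it, then discard all changes
# by converting back; redo received in the meantime is then applied:
dgmgrl sys@primary_tns "CONVERT DATABASE 'soaoci' TO PHYSICAL STANDBY"
```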
The WebLogic Domain configuration must be replicated from the primary site to the secondary. This replication is required and performed during the initial DR setup, and is also necessary during the ongoing lifecycle of the system. When configuration changes are performed in the primary domain, they must be replicated to the secondary location.
When a standby database is shut down during normal business operation, it does not receive updates from the primary database and might become out of sync. Because this can result in data loss in a switchover scenario, it is not recommended to stop the standby database during normal business operation. The standby mid-tier hosts can be stopped. However, when the standby hosts are stopped, the configuration changes that are replicated from the primary site are not pushed to the secondary domain. In this case, when a switchover happens, the recovery time objective (RTO) increases because the standby mid-tier hosts must be started and the domain synchronized with the primary's changes.
Considerations for Topology
The following are the most relevant topology assumptions considered in this setup:
- The primary system is an existing on-premises environment
The environment includes an Oracle Real Application Clusters (Oracle RAC) database, mid-tier hosts, and a load balancer. The on-premises system setup is out of the scope of this document.
- The secondary system is on Oracle Cloud Infrastructure (OCI)
The secondary system is created from scratch on OCI and provides an Oracle Cloud version of your on-premises system. It's a fully standard Oracle Cloud system with the ability to become the primary system.
- Oracle SOA Suite and components
This document focuses on the Oracle SOA Suite system, including its components, such as Oracle Service Bus, Oracle Business Process Management, and Oracle Enterprise Scheduler Service. Although the procedures and recommendations in this document may apply to other Oracle Fusion Middleware components that are not part of Oracle SOA Suite (such as WebCenter and Business Intelligence), the specifics and supportability of those are excluded from this exercise.
- The primary and secondary systems are symmetric
The secondary system has the same number of nodes in the mid-tier, web-tier, and db-tier. There can be differences in the web-tier layer when OCI Load Balancer is used alone in OCI (without Oracle HTTP Server).
- The primary and secondary systems use similar resources
Although the available OCI shapes may not exactly match the CPU and memory of the existing primary configuration, use the most similar shapes. Oracle recommends using symmetric standbys that provide processing power similar to the primary system. Otherwise, performance issues and cascading failures could occur when the workload is switched over to the OCI system.
Considerations for Network
- Connectivity between the on-premises network and Oracle Cloud Infrastructure (OCI)
The on-premises and OCI databases must communicate with each other through Oracle SQL*Net Listener (port 1521) for Oracle Data Guard redo transport. The on-premises and OCI mid-tier hosts must communicate with each other using the SSH port (for rsync copies). Oracle recommends using Oracle Cloud Infrastructure FastConnect between the on-premises data center and the OCI region. FastConnect provides dedicated, secure connectivity and bandwidth, with predictable latency, jitter, and cost. Alternatively, you can use Site-to-Site VPN, which also provides secure connectivity between on-premises and OCI, but with lower bandwidth and variable latency, jitter, and cost.
- Disable connectivity between mid-tier hosts and the remote database
The mid-tier hosts are never expected to connect to the remote database during runtime. The latency between on-premises and OCI typically discourages this cross-connectivity. To avoid accidental connections and workload issues, preclude connectivity from the mid-tier hosts to the remote database.
- Virtual host names
As a best practice, it is assumed that the primary on-premises system uses virtual host names, instead of the physical node host names, as the listen addresses for the Oracle WebLogic servers. Virtual host names are normally aliases of the physical node host name. Using virtual host names facilitates moving configurations to different hosts; however, this is not a mandatory requirement. The configuration in this document works as long as the secondary servers in OCI can use the host names used as the listen addresses in the primary as an alias in each node (as resolved in DNS or in the local /etc/hosts file).
- A virtual IP is only required for the Administration Server's listen address
Automatic Service Migration is supported and recommended for local high availability (HA), which means that the managed servers aren't required to use virtual IP addresses. Only the Administration Server needs a VIP for local failover. As a best practice, it is assumed that the Administration Server in the primary on-premises system listens on a VIP address. This document provides instructions for configuring a VIP for the Administration Server in OCI. However, this VIP address is not a hard requirement for configuring disaster recovery on OCI with this document. If your primary Administration Server does not listen on a VIP, then you can skip that point.
- Load balancer
A front-end load balancer (LBR) is used in the primary on-premises environment and an OCI Load Balancer is used in the standby environment.
- Virtual front-end address
The primary system must use a virtual front-end name (a vanity URL, such as mysoa.example.com) as the entry point for the clients. This front-end name is resolved in DNS to the primary load balancer's IP address. In a DR topology, the secondary system is configured to use the same virtual front-end name. When a switchover or failover to the secondary system occurs, the virtual front-end name is modified in the DNS to point to the secondary load balancer's IP address. This way, client access to the system is agnostic to the site being used as the primary. The same applies to any other virtual front-end name used in the system. For example, you can use an additional front-end name, such as admin.example.com, to access admin functions in the Oracle WebLogic Server Administration Console or Fusion Middleware Control Console, and a separate front-end name, such as osb.example.com, to access Oracle Service Bus services. This document uses one front-end host name for simplification.
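To illustrate the virtual host name practice described above, the secondary nodes can resolve the primary's listen-address aliases to their own IP addresses. The host names and IPs below are purely hypothetical:

```
# Hypothetical /etc/hosts entries on an OCI mid-tier node. The aliases match
# the listen addresses used in the primary; the IPs are the OCI hosts' own.
10.0.1.11   soahost1.example.com   soahost1
10.0.1.12   soahost2.example.com   soahost2
```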
Considerations for Storage
- EDG-based directory structure
This document uses an Enterprise Deployment Guide (EDG) directory structure for the primary system's Oracle WebLogic Server domain configuration. The EDG model uses separate domain directories for the Administration Server and for each managed server, as described in Preparing the File System for an Enterprise Deployment. The EDG directory structure is used as the reference in the examples. It uses a combination of shared and private folders, which covers more use cases. If your system does not use the EDG directory structure, then adapt the examples to meet your environment's specifics.
- Considerations for storage in the primary (on-premises)
It's a good practice to store not only the shared folders (Oracle WebLogic Server Administration Server domain directory, deployment plans home, shared runtime, and so on), but also the Oracle Fusion Middleware home binaries and the local configurations (managed server domain directories, node manager folder) on NFS storage in the primary location. This facilitates the copy from primary to standby. Although each host uses its own binaries and its own local configuration privately (separate NFS volumes for each server), copying the configuration between sites is easier if these reside on shared storage. With this approach, you can mount all of them from a single node and execute the rsync copy to the remote nodes in a single operation. Without this approach, the copy must happen individually, from each of the primary nodes that stores data privately.
Note:
The scripts provided for the rsync copy of these artifacts are flexible enough to be adapted to any case.
- Considerations for shared folders in the secondary (OCI)
The folders that must be shared between the mid-tier hosts (for example, Oracle WebLogic Server Admin Server domain directory, deployment plans home, shared runtime, and so on) must be stored in shared storage. OCI provides Oracle Cloud Infrastructure File Storage as the network file system solution.
Different artifacts are stored in shared folders, and they have different usage patterns and lifecycles. They must be placed in separate shared storage, according to their usage. For example:
- The shared configuration (for example, the WebLogic Server Administration Server domain directory and the deployment plans home) is mainly accessed by the Administration Server host. It is residually accessed by the rest of the hosts (for deployment plans, shared keystores, and so on). Place it in one Oracle Cloud Infrastructure File Storage file system.
- If the system uses a shared folder for runtime artifacts (for example, files created and read by the application), then it is normally used concurrently by all of the mid-tier hosts. Place runtime artifacts in another Oracle Cloud Infrastructure File Storage file system or in a Database File System (DBFS) mount.
- Considerations for private folders storage in the secondary (OCI)
The FMW binaries and local configurations are used by each host individually. It is a good practice to store them on external storage instead of in the default boot volume of the hosts.
- In OCI, you can store the FMW binaries in Oracle Cloud Infrastructure Block Volumes for each host. When there are more than two mid-tier hosts, it's a good practice to use redundant shared binary homes (see the High Availability Guide). To accomplish this, use Oracle Cloud Infrastructure File Storage to store the FMW binaries.
- You can store the local configuration (for example, the WebLogic managed server domain directories and the node manager folder) in Oracle Cloud Infrastructure Block Volumes because they are expected to be used privately by each host. Alternatively, you can use Oracle Cloud Infrastructure File Storage file systems mounted privately by each node.
To facilitate the copy from the primary to the remote site, you can mount the storage from a single node and execute the rsync copy in a single operation (for Block Volumes, another host can mount the volume in read-only mode). Alternatively, the copy can be done individually, from each of the nodes that stores the data privately.
Note:
The scripts provided for the rsync copy of these artifacts are flexible enough to be adapted to any case.
- Storage replication
There is no direct replication at the storage level between on-premises and OCI. The copy of the binaries and configuration from primary to standby is performed with rsync over Secure Shell (SSH) through a private connection on Oracle Cloud Infrastructure FastConnect or Site-to-Site VPN (never use the public internet). The copy of the configuration and runtime artifacts can also be based on DBFS, depending on the customer needs. More details on this are provided later.
- WebLogic persistent stores
It is assumed that the WebLogic persistent stores used for TLOGs and JMS messages are JDBC persistent stores, stored in database tables. This way, the persistent stores are accessible from any member of the cluster (to provide local HA with Service Migration). This also takes advantage of the underlying Oracle Data Guard, which automatically replicates the TLOGs and JMS messages to the secondary site.
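As an illustration of the rsync-over-SSH copy that the storage replication approach relies on, the following sketch pushes a domain configuration directory from a primary node to its OCI peer. All paths, host names, exclude patterns, and the key file are assumptions; the scripts referenced by this document cover the real cases:

```shell
# Hedged sketch: push the domain configuration from a primary node (which
# mounts the shared storage) to the corresponding OCI host over SSH.
rsync -avz --delete \
  -e "ssh -i /home/oracle/.ssh/oci_key" \
  --exclude 'servers/*/tmp/' --exclude 'servers/*/logs/' \
  /u01/oracle/config/domains/soadomain/ \
  oracle@soahost1.example.com:/u01/oracle/config/domains/soadomain/
```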
Considerations for Database
Consider the following when configuring the databases:
- Multitenancy
The primary database must be a multitenant database (CDB/PDB). This is required to configure hybrid Oracle Data Guard in Oracle Cloud Infrastructure.
- Patch levels
Oracle Home for the on-premises database must be at the same patch level as the standby database.
- Transparent Data Encryption
Use Transparent Data Encryption (TDE) for both the primary and standby databases to ensure that all data is encrypted. If the on-premises database is not already enabled with TDE, then it’s highly recommended to convert to TDE before configuring Oracle Data Guard to provide the most secure environment.
- High availability
To get local High Availability at the database level and follow the EDG topology, use Oracle Real Application Clusters (Oracle RAC) for both the primary and standby databases. Although focused on Oracle RAC, this procedure is applicable to a single database. However, the recommended practice is to use Oracle RAC to get local High Availability in the db tier.
- Database service
The primary on-premises mid-tier should use an application database service to connect to the primary database.
- Oracle Cloud Infrastructure (OCI) Database System
Use an OCI DB System (bare metal, virtual machine, or Oracle Exadata Database Service) as the standby database. An Oracle Autonomous Database, either shared or dedicated, is out of the scope of this document. It does not provide a number of features required for the hybrid disaster recovery setup, such as the ability to configure Oracle Data Guard with an on-premises database, and snapshot standby conversion.
- TNS alias in WebLogic database connection strings
This document uses a TNS alias to define the database connection string in the Oracle WebLogic configuration. The TNS alias is resolved with a tnsnames.ora file that is different on each site and points to the local database. Since the same alias name is used in the primary and the secondary, you don't need to alter the WebLogic configuration after replicating it from the primary to the secondary. If the primary is not already using this approach, then an initial one-time setup is required. Details are described later in this document.
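As an example of the TNS alias approach described above, each site keeps its own tnsnames.ora that resolves the same alias to the local database. The alias name, hosts, and service name below are hypothetical:

```
# Hypothetical tnsnames.ora on the primary site. The standby site's copy uses
# the same alias (SOADB) but points to the OCI database's SCAN and service.
SOADB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = prim-scan.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = soaedg_svc.example.com))
  )
```

The WebLogic data sources can then use a URL such as jdbc:oracle:thin:@SOADB, with the oracle.net.tns_admin property pointing to the directory that contains each site's tnsnames.ora.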
Considerations for Identity Management
- Lightweight Directory Access Protocol (LDAP)
The system can use an external LDAP for authentication (for example, Oracle Unified Directory), as long as the external LDAP is reachable from both the primary and standby systems.
- Disaster recovery solution for LDAP
The disaster recovery solution for any external LDAP service is out of the scope of this document; it should be provided by the specific LDAP product. The DR solution for LDAP should provide a single access point (typically a virtual host name) that does not change when there's a switchover in the LDAP system.
Considerations Summary
The following provides a summary of what you should consider when planning a disaster recovery solution.
Considerations for | Summary | Mandatory or Highly Recommended |
---|---|---|
Topology | The primary system is an existing on-premises environment. | Highly Recommended |
Topology | The secondary system is created from scratch on Oracle Cloud Infrastructure (OCI). | Mandatory |
Topology | The primary and secondary systems are symmetric and have the same number of nodes in db-tier, mid-tier and web-tier. | Mandatory |
Topology | The primary web-tier consists of a load balancer in front of Oracle HTTP Server. | Highly Recommended |
Topology | The primary and secondary systems use similar resources (CPU, memory, and so on). | Mandatory |
Network | Use OCI FastConnect for connectivity between on-premises and OCI, alternatively Site-to-Site VPN. Never public internet. | Mandatory |
Network | Disable connectivity between mid-tier hosts and the remote database. Only needed if Oracle Database File System (DBFS) is used for replicating configuration. | Highly Recommended |
Network | Use virtual host names for WebLogic server listen addresses. | Highly Recommended |
Network | Virtual IP for Administration Server. | Highly Recommended |
Network | Virtual front-end address. | Mandatory |
Storage | EDG-based directory structure. | Highly Recommended |
Storage | On-premises private and shared folders in external storage, so they can be mounted from a single node for centralized rsync operations. | Highly Recommended |
Storage | OCI shared configuration in Oracle Cloud Infrastructure File Storage. | Mandatory |
Storage | OCI shared runtime in OCI File Storage or Oracle Database File System (DBFS). | Mandatory |
Storage | OCI FMW binaries folders in OCI File Storage mounted privately. | Highly Recommended |
Storage | OCI local configurations in Oracle Cloud Infrastructure Block Volumes (alternatively OCI File Storage mounted privately). | Highly Recommended |
Storage | Staging DBFS mount (only when DBFS based replica for the configuration is used). | Highly Recommended |
Storage | Storage replication based on rsync (or DBFS for some specific artifacts, like configuration). | Highly Recommended |
Storage | WebLogic persistent stores (TLOGS, JMS) in JDBC persistent stores. | Mandatory (*if JMS info is relevant) |
Database | On-premises database patch level same as the standby database. | Mandatory |
Database | Transparent Data Encryption (TDE) for the primary and standby databases. If the on-premises database is not already enabled with TDE, then highly recommend converting to TDE before configuring Oracle Data Guard. | Mandatory |
Database | Oracle RAC Database for local High Availability. | Highly Recommended |
Database | The primary database is multitenant (CDB/PDB). | Mandatory |
Database | Use an application database service, not the default administration service, to connect to the database. | Mandatory |
Database | In OCI, use DB System (BM, VM or Oracle Exadata Database Service), not Oracle Autonomous Database. | Mandatory |
Database | TNS alias in WebLogic database connection strings. | Highly Recommended |
Identity Management | The system can use an external LDAP for authentication (for example, Oracle Unified Directory), as long as the external LDAP is reachable from both the primary and standby systems. | Mandatory (using external LDAP is NOT mandatory, but if used, it must be reachable from both sites) |
Identity Management | The disaster recovery solution for any external LDAP service is out of the scope of this document; it should be provided by the specific LDAP product. The DR solution for LDAP should provide a single access point (typically a virtual host name) that does not change when there's a switchover in the LDAP system. | Mandatory (using external LDAP is NOT mandatory, but if it is used and DR protected, it should provide a virtual access address that remains the same when an LDAP switchover or failover happens) |
About Required Services and Roles
This solution requires the following services and roles:
These are the roles needed for each service.
Service Name: Role | Required to... |
---|---|
Oracle Cloud Infrastructure: administrator | Create the required resources in the OCI tenancy: compartment, DB System, compute instances, FSS resources, and Load Balancer. |
Network: administrator | Configure the required network resources both on-premises and in OCI: FastConnect, VCNs and subnets in OCI, network security rules, and routing rules. |
Oracle Data Guard: root, oracle, and sysdba | Configure Oracle Data Guard between the primary on-premises database and the standby OCI database, and perform role conversions. |
Oracle Database: sysdba | Manage the databases. |
Oracle WebLogic Server: root, oracle | Configure the mid-tier hosts as required: perform the OS-level configuration, add host aliases, manage virtual IP addresses, and mount file systems. |
Oracle WebLogic Server: WebLogic Administrator | Manage Oracle WebLogic Server: stop, start, and apply WebLogic configuration changes. |
See Learn how to get Oracle Cloud services for Oracle Solutions to get the cloud services you need.
Change Log
This log lists significant changes:
July 5, 2023 | Updated the instructions to configure the tns alias in "Prepare the WebLogic Data Sources in the Primary Data Center". |
March 7, 2023 | Updated the text and download location in Step 6 in Download Code. |
January 30, 2023 | Expanded content regarding using tnsnames.ora files on the primary and standby in "Prepare the WebLogic Data Sources in the Primary Data Center". |
November 28, 2022 | Added Transparent Data Encryption (TDE) to "Considerations for Database" and "Considerations Summary". |
September 15, 2022 | |
July 11, 2022 | |