Oracle Data Guard Hybrid Cloud Configuration
A hybrid Oracle Data Guard configuration consists of a primary database and one or more standby databases residing partially on-premises and partially in the cloud. The process detailed here uses the Oracle Zero Downtime Migration tool to create a cloud standby database from an existing on-premises primary database.
Zero Downtime Migration streamlines and simplifies the process of creating the standby database on the cloud, while incorporating MAA best practices.
After establishing the cloud standby database as described here, you can perform a role transition so that the primary database runs in the cloud instead of on-premises.
Benefits of Hybrid Data Guard in the Oracle Cloud
The following are the primary benefits of using a hybrid Data Guard configuration in the Oracle Cloud.
- Oracle manages the cloud data center and infrastructure.
- Ability to switch over (planned events) or fail over (unplanned events) production to the standby database in the cloud during scheduled maintenance or unplanned outages. Once a failed on-premises database is repaired, it can be synchronized with the current production database in the cloud, and production can then be switched back to the on-premises database.
- Use of the same Oracle MAA best practices as the on-premises deployment. Additional Oracle MAA best practices specific to hybrid Data Guard deployments are specified in the topics that follow. When configured with MAA practices, a hybrid Data Guard configuration provides:
- Recovery Time Objective (RTO) of seconds with automatic failover when configured with Data Guard fast start failover
- Recovery Point Objective (RPO) less than a second for Data Guard with ASYNC transport
- RPO zero for Data Guard in a SYNC or FAR SYNC configuration
Note: Data Guard life cycle management operations, such as switchover, failover, and reinstatement, are manual processes in a hybrid Data Guard configuration.
MAA Recommendations for using Exadata Cloud for Disaster Recovery
When deploying Exadata Cloud for Disaster Recovery, Oracle MAA recommends:
- Create a cloud database system target that is symmetric or similar to the on-premises primary database to ensure that performance SLAs can be met after a role transition. For example, create an Oracle RAC target for an Oracle RAC source, Exadata for Exadata, and so on.
- Ensure that network bandwidth can handle peak redo rates in addition to existing network traffic. My Oracle Support document Assessing and Tuning Network Performance for Data Guard and RMAN (Doc ID 2064368.1) provides guidance for assessing and tuning network performance for Data Guard and RMAN.
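As a rough sanity check before the formal network assessment, the bandwidth requirement can be sketched in a few lines. This is an illustrative calculation, not an Oracle-documented formula; the 50% headroom factor is an assumption, and the peak redo rate comes from the primary database (see the redo-rate query later in this document).

```python
def required_bandwidth_mbps(peak_redo_mb_per_sec, headroom=0.5):
    """Estimate the network bandwidth needed for Data Guard redo transport.

    peak_redo_mb_per_sec: peak redo generation rate in megabytes/second.
    headroom: extra fraction reserved for bursts and other network
      traffic; 50% is an illustrative assumption, not an Oracle value.
    Returns the required bandwidth in megabits/second.
    """
    # 1 MB/s of redo is 8 Mbit/s on the wire, before headroom.
    return peak_redo_mb_per_sec * 8 * (1 + headroom)

# A primary generating 35 MB/s of redo at peak needs roughly
# 35 * 8 * 1.5 = 420 Mbit/s of available bandwidth.
print(required_bandwidth_mbps(35))  # 420.0
```

Compare the result against the measured throughput from the oratcptest assessment described later; if the link cannot sustain this rate, transport lag and a larger RPO are likely.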
- Ensure network reliability and security between the on-premises and cloud environments.
- Use Oracle Active Data Guard for additional automatic block repair, data protection, and offloading benefits.
- Use Oracle Transparent Data Encryption (TDE) for both primary and standby databases. My Oracle Support document Oracle Database Tablespace Encryption Behavior in Oracle Cloud (Doc ID 2359020.1) has additional details on TDE behavior in cloud configurations.
- Configure automatic cloud backups after an optional Data Guard role transition that makes the cloud instance the primary database.
Service Level Requirements
Oracle Data Guard hybrid deployments are user-managed environments. You must determine the service level expectations for availability, data protection, and performance that are practical for your configuration and application.
Service levels must be established for each of the following dimensions relevant to disaster recovery that are applicable to any Data Guard configuration:
Recovery Time Objective (RTO) describes the maximum acceptable downtime if an outage occurs. This includes the time required to detect the outage and to fail over the database and application connections so that service is resumed.
Recovery Point Objective (RPO) describes the maximum amount of data loss that can be tolerated. Achieving the desired RPO depends on:
Available bandwidth relative to network volume
The ability of the network to provide reliable, uninterrupted transmission
The Data Guard transport method used: asynchronous for near-zero data loss protection, synchronous for zero data loss protection
Data Protection - You can configure the most comprehensive block corruption detection, prevention, and auto-repair with Oracle Active Data Guard and MAA.
Performance - Database response time may be different after a failover if insufficient capacity for compute, memory, I/O, and so on, is provisioned at the standby system compared to the on-premises production system.
This occurs when administrators intentionally under-configure standby resources to reduce cost, accepting a reduced service level while in DR mode. MAA best practices recommend configuring symmetrical capacity on both the primary and standby database hosts so there is no change in response time after a failover.
Rapid provisioning available with the cloud facilitates a middle ground where less capacity is deployed during steady state, and the new primary database system is rapidly scaled up if a failover is required.
Note: The reduced resources during steady state in a rapid provisioning approach could impact the ability of recovery to keep the standby database current with the primary database, creating an apply lag and impacting RTO. This approach should only be considered after thorough testing.
See High Availability and Data Protection – Getting From Requirements to Architecture for more details about determining RTO and RPO requirements along with other considerations.
Security Requirements and Considerations
Oracle MAA best practices recommend using Oracle Transparent Data Encryption (TDE) to encrypt the primary and standby databases to ensure that data is encrypted at-rest.
Using TDE to protect data is an essential part of improving the security of the system; however, you must be aware of certain considerations when using any encryption solution, including:
Additional CPU overhead - Encryption requires additional CPU cycles to calculate encrypted and decrypted values. TDE, however, is optimized to minimize the overhead by taking advantage of database caching capabilities and leveraging hardware acceleration within Exadata. Most TDE users see little performance impact on their production systems after enabling TDE.
Lower data compression - Encrypted data compresses poorly because it must reveal no information about the original plain text data, so any compression applied to data encrypted with TDE has low compression ratios.
When TDE encryption is used, redo transport compression is not recommended; however, when TDE is used in conjunction with Oracle Database compression technologies such as Advanced Compression or Hybrid Columnar Compression, compression is performed before the encryption occurs, and the benefits of compression and encryption are both achieved.
Key management - Encryption is only as strong as the encryption key used and the loss of the encryption key is tantamount to losing all data protected by that key.
If encryption is enabled on a few databases, keeping track of the key and its life cycle is relatively easy. As the number of encrypted databases grows, managing keys becomes an increasingly difficult problem. If you are managing a large number of encrypted databases, it is recommended that Oracle Key Vault be used on-premises to store and manage TDE master keys.
Data can be converted during the migration process, but it is recommended that TDE be enabled before beginning the migration to provide the most secure Oracle Data Guard environment. A VPN connection or Oracle Net encryption is also required for in-flight encryption of any database payload not encrypted by TDE, such as data file and redo headers. See My Oracle Support document Oracle Database Tablespace Encryption Behavior in Oracle Cloud (Doc ID 2359020.1) for more information.
If the on-premises database is not already enabled with TDE, see My Oracle Support document Primary Note For Transparent Data Encryption (TDE) (Doc ID 1228046.1) to enable TDE and create wallet files.
If TDE cannot be enabled for the on-premises database, see Encryption of Tablespaces in an Oracle Data Guard Environment in Oracle Database Advanced Security Guide for information about decrypting redo operations in hybrid cloud disaster recovery configurations where the Cloud database is encrypted with TDE and the on-premises database is not.
Platform, Database, and Network Prerequisites
The following requirements must be met to ensure a successful migration to a Cloud standby database.
|Requirement Type||On-Premises Requirements||Oracle Cloud Requirements|
|Operating System||Linux, Windows, or Solaris x86 (see My Oracle Support note 413484.1 for Data Guard cross-platform compatibility)||Oracle Enterprise Linux (64-bit)|
|Oracle Database Version*||All Oracle releases supported by Zero Downtime Migration*. See Supported Database Versions for Migration for details about Oracle releases and edition support for the on-premises source database.||Extreme Performance / BYOL*. See Supported Database Editions and Versions for information about database service options in Oracle Cloud.|
|Oracle Database Architecture||Oracle RAC or single-instance||Oracle RAC or single-instance|
|Multitenant||For Oracle 12.1 and above, the primary database must be a multitenant container database (CDB)||Multitenant container database (CDB) or non-CDB|
|Physical or Virtual Host||Physical or Virtual||For shape limits please consult Exadata Cloud documentation|
|Transparent Data Encryption||Strongly recommended (see Security Requirements and Considerations)||Mandatory for Cloud databases|
* The Oracle Database release on the primary and standby databases must match during initial instantiation. For database software updates that are standby-first compatible, the primary and standby database Oracle Home software can be different. See Oracle Patch Assurance - Data Guard Standby-First Patch Apply (Doc ID 1265700.1).
Cloud Network Prerequisites
Data transfers from on-premises to Oracle Cloud Infrastructure (OCI) use the public network, VPN, and/or the high bandwidth option provided by Oracle FastConnect.
In an Oracle Data Guard configuration, the primary and standby databases must be able to communicate bi-directionally. This requires additional network configuration to allow access to ports between the systems.
Note: Network connectivity configuration is not required for Oracle Exadata Database Service on Cloud@Customer because it is deployed on the on-premises network. Skip to On-Premises Prerequisites if using ExaDB-C@C.
For Oracle Exadata Database Service (not required for ExaDB-C@C) there are two options to privately connect the virtual cloud network to the on-premises network: FastConnect and IPSec VPN. Both methods require a Dynamic Routing Gateway (DRG) to connect to the private Virtual Cloud Network (VCN).
See Access to Your On-Premises Network for details about creating a DRG.
- OCI FastConnect - Provides an easy way to create a dedicated, private connection between the data center and OCI. FastConnect provides higher bandwidth options and a more reliable and consistent networking experience compared to internet-based connections. See FastConnect Overview (https://docs.oracle.com/en-us/iaas/Content/Network/Concepts/fastconnectoverview.htm) for details.
- IPSec VPN - Internet Protocol Security or IP Security (IPSec) is a protocol suite that encrypts the entire IP traffic before the packets are transferred from the source to the destination. See Site-to-Site VPN Overview for an overview of IPSec in OCI.
Public Internet Connectivity
Connectivity between OCI and on-premises can also be achieved using the public internet.
This method is not secure by default; additional steps must be taken to secure transmissions. The steps for hybrid Data Guard configuration assume public internet connectivity.
By default, cloud security for port 1521 is disabled. Also, this default pre-configured port in the cloud for either a Virtual Machine (VM) or Bare Metal (BM) has open access from the public internet.
If a Virtual Cloud Network (VCN) for the standby database doesn't have an Internet Gateway, one must be added.
To create an internet gateway see Internet Gateway.
Ingress and egress rules must be configured in the VCN security list to connect from and to the on-premises database.
See Security Lists for additional information.
On-Premises Prerequisites
The following prerequisites must be met before instantiating the standby database.
Evaluate Network Using oratcptest
In an Oracle Data Guard configuration, the primary and standby databases transmit information in both directions. This requires basic configuration, network tuning, and opening of ports at both the primary and standby databases.
It is vital that the bandwidth exists to support the redo generation rate of the primary database.
Follow instructions in Assessing and Tuning Network Performance for Data Guard and RMAN (Doc ID 2064368.1) to assess and tune the network link between the on-premises and cloud environments.
For ExaDB-C@C, because the clusters reside on the on-premises network, the on-premises DNS should resolve each cluster, and no further configuration should be necessary.
For Oracle Exadata Database Service, name resolution between the clusters must be configured.
This can be done either by using a static file such as /etc/hosts, or by configuring the on-premises DNS to properly resolve the public IP address of the OCI instance. In addition, the on-premises firewall must have Access Control Lists configured to allow SSH and Oracle Net access from the on-premises system to OCI.
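For illustration, a static name-resolution entry for the cloud cluster nodes might look like the following. The hostnames and addresses are placeholders, not values from this procedure.

```
# /etc/hosts on the on-premises hosts (placeholder names and addresses)
203.0.113.10   ocidb1.example.com   ocidb1
203.0.113.11   ocidb2.example.com   ocidb2
```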
Oracle Data Guard in a DR configuration requires access from the Cloud instance to the on-premises database; the primary database listener port must be opened with restricted access from the Cloud IP addresses using features like iptables.
Because every corporation has different network security policies, the network administrator must perform operations like the cloud-side network configuration shown in Cloud Network Prerequisites.
- Prompt-less SSH from Oracle Cloud to the on-premises machine. This is configured for on-premises to Cloud during the provisioning process and from the Cloud to on-premises.
- Configuration of the on-premises firewall to allow inbound SSH connectivity from the Cloud to the on-premises machine.
It is strongly recommended that you complete the network assessment described above in Evaluate Network Using oratcptest. Setting the appropriate TCP socket buffer sizes is especially important for ASYNC redo transport.
The RDBMS software must be the same on the primary and standby database for instantiation. If the current on-premises Oracle Database release is not available in Oracle Exadata Database Service, then the primary database must be patched or upgraded to an available cloud bundle patch.
Implement MAA Best Practice Parameter Settings on the Primary Database
Most MAA best practices for Data Guard are part of the process described here; however, standby redo logs should be created on the primary database before starting this process.
See Oracle Data Guard Configuration Best Practices for information.
Validating Connectivity between On-Premises and Exadata Cloud Hosts
After the networking steps are implemented successfully, run the command below to validate that the connection is successful between all sources and all targets in both directions.
On the on-premises host run:
[root@onpremise1 ~]# telnet TARGET-HOST-IP-ADDRESS PORT
Trying xxx.xxx.xxx.xxx...
Connected to xxx.xxx.xxx.xxx.
Escape character is '^]'.
^]
telnet> q
Connection closed.
On the Cloud hosts run:
[root@oci2 ~]# telnet TARGET-HOST-IP-ADDRESS PORT
Trying xxx.xxx.xxx.xxx...
Connected to xxx.xxx.xxx.xxx.
Escape character is '^]'.
^]
telnet> q
Connection closed.
If telnet is successful, proceed to the next step.
Note: netcat (nc -zv) can be used in place of telnet.
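Where neither telnet nor netcat is installed, a short script can perform the same TCP reachability check in each direction. This is a generic sketch; the host name and port in the example are placeholders.

```python
import socket

def port_reachable(host, port, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds within
    timeout seconds; equivalent to the telnet/nc checks above."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers refused connections, timeouts, and resolution failures.
        return False

# Example: verify the standby listener port from the primary host
# (placeholder host and port).
# print(port_reachable("standby-host.example.com", 1521))
```

Run the check from both the on-premises and cloud hosts to confirm bidirectional access before proceeding.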
Instantiate the Standby Using Zero Downtime Migration
Prepare the Zero Downtime Migration environment and instantiate the standby database using the physical migration method.
Each task references procedures from the latest Zero Downtime Migration documentation in Move to Oracle Cloud Using Zero Downtime Migration and then includes additional information pertaining to hybrid Data Guard configuration.
For the Oracle Data Guard hybrid use case, a Zero Downtime Migration job can also be viewed as a standby database instantiation.
After the standby database is instantiated, but before the full migration work flow completes, the migration job is stopped, leaving the standby in place on the cloud. Some additional 'fix-ups' are needed to complete the hybrid Data Guard configuration.
Task 1: Install and Configure Zero Downtime Migration
The Zero Downtime Migration architecture includes a Zero Downtime Migration service host, which is separate from the primary and standby database hosts. Zero Downtime Migration software is installed and configured on the Zero Downtime Migration service host.
Any Linux Server, for example a DBCS compute resource, can be used as the service host if it meets the requirements and can be accessed bidirectionally by the target and source database systems.
See Setting Up Zero Downtime Migration Software for the host configuration and installation instructions.
Task 2: Prepare for a Physical Database Instantiation
The hybrid Data Guard configuration process uses the Zero Downtime Migration physical database online migration work flow with the option to pause the migration job after the target database instantiation.
When the standby database is instantiated and verified, the migration job can be stopped, leaving the standby database in place.
To prepare for a physical migration follow the instructions in Preparing for a Physical Database Migration in Move to Oracle Cloud Using Zero Downtime Migration.
Additional information specific to hybrid Data Guard configuration is detailed below.
Configuring Transparent Data Encryption on the Source Database
Transparent Data Encryption (TDE) is required on Oracle Cloud databases, including any standby database which is part of a hybrid Data Guard configuration.
While it is strongly recommended that the on-premises database also be encrypted, a hybrid Data Guard configuration can be set up with the primary database left unencrypted; this case is better supported by new parameters in Oracle Database 19c (19.16) and later releases.
For all TDE configurations with Oracle Data Guard, the encryption wallet must be created on the primary database and the master key must be set.
The parameters required for TDE configuration differ depending on the Oracle Database release, and the values may be different for each database in the Data Guard configuration.
- In Oracle Database 19c (19.16) and later, the parameters WALLET_ROOT and TDE_CONFIGURATION are required to properly configure TDE; TABLESPACE_ENCRYPTION controls whether new tablespaces are encrypted.
- For Oracle Database 19c releases before 19.16, set the parameters WALLET_ROOT, TDE_CONFIGURATION, and ENCRYPT_NEW_TABLESPACES.
- For releases earlier than Oracle Database 19c, set the sqlnet.ora parameter ENCRYPTION_WALLET_LOCATION and the initialization parameter ENCRYPT_NEW_TABLESPACES.
Note: Unless otherwise specified by the TABLESPACE_ENCRYPTION=DECRYPT_ONLY parameter, a new tablespace's encryption on the standby database will be the same as that of the primary.
Use the references in the following list for setting the primary and standby database parameters.
- Wallet location (ENCRYPTION_WALLET_LOCATION for releases before 19c; WALLET_ROOT and TDE_CONFIGURATION for 19c and later) - Defines the location of the wallet.
- ENCRYPT_NEW_TABLESPACES - Indicates whether a new tablespace on the primary database should be encrypted. In Oracle Database 19c (19.16) and later, override it with the recommended setting for TABLESPACE_ENCRYPTION.
- TABLESPACE_ENCRYPTION - Oracle Database 19c (19.16) and later releases; indicates whether a new tablespace should be encrypted. Starting with Oracle Database 19c (19.16), Oracle Cloud forces encryption for all tablespaces in the cloud database, and this cannot be overridden. To prevent encrypted tablespaces on an on-premises database (primary or standby), set the parameter to DECRYPT_ONLY.
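The release-dependent parameter groupings above can be summarized in a small lookup. The parameter names are real Oracle parameters, but the groupings are a sketch of this document's guidance and should be verified against the referenced My Oracle Support notes for your exact patch level.

```python
# Sketch of release-dependent TDE parameter choices (assumption: the
# groupings mirror this document's guidance; verify for your release).
TDE_PARAMETERS = {
    "pre-19c": [
        "ENCRYPTION_WALLET_LOCATION (sqlnet.ora)",  # wallet location
        "ENCRYPT_NEW_TABLESPACES",                  # new-tablespace encryption
    ],
    "19c-before-19.16": [
        "WALLET_ROOT",
        "TDE_CONFIGURATION",
        "ENCRYPT_NEW_TABLESPACES",
    ],
    "19.16-and-later": [
        "WALLET_ROOT",
        "TDE_CONFIGURATION",
        "TABLESPACE_ENCRYPTION",  # e.g. DECRYPT_ONLY for an unencrypted on-premises DB
    ],
}

def tde_parameters(release_band):
    """Return the TDE-related parameters relevant for a release band."""
    return TDE_PARAMETERS[release_band]
```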
To configure TDE, follow the steps in Setting Up the Transparent Data Encryption Wallet in Move to Oracle Cloud Using Zero Downtime Migration.
Checking the TDE Master Key Before Instantiation
Even in cases where the primary database remains unencrypted, TDE must be configured on the primary database. This configuration includes creating the encryption wallet and setting the master key.
During the process the wallet is copied to the standby database. The master key stored in the wallet will be used by the standby database for encryption.
In the event of a switchover where the cloud standby database becomes the primary database, the key is used by the unencrypted on-premises database to decrypt the encrypted redo from the cloud database.
Failure to set the master key will result in failure of Data Guard managed recovery.
To confirm the master key is set properly:
- Verify that the key shown in V$DATABASE_KEY_INFO matches a key existing in V$ENCRYPTION_KEYS on the source database.
- In a multitenant container database (CDB) environment, check CDB$ROOT and all the PDBs except PDB$SEED.
Configuring Online Redo Logs
Redo log switches can have a significant impact on redo transport and apply performance. Follow these best practices for sizing the online redo logs on the primary database before instantiation.
- All online redo log groups should have identically sized logs (to the byte).
- Online redo logs should reside on high performing disks (DATA disk groups).
- Create a minimum of three online redo log groups per thread of redo on Oracle RAC instances.
- Create online redo log groups on shared disks in an Oracle RAC environment.
- Multiplex online redo logs (multiple members per log group) unless they are placed on high redundancy disk groups.
- Size online redo logs to switch no more than 12 times per hour (every ~5 minutes). In most cases a log switch every 15 to 20 minutes is optimal even during peak workloads.
Size redo logs based on the peak redo generation rate of the primary database.
You can determine the peak rate by running the query below for a time period that includes the peak workload. The peak rate could be seen at month-end, quarter-end, or annually. Size the redo logs to handle the highest rate in order for redo apply to perform consistently during these workloads.
SQL> SELECT thread#, sequence#,
       blocks*block_size/1024/1024 MB,
       (next_time-first_time)*86400 sec,
       (blocks*block_size/1024/1024)/((next_time-first_time)*86400) "MB/s"
     FROM v$archived_log
     WHERE ((next_time-first_time)*86400 <> 0)
       AND first_time BETWEEN to_date('2015/01/15 08:00:00','YYYY/MM/DD HH24:MI:SS')
                          AND to_date('2015/01/15 11:00:00','YYYY/MM/DD HH24:MI:SS')
       AND dest_id=1
     ORDER BY first_time;

   THREAD#  SEQUENCE#          MB        SEC        MB/s
---------- ---------- ----------- ---------- -----------
         2       2291  29366.1963        831   35.338383
         1       2565  29365.6553        781  37.6000708
         2       2292  29359.3403        537   54.672887
         1       2566  29407.8296        813  36.1719921
         2       2293  29389.7012        678  43.3476418
         2       2294  29325.2217       1236  23.7259075
         1       2567  11407.3379       2658  4.29169973
         2       2295  29452.4648        477  61.7452093
         2       2296  29359.4458        954  30.7751004
         2       2297  29311.3638        586  50.0193921
         1       2568  3867.44092       5510  .701894903
Choose the redo log size based on the peak generation rate with the following chart.
|Peak Redo Rate||Recommended Redo Log Size|
|<= 1 MB/s||1 GB|
|<= 5 MB/s||4 GB|
|<= 25 MB/s||16 GB|
|<= 50 MB/s||32 GB|
|> 50 MB/s||64 GB|
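The chart above maps directly to a small helper. This is a sketch of the table, not an Oracle-supplied utility; the input is the peak MB/s value returned by the query shown earlier.

```python
def recommended_redo_log_size_gb(peak_redo_mb_per_sec):
    """Map a peak redo rate (MB/s) to the recommended online redo log
    size (GB) from the sizing chart above."""
    thresholds = [(1, 1), (5, 4), (25, 16), (50, 32)]
    for rate_limit, size_gb in thresholds:
        if peak_redo_mb_per_sec <= rate_limit:
            return size_gb
    return 64  # > 50 MB/s

# The peak rate in the sample query output above is ~61.7 MB/s,
# so 64 GB online redo logs are recommended.
print(recommended_redo_log_size_gb(61.7))  # 64
```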
Creating the Target Database
The target database, which will become the standby database, is initially created by the Oracle Cloud automation. This approach ensures that the database is visible in the Oracle Cloud user interface and is available for a subset of cloud automation, such as patching.
Note:Oracle Data Guard operations, such as switchover, failover, and reinstate, are manual operations performed with Data Guard Broker. Data Guard Life Cycle Management is not supported by the user interface in hybrid Data Guard configurations.
Once the database is created, the Zero Downtime Migration work flow removes the existing files and instantiates the standby database in its place.
The following are exceptions in a hybrid Data Guard configuration (as compared to Zero Downtime Migration) for the target database:
- The target database must use the same db_name as the source database.
- The target database must use a different db_unique_name.
Choosing an Instantiation Method
The two recommended options for a hybrid Data Guard standby instantiation with Zero Downtime Migration are direct data transfer and Object Storage Service.
- Direct data transfer - DATA_TRANSFER_MEDIUM=DIRECT - copies data files directly from the primary database using RMAN.
- Object Storage Service - DATA_TRANSFER_MEDIUM=OSS - performs a backup of the primary database to an OSS bucket and instantiates the standby database from the backup.
There are additional options for instantiating from an existing backup or an existing standby which are not covered by this procedure. See Using an Existing RMAN Backup as a Data Source and Using an Existing Standby to Instantiate the Target Database in Move to Oracle Cloud Using Zero Downtime Migration for details.
Setting Zero Downtime Migration Parameters
The Zero Downtime Migration physical migration response file parameters listed below are the key parameters to be set in most cases.
- TGT_DB_UNIQUE_NAME - The db_unique_name for the target cloud database as registered with clusterware (srvctl)
- MIGRATION_METHOD=ONLINE_PHYSICAL - Hybrid Data Guard setups all use the online physical migration method
- DATA_TRANSFER_MEDIUM=DIRECT | OSS - DIRECT is not supported for source databases on versions earlier than Oracle 12.1
- PLATFORM_TYPE=EXACS | EXACC | VMDB - Choose the correct target Oracle Cloud platform to ensure proper configuration
- HOST=cloud-storage-REST-endpoint-URL - Required if using the OSS data transfer medium
- OPC_CONTAINER=object-storage-bucket - Required if using the OSS data transfer medium
- ZDM_USE_DG_BROKER=TRUE - Data Guard Broker is an MAA configuration best practice
If bastion hosts or other complexities are involved, see Setting Physical Migration Parameters in Move to Oracle Cloud Using Zero Downtime Migration for details.
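Putting the key parameters together, a minimal response file for this use case might look like the following sketch. All values are placeholders, and the full parameter list is in the referenced Zero Downtime Migration documentation.

```
# Minimal ZDM physical-migration response file sketch for a hybrid
# Data Guard standby instantiation (all values are placeholders)
TGT_DB_UNIQUE_NAME=standbydb_unique_name
MIGRATION_METHOD=ONLINE_PHYSICAL
DATA_TRANSFER_MEDIUM=OSS
PLATFORM_TYPE=EXACS
HOST=https://swiftobjectstorage.region.oraclecloud.com/v1/tenancy-namespace
OPC_CONTAINER=zdm-backup-bucket
ZDM_USE_DG_BROKER=TRUE
```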
Task 3: Instantiate the Standby Database
After the preparations are complete you can run a Zero Downtime Migration online physical migration job to instantiate the cloud standby database.
You will actually run two jobs using the Zero Downtime Migration commands in ZDMCLI: an evaluation job and the actual migration job.
Run the evaluation job.
The evaluation job analyzes your topography configuration and migration job settings to ensure that the process will succeed when you run it against the production database.
Use the -eval option in the ZDMCLI migrate database command to run an evaluation job, as shown here.
zdmuser> $ZDM_HOME/bin/zdmcli migrate database \
  -sourcedb source_db_unique_name_value \
  -sourcenode source_database_server_name \
  -srcauth zdmauth \
  -srcarg1 user:source_database_server_login_user_name \
  -srcarg2 identity_file:ZDM_installed_user_private_key_file_location \
  -srcarg3 sudo_location:/usr/bin/sudo \
  -targetnode target_database_server_name \
  -backupuser Object_store_login_user_name \
  -rsp response_file_location \
  -tgtauth zdmauth \
  -tgtarg1 user:target_database_server_login_user_name \
  -tgtarg2 identity_file:ZDM_installed_user_private_key_file_location \
  -tgtarg3 sudo_location:/usr/bin/sudo \
  -eval
There are more examples of the evaluation job options in Evaluate the Migration Job in Move to Oracle Cloud Using Zero Downtime Migration.
Note: Because the hybrid Data Guard cloud standby instantiation process is a physical migration, the Cloud Premigration Advisor Tool (CPAT) is not supported.
Run the migration job.
By default, Zero Downtime Migration performs a switchover operation immediately after the target database is instantiated, so the -stopafter option is used in the ZDMCLI migrate database command to stop the migration job after the standby database is created.
Specify the -stopafter option and set it to ZDM_CONFIGURE_DG_SRC as shown here.
zdmuser> $ZDM_HOME/bin/zdmcli migrate database \
  -sourcedb source_db_unique_name_value \
  -sourcenode source_database_server_name \
  -srcauth zdmauth \
  -srcarg1 user:source_database_server_login_user_name \
  -srcarg2 identity_file:ZDM_installed_user_private_key_file_location \
  -srcarg3 sudo_location:/usr/bin/sudo \
  -targetnode target_database_server_name \
  -backupuser Object_store_login_user_name \
  -rsp response_file_location \
  -tgtauth zdmauth \
  -tgtarg1 user:target_database_server_login_user_name \
  -tgtarg2 identity_file:ZDM_installed_user_private_key_file_location \
  -tgtarg3 sudo_location:/usr/bin/sudo \
  -stopafter ZDM_CONFIGURE_DG_SRC
The job ID is shown in the command output when the database migration job is submitted. Save this information in case later diagnosis is required.
There are more examples of ZDMCLI migrate database command usage in Migrate the Database in Move to Oracle Cloud Using Zero Downtime Migration.
Task 4: Validate the Standby Database
When the Zero Downtime Migration job has stopped after the standby database is instantiated, validate the standby database.
Check the Oracle Data Guard Broker Configuration
Using the parameter ZDM_USE_DG_BROKER=TRUE in the Zero Downtime Migration response file creates a Data Guard Broker configuration. Data Guard Broker will be the primary utility for managing life cycle operations in hybrid Data Guard configurations, because the Oracle Cloud user interface is not aware of the Data Guard configuration.
Using DGMGRL, validate the Data Guard Broker configuration. The Data Guard Broker commands listed can be run from the primary or standby database.
DGMGRL> show configuration

Configuration - ZDM_primary db_unique_name

Protection Mode: MaxPerformance
Members:
primary db_unique_name - Primary database
  standby db_unique_name - Physical standby database

Fast-Start Failover: Disabled

Configuration Status:
SUCCESS (status updated 58 seconds ago)
The Configuration Status should be SUCCESS. If any other status is shown, wait 2 minutes to give the Broker time to update, and then re-run the command. If issues persist, see the Oracle Data Guard Broker documentation to diagnose and correct any issues.
Validate the Standby Database
Using DGMGRL, validate the standby database.
DGMGRL> validate database standby db_unique_name

Database Role: Physical standby database
Primary Database: primary db_unique_name

Ready for Switchover: Yes
Ready for Failover: Yes (Primary Running)

Flashback Database Status:
primary db_unique_name: On
standby db_unique_name: Off <- see note below

Managed by Clusterware:
primary db_unique_name: YES
standby db_unique_name: YES
Note: Steps to enable Flashback Database on the standby are addressed in a later step.
Task 5: Implement Recommended MAA Best Practices
After standby instantiation, evaluate implementing the following Oracle MAA best practices to achieve better data protection and availability.
Key best practices are listed below. Also see Oracle Data Guard Configuration Best Practices for details about Oracle MAA recommended best practices for Oracle Data Guard.
Enable Flashback Database
Flashback Database allows reinstatement of the old primary database as a standby database after a failover. Without Flashback Database enabled, the old primary database would have to be recreated as a standby after a failover. If flashback database has not already been enabled, enable it now.
To enable flashback database, make sure you have sufficient space and I/O throughput in your Fast Recovery Area or RECO disk group, and evaluate any performance impact.
On the primary database, run the command below to enable flashback database on the primary if it is not already enabled.
SQL> alter database flashback on;
Database altered.
On the standby database, to enable flashback database, first disable redo apply, enable flashback database, then re-enable redo apply.
DGMGRL> edit database standby-database set state=apply-off;
Succeeded.

SQL> alter database flashback on;
Database altered.

DGMGRL> edit database standby-database set state=apply-on;
Succeeded.
Set CONTROL_FILES Parameter and Change Default Open Mode of Standby
An Oracle MAA best practice recommendation is to have only one control file when it is placed on a high redundancy disk group. All Oracle Cloud offerings use high redundancy disk groups, so only one control file is required.
On the standby database, edit the CONTROL_FILES parameter so that it lists only one control file.

SQL> show parameter control_files

NAME           TYPE    VALUE
-------------- ------- ------------------------------
control_files  string  controlfile-1, controlfile-2

SQL> ALTER SYSTEM SET control_files='controlfile-1' scope=spfile sid='*';

System altered.
Stop the database as the oracle user, and then, as the grid user, remove the extra control file (controlfile-2 in this example) from ASM.

$ srvctl stop database -db standby-unique-name
[grid@standby-host1 ~]$ asmcmd rm controlfile-2
While the database is down, modify the start option so the standby database default is open read only, and then start the database.
$ srvctl modify database -db standby-unique-name -startoption 'read only'
$ srvctl start database -db standby-unique-name
Note: The Oracle MAA best practice is for the standby to be open read-only, which enables Automatic Block Media Recovery; however, Oracle Cloud also supports a mounted standby. If a mounted standby is preferred, it can be configured instead.
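For example, a mounted standby could be configured with the same srvctl syntax used above (a sketch; the placeholder name matches the earlier commands, and the new start option takes effect the next time the database is started):

```
$ srvctl modify database -db standby-unique-name -startoption mount
```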
Set Alternate Local Archive Log Location
If space in the recovery area is exhausted, the primary database stops archiving, and all operations halt until space is made available to archive the online redo logs.
To avoid this scenario, create an alternate local archive location on the DATA disk group.
On the standby database, set LOG_ARCHIVE_DEST_10 to use the DATA disk group, and set its destination state to ALTERNATE.

SQL> ALTER SYSTEM SET log_archive_dest_10='LOCATION=+DATAC1
  VALID_FOR=(ALL_LOGFILES,ALL_ROLES) MAX_FAILURE=1 REOPEN=5
  DB_UNIQUE_NAME=standby-unique-name ALTERNATE=LOG_ARCHIVE_DEST_1'
  scope=both sid='*';

SQL> ALTER SYSTEM SET log_archive_dest_state_10=ALTERNATE scope=both sid='*';
Then modify LOG_ARCHIVE_DEST_1 to designate LOG_ARCHIVE_DEST_10 as its alternate.

SQL> ALTER SYSTEM SET log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST
  VALID_FOR=(ALL_LOGFILES,ALL_ROLES) MAX_FAILURE=1 REOPEN=5
  DB_UNIQUE_NAME=standby-unique-name ALTERNATE=LOG_ARCHIVE_DEST_10'
  scope=both sid='*';
Note: When backups are not configured, archived logs older than 24 hours are, by default, swept every 30 minutes.
Set Data Protection Parameters
MAA best practice recommendations include the following settings on the primary and standby databases.
SQL> show parameter db_block_checksum

NAME                   TYPE    VALUE
---------------------- ------- --------
db_block_checksum      string  TYPICAL

SQL> alter system set db_block_checksum=TYPICAL scope=both sid='*';

SQL> show parameter db_lost_write_protect

NAME                   TYPE    VALUE
---------------------- ------- --------
db_lost_write_protect  string  typical

SQL> alter system set db_lost_write_protect=TYPICAL scope=both sid='*';

SQL> show parameter db_block_checking

NAME                   TYPE    VALUE
---------------------- ------- --------
db_block_checking      string  OFF

SQL> alter system set db_block_checking=MEDIUM scope=both sid='*';
Note that the db_block_checking setting impacts primary database performance and should be thoroughly tested with a production workload in a lower, production-like environment. If the performance impact is found to be unacceptable on the primary database, set db_block_checking=MEDIUM on the standby database only, and set the cloudautomation Data Guard broker property to '1' for both databases so that the setting is changed appropriately after a role transition.
DGMGRL> edit database primary-unique-name set property cloudautomation=1;
Property "cloudautomation" updated

DGMGRL> edit database standby-unique-name set property cloudautomation=1;
Property "cloudautomation" updated
Note that the cloudautomation property must be set on both databases to work properly.
Configure Redo Transport - Oracle Net Encryption
To prevent redo from unencrypted tablespaces from crossing the WAN in plain text, place the following entries in the sqlnet.ora file on all on-premises and cloud databases.
Cloud deployments use the TNS_ADMIN variable to separate tnsnames.ora and sqlnet.ora in shared database homes; therefore, the cloud sqlnet.ora, and by extension tnsnames.ora, for a given database are located in the directory referenced by that database's TNS_ADMIN setting. These values should already be set by the deployment tool in cloud configurations.
SQLNET.ORA ON ON-PREMISES HOST(S)

SQLNET.ENCRYPTION_SERVER=REQUIRED
SQLNET.CRYPTO_CHECKSUM_SERVER=REQUIRED
SQLNET.ENCRYPTION_TYPES_SERVER=(AES256,AES192,AES128)
SQLNET.CRYPTO_CHECKSUM_TYPES_SERVER=(SHA1)
SQLNET.ENCRYPTION_CLIENT=REQUIRED
SQLNET.CRYPTO_CHECKSUM_CLIENT=REQUIRED
SQLNET.ENCRYPTION_TYPES_CLIENT=(AES256,AES192,AES128)
SQLNET.CRYPTO_CHECKSUM_TYPES_CLIENT=(SHA1)
Note: If all tablespaces and data files are encrypted with TDE, Oracle Net encryption is redundant and can be omitted.
Configure Redo Transport - Reconfigure Redo Transport Using Full Connect Descriptors
For simplicity, Zero Downtime Migration uses an EZconnect identifier to set up Oracle Data Guard redo transport.
For short-lived configurations, such as those used in a full Zero Downtime Migration workflow, this solution is acceptable. However, for hybrid Data Guard configurations, the MAA best practice is to use a full connect descriptor configured in tnsnames.ora.
Use the following example, replacing attribute values with values relevant to your configuration.
The TNS descriptors for the databases differ depending on whether the SCAN listeners are resolvable from the other system. The description below assumes that the SCAN name is resolvable and can be used in the TNS descriptor. If a SCAN name cannot be resolved, an address list of the node virtual IP (VIP) addresses can be used instead. See Multiple Address Lists in tnsnames.ora for details.
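For illustration, a descriptor built from an address list of node VIPs might look like the following sketch (the VIP host names are hypothetical placeholders; substitute the values for your environment):

```
standby-db_unique_name =
  (DESCRIPTION=
    (ADDRESS_LIST=
      (ADDRESS=(PROTOCOL=TCP)(HOST=standby-host1-vip)(PORT=standby-database-listener-port))
      (ADDRESS=(PROTOCOL=TCP)(HOST=standby-host2-vip)(PORT=standby-database-listener-port)))
    (CONNECT_DATA=
      (SERVER=DEDICATED)
      (SERVICE_NAME=standby-database-service-name)))
```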
Add the following descriptors to a shared tnsnames.ora file on the primary and standby database systems after making the appropriate replacements.
standby-db_unique_name =
  (DESCRIPTION=
    (ADDRESS=
      (PROTOCOL=TCP)
      (HOST=standby-cluster-scan-name)
      (PORT=standby-database-listener-port))
    (CONNECT_DATA=
      (SERVER=DEDICATED)
      (SERVICE_NAME=standby-database-service-name)))

primary-db_unique_name =
  (DESCRIPTION=
    (ADDRESS=
      (PROTOCOL=TCP)
      (HOST=primary-cluster-scan-name)
      (PORT=primary-database-listener-port))
    (CONNECT_DATA=
      (SERVER=DEDICATED)
      (SERVICE_NAME=primary-database-service-name)))
Note: A descriptor with the name of the primary db_unique_name may have been created by cloud automation or Zero Downtime Migration. Replace this entry, because it points to the wrong database.
Configure Redo Transport - Modify Data Guard Broker Settings for Redo Transport
Change the EZconnect identifier, which was set during the Zero Downtime Migration work flow, to use the connect descriptors added to the tnsnames.ora files for each database.
DGMGRL> show database primary-db_unique_name DGConnectIdentifier
DGConnectIdentifier = 'ZDM-created-EZconnect-string'

DGMGRL> edit database primary-db_unique_name set property DGConnectIdentifier='primary-db_unique_name';

DGMGRL> show database standby-db_unique_name DGConnectIdentifier
DGConnectIdentifier = 'ZDM-created-EZconnect-string'

DGMGRL> edit database standby-db_unique_name set property DGConnectIdentifier='standby-db_unique_name';
Configure Standby Automatic Workload Repository
Standby Automatic Workload Repository (AWR) allows AWR reports to be produced for the standby database. These reports are essential when diagnosing redo apply and other performance issues on a standby database.
It is strongly recommended that you configure standby AWR for all Oracle Data Guard configurations.
See My Oracle Support note How to Generate AWRs in Active Data Guard Standby Databases (Doc ID 2409808.1) for information.
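At a high level, remote AWR snapshots for a standby are configured with the DBMS_UMF package and registered with DBMS_WORKLOAD_REPOSITORY. The following is a rough sketch only: the node names, topology name, and database link names are illustrative assumptions, and the sketch presumes the database links between primary and standby already exist. Follow the MOS note above for the authoritative, version-specific procedure.

```sql
-- On the primary (the UMF destination node); names are illustrative
exec DBMS_UMF.CONFIGURE_NODE('primary_node');

-- On the standby (the remote source node), naming its link to the primary
exec DBMS_UMF.CONFIGURE_NODE('standby_node', 'LINK_TO_PRIMARY');

-- Back on the primary: create a topology and register the standby node
exec DBMS_UMF.CREATE_TOPOLOGY('DG_TOPOLOGY');
exec DBMS_UMF.REGISTER_NODE('DG_TOPOLOGY', 'standby_node', -
     'LINK_TO_STANDBY', 'LINK_TO_PRIMARY', 'FALSE', 'FALSE');

-- Register the standby as a remote database for AWR snapshots
exec DBMS_WORKLOAD_REPOSITORY.REGISTER_REMOTE_DATABASE( -
     node_name => 'standby_node');
```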
Health Check and Monitoring
After instantiating the standby database, a health check should be performed to ensure that the Oracle Data Guard databases (primary and standby) are compliant with Oracle MAA best practices.
It is also recommended that you perform the health check monthly, as well as before and after database maintenance. Oracle Autonomous Health Framework and automated tools, including the Oracle MAA Scorecard produced by ORAchk or EXAchk, are recommended for checking the health of a Data Guard configuration.
Regular monitoring of the Oracle Data Guard configuration is not provided in a hybrid Data Guard configuration and must be done manually. See Monitor an Oracle Data Guard Configuration for more information.
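Manual monitoring can be lightly automated by scraping DGMGRL output on a schedule. The helper below is a hypothetical sketch, not part of any Oracle tool: it parses the text of a broker "show configuration" report and flags any status other than SUCCESS, plus any ORA- or warning lines. The `run_check` wrapper assumes the dgmgrl binary is on the PATH of the monitoring host.

```python
import re
import subprocess

def check_dg_config(output: str) -> list[str]:
    """Return a list of problems found in DGMGRL 'show configuration' output.

    An empty list means the configuration reported SUCCESS and no
    ORA- errors or warning lines were seen.
    """
    problems = []
    lines = [l.strip() for l in output.splitlines()]
    for i, line in enumerate(lines):
        # The broker reports overall health after "Configuration Status:";
        # the status value appears on the next non-empty line.
        if line.startswith("Configuration Status:"):
            status_lines = [l for l in lines[i + 1:] if l]
            if not status_lines or not status_lines[0].startswith("SUCCESS"):
                problems.append("configuration status is not SUCCESS")
    # Surface any ORA- errors or warning lines regardless of overall status.
    problems.extend(l for l in lines if re.match(r"(ORA-\d+|Warning:)", l))
    return problems

def run_check(connect_string: str = "/") -> list[str]:
    """Run dgmgrl (assumed to be on PATH) and check its output."""
    out = subprocess.run(
        ["dgmgrl", "-silent", connect_string, "show configuration"],
        capture_output=True, text=True,
    ).stdout
    return check_dg_config(out)
```

A cron job could call `run_check()` periodically and raise an alert whenever the returned list is non-empty.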