Learn about the Oracle SOA Suite enterprise deployment topology that illustrates how to set up production and standby sites.
Note:
You can automate disaster recovery operations like switchover and failover using Oracle Site Guard. See Oracle Site Guard Administrator's Guide.
This chapter includes the following sections:
Learn how to set up an Oracle Disaster Recovery site.
Before you start creating the production site, ensure that you:
Set up the host name aliases for the middle tier hosts, as described in Planning Host Names.
Create the required volumes on the shared storage on the production site, as described in Designing Directory Structure and Volumes.
Determine the Oracle Data Guard configuration to use based on the data loss requirements of the database and network considerations such as the available bandwidth and latency when compared to the redo generation.
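To compare redo generation against the available replication bandwidth, you can measure how much redo the primary database actually produces. The following is a sketch of such a query against the standard v$archived_log view (the columns are standard, but how much archived-log history is retained varies by environment), reporting daily archived redo volume in MB:

```sql
-- Estimate daily redo generation in MB from the archived-log history.
SELECT TRUNC(completion_time) AS day,
       ROUND(SUM(blocks * block_size) / 1024 / 1024) AS redo_mb
FROM   v$archived_log
GROUP  BY TRUNC(completion_time)
ORDER  BY day;
```

Comparing the peak daily (or hourly, if you group by a finer bucket) redo volume against link bandwidth helps decide between synchronous and asynchronous Data Guard transport.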
This section includes the following topics:
Learn about the recommended directory structure in your disaster recovery topology.
You can choose a directory layout different from the one recommended in this document, but the model adopted enables maximum availability, provides the best isolation of components and symmetry in the configuration, and facilitates backup and disaster recovery.
The following list describes directories and directory environment variables:
ORACLE_BASE: This environment variable and related directory path refers to the base directory under which Oracle products are installed.
ORACLE_HOME: This related directory path refers to the location where Oracle Fusion Middleware resides.
WL_HOME: This environment variable and related directory path contains the installed files necessary to host an Oracle WebLogic Server.
PROD_DIR: This environment variable and related directory path refers to the location where a product suite (such as Oracle SOA Suite, Oracle WebCenter Portal, or Oracle Identity Management) is installed.
DOMAIN directory: This directory path refers to the location where the Oracle WebLogic Server domain information (configuration artifacts) is stored. Different WebLogic Server instances can use different domain directories, even on the same node.
ORACLE_INSTANCE: An Oracle instance contains one or more system components. An Oracle instance directory contains updatable files, such as configuration files, log files, and temporary files.
See Recommended Directory Structure for Oracle SOA Suite.
This section includes the following topic:
Learn about the recommended directory structures for Oracle SOA Suite.
Oracle Fusion Middleware allows you to create multiple SOA Managed Servers from a single binary installation. This allows the binary files to be installed in a single location on shared storage and reused by servers on different nodes. However, for maximum availability, Oracle recommends using redundant binary installations. In this model, two Oracle homes (each of which has a WL_HOME and an ORACLE_HOME for each product suite) are installed on shared storage. Additional servers of the same type (when scaling out or up) can use either of these two locations without requiring more installations. Ideally, use two different volumes for the redundant binary locations, thus isolating failures as much as possible in each volume. For additional protection, Oracle recommends using storage replication for these volumes. If multiple volumes are not available, Oracle recommends using mount points to simulate the same mount location in a different directory on the shared storage. Although this does not guarantee the protection that multiple volumes provide, it does protect against user deletions and individual file corruption.
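As a minimal sketch of the single-volume case, the symmetric mount locations can be simulated with symbolic links. The paths below are hypothetical, not taken from this guide; a true bind mount (mount --bind) achieves the same path symmetry but requires root privileges.

```shell
# Present two redundant Oracle home directories on one shared-storage volume
# under symmetric, mount-style paths using symbolic links (hypothetical paths).
simulate_mount_points() {
  shared="$1"   # root of the single shared-storage volume
  base="$2"     # common base directory seen by all nodes
  mkdir -p "$shared/orahome1" "$shared/orahome2" "$base"
  # Each link stands in for a separate volume's mount point.
  ln -sfn "$shared/orahome1" "$base/products1"
  ln -sfn "$shared/orahome2" "$base/products2"
}

# Example (hypothetical): simulate_mount_points /sharedvol /u01/oracle
```

Servers on every node then reference /u01/oracle/products1 or /u01/oracle/products2, regardless of whether the targets are distinct volumes or directories on one volume.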
Oracle also recommends separating the domain directory used by the Administration Server from the domain directory used by Managed Servers. This allows a symmetric configuration for the domain directories used by Managed Servers, and isolates the failover of the Administration Server. The domain directory for the Administration Server must reside in a shared storage to allow failover to another node with the same configuration. In addition, Oracle recommends placing the Managed Servers' domain directories on a shared storage, although having them on the local file system is also supported. This is especially important when designing a production site with the disaster recovery site in mind. Figure 4-1 represents the directory structure layout for Oracle SOA Suite.
Figure 4-1 Directory Structure for Oracle SOA Suite
For information about setting up this directory structure, see Oracle Fusion Middleware Enterprise Deployment Guide for Oracle SOA Suite.
For volume design, see the following section:
Learn about the recommended volume design for Oracle SOA Suite.
Figure 4-2 and Figure 4-3 show Oracle SOA Suite topology diagrams. The volume design described in this section is for this Oracle SOA Suite topology. Detailed instructions for installing and configuring this topology are provided in the Oracle Fusion Middleware Enterprise Deployment Guide for Oracle SOA Suite.
Figure 4-2 Oracle SOA Suite and Oracle Business Activity Monitoring Enterprise Topology Diagram
Figure 4-3 Oracle SOA Suite and Oracle Service Bus Enterprise Deployment Reference Topology Diagram
For disaster recovery of this Oracle SOA Suite topology, Oracle recommends the following volume design:
Provision two volumes for two Oracle homes that contain redundant product binary files (VOLFMW1 and VOLFMW2 in Table 4-1).
Provision one volume for the Administration Server domain directory (VOLADMIN in Table 4-1).
Provision one volume on each node for the Managed Server domain directory (VOLSOA1 and VOLSOA2 in Table 4-1). This directory is shared among all the Managed Servers on that node.
Provision one volume for the JMS file store and JTA transaction logs (VOLDATA in Table 4-1). One volume for the entire domain is mounted on all the nodes in the domain.
Provision one volume on each node for the Oracle HTTP Server Oracle home (VOLWEB1 and VOLWEB2 in Table 4-1).
Provision one volume on each node for the Oracle HTTP Server domain directory (VOLOHS1 and VOLOHS2 in Table 4-1).
Note:
For web tier hosts, local storage is usually recommended. You can replicate this configuration on a regular basis to one of the other application tier volumes to synchronize it to the standby site, or replicate it directly from the production web host to the standby web host.

Table 4-1 provides a summary of Oracle recommendations for volume design for the Oracle SOA Suite topology shown in Figure 4-2 and Figure 4-3.
Table 4-1 Volume Design Recommendations for Oracle SOA Suite
Tier | Volume Name | Mounted on Host | Mount Point | Comments |
---|---|---|---|---|
Web | VOLWEB1 | | | Volume for Oracle HTTP Server installation |
Web | VOLWEB2 | | | Volume for Oracle HTTP Server installation |
Web | VOLOHS1 | | | Volume for Oracle HTTP Server domain directory |
Web | VOLOHS2 | | | Volume for Oracle HTTP Server domain directory |
Web | | | | (Optional) Volume for static HTML content |
Web | | | | (Optional) Volume for static HTML content |
Application | VOLFMW1 | | | Volume for the WebLogic Server and Oracle SOA Suite binary files |
Application | VOLFMW2 | | | Volume for the WebLogic Server and Oracle SOA Suite binary files |
Application | VOLADMIN | | | Volume for the Administration Server domain directory and other shared configuration, such as deployment plans, applications, and keystores |
Application | VOLSOA1 | | | Volume for Managed Server domain directory |
Application | VOLSOA2 | | | Volume for Managed Server domain directory |
Application | VOLDATA | | | Volume for transaction logs and JMS data |
For consistency group recommendations, see:
Learn about the recommended consistency groups for Oracle SOA Suite.
Oracle recommends the following consistency groups for the Oracle SOA Suite topology:
Create one consistency group with the volumes containing the domain directories for the Administration Server and Managed Servers as members (DOMAINGROUP in Table 4-2).
Create one consistency group with the volume containing the JMS file store and transaction log data as a member (RUNTIMEGROUP in Table 4-2).
Create one consistency group with the volumes containing the Oracle homes as members (FMWHOMEGROUP in Table 4-2).
Table 4-2 provides a summary of Oracle recommendations for consistency groups for the Oracle SOA Suite topology shown in Figure 4-2.
Table 4-2 Consistency Groups for Oracle SOA Suite
Tier | Group Name | Members | Comments |
---|---|---|---|
Application | DOMAINGROUP | VOLADMIN, VOLSOA1, VOLSOA2 | Consistency group for the Administration Server and Managed Server domain directories |
Application | RUNTIMEGROUP | VOLDATA | Consistency group for the JMS file store and transaction log data |
Application | FMWHOMEGROUP | VOLFMW1, VOLFMW2 | Consistency group for the Oracle homes |
Learn how to set up storage replication for the Oracle Fusion Middleware Disaster Recovery topology.
To set up storage replication for the Oracle Fusion Middleware Disaster Recovery topology:
On the standby site, ensure that the alias host names you create are the same as the physical host names used for their peer hosts at the production site.
On the shared storage at the standby site, create the same volumes as were created on the shared storage at the production site.
On the standby site, create the same mount points and symbolic links that you created at the production site (symbolic links need to be set up on the standby site only if you set them up at the production site). Symbolic links are required only in cases where the storage system does not guarantee consistent replication across multiple volumes. For more information about symbolic links, see Storage Replication.
It is not necessary to install the same Oracle Fusion Middleware instances at the standby site as were installed at the production site. When the production site storage is replicated to the standby site storage, the Oracle software installed on the production site volumes will be replicated at the standby site volumes.
Create the baseline snapshot copy of the production site shared storage that sets up the replication between the production site and standby site shared storage. Create the initial baseline copy and subsequent snapshot copies using asynchronous replication mode. After the baseline snapshot copy is performed, validate that all the directories inside the standby site volumes have the same contents as the directories inside the production site volumes.
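The post-baseline validation can be scripted. The following is a minimal sketch using diff, with hypothetical mount points (substitute the replicated volumes from Table 4-1, one comparison per volume):

```shell
# Report whether a replicated standby volume's directory contents match the
# corresponding production volume after the baseline snapshot is applied.
compare_volumes() {
  # $1 = production mount point, $2 = standby mount point
  if diff -rq "$1" "$2" >/dev/null 2>&1; then
    echo "volumes match"
  else
    echo "volumes differ"
  fi
}

# Example (hypothetical mounts):
# compare_volumes /u01/oracle/products1 /stby_mnt/products1
```

This only checks file names and contents; verify ownership and permissions separately if your replication technology does not preserve them.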
Set up the frequency of subsequent copies of the production site shared storage, which will be replicated at the standby site. When asynchronous replication mode is used, then at the requested frequency the changed data blocks at the production site shared storage (based on comparison to the previous snapshot copy) become the new snapshot copy, and the snapshot copy is transferred to the standby site shared storage.
Ensure that disaster protection for any database that is included in the Oracle Fusion Middleware Disaster Recovery production site is provided by Oracle Data Guard. Do not use storage replication technology to provide disaster protection for Oracle databases.
The standby site shared storage periodically receives snapshots transferred from the production site shared storage. After the snapshots are applied, the standby site shared storage includes all the data up to and including the data contained in the last snapshot transferred from the production site before the failover or switchover.
Oracle strongly recommends that you manually force a synchronization operation whenever a change is made to the middle tier at the production site (for example, when a new application is deployed at the production site). Follow the vendor-specific instructions for forcing a synchronization using storage replication technology.
Learn how to install and configure Oracle Database 11.2 or Oracle Database 12.1 MAA databases in an Oracle SOA Suite enterprise deployment.
For recommendations and considerations for setting up Oracle databases that are used in an Oracle Fusion Middleware Disaster Recovery topology, see Database Considerations.
Oracle Maximum Availability Architecture (MAA) is Oracle's comprehensive architecture for reducing downtime for scheduled outages, and preventing, detecting and recovering from unscheduled outages.
Real Application Clusters (RAC) and Data Guard provide the basis of the database MAA solution, where the primary site contains the RAC database, and the secondary site contains the RAC physical standby database.
This section contains the following topics:
Tip:
Alternatively, you can perform many of the tasks in this section using Oracle Enterprise Manager Cloud Control (Cloud Control).
Setting up and managing databases using Cloud Control helps in controlling downtime and simplifies disaster recovery operations.
For information about installing Enterprise Manager Cloud Control 12c, see Oracle Enterprise Manager Cloud Control Basic Installation Guide.
For more information about setting up Oracle Data Guard using Cloud Control, see Set Up and Manage Oracle Data Guard using Oracle Enterprise Manager Cloud Control 12c.
Ensure that the following prerequisites are met:
The Oracle RAC cluster and Automatic Storage Management (ASM) instances on the standby site have been created.
The Oracle RAC databases on the standby site and the production site are using a flash recovery area.
The Oracle RAC databases are running in archivelog mode.
The database hosts on the standby site already have Oracle software installed.
In a shared ORACLE_HOME configuration, the TNS_ADMIN directory must be a local, non-shared directory.
The examples given in this section contain environment variables as described in Table 4-3.
Table 4-3 Variables Used by Primary and Standby Databases
Variable | Primary Database | Standby Database |
---|---|---|
Database names | | |
SOA database host names | SOADC1.DBHOST1, SOADC1.DBHOST2 | SOADC2.DBHOST1, SOADC2.DBHOST2 |
Database unique names | psoa | ssoa |
Instance names | soa1, soa2 | soa1, soa2 |
Service names | psoa | ssoa |
Follow these steps to prepare the primary database for setting up Oracle Data Guard:
Note:
For information about prerequisites for setting up Oracle Data Guard, see Prerequisites in Oracle Data Guard Broker.
Enable force logging on the primary database:
SQL> alter database force logging;
SQL> select name, log_mode, force_logging from v$database;

NAME       LOG_MODE         FOR
---------- ---------------- -------
PSOA       ARCHIVELOG       YES
Create standby redo logs on the primary database that are the same size as the online redo logs. The DUPLICATE command automatically creates the standby redo logs on the standby database.
Oracle recommends having one more standby redo log group per thread than the number of online redo log groups, as shown in Example 4-8.
On standby node 1, create and start a listener (for example, LISTENER_DUPLICATE) that offers a static SID entry for the standby database and has the same ORACLE_SID value as the primary (soa1), as shown in Example 4-1.
Provide the following path to the listener.ora file:
GRID_HOME/network/admin/listener.ora
Run the following commands to start and verify the listener, using lsnrctl from GRID_HOME/bin:

lsnrctl start listener_duplicate
lsnrctl status listener_duplicate
On standby node 2, create and start a listener (for example, LISTENER_DUPLICATE) that offers a static SID entry for the standby database and has the same ORACLE_SID value as the primary (soa2), as shown in Example 4-2.
Provide the following path to the listener.ora file:
GRID_HOME/network/admin/listener.ora
Note:
Listeners are configured at the cluster level, and all nodes inherit the port and environment settings of the listener. Therefore, the TNS_ADMIN directory path must have the same value on all nodes.
In a shared ORACLE_HOME configuration, the TNS_ADMIN directory must be a local, non-shared directory. These network files are included as IFILEs.
Complete the following steps to set up TNS_ADMIN for a shared ORACLE_HOME in a two-node cluster, SOADC1.DBHOST1 and SOADC1.DBHOST2, with respective instances SOA1 and SOA2:
Create a local network directory on each node. For example, /local_network_dir/network_admin.
Create a local listener.ora file in the location /local_network_dir/network_admin on each node.
In the local listener.ora file, add the values for the LISTENER_duplicate parameter.
In the common listener.ora file in GRID_HOME/network/admin, add the IFILE parameter values, as follows:
IFILE=/local_network_dir/network_admin/listener.ora
Run the following commands to start and verify the listeners:

lsnrctl start listener_duplicate
lsnrctl status listener_duplicate
In the database home of the primary node, create an Oracle Net alias to connect to the listener that you created in step 3. See Example 4-3.
For example: GRID_HOME/bin/tnsping dup.
Configure Oracle password file authentication for redo transport. Make sure it meets the following requirements:
Set remote_login_passwordfile to EXCLUSIVE.

SQL> show parameter remote_login_passwordfile

NAME                        TYPE    VALUE
--------------------------- ------- ----------
remote_login_passwordfile   string  EXCLUSIVE
Copy the primary password file to the auxiliary instance location.
In the ORACLE_HOME/dbs directory of the standby host, create a pfile, initsoa1.ora, with the following parameters:

db_name=soa
db_unique_name=ssoa
sga_target=5g
Create the audit directory for the soa database on all standby hosts:
mkdir -p /u01/app/oracle/admin/soa/adump
Create an Oracle Net alias on all primary hosts to reach the ssoa database on the standby hosts.
Ensure that all hosts have Oracle Net aliases for psoa and ssoa. All the aliases that you create must reference the SCAN listener, not the node VIP. Also, if the local_listener variable is set to an alias on the primary host, then enter the details of the variable on the standby site, pointing to the local listener on the primary host. See Example 4-4 and Example 4-5.
Note:
For primary node 2 (SOADC1.DBHOST2), update the HOST value in psoa_local_listener to point to the VIP of primary node 2, as follows:

psoa_local_listener =
 (DESCRIPTION =
   (ADDRESS_LIST =
     (ADDRESS = (PROTOCOL = TCP)(HOST = prmy2-vip)(PORT = 1521))
   )
 )
If you are using a shared Oracle home, add the VIPs of the two nodes to the local_listener parameter.
For example:
psoa_local_listener =
 (DESCRIPTION =
   (ADDRESS_LIST =
     (ADDRESS = (PROTOCOL = TCP)(HOST = prmy1-vip)(PORT = 1521))
     (ADDRESS = (PROTOCOL = TCP)(HOST = prmy2-vip)(PORT = 1521))
   )
 )
Note:
For standby node 2 (SOADC2.DBHOST2), update the HOST value in ssoa_local_listener to point to the VIP of standby node 2, as follows:

ssoa_local_listener =
 (DESCRIPTION =
   (ADDRESS_LIST =
     (ADDRESS = (PROTOCOL = TCP)(HOST = stby2-vip)(PORT = 1521))
   )
 )
If you are using a shared Oracle home, add the VIPs of the two nodes to the local_listener parameter.
For example:
ssoa_local_listener =
 (DESCRIPTION =
   (ADDRESS_LIST =
     (ADDRESS = (PROTOCOL = TCP)(HOST = stby1-vip)(PORT = 1521))
     (ADDRESS = (PROTOCOL = TCP)(HOST = stby2-vip)(PORT = 1521))
   )
 )
On the standby host, set the ORACLE_SID to the same value as that of the primary database (ORACLE_SID=soa1), and run the startup nomount command on the standby database with the standby PFILE created in step 9.
If the cluster_interconnects parameter is set on the primary, note its current values so that you can restore them later. To obtain the values, use this query:
SQL> select p.inst_id, instance_name, name, value
     from gv$parameter p, gv$instance i
     where p.inst_id = i.inst_id and p.name = 'cluster_interconnects';
INST_ID | INSTANCE_NAME | NAME | VALUE |
---|---|---|---|
1 | dbm1 | cluster_interconnects | 192.168.44.225 |
2 | dbm2 | cluster_interconnects | 192.168.44.226 |
Disable the cluster_interconnects parameter on the primary host if it is set:

SQL> alter system reset cluster_interconnects scope=spfile sid='soa1';
SQL> alter system reset cluster_interconnects scope=spfile sid='soa2';
On the primary host, run the Recovery Manager (RMAN) script to duplicate the primary database, using the duplicate target database for standby from active database command.
Note:
This command varies depending on your environment. For information about how to use the command, see Duplicating a Database in Oracle Database Backup and Recovery User's Guide.
See Example 4-6 to understand how to duplicate between two systems with different disk-group names.
See Example 4-7 to understand how to duplicate between two systems that have the same disk-group name, +DATA.
If you disabled the cluster_interconnects parameter on the primary host as described in step 14, set it back to its original values in the spfile.
(Optional) Stop and remove the listener that you created in step 3.
Example 4-1 Script for Creating and Starting a Listener with Static SID
LISTENER_duplicate =
 (DESCRIPTION_LIST =
   (DESCRIPTION =
     (ADDRESS = (PROTOCOL = TCP)(HOST = soadc2.dbhost1)(PORT = 1521)(IP = FIRST))))

SID_LIST_LISTENER_duplicate =
 (SID_LIST =
   (SID_DESC =
     (SID_NAME = soa1)
     (ORACLE_HOME = /u01/app/oracle/product/11.2.0/db_1)))
Example 4-2 Script for Creating and Starting a Listener with Static SID
LISTENER_duplicate =
 (DESCRIPTION_LIST =
   (DESCRIPTION =
     (ADDRESS = (PROTOCOL = TCP)(HOST = soadc2.dbhost2)(PORT = 1521)(IP = FIRST))))

SID_LIST_LISTENER_duplicate =
 (SID_LIST =
   (SID_DESC =
     (SID_NAME = soa2)
     (ORACLE_HOME = /u01/app/oracle/product/11.2.0/db_1)))
Example 4-3 Script for Creating an Oracle Net Alias
dup =
 (DESCRIPTION =
   (ADDRESS = (PROTOCOL = TCP)(HOST = soadc2.dbhost1)(PORT = 1521))
   (CONNECT_DATA =
     (SERVER = DEDICATED)
     (SID = soa1)))
Example 4-4 Sample tnsnames.ora File on Primary Node 1 (SOADC1.DBHOST1)
psoa =
 (DESCRIPTION =
   (ADDRESS_LIST =
     (ADDRESS = (PROTOCOL = TCP)(HOST = prmy-scan)(PORT = 1521))
   )
   (CONNECT_DATA =
     (SERVER = DEDICATED)
     (SERVICE_NAME = psoa)
   )
 )

ssoa =
 (DESCRIPTION =
   (ADDRESS_LIST =
     (ADDRESS = (PROTOCOL = TCP)(HOST = stby-scan)(PORT = 1521))
   )
   (CONNECT_DATA =
     (SERVER = DEDICATED)
     (SERVICE_NAME = ssoa)
   )
 )

psoa_local_listener =
 (DESCRIPTION =
   (ADDRESS_LIST =
     (ADDRESS = (PROTOCOL = TCP)(HOST = prmy1-vip)(PORT = 1521))
   )
 )
Example 4-5 Sample tnsnames.ora File on Standby Node 1 (SOADC2.DBHOST1)
psoa =
 (DESCRIPTION =
   (ADDRESS_LIST =
     (ADDRESS = (PROTOCOL = TCP)(HOST = prmy-scan)(PORT = 1521))
   )
   (CONNECT_DATA =
     (SERVER = DEDICATED)
     (SERVICE_NAME = psoa)
   )
 )

ssoa =
 (DESCRIPTION =
   (ADDRESS_LIST =
     (ADDRESS = (PROTOCOL = TCP)(HOST = stby-scan)(PORT = 1521))
   )
   (CONNECT_DATA =
     (SERVER = DEDICATED)
     (SERVICE_NAME = ssoa)
   )
 )

ssoa_local_listener =
 (DESCRIPTION =
   (ADDRESS_LIST =
     (ADDRESS = (PROTOCOL = TCP)(HOST = stby1-vip)(PORT = 1521))
   )
 )
Example 4-6 Duplicating Data Between Two Systems with Different Disk-Group Names
rman <<EOF
connect target sys/password;
connect auxiliary sys/password@dup;
run {
allocate channel prmy1 type disk;
allocate channel prmy2 type disk;
allocate channel prmy3 type disk;
allocate channel prmy4 type disk;
allocate auxiliary channel stby type disk;
duplicate target database for standby from active database
spfile
parameter_value_convert '+DATA_prmy','+DATA_stby','+RECO_prmy','+RECO_stby'
set db_file_name_convert '+DATA_prmy','+DATA_stby'
set db_unique_name='ssoa'
set db_create_online_log_dest_1='+DATA_stby'
set db_create_file_dest='+DATA_stby'
set db_recovery_file_dest='+RECO_stby'
set log_file_name_convert '+DATA_prmy','+DATA_stby','+RECO_prmy','+RECO_stby'
set control_files='+DATA_stby/ssoa/standby.ctl'
set local_listener='ssoa_local_listener'
set remote_listener='stby-scan:1521';
}
EOF
Example 4-7 Duplicating Data Between Two Systems with Same Disk-Group Name +DATA
rman <<EOF
connect target sys/password;
connect auxiliary sys/password@dup;
run {
allocate channel prmy1 type disk;
allocate channel prmy2 type disk;
allocate channel prmy3 type disk;
allocate channel prmy4 type disk;
allocate auxiliary channel stby type disk;
duplicate target database for standby from active database
spfile
set db_unique_name='ssoa'
set control_files='+DATA/ssoa/standby.ctl'
set local_listener='ssoa_local_listener'
set remote_listener='stby-scan:1521';
}
EOF
Example 4-8 Sample Redo Log
SQL> alter database add standby logfile thread 1
  group 9 size 500M,
  group 10 size 500M,
  group 11 size 500M;
SQL> alter database add standby logfile thread 2
  group 12 size 500M,
  group 13 size 500M,
  group 14 size 500M;
To complete the RAC configuration on the standby database, complete the steps given in Procedure for Duplicating the Primary Database. Then, perform the following steps:
Create a temporary parameter file in standby database:
SQL> create pfile='/tmp/p.ora' from spfile;
Create an SPFILE in +DATA_stby for the standby database:

SQL> create spfile='+DATA_stby/ssoa/spfilessoa.ora' from pfile='/tmp/p.ora';
Remove the default file.
$rm /u01/app/oracle/product/12.1.0.2/dbhome_1/dbs/spfilesoa1.ora
Create an initsoan.ora file (where n is the instance number) on all the standby hosts. The file must point to the location of the SPFILE created in step 2.

$ cat /u01/app/oracle/product/11.2.0/dbs/initsoa1.ora
spfile='+DATA_stby/ssoa/spfilessoa.ora'
On the standby system, restart the instances in mount state:
startup mount
Register the RAC database with CRS as follows:

srvctl add database -d ssoa -o /u01/app/oracle/product/11.2.0/db_1
srvctl add instance -d ssoa -i soa1 -n soadc2.dbhost1
srvctl add instance -d ssoa -i soa2 -n soadc2.dbhost2
srvctl modify database -d ssoa -r physical_standby
This section describes the basic steps for creating a Data Guard configuration.
For complete information about Data Guard Broker, see Oracle Data Guard Broker.
To create a Data Guard Broker configuration, complete the following steps:
Example 4-9 Accessing dgmgrl to Create the Data Guard Broker Configuration on Primary Host
dgmgrl sys/password
DGMGRL> create configuration 'dg_config' as
primary database is 'psoa'
connect identifier is psoa;
Configuration "dg_config" created with primary database "psoa"
DGMGRL> add database 'ssoa' as
connect identifier is ssoa;
Database "ssoa" added
DGMGRL> enable configuration;
Enabled.
Complete the following steps to verify that the Data Guard Broker configuration was created successfully.
Example 4-10 Verifying the Data Guard Broker Configuration
DGMGRL> show configuration;

Configuration - dg_config
  Protection Mode: MaxPerformance
  Databases:
    psoa - Primary database
    ssoa - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS
You can perform a database switchover and switchback.
Performing a Switchover Operation Using Oracle Data Guard Broker
To perform a switchover operation by using Oracle Data Guard Broker, complete the following tasks:
Verify the Oracle Data Guard Broker configuration that you created using the instructions provided in Creating a Data Guard Broker Configuration.
To verify the configuration, run the following command:
DGMGRL> show configuration;

Configuration - dg_config
  Protection Mode: MaxPerformance
  Databases:
    psoa - Primary database
    ssoa - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS
Swap the roles of the primary and standby databases by running the SWITCHOVER command. The following example shows how Data Guard Broker automatically shuts down and restarts the old primary database as part of the switchover operation.
DGMGRL> switchover to 'ssoa';
Performing switchover NOW, please wait...
New primary database "ssoa" is opening...
Operation requires shutdown of instance "psoa1" on database "psoa"
Shutting down instance "psoa1"...
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.
Operation requires startup of instance "psoa1" on database "psoa"
Starting instance "psoa1"...
ORACLE instance started.
Database mounted.
Switchover succeeded, new primary is "ssoa"
After the switchover is complete, verify that the switchover operation was successful by using the SHOW CONFIGURATION command:
DGMGRL> show configuration;

Configuration - dg_config
  Protection Mode: MaxPerformance
  Databases:
    ssoa - Primary database
    psoa - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS
Performing a Switchover Operation Using SQL Plus
Perform the following operations to switch over or switch back databases correctly between the newly created physical standby database and the primary Oracle RAC database:
Note:
For information about the switchover and failover operations of Oracle Data Guard Broker, see Switchover and Failover Operations in Oracle Data Guard Broker.
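As a hedged sketch only, the classic SQL*Plus switchover sequence for an Oracle 11.2-style physical standby looks like the following (run as SYSDBA; this generic sequence is not taken from this guide, so verify each step against the Data Guard documentation for your release before using it):

```sql
-- On the current primary (psoa): confirm readiness, then convert to standby.
SELECT switchover_status FROM v$database;  -- expect TO STANDBY or SESSIONS ACTIVE
ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;

-- On the current standby (ssoa): convert to primary and open it.
ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY WITH SESSION SHUTDOWN;
ALTER DATABASE OPEN;

-- On the new standby (the old primary): shut down, mount, and restart
-- managed recovery (SHUTDOWN IMMEDIATE and STARTUP MOUNT first, then):
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
```

In a RAC configuration, shut down all but one instance of each database before the switchover and restart them afterward.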
Learn how to create a production site on an Oracle SOA enterprise deployment topology.
Before creating your production site:
Set up the host name aliases for the middle tier hosts as described in Planning Host Names.
Create the required volumes on the shared storage on the production site, as described in Designing Directory Structure and Volumes.
Determine the Oracle Data Guard configuration to use based on the data loss requirements of the database and network considerations such as the available bandwidth and latency when compared to the redo generation.
This section includes the following topics:
Create a production site for the Oracle SOA Suite topology.
Install and configure your production site as described in the Oracle Fusion Middleware Enterprise Deployment Guide for Oracle SOA Suite.
Note:
This section provides information about how to create a production site for the Oracle SOA Suite topology. If you plan to create a production site for a different topology, see the appropriate Oracle Fusion Middleware Enterprise Deployment Guide listed under the Install a Production Environment: Plan, Install & Configure an Enterprise Deployment category.

The following sections describe how to complete the installation and configuration of your production site:
Create volumes and consistency groups on the shared storage device.
To create volumes and consistency groups on the shared storage device, see Recommended Volume Design for Oracle SOA Suite.
Set up physical host names on the production site and physical host names and alias host names on the standby site.
For information about planning host names for the production and standby sites, see Planning Host Names.
Install and configure Oracle SOA Suite.
To install and configure Oracle SOA Suite, see Oracle Fusion Middleware Enterprise Deployment Guide for Oracle SOA Suite and apply the following modifications:
Configure the data sources that Oracle Fusion Middleware uses (this section uses Oracle SOA Suite as an example) so that connections fail over automatically when the primary database fails over or switches over.
Configure all the data sources used in the domain.
Additionally, configure any database-based persistence stores and the leasing data source used for server migration to automate failover. The GridLink data sources must be modified as follows:
Update the ONS configuration to include both production and standby site ONS.
The items in the list of ONS addresses must be separated by commas, as the following example illustrates:
prmy-scan:6200,stby-scan:6200
On the Test ONS Client Configuration page, review the connection parameters, and click Test All ONS Nodes. The following example illustrates a successful connection notification:
Connect test for prmy-scan:6200 succeeded.
Update the JDBC URL to include the appropriate services in both sites.
The following is a sample JDBC URL for the SOA Data source in an Oracle Fusion Middleware SOA Active/Passive configuration where the database uses Data Guard.
jdbc:oracle:thin:@
(DESCRIPTION_LIST =
  (LOAD_BALANCE = off)
  (FAILOVER = on)
  (DESCRIPTION =
    (CONNECT_TIMEOUT = 10)(TRANSPORT_CONNECT_TIMEOUT = 3)(RETRY_COUNT = 3)
    (ADDRESS_LIST =
      (LOAD_BALANCE = on)
      (ADDRESS = (PROTOCOL = TCP)(HOST = prmy-scan)(PORT = 1521))
    )
    (CONNECT_DATA = (SERVICE_NAME = soaedg.example.com))
  )
  (DESCRIPTION =
    (CONNECT_TIMEOUT = 10)(TRANSPORT_CONNECT_TIMEOUT = 3)(RETRY_COUNT = 3)
    (ADDRESS_LIST =
      (LOAD_BALANCE = on)
      (ADDRESS = (PROTOCOL = TCP)(HOST = stby-scan)(PORT = 1521))
    )
    (CONNECT_DATA = (SERVICE_NAME = soaedg.example.com))
  )
)
In the Test GridLink Database Connection page, review the connection parameters, and click Test All Listeners. The following example illustrates a successful connection notification:
Connection test for jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=prmy-scan)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=soaedg.example.com))) succeeded.
For BI only: This command synchronizes connection details to the mid-tier database, ensuring that BI components can access the mid-tier database when connection details are changed. The command is located at:
DOMAIN_HOME/bitools/bin/sync_midtier_db.sh
Note:
On UNIX, this must be performed on the master host. Execute the synchronization script:
DOMAIN_HOME/bitools/bin/sync_midtier_db.sh
The script displays the data sources that are updated.
Restart the Managed Server and the BI system components.
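The synchronization step can be wrapped in a small script so it is run consistently after every connection-detail change. This is a sketch only; the DOMAIN_HOME default shown is hypothetical and should be replaced with your domain's actual path:

```shell
# Sketch: run the BI mid-tier database synchronization after a data source change.
# The default DOMAIN_HOME below is a placeholder; set it for your environment.
DOMAIN_HOME="${DOMAIN_HOME:-/u01/oracle/config/domains/bi_domain}"
SYNC="$DOMAIN_HOME/bitools/bin/sync_midtier_db.sh"

if [ -x "$SYNC" ]; then
  "$SYNC"   # prints the data sources that were updated
else
  echo "sync script not found at $SYNC" >&2
fi
```

Remember that on UNIX this must run on the master host, and the Managed Server and BI system components still need a restart afterward.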
Learn how to create a standby site.
To create the standby site, the Oracle SOA enterprise deployment topology is used as an example.
This section includes the following topics:
Prepare your standby site for operation.
To prepare your standby site for operation:
Set up the correct alias host names and physical host names by following the instructions in Planning Host Names.
Ensure that each standby site host has an alias host name that is the same as the physical host name of its peer host at the production site.
Create, on the shared storage, the same volumes that were created on the shared storage at the production site.
Create the same mount points and symbolic links (if required) that you created at the production site. Note that symbolic links are required only in cases where the storage system does not guarantee consistent replication across multiple volumes.
For more details about symbolic links, see Storage Replication.
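The mount point and symbolic link step above can be captured in a small script run on each standby host, so the layout matches the production peer exactly. This is a sketch under assumed paths (the base directory and link names here are hypothetical; use the actual layout from your production site):

```shell
# Sketch: mirror the production host's mount points and symbolic links on a
# standby host. BASE defaults to a temporary directory for safe illustration;
# in practice use the real base, for example /u01.
BASE="${BASE:-$(mktemp -d)}"

# Same mount point directories as the production peer host.
mkdir -p "$BASE/oracle/product" "$BASE/oracle/config"

# Same symbolic link as production (only needed when the storage system does
# not guarantee consistent replication across multiple volumes).
ln -sfn "$BASE/oracle/product" "$BASE/fmw"
```

Keeping the directories and links byte-identical to the production peer is what lets the replicated software start unmodified after a switchover.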
Middle tier hosts on a standby site do not require installation or configuration of Oracle Fusion Middleware or Oracle WebLogic Server software. When the production site storage is replicated to the standby site storage, the software installed on the production site is replicated at the standby site.
To set up the middle tier hosts on the standby site:
If you have enabled host name verification in the Oracle WebLogic Administration Server, update the appropriate trust and key stores with the certificates of the standby site.
Certificates must be specifically created for the nodes in the standby site.
For more information about creating certificates for nodes, see Enabling SSL Communication Between the Middle Tier and the Hardware Load Balancer in Oracle Fusion Middleware Enterprise Deployment Guide for Oracle SOA Suite.
The examples in these sections show how to perform tasks for the Oracle SOA Suite enterprise topology shown in Figure 4-2.
Note:
When you set up the Oracle SOA Suite enterprise topology shown in Figure 4-2 as the production site for a Disaster Recovery topology, you must use the physical host names shown in Table 3-1 for the production site hosts instead of the host names shown in Figure 4-2.
The steps in this section must be performed on the application tier hosts on which Oracle WebLogic Server is installed.
This section includes the following topics:
The utils.ImportPrivateKey utility.
The keytool utility.
Learn how to generate self-signed certificates.
To generate self-signed certificates:
Import certificates into your key store with the utils.ImportPrivateKey utility.
To import certificates into a key store using the utils.ImportPrivateKey utility:
Validate your standby site.
To validate a standby site:
Learn how to create an asymmetric Oracle Fusion Middleware Disaster Recovery topology.
An asymmetric topology is a disaster recovery configuration where the production and standby sites differ. In most asymmetric Disaster Recovery topologies, the standby site differs from the production site in that it has fewer resources than the production site.
Ensure that you understand how to set up a symmetric topology presented earlier in this document. Many of the concepts used to set up a symmetric topology are also valid when setting up an asymmetric topology.
This section includes the following topics:
Create an asymmetric standby site for your Oracle Fusion Middleware Disaster Recovery topology.
The production site is assumed to be the Oracle SOA Suite enterprise deployment shown in Figure 4-2. An asymmetric standby site differs from the production site.
To create an asymmetric standby site:
Design the production site and the standby site. Determine the resources that will be necessary at the standby site to ensure acceptable performance when the standby site assumes the production role.
Note:
The ports for the standby site instances must use the same port numbers as the peer instances at the production site. Therefore, ensure that all the port numbers that will be required at the standby site are available (not in use at the standby site).
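A quick way to confirm that the required ports are not in use on a standby host is to attempt a local TCP connection to each one. The sketch below assumes a bash shell (for /dev/tcp) and uses hypothetical example ports; substitute the actual port numbers from your production site:

```shell
# Sketch: check whether candidate standby-site ports are already in use on
# this host. The port list is an example; a successful connect means in use.
REPORT=""
for port in 7001 8001 7777; do
  if (exec 3<>"/dev/tcp/127.0.0.1/$port") 2>/dev/null; then
    REPORT="$REPORT$port=in-use "
  else
    REPORT="$REPORT$port=free "
  fi
done
echo "$REPORT"
```

Any port reported as in use must be freed (or the conflicting service moved) before configuring the standby instances, since the standby ports must match the production peers exactly.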
Create the Oracle Fusion Middleware Disaster Recovery production site by performing these operations:
Create volumes on the production site's shared storage system for the Oracle Fusion Middleware instances that will be installed for the production site. See Designing Directory Structure and Volumes.
Create mount points and symbolic links on the production site hosts to the Oracle home directories for the Oracle Fusion Middleware instances on the production site's shared storage system volumes. Note that symbolic links are required only in cases where the storage system does not guarantee consistent replication across multiple volumes;
For more details about symbolic links, see Storage Replication.
For more information about volume design, see Recommended Volume Design for Oracle SOA Suite.
Create mount points and symbolic links on the production site hosts to the Oracle Central Inventory directories for the Oracle Fusion Middleware instances on the production site's shared storage system volumes. Note that symbolic links are required only in cases where the storage system does not guarantee consistent replication across multiple volumes;
For more details about symbolic links, see Storage Replication.
For more information about the Oracle Central Inventory directories, see Oracle Home and Oracle Inventory.
Create mount points and symbolic links on the production site hosts to the static HTML pages directories for the Oracle HTTP Server instances on the production site's shared storage system volumes, if applicable. Note that symbolic links are required only in cases where the storage system does not guarantee consistent replication across multiple volumes;
For more details about symbolic links, see Storage Replication.
Install the Oracle Fusion Middleware instances for the production site on the volumes in the production site's shared storage system. See Creating a Production Site for the Oracle SOA Suite Topology.
Create the same volumes with the same file and directory privileges on the standby site's shared storage system as you created for the Oracle Fusion Middleware instances on the production site's shared storage system. This step is critical because it enables you to use storage replication later to create the peer Oracle Fusion Middleware instance installations for the standby site instead of installing them using Oracle Universal Installer.
Note:
When you configure storage replication, ensure that all the volumes you set up on the production site's shared storage system are replicated to the same volumes on the standby site's shared storage system.
Even though some of the instances and hosts at the production site may not exist at the standby site, you must configure storage replication for all the volumes set up for the production site's Oracle Fusion Middleware instances.
Configure the shared storage to enable storage replication between the production site's shared storage system and the standby site's shared storage system. Configure storage replication to asynchronously copy the volumes in the production site's shared storage system to the standby site's shared storage system.
Create the initial baseline snapshot copy of the production site shared storage system to set up the replication between the production site and standby site shared storage systems. Create the initial baseline snapshot and subsequent snapshot copies using asynchronous replication mode. After the baseline snapshot copy is performed, validate that all the directories for the standby site volumes have the same contents as the directories for the production site volumes. Refer to the documentation for your shared storage vendor for information about creating the initial snapshot and enabling storage replication between the production site and standby site shared storage systems.
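The validation that standby volume contents match the production volumes can be done with a recursive directory comparison wherever both trees are visible (or by comparing checksums gathered on each site). A minimal sketch, with hypothetical mount points defaulted to empty temporary directories for illustration:

```shell
# Sketch: validate that a replicated standby volume matches its production
# peer after the baseline snapshot. PROD/STBY are placeholder mount points.
PROD="${PROD:-$(mktemp -d)}"
STBY="${STBY:-$(mktemp -d)}"

# diff -rq reports per-file differences without dumping file contents.
if diff -rq "$PROD" "$STBY" >/dev/null 2>&1; then
  RESULT="volumes match"
else
  RESULT="volumes differ"
fi
echo "$RESULT"
```

For large volumes, comparing vendor-provided snapshot checksums is faster than a full tree diff; consult your shared storage documentation.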
After the baseline snapshot has been taken, perform these steps for the Oracle Fusion Middleware instances for the standby site hosts:
Set up a mount point directory on the standby site host to the Oracle home directory for the Oracle Fusion Middleware instance on the standby site's shared storage system. The mount point directory that you set up for the peer instance on the standby site host must be the same as the mount point directory that you set up for the instance on the production site host.
Set up a symbolic link on the standby site host to the Oracle home directory for the Oracle Fusion Middleware instance on the standby site's shared storage system. Note that symbolic links are required only in cases where the storage system does not guarantee consistent replication across multiple volumes;
For more details about symbolic links, see Storage Replication. The symbolic link that you set up for the peer instance on the standby site host must be the same as the symbolic link that you set up for the instance on the production site host.
Set up a mount point directory on the standby site host to the Oracle Central Inventory directory for the Oracle Fusion Middleware instance on the standby site's shared storage system. The mount point directory that you set up for the peer instance on the standby site host must be the same as the mount point directory that you set up for the instance on the production site host.
Set up a symbolic link on the standby site host to the Oracle Central Inventory directory for the Oracle Fusion Middleware instance on the standby site's shared storage system. Note that symbolic links are required only in cases where the storage system does not guarantee consistent replication across multiple volumes;
For more details about symbolic links, see Storage Replication. The symbolic link you set up for the peer instance on the standby site host must be the same as the symbolic link that you set up for the instance on the production site host.
Set up a mount point directory on the standby site host to the Oracle HTTP Server static HTML pages directory for the Oracle HTTP Server instance on the standby site's shared storage system. The mount point directory that you set up for the peer instance on the standby site host must be the same as the mount point directory that you set up for the instance on the production site host.
Set up a symbolic link on the standby site host to the Oracle HTTP Server static HTML pages directory for the Oracle HTTP Server instance on the standby site's shared storage system. Note that symbolic links are required only in cases where the storage system does not guarantee consistent replication across multiple volumes; see Storage Replication for more details about symbolic links. The symbolic link that you set up for the peer instance on the standby site host must be the same as the symbolic link that you set up for the instance on the production site host.
At this point, the Oracle Fusion Middleware instance installations for the production site have been replicated to the standby site. At the standby site, all of the following are true:
The Oracle Fusion Middleware instances are installed into the same Oracle home directories on the same volumes as at the production site, and the hosts use the same mount point directories and symbolic links for the Oracle home directories as at the production site. Note that symbolic links are required only in cases where the storage system does not guarantee consistent replication across multiple volumes; see Storage Replication for more details about symbolic links.
The Oracle Central Inventory directories are located in the same directories on the same volumes as at the production site, and the hosts use the same mount point directories and symbolic links for the Oracle Central Inventory directories as at the production site. Note that symbolic links are required only in cases where the storage system does not guarantee consistent replication across multiple volumes; see Storage Replication for more details about symbolic links.
The Oracle HTTP Server static HTML pages directories are located in the same directories on the same volumes as at the production site, and the hosts use the same mount point directories and symbolic links for the Oracle HTTP Server static HTML pages directories as at the production site. Note that symbolic links are required only in cases where the storage system does not guarantee consistent replication across multiple volumes; see Storage Replication for more details about symbolic links.
The same ports are used for the standby site Oracle Fusion Middleware instances as were used for the same instances at the production site.
To further create an asymmetric standby with fewer hosts and instances, see the following topic:
Create an asymmetric standby site that has fewer hosts and Oracle Fusion Middleware instances than the production site.
The production site for this Oracle Fusion Middleware Disaster Recovery topology is assumed to be the Oracle SOA Suite enterprise deployment shown in Figure 4-2. Setting Up a Site through Designing Directory Structure and Volumes describe how to set up this production site and the volumes for its shared storage system, and how to create the necessary mount points.
Figure 4-4 shows the asymmetric standby site for the production site shown in Figure 4-2.
Figure 4-4 An Asymmetric Standby Site with Fewer Hosts and Instances
The Oracle SOA Suite asymmetric standby site shown in Figure 4-4 has fewer hosts and instances than the Oracle SOA Suite production site shown in Figure 4-2.
The hosts WEBHOST2 and SOAHOST2 and the instances on those hosts exist at the production site in Figure 4-2, but these hosts and their instances do not exist at the asymmetric standby site in Figure 4-4. The standby site therefore has fewer hosts and fewer instances than the production site.
It is important to ensure that this asymmetric standby site will have sufficient resources to provide adequate performance when it assumes the production role.
When you follow the steps in Creating an Asymmetric Standby Site to set up this asymmetric standby site, the standby site should be properly configured to assume the production role.
To set up the asymmetric standby site correctly, create the same volumes and consistency groups on the standby site shared storage as you did on the production site shared storage. For example, for the Oracle SOA Suite deployment, the volume design recommendations in Table 4-1 and the consistency group recommendations in Table 4-2 were used to set up the production site shared storage. Use these same volume design recommendations and consistency group recommendations that you used for the production site shared storage to set up the asymmetric standby site shared storage.
Note that at an asymmetric standby site, some hosts that exist at the production site do not exist at the standby site. For example, at the asymmetric standby site for Oracle SOA Suite shown in Figure 4-4, WEBHOST2 and SOAHOST2 do not exist; therefore, it is neither possible nor necessary to create mount points on these hosts to the standby site shared storage volumes.
Validate your standby site.
To validate the standby site:
Learn how to operate and administer your Oracle Fusion Middleware Disaster Recovery topology.
This section includes the following topics:
Use the rsync utility in test environments to replicate the middle tier file system data from a production site host to a standby site peer host in an Oracle Fusion Middleware Disaster Recovery topology.
Learn how to force a synchronization of the production and standby sites when you introduce a change in the middle tier at the production site.
In normal operations, the standby site shared storage receives snapshots transferred periodically from the production site shared storage. After the snapshots are applied, the standby site shared storage will include all the data up to and including the data contained in the last snapshot transferred from the production site before the failover or switchover.
Be sure to force a synchronization when you introduce a change to the middle tier at the production site, such as when you deploy a new application. Follow the vendor-specific instructions to force a synchronization using storage replication technology.
Database synchronization in an Oracle Fusion Middleware Disaster Recovery topology is managed by Oracle Data Guard.
A switchover operation transitions the standby site to the production role.
This operation is needed when you plan to take down the production site (for example, to perform maintenance) and make the current standby site the production site.
To perform a switchover operation:
At this point, the former standby site becomes the new production site, and you can perform maintenance at the original production site. After you have carried out the maintenance of the original production site, you can use it as either the production site or the standby site.
Note:
This note is applicable for BI-specific systems only.
After a switchover operation, the creation of an Essbase cube with CDS may fail with an error like the following:
oracle.essbase.cds.util.CDSException: oracle.essbase.cds.util.CDSException: java.sql.SQLException: ORA-25153:
Identify the temporary tablespaces using a select statement like the following (where BIS17V1 is the Oracle Business Intelligence RCU prefix):
select username,temporary_tablespace from dba_users where username like 'BIS17V1%'
Assume that the above command returns the following list of temporary tablespaces:
USERNAME             TEMPORARY_TABLESPACE
BIS17V1_IAU_VIEWER   BIS17V1_IAS_TEMP
BIS17V1_STB          BIS17V1_IAS_TEMP
BIS17V1_IAU_APPEND   BIS17V1_IAS_TEMP
BIS17V1_MDS          BIS17V1_IAS_TEMP
BIS17V1_IAU          BIS17V1_IAS_TEMP
BIS17V1_BIPLATFORM   BIS17V1_IAS_TEMP
BIS17V1_OPSS         BIS17V1_IAS_TEMP
After the switchover, drop the tablespace BIS17V1_IAS_TEMP, including its contents and datafiles. Then create the temporary tablespace BIS17V1_IAS_TEMP with a tempfile in a location such as /work/primy/oradata/stnby/BIS17V1_IAS_TEMP.dbf, with a size of 250 MB.
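The drop and re-create steps above correspond to SQL statements like the following sketch, run as a DBA against the new primary (the tempfile path and size are the example values from the text; adjust them for your environment):

```sql
DROP TABLESPACE BIS17V1_IAS_TEMP INCLUDING CONTENTS AND DATAFILES;

CREATE TEMPORARY TABLESPACE BIS17V1_IAS_TEMP
  TEMPFILE '/work/primy/oradata/stnby/BIS17V1_IAS_TEMP.dbf' SIZE 250M;
```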
Issue the following alter commands (this is where you use the list of temporary tablespaces identified earlier):
alter user BIS17V1_OPSS temporary tablespace BIS17V1_IAS_TEMP ;
alter user BIS17V1_BIPLATFORM temporary tablespace BIS17V1_IAS_TEMP ;
alter user BIS17V1_IAU temporary tablespace BIS17V1_IAS_TEMP ;
alter user BIS17V1_MDS temporary tablespace BIS17V1_IAS_TEMP ;
alter user BIS17V1_IAU_APPEND temporary tablespace BIS17V1_IAS_TEMP ;
alter user BIS17V1_STB temporary tablespace BIS17V1_IAS_TEMP ;
alter user BIS17V1_IAU_VIEWER temporary tablespace BIS17V1_IAS_TEMP ;
To use the original production site as the production site, perform a switchback as explained in Performing a Switchback.
A switchback operation reverts the roles of the current production and standby sites.
To perform a switchback operation:
A failover operation transitions the standby site to the production role when the production site becomes unavailable.
To perform a failover operation:
Note:
After a failover operation, the creation of an Essbase cube with CDS may fail with an error like the following:
oracle.essbase.cds.util.CDSException: oracle.essbase.cds.util.CDSException: java.sql.SQLException: ORA-25153:
Identify the temporary tablespaces using a select statement like the following (where BIS17V1 is the Oracle Business Intelligence RCU prefix):
select username,temporary_tablespace from dba_users where username like 'BIS17V1%'
Assume that the above command returns the following list of temporary tablespaces:
USERNAME             TEMPORARY_TABLESPACE
BIS17V1_IAU_VIEWER   BIS17V1_IAS_TEMP
BIS17V1_STB          BIS17V1_IAS_TEMP
BIS17V1_IAU_APPEND   BIS17V1_IAS_TEMP
BIS17V1_MDS          BIS17V1_IAS_TEMP
BIS17V1_IAU          BIS17V1_IAS_TEMP
BIS17V1_BIPLATFORM   BIS17V1_IAS_TEMP
BIS17V1_OPSS         BIS17V1_IAS_TEMP
After the failover, drop the tablespace BIS17V1_IAS_TEMP, including its contents and datafiles. Then create the temporary tablespace BIS17V1_IAS_TEMP with a tempfile in a location such as /work/primy/oradata/stnby/BIS17V1_IAS_TEMP.dbf, with a size of 250 MB.
Issue the following alter commands (this is where you use the list of temporary tablespaces identified earlier):
alter user BIS17V1_OPSS temporary tablespace BIS17V1_IAS_TEMP ;
alter user BIS17V1_BIPLATFORM temporary tablespace BIS17V1_IAS_TEMP ;
alter user BIS17V1_IAU temporary tablespace BIS17V1_IAS_TEMP ;
alter user BIS17V1_MDS temporary tablespace BIS17V1_IAS_TEMP ;
alter user BIS17V1_IAU_APPEND temporary tablespace BIS17V1_IAS_TEMP ;
alter user BIS17V1_STB temporary tablespace BIS17V1_IAS_TEMP ;
alter user BIS17V1_IAU_VIEWER temporary tablespace BIS17V1_IAS_TEMP ;
To use the original production site as the production site again, perform a switchback as explained in Performing a Switchback.
Learn how to create a clone of the read-only standby site shared storage and use it to test the standby site.
A typical Oracle Fusion Middleware Disaster Recovery configuration uses:
Storage replication to copy Oracle Fusion Middleware middle tier file systems and data from the production site shared storage to the standby site shared storage. During normal operation, the production site is active and the standby site is passive. When the production site is active, the standby site is passive and the standby site shared storage is in read-only mode; the only write operations made to the standby site shared storage are the storage replication operations from the production site shared storage to the standby site shared storage.
Oracle Data Guard to copy database data for the production site Oracle databases to the standby databases at standby site. By default, the production site databases are active and the standby databases at the standby site are passive. The standby databases at the standby site are in Managed Recovery mode while the standby site is in the standby role (is passive). When the production site is active, the only write operations made to the standby databases are the database synchronization operations performed by Oracle Data Guard.
The standby site assumes the production role when the production site becomes unavailable. If the current production site becomes unavailable unexpectedly, then a failover operation (described in Performing a Failover) is performed to enable the standby site to assume the production role. Or, if the current production site is taken down intentionally (for example, for planned maintenance), then a switchover operation (described in Performing a Switchover) is performed to enable the standby site to assume the production role.
The usual method of testing a standby site is to shut down the current production site and perform a switchover operation to enable the standby site to assume the production role. However, some enterprises may want to perform periodic testing of their Disaster Recovery standby site without shutting down the current production site and performing a switchover operation.
Another method to test the standby site is to create a clone of the read-only standby site shared storage and then use the cloned standby site shared storage for testing.
To use this alternate testing method:
Use the cloning technology provided by the shared storage vendor to create a clone of the standby site's read-only volumes on the shared storage at the standby site. Ensure that the cloned standby site volumes are writable. If you want to test the standby site just once, then this can be a one-time clone operation. However, if you want to test the standby site regularly, you can set up periodic cloning of the standby site read-only volumes to the standby site's cloned read/write volumes.
Perform a backup of the standby site databases, then modify the Oracle Data Guard replication between the production site and standby site databases.
For 10.2 and later databases, follow these steps to establish a snapshot standby database:
If you do not have a flash recovery area, set one up.
Cancel Redo Apply:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
Create a guaranteed restore point:
SQL> CREATE RESTORE POINT standbytest GUARANTEE FLASHBACK DATABASE;
Archive the current logs at the primary (production) site:
SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;
Defer the standby site destination that you will activate:
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=DEFER;
Activate the target standby database:
SQL> ALTER DATABASE ACTIVATE STANDBY DATABASE;
Mount the database with the Force option if the database was opened as read-only:
SQL> STARTUP MOUNT FORCE;
Lower the protection mode and open the database:
SQL> ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PERFORMANCE;
SQL> ALTER DATABASE OPEN;
Use Oracle Data Guard database recovery procedures to bring the standby databases online.
On the standby site computers, modify the mount commands to point to the volumes on the standby site's cloned read/write shared storage by following these steps:
Unmount the read-only shared storage volumes.
Mount the cloned read/write volumes at the same mount point.
Before testing the standby site, modify the host name resolution method for the computers that will be used to perform the testing to ensure that the host names point to the standby site computers and not the production site computers. For example, on a Linux computer, change the /etc/hosts file to point to the virtual IP of the load balancer for the standby site.
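The /etc/hosts change can be staged safely by editing a copy first and installing it on the test clients only. In this sketch, the VIP address and host names are hypothetical examples, and the edited file is written to a temporary path rather than directly to /etc/hosts:

```shell
# Sketch: stage an /etc/hosts override that points test clients at the
# standby site's load balancer VIP. VIP and the host names are placeholders.
VIP=10.0.2.15
OUT="${OUT:-$(mktemp)}"

cp /etc/hosts "$OUT"
for name in soa.example.com admin.example.com; do
  printf '%s %s\n' "$VIP" "$name" >> "$OUT"
done
# Review "$OUT", then install it as /etc/hosts on the test clients only.
```

Remember to revert this change after testing, so the clients resolve the production site again (see the corresponding step later in this section).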
Perform the standby site testing.
After you complete the standby site testing, follow these steps to begin using the original production site as the production site again:
Modify the mount commands on the standby site computers to point to the volumes on the standby site's read-only shared storage. In other words, reset the mount commands back to what they were before the testing was performed.
Unmount the cloned read/write shared storage volume.
Mount the read-only shared storage volumes.
At this point, the mount commands are reset to what they were before the standby site testing was performed.
Configure Oracle Data Guard to perform replication between the production site databases and standby databases at the standby site. Performing this configuration puts the standby database into Managed Recovery mode again:
For Oracle Database 10.2 and later, follow these steps:
Revert the activated database back to a physical standby database:
SQL> STARTUP MOUNT FORCE;
SQL> FLASHBACK DATABASE TO RESTORE POINT standbytest;
SQL> ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
SQL> STARTUP MOUNT FORCE;
Restart managed recovery:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
Reenable the standby destination and switch logs:
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE;
For Oracle Database 12c, set up the replication again by following the steps in Managing Physical and Snapshot Standby Databases .
Before using the original production site again, modify the host name resolution method for the computers that will be used to access the production site to ensure that the host names point to the production site computers and not the standby site computers. For example, on a Linux computer, change the /etc/hosts file to point to the virtual IP of the load balancer for the production site.
Use the rsync utility in test environments to replicate the middle tier file system data from a production site host to a standby site peer host in an Oracle Fusion Middleware Disaster Recovery topology.
An alternative to replicating middle tier components in test environments is to use the rsync utility (which performs peer-to-peer file copy) to replicate middle tier file system data from a production site host to a standby site peer host in an Oracle Fusion Middleware Disaster Recovery topology. The use of the rsync utility is explained in the context of symmetric topologies.
Ensure that you are familiar with storage replication and Oracle Data Guard in an Oracle Fusion Middleware Disaster Recovery topology, because there are many similarities between using storage replication and using the rsync utility for disaster protection and disaster recovery of your Oracle Fusion Middleware components.
Note:
Note the following important differences between using storage replication technologies and using the rsync utility to replicate middle tier file systems:
When using storage replication, you can roll changes back to the point in time when any previous snapshot was taken at the production site.
When using the rsync utility, replicated production site data overwrites the standby site data, and you cannot roll back a replication.
When using storage replication, the volume that you set up for each host cluster in the shared storage systems ensures data consistency for that host cluster across the production site's shared storage system and the standby site's shared storage system.
When using the rsync utility, data consistency is not guaranteed.
Because of these differences, the rsync utility is supported in test environments only, not in production environments.
This section includes the following topic:
Use the rsync utility and Oracle Data Guard in your Oracle Fusion Middleware Disaster Recovery topology.
Using rsync and Oracle Data Guard in Oracle Fusion Middleware Disaster Recovery Topologies
Learn how to use the UNIX rsync utility and Oracle Data Guard in your Oracle Fusion Middleware Disaster Recovery topology.
Note:
For information about how to set up Oracle Data Guard for Oracle database, see Database Considerations.
The following sections describe how to use the rsync utility and Oracle Data Guard to protect and force synchronization between your production and standby sites in an Oracle Fusion Middleware Disaster Recovery topology:
Using the rsync utility to protect and recover your Oracle Fusion Middleware middle tier components.
Performing failover or switchover operations when using the rsync utility.
Use the UNIX rsync utility to protect and recover your Oracle Fusion Middleware middle tier components.
To use the rsync utility:
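The core of the replication is a single one-way rsync invocation per middle tier volume. The sketch below only builds and prints the command (the source path and destination host are hypothetical; and as noted above, this approach is for test environments only):

```shell
# Sketch: one-way rsync replication of a middle tier volume from the
# production host to its standby peer. SRC and DEST are placeholders;
# DEST is typically a remote host:path reachable over SSH.
SRC="${SRC:-/u01/oracle/}"
DEST="${DEST:-stbyhost:/u01/oracle}"

# -a preserves permissions/ownership/times, -z compresses in transit,
# --delete mirrors removals so the standby copy does not accumulate stale files.
CMD="rsync -az --delete $SRC $DEST"
echo "$CMD"   # run from the production host for each replicated volume
```

Note the trailing slash on SRC: with rsync, "/u01/oracle/" copies the directory contents into DEST, while "/u01/oracle" would create an extra "oracle" level under it.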
Learn how to perform failover or switchover operations when using the rsync utility.
To perform a failover or switchover from the production site to the standby site when using rsync:
To use the original production site as the new production site, perform the preceding steps again but configure the rsync replications to go in the original direction (from the original production site to the original standby site).
Oracle Site Guard orchestrates switchover and failover between two disaster recovery sites.
Oracle Site Guard:
Ensures high availability, data protection, and disaster recovery for enterprise data.
Performs operations such as switchover and failover. If the primary site becomes unavailable because of a planned or unplanned outage, initiate a switchover or failover using Oracle Site Guard.
For more information about how to use Oracle Site Guard, see Oracle Site Guard Administrator's Guide.
Apply an Oracle Fusion Middleware patch set to upgrade the Oracle homes that participate in an Oracle Fusion Middleware Disaster Recovery site.
It is assumed that the Oracle Central Inventory for any Oracle Fusion Middleware instance that you are patching is located on the production site shared storage, so that the Oracle Central Inventory for the patched instance can be replicated to the standby site.
To apply an Oracle Fusion Middleware patch:
Note:
Patches must be applied only at the production site for an Oracle Fusion Middleware 12c Disaster Recovery topology. If a patch is for an Oracle Fusion Middleware instance or for the Oracle Central Inventory, the patch will be copied when the production site shared storage is replicated to the standby site shared storage. A synchronization operation should be performed when a patch is installed at the production site.
Similarly, if a patch is installed for a production site database, Oracle Data Guard will copy the patch to the standby database at the standby site when a synchronization is performed.