Oracle Data Guard Concepts and Administration
Release 2 (9.2)

Part Number A96653-02

10 Data Guard Scenarios

This chapter provides a collection of typical scenarios you might encounter while administering your Data Guard configuration. Each scenario is presented as a detailed step-by-step example that can be adapted to your specific environment. Table 10-1 lists each of the scenarios presented in this chapter.

Table 10-1 Data Guard Scenarios

Reference     Scenario
Section 10.1  Choosing the Best Available Standby Database for a Role Transition
Section 10.2  Using a Physical Standby Database with a Time Lag
Section 10.3  Switching Over to a Physical Standby Database That Has a Time Lag
Section 10.4  Recovering from a Network Failure
Section 10.5  Recovering After the NOLOGGING Clause Is Specified

10.1 Choosing the Best Available Standby Database for a Role Transition

Every standby database is associated with one and only one primary database. A single primary database can, however, support multiple physical or logical standby databases. This scenario illustrates how to determine the information you need to choose the best available standby database for a failover or switchover operation.

For most role transitions (failovers or switchovers), if the configuration contains physical standby databases, Oracle recommends that you perform the role transition using the best available physical standby database. This is recommended because:

Because of these limitations, a logical standby database should be considered as the target for a role transition only in the following special situations:

Once you determine whether to use a physical or a logical standby database, the specific standby database you select as the target for the role transition depends on how much of the recent primary database redo data is available at the standby location and how much of it has been applied to the standby database. Because the primary database remains accessible during a switchover, no data is lost, and the choice of standby database affects only the time required to complete the switchover. For a failover, however, the choice of standby database might involve a tradeoff between additional risk of data loss and the time required to transition the standby database to the primary role.

10.1.1 Example: Best Physical Standby Database for a Failover Operation

In a disaster, the most critical task for the DBA is to determine whether it is quicker and safer to repair the primary database or to fail over to a standby database. When you decide that a failover is necessary and multiple physical standby databases are configured, you must choose which physical standby database is the best target for the failover operation. Although many environmental factors can affect which standby database is the best choice, this scenario assumes those factors are equal in order to emphasize data loss assessment.

This scenario begins with a Data Guard configuration consisting of the HQ primary database and two physical standby databases, SAT and NYC. The HQ database is operating in maximum availability protection mode, and the standby databases are each configured with three standby redo logs.


See Also:

Section 5.2 for more information about the maximum availability protection mode for physical standby databases


Table 10-2 provides information about the databases used in this scenario.

Table 10-2 Identifiers for the Physical Standby Database Example

Identifier                             HQ Database           SAT Database                        NYC Database
Location                               San Francisco         Seattle                             New York City
Database name                          HQ                    HQ                                  HQ
Instance name                          HQ                    SAT                                 NYC
Initialization parameter file          hq_init.ora           sat_init.ora                        nyc_init.ora
Control file                           hq_cf1.f              sat_cf1.f                           nyc_cf1.f
Datafile                               hq_db1.f              sat_db1.f                           nyc_db1.f
Online redo log file 1                 hq_log1.f             sat_log1.f                          nyc_log1.f
Online redo log file 2                 hq_log2.f             sat_log2.f                          nyc_log2.f
Standby redo log file 1                hq_srl1.f             sat_srl1.f                          nyc_srl1.f
Standby redo log file 2                hq_srl2.f             sat_srl2.f                          nyc_srl2.f
Standby redo log file 3                hq_srl3.f             sat_srl3.f                          nyc_srl3.f
Primary protection mode                Maximum availability  Not applicable                      Not applicable
Standby protection mode                Not applicable        Maximum availability (synchronous)  Maximum performance (asynchronous)
Network service name (client-defined)  hq_net                sat_net                             nyc_net
Listener                               hq_listener           sat_listener                        nyc_listener


Note:

The New York City database is operating in maximum performance mode because sending redo data synchronously from HQ to NYC might impact the primary database performance during peak workload periods. However, the New York City standby database is still considered a viable candidate for failover operations because it uses standby redo logs.


Assume that an event occurs in San Francisco where the primary site is located, and the primary site is damaged in such a way that it cannot be repaired in a timely manner. You must fail over to one of the standby databases. You cannot assume that the DBA who set up the multiple standby database configuration is available to decide to which standby database to fail over. Therefore, it is imperative to have a disaster recovery plan at each standby site, as well as at the primary site. Each member of the disaster recovery team needs to know about the disaster recovery plan and be aware of the procedures to follow. This scenario identifies the information you need when deciding which standby database should be the target of the failover operation.

One method of conveying information to the disaster recovery team is to include a ReadMe file at each standby site. This ReadMe file is created and maintained by the DBA and should describe how to:

When choosing a standby database, there are two critical considerations: which standby database received the most recent redo data and which standby database has applied the most redo logs.

Follow these steps to determine which standby database is the best candidate for failover when only physical standby databases are in the configuration. Always start with the standby database providing the highest protection level. In this scenario, the Seattle (SAT) standby database provides the highest protection level because it is operating in the maximum availability protection mode.

Step 1 Connect to the SAT physical standby database.

Issue a SQL statement such as the following:

SQL> CONNECT SYS/CHANGE_ON_INSTALL AS SYSDBA;
Step 2 Determine how much current redo data is available in the redo log.

Query the columns in the V$MANAGED_STANDBY view, as shown:

SQL> SELECT THREAD#, SEQUENCE#, BLOCK#, BLOCKS
     2> FROM V$MANAGED_STANDBY WHERE STATUS='RECEIVING';
   THREAD#  SEQUENCE#     BLOCK#     BLOCKS
---------- ---------- ---------- ----------
         1         14        234         16

This standby database received 249 blocks of redo data from the primary database. To compute the number of blocks received, add the BLOCKS column value to the BLOCK# column value, and subtract 1 (because block number 234 is included in the 16 blocks received).
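
If you prefer to have the database compute this value for you, the following variation of the same query (a minimal sketch that uses only the V$MANAGED_STANDBY columns already shown) returns the highest block received for the current sequence; for the values above, it returns 249:

SQL> SELECT THREAD#, SEQUENCE#, BLOCK# + BLOCKS - 1 LAST_BLOCK_RECEIVED
  2> FROM V$MANAGED_STANDBY WHERE STATUS='RECEIVING';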


Note:

Depending on how long the primary database has been unavailable, the previous query might not return any selected rows because the RFS process might detect the network disconnection and terminate itself. If this occurs, it is always best to select a standby database that is configured to receive the redo data in a synchronous manner.


Step 3 Obtain a list of the archived redo logs that were applied or are currently pending application to the SAT database.

Query the V$ARCHIVED_LOG view:

SQL> SELECT SUBSTR(NAME,1,25) FILE_NAME, SEQUENCE#, APPLIED
  2> FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;
FILE_NAME                  SEQUENCE# APP
------------------------- ---------- ---
/oracle/dbs/hq_sat_2.log           2 YES 
/oracle/dbs/hq_sat_3.log           3 YES 
/oracle/dbs/hq_sat_4.log           4 YES
/oracle/dbs/hq_sat_5.log           5 YES 
/oracle/dbs/hq_sat_6.log           6 YES 
/oracle/dbs/hq_sat_7.log           7 YES 
/oracle/dbs/hq_sat_8.log           8 YES 
/oracle/dbs/hq_sat_9.log           9 YES
/oracle/dbs/hq_sat_10.log         10 YES
/oracle/dbs/hq_sat_11.log         11 YES
/oracle/dbs/hq_sat_13.log         13  NO

This output indicates that archived redo log 11 was completely applied to the standby database. (The line for log 11 in the example output is in bold typeface to assist you in reading the output. The actual output will not display bolding.)

Also, notice the gap in the sequence numbers in the SEQUENCE# column. In the example, the gap indicates that the SAT standby database is missing archived redo log number 12.
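
You can also check for a gap without scanning the listing by hand. The following query is a sketch only; on a physical standby database, the V$ARCHIVE_GAP view reports the currently unresolved gap, which in this scenario would be sequence 12:

SQL> SELECT THREAD#, LOW_SEQUENCE#, HIGH_SEQUENCE# FROM V$ARCHIVE_GAP;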

Step 4 Connect to the NYC database to determine if it is more recent than the SAT standby database.

Issue a SQL statement such as the following:

SQL> CONNECT SYS/CHANGE_ON_INSTALL AS SYSDBA;
Step 5 Determine how much current redo data is available in the redo log.

Query the columns in the V$MANAGED_STANDBY view as shown:

SQL> SELECT THREAD#, SEQUENCE#, BLOCK#, BLOCKS
     2> FROM V$MANAGED_STANDBY WHERE STATUS='RECEIVING';
   THREAD#  SEQUENCE#     BLOCK#     BLOCKS
---------- ---------- ---------- ----------
         1         14        157         93

This standby database has also received 249 blocks of redo information from the primary database. To compute the number of blocks received, add the BLOCKS column value to the BLOCK# column value, and subtract 1 (because block number 157 is included in the 93 blocks received).

Step 6 Obtain a list of the archived redo logs that were applied or are currently pending application to the NYC database.

Query the V$ARCHIVED_LOG view:

SQL> SELECT SUBSTR(NAME,1,25) FILE_NAME, SEQUENCE#, APPLIED
  2> FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;
FILE_NAME                  SEQUENCE# APP
------------------------- ---------- ---
/oracle/dbs/hq_nyc_2.log           2 YES 
/oracle/dbs/hq_nyc_3.log           3 YES 
/oracle/dbs/hq_nyc_4.log           4 YES
/oracle/dbs/hq_nyc_5.log           5 YES 
/oracle/dbs/hq_nyc_6.log           6 YES 
/oracle/dbs/hq_nyc_7.log           7 YES 
/oracle/dbs/hq_nyc_8.log           8  NO 
/oracle/dbs/hq_nyc_9.log           9  NO
/oracle/dbs/hq_nyc_10.log         10  NO
/oracle/dbs/hq_nyc_11.log         11  NO
/oracle/dbs/hq_nyc_12.log         12  NO
/oracle/dbs/hq_nyc_13.log         13  NO

This output indicates that archived redo log 7 was completely applied to the standby database. (The line for log 7 in the example output is in bold typeface to assist you in reading the output. The actual output will not display bolding.)

More redo data was received at this location, but less was applied to the standby database.

Step 7 Choose the best target standby database.

In most cases, the physical standby database you choose as a failover target should provide a balance between risk of data loss and time required to perform the role transition. As you analyze this information to make a decision about the best failover candidate in this scenario, consider the following:

Based on your business requirements, choose the best target standby database.

Step 8 Bring the selected standby database to its most current state.

If you chose the SAT database as the best target based on your business requirements, perform the following steps:
  1. Manually retrieve any missing archived redo logs using an operating system utility. (This example uses the UNIX cp command). In this case, the SAT database is missing archived redo log 12. Because the NYC database received this archived redo log, you can copy it from the NYC database to the SAT database, as follows:
    % cp /net/nyc/oracle/dbs/hq_nyc_12.log /net/sat/oracle/dbs/hq_sat_12.log
    
    
  2. Determine if a partial archived redo log exists for the next sequence number. In this example, the next sequence number should be 14. The following UNIX command searches the directory on the SAT database for the presence of an archived redo log named hq_sat_14.log:
    % ls -l /net/sat/oracle/dbs/hq_sat_14.log 
    /net/sat/oracle/dbs/hq_sat_14.log: No such file or directory
    
    

    Because the SAT standby database is using standby redo logs, there should not be any partial archived redo logs.

  3. Register the retrieved archived redo log. (There is no need to stop the log apply services).
    SQL> ALTER DATABASE REGISTER PHYSICAL LOGFILE '/oracle/dbs/hq_sat_12.log';
    
    
  4. Query the V$ARCHIVED_LOG view again to make sure the archived redo logs were successfully applied:
    SQL> SELECT SUBSTR(NAME,1,25) FILE_NAME, SEQUENCE#, APPLIED
      2> FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;
    
    FILE_NAME                  SEQUENCE# APP
    ------------------------- ---------- ---
    /oracle/dbs/hq_sat_2.log           2 YES 
    /oracle/dbs/hq_sat_3.log           3 YES 
    /oracle/dbs/hq_sat_4.log           4 YES
    /oracle/dbs/hq_sat_5.log           5 YES 
    /oracle/dbs/hq_sat_6.log           6 YES 
    /oracle/dbs/hq_sat_7.log           7 YES 
    /oracle/dbs/hq_sat_8.log           8 YES 
    /oracle/dbs/hq_sat_9.log           9 YES
    /oracle/dbs/hq_sat_10.log         10 YES
    /oracle/dbs/hq_sat_11.log         11 YES
    /oracle/dbs/hq_sat_12.log         12 YES
    /oracle/dbs/hq_sat_13.log         13 YES
    
    

If you chose the NYC database as the best target based on your business requirements, perform the following steps:

  1. Determine if a partial archived redo log exists for the next sequence number. In this example, the next sequence number should be 14. The following UNIX command searches the directory on the NYC database for an archived redo log with the next sequence number (hq_nyc_14.log):
    % ls -l /net/nyc/oracle/dbs/hq_nyc_14.log
    /net/nyc/oracle/dbs/hq_nyc_14.log: No such file or directory
    
    

    Because the NYC standby database is using standby redo logs, there should not be any partial archived redo logs.

  2. Start log apply services to apply the most current log:
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
      2> DISCONNECT FROM SESSION;
    
    
  3. Query the V$ARCHIVED_LOG view again to make sure the archived redo logs were successfully applied:
    SQL> SELECT SUBSTR(NAME,1,25) FILE_NAME, SEQUENCE#, APPLIED
      2> FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;
    FILE_NAME                  SEQUENCE# APP
    ------------------------- ---------- ---
    /oracle/dbs/hq_nyc_2.log           2 YES 
    /oracle/dbs/hq_nyc_3.log           3 YES 
    /oracle/dbs/hq_nyc_4.log           4 YES
    /oracle/dbs/hq_nyc_5.log           5 YES 
    /oracle/dbs/hq_nyc_6.log           6 YES 
    /oracle/dbs/hq_nyc_7.log           7 YES 
    /oracle/dbs/hq_nyc_8.log           8 YES 
    /oracle/dbs/hq_nyc_9.log           9 YES
    /oracle/dbs/hq_nyc_10.log         10 YES
    /oracle/dbs/hq_nyc_11.log         11 YES
    /oracle/dbs/hq_nyc_12.log         12  NO
    /oracle/dbs/hq_nyc_13.log         13  NO
    
    

    Applying the archived redo logs might take some time to complete. Therefore, you must wait until all archived redo logs are designated as applied, as shown:

    SQL> SELECT SUBSTR(NAME,1,25) FILE_NAME, SEQUENCE#, APPLIED
      2> FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;
    
    FILE_NAME                  SEQUENCE# APP
    ------------------------- ---------- ---
    /oracle/dbs/hq_nyc_2.log           2 YES 
    /oracle/dbs/hq_nyc_3.log           3 YES 
    /oracle/dbs/hq_nyc_4.log           4 YES
    /oracle/dbs/hq_nyc_5.log           5 YES 
    /oracle/dbs/hq_nyc_6.log           6 YES 
    /oracle/dbs/hq_nyc_7.log           7 YES 
    /oracle/dbs/hq_nyc_8.log           8 YES 
    /oracle/dbs/hq_nyc_9.log           9 YES
    /oracle/dbs/hq_nyc_10.log         10 YES
    /oracle/dbs/hq_nyc_11.log         11 YES
    /oracle/dbs/hq_nyc_12.log         12 YES
    /oracle/dbs/hq_nyc_13.log         13 YES
    
Step 9 Perform the failover operation.

You are now ready to stop log apply services and fail over the selected physical standby database to the primary role.


See Also:

Section 7.2.2 for additional information about how to perform a failover operation to a physical standby database


10.1.2 Example: Best Logical Standby Database for a Failover Operation

In a disaster in which only logical standby databases are available, the critical task is to determine which logical standby database is the best target for the failover operation. Although many environmental factors can affect which standby database is the best target, this scenario assumes those factors are equal in order to emphasize data loss assessment.

See Also:

Section 5.2 for more information about the maximum availability protection mode for logical standby databases

This scenario starts out with a Data Guard configuration consisting of the HQ primary database and two logical standby databases, SAT and NYC. Table 10-3 provides information about each of these databases.

Table 10-3 Identifiers for Logical Standby Database Example

Identifier                             HQ Database    SAT Database   NYC Database
Location                               San Francisco  Seattle        New York City
Database name                          HQ             SAT            NYC
Instance name                          HQ             SAT            NYC
Initialization parameter file          hq_init.ora    sat_init.ora   nyc_init.ora
Control file                           hq_cf1.f       sat_cf1.f      nyc_cf1.f
Datafile                               hq_db1.f       sat_db1.f      nyc_db1.f
Online redo log file 1                 hq_log1.f      sat_log1.f     nyc_log1.f
Online redo log file 2                 hq_log2.f      sat_log2.f     nyc_log2.f
Database link (client-defined)         hq_link        sat_link       nyc_link
Network service name (client-defined)  hq_net         sat_net        nyc_net
Listener                               hq_listener    sat_listener   nyc_listener

Follow these steps to determine which standby database is the best candidate for failover when only logical standby databases are in the configuration:

Step 1 Connect to the SAT logical standby database.

Issue a SQL statement such as the following:

SQL> CONNECT SYS/CHANGE_ON_INSTALL AS SYSDBA;
Step 2 Determine the highest applied SCN and highest (newest) applicable SCN on the SAT database.

Query the following columns in the DBA_LOGSTDBY_PROGRESS view:

SQL> SELECT APPLIED_SCN, NEWEST_SCN FROM DBA_LOGSTDBY_PROGRESS;

APPLIED_SCN NEWEST_SCN
----------- ----------
     144059     144059
Step 3 Obtain a list of the archived redo logs that were applied or are currently pending application to the SAT database.

Query the DBA_LOGSTDBY_LOG view:

SQL> SELECT SUBSTR(FILE_NAME,1,25) FILE_NAME, SUBSTR(SEQUENCE#,1,4) "SEQ#",
  2> FIRST_CHANGE#, NEXT_CHANGE#, TO_CHAR(TIMESTAMP, 'HH:MI:SS') TIMESTAMP,
  3> DICT_BEGIN BEG, DICT_END END, SUBSTR(THREAD#,1,4) "THR#"
  4> FROM DBA_LOGSTDBY_LOG ORDER BY SEQUENCE#;

FILE_NAME                 SEQ# FIRST_CHANGE# NEXT_CHANGE# TIMESTAM BEG END THR#
------------------------- ---- ------------- ------------ -------- --- --- ----
/oracle/dbs/hq_sat_2.log  2           101579       101588 11:02:57 NO  NO  1
/oracle/dbs/hq_sat_3.log  3           101588       142065 11:02:01 NO  NO  1
/oracle/dbs/hq_sat_4.log  4           142065       142307 11:02:09 NO  NO  1
/oracle/dbs/hq_sat_5.log  5           142307       142739 11:02:47 YES YES 1
/oracle/dbs/hq_sat_6.log  6           142739       143973 12:02:09 NO  NO  1
/oracle/dbs/hq_sat_7.log  7           143973       144042 01:02:00 NO  NO  1
/oracle/dbs/hq_sat_8.log  8           144042       144051 01:02:00 NO  NO  1
/oracle/dbs/hq_sat_9.log  9           144051       144054 01:02:15 NO  NO  1
/oracle/dbs/hq_sat_10.log 10          144054       144057 01:02:20 NO  NO  1
/oracle/dbs/hq_sat_11.log 11          144057       144060 01:02:25 NO  NO  1
/oracle/dbs/hq_sat_13.log 13          144089       144147 01:02:40 NO  NO  1

Notice that for log 11, the SCN of 144059 (recorded in step 2) is between the FIRST_CHANGE# column value of 144057 and the NEXT_CHANGE# column value of 144060. This indicates that log 11 is currently being applied. (The line for log 11 in the example output is in bold typeface to assist you in reading the output. The actual output will not display bolding.) Also notice the gap in the sequence numbers in the SEQ# column; in the example, the gap indicates that SAT database is missing archived redo log 12.
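
If you prefer not to inspect the listing by hand, a query along the following lines (a sketch that joins the two views already used in this scenario) returns the log whose SCN range contains the last applied SCN:

SQL> SELECT L.SEQUENCE#, L.FILE_NAME
  2> FROM DBA_LOGSTDBY_LOG L, DBA_LOGSTDBY_PROGRESS P
  3> WHERE P.APPLIED_SCN BETWEEN L.FIRST_CHANGE# AND L.NEXT_CHANGE# - 1;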

Step 4 Connect to the NYC database.

Issue a SQL statement such as the following:

SQL> CONNECT SYS/CHANGE_ON_INSTALL AS SYSDBA;
Step 5 Determine the highest applied SCN and highest applicable SCN on the NYC database.

Query the following columns in the DBA_LOGSTDBY_PROGRESS view:

SQL> SELECT APPLIED_SCN, NEWEST_SCN FROM DBA_LOGSTDBY_PROGRESS;
APPLIED_SCN NEWEST_SCN
----------- ----------
     143970     144146
Step 6 Obtain a list of the logs that were processed or are currently pending processing on the NYC database.

Issue a SQL statement such as the following:

SQL> SELECT SUBSTR(FILE_NAME,1,25) FILE_NAME, SUBSTR(SEQUENCE#,1,4) "SEQ#", 
  2> FIRST_CHANGE#, NEXT_CHANGE#, TO_CHAR(TIMESTAMP, 'HH:MI:SS')  TIMESTAMP, 
  3> DICT_BEGIN BEG, DICT_END END, SUBSTR(THREAD#,1,4) "THR#" 
  4> FROM DBA_LOGSTDBY_LOG ORDER BY SEQUENCE#;

FILE_NAME                 SEQ# FIRST_CHANGE# NEXT_CHANGE# TIMESTAM BEG END THR#
------------------------- ---- ------------- ------------ -------- --- --- ----
/oracle/dbs/hq_nyc_2.log  2           101579       101588 11:02:58 NO  NO  1
/oracle/dbs/hq_nyc_3.log  3           101588       142065 11:02:02 NO  NO  1
/oracle/dbs/hq_nyc_4.log  4           142065       142307 11:02:10 NO  NO  1
/oracle/dbs/hq_nyc_5.log  5           142307       142739 11:02:48 YES YES 1
/oracle/dbs/hq_nyc_6.log  6           142739       143973 12:02:10 NO  NO  1
/oracle/dbs/hq_nyc_7.log  7           143973       144042 01:02:11 NO  NO  1
/oracle/dbs/hq_nyc_8.log  8           144042       144051 01:02:01 NO  NO  1
/oracle/dbs/hq_nyc_9.log  9           144051       144054 01:02:16 NO  NO  1
/oracle/dbs/hq_nyc_10.log 10          144054       144057 01:02:21 NO  NO  1
/oracle/dbs/hq_nyc_11.log 11          144057       144060 01:02:26 NO  NO  1
/oracle/dbs/hq_nyc_12.log 12          144060       144089 01:02:30 NO  NO  1
/oracle/dbs/hq_nyc_13.log 13          144089       144147 01:02:41 NO  NO  1

Notice that for log 6, the SCN of 143970 (recorded in step 5) is between the FIRST_CHANGE# column value of 142739 and the NEXT_CHANGE# column value of 143973. This indicates that log 6 is currently being applied. (The line for log 6 in the example output is in bold typeface to assist you in reading the output. The actual output will not display bolding.) Also, notice that there are no gaps in the sequence of logs that remain to be processed.

Step 7 Choose the best target standby database.

In most cases, the logical standby database you choose as a failover target should provide a balance between risk of data loss and time required to perform the role transition. As you analyze this information to make a decision about the best failover candidate in this scenario, consider the following:

Based on your business requirements, choose the best target standby database.

Step 8 Bring the selected standby database to its most current state.

If you chose the SAT database as the best target based on your business requirements, perform the following steps:

  1. Manually retrieve any missing archived redo logs using an operating system utility. In this case, the SAT database is missing archived redo log 12. Because the NYC database received this archived redo log, you can copy it from the NYC database to the SAT database, as follows:
    % cp /net/nyc/oracle/dbs/hq_nyc_12.log /net/sat/oracle/dbs/hq_sat_12.log
    
    
  2. Determine if a partial archived redo log exists for the next sequence number. In this example, the next sequence number should be 14. The following UNIX command searches the directory on the SAT database for an archived redo log named hq_sat_14.log:
    % ls -l /net/sat/oracle/dbs/hq_sat_14.log
    -rw-rw----   1 oracle    dbs  333280 Feb 12  1:03 hq_sat_14.log
    
    
  3. Stop log apply services and register both the retrieved archived redo log and the partial archived redo log:
    SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
    SQL> ALTER DATABASE REGISTER LOGICAL LOGFILE '/oracle/dbs/hq_sat_12.log';
    SQL> ALTER DATABASE REGISTER LOGICAL LOGFILE '/oracle/dbs/hq_sat_14.log';
    
    
    
  4. Start log apply services to apply the most current log:
    SQL> ALTER DATABASE START LOGICAL STANDBY APPLY;
    
    
  5. Determine the highest applied SCN on the SAT database by querying the DBA_LOGSTDBY_PROGRESS view to see if the value of the APPLIED_SCN column is equal to the value of the NEWEST_SCN column:
    SQL> SELECT APPLIED_SCN, NEWEST_SCN FROM DBA_LOGSTDBY_PROGRESS;
    
    APPLIED_SCN NEWEST_SCN
    ----------- ----------
         144205     144205
    
    

    Because the SCN values match, you can be assured that there is no longer a delay (lag) between the current log on the primary database and the last log applied to the SAT database.

If you chose the NYC database as the best target based on your business requirements, perform the following steps:

  1. Determine if a partial archived redo log exists for the next sequence number. In this example, the next sequence number should be 14. The following UNIX command searches the directory on the NYC database for an archived redo log named hq_nyc_14.log:
    % ls -l /net/nyc/oracle/dbs/hq_nyc_14.log
    -rw-rw----   1 oracle    dbs  333330 Feb 12  1:03 hq_nyc_14.log
    
    
  2. Stop log apply services and register the partial archived redo log on the NYC database:
    SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
    SQL> ALTER DATABASE REGISTER LOGICAL LOGFILE '/oracle/dbs/hq_nyc_14.log';
    
    
  3. Start log apply services to apply the most current log:
    SQL> ALTER DATABASE START LOGICAL STANDBY APPLY;
    
    
  4. Determine the highest applied SCN on the NYC database by querying the DBA_LOGSTDBY_PROGRESS view to see if the value of the APPLIED_SCN column is equal to the value of the NEWEST_SCN column:
    SQL> SELECT APPLIED_SCN, NEWEST_SCN FROM DBA_LOGSTDBY_PROGRESS;
    
    APPLIED_SCN NEWEST_SCN
    ----------- ----------
         144205     144205
    
    

    Because the SCN values match, you can be sure that there is no longer a delay (lag) between the current log on the primary database and the last log received and applied by the NYC database.

Step 9 Perform the failover.

You are now ready to stop log apply services and fail over the selected logical standby database to the primary role.

See Also:

Section 7.3.2 for additional information on how to perform the failover operation

10.2 Using a Physical Standby Database with a Time Lag

By default, when the standby database is performing managed recovery, it automatically applies redo logs when they arrive from the primary database. But in some cases, you might not want the logs to be applied immediately, because you want to create a time lag between the archiving of a redo log at the primary site and the application of the log at the standby site. A time lag can protect against the transfer of corrupted or erroneous data from the primary site to the standby site.

For example, suppose you run a batch job every night on the primary database. Unfortunately, you accidentally ran the batch job twice, and you did not realize the mistake until the job completed for the second time. Ideally, you need to roll back the database to the point in time before the batch job began. In this situation, a standby database with a time lag could help you recover: you could fail over to the time-delayed standby database and use it as the new primary database.

To create a standby database with a time lag, use the DELAY attribute of the LOG_ARCHIVE_DEST_n initialization parameter in the primary database initialization parameter file. The archived redo logs are still automatically copied from the primary site to the standby site, but the logs are not immediately applied to the standby database. The logs are applied when the specified time interval expires.

This scenario uses a 4-hour time lag and covers the following topics:

This scenario assumes that you are familiar with the procedures for creating a typical standby database; those details are omitted from the steps outlined here.

See Also:

Chapter 3 for details about creating physical standby databases

10.2.1 Establishing a Time Lag on a Physical Standby Database

To create a physical standby database with a time lag, modify the LOG_ARCHIVE_DEST_n initialization parameter on the primary database to set a delay for the standby database. The following is an example of how to add a 4-hour delay to the LOG_ARCHIVE_DEST_n initialization parameter:

SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=stdby DELAY=240';

The DELAY attribute indicates that the archived redo logs at the standby site are not available for recovery until the 4-hour time interval has expired. The time interval (expressed in minutes) starts when the archived redo logs are successfully transmitted to the standby site. The redo information is still sent to the standby database and written to the disk as normal.
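
If you prefer to configure the delay in the initialization parameter file instead of setting it dynamically, an equivalent entry (a sketch that assumes the same stdby service name used above) would look like this:

LOG_ARCHIVE_DEST_2='SERVICE=stdby DELAY=240'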

See Also:

Oracle9i Supplied PL/SQL Packages and Types Reference for a description of using the DBMS_LOGSTDBY package to establish a time lag on a logical standby database.

10.2.2 Failing Over to a Physical Standby Database with a Time Lag

A standby database configured to delay application of redo logs can be used to recover from user errors or data corruptions on the primary database. In most cases, you can query the time-delayed standby database to retrieve the data needed to repair the primary database (for example, to recover the contents of a mistakenly dropped table). In cases where the damage to the primary database is unknown or when the time required to repair the primary database is prohibitive, you can also consider failing over to a time-delayed standby database.
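
For example, to retrieve data from the time-delayed standby database without performing a failover, you might temporarily open it for read-only access. The following is a sketch only; the scott.orders table is hypothetical, and all other sessions must be inactive before managed recovery is resumed:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> ALTER DATABASE OPEN READ ONLY;
SQL> SELECT * FROM scott.orders;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;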

Assume that a backup file was inadvertently applied twice to the primary database and that the time required to repair the primary database is prohibitive. You choose to fail over to a physical standby database for which redo log application is delayed. By doing so, you transition the standby database to the primary role at a point before the problem occurred, but you will likely incur some data loss. The following steps illustrate the process:

  1. Initiate the failover operation by issuing the appropriate SQL statements on the time-delayed physical standby database:
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
    SQL> ALTER DATABASE ACTIVATE PHYSICAL STANDBY DATABASE SKIP STANDBY LOGFILE;
    SQL> SHUTDOWN IMMEDIATE;
    SQL> STARTUP
    
    

    The ACTIVATE statement immediately transitions the standby database to the primary role and makes no attempt to apply any additional redo data that might exist at the standby location. When using this command, you must carefully balance the cost of data loss at the standby location against the potentially extended period of downtime required to fully repair the primary database.

  2. Re-create all other standby databases in the configuration from a copy of this new primary database.

10.3 Switching Over to a Physical Standby Database That Has a Time Lag

All the redo logs are shipped to the standby site as they become available. Therefore, even when a time delay is specified for a standby database, you can make the standby database current by overriding the delay using the ALTER DATABASE RECOVER MANAGED STANDBY DATABASE statement.


Note:

To recover from a logical error, you must perform a failover operation instead of a switchover operation.


The following steps demonstrate how to perform a switchover to a time-delayed physical standby database that bypasses a time lag. For the purposes of this example, assume that the primary database is located in New York, and the standby database is located in Boston.

Step 1 Apply all of the archived redo logs to the original (time-delayed) standby database, bypassing the time lag.

The switchover cannot begin until the standby database applies all of the redo logs. By lifting the delay, you allow the standby database to apply the logs immediately instead of waiting for the specified time lag to expire.

Issue the following SQL statement to lift the delay:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE NODELAY 
  2> DISCONNECT FROM SESSION THROUGH LAST SWITCHOVER;
Step 2 Stop read or update activity on the primary and standby databases.

You must have exclusive database access before beginning a switchover operation. Ask users to log off the primary and standby databases, or query the V$SESSION view to identify users that are connected to the databases and close all open sessions except the SQL*Plus session from which you are going to issue the switchover statement.
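
For example, the following statements are a sketch of how you might identify and, if necessary, terminate the remaining sessions; the '17,5' value is a hypothetical SID and serial number taken from the first query:

SQL> SELECT SID, SERIAL#, USERNAME, PROGRAM FROM V$SESSION
  2> WHERE USERNAME IS NOT NULL;
SQL> ALTER SYSTEM KILL SESSION '17,5';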

See Also:

Oracle9i Database Administrator's Guide for more information on managing users

Step 3 Switch the primary database over to the physical standby role.

On the primary database (in New York), execute the following statement:

SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY 
  2> WITH SESSION SHUTDOWN;

This statement does the following:

Step 4 Shut down and start up the former primary instance without mounting the database.

Execute the following statement on the former primary database (in New York):

SQL> SHUTDOWN NORMAL;
SQL> STARTUP NOMOUNT;
Step 5 Mount the former primary database in the physical standby database role.

Execute the following statement to mount the former primary database (in New York) as a physical standby database:

SQL> ALTER DATABASE MOUNT STANDBY DATABASE;
Step 6 Switch the original standby database to the primary role.

Issue the following SQL statement:

SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY DATABASE;

Step 7 Shut down and restart the new primary database instance.

Issue the following SQL statements:

SQL> SHUTDOWN;
SQL> STARTUP PFILE=Failover.ora;

10.4 Recovering from a Network Failure

The following steps describe how to recover after a network failure.

Step 1 Identify the network failure.

The V$ARCHIVE_DEST view contains the network error and identifies which standby database cannot be reached. On the primary database, execute the following SQL statement for the archived log destination that experienced the network failure. For example:

SQL> SELECT DEST_ID, STATUS, ERROR FROM V$ARCHIVE_DEST WHERE DEST_ID = 2;

DEST_ID    STATUS    ERROR
---------- --------- --------------------------------------------------------
        2  ERROR     ORA-12224: TNS:no listener

The query results show that there are errors archiving to the standby database and that the cause of the error is TNS:no listener. Check whether the listener on the standby site is started; if it is stopped, start it.

Step 2 Prevent the primary database from stalling.

If you cannot solve the network problem quickly, and if the standby database is specified as a mandatory destination, try to prevent the database from stalling by doing one of the following:
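
For example, one option is to temporarily defer the failed destination so that the primary database can continue archiving to its other destinations. This sketch assumes the standby destination is LOG_ARCHIVE_DEST_2, as in the earlier examples:

SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=DEFER;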

Step 3 Archive the current redo log.

On the primary database, archive the current redo log:

SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;

When the network is back up again, log apply services can detect and resolve the archive gaps automatically when the physical standby database resumes managed recovery operations.

10.5 Recovering After the NOLOGGING Clause Is Specified

In some SQL statements, the user has the option of specifying the NOLOGGING clause, which indicates that the database operation is not logged in the redo log. Even though the user specifies the clause, a redo log record is still written to the redo log. However, there is no data associated with this record. This can result in log application or data access errors at the standby site and manual recovery might be required to resume log apply operations.


Note:

To avoid these problems, Oracle Corporation recommends that you always specify the FORCE LOGGING clause in the CREATE DATABASE or ALTER DATABASE statements. See the Oracle9i Database Administrator's Guide.
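
For reference, forcing redo logging for an existing database is a single statement; note that enabling it does not repair operations that already ran with the NOLOGGING clause:

SQL> ALTER DATABASE FORCE LOGGING;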


10.5.1 Recovery Steps for Logical Standby Databases

For logical standby databases, when SQL apply operations encounter a redo log record for an operation performed with the NOLOGGING clause, they skip over the record and continue applying changes from later records. Later, if an attempt is made to access a row that was updated with NOLOGGING in effect, the following error is returned:

ORA-01403: no data found

To recover after the NOLOGGING clause is specified, re-create one or more tables from the primary database, as described in Section 9.1.6.


Note:

In general, use of the NOLOGGING clause is not recommended. Optionally, if you know in advance that operations using the NOLOGGING clause will be performed on certain tables in the primary database, you might want to prevent the application of SQL statements associated with these tables to the logical standby database by using the DBMS_LOGSTDBY.SKIP procedure.
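
A sketch of that approach follows. The SCOTT.NOLOG_TAB table is hypothetical, and SQL apply operations must be stopped before the skip rule is registered:

SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
SQL> EXECUTE DBMS_LOGSTDBY.SKIP('DML', 'SCOTT', 'NOLOG_TAB');
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY;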


10.5.2 Recovery Steps for Physical Standby Databases

When the redo log is copied to the standby site and applied to the physical standby database, a portion of the datafile is unusable and is marked as being unrecoverable. When you either fail over to the physical standby database, or open the standby database for read-only access, and attempt to read the range of blocks that are marked as UNRECOVERABLE, you will see error messages similar to the following:

ORA-01578: ORACLE data block corrupted (file # 1, block # 2521)
ORA-01110: data file 1: '/oracle/dbs/stdby/tbs_1.dbf'
ORA-26040: Data block was loaded using the NOLOGGING option

To recover after the NOLOGGING clause is specified, you need to copy the datafile that contains the unjournaled data from the primary site to the physical standby site. Perform the following steps:

Step 1 Determine which datafiles should be copied.

Follow these steps:

  1. Query the primary database:
    SQL> SELECT NAME, UNRECOVERABLE_CHANGE# FROM V$DATAFILE;
    NAME                                                  UNRECOVERABLE
    ----------------------------------------------------- -------------
    /oracle/dbs/tbs_1.dbf                                       5216
    /oracle/dbs/tbs_2.dbf                                          0
    /oracle/dbs/tbs_3.dbf                                          0
    /oracle/dbs/tbs_4.dbf                                          0
    4 rows selected.
    
  2. Query the standby database:
    SQL> SELECT NAME, UNRECOVERABLE_CHANGE# FROM V$DATAFILE;
    NAME                                                  UNRECOVERABLE
    ----------------------------------------------------- -------------
    /oracle/dbs/stdby/tbs_1.dbf                                 5186
    /oracle/dbs/stdby/tbs_2.dbf                                    0
    /oracle/dbs/stdby/tbs_3.dbf                                    0
    /oracle/dbs/stdby/tbs_4.dbf                                    0
    4 rows selected.
    
  3. Compare the query results of the primary and standby databases.

    Compare the value of the UNRECOVERABLE_CHANGE# column in both query results. If the value of the UNRECOVERABLE_CHANGE# column in the primary database is greater than the same column in the standby database, then the datafile needs to be copied from the primary site to the standby site.

    In this example, the value of the UNRECOVERABLE_CHANGE# in the primary database for the tbs_1.dbf datafile is greater, so you need to copy the tbs_1.dbf datafile to the standby site.

Step 2 On the primary site, back up the datafile that you need to copy to the standby site as follows.

Issue the following SQL statements:

SQL> ALTER TABLESPACE system BEGIN BACKUP;
SQL> EXIT;
% cp tbs_1.dbf /backup
SQL> ALTER TABLESPACE system END BACKUP;
Step 3 On the standby database, restart managed recovery.

Issue the following SQL statement:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM
  2> SESSION;

You might get the following error messages (possibly in the alert log) when you try to restart managed recovery:

ORA-00308: cannot open archived log 'standby1'
ORA-27037: unable to obtain file status
SVR4 Error: 2: No such file or directory
Additional information: 3
ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-01152: file 1 was not restored from a sufficiently old backup
ORA-01110: data file 1: '/oracle/dbs/stdby/tbs_1.dbf'

If you get the ORA-00308 error, cancel recovery by issuing the following statement from another terminal window:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

These error messages are returned when one or more logs in the archive gap have not been successfully applied. If you receive these errors, manually resolve the gaps and repeat step 3.

See Also:

Section B.3 for information on manually resolving an archive gap

10.5.3 Determining If a Backup Is Required After Unrecoverable Operations

If you performed unrecoverable operations on your primary database, determine if a new backup operation is required by following these steps:

  1. Query the V$DATAFILE view on the primary database to determine the system change number (SCN) or the time at which the Oracle database server generated the most recent invalidated redo data.
  2. Issue the following SQL statement on the primary database to determine if you need to perform another backup:
    SELECT UNRECOVERABLE_CHANGE#, 
           TO_CHAR(UNRECOVERABLE_TIME, 'mm-dd-yyyy hh:mi:ss') 
    FROM   V$DATAFILE;
    
    
  3. If the query in the previous step reports an unrecoverable time for a datafile that is more recent than the time when the datafile was last backed up, then make another backup of the datafile in question, as illustrated in the example at the end of this section.

    See Also:

    V$DATAFILE in Chapter 14 and the Oracle9i Database Reference for more information about the V$DATAFILE view
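
For example, using the same hot-backup style shown in Section 10.5.2, and assuming the affected datafile is tbs_1.dbf in the system tablespace as in the earlier example, a fresh backup might look like this:

SQL> ALTER TABLESPACE system BEGIN BACKUP;
SQL> HOST cp /oracle/dbs/tbs_1.dbf /backup
SQL> ALTER TABLESPACE system END BACKUP;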