Oracle8i Standby Database Concepts and Administration
Release 2 (8.1.6)

A76995-01


5
Standby Database Scenarios

This chapter describes the following standby database scenarios:

Scenario 1: Creating a Standby Database on the Same Host
Scenario 2: Creating a Standby Database on a Remote Host

Scenario 1: Creating a Standby Database on the Same Host

This scenario describes the creation of a standby database STANDBY1 on the same host as the primary database PROD1. The host is a UNIX machine with three file systems, each mounted on a separate disk configuration on a different controller. By placing the standby database on a different file system from the primary database, you protect the primary database from a hard disk failure. By running the same-host standby database in managed recovery mode, you can keep it continuously up-to-date.

After you set up the standby database on the local host, you plan to create a standby database on a remote host for total disaster protection. In this way, even if all disks of the primary database crash or are destroyed in a disaster, you can fail over to the remote standby database and keep the database open.

Step 1: Plan the Standby Database.

Because the host uses three file systems, each on its own set of disks with its own controller, you decide to maintain the primary database files on the first file system, the standby database files on the second file system, and the ORACLE_HOME binaries on the third file system. If the primary database disks fail, you can switch to the standby database; if the ORACLE_HOME disks fail, you can switch to the remote standby database.

To host the standby database on the same machine as the primary database, you must set several initialization parameters in the standby database initialization parameter file, notably CONTROL_FILES, LOCK_NAME_SPACE, DB_FILE_NAME_CONVERT, and LOG_FILE_NAME_CONVERT (see Table 5-1 in Step 5).

Fortunately, most (but not all) of the primary database datafiles and redo log files are in the same directory and are named consistently. You will have to rename some of the files manually using ALTER DATABASE statements.

Because the primary database is shut down every Sunday for an hour for maintenance, you decide to use that time to make a cold, consistent backup. You can then restart the database while you make the necessary configurations for the standby database.

Step 2: Create the Standby Database.

The next step in the procedure is to create the backup that will form the basis for the standby database. You know that you can use either an inconsistent or consistent backup, but because the database must go down every Sunday for maintenance, you decide to make a consistent backup then and use it for the standby database.

  1. Determine the database files.

    On Sunday, before shutting down the primary database, you query the database to determine which datafiles it contains:

    SQL> SELECT name FROM v$datafile;
    NAME                                                                            
    --------------------------------------------------------------------------------
    /fs1/dbs/tbs_01.f                                                       
    /fs1/dbs/tbs_02.f                                                       
    /fs1/dbs/tbs_11.f                                                       
    /fs1/dbs/tbs_12.f                                                       
    /fs1/dbs/tbs_21.f                                                       
    /fs1/dbs/tbs_22.f                                                       
    /fs1/dbs/tbs_13.f                                                       
    /fs1/dbs/tbs_23.f                                                       
    /fs1/dbs/tbs_24.f                                                       
    /fs1/dbs/tbs_31.f                                                       
    /fs1/dbs/tbs_32.f                                                       
    /fs1/dbs/tbs_41.f                                                       
    /fs1/dbs2/tbs_42.f                                                       
    /fs1/dbs2/tbs_51.f                                                       
    /fs1/dbs2/tbs_52.f                                                       
    /fs1/dbs2/tbs_03.f                                                       
    /fs1/dbs3/tbs_14.f                                                       
    /fs1/dbs3/tbs_25.f                                                       
    /fs1/dbs3/tbs_33.f                                                       
    /fs1/dbs3/tbs_43.f                                                       
    /fs1/dbs3/tbs_53.f                                                       
    21 rows selected.
    
    
  2. Back up the datafiles.

    After determining which datafiles are in the database, you shut down the database with the IMMEDIATE option:

    SQL> SHUTDOWN IMMEDIATE;
    
    

    At this point, you decide to back up all of the primary datafiles to a temporary directory as follows:

    % cp /fs1/dbs/* /fs1/temp
    % cp /fs1/dbs2/* /fs1/temp
    % cp /fs1/dbs3/* /fs1/temp
    
    

    You perform some other routine maintenance operations and then restart the database as follows:

    SQL> STARTUP PFILE=initPROD1.ora;
    
    
  3. Create the standby database control file.

    After a few minutes, you create the standby database control file in the same directory in which you stored the consistent backup:

    SQL> ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/fs1/temp/stbycf.f';
    
    
  4. Transfer files to the standby file system.

    After you have successfully created the standby database control file, you can copy the datafiles and the standby database control file from the primary file system to the standby file system.

    Because transferring the datafiles can take a long time, you first copy the control file, begin copying the datafiles, and then proceed to other tasks (such as network configuration). For example, enter the following at the UNIX command shell:

    % cp /fs1/temp/stbycf.f /fs2/dbs/cf1.f
    % cp /fs1/temp/tbs* /fs2/dbs
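Copy operations like the ones above can silently miss or truncate files. The following sketch, which is not part of the documented procedure, shows one way to confirm that each copied file matches its source by comparing MD5 digests; the directories are throwaway stand-ins for /fs1/temp and /fs2/dbs.

```python
# Sketch: verify that backup copies match their sources byte-for-byte.
# The paths here are stand-ins for the file systems in this scenario;
# this is an illustration, not part of the documented procedure.
import hashlib
import shutil
import tempfile
from pathlib import Path

def file_md5(path: Path) -> str:
    """Return the MD5 digest of a file's contents."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def copy_and_verify(src_dir: Path, dst_dir: Path) -> bool:
    """Copy every file in src_dir to dst_dir and confirm the digests match."""
    for src in src_dir.iterdir():
        dst = dst_dir / src.name
        shutil.copyfile(src, dst)
        if file_md5(src) != file_md5(dst):
            return False
    return True

# Demonstration with throwaway directories standing in for the file systems.
tmp = Path(tempfile.mkdtemp())
primary, standby = tmp / "fs1_temp", tmp / "fs2_dbs"
primary.mkdir(); standby.mkdir()
(primary / "tbs_01.f").write_bytes(b"datafile contents")
ok = copy_and_verify(primary, standby)
print(ok)  # True when every copy matches its source
```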
    

Step 3: Configure the Network Files.

In order to run a standby database in a managed standby environment, you must configure a Net8 connection between the primary and standby databases so that you can archive the redo logs to the standby service.

You use the IPC protocol to connect the primary database to the standby database because both databases are on the same host. Because you do not maintain an Oracle Names server, you must create both a tnsnames.ora entry for the primary database and a listener.ora entry for the standby database.

  1. Configure the tnsnames.ora file.

    Your next step is to open the tnsnames.ora file in a text editor:

    % vi /fs3/oracle/network/admin/tnsnames.ora
    
    
    

    Currently, only one service name entry exists in the file, a TCP/IP connection to the PROD1 database:

    prod1 = (DESCRIPTION=
            (ADDRESS=(PROTOCOL=tcp)(PORT=1512)(HOST=dlsun183))
            (CONNECT_DATA=(SID=prod1))
    )
    
    
    

    To define an IPC connection between the primary and the standby database, you add an entry with the following format:

    standby_service_name = (DESCRIPTION=
                               (ADDRESS=(PROTOCOL=ipc) (KEY=keyhandle))
                               (CONNECT_DATA=(SID=standby_sid)))
    
    

    Substitute appropriate values for standby_service_name, keyhandle, and standby_sid, as the following example shows:

    standby1 = (DESCRIPTION=
                  (ADDRESS=(PROTOCOL=ipc) (KEY=kstdby1))
                  (CONNECT_DATA=(SID=stdby1)))
    
    
  2. Configure the listener.ora file.

    Your next step is to open the listener.ora file, which is located on file system /fs3:

    % vi /fs3/oracle/network/admin/listener.ora
    
    

    You discover the following list of addresses (where on the host the listener is listening) and SIDs (which connections the listener is listening for):

    LISTENER = (ADDRESS_LIST=
     (ADDRESS=(PROTOCOL=tcp)(PORT=1512)(HOST=dlsun183))
    )
    SID_LIST_LISTENER = (SID_LIST=
     (SID_DESC=(SID_NAME=PROD1)(ORACLE_HOME=/fs3/oracle))
    )
    
    

    Currently, the listener is listening on port 1512 of the host dlsun183 for database PROD1.

    You need to edit the listener.ora file and add two entries with the following format:

    STANDBY_LISTENER = (ADDRESS_LIST=(ADDRESS=(PROTOCOL=ipc)
                            (KEY=keyhandle)))
    
    SID_LIST_STANDBY_LISTENER = (SID_LIST=
                      (SID_DESC=(SID_NAME=standby_sid)(ORACLE_HOME=/oracle_home)))
    

    The listener.ora file is typically located in the $ORACLE_HOME/network/admin directory on the standby site. Substitute appropriate values for keyhandle, standby_sid, and oracle_home as the following example shows:

    STBY1_LISTENER = (ADDRESS_LIST=(ADDRESS=(PROTOCOL=ipc)   # same node as primary
                       (KEY=kstdby1)))   # ORACLE_SID standby instance is started with
    
    SID_LIST_STBY1_LISTENER = (SID_LIST=
                      (SID_DESC=(SID_NAME=stdby1)(ORACLE_HOME=/vobs/oracle)))
    
    

    Now that you have edited the listener.ora file, you must restart the listener so that it recognizes the changes in the file:

    % lsnrctl
    LSNRCTL for Solaris: Version 8.1.5.0.0 - Production on 05-APR-99 11:39:41
    
    (c) Copyright 1998 Oracle Corporation.  All rights reserved.
    
    Welcome to LSNRCTL, type "help" for information.
    
    LSNRCTL> start stby1_listener
    
    

As an alternative to the steps outlined in this section, you can use the Net8 Assistant graphical user interface to configure the network files. For additional information, see the Net8 Administrator's Guide.

Step 4: Configure the Primary Database Parameter File.

Now that you have configured the network files, you can edit the primary database initialization parameter file. The primary database is now up and running, so these changes will only be enabled if you restart the instance or issue ALTER SESSION or ALTER SYSTEM statements.

The only changes you need to make to the file involve archiving to the standby service. Currently, the primary database parameter file looks as follows:

db_name=prod1
control_files=(/fs1/dbs/cf1.f,/fs1/dbs/cf2.f)
compatible=8.1.6
log_archive_start = TRUE
log_archive_dest_1 = 'LOCATION=/fs1/arc_dest/ MANDATORY REOPEN=60'
log_archive_dest_state_1 = ENABLE
log_archive_format = log_%t_%s.arc
audit_trail=FALSE
o7_dictionary_accessibility=FALSE
global_names=FALSE
db_domain=regress.rdbms.dev.us.oracle.com
remote_login_passwordfile = exclusive

# default parameters for instance 1
processes=30
sessions=30
transactions=21
transactions_per_rollback_segment=21
distributed_transactions=10
db_block_buffers=1000
db_files=200
shared_pool_size=10000000

  1. Specify standby archive destinations.

    Currently, you archive to only one location: a local directory. Because you want to maintain the local standby database in managed recovery mode, you must specify a new archiving location using a service name.

    Open the primary database initialization parameter file with a text editor and examine the current archiving location and format:

    log_archive_dest_1 = 'LOCATION=/fs1/arc_dest/ MANDATORY REOPEN=60'
    log_archive_dest_state_1 = ENABLE
    log_archive_format = log_%t_%s.arc
    
    

    Parameter/Option          Meaning
    ------------------------  -----------------------------------------------
    LOG_ARCHIVE_DEST_1        Indicates an archiving destination.
    LOCATION                  Indicates a local directory.
    LOG_ARCHIVE_DEST_STATE_1  Indicates the state of the LOG_ARCHIVE_DEST_1
                              archiving destination.
    ENABLE                    Indicates that Oracle can archive to the
                              destination.
    LOG_ARCHIVE_FORMAT        Indicates the format for filenames of log files.

  2. Because you want to archive to the standby database with service STANDBY1, you edit the file, adding the following entries:

    log_archive_dest_2 = 'SERVICE=standby1 OPTIONAL REOPEN=180'
    log_archive_dest_state_2 = ENABLE
    

    Parameter/Option          Meaning
    ------------------------  -----------------------------------------------
    LOG_ARCHIVE_DEST_2        Indicates a new archiving destination.
                              LOG_ARCHIVE_DEST_1 is already reserved for local
                              archiving to /fs1/arc_dest/.
    SERVICE                   Indicates the service name of the standby
                              database.
    OPTIONAL                  Indicates that Oracle can reuse online redo logs
                              even if this destination fails.
    REOPEN                    Indicates how many seconds the archiving process
                              waits before reattempting to archive to a
                              previously failed destination.
    LOG_ARCHIVE_DEST_STATE_2  Indicates the state of the LOG_ARCHIVE_DEST_2
                              archiving destination.
    ENABLE                    Indicates that Oracle can archive to the
                              destination.

    After editing the primary database initialization parameter file, create a copy for use by the standby database:

    % cp /fs1/temp/initPROD1.ora /fs3/oracle/dbs/initSTANDBY1.ora
    
    

    If the primary database initialization parameter file contains the IFILE parameter, you also need to copy the file referred to by the IFILE parameter to the standby site and, if necessary, make appropriate changes to it.
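The OPTIONAL and REOPEN attributes described above can be pictured with a small model. The sketch below is a conceptual illustration of the documented REOPEN behavior (skip a failed destination until REOPEN seconds have elapsed), not Oracle's actual archiver code; the function name is invented for the example.

```python
# Conceptual sketch of the REOPEN attribute: a destination that failed is
# skipped until REOPEN seconds have elapsed since the failure. This models
# the documented behavior; it is not Oracle's implementation.
from typing import Optional

def destination_eligible(now: float, last_failure: Optional[float],
                         reopen: int) -> bool:
    """Return True if the archiver should attempt this destination at `now`."""
    if last_failure is None:          # never failed: always eligible
        return True
    return now - last_failure >= reopen

# LOG_ARCHIVE_DEST_2 uses REOPEN=180: after a failure at t=1000,
# archiving is retried only once 180 seconds have passed.
print(destination_eligible(1100.0, 1000.0, 180))  # False: only 100s elapsed
print(destination_eligible(1180.0, 1000.0, 180))  # True: 180s elapsed
```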

Step 5: Configure the Standby Database Parameter File.

You know that the initialization parameters shown in Table 5-1 play a key role in the standby database recovery process, and decide to edit them.

Table 5-1  Configuring Standby Database Initialization Parameters

COMPATIBLE
    This parameter must be the same at the primary and standby databases.
    Because it is already set to 8.1.6 in the primary database parameter file,
    you can leave the standby setting as it is.

CONTROL_FILES
    This parameter must be different between the primary and standby
    databases. You decide to locate the control files in the /fs2/dbs
    directory.

DB_FILE_NAME_CONVERT
    Set when you want to make your standby datafile filenames distinguishable
    from your primary database filenames. Most (but not all) of the datafiles
    are in the /fs1/dbs directory. You set this parameter to convert /fs1/dbs
    to /fs2/dbs automatically; the other files you will rename using ALTER
    DATABASE RENAME FILE statements.

DB_FILES
    DB_FILES must be the same at both databases so that you allow the same
    number of files at the standby database as at the primary database.
    Consequently, you leave this parameter alone. An instance cannot mount a
    database unless DB_FILES is equal to or greater than MAXDATAFILES.

DB_NAME
    This value should be the same as the DB_NAME value in the production
    database parameter file. Consequently, you leave this parameter alone.

LOCK_NAME_SPACE
    Specifies the name space that the distributed lock manager (DLM) uses to
    generate lock names. Set this value when the standby and primary databases
    share the same host. You decide to set the name to STANDBY1.

LOG_ARCHIVE_DEST_1
    This parameter specifies the location of the archived redo logs. You must
    use this directory when performing manual recovery. You decide to set the
    value to /fs2/arc_dest/.

LOG_FILE_NAME_CONVERT
    Set when you want to make your standby redo log filenames distinguishable
    from your primary database redo log filenames. Because your primary redo
    logs are located in /fs1/dbs, you decide to locate the standby logs in
    /fs2/dbs.

STANDBY_ARCHIVE_DEST
    Oracle uses this value to create the names of the logs received from the
    primary site. You decide to set it to /fs2/stdby/.

Edit the standby database parameter file as follows; the edited values are the ones described in Table 5-1:

db_name = prod1                #The same as PRMYinit.ora
control_files = (/fs2/dbs/cf1.f)
compatible = 8.1.6
log_archive_start = TRUE
log_archive_dest_1='LOCATION=/fs2/arc_dest/'
log_archive_dest_state_1 = ENABLE
log_archive_format = log_%t_%s.arc
standby_archive_dest = /fs2/stdby/
db_file_name_convert = ('/fs1/dbs','/fs2/dbs')  
log_file_name_convert = ('/fs1/dbs','/fs2/dbs') 
lock_name_space = standby1                             
audit_trail=FALSE
o7_dictionary_accessibility=FALSE
global_names=FALSE
db_domain=regress.rdbms.dev.us.oracle.com
remote_login_passwordfile = exclusive

# default parameters for instance 1
processes=30
sessions=30
transactions=21
transactions_per_rollback_segment=21
distributed_transactions=10
db_block_buffers=1000
db_files=200
shared_pool_size=10000000
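To see why some datafiles still need manual renaming, the following sketch models the effect of DB_FILE_NAME_CONVERT = ('/fs1/dbs', '/fs2/dbs') on datafiles like those listed in Step 2. The helper is a simplified illustration (it substitutes the prefix only when it ends at a directory boundary), not Oracle's exact matching rule; files under /fs1/dbs2 and /fs1/dbs3 fall through and must be handled with ALTER DATABASE RENAME FILE statements, as shown in Step 6.

```python
# Sketch: model the DB_FILE_NAME_CONVERT prefix substitution and list the
# datafiles it does NOT cover. Simplified illustration only; not Oracle's
# exact matching rule.
from typing import Optional

def convert_name(name: str, old_prefix: str, new_prefix: str) -> Optional[str]:
    """Return the converted name, or None if the prefix does not apply."""
    if name.startswith(old_prefix + "/"):     # match whole directory component
        return new_prefix + name[len(old_prefix):]
    return None

datafiles = ["/fs1/dbs/tbs_01.f", "/fs1/dbs2/tbs_42.f", "/fs1/dbs3/tbs_14.f"]
converted = {f: convert_name(f, "/fs1/dbs", "/fs2/dbs") for f in datafiles}
manual = [f for f, c in converted.items() if c is None]

print(converted["/fs1/dbs/tbs_01.f"])  # /fs2/dbs/tbs_01.f
print(manual)  # files needing ALTER DATABASE RENAME FILE
```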

Step 6: Start the Standby Database in Preparation for Managed Recovery.

Now that you have configured all network and parameter files, you can enable archiving to the standby database.

  1. Set the ORACLE_SID environment variable to the same value as the SID parameter in the tnsnames.ora file on the primary site and the listener.ora file on the standby site as follows:

    % setenv ORACLE_SID stdby1
    
    
  2. Start the instance.

    First, you start the standby database instance without mounting the standby database control file, as the following example shows:

    SQL> CONNECT sys/change_on_install@standby1
    SQL> STARTUP NOMOUNT PFILE=/fs3/oracle/dbs/initSTANDBY1.ora;
    SQL> ALTER DATABASE MOUNT STANDBY DATABASE;
    
    
  3. Manually rename datafiles.

    Next, write a SQL script to rename the datafiles not captured by the DB_FILE_NAME_CONVERT parameter:

    ALTER DATABASE RENAME FILE '/fs1/dbs2/tbs_42.f',
                               '/fs1/dbs2/tbs_51.f',
                               '/fs1/dbs2/tbs_52.f',
                               '/fs1/dbs2/tbs_03.f',
                               '/fs1/dbs3/tbs_14.f',
                               '/fs1/dbs3/tbs_25.f',
                               '/fs1/dbs3/tbs_33.f',
                               '/fs1/dbs3/tbs_43.f',
                               '/fs1/dbs3/tbs_53.f'
    TO
                               '/fs2/dbs/tbs_42.f',
                               '/fs2/dbs/tbs_51.f',
                               '/fs2/dbs/tbs_52.f',
                               '/fs2/dbs/tbs_03.f',
                               '/fs2/dbs/tbs_14.f',
                               '/fs2/dbs/tbs_25.f',
                               '/fs2/dbs/tbs_33.f',
                               '/fs2/dbs/tbs_43.f',
                               '/fs2/dbs/tbs_53.f'
    /
    
    
  4. Enable changes to the primary database parameter file.

    Finally, you enable the changes you made to the primary database parameter file so that the standby database can begin receiving archived redo logs:

    SQL> CONNECT sys/change_on_install@prod1 as sysdba
    SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2 = 'SERVICE=standby1 OPTIONAL REOPEN=180';
    SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2 = ENABLE;
    

Step 7: Identify the Logs in the Gap Sequence.

Because you have enabled the changes to the primary database parameter file, the primary database is now able to archive to the standby service name. Before you can perform managed recovery, however, you must synchronize the standby database by applying those logs containing changes made after the primary database backup, but before the first log received by the standby database.

Write the following SQL script and run it on the standby database:

SELECT high.thread#, "LowGap#", "HighGap#"
FROM 
(     SELECT thread#, MIN(sequence#)-1 "HighGap#"
      FROM
      (     SELECT a.thread#, a.sequence# 
            FROM
            (     SELECT * 
                  FROM v$archived_log
            ) a,
            (     SELECT thread#, MAX(sequence#) gap1 
                  FROM v$log_history 
                  GROUP BY thread#
            ) b
            WHERE a.thread# = b.thread#
            AND a.sequence# > gap1
      ) 
      GROUP BY thread# 
) high,

(     SELECT thread#, MIN(sequence#) "LowGap#"
      FROM
      (     SELECT thread#, sequence# 
            FROM v$log_history, v$datafile
            WHERE checkpoint_change# <= next_change#
            AND checkpoint_change# >= first_change#
      )
      GROUP BY thread#
) low
WHERE low.thread# = high.thread#;

The output of the query is as follows:

SQL> @gap
THREAD#    LowGap#    HighGap#  
---------- ---------- ----------
         1        250        252
  

Hence, you must apply log sequences 250, 251, and 252 to synchronize the standby database before initiating managed recovery.

Step 8: Copy the Logs in the Gap Sequence to the Standby File System.

The filenames of the archived redo logs in the gap sequence are determined by the LOG_ARCHIVE_DEST_1 and LOG_ARCHIVE_FORMAT parameters in the standby initialization parameter file. Before transmitting the logs from the primary site to the standby site, you must determine the correct filenames.

First, you determine the filenames of the logs in the gap that were archived by the primary database. After connecting to the primary database using SQL*Plus, issue the following SQL query to obtain the names:

SQL> CONNECT sys/change_on_install@prod1
SQL> SELECT name FROM v$archived_log WHERE sequence# IN (250,251,252);

NAME                                                                            
--------------------------------------------------------------------------------
/fs1/arc_dest/log_1_250.arc
/fs1/arc_dest/log_1_251.arc 
/fs1/arc_dest/log_1_252.arc


The gap sequence in this case consists of log_1_250.arc, log_1_251.arc, and log_1_252.arc. The settings for LOG_ARCHIVE_DEST_1 and LOG_ARCHIVE_FORMAT in the standby database parameter file are as follows:

LOG_ARCHIVE_DEST_1 = 'LOCATION=/fs2/arc_dest/'
LOG_ARCHIVE_FORMAT = log_%t_%s.arc


You move the gap sequence logs from the primary file system to the standby file system, renaming them according to values for the LOG_ARCHIVE_DEST_1 and LOG_ARCHIVE_FORMAT initialization parameters at the standby site:

% cp /fs1/arc_dest/log_1_250.arc /fs2/arc_dest/log_1_250.arc
% cp /fs1/arc_dest/log_1_251.arc /fs2/arc_dest/log_1_251.arc
% cp /fs1/arc_dest/log_1_252.arc /fs2/arc_dest/log_1_252.arc
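The renaming rule above follows mechanically from LOG_ARCHIVE_FORMAT. As an illustration (not part of the procedure), the sketch below expands log_%t_%s.arc, where %t is the thread number and %s is the log sequence number, across the gap reported in Step 7:

```python
# Sketch: expand LOG_ARCHIVE_FORMAT (log_%t_%s.arc) for each sequence in the
# gap [low, high] reported by the gap query. Illustrative only; the helper
# name is invented for this example.

def archived_log_names(fmt: str, thread: int, low: int, high: int):
    """Expand %t and %s in fmt for each sequence in [low, high]."""
    return [fmt.replace("%t", str(thread)).replace("%s", str(seq))
            for seq in range(low, high + 1)]

names = archived_log_names("log_%t_%s.arc", thread=1, low=250, high=252)
print(names)  # ['log_1_250.arc', 'log_1_251.arc', 'log_1_252.arc']
```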

Step 9: Apply the Logs in the Gap Sequence to the Standby Database.

Now you can apply the gap sequence logs using the RECOVER AUTOMATIC STANDBY DATABASE statement. This statement uses the values of the LOG_ARCHIVE_DEST_1 and LOG_ARCHIVE_FORMAT parameters to construct the target filename. Once the gap sequence logs have been applied, the standby database is synchronized with the primary database and can be placed in managed recovery mode.

While connected to the standby database in SQL*Plus, recover the database using the AUTOMATIC option:

SQL> RECOVER AUTOMATIC STANDBY DATABASE;
ORA-00279: change 35083 generated at 08/16/1999 14:08:37 needed for thread 2
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
CANCEL
Media recovery cancelled.
SQL>

After the gap sequence logs have been applied, Oracle prompts you for the name of a log that does not exist. The reason is that the recovery process does not know about the logs archived to the standby service by the primary database. Cancel recovery at this point by entering CANCEL at the prompt, as shown above.

Step 10: Place the Standby Database in Managed Recovery Mode.

You can now enable managed recovery using the RECOVER MANAGED STANDBY DATABASE statement. You decide to use the TIMEOUT option of the RECOVER statement to specify a time interval of 20 minutes: Oracle waits up to 20 minutes for each requested archived log to be written to the directory of the standby database control file. If a requested archived log is not written to that directory within the specified interval, the recovery operation is canceled.

While connected to the standby database using SQL*Plus, place the standby database in managed recovery mode:

SQL> RECOVER MANAGED STANDBY DATABASE TIMEOUT 20;

Oracle now begins managed recovery. As the primary database archives redo logs to the standby site, the standby database automatically applies them.

Scenario 2: Creating a Standby Database on a Remote Host

This scenario describes the creation of a standby database STANDBY1 on a remote host. It makes the following assumptions:

Step 1: Back Up the Primary Database Datafiles.

Create the backup that will form the basis for the standby database.

  1. Query the primary database to determine the datafiles. Invoke SQL*Plus and query the V$DATAFILE view to obtain a list of the primary database datafiles, as the following example shows:

    SQL> SELECT name FROM v$datafile;
    NAME
    -----------------------------------------------------------------------
    /vobs/oracle/dbs/dbf_1.f
    1 row selected.
    
    
  2. Shut down the primary database to make a consistent backup of the datafiles:

    SQL> SHUTDOWN IMMEDIATE;
    
    
  3. Copy the primary database datafiles to a temporary location (/backup), as the following example shows:

    % cp /vobs/oracle/dbs/dbf_1.f /backup
    
    
  4. Reopen the primary database as follows:

    SQL> STARTUP PFILE=PRMYinit.ora;
    

Step 2: Create the Standby Database Control File.

  1. Before you create the standby database control file, ensure that the primary database is in ARCHIVELOG mode and that automatic archival is enabled. Issue the ARCHIVE LOG LIST statement:

    SQL> ARCHIVE LOG LIST
    
    

    If the output from the ARCHIVE LOG LIST statement displays "No Archive Mode," perform the following steps:

    1. Shut down the primary database as follows:

      SQL> SHUTDOWN IMMEDIATE;
      
      
    2. Start and mount the primary database instance without opening it:

      SQL> STARTUP MOUNT PFILE=PRMYinit.ora;
      
      
    3. Set the log archive mode as follows:

      SQL> ALTER DATABASE ARCHIVELOG;
      
      
    4. Open the primary database:

      SQL> ALTER DATABASE OPEN;
      
      
  2. Create the standby database control file by issuing the following statement:

    SQL> ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/backup/stbycf.f';
    
    

    The standby database control file and the primary database datafiles are in the same temporary location at the primary site to make copying to the standby site easier.

Step 3: Transfer the Datafiles and Control File to the Standby Site.

Copy the primary database datafiles and the standby control file from the temporary location at the primary site to the standby site, as the following example shows:

% rcp /backup/* STBYHOST:/fs2/oracle/stdby

Step 4: Configure the Network Files.

This scenario assumes that the TCP/IP network protocol is used to connect to the primary and the standby databases. This step involves editing the following parameter files:

  1. Configure the tnsnames.ora file.

    You need to edit the tnsnames.ora file and add an entry with the following format:

    standby_service_name = (DESCRIPTION=
       (ADDRESS=(PROTOCOL=tcp) (PORT=port_number)(HOST=host_name))
       (CONNECT_DATA=(SID=standby_sid)))
    
    

    The tnsnames.ora file is typically located in the $ORACLE_HOME/network/admin directory on the primary site. Substitute appropriate values for standby_service_name, port_number, host_name, and standby_sid, as the following example shows:

    standby1 = (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)
              (PORT=5112)(HOST=STBYHOST))
              (CONNECT_DATA=(SID=stdby1)))
    
    
  2. Configure the listener.ora file.

    You need to edit the listener.ora file and add two entries with the following format:

    STANDBY_LISTENER = (ADDRESS_LIST=(ADDRESS=(PROTOCOL=tcp)
                                    (PORT=port_number)(HOST=host_name)))
    
    SID_LIST_STANDBY_LISTENER = (SID_LIST=
                      (SID_DESC=(SID_NAME=standby_sid)(ORACLE_HOME=/oracle_home)))
    
    

    The listener.ora file is typically located in the $ORACLE_HOME/network/admin directory on the standby site. Substitute appropriate values for port_number, host_name, standby_sid, and oracle_home, as the following example shows:

    STDBY1_LISTENER = (ADDRESS_LIST=(ADDRESS=(PROTOCOL=tcp)
                             (PORT=5112)(HOST=STBYHOST)))
    
    SID_LIST_STDBY1_LISTENER = (SID_LIST=
              (SID_DESC=(SID_NAME=stdby1)(ORACLE_HOME=/oracle)))
    
    

    Make sure the SID_NAME in the listener.ora file matches the SID in the tnsnames.ora file. Also, make sure the PORT and HOST values are the same in the two files. Note that you have the option of creating a new listener or adding a new address to an existing listener.
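The consistency checks described above lend themselves to a quick script. The following sketch, an illustration only, scans the two fragments shown earlier with a flat regular-expression match (a simplification that does not parse full Net8 syntax) and confirms that the SID, PORT, and HOST values agree:

```python
# Sketch: cross-check SID, PORT, and HOST between the tnsnames.ora and
# listener.ora fragments shown above. Flat regex scan only; adequate for
# simple entries like these, not a full Net8 parser.
import re

tnsnames_entry = """standby1 = (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)
          (PORT=5112)(HOST=STBYHOST))
          (CONNECT_DATA=(SID=stdby1)))"""

listener_entry = """STDBY1_LISTENER = (ADDRESS_LIST=(ADDRESS=(PROTOCOL=tcp)
                         (PORT=5112)(HOST=STBYHOST)))
SID_LIST_STDBY1_LISTENER = (SID_LIST=
          (SID_DESC=(SID_NAME=stdby1)(ORACLE_HOME=/oracle)))"""

def extract(text, key):
    """Return the first value for (key=value) in a Net8-style fragment."""
    m = re.search(r"\(%s=([^)]+)\)" % re.escape(key), text)
    return m.group(1) if m else None

sid_match  = extract(tnsnames_entry, "SID") == extract(listener_entry, "SID_NAME")
port_match = extract(tnsnames_entry, "PORT") == extract(listener_entry, "PORT")
host_match = extract(tnsnames_entry, "HOST") == extract(listener_entry, "HOST")
print(sid_match and port_match and host_match)  # True when the files agree
```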

Step 5: Start the Listener on the Standby Site.

The two entries that you added in step 4 define a listener to listen for connections to standby database stdby1 on port 5112. In this step, you must start the stdby1_listener listener. For example:

% lsnrctl start stdby1_listener

Step 6: Configure the Standby Initialization Parameter File.

  1. Copy the primary database initialization parameter file from the primary site to the standby site. From the standby site, issue a command similar to the following:

    % rcp /vobs/oracle/dbs/PRMYinit.ora STBYHOST:/fs2/oracle/stdby/STBYinit.ora
    
    
  2. Edit the standby initialization parameter file (STBYinit.ora). Edit the following parameters:

    Parameter              Value
    ---------------------  ----------------------------------------
    CONTROL_FILES          stbycf.f
    STANDBY_ARCHIVE_DEST   /fs2/oracle/stdby/
    LOG_ARCHIVE_DEST_1     /fs2/oracle/stdby/
    LOG_ARCHIVE_FORMAT     stdby_%t_%s
    DB_FILE_NAME_CONVERT   ('/vobs/oracle/dbs','/fs2/oracle/stdby')
    LOG_FILE_NAME_CONVERT  ('/vobs/oracle/dbs','/fs2/oracle/stdby')
    LOG_ARCHIVE_START      FALSE

The STBYinit.ora file looks as follows:

#
#parameter file STBYinit.ora
#

db_name=prod1                   #The same as PRMYinit.ora

# The following two parameters have been changed from PRMYinit.ora
control_files=/fs2/oracle/stdby/stbycf.f
lock_name_space=stdby1

# The following parameters are the same as PRMYinit.ora
audit_trail=FALSE
o7_dictionary_accessibility=FALSE
global_names=FALSE
db_domain=regress.rdbms.dev.us.oracle.com
commit_point_strength=1

processes=30
sessions=30
transactions=21
transactions_per_rollback_segment=21
distributed_transactions=10
db_block_buffers=100
shared_pool_size=4000000
ifile=/vobs/oracle/work/tkinit.ora # Verify that file exists on the standby site
                                   # and that the file specification is valid

# specific parameters for standby database
log_archive_format = stdby_%t_%s.arc
standby_archive_dest=/fs2/oracle/stdby/
log_archive_dest_1='LOCATION=/fs2/oracle/stdby/'
db_file_name_convert=('/vobs/oracle/dbs','/fs2/oracle/stdby')
log_file_name_convert=('/vobs/oracle/dbs','/fs2/oracle/stdby')
log_archive_start=FALSE
log_archive_trace=127 
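A quick way to confirm that STBYinit.ora diverges from PRMYinit.ora only where intended is to diff the two files parameter by parameter. The sketch below is illustrative; the two inline snippets abbreviate the parameter files from this scenario:

```python
# Sketch: diff two init.ora-style files to confirm that only the intended
# parameters differ between the primary and standby files. The inline
# snippets abbreviate PRMYinit.ora and STBYinit.ora from this scenario.

def parse_params(text):
    """Parse simple name=value lines, ignoring blanks and # comments."""
    params = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()   # drop comments
        if "=" in line:
            name, _, value = line.partition("=")
            params[name.strip()] = value.strip()
    return params

primary = parse_params("""
db_name=prod1
control_files=(/fs1/dbs/cf1.f,/fs1/dbs/cf2.f)
compatible=8.1.6
""")
standby = parse_params("""
db_name=prod1                 #The same as PRMYinit.ora
control_files=/fs2/oracle/stdby/stbycf.f
compatible=8.1.6
""")

differing = sorted(k for k in primary if standby.get(k) != primary[k])
print(differing)  # ['control_files'] -- only CONTROL_FILES differs here
```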

Step 7: Copy the Standby Initialization Parameter File.

  1. Make a copy of the STBYinit.ora file by issuing the following command:

    % cp STBYinit.ora Failover.ora
    
    
    

    Edit Failover.ora so that, if you fail over to the stdby1 standby database, you can use Failover.ora as the initialization parameter file for the new primary database. Make sure you use appropriate values for the LOG_ARCHIVE_DEST_n parameters.

  2. Edit the tnsnames.ora file on the standby site in case failover to the standby database occurs. See step 4 for information on how to configure the tnsnames.ora file.

Step 8: Start the Standby Database.

Start the standby database to enable archiving.

  1. Set the ORACLE_SID environment variable to the same value as the SID parameter in the tnsnames.ora file on the primary site and the listener.ora file on the standby site as follows:

    % setenv ORACLE_SID stdby1
    
    
  2. Start SQL*Plus and connect as SYSDBA:

    SQL> CONNECT sys/sys_password as sysdba
    
    
  3. Start the standby database instance without mounting the database:

    SQL> STARTUP NOMOUNT PFILE=STBYinit.ora;
    
    
  4. Mount the standby database:

    SQL> ALTER DATABASE MOUNT STANDBY DATABASE;
    

Step 9: Configure the Primary Initialization Parameter File.

  1. Specify the archive destination by adding the following entry to the PRMYinit.ora file:

    LOG_ARCHIVE_DEST_2 = 'SERVICE=standby1 MANDATORY REOPEN=60'
    
    
  2. Enable the archive destination state by adding the following entry to the PRMYinit.ora file:

    LOG_ARCHIVE_DEST_STATE_2 = ENABLE
    
    
  3. Issue the following statements to ensure that the initialization parameters you have set in this step take effect:

    SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=standby1 MANDATORY REOPEN=60';
    SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE;
    

Step 10: Apply the Logs in the Gap Sequence.

  1. On the primary database, archive the current redo log as follows:

    SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;
    
    
  2. On the standby database, run the following SQL script to identify the archived redo logs in the gap sequence:

    SELECT high.thread#, "LowGap#", "HighGap#"
    FROM
    (
        SELECT thread#, MIN(sequence#)-1 "HighGap#"
        FROM
        (
            SELECT a.thread#, a.sequence#
            FROM
            (
                SELECT *
                FROM v$archived_log
            ) a,
            (
                SELECT thread#, MAX(next_change#) gap1
                FROM v$log_history
                GROUP BY thread#
            ) b
            WHERE a.thread# = b.thread#
            AND a.next_change# > gap1
        )
        GROUP BY thread#
    ) high,
    
    (
        SELECT thread#, MIN(sequence#) "LowGap#"
        FROM
        (
            SELECT thread#, sequence#
            FROM v$log_history, v$datafile
            WHERE checkpoint_change# <= next_change#
            AND checkpoint_change# >= first_change#
        )
        GROUP BY thread#
    ) low
    WHERE low.thread# = high.thread#;
    
    
         THREAD#    LowGap#    HighGap# 
        ---------- ---------- ----------
                 1         90         92
    
    
  3. On the primary database, obtain the filenames of the logs in the gap sequence by performing a query on the V$ARCHIVED_LOG view as follows:

    SELECT name FROM v$archived_log 
    WHERE thread#=1 AND sequence#<=92 AND sequence#>=90;
    
    NAME
    -----------------------------------------
    /vobs/oracle/dbs/r_1_90.arc
    /vobs/oracle/dbs/r_1_91.arc
    /vobs/oracle/dbs/r_1_92.arc
    

  4. Transfer the logs in the gap sequence from the primary database to the standby database as follows:

    % rcp /vobs/oracle/dbs/r_1_90.arc STBYHOST:/fs2/oracle/stdby/stdby_1_90.arc
    % rcp /vobs/oracle/dbs/r_1_91.arc STBYHOST:/fs2/oracle/stdby/stdby_1_91.arc
    % rcp /vobs/oracle/dbs/r_1_92.arc STBYHOST:/fs2/oracle/stdby/stdby_1_92.arc
    
    
  5. On the standby database, issue the following SQL statement to manually apply the logs in the gap sequence:

    SQL> RECOVER STANDBY DATABASE;
    

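The query in step 2 reduces to a simple idea: the gap is the range of archived redo log sequence numbers present on the primary that the standby has not yet applied. A minimal sketch of that logic, in Python rather than SQL (the sequence numbers are taken from the example output above; the helper is illustrative, not part of any Oracle interface):

```python
def find_gap(primary_seqs, applied_seqs):
    """Return the (low, high) range of archived redo log sequence
    numbers that exist on the primary site but have not yet been
    applied on the standby site, or None if there is no gap."""
    missing = sorted(set(primary_seqs) - set(applied_seqs))
    if not missing:
        return None
    return (missing[0], missing[-1])

# Matching the example: the standby applied sequences up to 89 and the
# primary archived sequences through 92, so the gap is 90 through 92.
primary = range(80, 93)   # sequences 80..92 archived on the primary
applied = range(80, 90)   # sequences 80..89 applied on the standby
print(find_gap(primary, applied))   # (90, 92)
```

The SQL script computes the same bounds from V$ARCHIVED_LOG and V$LOG_HISTORY instead of in-memory lists.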
Step 11: Place the Standby Database in Managed Recovery Mode.

On the standby database, enable managed recovery by issuing the following SQL statement:

SQL> RECOVER MANAGED STANDBY DATABASE;

Scenario 3: Accommodating Physical Changes in the Primary Database

This scenario describes the procedures you should follow when a physical change is made in the primary database. The following topics are covered:

Adding a Datafile to the Primary Database

To maintain consistency when you add a datafile to the primary database, you must add a corresponding datafile to the standby database. Otherwise, changes in the online redo logs that relate to the new datafile in the primary database will not be applied to the standby database.

If you create a new datafile in the primary database, then create a corresponding datafile in the standby database.

If you transfer an existing datafile from another database to the primary database, then copy that datafile from the primary database to the standby database; copying preserves the contents of the datafile.

Creating a New Datafile in the Standby Database

  1. In the primary database, assume you create a datafile as follows:

    SQL> CREATE TABLESPACE tbs_4 DATAFILE 'tbs_4.f' SIZE 2M;
    
    
  2. Start the standby database:

    SQL> STARTUP NOMOUNT PFILE=STBYinit.ora;
    
    
  3. On the primary site, force a log switch as follows:

    SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;
    
    
  4. Place the standby database in managed recovery mode:

    SQL> RECOVER MANAGED STANDBY DATABASE;
    
    

    This SQL statement causes Oracle to stop applying archived redo logs, because a datafile that was added to the primary database does not yet exist at the standby site. Messages similar to the following are displayed when recovery attempts to apply the redo for the new datafile:

    ORA-00283: recovery session canceled due to errors
    ORA-01157: cannot identify/lock data file 4 - see DBWR trace file
    ORA-01110: data file 4: '/vobs/oracle/dbs/stdby/tbs_4.f'
    
    

    The error messages indicate that the datafile has been added to the standby database control file, but the datafile has not been created yet.

  5. On the standby site, create the new datafile by issuing the following statement:

    SQL> ALTER DATABASE CREATE DATAFILE '/vobs/oracle/dbs/stdby/tbs_4.f'
          AS '/vobs/oracle/dbs/stdby/tbs_4.f';
    
    
  6. On the standby site, resume applying archived redo logs by issuing the following statement:

    SQL> RECOVER MANAGED STANDBY DATABASE;
    

Copying a Datafile from the Primary Database

Assume you have transferred a datafile from another database to the primary database.

  1. In the primary database, back up the new datafile:

    SQL> ALTER TABLESPACE tbs_4 BEGIN BACKUP;
    
    
  2. Copy the datafile to a temporary directory:

    % cp tbs_4.f /backup
    
    
  3. In the primary database, issue the following statement:

    SQL> ALTER TABLESPACE tbs_4 END BACKUP;
    
    
  4. Copy the tbs_4.f datafile from the backup temporary directory to the standby site.

    % rcp /backup/tbs_4.f STBYHOST:/vobs/oracle/dbs/stdby
    
    
  5. On the standby site, resume applying archived redo logs by issuing the following statement:

    SQL> RECOVER MANAGED STANDBY DATABASE;
    

Refreshing the Standby Database Control File After You Add a Datafile to the Primary Database

After you add a datafile to the primary database and create or copy a corresponding datafile to the standby database, the status field in the V$DATAFILE view contains the value "RECOVER", which normally indicates that the datafile needs to be recovered. In this situation, however, the value is not accurate. To get an accurate value in the status field of the V$DATAFILE view, you need to refresh the standby database control file. See Refreshing the Standby Database Control File for additional information.


Note:

While refreshing the standby database control file will give you an accurate value, it may invalidate managed recovery. If managed recovery is invalidated, your only option is manual recovery. 


Renaming a Datafile in the Primary Database

When you rename one or more datafiles in the primary database, you also need to rename the corresponding datafiles in the standby database.

  1. Rename the datafile at the primary site:

    SQL> ALTER TABLESPACE tbs_4 OFFLINE;
    SQL> ALTER TABLESPACE tbs_4 RENAME DATAFILE 'tbs_4.f' TO 'tbs_x.f';
    SQL> ALTER DATABASE RECOVER TABLESPACE tbs_4;
    SQL> ALTER TABLESPACE tbs_4 ONLINE;
    
    
  2. At the primary site, create the standby database control file by issuing the following statement:

    SQL> ALTER DATABASE CREATE STANDBY CONTROLFILE AS 'stbycf.f';
    
    
  3. Ensure that the standby database has applied all of the online redo logs by issuing the following statement:

    SQL> RECOVER AUTOMATIC STANDBY DATABASE;
    
    
  4. Shut down the standby database with the IMMEDIATE option:

    SQL> SHUTDOWN IMMEDIATE;
    
    
  5. Copy the standby database control file from the primary site to the standby site, overwriting the control file that exists on the standby site:

    % rcp stbycf.f STBYHOST:/fs2/oracle/stdby/
    
    
  6. Rename the datafile at the standby site:

    % mv tbs_4.f tbs_x.f
    
    
  7. Start the standby database instance without mounting the database:

    SQL> STARTUP NOMOUNT PFILE=STBYinit.ora;
    
    
  8. Mount the standby database:

    SQL> ALTER DATABASE MOUNT STANDBY DATABASE;
    
    
  9. Identify and apply the logs in the gap sequence.

  10. On the standby database, enable managed recovery by issuing the following statement:

    SQL> RECOVER MANAGED STANDBY DATABASE;
    
    

If you do not rename the corresponding datafile at the standby site and then attempt to refresh the standby database control file, the standby database will look for the renamed datafile but will not find it. Consequently, you will see error messages similar to the following:

ORA-00283: recovery session canceled due to errors
ORA-01157: cannot identify/lock data file 4 - see DBWR trace file
ORA-01110: data file 4: '/vobs/oracle/dbs/stdby/tbs_x.f'

Deleting a Datafile or Tablespace in the Primary Database

When you delete one or more datafiles or tablespaces in the primary database, you also need to delete the corresponding datafiles or tablespaces in the standby database. You also need to refresh the standby database control file.

  1. Delete the datafile at the primary site as follows:

    SQL> DROP TABLESPACE tbs_4;
    % rm tbs_4.f
    
    
  2. Refresh the standby database control file. See Refreshing the Standby Database Control File for the steps.

  3. Delete the corresponding datafile at the standby site as follows:

    % rm tbs_4.f
    

Adding or Removing Online Redo Logs

One method of tuning available to the DBA is changing the size and number of online redo logs. Consequently, when you add or remove an online redo log at the primary site, it is important that you refresh the standby database control file.

  1. Add or remove an online redo log as follows:

    SQL> ALTER DATABASE ADD LOGFILE 'prmy3.log' SIZE 100K;
    
    

    or

    SQL> ALTER DATABASE DROP LOGFILE 'prmy3.log';
    
    
  2. Refresh the standby database control file. See Refreshing the Standby Database Control File for the steps.

Altering Control Files

If you use the CREATE CONTROLFILE statement in the primary database to change one or more database parameters, you need to refresh the standby database control file. Some of the parameters you can change with this statement are the maximum number of redo log file groups, redo log file members, archived redo log files, data files, or instances that can concurrently have the database mounted and open.

  1. On the primary database, alter the control file as follows:

    SQL> CREATE CONTROLFILE REUSE DATABASE stby1 NORESETLOGS
        LOGFILE  'prmy1.log' SIZE 100K, 'prmy2.log' SIZE 100K
        DATAFILE 'dbf_1.f' SIZE 10M MAXLOGFILES 52 ARCHIVELOG;
    
    
  2. Refresh the standby database control file. See Refreshing the Standby Database Control File for the steps.

If you use the RESETLOGS clause of the CREATE CONTROLFILE statement, you will invalidate the standby database. Once the standby database is invalidated, your only option is to re-create the standby database.

Refreshing the Standby Database Control File

In some cases, when you make a physical change to the primary database, in addition to making the corresponding change to the standby database, you also need to refresh the standby database control file. The online redo logs do not record changes made to the primary database control file. This section describes the cases where you need to refresh the standby database control file and the steps to follow.

You need to refresh the standby database control file whenever you:

Perform the following steps to keep the standby database control file synchronized with the primary database control file:

  1. At the primary site, create the standby database control file by issuing the following statement:

    SQL> ALTER DATABASE CREATE STANDBY CONTROLFILE AS 'stbycf.f';
    
    
  2. If the standby database is in managed recovery mode, you need to cancel recovery by issuing the following statement:

    SQL> RECOVER MANAGED STANDBY DATABASE CANCEL;
    
    
    
    
  3. Shut down the standby database with the IMMEDIATE option:

    SQL> SHUTDOWN IMMEDIATE;
    
    
  4. Copy the standby database control file from the primary site to the standby site, overwriting the control file that exists on the standby site:

    % rcp stbycf.f STBYHOST:/fs2/oracle/stdby/
    
    
  5. Start the standby database instance without mounting the database:

    SQL> STARTUP NOMOUNT PFILE=STBYinit.ora;
    
    
  6. Mount the standby database:

    SQL> ALTER DATABASE MOUNT STANDBY DATABASE;
    
    
  7. Identify and apply the logs in the gap sequence.

  8. On the standby database, enable managed recovery by issuing the following statement:

    SQL> RECOVER MANAGED STANDBY DATABASE;
    
    

    You may get the following error messages when you try to enable managed recovery:

    ORA-00308: cannot open archived log 'standby1'
    ORA-27037: unable to obtain file status
    SVR4 Error: 2: No such file or directory
    Additional information: 3
    ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
    ORA-01152: file 1 was not restored from a sufficiently old backup
    ORA-01110: data file 1: '/vobs/oracle/dbs/stdby/tbs_1.f'
    
    

    If you get the ORA-00308 error, cancel recovery by issuing the following statement:

    SQL> CANCEL
    
    

    These error messages are issued when one or more logs in the gap sequence have not been successfully applied. If you receive these errors, repeat steps 7 and 8.

Physical Changes That Require You to Rebuild the Standby Database

Some physical changes you make to the primary database can invalidate the standby database. Once a standby database is invalidated, your only option is to rebuild it.

The following clauses of the ALTER DATABASE statement invalidate the standby database:

Scenario 4: Recovering After the NOLOGGING Clause Was Specified

In some SQL statements, the user has the option of specifying the NOLOGGING clause, which indicates that the database operation is not logged in the redo log file. Even though the user specifies the clause, a redo log record is still written to the redo log. However, when the redo log file is transferred to the standby site and applied to the standby database, the affected portion of the datafile is unusable and is marked as unrecoverable. When you later activate the standby database, or open it with the read-only option, and attempt to read the range of blocks that are marked "UNRECOVERABLE", you will see error messages similar to the following:

ORA-01578: ORACLE data block corrupted (file # 1, block # 2521)
ORA-01110: data file 1: '/vobs/oracle/dbs/stdby/tbs_1.f'
ORA-26040: Data block was loaded using the NOLOGGING option

In order to recover after the NOLOGGING clause was specified, you need to copy the datafile that contains the unjournaled data from the primary site to the standby site. Perform the following steps:

  1. Determine which datafiles should be copied.

    1. Issue the following query in the primary database:

      SQL> SELECT name, unrecoverable_change# FROM v$datafile;
      NAME                                                     UNRECOVERA
      -------------------------------------------------------- ----------
      /vobs/oracle/dbs/tbs_1.f                                       5216
      /vobs/oracle/dbs/tbs_2.f                                          0
      /vobs/oracle/dbs/tbs_3.f                                          0
      /vobs/oracle/dbs/tbs_4.f                                          0
      4 rows selected.
      
      
    2. Issue the following query in the standby database:

      SQL> SELECT name, unrecoverable_change# FROM v$datafile;
      NAME                                                     UNRECOVERA
      -------------------------------------------------------- ----------
      /vobs/oracle/dbs/stdby/tbs_1.f                                 5186
      /vobs/oracle/dbs/stdby/tbs_2.f                                    0
      /vobs/oracle/dbs/stdby/tbs_3.f                                    0
      /vobs/oracle/dbs/stdby/tbs_4.f                                    0
      4 rows selected.
      
      
    3. Compare the query results from the primary and the standby databases.

      Compare the value of the unrecoverable_change# column in both query results. If the value of the unrecoverable_change# column in the primary database is greater than the same column in the standby database, then the datafile needs to be copied from the primary site to the standby site.

      In this example, the value of the unrecoverable_change# in the primary database for the tbs_1.f datafile is greater, so you need to copy the tbs_1.f datafile to the standby site.
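Because this comparison is mechanical, it can be scripted. The following sketch is hypothetical (in a real script the two dictionaries would be populated from the V$DATAFILE queries shown above); it returns the datafiles whose unrecoverable_change# on the primary exceeds the value recorded on the standby:

```python
def files_to_copy(primary, standby):
    """primary and standby map a datafile name to its
    unrecoverable_change#; return the names of the datafiles that
    must be recopied from the primary site to the standby site."""
    return [name for name, change in primary.items()
            if change > standby.get(name, 0)]

# Values from the example query results above:
primary = {'tbs_1.f': 5216, 'tbs_2.f': 0, 'tbs_3.f': 0, 'tbs_4.f': 0}
standby = {'tbs_1.f': 5186, 'tbs_2.f': 0, 'tbs_3.f': 0, 'tbs_4.f': 0}
print(files_to_copy(primary, standby))   # ['tbs_1.f']
```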

  2. On the primary site, back up the datafile that you need to copy to the standby site as follows:

    SQL> ALTER TABLESPACE system BEGIN BACKUP;
    SQL> HOST cp tbs_1.f /backup
    SQL> ALTER TABLESPACE system END BACKUP;
    
    
  3. Create a new standby database control file.

    In the primary database, issue the following statement:

    SQL> ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/backup/stbycf.f';
    
    
  4. Shut down the standby database.

    In the standby database, issue the following statement:

    SQL> SHUTDOWN IMMEDIATE;
    
    
  5. Copy the datafile and the standby database control file from the primary site to the standby site as follows:

    % rcp /backup/* STBYHOST:/fs2/oracle/stdby/
    
    
  6. Start the standby database instance without mounting the database:

    SQL> STARTUP NOMOUNT PFILE=STBYinit.ora;
    
    
  7. Mount the standby database:

    SQL> ALTER DATABASE MOUNT STANDBY DATABASE;
    
    
  8. Identify and apply the logs in the gap sequence.

  9. On the standby database, enable managed recovery by issuing the following statement:

    SQL> RECOVER MANAGED STANDBY DATABASE;
    
    

    You may get the following error messages when you try to enable managed recovery:

    ORA-00308: cannot open archived log 'standby1'
    ORA-27037: unable to obtain file status
    SVR4 Error: 2: No such file or directory
    Additional information: 3
    ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
    ORA-01152: file 1 was not restored from a sufficiently old backup
    ORA-01110: data file 1: '/vobs/oracle/dbs/stdby/tbs_1.f'
    
    

    If you get the ORA-00308 error, cancel recovery by issuing the following statement:

    SQL> CANCEL
    
    

    These error messages are issued when one or more logs in the gap sequence have not been successfully applied. If you receive these errors, repeat steps 8 and 9.

Scenario 5: Deciding Which Standby Database to Fail Over to in a Multiple Standby Database Configuration

Every standby database is associated with one and only one primary database. A single primary database can, however, support multiple standby databases. This scenario identifies the kind of information you need in order to decide which of the multiple standby databases to activate.

One of the important things to consider in a multiple standby database configuration is whether the archive destination is mandatory or optional. The following table lists an advantage and disadvantage for each destination:

MANDATORY

Advantage: All archived redo log files are archived to the mandatory archive destination. After you apply the archived redo log files at the standby site, the standby database is up-to-date, and you can activate it as the new production database with minimal loss of data.

Disadvantage: In some cases (such as a network failure), the archived redo log files cannot reach the mandatory archive destination, causing archiving of the redo log files to stop. In the worst case, if all of the online redo log files are full and cannot be archived, the primary database instance stops working. You can issue the following SQL query to determine whether the primary database stalled because it was not able to switch to an online redo log:

SELECT DECODE(COUNT(*),0,'NO','YES') "switch_possible"
FROM v$log
WHERE archived='YES';

If the output from the query is "YES", a log switch is possible; if it is "NO", a log switch is not possible.

OPTIONAL

Advantage: The primary database continues to operate normally when archival of the redo logs to the optional archive destination at the standby site is interrupted.

Disadvantage: In some cases (such as a network failure), the archived redo log files cannot reach the optional archive destination, causing archiving to that destination to stop. Multiple gap sequences can then accumulate in the standby database. You must resolve the gaps and transfer the missing redo logs manually from the primary site to the standby site before automatic archival can resume.

Consider the following recommendations in a multiple standby database configuration:

Suppose the primary database is located in San Francisco and supports five standby databases as follows:

Location          Type       Description
----------------  ---------  ------------------------------------------------
local directory   Mandatory  Local copy of the archived redo logs.
San Francisco     Mandatory  Fail over to this standby database when there is
                             physical damage at the primary site. This standby
                             site is connected to the primary site by a local
                             area network.
Boston            Optional   Fail over to this standby database when a disaster
                             occurs that affects San Francisco.
Los Angeles       Optional   This standby site receives archived redo logs, but
                             does not apply them. See Scenario 9: Standby
                             Database with No Ongoing Recovery for a description
                             of this type of configuration.
San Francisco     Optional   This standby site receives archived redo logs, and
                             applies them after an 8-hour time lag. See
                             Scenario 10: Standby Database with a Time Lag for a
                             description of this type of configuration.

Assume that a disaster occurs in San Francisco where the primary site is located, and the primary host is damaged. One of the standby databases must be activated. You cannot assume that the database administrator (DBA) who set up the multiple standby database configuration is available to decide which standby database to fail over to. Therefore, it is imperative to have a disaster recovery plan at each standby site, as well as at the primary site. Each member of the disaster recovery team needs to know about the disaster recovery plan and be aware of the procedures to follow. This scenario identifies the kind of information that the person who is making the decision would need when deciding which standby database to activate. One method of conveying information to the disaster recovery team is to include a ReadMe file at each standby site.

The ReadMe file at each site should describe how to:

The following example shows the contents of a sample ReadMe file:

----------------Standby Database Disaster Recovery ReadMe File----------------

Warning:
********************************************************************************
Perform the steps in this procedure only if you are responsible for failing over 
to a standby database after the primary database fails.

If you perform the steps outlined in this file unnecessarily, you may corrupt 
the entire database system.
********************************************************************************

Multiple Standby Database Configuration:

No.     Location       Type     IP Address
--- --------------- --------- --------------
 1   San Francisco   Primary   128.1.124.25 
 2   San Francisco   Standby   128.1.124.157
 3   Boston          Standby   136.132.1.55
 4   Los Angeles     Standby   145.23.82.16
 5   San Francisco   Standby   128.1.135.24

You are in system No. 3, which is located in Boston.

Perform the following steps to fail over to the most up-to-date and available 
standby database:

1.  Log on to the local standby database as a DBA.

    a)  Log on with the following user name and password:

               username: Standby3
               password: zkc722Khn

    b)  Invoke SQL*Plus as follows:

        % sqlplus
     
    c)  Connect as the DBA as follows:

        CONNECT sys/s23LsdIc AS SYSDBA

2.  Connect to as many remote systems as possible. You can connect to a maximum
    of four systems. System 4 does not have a firewall, so you can connect to it
    directly. Systems 1, 2, and 5 share the same firewall host.  You need to go
    to the firewall host first and then connect to each system.  The IP address
    for the firewall host is 128.1.1.100.  Use the following user name and
    password:
               username: Disaster
               password: 82lhsIW32

3.  Log on to as many remote systems as possible with the following user names
    and passwords:

    Login information:

    No.     Location     IP Address    username   password
    --- --------------- ------------- ---------- ----------
    1   San Francisco   128.1.124.25   Oracle8i   sdd290Ec
    2   San Francisco   128.1.124.157  Standby2   ei23nJHb
    3                    (L o c a l)
    4   Los Angeles     145.23.82.16   Standby4   23HHoe2a
    5   San Francisco   128.1.135.24   Standby5   snc#$dnc

4.  Invoke SQL*Plus on each remote system you are able to log on to as follows:

    % sqlplus

5.  Connect to each remote database as follows:

    CONNECT sys/password AS SYSDBA

    The DBA passwords for each location are:

    No.     Location      Password
    --- --------------- -----------
    1   San Francisco     x2dwlsd91
    2   San Francisco     a239s1DAq
    3           (L o c a l)
    4   Los Angeles       owKL(@as23
    5   San Francisco     sad_KS13x

6.  If you are able to log on to System 1, invoke SQL*Plus and issue the
    following statements:

    SQL> SHUTDOWN IMMEDIATE;
    SQL> STARTUP PFILE=PRMYinit.ora;

    Note: If you are able to execute the STARTUP statement successfully, the
          primary database has not been damaged.  Do not continue with this
          procedure.

7.  Issue the following SQL statements on each standby database (including the
    one on this machine) that you were able to connect to:

    SQL> RECOVER MANAGED STANDBY DATABASE CANCEL;
    SQL> SHUTDOWN IMMEDIATE;
    SQL> STARTUP NOMOUNT PFILE=STBYinit.ora;
    SQL> ALTER DATABASE MOUNT STANDBY DATABASE;
    SQL> RECOVER AUTOMATIC STANDBY DATABASE;
           *** press the Return key for each of the prompts
    SQL> SELECT THREAD#, MAX(SEQUENCE#) FROM V$LOG_HISTORY GROUP BY THREAD#;

    Compare the query results of each standby database. Activate the
    standby database with the largest sequence number.

8.  Fail over to the standby database with the largest sequence number.

    On the standby database with the largest sequence number, invoke SQL*Plus
    and issue the following SQL statements:

    SQL> ALTER DATABASE ACTIVATE STANDBY DATABASE;
    SQL> SHUTDOWN IMMEDIATE;
    SQL> STARTUP PFILE=Failover.ora;

------------End of Standby Database Disaster Recovery ReadMe File-------------
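The selection rule in steps 7 and 8 of the sample ReadMe file (query every reachable standby, then activate the one with the largest applied log sequence number) can be sketched as follows; the standby names and query results are illustrative, not taken from the ReadMe:

```python
def choose_standby(max_applied):
    """max_applied maps a standby name to the MAX(SEQUENCE#) value
    returned by its V$LOG_HISTORY query. The standby that has applied
    the most redo is the one to activate, because activating it loses
    the least data."""
    return max(max_applied, key=max_applied.get)

# Illustrative results, one entry per reachable standby:
results = {'boston': 91, 'los_angeles': 88, 'san_francisco_2': 92}
print(choose_standby(results))   # san_francisco_2
```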

Scenario 6: Configuring Client Application Failover

When a standby database is activated, it becomes a production database, and is no longer capable of serving as a standby database. Client applications need to redirect their connections from the original primary database to the newly activated production database. This scenario describes the following ways to set up client failover:

Local TNS Configuration

The tnsnames.ora file can be configured for multiple addresses. In a local TNS configuration, at least one of the addresses should be the address of a standby site. Modify the tnsnames.ora file at each client site to ensure that an address for a standby site has been supplied. The tnsnames.ora file is typically located in the $ORACLE_HOME/network/admin directory. You can assign multiple addresses to one TNS name and use the FAILOVER option. See the Net8 Administrator's Guide for details about how to set multiple addresses and how to use the FAILOVER option. The following example shows an entry that has an address for a standby site in addition to the address for the primary site:

ProductDB = (   DESCRIPTION=
                (FAILOVER=ON)
                (LOAD_BALANCE=OFF)
                (ADDRESS=(PROTOCOL=tcp)(PORT=1521)(HOST=PRMYHOST.foo.com))
                (ADDRESS=(PROTOCOL=tcp)(PORT=1521)(HOST=STBYHOST.foo.com))
                (CONNECT_DATA=(SID=db1))
            )

In this example, the primary database is located at PRMYHOST.foo.com, and the standby database is located at STBYHOST.foo.com. When the client application connects to ProductDB, it tries to send a connection request to PRMYHOST.foo.com first. If there is no response, the client application tries to send another connection request to STBYHOST.foo.com. When the primary database is down and the standby database is activated, the client application can connect to the new production database automatically.

If the primary database fails after the connection has been established, the client application does not automatically redirect the existing connection to the newly activated production database. You must establish the connection to the ProductDB database again.
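The client-side behavior of this configuration (try each ADDRESS in order and use the first one that responds) can be simulated with a short sketch; the try_connect callback is a stand-in for the actual Net8 connection attempt, not a real API:

```python
def connect_with_failover(addresses, try_connect):
    """Return the first host in the address list that accepts a
    connection, mimicking FAILOVER=ON with LOAD_BALANCE=OFF: addresses
    are tried strictly in the order they appear in the description."""
    for host in addresses:
        if try_connect(host):
            return host
    raise ConnectionError('all addresses in the description failed')

# The primary is down and the standby has been activated:
up = {'PRMYHOST.foo.com': False, 'STBYHOST.foo.com': True}
host = connect_with_failover(['PRMYHOST.foo.com', 'STBYHOST.foo.com'],
                             lambda h: up[h])
print(host)   # STBYHOST.foo.com
```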

Oracle Names Server Configuration

If you are using an Oracle Names server, you can change the TNS name settings on the server. You can assign multiple addresses to one TNS name and use the FAILOVER option. See the Net8 Administrator's Guide for details about how to set multiple addresses and how to use the FAILOVER option on an Oracle Names server. The format for setting up the TNS name on the Oracle Names server is the same as the format for the local tnsnames.ora file. Therefore, the example in Local TNS Configuration also applies to the Oracle Names server configuration.

Transparent Application Failover (TAF) Configuration

The following configurations require that the client reconnect to the database server when the primary database fails over to the standby database:

However, if you are using an Oracle Call Interface (OCI) client, you can use transparent application failover (TAF). TAF is the ability of applications to automatically reconnect to the database if the connection fails. If the client application is not involved in a database transaction, then users may not notice the failure of the primary database server. See Oracle8i Designing and Tuning for Performance for details on how to configure TAF. The following example shows address information for the ProductDB database and the Standby1 database:

ProductDB=( DESCRIPTION=
            (ADDRESS=(PROTOCOL=tcp)(PORT=1521)(HOST=PRMYHOST.foo.com))
            (CONNECT_DATA=(SID=db1)(FAILOVER_MODE=(BACKUP=Standby1)
                                                  (TYPE=session)
                                                  (METHOD=basic)))
          )
Standby1 =( DESCRIPTION=
            (ADDRESS=(PROTOCOL=tcp)(PORT=1521)(HOST=STBYHOST.foo.com))
            (CONNECT_DATA=(SID=db1))
          )

Sequence of events:

  1. Client application is connected to the ProductDB database.

  2. The primary database in PRMYHOST.foo.com fails.

  3. The standby database is activated as the new production database.

  4. When the client application fails to connect to PRMYHOST.foo.com, it uses the database specified with the BACKUP parameter in the FAILOVER_MODE clause, and automatically connects to STBYHOST.foo.com.

Manual Network Configuration

Instead of setting the configurations so that the client can automatically fail over to the new production database, you can always choose to manually modify the network settings after the standby database is activated.

You can modify the local tnsnames.ora file. Redirect the TNS name pointing to the original production database to the newly activated production database. For example, assume the original TNS setting is as follows:

ProductDB=( DESCRIPTION=
               (ADDRESS=(PROTOCOL=tcp)(PORT=1521)(HOST=PRMYHOST.foo.com))
                (CONNECT_DATA=(SID=db1))
          )

When the primary database on PRMYHOST.foo.com fails, and the standby database in STBYHOST.foo.com is activated, causing it to become the new production database, you need to edit the tnsnames.ora file and change the entry to the following:

ProductDB=( DESCRIPTION=
               (ADDRESS=(PROTOCOL=tcp)(PORT=1521)(HOST=STBYHOST.foo.com))
                (CONNECT_DATA=(SID=db1))
          )

You do not need to change your client application; subsequent connections to the production database will be sent to the new production database on STBYHOST.foo.com.

If you are using an Oracle Names server, make a similar change for the corresponding entry. All clients using this Oracle Names server will send their subsequent connections to the new production database on STBYHOST.foo.com.

You can also change the settings on the DNS server. Change the settings for the domain name, which is used by the clients to locate the production database.

For example, assume the following:

Item                                Value
Domain name                         ProductDB.foo.com
tnsnames.ora entry                  ProductDB=(DESCRIPTION=
                                      (ADDRESS=(PROTOCOL=tcp)(PORT=1521)(HOST=ProductDB.foo.com))
                                      (CONNECT_DATA=(SID=db1)))
DNS server entry for domain name    ProductDB.foo.com IN A 136.1.23.15
IP address of STBYHOST.foo.com      128.3.151.63

Change the DNS server entry for the domain name in the DNS server to the following:

ProductDB.foo.com   IN  A   128.3.151.63

After you change the DNS settings, not all clients learn of the change immediately. The old DNS entry may still be cached by intermediate name servers or client resolvers, so some clients will continue to connect to the old address until the cached entries expire and are replaced with the new settings.

Scenario 7: Recovering After a Network Failure

When a standby database is in managed recovery mode, the standby database automatically applies archived redo logs as it receives them from the primary database. When the network goes down, automatic archival from the primary database to the standby database stops.

If the standby database is specified as an optional archive destination, then the primary database continues to operate normally.

When the network is up and running again, automatic archival of the archived redo logs from the primary database to the standby database resumes. However, if the standby database is specified as an optional archive destination, and a log switch occurred at the primary site, the standby database has a gap sequence for the time when the network was down. The standby database cannot apply any of the archived redo logs at the standby site until the gap sequence is resolved.

If the standby database is specified as a mandatory archive destination, then the primary database will not archive any redo logs until the network failure is resolved and the primary database is able to archive to the standby site.

The primary database may eventually stall if the network problem is not fixed in a timely manner because the primary database will not be able to switch to an online redo log that has not been archived. You can issue the following SQL query to determine whether the primary database stalled because it was not able to switch to an online redo log:

SELECT decode(count(*),0,'NO','YES') "switch_possible"
FROM v$log
WHERE archived='YES';

If the query returns YES, a log switch is possible; if it returns NO, a log switch is not possible and the primary database will stall at the next log switch.
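The DECODE expression in the query above can be mirrored in a few lines of Python. This is only an illustration of the logic, assuming rows shaped like V$LOG with an ARCHIVED column; it is not Oracle code.

```python
# Illustrative sketch (not Oracle code) of the DECODE logic in the query:
# count the online redo log groups whose ARCHIVED flag is YES; if none are
# archived, the database has no group it can switch into, so a log switch
# is not possible.

def switch_possible(v_log_rows):
    """v_log_rows: list of dicts mimicking rows of V$LOG."""
    archived_count = sum(1 for row in v_log_rows if row["archived"] == "YES")
    # DECODE(count(*), 0, 'NO', 'YES')
    return "NO" if archived_count == 0 else "YES"

# All groups still waiting to be archived: the primary stalls on the next switch.
stalled = [{"archived": "NO"}, {"archived": "NO"}, {"archived": "NO"}]
print(switch_possible(stalled))   # NO

# At least one group already archived: a switch is possible.
healthy = [{"archived": "YES"}, {"archived": "NO"}, {"archived": "NO"}]
print(switch_possible(healthy))   # YES
```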

This scenario describes how to recover after a network failure.

  1. Identify the network failure.

    The V$ARCHIVE_DEST view contains the network error and identifies which standby database cannot be reached. On the primary database, issue the following SQL statement:

    SQL> SELECT * FROM V$ARCHIVE_DEST;

    DEST_ID STATUS   BINDING   NAME_SP TARGET  REOPEN_SEC DESTINATION      FAIL_DATE FAIL_SEQUE FAIL_BLOCK ERROR
    ------- -------- --------- ------- ------- ---------- ---------------- --------- ---------- ---------- --------------------------
          1 VALID    MANDATORY SYSTEM  PRIMARY          0 /vobs/oracle/dbs                    0          0
          2 ERROR    MANDATORY SYSTEM  STANDBY          5 standby1         26-JUL-99         61          0 ORA-12541: TNS:no listener
          3 INACTIVE OPTIONAL  SYSTEM  PRIMARY          0                                     0          0
          4 INACTIVE OPTIONAL  SYSTEM  PRIMARY          0                                     0          0
          5 INACTIVE OPTIONAL  SYSTEM  PRIMARY          0                                     0          0

    The query results show there are errors archiving to the standby1 standby database.

  2. Try to resolve the network failure.

    The query results in step 1 display the error as TNS:no listener. You should check whether the listener on the standby site is started. If the listener is stopped, then start it.

  3. Prevent the primary database from stalling, if possible.

    If you are not able to solve the network problem quickly, and if the standby database is specified as a mandatory destination, you should try to prevent the database from stalling by doing one of the following:

    • Disable the mandatory archive destination:

      SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2 = DEFER;
      
      

      When the network problem is resolved, you can enable the archive destination again:

      SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2 = ENABLE;
      
      
    • Change the archive destination from mandatory to optional:

      SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2 = 'SERVICE=standby1 OPTIONAL REOPEN=60';
      
      

      When the network problem is resolved, you can change the archive destination from optional back to mandatory:

      SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2 = 'SERVICE=standby1 MANDATORY REOPEN=60';
      
      

    In some cases, you may not be the person responsible for correcting the problem. You can periodically query the V$ARCHIVE_DEST view to see if the problem has been resolved.

  4. On the primary database, archive the current redo log:

    SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;
    
    
  5. Identify and apply the logs in the gap sequence.

    When the network is up and running again, and new archived logs are transferred from the primary to the standby database, they will not be applied to the standby database until the logs in the gap sequence have been applied manually.

    On the standby database, run the following SQL script to identify the archived redo logs in the gap sequence:

    SELECT high.thread#, "LowGap#", "HighGap#"
    FROM
    (
        SELECT thread#, MIN(sequence#)-1 "HighGap#"
        FROM
        (
            SELECT a.thread#, a.sequence#
            FROM
            (
                SELECT *
                FROM v$archived_log
            ) a,
            (
                SELECT thread#, MAX(next_change#)gap1
                FROM v$log_history
                GROUP BY thread#
            ) b
            WHERE a.thread# = b.thread#
            AND a.next_change# > gap1
        )
        GROUP BY thread#
    ) high,
    
    (
        SELECT thread#, MIN(sequence#) "LowGap#"
        FROM
        (
            SELECT thread#, sequence#
            FROM v$log_history, v$datafile
            WHERE checkpoint_change# <= next_change#
            AND checkpoint_change# >= first_change#
        )
        GROUP BY thread#
    ) low
    WHERE low.thread# = high.thread#;
    
    
         THREAD#    LowGap#    HighGap# 
        ---------- ---------- ----------
                 1         90         92
    
    
  6. On the primary database, obtain the filenames of the logs in the gap sequence by performing a query on the V$ARCHIVED_LOG view as follows:

    SELECT name FROM v$archived_log 
    WHERE thread#=1 AND sequence#<=92 AND sequence#>=90;
    
    NAME
    -----------------------------------------
    /vobs/oracle/dbs/r_1_90.arc
    /vobs/oracle/dbs/r_1_91.arc
    /vobs/oracle/dbs/r_1_92.arc
    

  7. Transfer the logs in the gap sequence from the primary database to the standby database as follows:

    % rcp /vobs/oracle/dbs/r_1_90.arc STBYHOST:/fs2/oracle/stdby/stdby_1_90.arc
    % rcp /vobs/oracle/dbs/r_1_91.arc STBYHOST:/fs2/oracle/stdby/stdby_1_91.arc
    % rcp /vobs/oracle/dbs/r_1_92.arc STBYHOST:/fs2/oracle/stdby/stdby_1_92.arc
    
    
  8. On the standby database, issue the following statement to manually apply the gap sequence:

    SQL> RECOVER AUTOMATIC STANDBY DATABASE;
    
    
  9. Repeat steps 5 through 8 until there are no more gaps.

    A network failure can cause multiple gaps in the standby database. The gaps must be resolved one at a time.
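The gap-detection query in step 5 essentially compares the highest sequence already applied at the standby site with the lowest sequence received after the outage. The following Python sketch shows that arithmetic in simplified form, per thread; the input lists are hypothetical stand-ins for the V$LOG_HISTORY and V$ARCHIVED_LOG views, not real Oracle data structures.

```python
# Simplified sketch of the gap arithmetic behind step 5 (hypothetical data,
# not the real V$ views). For each thread, the gap runs from one past the
# highest sequence applied at the standby site up to one below the lowest
# sequence received after the network outage.

def find_gap(applied_seqs, received_seqs):
    """Return (low_gap, high_gap) for one thread, or None if no gap exists."""
    highest_applied = max(applied_seqs)
    low_gap = highest_applied + 1
    later = [s for s in received_seqs if s > highest_applied]
    if not later:
        return None          # nothing arrived after the outage yet
    high_gap = min(later) - 1
    if high_gap < low_gap:
        return None          # sequences are contiguous: no gap
    return (low_gap, high_gap)

# Standby applied logs through sequence 89; sequences 93 and later arrived
# after the outage, so sequences 90-92 form the gap.
print(find_gap(applied_seqs=range(1, 90), received_seqs=[93, 94, 95]))
# (90, 92)
```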

It is important to specify a local directory at the primary site as a mandatory archive destination, so that all of the archived redo logs reside on the same system as the primary database. When the primary system is unreachable, and the primary database is part of a multiple standby database configuration, you can try to identify the archived redo logs at the other standby sites.

Scenario 8: Re-Creating a Standby Database

This scenario describes the case where you have failed over to a standby database and have begun using it as a normal production database. After a period of time, you decide you want to fail back to the original production system and make the production database the standby database again. The general steps are:

  1. Create a standby database at the original primary site.

  2. Fail over to the standby database at the original primary site.

  3. Create a new standby database at the original standby site.

The detailed steps follow:

  1. Copy the original standby initialization parameter file from the original standby site to the original primary site as follows:

    % rcp STBYinit.ora PRMYHOST:fallback.ora
    
    

    The fallback.ora file will become the standby initialization parameter file for the standby database at the original primary site.

  2. On the original primary site, configure the fallback.ora file. You need to modify the following parameters:

    Parameter               Value
    LOCK_NAME_SPACE         fallback
    LOG_ARCHIVE_FORMAT      r_%t_%s.arc
    STANDBY_ARCHIVE_DEST    /vobs/oracle/dbs/
    LOG_ARCHIVE_DEST_1      /vobs/oracle/dbs/
    DB_FILE_NAME_CONVERT    ('/fs2/oracle/stdby','/vobs/oracle/dbs')
    LOG_FILE_NAME_CONVERT   ('/fs2/oracle/stdby','/vobs/oracle/dbs')

    The fallback.ora file looks as follows:

    #
    #parameter file fallback.ora
    #
    
    db_name=prod1               #The same as PRMYinit.ora
    
    control_files=/fs2/oracle/stdby/stbycf.f
    lock_name_space=fallback
    
    audit_trail=FALSE
    o7_dictionary_accessibility=FALSE
    global_names=FALSE
    db_domain=regress.rdbms.dev.us.oracle.com
    commit_point_strength=1
    
    processes=30
    sessions=30
    transactions=21
    transactions_per_rollback_segment=21
    distributed_transactions=10
    db_block_buffers=100
    shared_pool_size=4000000
    ifile=/vobs/oracle/work/tkinit.ora
    
    # specific parameters for standby database
    log_archive_format = r_%t_%s.arc
    standby_archive_dest=/vobs/oracle/dbs/
    log_archive_dest_1='LOCATION=/vobs/oracle/dbs/'
    log_archive_dest_state_1 = ENABLE
    db_file_name_convert=('/fs2/oracle/stdby','/vobs/oracle/dbs')
    log_file_name_convert=('/fs2/oracle/stdby','/vobs/oracle/dbs')
    log_archive_start=FALSE
    log_archive_trace=127
    
    
  3. On the original primary site, create or modify the primary initialization parameter file.

    You need to supply appropriate values for the LOG_ARCHIVE_DEST_1 and LOG_ARCHIVE_DEST_STATE_1 parameters.

    Note: Step 7 of Scenario 2: Creating a Standby Database on a Remote Host suggested that you make a copy of the standby database initialization parameter file. If you made a copy, then you can modify the copy in this step.

  4. On the original standby site, back up the production database data files.

  5. On the original standby site, create the standby database control file.

  6. Copy the production database data files and the standby database control file from the original standby site to the original primary site.

  7. Archive the current online redo log and shut down the production database to prevent further modifications as follows:

    SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;
    SQL> SHUTDOWN IMMEDIATE;
    
    
  8. Copy the archived redo logs that were generated after the production database data files were backed up from the original standby site to the original primary site, as follows:

    % rcp /fs2/oracle/stdby/stdby_1_102.arc PRMYHOST:/vobs/oracle/dbs/r_1_102.arc
    % rcp /fs2/oracle/stdby/stdby_1_103.arc PRMYHOST:/vobs/oracle/dbs/r_1_103.arc
    
    
  9. On the original primary site, start and mount the standby database:

    SQL> CONNECT sys/sys_password AS SYSDBA
    SQL> STARTUP NOMOUNT PFILE=fallback.ora;
    SQL> ALTER DATABASE MOUNT STANDBY DATABASE;
    
    
  10. Apply the logs in the gap sequence:

    SQL> RECOVER AUTOMATIC STANDBY DATABASE;
    
    
  11. Activate the standby database:

    SQL> ALTER DATABASE ACTIVATE STANDBY DATABASE;
    SQL> SHUTDOWN IMMEDIATE;
    
    
  12. Start the production database at the original primary site:

    SQL> STARTUP PFILE=PRMYinit.ora;
    
    
  13. Configure the network settings to enable client applications to access the production database.

  14. Create a new standby database at the original standby site. See Scenario 2: Creating a Standby Database on a Remote Host for the steps to follow to create a standby database.

Scenario 9: Standby Database with No Ongoing Recovery

This scenario describes what the DBA needs to do when archived redo logs are automatically transferred to the standby site but are not automatically applied to the standby database.

For example, suppose a standby database is set up on a host running a Web server. The workload for the host is very heavy, and you do not want to add workload to the host by automatically applying redo logs as they arrive from the primary site. You decide to let the host receive archived redo logs without applying them. Thus, the system resources for applying the redo logs will be saved. Later, if you decide to open the standby database as a read-only database to perform some queries, or activate the standby database as a production database, you can stop the Web server and apply all of the redo logs at one time.

This scenario assumes that you already have a standby database set up. Steps 1 through 10 in Scenario 2: Creating a Standby Database on a Remote Host describe how to set up a standby database.

The listener for the standby database should be started. The standby database should be started and mounted, but not in recovery mode. (Thus, the time and resources for applying redo logs will be saved.)

This scenario is similar to Scenario 2: Creating a Standby Database on a Remote Host. In Scenario 2, the standby database is in managed recovery mode. When a standby database is in managed recovery mode, it automatically applies archived redo logs received from the primary site.

Managing a Standby Database with No Ongoing Recovery

When the archived redo logs are not immediately applied to the standby database, there is no ongoing recovery at the standby site. This section describes the tasks a DBA needs to consider when managing a standby database with no ongoing recovery. This section covers the following topics:

Adding a Datafile to the Primary Database

When a datafile is added to the primary site, you need to decide what action to take on the standby site. You have the following options:

Responding to a Change in the Primary Database Control File

When the control file on the primary site has been altered, if you want to account for it on the standby site, you should apply the redo logs at the standby site by issuing the following statement:

SQL> RECOVER AUTOMATIC STANDBY DATABASE;


After you apply the redo logs at the standby site, refresh the standby database control file. See Refreshing the Standby Database Control File for the steps.

Responding When the NOLOGGING Clause Is Specified

In order to recover after the NOLOGGING clause was specified at the primary site, you must do the following:

  1. Apply the redo logs at the standby site by issuing the following statement:

    SQL> RECOVER AUTOMATIC STANDBY DATABASE;
    
    
  2. Copy the affected datafiles and standby database control file to the standby site.

    See Scenario 4: Recovering After the NOLOGGING Clause Was Specified for the procedure to follow.

    If you can afford to lose the changes incurred by specifying NOLOGGING at the primary site, you may choose to do nothing.

Responding to Network Problems

If this standby site is a mandatory archive destination, you do not need to do anything at the standby site; the primary database does not archive past an unreachable mandatory destination, so no gap forms. If the standby site is optional, you should resolve any gaps immediately by applying the redo logs. It is important to resolve gaps in a timely manner; otherwise, you run the risk of not being able to access the archived redo logs when the primary site is unreachable.

Activating a Standby Database with No Ongoing Recovery

Before you activate the standby database, you should apply all applicable redo logs. You must resolve any gaps in the redo log sequence before you activate the standby database, as outlined in the following steps:

    1. Run the following SQL script to identify the archived redo logs in the gap sequence:

      SELECT high.thread#, "LowGap#", "HighGap#"
      FROM
      (
          SELECT thread#, MIN(sequence#)-1 "HighGap#"
          FROM
          (
              SELECT a.thread#, a.sequence#
              FROM
              (
                  SELECT *
                  FROM v$archived_log
              ) a,
              (
                  SELECT thread#, MAX(sequence#)gap1
                  FROM v$log_history
                  GROUP BY thread#
              ) b
              WHERE a.thread# = b.thread#
              AND a.sequence# > gap1
          )
          GROUP BY thread#
      ) high,
      
      (
          SELECT thread#, MIN(sequence#) "LowGap#"
          FROM
          (
              SELECT thread#, sequence#
              FROM v$log_history, v$datafile
              WHERE checkpoint_change# <= next_change#
              AND checkpoint_change# >= first_change#
          )
          GROUP BY thread#
      ) low
      WHERE low.thread# = high.thread#;
          
      
      THREAD#    LowGap#    HighGap#  
      ---------- ---------- ----------
               1         90         92
      
      

      The gap sequence is the LowGap# to the HighGap#. In this example, the gap sequence is 90, 91, and 92 for thread 1. If no gap sequence is selected in this step, go to step 5.

    2. Identify the archived redo logs that need to be transferred from the primary site to the standby site.

      Obtain the filenames of the logs in the gap sequence by performing a query on the V$ARCHIVED_LOG view as follows:

      SELECT name FROM v$archived_log 
      WHERE thread#=1 AND sequence#<=92 AND sequence#>=90;
      
      NAME
      -----------------------------------------
      /vobs/oracle/dbs/r_1_90.arc
      /vobs/oracle/dbs/r_1_91.arc
      /vobs/oracle/dbs/r_1_92.arc
      

    3. Transfer the logs in the gap sequence from the primary database to the standby database as follows:

      % rcp /vobs/oracle/dbs/r_1_90.arc STBYHOST:/fs2/oracle/stdby/stdby_1_90.arc
      % rcp /vobs/oracle/dbs/r_1_91.arc STBYHOST:/fs2/oracle/stdby/stdby_1_91.arc
      % rcp /vobs/oracle/dbs/r_1_92.arc STBYHOST:/fs2/oracle/stdby/stdby_1_92.arc
      
      

      It is important to specify a local directory at the primary site as a mandatory archive destination so that all of the archived redo logs reside on the same system as the primary database. When the primary system is unreachable, and the primary database is part of a multiple standby database configuration, you can try to identify the archived redo logs at the other standby sites.

    4. On the standby database, issue the following statement to manually apply the logs in the gap sequence:

      SQL> RECOVER AUTOMATIC STANDBY DATABASE;
      
      

      Repeat steps 1 through 4 until there are no more gaps.

    5. Activate the standby database by issuing the following statement:

      SQL> ALTER DATABASE ACTIVATE STANDBY DATABASE;
      
      
    6. Shut down the standby database instance as follows:

      SQL> SHUTDOWN IMMEDIATE;
      
      
    7. Start the new production instance.

      If necessary, build the parameter file for the new production instance. You can build it from the parameter file for the standby database. Then, you can issue the following statement at the standby database:

      SQL> STARTUP PFILE=Failover.ora;
      

Scenario 10: Standby Database with a Time Lag

In managed recovery mode, the standby database automatically applies redo logs when they arrive from the primary database. In some cases, however, you may not want the logs to be applied immediately, because you want to create a time lag between the archiving of a redo log at the primary site and the application of the log at the standby site. A time lag can protect against the transfer of corrupted or erroneous data from the primary site to the standby site.

For example, suppose you run a batch job every night on the primary database. Unfortunately, you accidentally ran the batch job twice, and you did not realize the mistake until the batch job completed for the second time. Ideally, you need to roll back the database to the point in time before the batch job began. A primary database that has a standby database with a time lag (for example, 8 hours) could help you to recover. You could activate the standby database with the time lag and use it as the new production database.

To create a standby database with a time lag, use manual recovery mode instead of managed recovery mode. The archived redo log files are still automatically transferred to the standby site, just as with managed recovery mode, but they are not immediately applied to the standby database. You can run a script periodically that checks whether each redo log is old enough to meet your desired time lag, moves the redo logs that are old enough to the manual recovery directory, and then manually applies them.

This scenario uses a 4-hour time lag in its examples and covers the following topics:

Readers of this scenario are assumed to be familiar with the procedures for creating a typical standby database; those details are omitted from the steps outlined here. Refer to Scenario 2: Creating a Standby Database on a Remote Host for details of normal standby database setup.

Creating a Standby Database with a Time Lag

Perform the following steps to create a standby database with a time lag:

    1. Back up the datafiles and create the standby database control file at the primary site.

    2. Transfer the datafiles and the standby database control file to the standby site.

    3. Configure the tnsnames.ora and listener.ora network files.

    4. Start the listener on the standby site.

    5. Configure the standby initialization parameter file.

      Edit the STANDBY_ARCHIVE_DEST and LOG_ARCHIVE_DEST_1 parameters.

      The LOG_ARCHIVE_DEST_1 parameter specifies the directory that manual recovery reads archived redo logs from; you must use this directory when performing manual recovery. The STANDBY_ARCHIVE_DEST parameter specifies the directory where archived redo logs arrive from the primary site; managed recovery reads from that directory.

      For example, assume you have set these parameters to the following values:

      STANDBY_ARCHIVE_DEST=/fs2/oracle/stdby_log/
      LOG_ARCHIVE_DEST_1    ='LOCATION=/fs2/oracle/stdby/'
      
      
    6. Mount the standby database. Do not open it or put it in managed recovery mode.

    7. Configure the primary initialization parameter file.

    8. Identify and apply the logs in the gap sequence.

    9. Write a script that you can run periodically to check the log files in the managed recovery directory and move those whose timestamps are older than the specified lag to the manual recovery directory. If any redo logs were moved, start manual recovery and apply the newly moved logs.

      The following Perl script performs the tasks outlined in this step:

      #!/usr/local/bin/perl
      
      #How many hours the standby database should lag behind the primary database
      $LAG_HOUR = 4;
      
      #The manual recovery directory
      $DEST_DIR = '/fs2/oracle/stdby/';
      
      #The flag for whether there are new logs to be applied.
      $needApply = 0;
      
      #Check the managed recovery directory
      while ( </fs2/oracle/stdby_log/*.arc> ) {
          # Get the timestamp of the file
          $file_time = (stat($_))[9];
          # See if the file is "old enough"
          if ( time-$file_time > $LAG_HOUR*60*60 ) {
              print  "mv $_ $DEST_DIR\n";
              system "mv $_ $DEST_DIR";
              $needApply = 1;
           }
      }
      #If redo logs were moved in this round, apply them
      if ( $needApply == 1 ) {
          system "/usr/Lagged_Standby/ApplyLog";
      }
      

      The SHELL script (/usr/Lagged_Standby/ApplyLog) used to apply the redo logs consists of the following:

      sqlplus internal << EOF
      
          recover automatic standby database;
          cancel
          exit
      
      EOF
      
      
    10. Refer to your platform-specific documentation for information on how to create a job that is triggered at specific times throughout the day.

      For example, in UNIX, you can write a CRON job file. Issue the man crontab command at the UNIX command shell for help on writing a CRON job file.

      Decide how frequently to schedule the job. The more frequently the job runs, the finer the granularity of the time lag, and the fewer logs need to be applied at one time. The minimum granularity is one redo log file.

      For example, if you want the job to run every 10 minutes, you can write the following CRON job file (assuming the script written in step 9 is saved as /usr/Lagged_Standby/lag4.pl):

      0,10,20,30,40,50 * * * * /usr/Lagged_Standby/lag4.pl

      Suppose the preceding CRON job file is /usr/Lagged_Standby/jobs.cron. You can schedule the CRON job by issuing the following command:

      % crontab /usr/Lagged_Standby/jobs.cron

      The lag4.pl script will then run every 10 minutes, and the standby database will lag behind the primary database by at least the specified time (in this example, 4 hours).
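The age test the Perl script applies to each archived log can also be expressed in Python. The following sketch mirrors the same check with hypothetical timestamp values; it is an illustration of the arithmetic, not part of the procedure.

```python
import time

# Python rendering of the age check in the Perl script above: a log file
# is eligible to move to the manual recovery directory once its
# modification time is more than LAG_HOUR hours in the past.

LAG_HOUR = 4

def old_enough(file_mtime, now=None):
    """True if the file's timestamp is older than the configured lag."""
    if now is None:
        now = time.time()
    return now - file_mtime > LAG_HOUR * 60 * 60

# Hypothetical clock value for illustration.
now = 1_000_000
print(old_enough(now - 5 * 3600, now=now))  # True: 5 hours old, lag is 4
print(old_enough(now - 3 * 3600, now=now))  # False: only 3 hours old
```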

Managing a Standby Database with a Time Lag

The runtime scenario of a standby database with a time lag differs slightly from that of a standby database with no time lag, because the application of the archived redo logs lags behind.

As the DBA, you need to keep the following tasks in mind when managing a standby database with a time lag:

    • When a datafile is added to the primary site, you need to copy the new datafile to the standby site.

    • Before you refresh the standby database control file, you must apply all of the archived redo logs that have been transferred to the standby site, but have not been applied to the standby database. Perform the following steps:

      1. Manually move all archived redo logs to the manual recovery directory, as the following example shows:

        % mv /fs2/oracle/stdby_log/*.arc /fs2/oracle/stdby/
        
        
      2. Apply all of the archived redo logs as follows:

        SQL> RECOVER AUTOMATIC STANDBY DATABASE;
        
        
      3. Refresh the standby database control file.

      In this case, the standby database control file is updated, but the time lag disappears; it is reestablished only after the specified lag time (for example, 4 hours) has elapsed again.

    • To recover after the NOLOGGING clause was specified at the primary site, you must apply all of the archived redo logs first. If you can afford to lose the changes incurred by specifying NOLOGGING at the primary site, you can choose to do nothing.

      1. Manually move all archived redo logs to the manual recovery directory. For example:

        % mv /fs2/oracle/stdby_log/*.arc /fs2/oracle/stdby/
        
        
      2. Apply all of the archived redo logs:

        SQL> RECOVER AUTOMATIC STANDBY DATABASE;
        
        
      3. Copy the affected datafiles and standby database control file.

      In this case, the standby database control file is updated, but the time lag disappears; it is reestablished only after the specified lag time (for example, 4 hours) has elapsed again.

    • When there are network problems:

      If this standby site is mandatory, you do not need to do anything at the standby site. If the standby site is optional, you can resolve the gaps immediately by performing the following steps:

      1. Manually move all archived redo logs to the manual recovery directory. For example:

        % mv /fs2/oracle/stdby_log/*.arc /fs2/oracle/stdby/
        
        
      2. Apply all of the archived redo logs:

        SQL> RECOVER AUTOMATIC STANDBY DATABASE;
        
        
      3. Identify and apply the logs in the gap sequence.

      In this case, the standby database control file is updated, but the time lag disappears; it is reestablished only after the specified lag time (for example, 4 hours) has elapsed again.

Rolling Back the Database to a Specified Time

In the case described at the beginning of this scenario, you may want to take advantage of the time lag to obtain a production database whose state is a specified time (for example, 4 hours) behind that of the current production database. You can activate the time-lagged standby database as follows:

    1. Activate the standby database by issuing the following statement:

      SQL> ALTER DATABASE ACTIVATE STANDBY DATABASE;
      
      
    2. Shut down the standby database instance:

      SQL> SHUTDOWN IMMEDIATE;
      
      
    3. Start the new production instance:

      SQL> STARTUP PFILE=Failover.ora;
      

Bypassing the Time Lag and Activating the Standby Database

If you do not want to take advantage of the time lag, you can activate the standby database as a normal standby database with no time lag, as follows:

    1. Manually move all archived redo logs to the manual recovery directory. For example:

      % mv /fs2/oracle/stdby_log/*.arc /fs2/oracle/stdby/
      
      
    2. Apply all of the archived redo logs:

      SQL> RECOVER AUTOMATIC STANDBY DATABASE;
      
      
    3. Activate the standby database by issuing the following statement:

      SQL> ALTER DATABASE ACTIVATE STANDBY DATABASE;
      
      
    4. Shut down the standby database instance:

      SQL> SHUTDOWN IMMEDIATE;
      
      
    5. Start the new production instance:

      SQL> STARTUP PFILE=Failover.ora;
      
      

Copyright © 1999 Oracle Corporation.

All Rights Reserved.
