Oracle9i Data Guard Concepts and Administration
Release 2 (9.2)

Part Number A96653-01

10
Data Guard Scenarios

This chapter is divided into the following sections:

10.1 Physical Standby Database Scenarios

This section contains the following scenarios relating to physical standby databases:

10.1.1 Scenario 1: Creating a Physical Standby Database on the Same System

This scenario describes the creation of a physical standby database standby1 on the same system as the primary database primary1. This is a UNIX system with three file systems, each mounted on a separate disk configuration on a different controller. By placing the physical standby database on a different file system from the primary database, you protect the primary database from a hard disk failure. By running the same-site standby database in managed recovery mode, you can keep it continuously up-to-date.

After you set up the physical standby database on the local site, you plan to create a physical standby database on a remote system for total disaster protection. In this way, even if all disks of the primary database fail or are destroyed in a disaster, you can fail over to the remote standby database and keep the database open.

Step 1 Plan the physical standby database.

Because the site uses three file systems, each on its own set of disks with its own controller, you decide to maintain the primary database files on the first file system, the standby database files on the second file system, and the ORACLE_HOME binaries on the third file system. If the primary database disks fail, you can switch to the standby database; if the ORACLE_HOME disks fail, you can switch to the remote standby database.

To host the physical standby database on the same system as the primary database, you must set the following parameters in the standby database initialization parameter file:

Because the primary database is shut down every Sunday for an hour for maintenance, you decide to use that time to make a closed, consistent backup. You can then restart the database while you make the necessary configurations for the standby database.

Step 2 Create the physical standby database.

The next step in the procedure is to create the backup that will form the basis for the physical standby database. You know that you can use either an inconsistent or consistent backup, but because the database is shut down every Sunday for maintenance, you decide to make a consistent backup then and use it for the standby database.

  1. Determine the database files.

    On Sunday, before shutting down the primary database, you query the database to determine which datafiles it contains:

    SQL> SELECT NAME FROM V$DATAFILE;
    NAME                                                                            
    ----------------------------------------------------------------------------
    /fs1/dbs/tbs_01.dbf  
    /fs1/dbs/tbs_02.dbf  
    /fs1/dbs/tbs_11.dbf  
    /fs1/dbs/tbs_12.dbf  
    /fs1/dbs/tbs_21.dbf  
    /fs1/dbs/tbs_22.dbf  
    /fs1/dbs/tbs_13.dbf  
    /fs1/dbs/tbs_23.dbf  
    /fs1/dbs/tbs_24.dbf  
    /fs1/dbs/tbs_31.dbf  
    /fs1/dbs/tbs_32.dbf  
    /fs1/dbs/tbs_41.dbf  
    /fs1/dbs/tbs_42.dbf 
    /fs1/dbs/tbs_51.dbf 
    /fs1/dbs/tbs_52.dbf 
    /fs1/dbs/tbs_03.dbf 
    /fs1/dbs/tbs_14.dbf 
    /fs1/dbs/tbs_25.dbf 
    /fs1/dbs/tbs_33.dbf 
    /fs1/dbs/tbs_43.dbf 
    /fs1/dbs/tbs_53.dbf 
    21 rows selected.
    
  2. Back up the datafiles.

    After determining which datafiles are in the primary database, you shut down the database with the IMMEDIATE option:

    SQL> SHUTDOWN IMMEDIATE;
    

    You can copy the datafiles from the primary file system to the physical standby file system. Because the transfer of datafiles can take a long time, you may want to copy the datafiles and then proceed to other tasks (such as network configuration). For example, enter the following at the UNIX command shell:

    % cp /fs1/dbs/tbs* /fs2/dbs
    
    

    After you perform some other routine maintenance operations, restart the database as follows:

    SQL> STARTUP PFILE=PRMYinit.ora;
    
  3. Ensure that the primary database is in ARCHIVELOG mode and that archiving is enabled.

    You can confirm the primary database is in ARCHIVELOG mode either by viewing the output from the SQL*Plus ARCHIVE LOG LIST command or by querying the V$DATABASE view.

  4. Create the physical standby database control file.

    After a few minutes, you create the standby database control file in the same directory in which you stored the consistent backup:

    SQL> ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/fs2/dbs/cf1.ctl';
    
    
Step 3 Configure Oracle Net.

To run a physical standby database in a Data Guard environment, you must configure an Oracle Net connection between the primary and physical standby databases so that you can archive the redo logs to the standby service.

You use the IPC protocol to connect the primary database to the physical standby database because both databases are on the same site. Because you are using the local naming method, you must create new entries in the tnsnames.ora file. You must also add corresponding entries in the listener.ora file.

  1. Configure the tnsnames.ora file.

    Currently, only one service name entry exists in your configuration, a TCP/IP connection to the primary1 database.

    Using Oracle Net Manager, define an IPC connection between the primary and the standby database using standby1 as the service name, stdby1 as the SID, and kstdby1 as the IPC key.

  2. Configure the listener.ora file.

    Using Oracle Net Manager, modify the listener and add a listening address using IPC as the protocol and kstdby1 as the IPC key. Add your standby database stdby1 as a database service for this listener. Alternatively, you can add a new listener.

    See Also:

    Oracle9i Net Services Administrator's Guide

Step 4 Configure the primary database parameter file.

Now that you have configured the network files, you can edit the primary database initialization parameter file. The primary database is now up and running, so these changes take effect only when you restart the instance or issue ALTER SESSION or ALTER SYSTEM statements.

The only changes you need to make to the file involve archiving to the standby service. Currently, the primary database initialization parameter file looks as follows:

#
#parameter file PRMYinit.ora
#
db_name=primary1
control_files=(/fs1/dbs/cf1.ctl,/fs1/dbs/cf2.ctl)
compatible=9.0.1.0.0
log_archive_start = true
log_archive_dest_1 = 'LOCATION=/fs1/arc_dest/ MANDATORY REOPEN=60'
log_archive_dest_state_1 = ENABLE
log_archive_format = log_%t_%s.arc
audit_trail=FALSE
o7_dictionary_accessibility=false
global_names=false
db_domain=us.oracle.com
remote_login_passwordfile = exclusive

# default parameters for instance 1
processes=30
sessions=30
transactions=21
transactions_per_rollback_segment=21
db_block_buffers=1000
db_files=200
shared_pool_size=10000000
  1. Specify standby archive destinations.

    Currently, you archive to only one location: a local directory. Because you want to maintain the local standby database in managed recovery mode, you must specify a new archiving location using a service name.

    Open the primary database initialization parameter file with a text editor and examine the current archiving location and format:

    log_archive_dest_1 = 'LOCATION=/fs1/arc_dest/ MANDATORY REOPEN=60'
    log_archive_dest_state_1 = ENABLE
    log_archive_format = log_%t_%s.arc
    
    Parameter/Option          Meaning
    ------------------------  ----------------------------------------------
    LOG_ARCHIVE_DEST_1        Indicates an archiving destination.
    LOCATION                  Indicates a local directory.
    LOG_ARCHIVE_DEST_STATE_1  Indicates the state of the LOG_ARCHIVE_DEST_1
                              archiving destination.
    ENABLE                    Indicates that the Oracle database server can
                              archive to the destination.
    LOG_ARCHIVE_FORMAT        Indicates the format for filenames of archived
                              redo logs.

  2. Because you want to archive to the standby database with service standby1, you edit the file, adding the following entries:
    log_archive_dest_2 = 'SERVICE=standby1 OPTIONAL REOPEN=180'
    log_archive_dest_state_2 = ENABLE
    
    Parameter/Option          Meaning
    ------------------------  ----------------------------------------------
    LOG_ARCHIVE_DEST_2        Indicates a new archiving destination.
                              LOG_ARCHIVE_DEST_1 is already reserved for
                              local archiving to /fs1/arc_dest/.
    SERVICE                   Indicates the service name of the standby
                              database.
    OPTIONAL                  Indicates that the Oracle database server can
                              reuse online redo logs even if this
                              destination fails.
    REOPEN                    Indicates how many seconds the archiving
                              process waits before reattempting to archive
                              to a previously failed destination.
    LOG_ARCHIVE_DEST_STATE_2  Indicates the state of the LOG_ARCHIVE_DEST_2
                              archiving destination.
    ENABLE                    Indicates that the Oracle database server can
                              archive to the destination.

    After editing the primary database initialization parameter file, create a copy for use by the standby database:

    % cp /fs1/PRMYinit.ora /fs3/oracle/dbs/STBYinit.ora
    

    If the primary database initialization parameter file contains the IFILE parameter, you also need to copy the file referred to by the IFILE parameter to the standby site and, if necessary, make appropriate changes to it.
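The IFILE caveat above can be automated when staging parameter files. The following is a minimal sketch (the file names are stand-ins, not part of this configuration) that extracts the IFILE target from a parameter file so it can be copied along with the main file:

```shell
# Build a small illustrative parameter file; a real file would be the
# PRMYinit.ora described above.
cat > /tmp/PRMYinit.ora <<'EOF'
db_name=primary1
ifile=/tmp/common.ora
log_archive_format = log_%t_%s.arc
EOF
touch /tmp/common.ora

# Pull out the IFILE target, if any, so it can be copied to the standby
# site together with the main parameter file.
ifile=$(sed -n 's/^ifile *= *//p' /tmp/PRMYinit.ora)
if [ -n "$ifile" ]; then
    echo "also copy: $ifile"
fi
```

In practice you would follow the echo with the same cp (or rcp) used for the main parameter file, after adjusting any paths inside the included file for the standby site.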

Step 5 Configure the physical standby database parameter file.

You know that the initialization parameters shown in Table 10-1 play a key role in the standby database recovery process, and decide to edit them.

Table 10-1  Configuring Standby Database Initialization Parameters
COMPATIBLE
    This parameter must be the same at the primary and standby databases.
    Because it is already set to 9.0.1.0.0 in the primary database parameter
    file, you can leave the standby setting as it is.

CONTROL_FILES
    This parameter must be different at the primary and standby databases.
    You decide to locate the control files in the /fs2/dbs directory.

DB_FILE_NAME_CONVERT
    The standby datafile names must be converted so that the standby
    database can correctly access its datafiles, and this parameter is a
    convenient way to do so. If more than one mapping from primary datafiles
    to standby datafiles is needed, specify multiple mapping pairs as values
    to this parameter. For this example, you decide to map primary datafiles
    from /fs1/dbs to /fs2/dbs, from /fs1/dbs2 to /fs2/dbs, and from
    /fs1/dbs3 to /fs2/dbs.

DB_FILES
    DB_FILES must be the same at both databases so that the standby database
    allows the same number of files as the primary database. Consequently,
    you leave this parameter alone. Note that an instance cannot mount a
    database unless DB_FILES is equal to or greater than MAXDATAFILES.

DB_NAME
    This value should be the same as the DB_NAME value in the primary
    database parameter file. Consequently, you leave this parameter alone.

FAL_SERVER
    This parameter specifies the net service name that the standby database
    should use to connect to the FAL server. You decide to set the name to
    primary1.

FAL_CLIENT
    This parameter specifies the net service name that the FAL server should
    use to connect to the standby database. You decide to set the name to
    standby1.

LOCK_NAME_SPACE
    This parameter specifies the name space that the distributed lock
    manager (DLM) uses to generate lock names. Set this value when the
    standby and primary databases share the same site. You decide to set the
    name to standby1.

LOG_ARCHIVE_DEST_1
    This parameter specifies the location of the archived redo logs. You
    must use this directory when performing manual recovery. You decide to
    set the value to /fs2/arc_dest/.

LOG_FILE_NAME_CONVERT
    The online redo log filenames must also be converted so that the standby
    database can correctly access its online log files. Even though the
    online logs are not used until you fail over to the standby database, it
    is good practice to convert their names. As with DB_FILE_NAME_CONVERT,
    you can specify more than one mapping pair as values to this parameter.
    For this example, you decide to map the primary online log files in
    /fs1/dbs to the standby location /fs2/dbs.

STANDBY_ARCHIVE_DEST
    The Oracle database server uses this value to create the names of the
    logs received from the primary site. You decide to set it to
    /fs2/arc_dest/.
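The DB_FILE_NAME_CONVERT pairs are tried in order, and the first matching source prefix is substituted. The following shell sketch illustrates that first-match semantics with the mappings from this example (it is an illustration, not how the server implements the parameter; the longer prefixes are listed first here so that /fs1/dbs does not shadow /fs1/dbs2 and /fs1/dbs3):

```shell
# Rewrite a primary datafile name using the mapping pairs from the
# example above:
#   /fs1/dbs2 -> /fs2/dbs, /fs1/dbs3 -> /fs2/dbs, /fs1/dbs -> /fs2/dbs
# Pairs are tried in order; the first matching prefix is substituted.
convert_name() {
    f=$1
    for pair in '/fs1/dbs2:/fs2/dbs' '/fs1/dbs3:/fs2/dbs' '/fs1/dbs:/fs2/dbs'; do
        src=${pair%%:*}
        dst=${pair#*:}
        case $f in
            "$src"*) echo "$dst${f#"$src"}"; return ;;
        esac
    done
    echo "$f"    # no pair matches: the name is unchanged
}

convert_name /fs1/dbs/tbs_01.dbf     # -> /fs2/dbs/tbs_01.dbf
convert_name /fs1/dbs2/tbs_60.dbf    # -> /fs2/dbs/tbs_60.dbf
```

The same first-match logic applies to LOG_FILE_NAME_CONVERT.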

Edit the standby database initialization parameter file as follows:

#
#parameter file STBYinit.ora
#
db_name = primary1                #The same as PRMYinit.ora
control_files = (/fs2/dbs/cf1.ctl)
compatible = 9.0.1.0.0
log_archive_start = true
log_archive_dest_1='LOCATION=/fs2/arc_dest/'
log_archive_dest_state_1 = ENABLE
log_archive_format = log_%t_%s.arc
standby_archive_dest = /fs2/arc_dest/
db_file_name_convert = ('/fs1/dbs','/fs2/dbs',
                        '/fs1/dbs2','/fs2/dbs',
                        '/fs1/dbs3','/fs2/dbs')
log_file_name_convert = ('/fs1/dbs','/fs2/dbs') 
lock_name_space = standby1                             
fal_server=primary1
fal_client=standby1
audit_trail=false
o7_dictionary_accessibility=false
global_names=false
db_domain=us.oracle.com
remote_login_passwordfile = exclusive

# default parameters for instance 1
processes=30
sessions=30
transactions=21
transactions_per_rollback_segment=21
db_block_buffers=1000
db_files=200
shared_pool_size=10000000
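The log_archive_format value in this file controls archived redo log filenames: %t expands to the redo thread number and %s to the log sequence number (%T and %S produce zero-padded forms, not shown here). A minimal sketch of that expansion, with made-up thread and sequence values:

```shell
# Expand a log_archive_format template the way the archiver names files:
# %t is the thread number, %s is the log sequence number.
format='log_%t_%s.arc'
thread=1
sequence=101

name=$(echo "$format" | sed "s/%t/$thread/; s/%s/$sequence/")
echo "$name"    # -> log_1_101.arc
```

With this format, the logs written to /fs2/arc_dest/ are named log_1_101.arc, log_1_102.arc, and so on.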
Step 6 Start the standby database in preparation for managed recovery.

Now that you have configured all network and parameter files, you can enable archiving to the standby database.

  1. Set the ORACLE_SID environment variable to the SID value (stdby1) that you specified in the tnsnames.ora file on the primary site and in the listener.ora file on the standby site, as follows:
    % setenv ORACLE_SID stdby1
    
  2. Start the instance.

    First, you start the standby database instance without mounting the standby database control file, as the following example shows:

    SQL> CONNECT SYS/CHANGE_ON_INSTALL@standby1 AS SYSDBA
    SQL> STARTUP NOMOUNT PFILE=/fs3/oracle/dbs/STBYinit.ora;
    SQL> ALTER DATABASE MOUNT STANDBY DATABASE;
    
  3. Enable changes to the primary database parameter file.

    Finally, you enable the changes you made to the primary database parameter file so that the standby database can begin receiving archived redo logs:

    SQL> CONNECT SYS/CHANGE_ON_INSTALL@primary1 AS SYSDBA
    SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2 = 'SERVICE=standby1
      2> OPTIONAL REOPEN=180';
    SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2 = ENABLE;
    
Step 7 Place the standby database in managed recovery mode.

You can now start managed recovery using the ALTER DATABASE RECOVER MANAGED STANDBY DATABASE statement. You decide to use the TIMEOUT option of the RECOVER statement to specify a 20-minute interval for log apply services to wait for log transport services to write the requested archived log to the directory of the standby database control file. If the requested archived log is not written to that directory within the specified interval, the recovery operation is canceled.

While connected to the standby database using SQL*Plus, place the standby database in managed recovery mode:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE TIMEOUT 20;

The standby database is now in managed recovery. When you start managed recovery, log apply services automatically identify and resolve any archive gaps that may exist. As the primary database archives redo logs to the standby site, the standby database automatically applies them.
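The TIMEOUT behavior can be pictured as a poll-with-deadline loop. The sketch below only illustrates the semantics (the file name and the 3-second deadline are made up; managed recovery performs this wait internally, with TIMEOUT expressed in minutes):

```shell
# Wait for an expected archived log to appear, giving up after a
# deadline, the way RECOVER ... TIMEOUT cancels recovery when the
# requested log does not arrive in time.
wait_for_log() {
    log=$1
    deadline=$2          # seconds here, standing in for TIMEOUT minutes
    elapsed=0
    while [ "$elapsed" -lt "$deadline" ]; do
        if [ -f "$log" ]; then
            echo "applying $log"
            return 0
        fi
        sleep 1
        elapsed=$((elapsed + 1))
    done
    echo "timeout: recovery canceled" >&2
    return 1
}

touch /tmp/log_1_101.arc        # simulate the archived log arriving
wait_for_log /tmp/log_1_101.arc 3
```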

10.1.2 Scenario 2: Creating a Physical Standby Database on a Remote Site

This scenario describes the creation of a physical standby database standby1 on a remote site. The following assumptions are being made:

Step 1 Back up the primary database datafiles.

Create the backup that will form the basis for the physical standby database.

  1. Query the primary database to determine the datafiles. Query the V$DATAFILE view to obtain a list of the primary database datafiles, as the following example shows:
    SQL> SELECT NAME FROM V$DATAFILE;
    NAME
    -----------------------------------------------------------------------
    /oracle/dbs/dbf_1.dbf
    1 row selected.
    
  2. Shut down the primary database to make a consistent backup of the datafiles:
    SQL> SHUTDOWN IMMEDIATE;
    
  3. Copy the primary database datafiles to a temporary location (/backup):
    % cp /oracle/dbs/dbf_1.dbf /backup
    
  4. Reopen the primary database as follows:
    SQL> STARTUP PFILE=PRMYinit.ora;
    
Step 2 Create the physical standby database control file.
  1. Before you create the standby database control file, ensure that the primary database is in ARCHIVELOG mode and that automatic archiving is enabled. Issue the SQL*Plus ARCHIVE LOG LIST command, as follows:
    SQL> ARCHIVE LOG LIST
    

    If the output from the ARCHIVE LOG LIST command displays "No Archive Mode," perform the following steps:

    1. Shut down the primary database as follows:
      SQL> SHUTDOWN IMMEDIATE;
      
    2. Start and mount the primary database instance without opening it:
      SQL> STARTUP MOUNT PFILE=PRMYinit.ora;
      
    3. Set the ARCHIVELOG mode as follows:
      SQL> ALTER DATABASE ARCHIVELOG;
      
    4. Open the primary database:
      SQL> ALTER DATABASE OPEN;
      
  2. Create the standby database control file by issuing the following statement:
    SQL> ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/backup/stbycf.ctl';
    

    The standby database control file and the primary database datafiles are in the same temporary location at the primary site to make copying to the standby site easier.
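The ARCHIVE LOG LIST check at the start of this step can be scripted against saved command output. The sample text below is an assumption for illustration; in practice you would capture the output from SQL*Plus:

```shell
# Saved ARCHIVE LOG LIST output (illustrative).
cat > /tmp/arclist.txt <<'EOF'
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /oracle/arc_dest/
EOF

# If the output reports "No Archive Mode", the ARCHIVELOG steps above
# (shutdown, startup mount, ALTER DATABASE ARCHIVELOG, open) are needed
# before creating the standby control file.
if grep -q 'No Archive Mode' /tmp/arclist.txt; then
    echo "must enable ARCHIVELOG mode first"
else
    echo "ARCHIVELOG mode already enabled"
fi
```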

Step 3 Transfer the datafiles and control file to the physical standby site.

Transfer the primary database datafiles and the standby control file from the temporary location at the primary site to the standby site, as the following example shows:

% rcp /backup/* stbyhost:/fs2/oracle/stdby
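After transferring files with rcp, it is prudent to confirm that the copies are intact. A sketch using cksum (the paths below are local stand-ins; for a real transfer you would compare the checksum computed on the primary host with the checksum computed on stbyhost):

```shell
# Create an illustrative "datafile", copy it, and compare checksums.
echo "illustrative datafile contents" > /tmp/dbf_1.dbf
cp /tmp/dbf_1.dbf /tmp/dbf_1.copy.dbf

# Reading from stdin makes cksum print only the CRC and byte count,
# so the two outputs are directly comparable.
src=$(cksum < /tmp/dbf_1.dbf)
dst=$(cksum < /tmp/dbf_1.copy.dbf)
if [ "$src" = "$dst" ]; then
    echo "transfer verified"
else
    echo "checksum mismatch" >&2
    exit 1
fi
```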
Step 4 Configure Oracle Net.

This scenario assumes that the TCP/IP network protocol is used to connect to the primary and the physical standby databases. To achieve remote archiving of redo log files, Oracle Net must be configured correctly from the primary database to the standby database. To allow the physical standby database to fetch archive gaps from the primary database and to facilitate switchover operations, Oracle Net must also be configured correctly from the standby database to the primary database.

This step involves editing the following files:

  1. On the primary system, use Oracle Net Manager to add a service called standby1 that uses the TCP protocol, add the host name of your standby system, and add the service name parameter stdby1. Verify that the listener is configured to receive requests for the primary database. If it is not, add the primary database to the listener as a new database service.
  2. On the standby system, use Oracle Net Manager to add a service called primary1 with the TCP protocol that points to your primary database system and the service name parameter prmy. Also, add a listener called stdby1_listener and add the standby database stdby1 as a database service.

    See Also:

    Oracle9i Net Services Administrator's Guide for detailed directions on using the Oracle Net Manager

Step 5 Start the listener on the primary and standby sites.

Start the standby listener on the standby site. For example:

% lsnrctl start stdby1_listener

Normally, your default listener is started on your primary site. Restart the default listener on the primary database to pick up the new definitions. For example:

% lsnrctl stop
% lsnrctl start
Step 6 Configure the physical standby database initialization parameter file.

To configure the initialization parameter file for the standby database:

  1. Copy the primary database initialization parameter file from the primary site to the physical standby site. From the primary site, issue a command similar to the following:
    % rcp /oracle/dbs/PRMYinit.ora stbyhost:/fs2/oracle/stdby/STBYinit.ora
    
  2. Edit the standby initialization parameter file (STBYinit.ora). Edit the following parameters:

    Parameter              Value
    ---------------------  -----------------------------------
    CONTROL_FILES          stbycf.ctl
    STANDBY_ARCHIVE_DEST   /fs2/oracle/stdby/
    LOG_ARCHIVE_DEST_1     /fs2/oracle/stdby/
    LOG_ARCHIVE_FORMAT     stdby_%t_%s.arc
    DB_FILE_NAME_CONVERT   ('/oracle/dbs','/fs2/oracle/stdby')
    LOG_FILE_NAME_CONVERT  ('/oracle/dbs','/fs2/oracle/stdby')
    LOG_ARCHIVE_START      true
    FAL_SERVER             primary1
    FAL_CLIENT             standby1
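The parameter edits listed above can be applied mechanically. A sketch using sed on a miniature stand-in for the copied file (the stand-in below is not the full STBYinit.ora; only two of the edits are shown):

```shell
# Miniature stand-in for the copied parameter file.
cat > /tmp/STBYinit.ora <<'EOF'
db_name=primary1
control_files=(/oracle/dbs/cf1.ctl)
log_archive_format = log_%t_%s.arc
EOF

# Point CONTROL_FILES at the standby control file and adjust the
# archived log name format, as in the table above.
sed -e 's|^control_files=.*|control_files=/fs2/oracle/stdby/stbycf.ctl|' \
    -e 's|^log_archive_format.*|log_archive_format = stdby_%t_%s.arc|' \
    /tmp/STBYinit.ora > /tmp/STBYinit.edited.ora

grep '^control_files' /tmp/STBYinit.edited.ora
```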

The STBYinit.ora initialization parameter file looks as follows:

#
#parameter file STBYinit.ora
#

db_name=primary1                   # The same as PRMYinit.ora

control_files=/fs2/oracle/stdby/stbycf.ctl
audit_trail=false
o7_dictionary_accessibility=false
global_names=false
db_domain=us.oracle.com
commit_point_strength=1
processes=30
sessions=30
transactions=21
transactions_per_rollback_segment=21
db_block_buffers=100
shared_pool_size=4000000
ifile=/oracle/work/tkinit.ora # Verify that file exists on the standby site
                                   # and that the file specification is valid

# specific parameters for standby database
log_archive_format = stdby_%t_%s.arc
standby_archive_dest=/fs2/oracle/stdby/
log_archive_dest_1='LOCATION=/fs2/oracle/stdby/'
db_file_name_convert=('/oracle/dbs','/fs2/oracle/stdby')
log_file_name_convert=('/oracle/dbs','/fs2/oracle/stdby')
log_archive_start=true
log_archive_trace=127 
fal_server=primary1
fal_client=standby1
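The log_archive_trace setting is a bitmask: 127 is the sum of the trace levels 1, 2, 4, 8, 16, 32, and 64, so it enables all of them at once. The meaning of each level is documented separately; the decomposition itself is plain arithmetic, sketched here:

```shell
# Decompose a LOG_ARCHIVE_TRACE bitmask into its power-of-two levels.
trace=127
levels=""
bit=1
while [ "$bit" -le "$trace" ]; do
    if [ $((trace & bit)) -ne 0 ]; then
        levels="$levels $bit"
    fi
    bit=$((bit * 2))
done
echo "enabled trace levels:$levels"
```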
Step 7 Copy the physical standby database initialization parameter file.
  1. Make a copy of the STBYinit.ora file by issuing the following command:
    % cp STBYinit.ora Failover.ora
    

    Edit Failover.ora so that, if you fail over to the stdby1 standby database, you can use Failover.ora as the initialization parameter file for the new primary database. Make sure you use appropriate values for the LOG_ARCHIVE_DEST_n parameters.

  2. Edit the tnsnames.ora file on the standby site in case failover to the standby database occurs. See step 4 for information on how to configure the tnsnames.ora file.
Step 8 Start the physical standby database.

Start the physical standby database to start archiving.

  1. Set the ORACLE_SID environment variable to the SID value (stdby1) that you specified in the tnsnames.ora file on the primary site and in the listener.ora file on the standby site, as follows:
    % setenv ORACLE_SID stdby1
    
  2. Start SQL*Plus:
    SQL> CONNECT SYS/SYS_PASSWORD AS SYSDBA
    
  3. Start the standby database instance without mounting the database:
    SQL> STARTUP NOMOUNT PFILE=STBYinit.ora;
    
  4. Mount the standby database:
    SQL> ALTER DATABASE MOUNT STANDBY DATABASE;
    
Step 9 Configure the primary initialization parameter file.
  1. Specify the archive destination by adding the following entry to the PRMYinit.ora file:
    LOG_ARCHIVE_DEST_2 = 'SERVICE=standby1 MANDATORY REOPEN=60'
    
  2. Enable the archive destination state by adding the following entry to the PRMYinit.ora file:
    LOG_ARCHIVE_DEST_STATE_2 = ENABLE
    
  3. Issue the following statements to ensure that the initialization parameters you have set in this step take effect:
    SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=standby1 
      2> MANDATORY REOPEN=60';
    SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE;
    
Step 10 Place the standby database in managed recovery mode.

On the standby database, enable managed recovery by issuing the following SQL statement:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

10.1.3 Scenario 3: Accommodating Physical Changes in the Primary Database

This scenario describes the procedures you should follow when a physical change is made in the primary database. The following topics are covered:

10.1.3.1 Adding a Datafile to the Primary Database

To maintain consistency when you add a datafile to the primary database, you must also add a corresponding datafile to the standby database. Otherwise, changes in the online redo logs that relate to the new datafile in the primary database will not be applied to the standby database. The steps you perform depend on how the datafile was created on the primary database and how the initialization parameter STANDBY_FILE_MANAGEMENT is defined on the standby database.

If you create a new datafile on the primary database and the STANDBY_FILE_MANAGEMENT initialization parameter is set to auto on the standby database, then the new datafile is automatically created on the standby database.

If you create a new datafile on the primary database and the STANDBY_FILE_MANAGEMENT initialization parameter is undefined or set to manual on the standby database, then you must manually copy the new datafile to the standby database and re-create the standby control file.

If you copy an existing datafile from another database to the primary database, then you must also copy the new datafile to the standby database and re-create the standby control file.
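The decision above can be summarized as a tiny shell sketch (the value names are illustrative shorthand, not Oracle syntax):

```shell
# Whether a datafile added on the primary appears automatically on the
# standby depends on how it was created and on STANDBY_FILE_MANAGEMENT.
standby_action() {
    origin=$1            # new | copied-from-another-db
    management=$2        # auto | manual | unset
    if [ "$origin" = "new" ] && [ "$management" = "auto" ]; then
        echo "automatic: datafile is created on the standby"
    else
        echo "manual: copy the datafile and re-create the standby control file"
    fi
}

standby_action new auto
standby_action new manual
standby_action copied-from-another-db auto
```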

The following sections describe the steps to enable the automatic creation of datafiles on the standby database, and the manual steps necessary if standby file management is not automatic.

Enabling the Automatic Creation of Datafiles on the Standby Database

When you create a new datafile on the primary database, the datafile is automatically created on the standby database if the initialization parameter STANDBY_FILE_MANAGEMENT is set to auto in the standby database initialization file.

The following steps describe how to set up your standby database to automate the creation of datafiles.

  1. Add the following line to your standby database initialization file.
    STANDBY_FILE_MANAGEMENT=auto
    
  2. Shut down the standby database and restart it to pick up the changes in the initialization file and then restart managed recovery.
    SQL> SHUTDOWN;
    SQL> STARTUP NOMOUNT PFILE=STBYinit.ora
    SQL> ALTER DATABASE MOUNT STANDBY DATABASE;
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;
    

    Alternatively, you can dynamically set the STANDBY_FILE_MANAGEMENT initialization parameter with an ALTER SYSTEM statement to avoid shutting down and restarting the instance. For example:

    SQL> ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=AUTO;
    
  3. Add a new tablespace to the primary database.
    SQL> CREATE TABLESPACE new_ts DATAFILE 't_db2.dbf'
      2> SIZE 1m AUTOEXTEND ON MAXSIZE UNLIMITED;
    
  4. Archive the current redo log so it will get copied to the standby database.
    SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;
    
  5. Verify that the new datafile was added to the primary database.
    SQL> SELECT NAME FROM V$DATAFILE;
    NAME
    ----------------------------------------------------------------------
    /oracle/dbs/t_db1.dbf
    /oracle/dbs/t_db2.dbf
    
  6. Verify that the new datafile was added to the standby database.
    SQL> SELECT NAME FROM V$DATAFILE;
    NAME
    ----------------------------------------------------------------------
    /oracle/dbs/s2t_db1.dbf
    /oracle/dbs/s2t_db2.dbf
    
Manually Creating New Datafiles on the Standby Database

The following steps describe how to manually create datafiles on the standby database when standby file management is manual.

  1. Add a new tablespace to the primary database.
    SQL> CREATE TABLESPACE new_ts DATAFILE 't_db2.dbf'
      2> SIZE 1m AUTOEXTEND ON MAXSIZE UNLIMITED;
    
  2. Verify that the new datafile has been added to the primary database.
    SQL> SELECT NAME FROM V$DATAFILE;
    NAME
    
    ----------------------------------------------------------------------
    /oracle/dbs/t_db1.dbf
    /oracle/dbs/t_db2.dbf
    
  3. Set the new tablespace offline, copy the new datafile to the standby database with an operating system command, and then set the new tablespace back online.
    SQL> ALTER TABLESPACE new_ts OFFLINE;
    
    % cp t_db2.dbf s2t_db2.dbf
    
    SQL> ALTER TABLESPACE new_ts ONLINE;
    
    % rcp s2t_db2.dbf standby_location
    
    
  4. Archive the current redo log on the primary database so it will get copied to the standby database.
    SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;
    
  5. Verify that the datafile has been added on the standby database when the log file has been applied on the standby database.
    SQL> SELECT NAME FROM V$DATAFILE;
    NAME
    
    ----------------------------------------------------------------------
    /oracle/dbs/s2t_db1.dbf
    /oracle/dbs/s2t_db2.dbf
    

10.1.3.2 Renaming a Datafile in the Primary Database

When you rename one or more datafiles in the primary database, you also need to rename the corresponding datafiles in the standby database.


Note:

Unlike adding or deleting datafiles, the STANDBY_FILE_MANAGEMENT initialization parameter has no effect when renaming datafiles.


  1. To rename the datafile at the primary site, you must take the tablespace offline:
    SQL> ALTER TABLESPACE tbs_4 OFFLINE;
    
  2. Exit SQL and execute the following command to rename the datafile on the system:
    % mv tbs_4.dbf tbs_x.dbf
    
  3. Go back into SQL and execute the following statements to rename the datafile in the database and to bring the tablespace back online:
    SQL> ALTER TABLESPACE tbs_4 RENAME FILE 'tbs_4.dbf' TO 'tbs_x.dbf';
    SQL> ALTER TABLESPACE tbs_4 ONLINE;
    
  4. Rename the datafile at the standby site:
    % mv tbs_4.dbf tbs_x.dbf
    
  5. On the standby database, restart managed recovery by issuing the following statement:
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM
      2> SESSION;
    

If you do not rename the corresponding file at the standby site, and then attempt to refresh the standby database control file, the standby database will attempt to use the renamed datafile, but it will not find the renamed datafile. Consequently, you will see error messages similar to the following in the alert log:

ORA-00283: recovery session canceled due to errors
ORA-01157: cannot identify/lock data file 4 - see DBWR trace file
ORA-01110: data file 4: '/oracle/dbs/stdby/tbs_x.dbf'

10.1.3.3 Deleting a Datafile or Dropping a Tablespace in the Primary Database

When you delete one or more datafiles or drop one or more tablespaces in the primary database, you also need to delete the corresponding datafiles in the standby database.

  1. Drop the tablespace at the primary site:
    SQL> DROP TABLESPACE tbs_4;
    SQL> ALTER SYSTEM SWITCH LOGFILE;
    % rm tbs_4.dbf
    
  2. Delete the corresponding datafile on the standby site after the archived redo log has been applied to the standby database. For example:
    % rm tbs_4.dbf
    

10.1.3.4 Adding or Dropping Online Redo Logs

One method of tuning available to the database administrator (DBA) is changing the size and number of online redo logs. Consequently, when you add or drop an online redo log at the primary site, it is important that you refresh the standby database control file.

  1. Add or drop an online redo log as follows:
    SQL> ALTER DATABASE ADD LOGFILE 'prmy3.log' SIZE 100K;
    

    or

    SQL> ALTER DATABASE DROP LOGFILE 'prmy3.log';
    
  2. Repeat this same command on each standby database to add or drop the online redo log.

10.1.3.5 Physical Changes That Require You to Rebuild the Standby Database

Some physical changes you make to the primary database can invalidate the standby database. Once a standby database is invalidated, your only option is to rebuild it.

The following clauses of the ALTER DATABASE statement invalidate the standby database:

10.1.4 Scenario 4: Recovering After the NOLOGGING Clause Is Specified

In some SQL statements, the user has the option of specifying the NOLOGGING clause, which indicates that the database operation is not logged in the redo log file. Even though the user specifies the clause, a redo record is still written to the redo log. However, when the redo log file is copied to the standby site and applied to the standby database, the affected portion of the datafile is unusable and is marked as unrecoverable. When you either fail over to the standby database or open the physical standby database with the read-only option, and then attempt to read the range of blocks that are marked as UNRECOVERABLE, you see error messages similar to the following:

ORA-01578: ORACLE data block corrupted (file # 1, block # 2521)
ORA-01110: data file 1: '/oracle/dbs/stdby/tbs_1.dbf'
ORA-26040: Data block was loaded using the NOLOGGING option

Note:

To avoid this problem, Oracle Corporation recommends that you always specify the FORCE LOGGING clause in the CREATE DATABASE statement, or issue the ALTER DATABASE FORCE LOGGING statement on an existing primary database, whenever you maintain standby databases. See the Oracle9i Database Administrator's Guide.


To recover after the NOLOGGING clause is specified, you need to copy the datafile that contains the unjournaled data from the primary site to the physical standby site. Perform the following steps:

  1. Determine which datafiles should be copied.
    1. Issue the following query in the primary database:
      SQL> SELECT NAME, UNRECOVERABLE_CHANGE# FROM V$DATAFILE;
      NAME                                                  UNRECOVERABLE
      ----------------------------------------------------- -------------
      /oracle/dbs/tbs_1.dbf                                       5216
      /oracle/dbs/tbs_2.dbf                                          0
      /oracle/dbs/tbs_3.dbf                                          0
      /oracle/dbs/tbs_4.dbf                                          0
      4 rows selected.
      
    2. Issue the following query in the standby database:
      SQL> SELECT NAME, UNRECOVERABLE_CHANGE# FROM V$DATAFILE;
      NAME                                                  UNRECOVERABLE
      ----------------------------------------------------- -------------
      /oracle/dbs/stdby/tbs_1.dbf                                 5186
      /oracle/dbs/stdby/tbs_2.dbf                                    0
      /oracle/dbs/stdby/tbs_3.dbf                                    0
      /oracle/dbs/stdby/tbs_4.dbf                                    0
      4 rows selected.
      
    3. Compare the query results from the primary and the standby databases.

      Compare the value of the UNRECOVERABLE_CHANGE# column in both query results. If the value of the UNRECOVERABLE_CHANGE# column in the primary database is greater than the same column in the standby database, then the datafile needs to be copied from the primary site to the standby site.

      In this example, the value of the UNRECOVERABLE_CHANGE# in the primary database for the tbs_1.dbf datafile is greater, so you need to copy the tbs_1.dbf datafile to the standby site.
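The comparison in this step can be sketched as follows. This is an illustrative Python sketch, not part of the Data Guard procedure; the datafile paths and change numbers are taken from the example query output above:

```python
def datafiles_to_copy(primary, standby):
    """Return the primary datafiles whose UNRECOVERABLE_CHANGE# exceeds
    the value recorded for the corresponding standby datafile."""
    to_copy = []
    for name, primary_change in primary.items():
        # In this scenario the standby keeps its copies under
        # /oracle/dbs/stdby/ (see the query output above).
        standby_name = name.replace('/oracle/dbs/', '/oracle/dbs/stdby/')
        if primary_change > standby.get(standby_name, 0):
            to_copy.append(name)
    return to_copy

primary = {'/oracle/dbs/tbs_1.dbf': 5216, '/oracle/dbs/tbs_2.dbf': 0}
standby = {'/oracle/dbs/stdby/tbs_1.dbf': 5186, '/oracle/dbs/stdby/tbs_2.dbf': 0}
print(datafiles_to_copy(primary, standby))  # only tbs_1.dbf needs copying
```

Because 5216 is greater than 5186, only tbs_1.dbf is reported; the datafiles with an unchanged value of 0 are skipped.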

  2. On the primary site, back up the datafile that you need to copy to the standby site. In this example, the tbs_1.dbf datafile is datafile 1, which belongs to the SYSTEM tablespace:
    SQL> ALTER TABLESPACE system BEGIN BACKUP;
    SQL> EXIT;
    % cp tbs_1.dbf /backup
    % sqlplus
    SQL> CONNECT sys/password AS SYSDBA
    SQL> ALTER TABLESPACE system END BACKUP;
    
  3. Copy the datafile and the standby database control file from the primary site to the standby site as follows:
    % rcp /backup/* stbyhost:/fs2/oracle/stdby/
    
  4. On the standby database, restart managed recovery by issuing the following statement:
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM
      2> SESSION;
    

    You may get the following error messages (possibly in the alert log) when you try to restart managed recovery:

    ORA-00308: cannot open archived log 'standby1'
    ORA-27037: unable to obtain file status
    SVR4 Error: 2: No such file or directory
    Additional information: 3
    ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error 
    below
    ORA-01152: file 1 was not restored from a sufficiently old backup
    ORA-01110: data file 1: '/oracle/dbs/stdby/tbs_1.dbf'
    

    If you get the ORA-00308 error, cancel recovery by issuing the following statement from another terminal window:

    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
    

    These error messages are issued when one or more logs in the archive gap have not been successfully applied. If you receive these errors, manually resolve the gaps and repeat step 4.

    See Also:

    Section B.3 for information on resolving an archive gap

10.1.5 Scenario 5: Deciding Which Standby Database to Fail Over to in a Multiple Physical Standby Database Configuration

Every standby database is associated with one and only one primary database. A single primary database can, however, support multiple standby databases. This scenario identifies the kind of information you need in order to decide which of the multiple standby databases is the best target for the failover operation.

One of the important things to consider in a multiple standby database configuration is whether the archive destination is mandatory or optional. The following table lists an advantage and disadvantage for each destination:

MANDATORY

Advantage: All archived redo logs are archived to the mandatory archive destination. After the archived redo logs are applied at the standby site, the standby database is known to be up-to-date, and you can fail over to it as the new primary database with minimal loss of data.

Disadvantage: In some cases (such as a network failure), the archived redo logs cannot reach the mandatory archive destination, which causes archiving of the redo logs to stop. In the worst case, if all online redo logs are full and cannot be archived, the primary database instance stops working.

You can issue the following SQL query to determine whether the primary database stopped because it was not able to switch to an online redo log:

SELECT DECODE(COUNT(*),0,'NO','YES') "switch_possible"
FROM V$LOG
WHERE ARCHIVED='YES';

If the output from the query displays "YES," a log switch is possible; if the output displays "NO," a log switch is not possible.

See Section 10.1.7.

OPTIONAL

Advantage: The primary database continues to operate normally when archiving of the redo logs to the optional archive destination at the standby site is interrupted.

Disadvantage: An archive gap can cause data loss, because archived redo logs that must be applied to the standby database are unavailable. This causes the managed recovery operation to terminate before all primary data has been applied to the standby database.

Consider the following recommendations in a multiple standby database configuration:

Suppose the primary database is located in San Francisco and supports five standby databases as follows:

Standby  Location         Type       Description
-------  ---------------  ---------  --------------------------------------------
1        Local directory  Mandatory  Local copy of the archived redo logs.
2        San Francisco    Mandatory  Fail over to this standby database when
                                     there is physical damage at the primary
                                     site. This standby site is connected to the
                                     primary site by a local area network.
3        Boston           Optional   Fail over to this standby database when San
                                     Francisco is no longer available.
4        Los Angeles      Optional   This standby site receives archived redo
                                     logs, but does not apply them.
5        San Francisco    Optional   This standby site receives archived redo
                                     logs, and applies them after an 8-hour time
                                     lag. See Section 10.1.9 for a description
                                     of this type of configuration.

Assume that a disaster occurs in San Francisco, where the primary site is located, and that the primary site is damaged. You must fail over to one of the standby databases. You cannot assume that the database administrator (DBA) who set up the multiple standby database configuration is available to decide which standby database to fail over to. Therefore, it is imperative to have a disaster recovery plan at each standby site, as well as at the primary site. Each member of the disaster recovery team needs to know about the disaster recovery plan and be aware of the procedures to follow. This scenario identifies the kind of information that the person making the decision would need when deciding which standby database should be the target of the failover operation.

One method of conveying information to the disaster recovery team is to include a ReadMe file at each standby site. This ReadMe file is created and maintained by the DBA and should describe how to:

The following example shows the contents of a sample ReadMe file:

----------------Standby Database Disaster Recovery ReadMe File----------------

Warning:
********************************************************************************
Perform the steps in this procedure only if you are responsible for failing over 
to a standby database after the primary database fails.

If you perform the steps outlined in this file unnecessarily, you may corrupt 
the entire database system.
********************************************************************************

Multiple Standby Database Configuration:

No.     Location       Type     IP Address
--- --------------- --------- --------------
 1   San Francisco   Primary   128.1.124.25 
 2   San Francisco   Standby   128.1.124.157
 3   Boston          Standby   136.132.1.55
 4   Los Angeles     Standby   145.23.82.16
 5   San Francisco   Standby   128.1.135.24

You are in system No. 3, which is located in Boston.

Perform the following steps to fail over to the most up-to-date and available 
standby database:

1.  Log on to the local standby database as a DBA.

    a)  Log on with the following user name and password:

               username: Standby3
               password: zkc722Khn

    b)  Invoke SQL*Plus as follows:

        % sqlplus
     
    c)  Connect as the DBA as follows:

        CONNECT sys/s23LsdIc AS SYSDBA

2.  Connect to as many remote systems as possible. You can connect to a maximum
    of four systems. System 4 does not have a firewall, so you can connect to it
    directly. Systems 1, 2, and 5 share the same firewall host.  You need to go
    to the firewall host first and then connect to each system.  The IP address
    for the firewall host is 128.1.1.100.  Use the following user name and
    password:
               username: Disaster
               password: 82lhsIW32

3.  Log on to as many remote systems as possible with the following user names
    and passwords:

    Login information:

    No.     Location     IP Address    username   password
    --- --------------- ------------- ---------- ----------
    1   San Francisco   128.1.124.25   Oracle9i   sdd290Ec
    2   San Francisco   128.1.124.157  Standby2   ei23nJHb
    3                    (L o c a l)
    4   Los Angeles     145.23.82.16   Standby4   23HHoe2a
    5   San Francisco   128.1.135.24   Standby5   snc#$dnc

4.  Invoke SQL*Plus on each remote system you are able to log on to as follows:

    % sqlplus

5.  Connect to each remote database as follows:

    CONNECT sys/password AS SYSDBA

    The DBA passwords for each location are:

    No.     Location      Password
    --- --------------- -----------
    1   San Francisco     x2dwlsd91
    2   San Francisco     a239s1DAq
    3           (L o c a l)
    4   Los Angeles       owKL(@as23
    5   San Francisco     sad_KS13x

6.  If you are able to log on to System 1, invoke SQL*Plus and issue the
    following statements:

    SQL> SHUTDOWN IMMEDIATE;
    SQL> STARTUP PFILE=PRMYinit.ora;

    Note: If you are able to execute the STARTUP statement successfully, the
          primary database has not been damaged.  Do not continue with this
          procedure.

7.  Issue the following SQL statements on each standby database (including the
    one on this system) that you were able to connect to:

    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
    SQL> SHUTDOWN IMMEDIATE;
    SQL> SELECT THREAD#, MAX(SEQUENCE#) FROM V$LOG_HISTORY GROUP BY THREAD#;

    Compare the query results of each standby database. Fail over to the
    standby database with the largest sequence number.

8.  Fail over to the standby database with the largest sequence number.

    On the standby database with the largest sequence number, invoke SQL*Plus
    and issue the following SQL statements:

    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH;
    SQL> SHUTDOWN IMMEDIATE;
    SQL> STARTUP PFILE=Failover.ora;

------------End of Standby Database Disaster Recovery ReadMe File-------------
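The selection rule in step 7 of the ReadMe file can be sketched as follows; this is an illustrative Python sketch, and the standby names and sequence numbers below are hypothetical, not live query results:

```python
# After running SELECT THREAD#, MAX(SEQUENCE#) FROM V$LOG_HISTORY
# GROUP BY THREAD# on each reachable standby, fail over to the
# standby database that has applied the largest log sequence number.

def choose_failover_target(max_sequence_by_standby):
    """Return the standby with the highest MAX(SEQUENCE#) result."""
    return max(max_sequence_by_standby, key=max_sequence_by_standby.get)

results = {'Standby2': 1520, 'Standby4': 1497, 'Standby5': 1518}
print(choose_failover_target(results))  # Standby2 is most up-to-date
```

The standby with the largest applied sequence number has lost the least redo, which is why step 8 directs the failover there.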

10.1.6 Scenario 6: Switching Over a Primary Database to a Standby Database

Reversing the roles of a primary and standby database is referred to as a switchover operation. The following steps outline what commands must be issued to perform a switchover operation.

For this discussion, boston is initially the primary database and london is initially the standby database. During the following switchover scenario, london becomes the primary database and boston becomes the standby database.

This scenario assumes that the primary and standby databases have been previously created and initialized.

See Also:

Section 7.2.2, Section 10.1.1, and Section 10.1.2

Step 1 End read or update activity on the primary and standby databases.

Exclusive database access is required by the DBA before beginning a switchover operation. Ask users to log off the primary and standby databases, or query the V$SESSION view to identify users that are connected to the databases, and close all open sessions except the SQL*Plus session from which you will issue the switchover statement.

See Also:

Oracle9i Database Administrator's Guide for more information on managing users

Step 2 Switch the primary database over to the physical standby role.

On the primary database, boston, execute the following statement:

SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION 
SHUTDOWN;

This statement does the following:

  • Closes the primary database, terminating any active sessions
  • Archives any unarchived log files and applies them to the standby database
  • Adds a switchover marker to the header of the last log file being archived
  • Creates a backup of the current control file
  • Converts the current control file into a standby control file

Step 3 Shut down and start up the former primary instance without mounting the database.

Execute the following statement on boston:

SQL> SHUTDOWN NORMAL;
SQL> STARTUP NOMOUNT;
Step 4 Mount the former primary database in the physical standby database role.

Execute the following statement on boston:

SQL> ALTER DATABASE MOUNT STANDBY DATABASE;
Step 5 Switch the former physical standby database over to the primary database role.

Execute the following statement on london after the final logs have been received and applied:

SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

This statement converts the physical standby database to the primary database role and converts its standby control file into a current control file.

Step 6 Shut down the database.

Execute the following statement on london:

SQL> SHUTDOWN;
Step 7 Start up the database in the primary role.

Execute the following statement on london:

SQL> STARTUP;         
Step 8 Put the physical standby database in managed recovery mode.

Execute the following ALTER DATABASE statement on the standby database, boston, to place it in managed recovery mode. The following statement includes the DISCONNECT FROM SESSION clause, which starts a detached server process and immediately returns control to the user. (Note that this does not disconnect the current SQL session.)

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;


Step 9 Start archiving logs from the primary database to the physical standby database.

Execute the following statements on the new primary database, london:

SQL> ALTER SYSTEM ARCHIVE LOG START;
SQL> ALTER SYSTEM SWITCH LOGFILE;

10.1.7 Scenario 7: Recovering After a Network Failure

When a standby database is in managed recovery mode, the standby database automatically applies archived redo logs as it receives them from the primary database. When the network goes down, automatic archival from the primary database to the standby database stops.

If the standby database is specified as an optional archive destination, then the primary database continues to operate normally.

When the network is up and running again, automatic archival of the archived redo logs from the primary database to the standby database resumes. However, if the standby database is specified as an optional archive destination, and a log switch occurred at the primary site, the standby database has an archive gap for the time when the network was down. The archive gap is automatically detected and resolved.

See Also:

Section 6.5 for information on archive gaps

If the standby database is specified as a mandatory archive destination, then the primary database will not overwrite any redo logs until the network failure is resolved and the primary database is able to archive to the standby site.

The primary database may eventually stall if the network problem is not fixed in a timely manner, because the primary database will not be able to switch to an online redo log that has not been archived. You can issue the following SQL query to determine whether the primary database stalled because it was not able to switch to an online redo log:

SELECT decode(COUNT(*),0,'NO','YES') "switch_possible"
FROM V$LOG
WHERE ARCHIVED='YES';

If the output from the query displays "YES," a log switch is possible; if the output displays "NO," a log switch is not possible.
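The DECODE expression in this query maps a zero count of archived online log groups to 'NO' and any nonzero count to 'YES'. As an illustrative sketch of the same test (the V$LOG rows below are hypothetical, not live output):

```python
# Count the online log groups whose ARCHIVED flag is 'YES'; a zero
# count means no group can be reused, so no log switch is possible.

def switch_possible(log_rows):
    archived = sum(1 for row in log_rows if row['ARCHIVED'] == 'YES')
    return 'YES' if archived > 0 else 'NO'

logs = [{'ARCHIVED': 'YES'}, {'ARCHIVED': 'NO'}, {'ARCHIVED': 'NO'}]
print(switch_possible(logs))  # YES: at least one group is archived
```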

It is important to specify a local directory at the primary site as a mandatory archive destination, so that all of the archived redo logs reside on the same system as the primary database. When the primary system is unreachable, and the primary database is part of a multiple standby database configuration, you can try to identify the archived redo logs at the other standby sites.

This scenario describes how to recover after a network failure.

Step 1 Identify the network failure.

The V$ARCHIVE_DEST view contains the network error and identifies which standby database cannot be reached. On the primary database, issue the following SQL statement for the archived log destination that experienced the network failure. For example:

SQL> SELECT DEST_ID, STATUS, ERROR FROM V$ARCHIVE_DEST WHERE DEST_ID = 2;

DEST_ID    STATUS
---------- ---------
ERROR
-----------------------------------------------------------------
        2  ERROR
ORA-12224: TNS:no listener

The query results show that there are errors archiving to the standby database and that the cause of the error is TNS:no listener. Check whether the listener on the standby site is started; if it is stopped, start it.

Step 2 Prevent the primary database from stalling.

If you cannot solve the network problem quickly, and if the physical standby database is specified as a mandatory destination, try to prevent the database from stalling by doing one of the following:

Step 3 Archive the current redo log.

On the primary database, archive the current redo log:

SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;

When the network is back up again, log apply services can detect and resolve the archive gaps automatically if you place the standby database in managed recovery mode.

10.1.8 Scenario 8: Re-creating a Physical Standby Database

This scenario describes the case where you have failed over to a physical standby database and have been using it as a normal primary database. After a period of time, you decide to switch back to the original primary system by re-creating a physical standby database there and then performing a switchover.

Step 1 Create a physical standby database at the original primary site.
  1. Copy the original standby initialization parameter file from the original standby site to the original primary site as follows:
    % rcp STBYinit.ora PRMYHOST:fallback.ora
    

    The fallback.ora file will become the standby initialization parameter file for the standby database at the original primary site.

  2. On the original primary site, configure the fallback.ora file. You need to modify the following parameters:

    Parameter              Value
    ---------------------  -----------------------------------
    CONTROL_FILES          /fs2/oracle/stdby/cf1.ctl
    LOCK_NAME_SPACE        fallback
    LOG_ARCHIVE_FORMAT     r_%t_%s.arc
    STANDBY_ARCHIVE_DEST   /oracle/dbs/
    LOG_ARCHIVE_DEST_1     'LOCATION=/oracle/dbs/'
    DB_FILE_NAME_CONVERT   ('/fs2/oracle/stdby','/oracle/dbs')
    LOG_FILE_NAME_CONVERT  ('/fs2/oracle/stdby','/oracle/dbs')

    The fallback.ora file looks as follows:

    #
    #parameter file fallback.ora
    #
    
    db_name=primary1               #The same as PRMYinit.ora
    
    control_files=/fs2/oracle/stdby/cf1.ctl
    lock_name_space=fallback
    
    audit_trail=false
    o7_dictionary_accessibility=false
    global_names=false
    db_domain=us.oracle.com
    commit_point_strength=1
    
    processes=30
    sessions=30
    transactions=21
    transactions_per_rollback_segment=21
    db_block_buffers=100
    shared_pool_size=4000000
    ifile=/oracle/work/tkinit.ora
    
    # specific parameters for standby database
    log_archive_format = r_%t_%s.arc
    standby_archive_dest=/oracle/dbs/
    log_archive_dest_1='LOCATION=/oracle/dbs/'
    log_archive_dest_state_1 = ENABLE
    db_file_name_convert=('/fs2/oracle/stdby','/oracle/dbs')
    log_file_name_convert=('/fs2/oracle/stdby','/oracle/dbs')
    log_archive_start=true
    log_archive_trace=127
    
  3. On the original primary site, create or modify the primary initialization parameter file.

    You need to supply appropriate values for the LOG_ARCHIVE_DEST_1 and LOG_ARCHIVE_DEST_STATE_1 initialization parameters.


    Note:

    If you made a copy of the standby database initialization parameter file, then you can modify the copy in this step.


  4. On the original standby site, back up the primary database datafiles.
  5. On the original standby site, create the standby database control file.
  6. Copy the primary database datafiles and the standby database control file from the original standby site to the original primary site.
  7. Archive the current online redo log and shut down the primary database to prevent further modifications as follows:
    SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;
    SQL> SHUTDOWN IMMEDIATE;
    
  8. Copy the archived redo logs that were generated after the primary database datafiles were backed up from the original standby site to the original primary site, as follows:
    % rcp /fs2/oracle/stdby/stdby_1_102.arc prmyhost:/oracle/dbs/r_1_102.arc
    % rcp /fs2/oracle/stdby/stdby_1_103.arc prmyhost:/oracle/dbs/r_1_103.arc
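The file name mapping used in this step (stdby_%t_%s.arc at the standby site to the r_%t_%s.arc format configured in fallback.ora) can be sketched as follows; the helper below is an illustrative Python sketch, not an Oracle utility:

```python
import re

def copy_command(standby_file):
    # Map stdby_<thread>_<sequence>.arc to r_<thread>_<sequence>.arc,
    # matching the LOG_ARCHIVE_FORMAT shown in fallback.ora above.
    name = standby_file.rsplit('/', 1)[-1]
    name = re.sub(r'^stdby_(\d+)_(\d+)\.arc$', r'r_\1_\2.arc', name)
    return 'rcp %s prmyhost:/oracle/dbs/%s' % (standby_file, name)

print(copy_command('/fs2/oracle/stdby/stdby_1_102.arc'))
# rcp /fs2/oracle/stdby/stdby_1_102.arc prmyhost:/oracle/dbs/r_1_102.arc
```

Keeping the thread and sequence numbers intact is what lets managed recovery at the new standby site locate each archived log under the new name.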
    
Step 2 Switch over to the standby database at the original primary site.
  1. On the primary database, execute the following statement:
    SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO STANDBY WITH SESSION SHUTDOWN;
    
    

    This statement does the following:

    • Closes the primary database, terminating any active sessions
    • Archives any unarchived log files and applies them to the standby database
    • Adds a switchover marker to the header of the last log file being archived
    • Creates a backup of the current control file
    • Converts the current control file into a standby control file
  2. Shut down and start up the former primary instance without mounting the database.

    Execute the following statements on the former primary instance:

    SQL> SHUTDOWN NORMAL;
    SQL> STARTUP NOMOUNT;
    
    
  3. Mount the former primary database in the physical standby database role.

    Execute the following statement on the primary database:

    SQL> ALTER DATABASE MOUNT STANDBY DATABASE;
    
    
  4. On the original primary site, start and mount the physical standby database:
    SQL> CONNECT SYS/SYS_PASSWORD AS SYSDBA
    SQL> STARTUP NOMOUNT PFILE=fallback.ora;
    SQL> ALTER DATABASE MOUNT STANDBY DATABASE;
    
  5. Switch over to the physical standby database:
    SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
    SQL> SHUTDOWN IMMEDIATE;
    
  6. Start the primary database at the original primary site:
    SQL> STARTUP PFILE=PRMYinit.ora;
    
  7. Configure the network settings to enable client applications to access the primary database.

10.1.9 Scenario 9: Standby Database with a Time Lag

By default, in managed recovery mode, the standby database automatically applies redo logs when they arrive from the primary database. But in some cases, you may not want the logs to be applied immediately, because you want to create a time lag between the archiving of a redo log at the primary site and the application of the log at the standby site. A time lag can protect against the transfer of corrupted or erroneous data from the primary site to the standby site.

For example, suppose you run a batch job every night on the primary database. Unfortunately, you accidentally ran the batch job twice, and you did not realize the mistake until the second run completed. Ideally, you need to roll back the database to the point in time before the batch job began. A primary database that has a standby database with a time lag (for example, 8 hours) can help you recover. You can fail over to the time-lagged standby database and use it as the new primary database.

To create a standby database with a time lag, use the DELAY attribute of the LOG_ARCHIVE_DEST_n initialization parameter in the primary database initialization parameter file. The archived redo log files are still automatically copied from the primary site to the standby site, but the log files are not immediately applied to the standby database. The log files are applied when the specified time interval has expired.

This scenario uses a 4-hour time lag and covers the following topics:

This scenario assumes that you are familiar with the procedures for creating a typical standby database; those details are omitted from the steps that follow.

See Also:

Section 10.1.2 for details about standby database setup

10.1.9.1 Creating a Standby Database with a Time Lag

To create a standby database with a time lag, modify the LOG_ARCHIVE_DEST_n initialization parameter on the primary database to add a delay to the standby destination. The destination must still name the standby database with its SERVICE attribute. For example, if stdby1 is the net service name of the standby database, the following statement specifies a 4-hour delay:

ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=stdby1 DELAY=240';

The DELAY attribute indicates that the archived redo logs at the standby site are not available for recovery until the 4-hour time interval has expired. The time interval (expressed in minutes) starts when the archived redo logs are successfully transmitted to the standby site. The redo information is still sent to the standby site and written to disk as usual.
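Because DELAY is a count of minutes, a lag you plan in hours must be converted before setting the parameter. A small illustrative sketch (the helper name is hypothetical, and stdby1 is a hypothetical net service name for the standby database):

```python
# Convert a desired time lag in hours into the minute value that the
# DELAY attribute of LOG_ARCHIVE_DEST_n expects.

def delay_attribute(hours):
    return 'DELAY=%d' % int(hours * 60)

# The 4-hour lag used in this scenario:
print("LOG_ARCHIVE_DEST_2='SERVICE=stdby1 %s'" % delay_attribute(4))
```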

10.1.9.2 Managing a Standby Database with a Time Lag

The runtime scenario of a standby database with a time lag is slightly different from that of a standby database with no time lag, because the application of the archived redo logs lags behind.

As the DBA, you need to keep the following tasks in mind when managing a standby database with a time lag:

10.1.9.3 Rolling Back the Database to a Specified Time

In the case described at the beginning of this scenario, you may want to take advantage of the time lag and obtain a primary database whose state is that of a specified time (for example, 4 hours) before the current primary database. You can fail over to the time-lagged standby database as follows:

  1. Fail over the standby database by issuing the following statement:
    SQL> ALTER DATABASE ACTIVATE STANDBY DATABASE;
    
  2. Shut down the standby database instance:
    SQL> SHUTDOWN IMMEDIATE;
    
  3. Start the new primary instance:
    SQL> STARTUP PFILE=Failover.ora;
    

10.1.9.4 Bypassing the Time Lag and Switching Over to the Standby Database

If you do not want to take advantage of the time lag, you can switch over to the standby database as a normal standby database with no time lag as follows:

  1. Switch over the primary database first. Then restart the original primary database as a physical standby database.
  2. Apply all of the archived redo logs on this standby database:
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE NODELAY
      2> DISCONNECT FROM SESSION THROUGH LAST SWITCHOVER;
    
  3. Switch over to the physical standby database by issuing the following statement:
    SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
    
  4. Shut down the standby database instance:
    SQL> SHUTDOWN;
    
  5. Start the new primary instance:
    SQL> STARTUP PFILE=Failover.ora;
    

10.1.10 Scenario 10: Using a Standby Database to Back Up the Primary Database

This scenario describes using a physical standby database to back up the primary database, which off-loads the backup task from the primary site. The backup at the standby site can be performed while the standby database is in managed recovery mode. If the primary database needs to be restored and you choose not to fail over to the standby database, you can use the backup created at the standby site.

Step 1 Back up the standby database.

The standby database can be used to back up the datafiles and the archived redo logs. The primary control file must be backed up at the primary site because the standby database uses a different standby control file. Because the control file is usually much smaller than the datafiles, backing up the control file at the primary site does not significantly affect the performance of the primary database.

To back up a database at the standby site:

  1. At the standby site, start the Recovery Manager utility (RMAN) with the NOCATALOG option to back up the standby datafiles and log files. Assume that stby is the connect string of the standby database. For example:
    % rman target sys/change_on_install@stby nocatalog
    connected to target database: ADE3 (DBID=1417165739)
    using target database controlfile instead of recovery catalog
    
  2. Back up the standby datafiles. You can perform the backup while the standby database is still in managed recovery mode.
    RMAN> run {
    2> allocate channel c1 type disk;
    3> backup database;
    4> }
    
    allocated channel: c1
    channel c1: sid=13 devtype=DISK
    
    Starting backup at 07-NOV-00
    channel c1: starting full datafile backupset
    channel c1: specifying datafile(s) in backupset
    input datafile fno=00001 name=/oracle/dbs/s3t_db1.dbf
    channel c1: starting piece 1 at 07-NOV-00
    channel c1: finished piece 1 at 07-NOV-00
    piece handle=/oracle/dbs/04c9t2ki_1_1 comment=NONE
    channel c1: backup set complete, elapsed time: 00:00:35
    Finished backup at 07-NOV-00
    
    Starting Control File Autobackup at 07-NOV-00
    warning - controlfile is not current, controlfile autobackup skipped
    Finished Control File Autobackup at 07-NOV-00
    released channel: c1
    
  3. Back up the archived redo logs.

    After you back up a database, Oracle Corporation recommends that you record the current sequence number and thread number applied at the standby site. This is important because the archived logs, after this backup, may be needed to restore the database at the primary site if the primary database loses its local archived logs.

    For example, suppose there is only one thread and the sequence number at the time of the database backup is 15. After several new logs are archived to the standby site, you can back up these archived logs using the following commands:

    RMAN> run {
    2> allocate channel c1 type disk;
    3> backup archivelog from logseq 15;
    4> }
    
    allocated channel: c1
    channel c1: sid=9 devtype=DISK
    
    Starting backup at 07-NOV-00
    channel c1: starting archive log backupset
    channel c1: specifying archive log(s) in backup set
    input archive log thread=1 sequence=15 recid=15 stamp=413043150
    input archive log thread=1 sequence=16 recid=16 stamp=413043168
    input archive log thread=1 sequence=17 recid=17 stamp=413043433
    input archive log thread=1 sequence=18 recid=18 stamp=413043442
    input archive log thread=1 sequence=19 recid=19 stamp=413043450
    input archive log thread=1 sequence=20 recid=20 stamp=413043454
    channel c1: starting piece 1 at 07-NOV-00
    channel c1: finished piece 1 at 07-NOV-00
    piece handle=/oracle/dbs/05c9t3c9_1_1 comment=NONE
    channel c1: backup set complete, elapsed time: 00:00:03
    Finished backup at 07-NOV-00
    
    Starting Control File Autobackup at 07-NOV-00
    warning - controlfile is not current, controlfile autobackup skipped
    Finished Control File Autobackup at 07-NOV-00
    released channel: c1
    
  4. View the backup set using the list command:
    RMAN> list backup;
    
    List of Backup Sets
    ===================
    
    BS Key  Type LV Size       Device Type Elapsed Time Completion Time
    ------- ---- -- ---------- ----------- ------------ ---------------
    4       Full    104M       DISK        00:00:29     07-NOV-00
            BP Key: 4   Status: AVAILABLE   Tag:
            Piece Name: /oracle/dbs/04c9t2ki_1_1
      List of Datafiles in backup set 4
      File LV Type Ckp SCN    Ckp Time  Name
      ---- -- ---- ---------- --------- ----
      1       Full 73158      07-NOV-00 /oracle/dbs/s3t_db1.dbf
    
    BS Key  Device Type Elapsed Time Completion Time
    ------- ----------- ------------ ---------------
    5       DISK        00:00:02     07-NOV-00
            BP Key: 5   Status: AVAILABLE   Tag:
            Piece Name: /oracle/dbs/05c9t3c9_1_1
    
      List of Archived Logs in backup set 5
      Thrd Seq     Low SCN    Low Time  Next SCN   Next Time
      ---- ------- ---------- --------- ---------- ---------
      1    15      73139      07-NOV-00 73156      07-NOV-00
      1    16      73156      07-NOV-00 73158      07-NOV-00
      1    17      73158      07-NOV-00 73163      07-NOV-00
      1    18      73163      07-NOV-00 73165      07-NOV-00
      1    19      73165      07-NOV-00 73166      07-NOV-00
      1    20      73166      07-NOV-00 73167      07-NOV-00
    
  5. You can also use the RMAN BACKUP TABLESPACE and BACKUP DATAFILE commands to back up individual tablespaces and datafiles.
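    For example, assuming a tablespace named USERS exists, per-object backups might look like the following sketch:

```sql
RMAN> run {
2> allocate channel c1 type disk;
3> backup tablespace users;   # back up a single tablespace
4> backup datafile 1;         # back up a single datafile by file number
5> }
```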
Step 2 Restore the backup at the primary site.

To restore the backup at the primary site, take the following steps:

  1. Set up a recovery catalog database on the primary site.

    See Also:

    Oracle9i Recovery Manager User's Guide for instructions on setting up a catalog database

  2. Ensure that the primary database is in the mounted state.
  3. Move the standby control file and the backup pieces to the primary site.


    Note:

    If the backup is on disk, the backup pieces must reside under the same directory at the primary site as they do at the standby site. This is a current restriction of RMAN. This is not an issue if the backup is on tape, which is the preferred method.
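    If a media management layer is installed, the same backups can be written to tape by allocating a tape channel instead of a disk channel, which also avoids the directory restriction. A sketch:

```sql
RMAN> run {
2> allocate channel t1 type 'sbt_tape';
3> backup database;
4> }
```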


  4. Resynchronize the catalog database with the standby control file.

    Assume that prmy is the connect string of the primary database, and rcat is the connect string of the recovery catalog database.

    % rman target sys/change_on_install@prmy catalog rman/rman@rcat
    
    connected to target database: ADE3 (DBID=1417165739)
    connected to recovery catalog database
    
    RMAN> resync catalog from controlfilecopy 'scf3.ctl';
    
  5. List the backup set obtained by the catalog database after the resynchronization:
    RMAN> list backup;
    
    starting full resync of recovery catalog
    full resync complete
    
    List of Backup Sets
    ===================
    
    BS Key  Type LV Size       Device Type Elapsed Time Completion Time
    ------- ---- -- ---------- ----------- ------------ ---------------
    182     Full    104M       DISK        00:00:29     07-NOV-00
            BP Key: 184   Status: AVAILABLE   Tag:
            Piece Name: /oracle/dbs/04c9t2ki_1_1
      List of Datafiles in backup set 182
      File LV Type Ckp SCN    Ckp Time  Name
      ---- -- ---- ---------- --------- ----
      1       Full 73158      07-NOV-00 /oracle/dbs/t_db1.dbf
    
    BS Key  Device Type Elapsed Time Completion Time
    ------- ----------- ------------ ---------------
    183     DISK        00:00:02     07-NOV-00
            BP Key: 185   Status: AVAILABLE   Tag:
            Piece Name: /oracle/dbs/05c9t3c9_1_1
    
      List of Archived Logs in backup set 183
      Thrd Seq     Low SCN    Low Time  Next SCN   Next Time
      ---- ------- ---------- --------- ---------- ---------
      1    15      73139      07-NOV-00 73156      07-NOV-00
      1    16      73156      07-NOV-00 73158      07-NOV-00
      1    17      73158      07-NOV-00 73163      07-NOV-00
      1    18      73163      07-NOV-00 73165      07-NOV-00
      1    19      73165      07-NOV-00 73166      07-NOV-00
      1    20      73166      07-NOV-00 73167      07-NOV-00
    
  6. Restore the primary database:
    RMAN> run {
    2> allocate channel c1 type disk;
    3> restore database;
    4> }
    
    allocated channel: c1
    channel c1: sid=11 devtype=DISK
    
    Starting restore at 07-NOV-00
    
    channel c1: starting datafile backupset restore
    channel c1: specifying datafile(s) to restore from backup set
    restoring datafile 00001 to /oracle/dbs/t_db1.dbf
    channel c1: restored backup piece 1
    piece handle=/oracle/dbs/04c9t2ki_1_1 tag=null
    params=NULL
    channel c1: restore complete
    Finished restore at 07-NOV-00
    released channel: c1
    
  7. Restore the archived logs.

    After restoring the database, you can try to open the database:

    SQL> ALTER DATABASE OPEN;
    alter database open
    *
    ERROR at line 1:
    ORA-01113: file 1 needs media recovery
    ORA-01110: data file 1: '/oracle/dbs/t_db1.dbf'
    

    The ORA-01113 error indicates that media recovery is required on the database before you can open it. Issue the manual recovery statement, RECOVER DATABASE, as follows:

    SQL> RECOVER DATABASE;
    ORA-00279: change 73158 generated at 11/07/2000 14:12:47 needed for
    thread 1
    ORA-00289: suggestion : /oracle/dbs/db11_17.dbf
    ORA-00280: change 73158 for thread 1 is in sequence #17
    
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    

    The resulting error messages indicate that the database needs log sequence #17 for recovery. You can provide logs in several ways:

    • If the primary database did not lose its local archived logs, then you can use these log files.
    • You can move the standby archived logs back to the primary database. In this case, some renaming may be necessary.
    • If you already backed up the archived logs at the standby site, you can move the backup piece to the primary site and restore the archived logs, as shown in the following example:
      RMAN> run {
      2> allocate channel c1 type disk;
      3> restore archivelog from logseq 15;
      4> }
      
      allocated channel: c1
      channel c1: sid=11 devtype=DISK
      
      Starting restore at 07-NOV-00
      
      channel c1: starting archive log restore to default destination
      channel c1: restoring archive log
      archive log thread=1 sequence=15
      channel c1: restoring archive log
      archive log thread=1 sequence=16
      channel c1: restoring archive log
      archive log thread=1 sequence=17
      channel c1: restoring archive log
      archive log thread=1 sequence=18
      channel c1: restoring archive log
      archive log thread=1 sequence=19
      channel c1: restoring archive log
      archive log thread=1 sequence=20
      channel c1: restored backup piece 1
      piece handle=/oracle/dbs/05c9t3c9_1_1 tag=null
      params=NULL
      channel c1: restore complete
      Finished restore at 07-NOV-00
      released channel: c1
      

After restoring the archived logs, you can recover the database and then open it.
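The final recovery and open might look like the following sketch; at the "Specify log" prompt, entering AUTO applies the restored archived logs automatically:

```sql
SQL> RECOVER DATABASE;
SQL> ALTER DATABASE OPEN;
```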

10.2 Logical Standby Database Scenarios

This section presents several scenarios describing the configuration and maintenance of a logical standby database. The scenarios discussed in this section include the following:

The scenarios in this section assume that a primary and logical standby database have been configured according to the guidelines in Chapter 4 unless otherwise noted. Also, assume that the logical standby databases described in the scenarios have been configured to support the maximum protection mode.

Table 10-2 lists the identifier values used in the scenario examples.

Table 10-2  Identifiers for Logical Standby Database Scenarios

Identifier                         Primary Database  Logical Standby Database
---------------------------------- ----------------- ------------------------
Location                           San Francisco     Seattle
Database name                      HQ                SAT
Instance name                      HQ                SAT
Initialization parameter file      hq_init.ora       sat_init.ora
Control file                       hq_cf1.f          sat_cf1.f
Datafile                           hq_db1.f          sat_db1.f
Online redo log file 1             hq_log1.f         sat_log1.f
Online redo log file 2             hq_log2.f         sat_log2.f
Database link (client-defined)     hq_link           sat_link
Net service name (client-defined)  hq_net            sat_net
Listener                           hq_listener       sat_listener

10.2.1 Scenario 1: Skipping a Transaction

As you plan your new logical standby database configuration, you must take into consideration the database objects in the primary database that cannot be supported by the logical standby database. For example, a table with a column of the LONG datatype is unsupported in a logical standby database.

See Also:

Section 4.1 for a list of unsupported datatypes, tables, and other information about database objects

Consider carefully how you will operate a logical standby database when the primary database contains unsupported objects, because ignoring unsupported objects can stop log apply services on the standby database. In addition to identifying unsupported datatypes, you should identify transactions that depend on other transactions that reference unsupported database objects, and define actions to correct these dependencies. Assuming that the dependencies have been identified, you might handle these transactions by defining filters that ignore (skip) all activity to the dependent objects, or by defining filters that ignore all activity in the entire schema.
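One way to identify tables that a logical standby database cannot maintain is to query the DBA_LOGSTDBY_UNSUPPORTED view on the primary database. For example:

```sql
SQL> SELECT DISTINCT OWNER, TABLE_NAME
  2  FROM DBA_LOGSTDBY_UNSUPPORTED
  3  ORDER BY OWNER, TABLE_NAME;
```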

This section provides the following scenarios that define filters to accomplish these tasks:

These scenarios make the following assumptions:

10.2.1.1 Skipping Activity on All Objects in a Schema

The logical standby database in this scenario uses the GET_AVDATA procedure to off-load the media streaming workload from the HQ database. The DBA needs to filter redo data coming from the primary database because the AUDIO table in the MOVIES schema contains a LONG RAW column (an unsupported datatype). Thus, the following scenario describes how to skip activity in the MOVIES schema. Note that any DML transaction involving the AUDIO table is automatically skipped.

To work around this datatype problem, perform the following steps on the logical standby database:

  1. Stop log apply services:
    SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
    
    
  2. Define a filter that skips DML transactions involving all objects in the MOVIES schema:
    SQL> EXECUTE DBMS_LOGSTDBY.SKIP('DML', 'MOVIES', '%', null);
    
    
  3. Define a filter that skips all DDL transactions that create or modify objects in the MOVIES schema:
    SQL> EXECUTE DBMS_LOGSTDBY.SKIP('SCHEMA_DDL', 'MOVIES', '%', null);
    
    
  4. Resume log apply services to start using these new settings:
    SQL> ALTER DATABASE START LOGICAL STANDBY APPLY;
    

10.2.1.2 Skipping Activity on Specific Objects in a Schema

The logical standby database in this scenario off-loads the primary database by handling queries consisting solely of movie information. Because there is no reason to maintain video data without the audio, this scenario skips all activity involving the VIDEO table. To do this, perform the following steps on the logical standby database:

  1. Stop log apply services.
    SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
    
    
  2. Use the SKIP procedure to define a filter that skips DML transactions involving the VIDEO table in the MOVIES schema:
    SQL> EXECUTE DBMS_LOGSTDBY.SKIP('DML', 'MOVIES', 'VIDEO', null);
    
    
  3. Define a filter that skips all DDL that modifies the VIDEO table in the MOVIES schema.
    SQL> EXECUTE DBMS_LOGSTDBY.SKIP('SCHEMA_DDL', 'MOVIES', 'VIDEO', null);
    
    
  4. Define a filter that skips all DDL transactions that modify the AUDIO table in the MOVIES schema.
    SQL> EXECUTE DBMS_LOGSTDBY.SKIP('SCHEMA_DDL', 'MOVIES', 'AUDIO', null);
    
    
  5. Start log apply services to begin using these settings.
    SQL> ALTER DATABASE START LOGICAL STANDBY APPLY;
    
    

This scenario does not define a DML filter against the AUDIO table, because DML on that table is automatically ignored by the logical standby database. However, you must define a filter for the DDL transactions, because schema DDL transactions are permitted by default on unsupported objects. This could cause a problem for the logical standby database if, for example, the AUDIO table is modified on the primary database so that a new column CONVERTED_DATA of type BLOB is added, the contents of the DATA column are migrated to the CONVERTED_DATA column, and the DATA column is dropped. This would qualify the AUDIO table for support on the logical standby database.

However, during the conversion process, the AUDIO table was unsupported on the standby database, so the CONVERTED_DATA column there is empty. Subsequent DML transactions on the now-supported AUDIO table will fail, because they will not find data in the CONVERTED_DATA column. Defining a filter that skips all DDL transactions prevents such redefinitions of the AUDIO table. To enable support for a table in a case such as this, see Section 10.2.2, "Scenario 2: Creating or Re-creating a Table".

10.2.2 Scenario 2: Creating or Re-creating a Table

To create a table on a logical standby database, you use the DBMS_LOGSTDBY.INSTANTIATE_TABLE procedure.

See Also:

Oracle9i Supplied PL/SQL Packages and Types Reference for more information about the DBMS_LOGSTDBY package.

This procedure creates or re-creates a table on a logical standby database from an existing table definition on a primary database. In addition to creating a table, the procedure also imports data from the primary table.

Typically, you use table instantiation to recover a table that has experienced an invalid DML operation, such as a NOLOGGING DML transaction, but you can also use the procedure to enable support on a table that was formerly unsupported. That is, the DBMS_LOGSTDBY.INSTANTIATE_TABLE procedure offers a way to configure or reconfigure a table for support by a logical standby database, without having to back up and restore the database.

Before you can create a table, it must meet the requirements described in Section 4.1.

The steps described in this scenario ensure that the logical standby database will be able to support a table on the primary database. This scenario creates the AUDIO table in the MOVIES schema on the SAT standby database using the definition and data found on the HQ primary database. The scenario assumes that the metadata definition for the AUDIO table remains unmodified throughout the creation process.

To create a table, perform the following steps on the logical standby database:

  1. Stop log apply services:
    SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
    
    
  2. Remove existing filters on the AUDIO table, if any:
    SQL> SELECT * FROM DBA_LOGSTDBY_SKIP;
    ERROR  STATEMENT_OPT         OWNER          NAME            PROC
    ---------------------------------------------------------------------
    N      SCHEMA_DDL            MOVIES         AUDIO
    
    SQL> EXECUTE DBMS_LOGSTDBY.UNSKIP('SCHEMA_DDL', 'MOVIES', 'AUDIO');
    
    
  3. Create the AUDIO table in the MOVIES schema using the database link to the HQ primary database:
    SQL> EXECUTE DBMS_LOGSTDBY.INSTANTIATE_TABLE('AUDIO', 'MOVIES', 'HQ_NET');
    
    
  4. Resume log apply services to start support on the table:
    SQL> ALTER DATABASE START LOGICAL STANDBY APPLY;
    

10.2.3 Scenario 3: Failover Operations When In Maximum Availability Mode

This scenario describes a failover operation in a configuration with the maximum availability mode. This mode is the highest grade of data protection supported by a Data Guard configuration that contains logical standby databases.

In a disaster, the biggest task is to determine which logical standby database is the best target for the failover operation. Although many environmental factors affect which standby database is the best target, this scenario assumes those factors to be equal in order to emphasize data loss assessment. Once you identify a target standby database, initiating the failover operation is a matter of issuing a command.

See Also:

Section 5.7.2 for more information about the maximum availability protection mode for logical standby databases

This scenario starts out with a Data Guard configuration consisting of the HQ primary database and the SAT standby database, and adds a logical standby database named NYC. The following table provides information about the values used for the new NYC logical standby database.

Identifier                         Value on the Standby Database
---------------------------------- -----------------------------
Location                           New York City
Database name                      NYC
Instance name                      NYC
Initialization parameter file      nyc_init.ora
Control file                       nyc_cf1.f
Datafile                           nyc_db1.f
Online redo log file 1             nyc_log1.f
Online redo log file 2             nyc_log2.f
Database link (client-defined)     nyc_link
Net service name (client-defined)  nyc_net
Listener                           nyc_listener

Step 1 Determine the standby site with the least amount of lost data.

To do this, perform the following steps:

  1. Connect to the SAT logical standby database.
    SQL> CONNECT SYS/CHANGE_ON_INSTALL AS SYSDBA;
    
    
  2. Determine the highest applied SCN and highest (newest) applicable SCN on the SAT database by querying the following columns in the DBA_LOGSTDBY_PROGRESS view. For example:
    SQL> SELECT APPLIED_SCN, NEWEST_SCN FROM DBA_LOGSTDBY_PROGRESS;
    APPLIED_SCN NEWEST_SCN
    ----------- ----------
         144059     144059
    
    
  3. Obtain a list of the archived redo logs that have been applied or are currently pending application to the SAT database by querying the DBA_LOGSTDBY_LOG view. For example:
    SQL> SELECT SUBSTR(FILE_NAME,1,25) FILE_NAME, SUBSTR(SEQUENCE#,1,4) "SEQ#",-
    FIRST_CHANGE#, NEXT_CHANGE#, TO_CHAR(TIMESTAMP, 'HH:MI:SS') TIMESTAMP,-
    DICT_BEGIN BEG, DICT_END END, SUBSTR(THREAD#,1,4) "THR#"-
    FROM DBA_LOGSTDBY_LOG ORDER BY SEQUENCE#;
    
    FILE_NAME                 SEQ# FIRST_CHANGE# NEXT_CHANGE# TIMESTAM BEG END THR#
    ------------------------- ---- ------------- ------------ -------- --- --- ----
    /oracle/dbs/hq_sat_2.log  2           101579       101588 11:02:57 NO  NO  1
    /oracle/dbs/hq_sat_3.log  3           101588       142065 11:02:01 NO  NO  1
    /oracle/dbs/hq_sat_4.log  4           142065       142307 11:02:09 NO  NO  1
    /oracle/dbs/hq_sat_5.log  5           142307       142739 11:02:47 YES YES 1
    /oracle/dbs/hq_sat_6.log  6           142739       143973 12:02:09 NO  NO  1
    /oracle/dbs/hq_sat_7.log  7           143973       144042 01:02:00 NO  NO  1
    /oracle/dbs/hq_sat_8.log  8           144042       144051 01:02:00 NO  NO  1
    /oracle/dbs/hq_sat_9.log  9           144051       144054 01:02:15 NO  NO  1
    /oracle/dbs/hq_sat_10.log 10          144054       144057 01:02:20 NO  NO  1
    /oracle/dbs/hq_sat_11.log 11          144057       144060 01:02:25 NO  NO  1
    /oracle/dbs/hq_sat_13.log 13          144089       144147 01:02:40 NO  NO  1
    
    

    Notice the gap in the sequence numbers in the SEQ# column; in the example, the gap indicates that the SAT database is missing archived redo log 12.
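    Rather than scanning the SEQ# column by eye, a query along the following lines (a sketch) reports the first sequence number missing after each gap:

```sql
SQL> SELECT THREAD#, SEQUENCE# + 1 AS FIRST_MISSING_SEQ
  2  FROM DBA_LOGSTDBY_LOG a
  3  WHERE SEQUENCE# + 1 NOT IN
  4    (SELECT SEQUENCE# FROM DBA_LOGSTDBY_LOG b WHERE b.THREAD# = a.THREAD#)
  5  AND SEQUENCE# < (SELECT MAX(SEQUENCE#) FROM DBA_LOGSTDBY_LOG);
```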

  4. Connect to the NYC database:
    SQL> CONNECT SYS/CHANGE_ON_INSTALL AS SYSDBA;
    
    
  5. Determine the highest applied SCN and highest applicable SCN on the NYC database by querying the following columns in the DBA_LOGSTDBY_PROGRESS view:
    SQL> SELECT APPLIED_SCN, NEWEST_SCN FROM DBA_LOGSTDBY_PROGRESS;
    
    APPLIED_SCN NEWEST_SCN
    ----------- ----------
         143970     144146
    
    
  6. Obtain a list of the log files that have been processed or are currently pending processing on the NYC database. For example:
    SQL> SELECT SUBSTR(FILE_NAME,1,25) FILE_NAME, SUBSTR(SEQUENCE#,1,4) "SEQ#", -
    FIRST_CHANGE#, NEXT_CHANGE#, TO_CHAR(TIMESTAMP, 'HH:MI:SS')  TIMESTAMP, -
    DICT_BEGIN BEG, DICT_END END, SUBSTR(THREAD#,1,4) "THR#" -
    FROM DBA_LOGSTDBY_LOG ORDER BY SEQUENCE#;
    
    FILE_NAME                 SEQ# FIRST_CHANGE# NEXT_CHANGE# TIMESTAM BEG END THR#
    ------------------------- ---- ------------- ------------ -------- --- --- ----
    /oracle/dbs/hq_nyc_2.log  2           101579       101588 11:02:58 NO  NO  1
    /oracle/dbs/hq_nyc_3.log  3           101588       142065 11:02:02 NO  NO  1
    /oracle/dbs/hq_nyc_4.log  4           142065       142307 11:02:10 NO  NO  1
    /oracle/dbs/hq_nyc_5.log  5           142307       142739 11:02:48 YES YES 1
    /oracle/dbs/hq_nyc_6.log  6           142739       143973 12:02:10 NO  NO  1
    /oracle/dbs/hq_nyc_7.log  7           143973       144042 01:02:11 NO  NO  1
    /oracle/dbs/hq_nyc_8.log  8           144042       144051 01:02:01 NO  NO  1
    /oracle/dbs/hq_nyc_9.log  9           144051       144054 01:02:16 NO  NO  1
    /oracle/dbs/hq_nyc_10.log 10          144054       144057 01:02:21 NO  NO  1
    /oracle/dbs/hq_nyc_11.log 11          144057       144060 01:02:26 NO  NO  1
    /oracle/dbs/hq_nyc_12.log 12          144060       144089 01:02:30 NO  NO  1
    /oracle/dbs/hq_nyc_13.log 13          144089       144147 01:02:41 NO  NO  1
    
    
  7. To apply archived redo logs to bring the SAT database to its most current state, including any partially archived log files, perform the following steps:
    1. Manually recover any missing archived redo logs using an operating system utility. In this case, the SAT database is missing archived redo log 12. Because the NYC database received this archived redo log, you can copy it from the NYC database to the SAT database, as follows:
      %cp /net/nyc/oracle/dbs/hq_nyc_12.log /net/sat/oracle/dbs/hq_sat_12.log
      
      
    2. Determine if a partial archived redo log exists for the next sequence number. In this example, the next sequence number should be 14. The following UNIX command lists the directory on the SAT database, checking for the presence of an archived redo log named hq_sat_14.log:
      %ls -l /net/sat/oracle/dbs/hq_sat_14.log
      -rw-rw----   1 oracle    dbs  333280 Feb 12  1:03 hq_sat_14.log
      
      
    3. Register both the recovered archived redo log and the partial archived redo log.
      SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
      SQL> ALTER DATABASE REGISTER LOGICAL LOGFILE '/oracle/dbs/hq_sat_12.log';
      SQL> ALTER DATABASE REGISTER LOGICAL LOGFILE '/oracle/dbs/hq_sat_14.log';
      
      
      
    4. Start log apply services to apply to the most current log.
      SQL> ALTER DATABASE START LOGICAL STANDBY APPLY;
      
      
  8. Bring the NYC database up to its most current state, including any partial archived redo logs, by performing the following steps:
    1. Determine if a partial archived redo log exists for the next sequence number. The following UNIX command lists the directory on the NYC database, checking for the presence of an archived redo log named with the next sequence number (hq_nyc_14.log). For example:
      %ls -l /net/nyc/oracle/dbs/hq_nyc_14.log
      -rw-rw----   1 oracle    dbs  333330 Feb 12  1:03 hq_nyc_14.log
      
      
    2. Register the partial archive log file on the NYC database:
      SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
      SQL> ALTER DATABASE REGISTER LOGICAL LOGFILE '/oracle/dbs/hq_nyc_14.log';
      
      
      
    3. Start log apply services to apply to the most current log.
      SQL> ALTER DATABASE START LOGICAL STANDBY APPLY;
      
      
  9. Determine the highest applied SCN on the SAT database. To be sure that log apply services have completed applying all archived redo logs, query the DBA_LOGSTDBY_PROGRESS view to see if the value of the APPLIED_SCN column is equal to the value of the NEWEST_SCN column:
    SQL> SELECT APPLIED_SCN, NEWEST_SCN FROM DBA_LOGSTDBY_PROGRESS;
    
    APPLIED_SCN NEWEST_SCN
    ----------- ----------
         144200     144200
    
    

    Because the SCN values match, you can be assured that there is no longer a delay (lag) between the primary database's current log and the last log received and applied by the SAT database.

  10. Determine the highest applied SCN on the NYC database by querying the DBA_LOGSTDBY_PROGRESS view to see if the value of the APPLIED_SCN column is equal to the value of the NEWEST_SCN column:
    SQL> SELECT APPLIED_SCN, NEWEST_SCN FROM DBA_LOGSTDBY_PROGRESS;
    
    APPLIED_SCN NEWEST_SCN
    ----------- ----------
         144205     144205
    
    

    Because the SCN values match, you can be assured that there is no longer a delay (lag) between the primary database's current log and the last log received and applied by the NYC database.

In most cases, the logical standby database you choose as a failover target should be a balance between data loss and performance. As you analyze this information to make a decision about the best failover candidate in this scenario, consider the following:

Step 2 Perform the failover operation.

Once you determine which standby database is going to be the failover target, initiate a failover operation by connecting to the standby database and issuing the following statements:

  1. Stop log apply services:
    SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
    
    
  2. Issue the following statement to fail over to the logical standby database that will become the new primary database:
    SQL> ALTER DATABASE ACTIVATE LOGICAL STANDBY DATABASE;
    

10.2.4 Scenario 4: Switchover Operations

This scenario presents the requirements and procedures to ensure a successful switchover of roles between a primary database and a logical standby database. During a switchover operation, Data Guard takes measures to prevent data divergence during the role transition.

The scenario includes information about setting up initialization parameter files prior to initiating the switchover operation. In some cases, a switchover operation requires that you reconfigure the original primary database as a standby database to the new primary database. However, it is possible to eliminate this requirement in configurations with multiple standby databases without violating the database protection mode.

See Also:

Section 5.7 for more information about data protection modes

Performing a switchover operation without this requirement is the most likely scenario, because it allows a former primary database to undergo routine maintenance operations without downtime or data protection penalties.

Step 1 Modify the initialization parameter files.
  1. Modify the SAT database's initialization parameter file, sat_init.ora, to ensure that the SAT database can function as a primary database, as follows:
    • Configure an additional archive destination using the Oracle Net service name for the primary database, which specifies the same database protection mode that is currently specified for the primary database.
    • Defer the new destination.

    The following example shows the LOG_ARCHIVE_DEST_n and LOG_ARCHIVE_DEST_STATE_n initialization parameters in the sat_init.ora parameter file:

    LOG_ARCHIVE_DEST_1 = 'LOCATION=SAT MANDATORY REOPEN=5'
    LOG_ARCHIVE_DEST_2 = 'SERVICE=HQ_NET LGWR SYNC AFFIRM MANDATORY NOREGISTER'
    LOG_ARCHIVE_DEST_STATE_1 = ENABLE
    LOG_ARCHIVE_DEST_STATE_2 = DEFER
    LOG_ARCHIVE_FORMAT = "_SAT_%s"
    STANDBY_ARCHIVE_DEST = 'HQ'
    
    
  2. Modify the initialization parameter file, hq_init.ora, for the HQ database to ensure that the HQ database can function as a logical standby database. Modify this file:
    • To ensure the file names for locally archived log files and received archived log files do not collide if they reside in the same directory
    • To tune the initialization parameters appropriately for log apply services

    The following example shows the relevant parameters in the hq_init.ora initialization parameter file:

    LOG_ARCHIVE_DEST_1 = 'LOCATION=HQ MANDATORY REOPEN=5'
    LOG_ARCHIVE_DEST_2 = 'SERVICE=SAT_NET LGWR SYNC AFFIRM MANDATORY NOREGISTER'
    LOG_ARCHIVE_DEST_STATE_1 = ENABLE
    LOG_ARCHIVE_DEST_STATE_2 = ENABLE
    LOG_ARCHIVE_FORMAT = "_HQ_%s"
    STANDBY_ARCHIVE_DEST = 'SAT'
    LOG_PARALLELISM = 1
    PARALLEL_MAX_SERVERS = 9
    SHARED_POOL_SIZE = 167772160
    
    
Step 2 Switch over the HQ (primary) database to the logical standby role.

To switch over from the HQ database to the SAT database, perform the following steps:

  1. Verify the database is currently a primary database by querying the V$DATABASE view. For example:
    SQL> SELECT NAME, DATABASE_ROLE FROM V$DATABASE;
    NAME      DATABASE_ROLE
    --------- ----------------
    HQ        PRIMARY
    
    
  2. Prepare the primary database to become a logical standby database:
    SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO LOGICAL STANDBY;
    
    
  3. Disable log transport services from transmitting archived redo logs to the SAT logical standby database:
    SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2 = DEFER;
    
    
  4. Set any desired initialization parameters for the logical standby database. To view the current settings on the SAT database, query them through the database link using the following SQL statement:
    SQL> SELECT NAME, SUBSTR(VALUE,1,30) "Value" -
    FROM SYSTEM.LOGSTDBY$PARAMETERS@SAT_LINK ORDER BY NAME;
    
    NAME                           Value
    ------------------------------ ------------------------------
    FIRST_SCN                      101579
    LMNR_SID                       1
    PRIMARY                        1457871294
    TRANSACTION_CONSISTENCY        FULL
    _SYNCPOINT_INTERVAL            0
    
    
  5. To have the HQ database operate with full transaction consistency, set the TRANSACTION_CONSISTENCY parameter using the following PL/SQL procedure:
    SQL> EXECUTE DBMS_LOGSTDBY.APPLY_SET('TRANSACTION_CONSISTENCY', 'FULL');
    
Step 3 Switch over the SAT (standby) database to the primary role.
  1. Wait for a COMPLETED_SESSION record to appear in the DBA_LOGSTDBY_PARAMETERS view. This indicates that the last of the archived redo logs from the primary database has been applied. For example:
    SQL> SELECT NAME, SUBSTR(VALUE,1,30) "VALUE" FROM -
       > DBA_LOGSTDBY_PARAMETERS WHERE NAME='COMPLETED_SESSION';
    NAME                            VALUE
    ------------------------------ ------------------------------
    COMPLETED_SESSION               SWITCHOVER
    
    
  2. Verify the SAT database is a logical standby database by querying the V$DATABASE view, as follows:
    SQL> SELECT NAME, DATABASE_ROLE FROM V$DATABASE;
    NAME      DATABASE_ROLE
    --------- ----------------
    SAT       LOGICAL STANDBY
    
    
  3. Enable archiving to the new standby destination, which is the HQ database:
    SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2 = ENABLE;
    
    
  4. Prepare the SAT database to become the new primary database using the following SQL statement.
    SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO LOGICAL PRIMARY;
    
    
  5. Verify the SAT database has changed to the role of a primary database by querying the V$DATABASE view:
    SQL> SELECT NAME, DATABASE_ROLE FROM V$DATABASE;
    NAME      DATABASE_ROLE
    --------- ----------------
    SAT       PRIMARY
    
Step 4 Start log apply services on the HQ database.

On the HQ (logical standby) database, start log apply services on the new standby database using a database link to the new primary database, as follows:

SQL> ALTER DATABASE START LOGICAL STANDBY APPLY NEW PRIMARY SAT_LINK;

Copyright © 1999, 2002 Oracle Corporation.

All Rights Reserved.