Oracle8i Standby Database Concepts and Administration
Release 2 (8.1.6)

A76995-01


4
Performing Maintenance on a Standby Database

This chapter describes how to perform typical maintenance operations on a standby database.

Monitoring Events That Affect the Standby Database

To prevent possible problems, you should be aware of events that affect a standby database and learn how to monitor them. Most changes to a primary database are automatically propagated to a standby database through archived redo logs and so require no user intervention. Nevertheless, some changes to a primary database require manual intervention at the standby site.


Monitoring the Primary and Standby Databases

Table 4-1 indicates whether a primary database event is propagated automatically or requires extra administrative effort to be fully propagated. It also describes how to respond to these events.

Table 4-1  Propagating a Command

Archiving errors
  Detection at primary site: V$ARCHIVE_DEST.ERROR; alert.log; V$LOG.ARCHIVED; archiving trace files
  Detection at standby site: remote file server (RFS) process trace file
  Response: Create scripts to push or pull archived redo logs if errors occur or if performance is degraded. See Adding Tablespaces or Datafiles to the Primary Database.

Thread events
  Detection at primary site: alert.log; V$THREAD
  Detection at standby site: alert.log
  Response: Thread events are automatically propagated through archived logs, so no extra action is necessary.

Redo log changes
  Detection at primary site: alert.log; V$LOG and V$LOGFILE.STATUS
  Detection at standby site: N/A
  Response: Redo log changes do not affect the standby database unless a redo log is cleared or lost; in these cases, you must rebuild the standby database (see Creating the Standby Database Files). You can pre-clear the logs on the standby database with the ALTER DATABASE CLEAR LOGFILE statement (see Clearing Online Redo Logs).

Issue CREATE CONTROLFILE
  Detection at primary site: alert.log
  Detection at standby site: The standby database functions normally until it encounters redo that depends on the changed parameters.
  Response: Re-create the standby control file (see Refreshing the Standby Database Control File). Re-create the standby database if the primary database is opened with the RESETLOGS option.

Media recovery performed
  Detection at primary site: alert.log
  Detection at standby site: N/A
  Response: Re-create the standby database if the RESETLOGS option is used.

Tablespace status changes (made read/write or read-only, placed online or offline)
  Detection at primary site: DBA_TABLESPACES; alert.log
  Detection at standby site: Verify that all datafiles are online; V$RECOVER_FILE
  Response: Status changes are automatically propagated, so no response is necessary. Datafiles remain online.

Add datafile or create tablespace
  Detection at primary site: DBA_DATA_FILES; alert.log
  Detection at standby site: ORA-283, ORA-1670, ORA-1157, or ORA-1110 errors; standby recovery stops.
  Response: Manually create the datafile and restart recovery. See Adding Tablespaces or Datafiles to the Primary Database.

Drop tablespace
  Detection at primary site: DBA_DATA_FILES; alert.log
  Detection at standby site: alert.log
  Response: Remove the datafile from the operating system.

Tablespace or datafile taken offline, or datafile dropped offline
  Detection at primary site: V$RECOVER_FILE; alert.log. The tablespace or datafile requires recovery when you attempt to bring it online.
  Detection at standby site: Verify that all datafiles are online; V$RECOVER_FILE
  Response: Datafiles remain online. The tablespace or datafile is fine after standby database activation.

Rename datafile
  Detection at primary site: alert.log
  Detection at standby site: N/A
  Response: N/A

Unlogged or unrecoverable operations
  Detection at primary site: The direct loader invalidates the block range redo entry in the online redo log (check V$DATAFILE); V$DATABASE
  Detection at standby site: alert.log. File blocks are invalidated unless they are in the future of redo, in which case they are not touched.
  Response: Unlogged changes are not propagated to the standby database. If you want to apply these changes, see Performing Direct Path Operations.

Recovery progress
  Detection at primary site: alert.log
  Detection at standby site: V$RECOVER_LOG; alert.log
  Response: Make sure the standby database is not falling behind the primary database.

Autoextend a datafile
  Detection at primary site: alert.log
  Detection at standby site: The operation may fail on the standby database because it lacks disk space.
  Response: Ensure that there is enough disk space for the expanded datafile.

Issue OPEN RESETLOGS or CLEAR UNARCHIVED LOGFILES statements
  Detection at primary site: alert.log
  Detection at standby site: The standby database is invalidated.
  Response: Rebuild the standby database. See Creating the Standby Database Files.

Change initialization parameter
  Detection at primary site: alert.log
  Detection at standby site: May cause failure because of redo, depending on the changed parameter.
  Response: Dynamically change the standby parameter, or shut down the standby database and edit the initialization parameter file.
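Table 4-1 suggests scripting the transfer of archived redo logs when archiving errors occur or performance degrades. The following is a minimal sketch of such a push script in Python; the directory layout and the arc_* filename pattern are assumptions for illustration, not part of the Oracle documentation:

```python
import shutil
from pathlib import Path

def push_archived_logs(primary_dir, standby_dir):
    """Copy archived redo logs that exist at the primary archive
    destination but are not yet present at the standby destination.
    Returns the list of filenames pushed on this pass."""
    primary = Path(primary_dir)
    standby = Path(standby_dir)
    standby.mkdir(parents=True, exist_ok=True)
    pushed = []
    # "arc_*" is a hypothetical LOG_ARCHIVE_FORMAT; adjust to your setting.
    for log in sorted(primary.glob("arc_*")):
        target = standby / log.name
        if not target.exists():
            shutil.copy2(log, target)   # binary-safe copy, preserves mtime
            pushed.append(log.name)
    return pushed
```

Running the function repeatedly (for example, from cron) is safe: logs already present at the standby destination are skipped.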

Determining Which Archived Logs Have Been Received by the Standby Site

The following table lists two methods for gaining information about archived redo logs received by the standby site:

Method: Access the V$ARCHIVED_LOG view on the standby database.
  Advantages: Easily accessible; does not require setting additional parameters.
  Disadvantages: Lists minimal information; does not record archiving errors.

Method: Set archive tracing on the primary and standby sites through the LOG_ARCHIVE_TRACE initialization parameter.
  Advantages: Allows you to control the level of detail in the trace output; gives extensive information if desired; records archiving errors as well as successes.
  Disadvantages: Requires setting an initialization parameter and interpreting trace output.

Accessing the V$ARCHIVED_LOG View

The simplest way to determine the most recent archived log received by the standby site is to query the V$ARCHIVED_LOG view. This view is only useful after the standby site has started receiving logs, because before that time the view is populated by old archived log records generated from the primary control file. For example, you can execute the following script (sample output included):

col name format a20
col thread# format 999
col sequence# format 999
col first_change# format 999999
col next_change# format 999999

SELECT thread#, sequence# AS "SEQ#", name, first_change# AS "FIRSTSCN", 
       next_change# AS "NEXTSCN",archived, deleted,completion_time AS "TIME"
FROM   v$archived_log
/

SQL> @archived_script

THREAD#       SEQ# NAME                   FIRSTSCN    NEXTSCN ARC DEL TIME
------- ---------- -------------------- ---------- ---------- --- --- ---------
      1        947 /arc_dest/arc_1_947       33113      33249 YES NO  23-JUN-99

Setting Archive Tracing

To see the progression of the archiving of redo logs to the standby site, set the LOG_ARCHIVE_TRACE parameter in the primary and standby initialization parameter files.

Setting LOG_ARCHIVE_TRACE on the primary database causes Oracle to write an audit trail of archiving process activity (ARCn and foreground processes) to a trace file in the directory specified by the USER_DUMP_DEST initialization parameter.

Setting LOG_ARCHIVE_TRACE on the standby database causes Oracle to write an audit trail of RFS process activity relating to archived redo logs to a trace file in the directory specified by the USER_DUMP_DEST initialization parameter.

Determining the Location of the Trace Files

The trace files for a database are located in the directory specified by the USER_DUMP_DEST parameter in the initialization parameter file. Connect to the primary and standby instances using SQL*Plus and issue a SHOW statement to determine the location:

SQL> SHOW PARAMETER user_dump_dest
NAME                                 TYPE    VALUE
------------------------------------ ------- ------------------------------
user_dump_dest                       string  ?/rdbms/log

Setting the Log Trace Parameter

The format for the archiving trace parameter is as follows, where trace_level is an integer:

LOG_ARCHIVE_TRACE=trace_level

To enable, disable, or modify the LOG_ARCHIVE_TRACE parameter in a primary database, either shut down the primary database and edit the parameter in the initialization parameter file before restarting, or change the parameter dynamically:

ALTER SYSTEM SET LOG_ARCHIVE_TRACE=trace_level;

To enable, disable, or modify the LOG_ARCHIVE_TRACE parameter in a standby database in read-only or recovery mode, issue the same ALTER SYSTEM statement on the standby instance.

If managed recovery is active, then issue the ALTER SYSTEM statement from a different standby session so that it affects trace output generated by the remote file service (RFS) when the next archived log is received from the primary database. For example, enter:

SQL> ALTER SYSTEM SET log_archive_trace=32;

Choosing an Integer Value

The integer values for the LOG_ARCHIVE_TRACE parameter represent levels of tracing data. In general, the higher the level, the more detailed the information. The following integer levels are available:

Level   Meaning
-----   -------
0       Disable archivelog tracing (the default setting)
1       Track archival of redo log file
2       Track archival status per archivelog destination
4       Track archival operational phase
8       Track archivelog destination activity
16      Track detailed archivelog destination activity
32      Track archivelog destination parameter modifications
64      Track ARCn process state activity

You can combine tracing levels by setting the value of the LOG_ARCHIVE_TRACE parameter to the sum of the individual levels. For example, setting the parameter to 3 generates level 1 and level 2 trace output.
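Because each tracing level is a distinct power of two, a combined LOG_ARCHIVE_TRACE value can also be decomposed back into its component levels. The following small illustration (not Oracle code) shows the arithmetic:

```python
# LOG_ARCHIVE_TRACE levels, as listed in the table above.
TRACE_LEVELS = {
    1: "Track archival of redo log file",
    2: "Track archival status per archivelog destination",
    4: "Track archival operational phase",
    8: "Track archivelog destination activity",
    16: "Track detailed archivelog destination activity",
    32: "Track archivelog destination parameter modifications",
    64: "Track ARCn process state activity",
}

def decode_trace(value):
    """Return the individual trace levels enabled by a combined value.
    Each level is a power of two, so a bitwise AND isolates it."""
    return [lvl for lvl in sorted(TRACE_LEVELS) if value & lvl]
```

For example, decode_trace(3) returns [1, 2], matching the level 1 and level 2 trace output described above.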

Following are examples of the ARC0 trace data generated on the primary site by the archival of redo log 387 to two different destinations: the service STANDBY1 and the local directory /vobs/oracle/dbs.


Note:

The level numbers do not appear in the actual trace output: they are shown here for clarification only. 


Level   Corresponding entry content (sample) 
-----   -------------------------------- 
( 1)    ARC0: Begin archiving log# 1 seq# 387 thrd# 1 
( 4)    ARC0: VALIDATE 
( 4)    ARC0: PREPARE 
( 4)    ARC0: INITIALIZE 
( 4)    ARC0: SPOOL 
( 8)    ARC0: Creating archive destination 2 : 'standby1' 
(16)    ARC0:  Issuing standby Create archive destination at 'standby1' 
( 8)    ARC0: Creating archive destination 1 : '/vobs/oracle/dbs/d1arc1_387.dbf' 
(16)    ARC0:  Archiving block 1 count 1 to : 'standby1' 
(16)    ARC0:  Issuing standby Archive of block 1 count 1 to 'standby1' 
(16)    ARC0:  Archiving block 1 count 1 to :  '/vobs/oracle/dbs/d1arc1_387.dbf' 
( 8)    ARC0: Closing archive destination 2  : standby1 
(16)    ARC0:  Issuing standby Close archive destination at 'standby1' 
( 8)    ARC0: Closing archive destination 1  :  /vobs/oracle/dbs/d1arc1_387.dbf 
( 4)    ARC0: FINISH 
( 2)    ARC0: Archival success destination 2 : 'standby1' 
( 2)    ARC0: Archival success destination 1 : '/vobs/oracle/dbs/d1arc1_387.dbf' 
( 4)    ARC0: COMPLETE, all destinations archived 
(16)    ARC0: ArchivedLog entry added: /vobs/oracle/dbs/d1arc1_387.dbf 
(16)    ARC0: ArchivedLog entry added: standby1 
( 4)    ARC0: ARCHIVED 
( 1)    ARC0: Completed archiving log# 1 seq# 387 thrd# 1 
 
(32)  Propagating archive 0 destination version 0 to version 2 
         Propagating archive 0 state version 0 to version 2 
         Propagating archive 1 destination version 0 to version 2 
         Propagating archive 1 state version 0 to version 2 
         Propagating archive 2 destination version 0 to version 1 
         Propagating archive 2 state version 0 to version 1 
         Propagating archive 3 destination version 0 to version 1 
         Propagating archive 3 state version 0 to version 1 
         Propagating archive 4 destination version 0 to version 1 
         Propagating archive 4 state version 0 to version 1 
 
(64) ARCH: changing ARC0 KCRRNOARCH->KCRRSCHED 
        ARCH: STARTING ARCH PROCESSES 
        ARCH: changing ARC0 KCRRSCHED->KCRRSTART 
        ARCH: invoking ARC0 
        ARC0: changing ARC0 KCRRSTART->KCRRACTIVE 
        ARCH: Initializing ARC0 
        ARCH: ARC0 invoked 
        ARCH: STARTING ARCH PROCESSES COMPLETE 
        ARC0 started with pid=8 
        ARC0: Archival started

Following is the trace data generated by the RFS process on the standby site as it receives archived log 387 in directory /stby and applies it to the standby database:

level    trace output (sample) 
----    ------------------ 
( 4)      RFS: Startup received from ARCH pid 9272 
( 4)      RFS: Notifier 
( 4)      RFS: Attaching to standby instance 
( 1)      RFS: Begin archive log# 2 seq# 387 thrd# 1 
(32)      Propagating archive 5 destination version 0 to version 2 
(32)      Propagating archive 5 state version 0 to version 1 
( 8)      RFS: Creating archive destination file: /stby/parc1_387.dbf 
(16)      RFS:  Archiving block 1 count 11 
( 1)      RFS: Completed archive log# 2 seq# 387 thrd# 1 
( 8)      RFS: Closing archive destination file: /stby/parc1_387.dbf 
(16)      RFS: ArchivedLog entry added: /stby/parc1_387.dbf 
( 1)      RFS: Archivelog seq# 387 thrd# 1 available 04/02/99 09:40:53 
( 4)      RFS: Detaching from standby instance 
( 4)      RFS: Shutdown received from ARCH pid 9272

Determining Which Logs Have Been Applied to the Standby Database

Query the V$LOG_HISTORY view on the standby database, which records the latest log sequence number that has been applied. For example, issue the following query:

SQL> SELECT thread#, max(sequence#) AS "LAST_APPLIED_LOG"
  2> FROM   v$log_history
  3> GROUP BY thread#;

THREAD# LAST_APPLIED_LOG
------- ----------------
      1              967

In this example, the archived redo log with log sequence number 967 is the most recently applied log.


Note:

V$LOG is not updated during recovery. 
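Comparing the last applied sequence from V$LOG_HISTORY with the most recent sequence received (from the V$ARCHIVED_LOG query shown earlier) gives a rough measure of how far the standby database has fallen behind. A small sketch, with the sequence numbers supplied by hand rather than queried:

```python
def apply_lag(last_received_seq, last_applied_seq):
    """Number of received archived logs not yet applied on the standby.
    Inputs are the max sequence# values from V$ARCHIVED_LOG and
    V$LOG_HISTORY respectively (per thread)."""
    return max(0, last_received_seq - last_applied_seq)
```

With sequence 967 applied and, say, sequence 970 received, three archived logs remain to be applied.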


Responding to Events That Affect the Standby Database

Typically, physical changes to the primary database require a manual response on the standby database.

Adding Tablespaces or Datafiles to the Primary Database

Adding a tablespace or datafile to the primary database generates redo that, when applied at the standby database, automatically adds the datafile name to the standby control file. If the standby database locates the file with the filename specified in the control file, then recovery continues. If the standby database is unable to locate a file with the filename specified in the control file, then recovery terminates.

Perform one of the following procedures to create a new datafile in the primary database and update the standby database. Note that if you do not want the new datafile in the standby database, you can take the datafile offline manually using the following syntax:

SQL> ALTER DATABASE DATAFILE 'filename' OFFLINE DROP;

To add a tablespace or datafile to the primary database and create the datafile in the standby database:

  1. Create a tablespace on the primary database as usual. For example, to create new datafile t_db2.f in tablespace tbs_2, issue:

    SQL> CREATE TABLESPACE tbs_2 DATAFILE 't_db2.f' SIZE 2M; 
    
    
    
  2. If the standby database is shut down, start the standby instance without mounting it. For example, enter:

    SQL> STARTUP NOMOUNT pfile=/private1/stby/initSTANDBY.ora
    
    

    If the standby database is currently in managed recovery mode, skip to step 4.

  3. Mount the standby database, then place it in managed recovery mode:

    SQL> ALTER DATABASE MOUNT STANDBY DATABASE;
    SQL> RECOVER MANAGED STANDBY DATABASE;
    
    
    
  4. Switch redo logs on the primary database to initiate redo archival to the standby database:

    SQL> ALTER SYSTEM SWITCH LOGFILE;
    
    

    If the recovery process on the standby database tries to apply the redo containing the CREATE TABLESPACE statement, it stops because the new datafile does not exist on the standby site.

  5. Either wait for the standby database to cancel recovery because it cannot find the new datafile, or manually cancel managed recovery:

    SQL> RECOVER MANAGED STANDBY DATABASE CANCEL;
    
    
    

    Note that CREATE TABLESPACE redo adds the new filename to the standby control file. The following alert.log entry is generated:

    WARNING! Recovering datafile 2 from a fuzzy file. If not the current file it 
    might be an online backup taken without entering the begin backup command. 
    Successfully added datafile 2 to media recovery 
    Datafile #2: '/private1/stby/t_db2.f' 
    
    
    
  6. Create the datafile on the standby database. For example, issue:

    SQL> ALTER DATABASE CREATE DATAFILE '/private1/stby/t_db2.f' 
                                     AS '/private1/stby/t_db2.f'; 
    
     
    
  7. Place the standby database in managed recovery mode:

    SQL> RECOVER MANAGED STANDBY DATABASE;
    
    
    

Continue normal processing on the primary database. The primary and standby databases are now synchronized.

See Also:

For more information on offline datafile alterations, see Taking Datafiles in the Standby Database Offline

Renaming Datafiles on the Primary Database

Datafile renames on your primary database do not take effect at the standby database until you refresh the standby database control file. To keep the datafiles at the primary and standby databases synchronized when you rename primary database datafiles, perform analogous operations on the standby database.

Adding or Dropping Redo Logs on the Primary Database

You can add redo log file groups or members to the primary database without affecting the standby database. Similarly, you can drop log file groups or members from the primary database without affecting your standby database. Enabling and disabling of threads at the primary database has no effect on the standby database.

Consider whether to keep the online redo log configuration the same at the primary and standby databases. Although differences in the online redo log configuration between the primary and standby databases do not affect the standby database functionality, they do affect the performance of the standby database after activation. For example, if the primary database has 10 redo logs and the standby database has 2, and you then activate the standby database so that it functions as the new primary database, the new primary database is forced to archive more frequently than the old primary database.

To prevent problems after standby activations, Oracle Corporation recommends keeping the online redo log configuration the same at the primary and standby databases. Note that when you enable a log file thread with the ALTER DATABASE ENABLE THREAD statement at the primary database, you must create a new control file for your standby database before activating it. See Refreshing the Standby Database Control File for procedures.

Resetting or Clearing Unarchived Redo Logs on the Primary Database

If you clear log files at the primary database by issuing the ALTER DATABASE CLEAR UNARCHIVED LOGFILE statement, or open the primary database using the RESETLOGS option, you invalidate the standby database. Because both of these operations reset the primary log sequence number to 1, you must re-create the standby database in order to be able to apply archived logs generated by the primary database. See Creating the Standby Database Files for the procedure. See Scenario 8: Re-Creating a Standby Database for additional information.

Altering the Primary Database Control File

Using the CREATE CONTROLFILE statement at the primary database may invalidate the control file for the standby database. In particular, using CREATE CONTROLFILE with the RESETLOGS option forces the next open of the primary database to reset the online logs, thereby invalidating the standby database.

If you have invalidated the control file for the standby database, re-create the file using the procedures in Refreshing the Standby Database Control File.

Taking Datafiles in the Standby Database Offline

You can take standby database datafiles offline as a means to support a subset of your primary database's datafiles. For example, you may decide not to recover the primary database's temporary tablespaces on the standby database.

Take the datafiles offline using the following statement on the standby database:

ALTER DATABASE DATAFILE 'filename' OFFLINE DROP;

If you execute this statement, then the tablespace containing the offline files must be dropped after opening the standby database.

Performing Direct Path Operations

When you perform a direct load (for example, a SQL*Loader direct path load or a direct-path INSERT), the performance improvement applies only to the primary database; there is no corresponding recovery process performance improvement on the standby database.

The standby database recovery process continues to sequentially read and apply the redo information generated by the unrecoverable direct load.

Propagating UNRECOVERABLE Processes Manually

Primary database processes using the UNRECOVERABLE option are not propagated to the standby database because these processes do not appear in the archived redo logs. If you perform an UNRECOVERABLE operation at the primary database and then recover the standby database, you do not receive error messages during recovery; instead, Oracle records messages such as the following in the standby database alert.log file:

26040, 00000, "Data block was loaded using the NOLOGGING option\n" 
//* Cause: Trying to access data in block that was loaded without  
//*        redo generation using the NOLOGGING/UNRECOVERABLE option 
//* Action: Drop the object containing the block.

Although the error message recommends dropping the object that contains the block, do not perform this operation on the standby database. Instead, refresh the affected standby datafiles from a new backup of the primary database (see Determining Whether a Backup Is Required After UNRECOVERABLE Operations).

Determining Whether a Backup Is Required After UNRECOVERABLE Operations

If you have performed UNRECOVERABLE operations on your primary database, determine whether a new backup is required.

To determine whether a new backup is necessary:

  1. Query the V$DATAFILE view on the primary database to determine the system change number (SCN) or time at which Oracle generated the most recent invalidation redo data.

  2. Issue the following SQL statement on the primary database to determine whether you need to perform another backup:

    SELECT unrecoverable_change#, 
           to_char(unrecoverable_time, 'mm-dd-yyyy hh:mi:ss') 
    FROM   v$datafile;
    
    
  3. If the query in the previous step reports an unrecoverable time for a datafile that is more recent than the time when the datafile was last backed up, then make another backup of the datafile in question.

    See Also:

    For more information about the V$DATAFILE view, see the Oracle8i Reference
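The decision in step 3 is a simple timestamp comparison. The following sketch illustrates it with hand-supplied values standing in for the query results and your backup records; it is not an Oracle API:

```python
from datetime import datetime

def backup_required(unrecoverable_time, last_backup_time):
    """A datafile needs a fresh backup if an unrecoverable operation
    occurred after the datafile was last backed up.
    unrecoverable_time is None when no unrecoverable operation
    has been performed against the datafile."""
    if unrecoverable_time is None:
        return False
    return unrecoverable_time > last_backup_time

# Example: an unrecoverable load ran after the last backup,
# so the datafile must be backed up again.
assert backup_required(datetime(1999, 6, 23, 10, 0),
                       datetime(1999, 6, 22, 21, 0))
```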

Refreshing the Standby Database Control File

The following steps describe how to refresh the standby database control file, that is, how to re-create it from the current primary database control file. Refresh the standby database control file after making major structural changes to the primary database, such as adding or dropping files.

To refresh the standby database control file:

  1. Start a SQL*Plus session on the standby instance and issue the CANCEL statement on the standby database to halt its recovery process.

    SQL> RECOVER CANCEL  # for manual recovery mode
    SQL> RECOVER MANAGED STANDBY DATABASE CANCEL   # for managed recovery mode
    
    
    
  2. Shut down the standby instances:

    SQL> SHUTDOWN IMMEDIATE
    
    
    
  3. Start a SQL*Plus session on the production instance and create the control file for the standby database:

    SQL> ALTER DATABASE CREATE STANDBY CONTROLFILE AS 'filename';
    
    
    
  4. Transfer the standby control file and archived log files to the standby site using an operating system utility appropriate for binary files.

  5. Connect to the standby instance and mount (but do not open) the standby database:

    SQL> ALTER DATABASE MOUNT STANDBY DATABASE;
    
    
    
  6. Restart the recovery process on the standby database:

    SQL> RECOVER STANDBY DATABASE  # recovers using location for logs 
                                   # specified in initialization parameter file
    SQL> RECOVER FROM 'location' STANDBY DATABASE # recovers from specified
                                                  # location
    

Clearing Online Redo Logs

After creating the standby database, you can clear standby database online redo logs to optimize performance by issuing the following statement, where integer refers to the number of the log group:

ALTER DATABASE CLEAR LOGFILE GROUP integer;

This statement optimizes standby activation because Oracle no longer needs to zero the logs at activation. Zeroing involves writing zeros to the entire contents of the redo log and then writing a new header so that the redo log looks as it did when it was created. Zeroing occurs during a RESETLOGS operation.

If you clear the logs manually, Oracle detects at activation that the logs already contain zeros and skips the zeroing step. This optimization is important because writing zeros into all of the online logs can take a long time. If you prefer not to perform this operation during maintenance, Oracle clears the online logs automatically during activation.

Backing Up the Standby Database

If necessary, you can back up your standby database, but not while the database is in manual or managed recovery mode. You must take the standby database out of recovery mode, make the backups, and then resume recovery. You can make the backups when the database is shut down or in read-only mode.

The following table lists some advantages and disadvantages of these methods:

Method: Shut down the standby database.
  Advantages: You can back up the database after performing other maintenance operations that require database shutdown.
  Disadvantages: The primary database may create a gap sequence because the standby database is not receiving archived logs. If a gap sequence is created, you must perform manual recovery before you can place the standby database in managed recovery mode.

Method: Place the standby database in read-only mode.
  Advantages: The standby site continues to receive archived logs from the primary database, so no gap sequence is generated.
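A gap sequence is simply a run of log sequence numbers missing between the archived logs the standby site has received. Given a list of received sequence numbers (for example, gathered from V$ARCHIVED_LOG), detecting gaps can be sketched as follows; this is an illustration, not an Oracle utility:

```python
def find_gaps(received):
    """Return (low, high) ranges of sequence numbers missing between
    the lowest and highest received archived log sequence numbers."""
    present = sorted(set(received))
    gaps = []
    for prev, cur in zip(present, present[1:]):
        if cur - prev > 1:                  # hole between consecutive logs
            gaps.append((prev + 1, cur - 1))
    return gaps
```

For example, if the standby site received sequences 945, 946, 949, and 950 while it was shut down, find_gaps reports that sequences 947 through 948 must be recovered manually.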

 

To back up tablespaces on a standby database when it is in read-only mode:

  1. Start a SQL*Plus session on the standby database and take the database out of managed or manual recovery mode:

    RECOVER MANAGED STANDBY DATABASE CANCEL    # for managed recovery
    RECOVER CANCEL                             # for manual recovery
    
    
    
  2. Open the database in read-only mode:

    ALTER DATABASE OPEN READ ONLY
    
    
    
  3. Take backups of some tablespaces using operating system utilities. You should not back up the standby control file.

    Minimize the time that the database is out of recovery mode. For example, to back up datafiles tbs11.f, tbs12.f, and tbs13.f in tablespace TBS_1 on UNIX you might enter:

    % cp /disk1/oracle/dbs/tbs11.f /disk2/backup/tbs11.bk
    % cp /disk1/oracle/dbs/tbs12.f /disk2/backup/tbs12.bk
    % cp /disk1/oracle/dbs/tbs13.f /disk2/backup/tbs13.bk
    
    
    
  4. Terminate all active user sessions on the standby database.

  5. Place the database in manual or managed recovery mode:

    RECOVER MANAGED STANDBY DATABASE     # for managed recovery		
    RECOVER STANDBY DATABASE             # for manual recovery
    
    
    
  6. Back up the control file on the primary database using an operating system utility. You must back up the primary database control file, not the standby database control file.

  7. Repeat the preceding steps until you have backed up each tablespace in the database.

To back up tablespaces on a standby database when it is shut down:

  1. Start a SQL*Plus session on the standby database and take the database out of managed or manual recovery mode:

    RECOVER MANAGED STANDBY DATABASE CANCEL    # for managed recovery
    RECOVER CANCEL                             # for manual recovery
    
    
    
  2. Shut down the database:

    SHUTDOWN IMMEDIATE
    
    
  3. Make cold backups of some tablespaces using operating system utilities. Minimize the time that the database is down. For example, to back up datafiles tbs11.f, tbs12.f, and tbs13.f in tablespace TBS_1 on UNIX you might enter:

    % cp /disk1/oracle/dbs/tbs11.f /disk2/backup/tbs11.bk
    % cp /disk1/oracle/dbs/tbs12.f /disk2/backup/tbs12.bk
    % cp /disk1/oracle/dbs/tbs13.f /disk2/backup/tbs13.bk
    
    
    
  4. Use SQL*Plus to start the Oracle instance at the standby database without mounting it, specifying a parameter file if necessary:

    STARTUP NOMOUNT pfile = initSTANDBY.ora
    
     
    
  5. Mount the database:

    ALTER DATABASE MOUNT STANDBY DATABASE
    
    
    
  6. Place the database in manual or managed recovery mode:

    RECOVER MANAGED STANDBY DATABASE     # for managed recovery		
    RECOVER STANDBY DATABASE             # for manual recovery
    
    
    
  7. Repeat the preceding steps until you have backed up each tablespace in the database.


Copyright © 1999 Oracle Corporation.

All Rights Reserved.
