Oracle® Data Guard Concepts and Administration
10g Release 1 (10.1)
Part Number B10823-01
This chapter describes how to manage logical standby databases. This chapter contains the following topics:
The topics in this chapter describe how to use SQL statements, initialization parameters, views, and the
DBMS_LOGSTDBY PL/SQL package to manage logical standby databases.
See Oracle Data Guard Broker to use the Data Guard broker to automate the management tasks described in this chapter.
The DBMS_LOGSTDBY PL/SQL package provides procedures to help you configure and manage SQL Apply on a logical standby database. You can use it to perform management tasks such as the following:
Table 9-1 summarizes the procedures of the DBMS_LOGSTDBY PL/SQL package.
APPLY_SET: Enables you to set the values of specific initialization parameters to configure and maintain SQL Apply.
APPLY_UNSET: Resets the value of specific initialization parameters to the system default values.
BUILD: Ensures supplemental logging is enabled correctly and builds the LogMiner dictionary.
INSTANTIATE_TABLE: Creates and populates a table in the standby database from a corresponding table in the primary database.
SKIP: Enables you to specify which database operations done on the primary database will not be applied to the logical standby database.
SKIP_ERROR: Specifies criteria to follow if an error is encountered. You can stop SQL Apply or ignore the error.
SKIP_TRANSACTION: Specifies transaction identification information to skip (ignore) while applying specific transactions to the logical standby database. This subprogram also allows alternate statements to be executed.
UNSKIP: Modifies the options set in the SKIP procedure.
UNSKIP_ERROR: Modifies the options set in the SKIP_ERROR procedure.
UNSKIP_TRANSACTION: Modifies the options set in the SKIP_TRANSACTION procedure.
See PL/SQL Packages and Types Reference for complete information about the DBMS_LOGSTDBY package.
The ALTER DATABASE GUARD statement controls user access to tables in a logical standby database. The database guard is set to ALL by default on a logical standby database.
The ALTER DATABASE GUARD statement allows the following keywords:

ALL to prevent all users, other than SYS, from making changes to any data in the logical standby database.

STANDBY to prevent all users, other than SYS, from making DML and DDL changes to any table or sequence being maintained through SQL Apply.

NONE if you want typical security for all data in the database.
For example, use the following statement to enable users to modify tables not maintained by SQL Apply:
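A sketch of such a statement; this assumes the intent is the STANDBY setting, which protects only objects maintained by SQL Apply:

```sql
-- Let users modify tables that SQL Apply does not maintain
SQL> ALTER DATABASE GUARD STANDBY;
```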
Privileged users can temporarily turn the database guard off and on for the current session using the ALTER SESSION DISABLE GUARD and ALTER SESSION ENABLE GUARD statements, respectively. These statements replace the DBMS_LOGSTDBY.GUARD_BYPASS PL/SQL procedure, which performed the same function in Oracle9i. The ALTER SESSION [ENABLE|DISABLE] GUARD statement is useful when you want to temporarily disable the database guard to make changes to the database, as described in Section 9.1.4.
Be careful not to let the primary and logical standby databases diverge while the database guard is disabled.
Periodically, you need to remove archived redo log files that are no longer needed by SQL Apply to reclaim disk space. Perform the following steps to remove archived redo log files from the file system:
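The purge step itself can be sketched as follows, assuming the DBMS_LOGSTDBY.PURGE_SESSION procedure is used to identify the files:

```sql
-- Mark archived redo log files that SQL Apply no longer needs
SQL> EXECUTE DBMS_LOGSTDBY.PURGE_SESSION;
```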
This statement also updates the DBA_LOGMNR_PURGED_LOG view, which displays the archived redo log files that are no longer needed. Query the DBA_LOGMNR_PURGED_LOG view to list the archived redo log files that can be removed:
SQL> SELECT * FROM DBA_LOGMNR_PURGED_LOG;

FILE_NAME
------------------------------------
/boston/arc_dest/arc_1_40_509538672.log
/boston/arc_dest/arc_1_41_509538672.log
/boston/arc_dest/arc_1_42_509538672.log
/boston/arc_dest/arc_1_43_509538672.log
/boston/arc_dest/arc_1_44_509538672.log
/boston/arc_dest/arc_1_45_509538672.log
/boston/arc_dest/arc_1_46_509538672.log
/boston/arc_dest/arc_1_47_509538672.log
You can override the database guard to allow changes to the logical standby database by executing the ALTER SESSION DISABLE GUARD statement. Privileged users can issue this statement to turn the database guard off for the current session.
The following sections provide some examples. The discussions in these sections assume that the database guard is set to ALL or STANDBY.
This section describes how to add an index to a table maintained through SQL Apply.
By default, only accounts with SYS privileges can modify the database while the database guard is set to STANDBY. If you are logged in as SYSTEM or another privileged account, you will not be able to issue DDL statements on the logical standby database without first bypassing the database guard for the session.
The following example shows how to stop SQL Apply, bypass the database guard, execute SQL statements on the logical standby database, and then reenable the guard:
SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
Database altered.
SQL> ALTER SESSION DISABLE GUARD;
PL/SQL procedure successfully completed.
SQL> ALTER TABLE SCOTT.EMP ADD CONSTRAINT EMPID UNIQUE (EMPNO);
Table altered.
SQL> ALTER SESSION ENABLE GUARD;
PL/SQL procedure successfully completed.
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY;
Database altered.
This sample procedure can be used to execute other DDL statements. Oracle recommends that you do not perform DML operations while the database guard bypass is enabled; doing so introduces deviations between the primary and standby databases that make it impossible for the logical standby database to be maintained. It is unlikely that you will be able to modify rows in a table in such a way that the logical standby database can incrementally maintain the rows.
Sometimes, a reporting application must collect summary results and store them temporarily or track the number of times a report was run. Although the main purpose of an application is to perform reporting activities, the application might need to issue DML (insert, update, and delete) operations on a logical standby database. It might even need to create or drop tables.
You can set up the database guard to allow reporting operations to modify data as long as the data is not being maintained through SQL Apply. To do this, you must:
Specify the set of tables to which the reporting application can write by using the DBMS_LOGSTDBY.SKIP procedure. Skipped tables are not maintained through SQL Apply.
In the following example, it is assumed that the tables to which the report is writing are also on the primary database.
The example stops SQL Apply, skips the tables, and then restarts SQL Apply so that changes can be applied to the logical standby database. The reporting application will be able to write to the skipped tables in MYSCHEMA; they will no longer be maintained through SQL Apply.
SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
Database altered.
SQL> EXECUTE DBMS_LOGSTDBY.SKIP('SCHEMA_DDL','MYSCHEMA','MYTABLES%');
PL/SQL procedure successfully completed.
SQL> EXECUTE DBMS_LOGSTDBY.SKIP('DML','MYSCHEMA','MYTABLES%');
PL/SQL procedure successfully completed.
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY;
Database altered.
The example then queries the DBA_LOGSTDBY_PARAMETERS view to verify the logical standby database is updated. Verification can take a while, so you might need to repeat the query until no rows are returned, as shown in the following example:
Finally, the example sets the database guard to allow updates to the tables.
You do not need to take any action to enable or handle triggers and constraints on logical standby databases. Triggers and constraints are enabled on the standby database but they are not executed. The following describes how triggers and constraints are handled on a logical standby database:
For triggers and constraints on tables maintained by SQL Apply: constraints are evaluated on the primary database and do not need to be re-evaluated on the logical standby database, and the effects of triggers executed on the primary database are logged and applied on the standby database.

For triggers and constraints on tables not maintained by SQL Apply: constraints are evaluated and triggers are fired as usual.
If only a subset of activity on a primary database is of interest on the standby database, use the DBMS_LOGSTDBY.SKIP procedure to define filters that prevent SQL Apply from issuing the SQL statements on the logical standby database. (See the discussion of SQL statements that are skipped automatically.)
SQL Apply automatically filters out unsupported datatypes and statements, and continues applying the remaining SQL statements. However, you must use the DBMS_LOGSTDBY.SKIP procedure to skip tables that you do not want to apply to the logical standby database. The following list shows typical examples of the types of SQL statements that can be filtered or skipped so that they are not applied on the logical standby database:
DROP INDEX DDL statements
Example 9-1 demonstrates how to skip all SQL statements that reference the
EMP table in a logical standby database.
SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
SQL> EXECUTE DBMS_LOGSTDBY.SKIP('SCHEMA_DDL', 'SCOTT', 'EMP', NULL);
SQL> EXECUTE DBMS_LOGSTDBY.SKIP('DML', 'SCOTT', 'EMP', NULL);
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY;
In addition to skipping DML and DDL statements for schema and non-schema operations, you can skip specific DML and DDL operations. Example 9-2 shows how to skip CREATE TABLESPACE and ALTER TABLESPACE for non-schema DDL operations.
SQL> EXEC DBMS_LOGSTDBY.SKIP('CREATE TABLESPACE', NULL, NULL, NULL);
SQL> EXEC DBMS_LOGSTDBY.SKIP('ALTER TABLESPACE', NULL, NULL, NULL);
SQL> COLUMN ERROR FORMAT a5;
SQL> COLUMN STATEMENT_OPT FORMAT a20;
SQL> COLUMN OWNER FORMAT a10
SQL> COLUMN NAME FORMAT a15;
SQL> COLUMN PROC FORMAT a20;
SQL> SELECT * FROM DBA_LOGSTDBY_SKIP;

ERROR STATEMENT_OPT        OWNER      NAME            PROC
----- -------------------- ---------- --------------- --------------------
N     CREATE TABLESPACE
N     ALTER TABLESPACE
Typically, you use table instantiation to re-create a table after an unrecoverable operation. You can also use the procedure to enable SQL Apply on a table that was formerly skipped.
The following list and Example 9-3 show how to re-create a table and resume SQL Apply on that table:
If any operations are being skipped for that table, resume application of each operation that is currently being skipped by using the
DBMS_LOGSTDBY.UNSKIP procedure. If multiple filters were created on the table, you will need to execute the procedure multiple times.
Re-create the table using the DBMS_LOGSTDBY.INSTANTIATE_TABLE procedure. In addition to creating a table, this procedure also imports the data from the primary table using a database link.
Before accessing data in the newly added table, archive the current online redo log file on the primary database and ensure the archived redo log file is applied to the logical standby database.
Example 9-3 demonstrates how to add the
EMP table to a logical standby database.
SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
SQL> SELECT * FROM DBA_LOGSTDBY_SKIP;

ERROR STATEMENT_OPT        OWNER      NAME            PROC
---------------------------------------------------------------------
N     SCHEMA_DDL           SCOTT      EMP
N     DML                  SCOTT      EMP

SQL> EXECUTE DBMS_LOGSTDBY.UNSKIP('DML','SCOTT','EMP');
SQL> EXECUTE DBMS_LOGSTDBY.UNSKIP('SCHEMA_DDL','SCOTT','EMP');
SQL> EXECUTE DBMS_LOGSTDBY.INSTANTIATE_TABLE('SCOTT','EMP','DBLINK');
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY;
Log on to the primary database and issue the following statements:
SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;
SQL> SELECT FIRST_CHANGE# FROM V$LOG WHERE STATUS = 'CURRENT';

When the APPLIED_SCN value in the DBA_LOGSTDBY_PROGRESS view is equal to or greater than (FIRST_CHANGE# - 1), where FIRST_CHANGE# is the value selected from the query of the V$LOG view, the database is consistent and you can safely run reports again.
When you query the DBA_LOGSTDBY_EVENTS view, it displays a table of events that records activity about SQL Apply. In particular, DDL execution or anything that generates an error is recorded in the events table. You can control what and how much activity is recorded in the events table. By default, 100 records are stored in this table, but you can increase the limit. For example:
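A sketch of such a call, assuming the MAX_EVENTS_RECORDED apply parameter; the value shown is illustrative:

```sql
-- Retain up to 200,000 rows in the events table
SQL> EXECUTE DBMS_LOGSTDBY.APPLY_SET('MAX_EVENTS_RECORDED', 200000);
```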
Additionally, you can indicate what type of events you want recorded. By default, everything is recorded in the table. However, you can set the RECORD_APPLIED_DDL parameter to FALSE to avoid recording these events.
Errors that cause SQL Apply to stop are always recorded in the events table (unless there is insufficient space in the system tablespace). These events are always put into the
ALERT.LOG file as well, with the keyword '
LOGSTDBY' included in the text. When querying the view, select the columns in order by
CURRENT_SCN. This ordering ensures a shutdown failure appears last in the view.
SQL Apply uses a collection of parallel execution servers and background processes that apply changes from the primary database to the logical standby database. Figure 9-1 shows the flow of information and the role that each process performs.
In Figure 9-1:
The READER process reads redo records from the archived redo log files.

The PREPARER processes do the heavy computing required to convert the block changes into table changes, or logical change records (LCRs). At this point, the LCRs do not represent any specific transactions.

The BUILDER process assembles completed transactions from the individual LCRs.

The ANALYZER process examines the records, possibly eliminating transactions and identifying dependencies between the different transactions.

The COORDINATOR process assigns transactions to the appliers, monitors dependencies between transactions, and authorizes the commitment of changes to the logical standby database.

The APPLIER processes apply the LCRs to the database.
You can query the
V$LOGSTDBY view to see what each process is currently doing; the
TYPE column describes the task being performed. When querying the
V$LOGSTDBY view, pay special attention to the
HIGH_SCN column. This is an activity indicator. As long as it is changing each time you query the
V$LOGSTDBY view, progress is being made. The
STATUS column gives a text description of the current activity. For example:
SQL> COLUMN NAME FORMAT A30
SQL> COLUMN VALUE FORMAT A30
SQL> SELECT NAME, VALUE FROM V$LOGSTDBY_STATS WHERE NAME = 'coordinator state';

NAME                           VALUE
------------------------------ ------------------------------
coordinator state              APPLYING

SQL> COLUMN STATUS FORMAT A50
SQL> COLUMN TYPE FORMAT A12
SQL> SELECT TYPE, HIGH_SCN, STATUS FROM V$LOGSTDBY;

TYPE           HIGH_SCN STATUS
------------ ---------- --------------------------------------------------
COORDINATOR             ORA-16117: processing
READER                  ORA-16127: stalled waiting for additional
                        transactions to be applied
BUILDER          191896 ORA-16116: no work available
PREPARER         191902 ORA-16117: processing
ANALYZER         191820 ORA-16120: dependencies being computed for
                        transaction at SCN 0x0000.0002ed4e
APPLIER          191209 ORA-16124: transaction 1 16 1598 is waiting on
                        another transaction
.
.
.
Another place to get information about current activity is the
V$LOGSTDBY_STATS view, which provides state and status information. All of the options for the
DBMS_LOGSTDBY.APPLY_SET procedure have default values, and those values (default or set) can be seen in the
V$LOGSTDBY_STATS view. In addition, a count of the number of transactions applied or transactions ready will tell you if transactions are being applied as fast as they are being read. Other statistics include information on all parts of the system. For example:
SQL> COLUMN NAME FORMAT A35
SQL> COLUMN VALUE FORMAT A35
SQL> SELECT NAME, VALUE FROM V$LOGSTDBY_STATS
  2> WHERE NAME LIKE 'coordinator%' or NAME LIKE 'transactions%';

NAME                                VALUE
----------------------------------- -----------------------------------
coordinator state                   APPLYING
transactions ready                  7821
transactions applied                7802
coordinator uptime                  73
This query shows how long SQL Apply has been running and how many transactions have been applied in that time. It also shows how many transactions are available to be applied.
By default, Data Guard waits for the full archived redo log file to arrive on the standby database before recovering it to the standby database. However, if you have configured a standby redo log on the standby database, you can optionally enable real-time apply, which recovers redo data from the standby redo log files as they are being filled up by the remote file server (RFS) process. With real-time apply enabled, SQL Apply recovers redo data from standby redo log files at the same time the log files are being written to, as opposed to when a log switch occurs. Immediately applying standby redo log files in this manner keeps the logical standby database closely caught up with the primary database, without requiring the standby redo log files to be archived at the standby site. This can result in quicker switchovers and failovers.
To start real-time apply on the logical standby database, issue the following statement:
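A sketch of the statement, assuming the IMMEDIATE keyword that enables real-time apply for SQL Apply:

```sql
-- Requires standby redo log files on the standby database
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
```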
Transaction data in the redo stream can span multiple redo log files. For this reason, logical standby databases use an SCN range of redo data, rather than individual archived redo log files to report the progress of SQL Apply.
The DBA_LOGSTDBY_PROGRESS view displays APPLIED_SCN, NEWEST_SCN, and READ_SCN information. The APPLIED_SCN indicates that committed transactions less than or equal to that SCN were applied. The NEWEST_SCN is the maximum SCN to which data could be applied if no more redo data is received. This is usually the maximum SCN shown in the DBA_LOGSTDBY_LOG view when there are no gaps in the list.
Archived redo log files with a
NEXT_CHANGE# value that is less than the
READ_SCN value are no longer needed. The information in those log files was applied or persistently stored in the database. The time values associated with these SCN values are only estimates based on log times. They are not meant to be accurate times of when those SCN values were written on the primary database.
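A sketch of a progress query against this view:

```sql
-- Compare how far SQL Apply has read versus applied, in SCN terms
SQL> SELECT APPLIED_SCN, NEWEST_SCN, READ_SCN FROM DBA_LOGSTDBY_PROGRESS;
```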
To see which archived redo log files were applied or were not applied, issue the following query:
SQL> ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MON-YY HH24:MI:SS';
Session altered.
SQL> SELECT SEQUENCE#, FIRST_TIME, APPLIED
  2  FROM DBA_LOGSTDBY_LOG
  3  ORDER BY SEQUENCE#;

 SEQUENCE# FIRST_TIME         APPLIED
---------- ------------------ -------
        24 23-JUL-02 18:19:05 YES
        25 23-JUL-02 18:19:48 YES
        26 23-JUL-02 18:19:51 YES
        27 23-JUL-02 18:19:54 YES
        28 23-JUL-02 18:19:59 YES
        29 23-JUL-02 18:20:03 YES
        30 23-JUL-02 18:20:13 YES
        31 23-JUL-02 18:20:18 YES
        32 23-JUL-02 18:20:21 YES
        33 23-JUL-02 18:32:11 YES
        34 23-JUL-02 18:32:19 CURRENT
        35 23-JUL-02 19:13:20 CURRENT
        36 23-JUL-02 19:13:43 CURRENT
        37 23-JUL-02 19:13:46 CURRENT
        38 23-JUL-02 19:13:50 CURRENT
        39 23-JUL-02 19:13:54 CURRENT
        40 23-JUL-02 19:14:01 CURRENT
        41 23-JUL-02 19:15:11 NO
        42 23-JUL-02 19:15:54 NO
19 rows selected.
Logical standby databases maintain user tables, sequences, and jobs. To maintain other objects, you must reissue the DDL statements seen in the redo data stream. Tables in the SYS schema are never maintained, because only Oracle metadata is maintained in the SYS schema.
If SQL Apply fails, an error is recorded in the
DBA_LOGSTDBY_EVENTS table. The following sections demonstrate how to recover from two such errors.
DDL statements are executed the same way on the primary database and the logical standby database. If the underlying file structure is the same on both databases, the DDL will execute on the standby database as expected. However, if the structure of the file system on the standby system differs from the file system on the primary system, an error is likely to result, because the DB_FILE_NAME_CONVERT initialization parameter will not convert the filenames of one or more sets of datafiles on the primary database to valid filenames on the logical standby database.
If an error was caused by a DDL transaction that contained a file specification that does not match in the logical standby database environment, perform the following steps to fix the problem:
Use the ALTER SESSION DISABLE GUARD statement to bypass the database guard so you can make modifications to the logical standby database:
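The statement for this step can be sketched as:

```sql
SQL> ALTER SESSION DISABLE GUARD;
```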
In some situations, the problem that caused the transaction to fail can be corrected and SQL Apply restarted without skipping the transaction. An example of this might be when available space is exhausted. (Do not let the primary and logical standby databases diverge when skipping transactions. If possible, you should manually execute a compensating transaction in place of the skipped transaction.)
The following example shows SQL Apply stopping, the error being corrected, and then restarting SQL Apply:
SQL> SET LONG 1000
SQL> ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MON-YY HH24:MI:SS';
Session altered.
SQL> SELECT EVENT_TIME, COMMIT_SCN, EVENT, STATUS FROM DBA_LOGSTDBY_EVENTS;

EVENT_TIME         COMMIT_SCN
------------------ ---------------
EVENT
-------------------------------------------------------------------------------
STATUS
-------------------------------------------------------------------------------
22-OCT-03 15:47:58
ORA-16111: log mining and apply setting up

22-OCT-03 15:48:04          209627
insert into "SCOTT"."EMP"
values
   "EMPNO" = 7900,
   "ENAME" = 'ADAMS',
   "JOB" = 'CLERK',
   "MGR" IS NULL,
   "HIREDATE" = TO_DATE('22-OCT-03', 'DD-MON-RR'),
   "SAL" = 950,
   "COMM" IS NULL,
   "DEPTNO" IS NULL
ORA-01653: unable to extend table SCOTT.EMP by %d in tablespace
In the example, the
ORA-01653 message indicates that the tablespace was full and unable to extend itself. To correct the problem, add a new datafile to the tablespace. For example:
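A sketch of the corrective statement; the tablespace name, filename, and size are illustrative:

```sql
-- Add space so the failed transaction can succeed when it is re-executed
SQL> ALTER TABLESPACE USER_DATA ADD DATAFILE '/disk1/oracle/dbs/user_data02.dbf' SIZE 100M;
```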
Then, restart SQL Apply:
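Restarting SQL Apply uses the standard statement:

```sql
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY;
```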
When SQL Apply restarts, the transaction that failed will be re-executed and applied to the logical standby database.
Do not use the
SKIP_TRANSACTION procedure to filter DML failures. Not only is the DML that is seen in the events table skipped, but so is all the DML associated with the transaction. Thus, multiple tables might be damaged by such an action.
DML failures usually indicate a problem with a specific table. For example, assume the failure is an out-of-storage error that you cannot resolve immediately. The following steps demonstrate one way to respond to this problem.
SQL> EXECUTE DBMS_LOGSTDBY.SKIP('DML','SCOTT','EMP');
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY;
From this point on, DML activity for the SCOTT.EMP table will not be applied. After you correct the storage problem, you can fix the table, provided you set up a database link to the primary database that has administrator privileges to run procedures in the DBMS_LOGSTDBY package. Using the database link, drop the local SCOTT.EMP table, re-create it, and pull the data over to the standby database. Because the SCOTT.EMP table will contain records only as of when the INSTANTIATE_TABLE procedure was performed, it is possible for the SCOTT.EMP table to contain records for a department that is not yet in the SCOTT.DEPT table.
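The repair can be sketched with the instantiation procedure; the database link name is illustrative:

```sql
-- Stop apply, resume maintenance of the table, then rebuild it from the primary
SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
SQL> EXECUTE DBMS_LOGSTDBY.UNSKIP('DML','SCOTT','EMP');
SQL> EXECUTE DBMS_LOGSTDBY.INSTANTIATE_TABLE('SCOTT','EMP','DBLINK');
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY;
```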
Materialized views refreshed on the primary database are not automatically refreshed on a logical standby database. To refresh materialized views on a logical standby database, use the ALTER SESSION DISABLE GUARD and ALTER SESSION ENABLE GUARD statements. For example:
SQL> ALTER SESSION DISABLE GUARD;
SQL> EXECUTE DBMS_MVIEW.REFRESH('BMVIEW', 'F', '', TRUE, FALSE, 0, 0, 0, FALSE);
SQL> ALTER SESSION ENABLE GUARD;
See PL/SQL Packages and Types Reference for more information about the DBMS_MVIEW package.
If you are using the DBMS_LOGSTDBY.APPLY_SET procedure but are not using the FULL option (the default) for the TRANSACTION_CONSISTENCY parameter, you should stop SQL Apply before refreshing materialized views on the logical standby database.
Using a logical standby database, you can upgrade Oracle database software and patch sets with almost no downtime. This section provides a conceptual overview of the upgrade process. For complete database upgrade information, see the ReadMe file for the applicable Oracle Database 10g patchset release.
If you cannot use a logical standby database because of the datatypes in your application, then perform the upgrade as documented in Oracle Database Upgrade Guide.
Figure 9-2 shows the Data Guard configuration before the upgrade begins, with the primary and logical standby databases both running the same Oracle software version.
During the upgrade process:
To begin the upgrade, stop SQL Apply and upgrade the Oracle database software on the logical standby database to version n+1.
For more information about upgrading the Oracle database software version, see the ReadMe file for the applicable Oracle Database 10g patchset release.
Figure 9-3 shows the primary database running version n, and the logical standby database running version n+1. During the upgrade, redo data accumulates on the primary system.
Restart SQL Apply and operate with version n on the primary database and version n+1 on the standby database. The Data Guard configuration can run the mixed versions shown in Figure 9-4 for an arbitrary period while you verify the upgraded Oracle software version is running properly in the production environment.
The redo data that was accumulating on the primary system is automatically transmitted and applied on the newly upgraded logical standby database.
When you are satisfied that the upgraded software is operating properly, you can reverse the database roles by performing a switchover (see Section 7.3.1). This may take only a few seconds. Activate the user applications and services on the new primary database. If application service levels degrade for some reason, you can open the previous primary database again, switch users back, and undo the previous steps.
After the switchover, you cannot send redo data from the new primary database (B) that is running the new database software version to the new standby database (A) that is running an older software version. This means that:
Figure 9-5 shows the former standby database (version n+1) is now the primary database, and the former primary database (version n) is now the standby database. The users are connected to the new primary database.
Upgrade the new logical standby database.
For more information about upgrading the Oracle database software version, see the ReadMe file for the applicable Oracle Database 10g patchset release. Figure 9-6 shows the system after both databases were upgraded to version n+1.
When you start SQL Apply, the redo that was accumulating on the primary database is sent to the logical standby database. The primary database is protected against data loss once the redo data is available on the standby database.
Raise the compatibility level of both databases by setting the
COMPATIBLE initialization parameter. Set the
COMPATIBLE parameter on the standby database before you set it on the primary database. See Chapter 11 for more information about the
COMPATIBLE initialization parameter.
Optionally, perform another switchover of the databases so the original primary database is once again running in the production database role (as shown in Figure 9-2).
Data Guard allows recovery on a logical standby database to continue after the primary database was opened with the
RESETLOGS option. When an
ALTER DATABASE OPEN RESETLOGS statement is issued on the primary database, the incarnation of the database changes, creating a new branch of redo data.
When a logical standby database receives a new branch of redo data, SQL Apply stops and the logical standby process (LSP) on the standby database terminates. For logical standby databases, no manual intervention is required if the standby database did not apply redo data past the new resetlogs SCN (past the start of the new branch of redo data). The following table describes how to resynchronize the standby database with the primary database branch:
If the standby database has not applied redo data past the new resetlogs SCN (past the start of the new branch of redo data):
Then: No manual intervention is necessary. SQL Apply will automatically take the new branch of redo data.
Perform these steps: Restart SQL Apply to continue applying redo data. The LSP automatically resynchronizes the standby database with the new branch of redo data.

If the standby database has applied redo data past the new resetlogs SCN (past the start of the new branch of redo data) and Flashback Database is enabled on the standby database:
Then: The standby database is recovered in the future of the new branch of redo data.
Perform these steps: The LSP automatically resynchronizes the standby database with the new branch.

If the standby database has applied redo data past the new resetlogs SCN (past the start of the new branch of redo data) and Flashback Database is not enabled on the standby database:
Then: The primary database has diverged from the standby on the indicated primary database branch.
Perform these steps: Re-create the logical standby database following the procedures in Chapter 4.

If the standby database is missing intervening archived redo log files from the new branch of redo data:
Then: The LSP cannot continue until the missing log files are retrieved.
Perform these steps: Locate and register missing archived redo log files from each branch.

If the standby database is missing archived redo log files from the end of the previous branch of redo data:
Then: The LSP cannot continue until the missing log files are retrieved.
Perform these steps: Locate and register missing archived redo log files from the previous branch.
See Oracle Database Backup and Recovery Advanced User's Guide for more information about database incarnations, recovering through an
OPEN RESETLOGS operation, and Flashback Database.
The following sections describe actions you can take to increase system performance.
On the primary database, if a table does not have a primary key or a unique index, and you know the rows are indeed unique because you have ensured this some other way, then create a primary key RELY constraint. On the logical standby database, create an index on the columns that make up the primary key. The following query generates a list of tables that have no index SQL Apply can use to uniquely identify rows. By creating an index on these tables, performance can be improved significantly.
SQL> SELECT OWNER, TABLE_NAME FROM DBA_TABLES
  2> WHERE OWNER NOT IN ('SYS','SYSTEM','OUTLN','DBSNMP')
  3> MINUS
  4> SELECT DISTINCT TABLE_OWNER, TABLE_NAME FROM DBA_INDEXES
  5> WHERE INDEX_TYPE NOT LIKE ('FUNCTION-BASED%')
  6> MINUS
  7> SELECT OWNER, TABLE_NAME FROM DBA_LOGSTDBY_UNSUPPORTED;
The following example shows the creation of an index for the
EMP table. This should be done for all the tables returned by the previous query:
SQL> ALTER SESSION DISABLE GUARD;
SQL> CREATE INDEX EMPI ON EMP (EMPNO);
SQL> ALTER SESSION ENABLE GUARD;
Statistics should be gathered on the standby database because the cost-based optimizer (CBO) uses them to determine the optimal query execution path. New statistics should be gathered after the data or structure of a schema object is modified in ways that make the previous statistics inaccurate. For example, after inserting or deleting a significant number of rows into a table, collect new statistics on the number of rows.
Statistics should be gathered on the standby database because DML and DDL operations on the primary database are executed as a function of the workload. While the standby database is logically equivalent to the primary database, SQL Apply might execute the workload in a different way. This is why using Statspack on the logical standby database and querying the V$SYSSTAT view can be useful in determining which tables are consuming the most resources and which tables are being scanned most often.
Use the TRANSACTION_CONSISTENCY parameter of the DBMS_LOGSTDBY.APPLY_SET procedure to control how transactions are applied to the logical standby database. The default setting is FULL, which applies transactions to the logical standby database in the same order in which they were committed on the primary database.
Regardless of the consistency level chosen, the data in the logical standby database will be transactionally consistent with the primary database when SQL Apply is stopped normally.
Specify one of the following values:
FULL: Transactions are applied to the logical standby database in the exact order in which they were committed on the primary database. This option results in the lowest performance, but it is recommended when the logical standby database is used for generic reporting applications.
READ_ONLY: Transactions are committed out of order, but SQL SELECT statements executed on the standby database always return consistent results based on the last consistent SCN known to SQL Apply. The READ_ONLY option provides better performance than the FULL option, and SQL SELECT statements return read-consistent results. This is particularly beneficial when you are using the logical standby database to generate reports. The READ_ONLY option is recommended when the logical standby database is used for read-only reporting.
NONE: Transactions are applied out of order from how they were committed on the primary database. This results in the best performance of the three values. The NONE option is useful when the logical standby database is in catch-up mode after a temporary loss of network connectivity, or anytime there are a number of log files to apply. This option also works well as long as applications that read the logical standby database make no assumptions about transaction order.
For example, the timeline in Figure 9-7 shows Transaction 2 starts a transaction around the time when Transaction 1 ends, and Transaction 2 commits soon after Transaction 1 commits. If
TRANSACTION_CONSISTENCY is set to:
FULL--SQL Apply guarantees Transaction 1 commits before Transaction 2
READ_ONLY--SQL Apply guarantees either:
NONE--SQL Apply does not guarantee order; it is possible that Transaction 2 will commit before Transaction 1.
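Setting the parameter can be sketched as follows; SQL Apply must be stopped before apply parameters are changed:

```sql
SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
SQL> EXECUTE DBMS_LOGSTDBY.APPLY_SET('TRANSACTION_CONSISTENCY', 'READ_ONLY');
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY;
```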
If you plan to use the logical standby database:
See PL/SQL Packages and Types Reference for more information about the DBMS_LOGSTDBY.APPLY_SET procedure.
SQL Apply uses parallel execution processes and parallel apply algorithms to maintain a good level of SQL Apply performance. You can adjust the maximum number of parallel execution processes for an instance by setting the PARALLEL_MAX_SERVERS initialization parameter. The default value for this parameter is derived from the values of other initialization parameters, including PARALLEL_ADAPTIVE_MULTI_USER. This parameter must not be set to a value less than 5 on a logical standby database. However, for best results, set PARALLEL_MAX_SERVERS to a minimum of 9.
You can use the
MAX_SERVERS parameter of the
DBMS_LOGSTDBY.APPLY_SET procedure to limit the number of parallel servers used by SQL Apply. The default value of this parameter is set to 9. If you set this parameter explicitly, do not set it to a value less than 5, or greater than the value of the
PARALLEL_MAX_SERVERS initialization parameter.
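A sketch of limiting the number of parallel servers used by SQL Apply; the value shown is illustrative:

```sql
-- Keep MAX_SERVERS between 5 and the PARALLEL_MAX_SERVERS setting
SQL> EXECUTE DBMS_LOGSTDBY.APPLY_SET('MAX_SERVERS', 9);
```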
Increasing the number of parallel execution processes for an instance can speed up execution operations, but this improvement must be balanced against the consumption of additional system resources by the processes.
You can use the
MAX_SGA parameter of the
DBMS_LOGSTDBY.APPLY_SET procedure to set the maximum amount of shared pool space used by SQL Apply for redo cache. By default, SQL Apply will use up to one quarter of the shared pool. Generally speaking, increasing the size of the shared pool or the amount of shared pool space used by SQL Apply will improve the performance of a logical standby database. See PL/SQL Packages and Types Reference for more information about the DBMS_LOGSTDBY package.
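A sketch of raising the redo cache; MAX_SGA is specified in megabytes, and the value shown is illustrative:

```sql
-- Allow SQL Apply to use up to 100 MB of shared pool for its redo cache
SQL> EXECUTE DBMS_LOGSTDBY.APPLY_SET('MAX_SGA', 100);
```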