Before you create a logical standby database, you must first ensure the primary database is properly configured. Table 4-1 provides a checklist of the tasks that you perform on the primary database to prepare for logical standby database creation.
Note that a logical standby database uses standby redo logs (SRLs) for redo received from the primary database, and also writes to online redo logs (ORLs) as it applies changes to the standby database. Thus, logical standby databases often require additional
ARCn processes to simultaneously archive SRLs and ORLs. Additionally, because archiving of ORLs takes precedence over archiving of SRLs, a greater number of SRLs may be needed on a logical standby during periods of very high workload.
Before setting up a logical standby database, ensure the logical standby database can maintain the data types and tables in your primary database. See Appendix C for a complete list of data type and storage type considerations.
The physical organization in a logical standby database is different from that of the primary database, even though the logical standby database is created from a backup copy of the primary database. Thus, ROWIDs contained in the redo records generated by the primary database cannot be used to identify the corresponding row in the logical standby database.
Oracle uses primary-key or unique-constraint/index supplemental logging to logically identify a modified row in the logical standby database. When database-wide primary-key and unique-constraint/index supplemental logging is enabled, each
UPDATE statement also writes the column values necessary in the redo log to uniquely identify the modified row in the logical standby database.
If a table has a primary key defined, then the primary key is logged along with the modified columns as part of the
UPDATE statement to identify the modified row.
In the absence of a primary key, the shortest nonnull unique-constraint/index is logged along with the modified columns as part of the
UPDATE statement to identify the modified row.
In the absence of both a primary key and a nonnull unique constraint/index, all columns of bounded size are logged as part of the
UPDATE statement to identify the modified row. In other words, all columns except those with the following types are logged:
LONG RAW, object type, and collections.
A function-based index, even though it is declared as unique, cannot be used to uniquely identify a modified row. However, logical standby databases support replication of tables that have function-based indexes defined, as long as modified rows can be uniquely identified.
Oracle recommends that you add a primary key or a nonnull unique index to tables in the primary database, whenever possible, to ensure that SQL Apply can efficiently apply redo data updates to the logical standby database.
Perform the following steps to ensure SQL Apply can uniquely identify rows of each table being replicated in the logical standby database.
Query the DBA_LOGSTDBY_NOT_UNIQUE view to display a list of tables that SQL Apply may not be able to uniquely identify. For example:
SQL> SELECT OWNER, TABLE_NAME FROM DBA_LOGSTDBY_NOT_UNIQUE
  2> WHERE (OWNER, TABLE_NAME) NOT IN
  3> (SELECT DISTINCT OWNER, TABLE_NAME FROM DBA_LOGSTDBY_UNSUPPORTED)
  4> AND BAD_COLUMN = 'Y';
If your application ensures the rows in a table are unique, you can create a disabled primary key
RELY constraint on the table. This avoids the overhead of maintaining a primary key on the primary database.
To create a disabled RELY constraint on a primary database table, use the ALTER TABLE statement with a RELY DISABLE clause. The following example creates a disabled RELY constraint on a table named mytab, for which rows can be uniquely identified using the id and name columns:
SQL> ALTER TABLE mytab ADD PRIMARY KEY (id, name) RELY DISABLE;
When you specify the
RELY constraint, the system will assume that rows are unique. Because you are telling the system to rely on the information, but are not validating it on every modification done to the table, you must be careful to select columns for the disabled
RELY constraint that will uniquely identify each row in the table. If such uniqueness is not present, then SQL Apply will not correctly maintain the table.
To improve the performance of SQL Apply, add a unique-constraint/index to the columns to identify the row on the logical standby database. Failure to do so results in full table scans during
DELETE statements carried out on the table by SQL Apply.
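For instance, continuing the hypothetical mytab example above, such an index could be created on the logical standby database after temporarily disabling the database guard for the session (the index name and columns are illustrative, not part of the original example):

```sql
-- Allow DDL in this session on the guarded logical standby database.
SQL> ALTER SESSION DISABLE GUARD;

-- Create the unique index SQL Apply can use to locate modified rows.
SQL> CREATE UNIQUE INDEX mytab_rely_idx ON mytab (id, name);

-- Restore the database guard for this session.
SQL> ALTER SESSION ENABLE GUARD;
```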
See also:
Oracle Database Reference for information about the DBA_LOGSTDBY_NOT_UNIQUE view
Oracle Database SQL Language Reference for information about the ALTER TABLE statement syntax and creating RELY constraints
Section 10.7.1, "Create a Primary Key RELY Constraint" for information about RELY constraints and actions you can take to increase performance on a logical standby database
Table 4-2 provides a checklist of the tasks that you perform to create a logical standby database and specifies on which database you perform each task. There is also a reference to the section that describes the task in more detail.
You create a logical standby database by first creating a physical standby database and then transitioning it to a logical standby database. Follow the instructions in Chapter 3, "Creating a Physical Standby Database" to create a physical standby database.
You can run Redo Apply on the new physical standby database for any length of time before converting it to a logical standby database. However, before converting to a logical standby database, stop Redo Apply on the physical standby database. Stopping Redo Apply is necessary to avoid applying changes past the redo that contains the LogMiner dictionary (described in the section "Build a Dictionary in the Redo Data").
To stop Redo Apply, issue the following statement on the physical standby database. If the database is an Oracle RAC database comprised of multiple instances, then you must first stop all Oracle RAC instances except one before issuing this statement:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
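As an optional sanity check (assuming you have access to the V$MANAGED_STANDBY view), you can confirm that the managed recovery process has exited before proceeding:

```sql
-- Once Redo Apply has stopped, this query should return no MRP rows.
SQL> SELECT PROCESS, STATUS FROM V$MANAGED_STANDBY WHERE PROCESS LIKE 'MRP%';
```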
This section contains the following topics:
In Section 3.1.4, "Set Primary Database Initialization Parameters", you set up several standby role initialization parameters to take effect when the primary database is transitioned to the physical standby role.
Note: This step is necessary only if you plan to perform switchovers.
If you plan to transition the primary database to the logical standby role, then you must also modify the parameters shown in bold typeface in Example 4-1, so that no parameters need to change after a role transition:
Modify the VALID_FOR attribute in the original LOG_ARCHIVE_DEST_1 destination so that it archives redo data only from the online redo log and not from the standby redo log.
Add the LOG_ARCHIVE_DEST_3 destination on the primary database. This parameter only takes effect when the primary database is transitioned to the logical standby role.
LOG_ARCHIVE_DEST_1=
  'LOCATION=/arch1/chicago/
   VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES)
   DB_UNIQUE_NAME=chicago'
LOG_ARCHIVE_DEST_3=
  'LOCATION=/arch2/chicago/
   VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE)
   DB_UNIQUE_NAME=chicago'
LOG_ARCHIVE_DEST_STATE_3=ENABLE
To dynamically set these initialization parameters, use the SQL ALTER SYSTEM SET statement and include the SCOPE=BOTH clause so that the changes take effect immediately and persist after the database is shut down and started up again.
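For example, the changes shown in Example 4-1 could be applied dynamically as follows (values taken directly from Example 4-1):

```sql
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_3=
  2  'LOCATION=/arch2/chicago/
  3   VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE)
  4   DB_UNIQUE_NAME=chicago' SCOPE=BOTH;

SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_3=ENABLE SCOPE=BOTH;
```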
The following table describes the archival processing defined by the changed initialization parameters shown in Example 4-1.
|Parameter|When the Chicago Database Is Running in the Primary Role|When the Chicago Database Is Running in the Logical Standby Role|
|---|---|---|
|LOG_ARCHIVE_DEST_1|Directs archiving of redo data generated by the primary database from the local online redo log files to the local archived redo log files in /arch1/chicago/.|Directs archiving of redo data generated by the logical standby database from the local online redo log files to the local archived redo log files in /arch1/chicago/.|
|LOG_ARCHIVE_DEST_3|Is ignored; this destination is valid only when the database is running in the standby role.|Directs archiving of redo data from the standby redo log files to the local archived redo log files in /arch2/chicago/.|
A LogMiner dictionary must be built into the redo data so that the LogMiner component of SQL Apply can properly interpret changes it sees in the redo. As part of building the LogMiner dictionary, supplemental logging is automatically set up to log primary key and unique-constraint/index columns. The supplemental logging information ensures each update contains enough information to logically identify each row that is modified by the statement.
To build the LogMiner dictionary, issue the following statement:
SQL> EXECUTE DBMS_LOGSTDBY.BUILD;
The DBMS_LOGSTDBY.BUILD procedure waits for all existing transactions to complete. Long-running transactions executed on the primary database will affect the timeliness of this command.
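Because the build waits for open transactions, it can be useful to check for long-running transactions on the primary database first. A hedged sketch (assumes access to the V$TRANSACTION and V$SESSION views):

```sql
-- List open transactions and their owning sessions, oldest first.
SQL> SELECT T.START_TIME, S.SID, S.USERNAME
  2  FROM V$TRANSACTION T
  3  JOIN V$SESSION S ON T.SES_ADDR = S.SADDR
  4  ORDER BY T.START_TIME;
```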
See also:
The DBMS_LOGSTDBY.BUILD PL/SQL package in Oracle Database PL/SQL Packages and Types Reference
The UNDO_RETENTION initialization parameter in Oracle Database Reference
This section describes how to prepare the physical standby database to transition to a logical standby database. It contains the following topics:
Note: If you have an Oracle RAC physical standby database, shut down all but one instance, set CLUSTER_DATABASE to FALSE, and start the standby database as a single instance in MOUNT EXCLUSIVE mode, as follows:
SQL> ALTER SYSTEM SET CLUSTER_DATABASE=FALSE SCOPE=SPFILE;
SQL> SHUTDOWN ABORT;
SQL> STARTUP MOUNT EXCLUSIVE;
To continue applying redo data to the physical standby database until it is ready to convert to a logical standby database, issue the following SQL statement:
SQL> ALTER DATABASE RECOVER TO LOGICAL STANDBY db_name;
For db_name, specify a database name to identify the new logical standby database. If you are using a server parameter file (spfile) at the time you issue this statement, then the database will update the file with appropriate information about the new logical standby database. If you are not using an spfile, then the database issues a message reminding you to set the name of the
DB_NAME parameter after shutting down the database.
Note: If you are creating a logical standby database in the context of performing a rolling upgrade of Oracle software with a physical standby database, you should issue the following command instead:
SQL> ALTER DATABASE RECOVER TO LOGICAL STANDBY KEEP IDENTITY;
A logical standby database created with the
KEEP IDENTITY clause retains the same
DBID as that of its primary database. Such a logical standby database can only participate in one switchover operation, and thus should only be created in the context of a rolling upgrade with a physical standby database.
Note that the
KEEP IDENTITY clause is available only if the database being upgraded is running Oracle Database release 11.1 or later.
The statement waits, applying redo data until the LogMiner dictionary is found in the log files. This may take several minutes, depending on how long it takes for the redo generated in the section "Build a Dictionary in the Redo Data" to be transmitted to the standby database, and how much redo data needs to be applied. If a dictionary build is not successfully performed on the primary database, this command will never complete. You can cancel the SQL statement by issuing the
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL statement from another SQL session.
Caution: In earlier releases, you needed to create a new password file before you opened the logical standby database. This is no longer needed. Creating a new password file at the logical standby database will cause redo transport services to not work properly.
Note: If you started with an Oracle RAC physical standby database, set CLUSTER_DATABASE back to TRUE, as follows:
SQL> ALTER SYSTEM SET CLUSTER_DATABASE=TRUE SCOPE=SPFILE;
On the logical standby database, shut down the instance and issue the
STARTUP MOUNT statement to start and mount the database. Do not open the database; it should remain closed to user access until later in the creation process. For example:
SQL> SHUTDOWN;
SQL> STARTUP MOUNT;
You need to modify the LOG_ARCHIVE_DEST_n parameters because, unlike physical standby databases, logical standby databases are open databases that generate redo data and have multiple log files (online redo log files, archived redo log files, and standby redo log files). It is good practice to specify separate local destinations for:
Archived redo log files that store redo data generated by the logical standby database. In Example 4-2, this is configured as the LOG_ARCHIVE_DEST_1 destination.
Archived redo log files that store redo data received from the primary database. In Example 4-2, this is configured as the LOG_ARCHIVE_DEST_3 destination.
Example 4-2 shows the initialization parameters that were modified for the logical standby database. The parameters shown are valid for the Boston logical standby database when it is running in either the primary or standby database role.
LOG_ARCHIVE_DEST_1=
  'LOCATION=/arch1/boston/
   VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES)
   DB_UNIQUE_NAME=boston'
LOG_ARCHIVE_DEST_2=
  'SERVICE=chicago ASYNC
   VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
   DB_UNIQUE_NAME=chicago'
LOG_ARCHIVE_DEST_3=
  'LOCATION=/arch2/boston/
   VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE)
   DB_UNIQUE_NAME=boston'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_DEST_STATE_3=ENABLE
Note: If database compatibility is set to 11.1, you can also use the Flash Recovery Area to store the remote archived logs. To do this, set the following parameters (assuming you have already appropriately set the DB_RECOVERY_FILE_DEST parameter):
LOG_ARCHIVE_DEST_1=
  'LOCATION=USE_DB_RECOVERY_FILE_DEST
   VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES)
   DB_UNIQUE_NAME=boston'
LOG_ARCHIVE_DEST_3=
  'LOCATION=USE_DB_RECOVERY_FILE_DEST
   VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE)
   DB_UNIQUE_NAME=boston'
The following table describes the archival processing defined by the initialization parameters shown in Example 4-2.
|Parameter|When the Boston Database Is Running in the Primary Role|When the Boston Database Is Running in the Logical Standby Role|
|---|---|---|
|LOG_ARCHIVE_DEST_1|Directs archival of redo data generated by the primary database from the local online redo log files to the local archived redo log files in /arch1/boston/.|Directs archival of redo data generated by the logical standby database from the local online redo log files to the local archived redo log files in /arch1/boston/.|
|LOG_ARCHIVE_DEST_2|Directs transmission of redo data to the remote logical standby database chicago.|Is ignored; this destination is valid only when the database is running in the primary role.|
|LOG_ARCHIVE_DEST_3|Is ignored; this destination is valid only when the database is running in the standby role.|Directs archival of redo data received from the primary database to the local archived redo log files in /arch2/boston/.|
The DB_FILE_NAME_CONVERT initialization parameter is not honored once a physical standby database is converted to a logical standby database. If necessary, you should register a skip handler and provide SQL Apply with a replacement DDL string to execute by converting the path names of the primary database datafiles to the standby datafile path names. See the DBMS_LOGSTDBY package in Oracle Database PL/SQL Packages and Types Reference for more information.
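A sketch of the skip-handler approach described above (the procedure name and path prefixes are hypothetical; the parameter list follows the documented skip-procedure signature, and SQL Apply must be stopped when the handler is registered):

```sql
-- Hypothetical handler that rewrites primary datafile paths to standby
-- paths in tablespace DDL before SQL Apply executes it.
CREATE OR REPLACE PROCEDURE sys.handle_tbs_ddl (
  old_stmt IN  VARCHAR2,
  stmt_typ IN  VARCHAR2,
  schema   IN  VARCHAR2,
  name     IN  VARCHAR2,
  xidusn   IN  NUMBER,
  xidslt   IN  NUMBER,
  xidsqn   IN  NUMBER,
  action   OUT NUMBER,
  new_stmt OUT VARCHAR2)
AS
BEGIN
  -- Swap the primary path prefix for the standby path prefix.
  new_stmt := REPLACE(old_stmt, '/disk1/oracle/dbs/', '/disk2/oracle/dbs/');
  action   := DBMS_LOGSTDBY.SKIP_ACTION_REPLACE;
END;
/

-- Register the handler for tablespace DDL:
EXECUTE DBMS_LOGSTDBY.SKIP(stmt => 'TABLESPACE', proc_name => 'SYS.HANDLE_TBS_DDL');
```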
To open the new logical standby database, you must open it with the
RESETLOGS option by issuing the following statement:
SQL> ALTER DATABASE OPEN RESETLOGS;
Note: If you started with an Oracle RAC physical standby database, you can start up all other standby instances at this point.
Caution: If you are co-locating the logical standby database on the same computer system as the primary database, you must issue the following SQL statement before starting SQL Apply for the first time, so that SQL Apply skips the file operations performed at the primary database. The reason this is necessary is that SQL Apply has access to the same directory structure as the primary database, and datafiles that belong to the primary database could possibly be damaged if SQL Apply attempted to re-execute certain file-specific operations.
SQL> EXECUTE DBMS_LOGSTDBY.SKIP('ALTER TABLESPACE');
The DB_FILE_NAME_CONVERT parameter that you set up while co-locating the physical standby database on the same system as the primary database is ignored by SQL Apply. See Oracle Database PL/SQL Packages and Types Reference for information about DBMS_LOGSTDBY.SKIP and equivalent behavior in the context of a logical standby database.
Because this is the first time the database is being opened, the database's global name is adjusted automatically to match the new
DB_NAME initialization parameter.
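An optional way to confirm the adjustment (both queries should reflect the name you chose when converting the database, boston in this chapter's example):

```sql
-- The database name and unique name after the first open.
SQL> SELECT NAME, DB_UNIQUE_NAME FROM V$DATABASE;

-- The automatically adjusted global name.
SQL> SELECT * FROM GLOBAL_NAME;
```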
At this point, the logical standby database is running and can provide the maximum performance level of data protection. The following list describes additional preparations you can take on the logical standby database:
Upgrade the data protection mode
The Data Guard configuration is initially set up in the maximum performance mode (the default).
Enable Flashback Database
Flashback Database removes the need to re-create the primary database after a failover. Flashback Database enables you to return a database to its state at a time in the recent past much faster than traditional point-in-time recovery, because it does not require restoring datafiles from backup nor the extensive application of redo data. You can enable Flashback Database on the primary database, the standby database, or both. See Section 13.2, "Converting a Failed Primary Into a Standby Database Using Flashback Database" and Section 13.3, "Using Flashback Database After Issuing an Open Resetlogs Statement" for scenarios showing how to use Flashback Database in a Data Guard environment. Also, see Oracle Database Backup and Recovery User's Guide for more information about Flashback Database.
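For example, Flashback Database could be enabled with statements along these lines (the retention target of 4320 minutes, that is, 3 days, is only illustrative, and a flash recovery area must already be configured):

```sql
-- Set how far back flashback logs should reach, in minutes.
SQL> ALTER SYSTEM SET DB_FLASHBACK_RETENTION_TARGET=4320 SCOPE=BOTH;

-- Turn on flashback logging for the database.
SQL> ALTER DATABASE FLASHBACK ON;
```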