Oracle9i Data Guard Concepts and Administration Release 2 (9.2) Part Number A96653-01
This chapter is divided into the following sections:
This section contains the following scenarios relating to physical standby databases:
This scenario describes the creation of a physical standby database standby1
on the same system as the primary database primary1
. This is a UNIX system with three file systems, each mounted on a separate disk configuration on a different controller. By placing the physical standby database on a different file system from the primary database, you protect the primary database from a hard disk failure. By running the same-site standby database in managed recovery mode, you can keep it continuously up-to-date.
After you set up the physical standby database on the local site, you plan to create a physical standby database on a remote system for total disaster protection. In this way, even if all disks of the primary database fail or are destroyed in a disaster, you can fail over to the remote standby database and keep the database open.
Because the site uses three file systems, each on its own set of disks with its own controller, you decide to maintain the primary database files on the first file system, the standby database files on the second file system, and the ORACLE_HOME
binaries on the third file system. If the primary database disks fail, you can switch to the standby database; if the ORACLE_HOME
disks fail, you can switch to the remote standby database.
To host the physical standby database on the same system as the primary database, you must set the following parameters in the standby database initialization parameter file:
Because the primary database is shut down every Sunday for an hour for maintenance, you decide to use that time to make a closed, consistent backup. You can then restart the database while you make the necessary configurations for the standby database.
The next step in the procedure is to create the backup that will form the basis for the physical standby database. You know that you can use either an inconsistent or consistent backup, but because the database is shut down every Sunday for maintenance, you decide to make a consistent backup then and use it for the standby database.
On Sunday, before shutting down the primary database, you query the database to determine which datafiles it contains:
SQL> SELECT NAME FROM V$DATAFILE;

NAME
----------------------------------------------------------------------------
/fs1/dbs/tbs_01.dbf
/fs1/dbs/tbs_02.dbf
/fs1/dbs/tbs_11.dbf
/fs1/dbs/tbs_12.dbf
/fs1/dbs/tbs_21.dbf
/fs1/dbs/tbs_22.dbf
/fs1/dbs/tbs_13.dbf
/fs1/dbs/tbs_23.dbf
/fs1/dbs/tbs_24.dbf
/fs1/dbs/tbs_31.dbf
/fs1/dbs/tbs_32.dbf
/fs1/dbs/tbs_41.dbf
/fs1/dbs/tbs_42.dbf
/fs1/dbs/tbs_51.dbf
/fs1/dbs/tbs_52.dbf
/fs1/dbs/tbs_03.dbf
/fs1/dbs/tbs_14.dbf
/fs1/dbs/tbs_25.dbf
/fs1/dbs/tbs_33.dbf
/fs1/dbs/tbs_43.dbf
/fs1/dbs/tbs_53.dbf

21 rows selected.
After determining which datafiles are in the primary database, you shut down the database with the IMMEDIATE
option:
SQL> SHUTDOWN IMMEDIATE;
You can copy the datafiles from the primary file system to the physical standby file system. Because the transfer of datafiles can take a long time, you may want to copy the datafiles and then proceed to other tasks (such as network configuration). For example, enter the following at the UNIX command shell:
% cp /fs1/dbs/tbs* /fs2/dbs
After you perform some other routine maintenance operations, restart the database as follows:
SQL> STARTUP PFILE=PRMYinit.ora;
You can confirm the primary database is in ARCHIVELOG mode either by viewing the output from the SQL*Plus ARCHIVE LOG LIST
command or by querying the V$DATABASE
view.
After a few minutes, you create the standby database control file in the same directory in which you stored the consistent backup:
SQL> ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/fs2/dbs/cf1.ctl';
To run a physical standby database in a Data Guard environment, you must configure an Oracle Net connection between the primary and physical standby databases so that you can archive the redo logs to the standby service.
You use the IPC protocol to connect the primary database to the physical standby database because both databases are on the same site. Because you are using the local naming method, you must create new entries in the tnsnames.ora
file. You must also add corresponding entries in the listener.ora
file.
Editing the tnsnames.ora file.
Currently, only one service name entry exists in your configuration, a TCP/IP connection to the primary1
database.
Using Oracle Net Manager, define an IPC connection between the primary and the standby database using standby1
as the service name, stdby1
as the SID, and kstdby1
as the IPC key.
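As a sketch, the tnsnames.ora entry that Oracle Net Manager generates for this IPC connection might look as follows (the exact layout Oracle Net Manager produces can differ):

```
standby1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = IPC)(KEY = kstdby1))
    (CONNECT_DATA = (SID = stdby1))
  )
```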
Editing the listener.ora file.
Using Oracle Net Manager, modify the listener and add a listening address using IPC as the protocol and kstdby1
as the IPC key. Add your standby database stdby1
as a database service for this listener. Alternatively, you can add a new listener.
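A corresponding listener.ora sketch, assuming you choose to add a new listener named stdby1_listener (the ORACLE_HOME value is illustrative; the scenario keeps the binaries on the third file system):

```
stdby1_listener =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = IPC)(KEY = kstdby1)))

SID_LIST_stdby1_listener =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = stdby1)
      (ORACLE_HOME = /fs3/oracle)))
```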
Now that you have configured the network files, you can edit the primary database initialization parameter file. The primary database is now up and running, so these changes will only be enabled if you restart the instance or issue ALTER SESSION
or ALTER SYSTEM
statements.
The only changes you need to make to the file involve archiving to the standby service. Currently, the primary database initialization parameter file looks as follows:
#
#parameter file PRMYinit.ora
#
db_name=primary1
control_files=(/fs1/dbs/cf1.ctl,/fs1/dbs/cf2.ctl)
compatible=9.0.1.0.0
log_archive_start = true
log_archive_dest_1 = 'LOCATION=/fs1/arc_dest/ MANDATORY REOPEN=60'
log_archive_dest_state_1 = ENABLE
log_archive_format = log_%t_%s.arc
audit_trail=FALSE
o7_dictionary_accessibility=false
global_names=false
db_domain=us.oracle.com
remote_login_passwordfile = exclusive

# default parameters for instance 1
processes=30
sessions=30
transactions=21
transactions_per_rollback_segment=21
db_block_buffers=1000
db_files=200
shared_pool_size=10000000
Currently, you archive to only one location: a local directory. Because you want to maintain the local standby database in managed recovery mode, you must specify a new archiving location using a service name.
Open the primary database initialization parameter file with a text editor and examine the current archiving location and format:
log_archive_dest_1 = 'LOCATION=/fs1/arc_dest/ MANDATORY REOPEN=60'
log_archive_dest_state_1 = ENABLE
log_archive_format = log_%t_%s.arc
Because you want to archive to the standby service standby1, you edit the file, adding the following entries:
log_archive_dest_2 = 'SERVICE=standby1 OPTIONAL REOPEN=180'
log_archive_dest_state_2 = ENABLE
After editing the primary database initialization parameter file, create a copy for use by the standby database:
% cp /fs1/PRMYinit.ora /fs3/oracle/dbs/STBYinit.ora
If the primary database initialization parameter file contains the IFILE
parameter, you also need to copy the file referred to by the IFILE
parameter to the standby site and, if necessary, make appropriate changes to it.
You know that the initialization parameters shown in Table 10-1 play a key role in the standby database recovery process, and decide to edit them.
Edit the standby database initialization parameter file as follows:
#
#parameter file STBYinit.ora
#
db_name = primary1          #The same as PRMYinit.ora
control_files = (/fs2/dbs/cf1.ctl)
compatible = 9.0.1.0.0
log_archive_start = true
log_archive_dest_1='LOCATION=/fs2/arc_dest/'
log_archive_dest_state_1 = ENABLE
log_archive_format = log_%t_%s.arc
standby_archive_dest = /fs2/arc_dest/
db_file_name_convert = ('/fs1/dbs','/fs2/dbs',
                        '/fs1/dbs2','/fs2/dbs',
                        '/fs1/dbs3','/fs2/dbs')
log_file_name_convert = ('/fs1/dbs','/fs2/dbs')
lock_name_space = standby1
fal_server=primary1
fal_client=standby1
audit_trail=false
o7_dictionary_accessibility=false
global_names=false
db_domain=us.oracle.com
remote_login_passwordfile = exclusive

# default parameters for instance 1
processes=30
sessions=30
transactions=21
transactions_per_rollback_segment=21
db_block_buffers=1000
db_files=200
shared_pool_size=10000000
Now that you have configured all network and parameter files, you can enable archiving to the standby database.
Set the ORACLE_SID environment variable to the same value as the SID parameter in the tnsnames.ora file on the primary site and the listener.ora file on the standby site as follows:
% setenv ORACLE_SID stdby1
First, you start the standby database instance without mounting the standby database control file, as the following example shows:
SQL> CONNECT SYS/CHANGE_ON_INSTALL@standby1 AS SYSDBA
SQL> STARTUP NOMOUNT PFILE=/fs3/oracle/dbs/STBYinit.ora;
SQL> ALTER DATABASE MOUNT STANDBY DATABASE;
Finally, you enable the changes you made to the primary database parameter file so that the standby database can begin receiving archived redo logs:
SQL> CONNECT SYS/CHANGE_ON_INSTALL@primary1 AS SYSDBA
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2 = 'SERVICE=standby1
  2> OPTIONAL REOPEN=180';
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2 = ENABLE;
You can now start managed recovery using the ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
statement. You decide to use the TIMEOUT
option of the RECOVER
statement to specify a time interval of 20 minutes for log apply services to wait for log transport services to write the requested archived log entry to the directory of the standby database control file. If the requested archived log entry is not written within the specified time interval, the recovery operation is canceled.
While connected to the standby database using SQL*Plus, place the standby database in managed recovery mode:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE TIMEOUT 20;
The standby database is now in managed recovery. When you start managed recovery, log apply services automatically identify and resolve any archive gaps that may exist. As the primary database archives redo logs to the standby site, the standby database automatically applies them.
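While the standby database is in managed recovery mode, you can check progress by querying the V$MANAGED_STANDBY view on the standby instance; this is a sketch, not a required step in the scenario:

```
SQL> SELECT PROCESS, STATUS, THREAD#, SEQUENCE#
  2> FROM V$MANAGED_STANDBY;
```

The row for the managed recovery process shows the log sequence currently being applied.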
This scenario describes the creation of a physical standby database standby1
on a remote site. The following assumptions are being made:

- The primary database is located on the system prmyhost.
- The standby database is located on the system stbyhost.
- PRMYinit.ora is the initialization parameter file for the primary database.
- STBYinit.ora is the initialization parameter file for the standby database.

Create the backup that will form the basis for the physical standby database.
Query the V$DATAFILE view to obtain a list of the primary database datafiles, as the following example shows:
SQL> SELECT NAME FROM V$DATAFILE;

NAME
-----------------------------------------------------------------------
/oracle/dbs/dbf_1.dbf

1 row selected.
SQL> SHUTDOWN IMMEDIATE;
Copy the primary database datafiles to a temporary location (/backup):
% cp /oracle/dbs/dbf_1.dbf /backup
SQL> STARTUP PFILE=PRMYinit.ora;
Verify that the primary database is in ARCHIVELOG mode by issuing the ARCHIVE LOG LIST command, as follows:
SQL> ARCHIVE LOG LIST
If the output from the ARCHIVE LOG LIST command displays "No Archive Mode," perform the following steps:
SQL> ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/backup/stbycf.ctl';
The standby database control file and the primary database datafiles are in the same temporary location at the primary site to make copying to the standby site easier.
Transfer the primary database datafiles and the standby control file from the temporary location at the primary site to the standby site, as the following example shows:
% rcp /backup/* stbyhost:/fs2/oracle/stdby
This scenario assumes that the TCP/IP network protocol is used to connect to the primary and the physical standby databases. To achieve remote archiving of redo log files, Oracle Net must be set up correctly from the primary database to the standby database. To allow the physical standby database to fetch archive gaps from the primary database and to facilitate switchover operations, Oracle Net must again be set up correctly from the standby database to the primary database.
This step involves editing the following files:

- the tnsnames.ora file on the primary and standby sites
- the listener.ora file on the primary and standby sites

On the primary site, define a service name entry standby1 that uses the TCP protocol, add the host name of your standby system, and add the service name parameter stdby1. Verify that the listener is configured to receive requests for the primary database. If it is not, add the primary database to the listener as a new database service.

On the standby site, define a service name entry primary1 with the TCP protocol that points to your primary database system and the service name parameter prmy. Also, add a listener called stdby1_listener and add the standby database stdby1 as a database service.
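As a sketch of the resulting configuration, the tnsnames.ora entry on the primary site and the listener definition on the standby site might look as follows (host names follow the scenario; the port number 1521 and the ORACLE_HOME path are illustrative assumptions):

```
# tnsnames.ora on the primary site
standby1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = stbyhost)(PORT = 1521))
    (CONNECT_DATA = (SID = stdby1)))

# listener.ora on the standby site
stdby1_listener =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = stbyhost)(PORT = 1521)))

SID_LIST_stdby1_listener =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = stdby1)
      (ORACLE_HOME = /oracle)))
```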
See Also:
Oracle9i Net Services Administrator's Guide for detailed directions on using the Oracle Net Manager
Start the standby listener on the standby site. For example:
% lsnrctl start stdby1_listener
Normally, your default listener is started on your primary site. Restart the default listener on the primary database to pick up the new definitions. For example:
% lsnrctl stop % lsnrctl start
To configure the initialization parameter file for the standby database:
% rcp /oracle/dbs/PRMYinit.ora stbyhost:/fs2/oracle/stdby/STBYinit.ora
Edit the copied initialization parameter file (STBYinit.ora), setting the standby-specific parameters shown below.
The STBYinit.ora
initialization parameter file looks as follows:
#
#parameter file STBYinit.ora
#
db_name=primary1            # The same as PRMYinit.ora
control_files=/fs2/oracle/stdby/stbycf.ctl
audit_trail=false
o7_dictionary_accessibility=false
global_names=false
db_domain=us.oracle.com
commit_point_strength=1
processes=30
sessions=30
transactions=21
transactions_per_rollback_segment=21
db_block_buffers=100
shared_pool_size=4000000
ifile=/oracle/work/tkinit.ora   # Verify that file exists on the standby site
                                # and that the file specification is valid

# specific parameters for standby database
log_archive_format = stdby_%t_%s.arc
standby_archive_dest=/fs2/oracle/stdby/
log_archive_dest_1='LOCATION=/fs2/oracle/stdby/'
db_file_name_convert=('/oracle/dbs','/fs2/oracle/stdby')
log_file_name_convert=('/oracle/dbs','/fs2/oracle/stdby')
log_archive_start=true
log_archive_trace=127
fal_server=standby1
fal_client=primary1
Make a copy of the STBYinit.ora file by issuing the following command:
% cp STBYinit.ora Failover.ora
Edit Failover.ora so that if you fail over to the stdby1 standby database, you can use the Failover.ora file as the initialization parameter file for the new primary database. Make sure you use appropriate values for the LOG_ARCHIVE_DEST_n parameters.
Modify the tnsnames.ora file on the standby site in case failover to the standby database occurs. See step 4 for information on how to configure the tnsnames.ora file.

Start the physical standby database to start archiving.
Set the ORACLE_SID environment variable to the same value as the SID parameter in the tnsnames.ora file on the primary site and the listener.ora file on the standby site as follows:
% setenv ORACLE_SID stdby1
SQL> CONNECT SYS/SYS_PASSWORD AS SYSDBA
SQL> STARTUP NOMOUNT PFILE=STBYinit.ora;
SQL> ALTER DATABASE MOUNT STANDBY DATABASE;
Specify the standby archive destination by adding the following entry to the PRMYinit.ora file:
LOG_ARCHIVE_DEST_2 = 'SERVICE=standby1 MANDATORY REOPEN=60'
Enable the archive destination by adding the following entry to the PRMYinit.ora file:
LOG_ARCHIVE_DEST_STATE_2 = ENABLE
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=standby1
  2> MANDATORY REOPEN=60';
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE;
On the standby database, enable managed recovery by issuing the following SQL statement:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
This scenario describes the procedures you should follow when a physical change is made in the primary database. The following topics are covered:
To maintain consistency when you add a datafile to the primary database, you must also add a corresponding datafile to the standby database. Otherwise, changes in the online redo logs that relate to the new datafile in the primary database will not be applied to the standby database. The steps you perform depend on how the datafile was created on the primary database and how the initialization parameter STANDBY_FILE_MANAGEMENT
is defined on the standby database.
The following sections describe the steps to enable the automatic creation of datafiles on the standby database, and the manual steps necessary if standby file management is not automatic.
When you create a new datafile on the primary database, the datafile is automatically created on the standby database if the initialization parameter STANDBY_FILE_MANAGEMENT
is set to auto
in the standby database initialization file.
The following steps describe how to set up your standby database to automate the creation of datafiles.
Set the following parameter in the standby database initialization parameter file:

STANDBY_FILE_MANAGEMENT=auto

Then restart the standby database:

SQL> SHUTDOWN;
SQL> STARTUP NOMOUNT PFILE=STBYinit.ora;
SQL> ALTER DATABASE MOUNT STANDBY DATABASE;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;
Alternatively, you can dynamically set the STANDBY_FILE_MANAGEMENT
initialization parameter with an ALTER SYSTEM
statement to avoid shutting down and restarting the instance. For example:
SQL> ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=AUTO;
SQL> CREATE TABLESPACE new_ts DATAFILE 't_db2.dbf'
  2> SIZE 1m AUTOEXTEND ON MAXSIZE UNLIMITED;
SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;
SQL> SELECT NAME FROM V$DATAFILE;

NAME
----------------------------------------------------------------------
/oracle/dbs/t_db1.dbf
/oracle/dbs/t_db2.dbf
SQL> SELECT NAME FROM V$DATAFILE;

NAME
----------------------------------------------------------------------
/oracle/dbs/s2t_db1.dbf
/oracle/dbs/s2t_db2.dbf
The following steps describe how to manually create datafiles on the standby database when standby file management is manual.
SQL> CREATE TABLESPACE new_ts DATAFILE 't_db2.dbf'
  2> SIZE 1m AUTOEXTEND ON MAXSIZE UNLIMITED;
SQL> SELECT NAME FROM V$DATAFILE;

NAME
----------------------------------------------------------------------
/oracle/dbs/t_db1.dbf
/oracle/dbs/t_db2.dbf
SQL> ALTER TABLESPACE new_ts OFFLINE;
% cp t_db2.dbf s2t_db2.dbf
SQL> ALTER TABLESPACE new_ts ONLINE;
% rcp s2t_db2.dbf standby_location
SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;
SQL> SELECT NAME FROM V$DATAFILE;

NAME
----------------------------------------------------------------------
/oracle/dbs/s2t_db1.dbf
/oracle/dbs/s2t_db2.dbf
When you rename one or more datafiles in the primary database, you also need to rename the corresponding datafiles in the standby database.
Note: Unlike adding or deleting datafiles, the STANDBY_FILE_MANAGEMENT initialization parameter does not affect the renaming of datafiles; you must rename the corresponding datafile at the standby site manually.
SQL> ALTER TABLESPACE tbs_4 OFFLINE;
% mv tbs_4.dbf tbs_x.dbf
SQL> ALTER TABLESPACE tbs_4 RENAME FILE 'tbs_4.dbf' TO 'tbs_x.dbf'; SQL> ALTER TABLESPACE tbs_4 ONLINE;
% mv tbs_4.dbf tbs_x.dbf
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
If you do not rename the corresponding file at the standby site, and then attempt to refresh the standby database control file, the standby database will attempt to use the renamed datafile, but it will not find the renamed datafile. Consequently, you will see error messages similar to the following in the alert log:
ORA-00283: recovery session canceled due to errors
ORA-01157: cannot identify/lock data file 4 - see DBWR trace file
ORA-01110: data file 4: '/oracle/dbs/stdby/tbs_x.dbf'
When you delete one or more datafiles or drop one or more tablespaces in the primary database, you also need to delete the corresponding datafiles in the standby database.
SQL> DROP TABLESPACE tbs_4;
SQL> ALTER SYSTEM SWITCH LOGFILE;
% rm tbs_4.dbf
% rm tbs_4.dbf
One method of tuning available to the database administrator (DBA) is changing the size and number of online redo logs. Consequently, when you add or drop an online redo log at the primary site, it is important that you refresh the standby database control file.
SQL> ALTER DATABASE ADD LOGFILE 'prmy3.log' SIZE 100K;
or
SQL> ALTER DATABASE DROP LOGFILE 'prmy3.log';
Some physical changes you make to the primary database can invalidate the standby database. Once a standby database is invalidated, your only option is to rebuild it.
The following clauses of the ALTER DATABASE
statement invalidate the standby database:
In some SQL statements, the user has the option of specifying the NOLOGGING
clause, which indicates that the database operation is not logged in the redo log file. Even though the user specifies the NOLOGGING
clause, a redo log record is still written to the redo log. However, when the redo log file is copied to the standby site and applied to the standby database, a portion of the datafile is unusable and marked as being unrecoverable. When you either fail over to the standby database, or open the physical standby database with the read-only option, and attempt to read the range of blocks that are marked as UNRECOVERABLE
, you will see error messages similar to the following:
ORA-01578: ORACLE data block corrupted (file # 1, block # 2521)
ORA-01110: data file 1: '/oracle/dbs/stdby/tbs_1.dbf'
ORA-26040: Data block was loaded using the NOLOGGING option
Note: To avoid this problem, Oracle Corporation recommends that you always specify the FORCE LOGGING clause in the CREATE DATABASE or ALTER DATABASE statements.
To recover after the NOLOGGING
clause is specified, you need to copy the datafile that contains the unjournaled data from the primary site to the physical standby site. Perform the following steps:
SQL> SELECT NAME, UNRECOVERABLE_CHANGE# FROM V$DATAFILE;

NAME                                                  UNRECOVERABLE
----------------------------------------------------- -------------
/oracle/dbs/tbs_1.dbf                                          5216
/oracle/dbs/tbs_2.dbf                                             0
/oracle/dbs/tbs_3.dbf                                             0
/oracle/dbs/tbs_4.dbf                                             0

4 rows selected.
SQL> SELECT NAME, UNRECOVERABLE_CHANGE# FROM V$DATAFILE;

NAME                                                  UNRECOVERABLE
----------------------------------------------------- -------------
/oracle/dbs/stdby/tbs_1.dbf                                    5186
/oracle/dbs/stdby/tbs_2.dbf                                       0
/oracle/dbs/stdby/tbs_3.dbf                                       0
/oracle/dbs/stdby/tbs_4.dbf                                       0

4 rows selected.
Compare the value of the UNRECOVERABLE_CHANGE#
column in both query results. If the value of the UNRECOVERABLE_CHANGE#
column in the primary database is greater than the same column in the standby database, then the datafile needs to be copied from the primary site to the standby site.
In this example, the value of the UNRECOVERABLE_CHANGE#
in the primary database for the tbs_1.dbf
datafile is greater, so you need to copy the tbs_1.dbf
datafile to the standby site.
SQL> ALTER TABLESPACE system BEGIN BACKUP;
SQL> EXIT;
% cp tbs_1.dbf /backup
SQL> ALTER TABLESPACE system END BACKUP;
% rcp /backup/* stbyhost:/fs2/oracle/stdby/
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
You may get the following error messages (possibly in the alert log) when you try to restart managed recovery:
ORA-00308: cannot open archived log 'standby1'
ORA-27037: unable to obtain file status
SVR4 Error: 2: No such file or directory
Additional information: 3
ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-01152: file 1 was not restored from a sufficiently old backup
ORA-01110: data file 1: '/oracle/dbs/stdby/tbs_1.dbf'
If you get the ORA-00308 error, cancel recovery by issuing the following statement from another terminal window:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
These error messages are issued when one or more logs in the archive gap have not been successfully applied. If you receive these errors, manually resolve the gaps and repeat step 4.
See Also:
Section B.3 for information on resolving an archive gap
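One way to identify the missing logs manually is to query the V$ARCHIVE_GAP view on the standby database; this is a sketch of the approach, not a step from the scenario:

```
SQL> SELECT THREAD#, LOW_SEQUENCE#, HIGH_SEQUENCE# FROM V$ARCHIVE_GAP;
```

Copy the missing archived logs from the primary site into the standby archive destination, register each one with the ALTER DATABASE REGISTER LOGFILE statement, and then restart managed recovery.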
Every standby database is associated with one and only one primary database. A single primary database can, however, support multiple standby databases. This scenario identifies the kind of information you need in order to decide which of the multiple standby databases is the best target for the failover operation.
One of the important things to consider in a multiple standby database configuration is whether the archive destination is mandatory or optional. The following table lists an advantage and disadvantage for each destination:
| Destination | Advantage | Disadvantage |
|---|---|---|
| Mandatory | All archived redo logs are archived to the mandatory archive destination. After you apply the archived redo logs at the standby site, you can ensure that the standby database is up-to-date. Furthermore, you can fail over the standby database as the new primary database with minimum loss of data. | In some cases (such as network failure), the archived redo logs cannot reach the mandatory archive destination, causing archiving of the redo log to stop. In the worst case, if all online redo logs are full and cannot be archived, the primary database instance will stop working. You can query the database to determine whether the primary database stopped because it was not able to switch to an online redo log; if so, see Section 10.1.7. |
| Optional | The primary database continues to operate normally when archival of the redo logs to the optional archive destination at the standby site is interrupted. | An archive gap may cause data loss because archive logs that are required to be applied to the standby database are unavailable. This results in the managed recovery operation terminating before all primary data has been applied to the standby database. |
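To see how each destination is currently behaving from the primary database, you can query the V$ARCHIVE_DEST view; a sketch:

```
SQL> SELECT DEST_ID, STATUS, BINDING, ERROR FROM V$ARCHIVE_DEST;
```

The BINDING column shows whether a destination is MANDATORY or OPTIONAL, and the ERROR column reports the most recent archiving failure, if any.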
Consider the following recommendations in a multiple standby database configuration:
Suppose the primary database is located in San Francisco and supports five standby databases as follows:
| Standby | Location | Type | Description |
|---|---|---|---|
| 1 | Local directory | Mandatory | Local copy of the archived redo logs. |
| 2 | San Francisco | Mandatory | Fail over to this standby database when there is physical damage at the primary site. This standby site is connected to the primary site by a local area network. |
| 3 | Boston | Optional | Fail over to this standby database when San Francisco is no longer available. |
| 4 | Los Angeles | Optional | This standby site receives archived redo logs, but does not apply them. |
| 5 | San Francisco | Optional | This standby site receives archived redo logs, and applies them after an 8-hour time lag. See Section 10.1.9 for a description of this type of configuration. |
Assume that something occurs in San Francisco where the primary site is located, and the primary site is damaged. You must fail over to one of the standby databases. You cannot assume that the database administrator (DBA) who set up the multiple standby database configuration is available to decide which standby database to fail over to. Therefore, it is imperative to have a disaster recovery plan at each standby site, as well as at the primary site. Each member of the disaster recovery team needs to know about the disaster recovery plan and be aware of the procedures to follow. This scenario identifies the kind of information that the person who is making the decision would need when deciding which standby database should be the target of the failover operation.
One method of conveying information to the disaster recovery team is to include a ReadMe file at each standby site. This ReadMe file is created and maintained by the DBA and should describe how to:
There may be firewalls between systems. The ReadMe file should include instructions for going through the firewalls.
The following example shows the contents of a sample ReadMe file:
----------------Standby Database Disaster Recovery ReadMe File----------------

Warning:
********************************************************************************
Perform the steps in this procedure only if you are responsible for failing
over to a standby database after the primary database fails.

If you perform the steps outlined in this file unnecessarily, you may corrupt
the entire database system.
********************************************************************************

Multiple Standby Database Configuration:

No.  Location         Type      IP Address
---  ---------------  --------  --------------
 1   San Francisco    Primary   128.1.124.25
 2   San Francisco    Standby   128.1.124.157
 3   Boston           Standby   136.132.1.55
 4   Los Angeles      Standby   145.23.82.16
 5   San Francisco    Standby   128.1.135.24

You are in system No. 3, which is located in Boston.

Perform the following steps to fail over to the most up-to-date and available
standby database:

1. Log on to the local standby database as a DBA.

   a) Log on with the following user name and password:

        username: Standby3
        password: zkc722Khn

   b) Invoke SQL*Plus as follows:

        % sqlplus

   c) Connect as the DBA as follows:

        CONNECT sys/s23LsdIc AS SYSDBA

2. Connect to as many remote systems as possible. You can connect to a
   maximum of four systems. System 4 does not have a firewall, so you can
   connect to it directly. Systems 1, 2, and 5 share the same firewall host.
   You need to go to the firewall host first and then connect to each system.

   The IP address for the firewall host is 128.1.1.100. Use the following
   user name and password:

        username: Disaster
        password: 82lhsIW32

3. Log on to as many remote systems as possible with the following user
   names and passwords:

   Login information:

   No.  Location         IP Address     username   password
   ---  ---------------  -------------  ---------  ----------
    1   San Francisco    128.1.124.25   Oracle9i   sdd290Ec
    2   San Francisco    128.1.124.157  Standby2   ei23nJHb
    3   (L o c a l)
    4   Los Angeles      145.23.82.16   Standby4   23HHoe2a
    5   San Francisco    128.1.135.24   Standby5   snc#$dnc

4. Invoke SQL*Plus on each remote system you are able to log on to as
   follows:

        % sqlplus

5. Connect to each remote database as follows:

        CONNECT sys/password AS SYSDBA

   The DBA passwords for each location are:

   No.  Location         Password
   ---  ---------------  -----------
    1   San Francisco    x2dwlsd91
    2   San Francisco    a239s1DAq
    3   (L o c a l)
    4   Los Angeles      owKL(@as23
    5   San Francisco    sad_KS13x

6. If you are able to log on to System 1, invoke SQL*Plus and issue the
   following statements:

        SQL> SHUTDOWN IMMEDIATE;
        SQL> STARTUP PFILE=PRMYinit.ora;

   Note: If you are able to execute the STARTUP statement successfully, the
   primary database has not been damaged. Do not continue with this
   procedure.

7. Issue the following SQL statements on each standby database (including
   the one on this system) that you were able to connect to:

        SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
        SQL> SHUTDOWN IMMEDIATE;
        SQL> SELECT THREAD#, MAX(SEQUENCE#) FROM V$LOG_HISTORY
          2> GROUP BY THREAD#;

   Compare the query results of each standby database. Fail over to the
   standby database with the largest sequence number.

8. On the standby database with the largest sequence number, invoke
   SQL*Plus and issue the following SQL statements:

        SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH;
        SQL> SHUTDOWN IMMEDIATE;
        SQL> STARTUP PFILE=Failover.ora;

------------End of Standby Database Disaster Recovery ReadMe File-------------
Reversing the roles of a primary and standby database is referred to as a switchover operation. The following steps outline what commands must be issued to perform a switchover operation.
For this discussion, boston
is initially the primary database and lonoon
is initially the standby database. During the following switchover scenario, lonoon
will become the primary database and boston
will become the standby database.
This scenario assumes that the primary and standby databases have been previously created and initialized.
Exclusive database access is required by the DBA before beginning a switchover operation. Ask users to log off the primary and standby databases or query the V$SESSION
view to identify users that are connected to the databases and close all open sessions except the SQL*Plus session from which you are going to issue the switchover command.
See Also:
Oracle9i Database Administrator's Guide for more information on managing users
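Two queries that may help before issuing the switchover command (a sketch; both are run on the primary database):

```
SQL> SELECT SID, PROCESS, PROGRAM FROM V$SESSION;
SQL> SELECT SWITCHOVER_STATUS FROM V$DATABASE;
```

A SWITCHOVER_STATUS value of TO STANDBY indicates that the primary database is ready to be switched to the standby role.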
On the primary database, boston
, execute the following statement:
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;
This statement does the following:

- Closes the primary database, terminating any active sessions
- Archives any unarchived redo log files and applies those logs to the standby database lonoon
- Adds an end-of-redo marker to the header of the last log file being archived

Execute the following statement on boston
:
SQL> SHUTDOWN NORMAL;
SQL> STARTUP NOMOUNT;
Execute the following statement on boston
:
SQL> ALTER DATABASE MOUNT STANDBY DATABASE;
Execute the following statement on lonoon after the final logs have been received and applied:
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
This statement converts the standby database to the primary database role.
Execute the following statement on lonoon:
SQL> SHUTDOWN;
Execute the following statement on lonoon:
SQL> STARTUP;
Execute the following ALTER DATABASE statement on the standby database, boston, to place it in managed recovery mode. The statement includes the DISCONNECT FROM SESSION clause, which starts a detached server process and immediately returns control to the user. (Note that this does not disconnect the current SQL session.)
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
Execute the following statement on the new primary database, lonoon:
SQL> ALTER SYSTEM ARCHIVE LOG START;
SQL> ALTER SYSTEM SWITCH LOGFILE;
When a standby database is in managed recovery mode, the standby database automatically applies archived redo logs as it receives them from the primary database. When the network goes down, automatic archival from the primary database to the standby database stops.
If the standby database is specified as an optional archive destination, then the primary database continues to operate normally.
When the network is up and running again, automatic archival of the archived redo logs from the primary database to the standby database resumes. However, if the standby database is specified as an optional archive destination, and a log switch occurred at the primary site, the standby database has an archive gap for the time when the network was down. The archive gap is automatically detected and resolved.
See Also:
Section 6.5 for information on archive gaps |
If the standby database is specified as a mandatory archive destination, then the primary database will not overwrite any redo logs until the network failure is resolved and the primary database is able to archive to the standby site.
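The difference between optional and mandatory destinations can be summarized with a small sketch. This is an illustrative Python model of the rule described above, not an Oracle API; the data structure is hypothetical.

```python
def can_reuse_online_log(dest_results):
    """dest_results: list of (binding, archived_ok) pairs for one online
    redo log group. The primary database may overwrite the group only when
    every MANDATORY destination has received its archived copy; OPTIONAL
    destinations may fall behind (leaving an archive gap to resolve later)."""
    return all(ok for binding, ok in dest_results if binding == "MANDATORY")

# Local mandatory copy succeeded, remote optional standby unreachable:
print(can_reuse_online_log([("MANDATORY", True), ("OPTIONAL", False)]))   # True
# Remote mandatory standby unreachable: the log cannot be reused.
print(can_reuse_online_log([("MANDATORY", True), ("MANDATORY", False)]))  # False
```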
The primary database may eventually stall if the network problem is not fixed in a timely manner, because the primary database will not be able to switch to an online redo log that has not been archived. You can issue the following SQL query to determine whether the primary database stalled because it was not able to switch to an online redo log:
SELECT decode(COUNT(*),0,'NO','YES') "switch_possible" FROM V$LOG WHERE ARCHIVED='YES';
If the query returns YES, a log switch is possible; if it returns NO, a log switch is not possible.
It is important to specify a local directory at the primary site as a mandatory archive destination, so that all of the archived redo logs reside on the same system as the primary database. When the primary system is unreachable, and the primary database is part of a multiple standby database configuration, you can try to identify the archived redo logs at the other standby sites.
This scenario describes how to recover after a network failure.
The V$ARCHIVE_DEST view contains the network error and identifies which standby database cannot be reached. On the primary database, issue the following SQL statement for the archived log destination that experienced the network failure. For example:
SQL> SELECT DEST_ID, STATUS, ERROR FROM V$ARCHIVE_DEST WHERE DEST_ID = 2;

   DEST_ID STATUS
---------- ---------
ERROR
-----------------------------------------------------------------
         2 ERROR
ORA-12224: TNS:no listener
The query results show that there are errors archiving to the standby database, and that the cause of the error is TNS:no listener. Check whether the listener on the standby site is started; if it is stopped, start it.
If you cannot solve the network problem quickly, and if the physical standby database is specified as a mandatory destination, try to prevent the database from stalling by doing one of the following:
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2 = DEFER;
When the network problem is resolved, you can enable the archive destination again:
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2 = ENABLE;
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2 = 'SERVICE=standby1
2> OPTIONAL REOPEN=60';
When the network problem is resolved, you can change the archive destination from optional back to mandatory:
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2 = 'SERVICE=standby1
2> MANDATORY REOPEN=60';
In some cases, you may not be the person responsible for correcting the problem. You can periodically query the V$ARCHIVE_DEST view to see if the problem has been resolved.
On the primary database, archive the current redo log:
SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;
When the network is back up again, log apply services can detect and resolve the archive gaps automatically if you place the standby database in managed recovery mode.
This scenario describes the case where you have failed over to a physical standby database and have begun using it as a normal primary database. After a period of time, you decide you want to switch back to the original primary system and make the primary database the standby database again.
% rcp STBYinit.ora PRMYHOST:fallback.ora
The fallback.ora file will become the standby initialization parameter file for the standby database at the original primary site.
Edit the fallback.ora file. You need to modify the following parameters:
The fallback.ora file looks as follows:
#
# parameter file fallback.ora
#
db_name=primary1          # The same as PRMYinit.ora
control_files=/fs2/oracle/stdby/cf1.ctl
lock_name_space=fallback
audit_trail=false
o7_dictionary_accessibility=false
global_names=false
db_domain=us.oracle.com
commit_point_strength=1
processes=30
sessions=30
transactions=21
transactions_per_rollback_segment=21
db_block_buffers=100
shared_pool_size=4000000
ifile=/oracle/work/tkinit.ora

# specific parameters for standby database
log_archive_format = r_%t_%s.arc
standby_archive_dest=/oracle/dbs/
log_archive_dest_1='LOCATION=/oracle/dbs/'
log_archive_dest_state_1 = ENABLE
db_file_name_convert=('/fs2/oracle/stdby','/oracle/dbs')
log_file_name_convert=('/fs2/oracle/stdby','/oracle/dbs')
log_archive_start=true
log_archive_trace=127
You need to supply appropriate values for the LOG_ARCHIVE_DEST_1 and LOG_ARCHIVE_DEST_STATE_1 initialization parameters.
SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;
SQL> SHUTDOWN IMMEDIATE;
% rcp /fs2/oracle/stdby/stdby_1_102.arc prmyhost:/oracle/dbs/r_1_102.arc
% rcp /fs2/oracle/stdby/stdby_1_103.arc prmyhost:/oracle/dbs/r_1_103.arc
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;
This statement converts the primary database to the physical standby database role.
Execute the following statements on the former primary instance:
SQL> SHUTDOWN NORMAL;
SQL> STARTUP NOMOUNT;
Execute the following statement on the primary database:
SQL> ALTER DATABASE MOUNT STANDBY DATABASE;
SQL> CONNECT SYS/SYS_PASSWORD AS SYSDBA
SQL> STARTUP NOMOUNT PFILE=fallback.ora;
SQL> ALTER DATABASE MOUNT STANDBY DATABASE;
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP PFILE=PRMYinit.ora;
By default, in managed recovery mode, the standby database automatically applies redo logs when they arrive from the primary database. But in some cases, you may not want the logs to be applied immediately, because you want to create a time lag between the archiving of a redo log at the primary site and the application of the log at the standby site. A time lag can protect against the transfer of corrupted or erroneous data from the primary site to the standby site.
For example, suppose you run a batch job every night on the primary database. Unfortunately, you accidentally ran the batch job twice, and you did not realize the mistake until the batch job completed for the second time. Ideally, you would roll back the database to the point in time before the batch job began. A primary database that has a standby database with a time lag (for example, 8 hours) could help you to recover: you could fail over to the time-lagged standby database and use it as the new primary database.
To create a standby database with a time lag, use the DELAY attribute of the LOG_ARCHIVE_DEST_n initialization parameter in the primary database initialization parameter file. The archived redo log files are still automatically copied from the primary site to the standby site, but the log files are not immediately applied to the standby database. The log files are applied when the specified time interval has expired.
This scenario uses a 4-hour time lag and covers the following topics:
Readers of this scenario are assumed to be familiar with the procedures for creating a typical standby database. The details have been omitted from the steps outlined in this scenario.
See Also:
Section 10.1.2 for details about standby database setup |
To create a standby database with a time lag, modify the LOG_ARCHIVE_DEST_n initialization parameter on the primary database to set a delay for the standby database. For example, to specify a 4-hour delay, set the parameter as follows:
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=standby1 DELAY=240';
The DELAY attribute indicates that the archived redo logs at the standby site are not available for recovery until the 4-hour time interval has expired. The time interval (expressed in minutes) starts when the archived redo logs are successfully transmitted to the standby site. The redo information is still sent to the standby database and written to disk as normal.
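The timing rule can be stated directly: a log received at time t becomes eligible for recovery at t + DELAY minutes. A small sketch (illustrative only; the function name is not an Oracle API):

```python
from datetime import datetime, timedelta

def apply_eligible_at(received_at, delay_minutes=240):
    """With DELAY=240, an archived log transmitted to the standby site at
    received_at becomes eligible for recovery only after the 4-hour
    interval expires. DELAY counts from successful transmission, not from
    the log switch on the primary database."""
    return received_at + timedelta(minutes=delay_minutes)

# A log received at 01:00 with DELAY=240 cannot be applied before 05:00.
print(apply_eligible_at(datetime(2002, 2, 12, 1, 0)))
```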
The runtime scenario of a standby database with a time lag is slightly different from a standby database with no time lag, because the application of the online redo logs lags behind.
As the DBA, you need to keep the following tasks in mind when managing a standby database with a time lag:
Set the STANDBY_FILE_MANAGEMENT initialization parameter in the primary database initialization parameter file to ensure that when a datafile is created on the primary site, it is also created on the standby site. The section titled "Enabling the Automatic Creation of Datafiles on the Standby Database" shows the step-by-step procedure for automatically creating a datafile in the standby database.

In the case stated at the beginning of this scenario, you may want to take advantage of the time lag and obtain a primary database whose status is a specified time (for example, 4 hours) before the current primary database. You can fail over to the appropriate time-lagged standby database as follows:
SQL> ALTER DATABASE ACTIVATE STANDBY DATABASE;
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP PFILE=Failover.ora;
If you do not want to take advantage of the time lag, you can switch over to the standby database as a normal standby database with no time lag as follows:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE NODELAY DISCONNECT FROM SESSION THROUGH LAST SWITCHOVER;
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
SQL> SHUTDOWN;
SQL> STARTUP PFILE=Failover.ora;
This scenario describes using a physical standby database to back up the primary database. This off-loads the backup task from the primary site to the standby site. The backup at the standby site can be done while the standby database is in managed recovery mode. When the primary database needs to be restored, you can use the backup created at the standby site if you choose not to fail over to the standby database.
The standby database can be used to back up the datafiles and the archived redo logs. The primary control file must be backed up at the primary site because the standby database uses a different standby control file. Because the control file is usually much smaller than the datafiles, backing up the control file at the primary site does not significantly affect the performance of the primary database.
To back up a database at the standby site, start RMAN with the NOCATALOG option to back up the standby datafiles and log files. Assume that stby is the connect string of the standby database. For example:
% rman target sys/change_on_install@stby nocatalog

connected to target database: ADE3 (DBID=1417165739)
using target database controlfile instead of recovery catalog
RMAN> run {
2> allocate channel c1 type disk;
3> backup database;
4> }

allocated channel: c1
channel c1: sid=13 devtype=DISK

Starting backup at 07-NOV-00
channel c1: starting full datafile backupset
channel c1: specifying datafile(s) in backupset
input datafile fno=00001 name=/oracle/dbs/s3t_db1.dbf
channel c1: starting piece 1 at 07-NOV-00
channel c1: finished piece 1 at 07-NOV-00
piece handle=/oracle/dbs/04c9t2ki_1_1 comment=NONE
channel c1: backup set complete, elapsed time: 00:00:35
Finished backup at 07-NOV-00

Starting Control File Autobackup at 07-NOV-00
warning - controlfile is not current, controlfile autobackup skipped
Finished Control File Autobackup at 07-NOV-00
released channel: c1
After you back up a database, Oracle Corporation recommends that you record the current sequence number and thread number applied at the standby site. This is important because the archived logs generated after this backup may be needed to restore the database at the primary site if the primary database loses its local archived logs.
For example, suppose there is only one thread and the sequence number at the time of the database backup is 15. After several new logs are archived to the standby site, you can back up these archived logs using the following commands:
RMAN> run {
2> allocate channel c1 type disk;
3> backup archivelog from logseq 15;
4> }

allocated channel: c1
channel c1: sid=9 devtype=DISK

Starting backup at 07-NOV-00
channel c1: starting archive log backupset
channel c1: specifying archive log(s) in backup set
input archive log thread=1 sequence=15 recid=15 stamp=413043150
input archive log thread=1 sequence=16 recid=16 stamp=413043168
input archive log thread=1 sequence=17 recid=17 stamp=413043433
input archive log thread=1 sequence=18 recid=18 stamp=413043442
input archive log thread=1 sequence=19 recid=19 stamp=413043450
input archive log thread=1 sequence=20 recid=20 stamp=413043454
channel c1: starting piece 1 at 07-NOV-00
channel c1: finished piece 1 at 07-NOV-00
piece handle=/oracle/dbs/05c9t3c9_1_1 comment=NONE
channel c1: backup set complete, elapsed time: 00:00:03
Finished backup at 07-NOV-00

Starting Control File Autobackup at 07-NOV-00
warning - controlfile is not current, controlfile autobackup skipped
Finished Control File Autobackup at 07-NOV-00
released channel: c1
You can verify the backup sets by issuing the list command:
RMAN> list backup;

List of Backup Sets
===================
BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
4       Full    104M       DISK        00:00:29     07-NOV-00
        BP Key: 4   Status: AVAILABLE   Tag:
        Piece Name: /oracle/dbs/04c9t2ki_1_1
  List of Datafiles in backup set 4
  File LV Type Ckp SCN    Ckp Time  Name
  ---- -- ---- ---------- --------- ----
  1       Full 73158      07-NOV-00 /oracle/dbs/s3t_db1.dbf

BS Key  Device Type Elapsed Time Completion Time
------- ----------- ------------ ---------------
5       DISK        00:00:02     07-NOV-00
        BP Key: 5   Status: AVAILABLE   Tag:
        Piece Name: /oracle/dbs/05c9t3c9_1_1
  List of Archived Logs in backup set 5
  Thrd Seq     Low SCN    Low Time  Next SCN   Next Time
  ---- ------- ---------- --------- ---------- ---------
  1    15      73139      07-NOV-00 73156      07-NOV-00
  1    16      73156      07-NOV-00 73158      07-NOV-00
  1    17      73158      07-NOV-00 73163      07-NOV-00
  1    18      73163      07-NOV-00 73165      07-NOV-00
  1    19      73165      07-NOV-00 73166      07-NOV-00
  1    20      73166      07-NOV-00 73167      07-NOV-00
You can also use the BACKUP TABLESPACE and BACKUP DATAFILE commands to back up individual tablespaces and datafiles.

To restore the backup at the primary site, take the following steps:
See Also:
Oracle9i Recovery Manager User's Guide for instructions on setting up a catalog database |
Assume that prmy is the connect string of the primary database, and rcat is the connect string of the recovery catalog database.
% rman target sys/change_on_install@prmy catalog rman/rman@rcat

connected to target database: ADE3 (DBID=1417165739)
connected to recovery catalog database

RMAN> resync catalog from controlfilecopy 'scf3.ctl';
RMAN> list backup;

starting full resync of recovery catalog
full resync complete

List of Backup Sets
===================
BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
182     Full    104M       DISK        00:00:29     07-NOV-00
        BP Key: 184   Status: AVAILABLE   Tag:
        Piece Name: /oracle/dbs/04c9t2ki_1_1
  List of Datafiles in backup set 182
  File LV Type Ckp SCN    Ckp Time  Name
  ---- -- ---- ---------- --------- ----
  1       Full 73158      07-NOV-00 /oracle/dbs/t_db1.dbf

BS Key  Device Type Elapsed Time Completion Time
------- ----------- ------------ ---------------
183     DISK        00:00:02     07-NOV-00
        BP Key: 185   Status: AVAILABLE   Tag:
        Piece Name: /oracle/dbs/05c9t3c9_1_1
  List of Archived Logs in backup set 183
  Thrd Seq     Low SCN    Low Time  Next SCN   Next Time
  ---- ------- ---------- --------- ---------- ---------
  1    15      73139      07-NOV-00 73156      07-NOV-00
  1    16      73156      07-NOV-00 73158      07-NOV-00
  1    17      73158      07-NOV-00 73163      07-NOV-00
  1    18      73163      07-NOV-00 73165      07-NOV-00
  1    19      73165      07-NOV-00 73166      07-NOV-00
  1    20      73166      07-NOV-00 73167      07-NOV-00
RMAN> run {
2> allocate channel c1 type disk;
3> restore database;
4> }

allocated channel: c1
channel c1: sid=11 devtype=DISK

Starting restore at 07-NOV-00
channel c1: starting datafile backupset restore
channel c1: specifying datafile(s) to restore from backup set
restoring datafile 00001 to /oracle/dbs/t_db1.dbf
channel c1: restored backup piece 1
piece handle=/oracle/dbs/04c9t2ki_1_1 tag=null params=NULL
channel c1: restore complete
Finished restore at 07-NOV-00
released channel: c1
After restoring the database, you can try to open the database:
SQL> ALTER DATABASE OPEN;
alter database open
*
ERROR at line 1:
ORA-01113: file 1 needs media recovery
ORA-01110: data file 1: '/oracle/dbs/t_db1.dbf'
The ORA-01113 error indicates that media recovery is required on the database before you can open it. Issue the manual recovery statement, RECOVER DATABASE, as follows:
SQL> RECOVER DATABASE;
ORA-00279: change 73158 generated at 11/07/2000 14:12:47 needed for thread 1
ORA-00289: suggestion : /oracle/dbs/db11_17.dbf
ORA-00280: change 73158 for thread 1 is in sequence #17

Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
The resulting error messages indicate that the database needs log sequence #17 for recovery. You can provide logs in several ways:
RMAN> run {
2> allocate channel c1 type disk;
3> restore archivelog from logseq 15;
4> }

allocated channel: c1
channel c1: sid=11 devtype=DISK

Starting restore at 07-NOV-00
channel c1: starting archive log restore to default destination
channel c1: restoring archive log
archive log thread=1 sequence=15
channel c1: restoring archive log
archive log thread=1 sequence=16
channel c1: restoring archive log
archive log thread=1 sequence=17
channel c1: restoring archive log
archive log thread=1 sequence=18
channel c1: restoring archive log
archive log thread=1 sequence=19
channel c1: restoring archive log
archive log thread=1 sequence=20
channel c1: restored backup piece 1
piece handle=/oracle/dbs/05c9t3c9_1_1 tag=null params=NULL
channel c1: restore complete
Finished restore at 07-NOV-00
released channel: c1
After restoring the archived logs, you can recover the database and then open it.
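The recovery starting point that Oracle computed above (sequence #17 for change 73158) follows directly from the SCN ranges in the backup listing; media recovery must begin with the log whose SCN range contains the datafile checkpoint SCN. A sketch of that lookup, using illustrative tuples rather than a real catalog query:

```python
def first_log_needed(ckp_scn, logs):
    """logs: (sequence#, low_scn, next_scn) per archived log. Recovery
    starts with the log whose half-open SCN range [low_scn, next_scn)
    contains the datafile checkpoint SCN, then proceeds in order."""
    for seq, low_scn, next_scn in sorted(logs):
        if low_scn <= ckp_scn < next_scn:
            return seq
    return None

# SCN ranges from the LIST BACKUP output: change 73158 falls in sequence 17.
print(first_log_needed(73158, [(15, 73139, 73156), (16, 73156, 73158),
                               (17, 73158, 73163), (18, 73163, 73165)]))  # 17
```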
This section presents several scenarios describing the configuration and maintenance of a logical standby database. The scenarios discussed in this section include the following:
The scenarios in this section assume that a primary and logical standby database have been configured according to the guidelines in Chapter 4 unless otherwise noted. Also, assume that the logical standby databases described in the scenarios have been configured to support the maximum protection mode.
Table 10-2 lists the identifier values used in the scenario examples.
As you plan your new logical standby database configuration, you must take into consideration the database objects in the primary database that cannot be supported by the logical standby database. For example, the LONG datatype is unsupported in a logical standby database, so a table defined with a LONG column cannot be maintained there.
See Also:
Section 4.1 for a list of unsupported datatypes, tables, and other information about database objects |
Operating a logical standby database when the primary database contains unsupported objects requires careful planning, because ignoring unsupported objects can stop log apply services on the standby database. In addition to identifying unsupported datatypes, you should identify transactions that depend on other transactions that reference unsupported database objects, and define actions to correct these dependencies. Assuming that the dependencies have been identified, you might handle these transactions by defining filters that ignore (skip) all activity to the dependent objects, or by defining filters that ignore all activity in the entire schema.
This section provides the following scenarios that define filters to accomplish these tasks:
These scenarios make the following assumptions:

- The MOVIES schema contains information about movies, including audio and video data.
- The schema includes a procedure, GET_AVDATA, that is used for the purposes of streaming movies.
- DML transactions involving unsupported objects (such as the AUDIO table) are automatically skipped.
procedure to off-load the media streaming workload from the HQ database. The DBA needs to filter redo data coming from the primary database because the AUDIO
table in the MOVIES
schema contains the LONG RAW
datatype (an unsupported datatype). Thus, the following scenario describes how to skip activity occurring to the MOVIE
schema. Note that any DML transaction involving the AUDIO
table is automatically skipped.
To work around this datatype problem, perform the following steps on the logical standby database:
SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
Define a filter that skips DML transactions for the MOVIES schema:
SQL> EXECUTE DBMS_LOGSTDBY.SKIP('DML', 'MOVIES', '%', null);
Define a filter that skips DDL transactions for the MOVIES schema:
SQL> EXECUTE DBMS_LOGSTDBY.SKIP('SCHEMA_DDL', 'MOVIES', '%', null);
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY;
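The skip rules defined above match schema and object names with the SQL '%' wildcard. A sketch of that matching semantics in Python (this models the rule behavior for illustration; it is not the actual DBMS_LOGSTDBY implementation):

```python
import re

def _like(pattern, value):
    """Translate the SQL '%' wildcard into a regular expression match."""
    return re.fullmatch(re.escape(pattern).replace("%", ".*"), value) is not None

def should_skip(rules, stmt_type, schema, name):
    """rules: (statement_option, schema_pattern, object_pattern) triples,
    mirroring the first three arguments of DBMS_LOGSTDBY.SKIP."""
    return any(t == stmt_type and _like(s, schema) and _like(n, name)
               for t, s, n in rules)

# The two filters from this scenario:
rules = [("DML", "MOVIES", "%"), ("SCHEMA_DDL", "MOVIES", "%")]
print(should_skip(rules, "DML", "MOVIES", "AUDIO"))   # True
print(should_skip(rules, "DML", "SALES", "ORDERS"))   # False
```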
The logical standby database in this scenario off-loads the primary database by handling queries consisting solely of movie information. Because there is no reason to maintain video data without the audio, this scenario skips all activity involving the VIDEO table. To do this, perform the following steps on the logical standby database:
SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
Use the DBMS_LOGSTDBY.SKIP procedure to define a filter that skips DML transactions involving the VIDEO table in the MOVIES schema:
SQL> EXECUTE DBMS_LOGSTDBY.SKIP('DML', 'MOVIES', 'VIDEO', null);
Define a filter that skips DDL transactions involving the VIDEO table in the MOVIES schema:
SQL> EXECUTE DBMS_LOGSTDBY.SKIP('SCHEMA_DDL', 'MOVIES', 'VIDEO', null);
Define a filter that skips DDL transactions involving the AUDIO table in the MOVIES schema:
SQL> EXECUTE DBMS_LOGSTDBY.SKIP('SCHEMA_DDL', 'MOVIES', 'AUDIO', null);
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY;
This scenario does not define a DML filter against the AUDIO table, because DML on that table is automatically ignored by the logical standby database. However, you must define a filter for DDL transactions, because schema DDL transactions are permitted by default on unsupported objects. This could introduce a problem for a logical standby database in a case where the AUDIO table is modified on the primary database such that a new column, CONVERTED_DATA, of type BLOB is added, the contents of the DATA column are migrated to the CONVERTED_DATA column, and the DATA column is dropped. This would qualify the AUDIO table for support on the logical standby database.

However, during the conversion, the AUDIO table on the standby database was unsupported, meaning that its CONVERTED_DATA column is empty. Subsequent DML transactions on the now-supported AUDIO table will fail, because they will not find data in the CONVERTED_DATA column. Defining a filter that skips all DDL transactions prevents such redefinitions of the AUDIO table. To enable support for a table in a case such as this, see Section 10.2.2, "Scenario 2: Creating or Re-creating a Table".
To create a table on a logical standby database, you use the DBMS_LOGSTDBY.INSTANTIATE_TABLE procedure.
See Also:
Oracle9i Supplied PL/SQL Packages and Types Reference for more information about the DBMS_LOGSTDBY package. |
This procedure creates or re-creates a table on a logical standby database from an existing table definition on a primary database. In addition to creating a table, the procedure also imports data from the primary table.
Typically, you use table instantiation to recover a table that has experienced an invalid DML operation, such as a NOLOGGING DML transaction, but you can also use the procedure to enable support on a table that was formerly unsupported. That is, the DBMS_LOGSTDBY.INSTANTIATE_TABLE procedure offers a way to configure or reconfigure a table for support by a logical standby database, without having to back up and restore the database.
Before you can create a table, it must meet the requirements described in Section 4.1.
The steps described in this scenario ensure that the logical standby database will be able to support a table on the primary database. This scenario creates the AUDIO table in the MOVIES schema on the SAT standby database using the definitions and data found on the HQ primary database. The scenario assumes that the metadata definition for the AUDIO table remains unmodified throughout the creation process.
To create a table, perform the following steps on the logical standby database:
SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
Remove any existing skip rules defined for the AUDIO table:
SQL> SELECT * FROM DBA_LOGSTDBY_SKIP;

ERROR STATEMENT_OPT        OWNER    NAME     PROC
---------------------------------------------------------------------
N     SCHEMA_DDL           MOVIES   AUDIO

SQL> EXECUTE DBMS_LOGSTDBY.UNSKIP('SCHEMA_DDL', 'MOVIES', 'AUDIO');
Re-create the AUDIO table in the MOVIES schema using the database link to the HQ primary database:
SQL> EXECUTE DBMS_LOGSTDBY.INSTANTIATE_TABLE('AUDIO', 'MOVIES', 'HQ_NET');
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY;
This scenario describes a failover operation in a configuration with the maximum availability mode. This mode is the highest grade of data protection supported by a Data Guard configuration that contains logical standby databases.
In a disaster, the biggest task is to determine which logical standby database is the best target for the failover operation. While there are many environmental factors that affect which is the best target standby database, this scenario assumes these things to be equal for the purpose of emphasizing data loss assessment. Once you identify a target standby database, initiating a failover operation is a matter of issuing a command.
See Also:
Section 5.7.2 for more information about the maximum availability protection mode for logical standby databases |
This scenario starts out with a Data Guard configuration consisting of the HQ primary database and the SAT standby database, and adds a logical standby database named NYC. The following table provides information about the values used for the new NYC logical standby database.
To do this, perform the following steps:
SQL> CONNECT SYS/CHANGE_ON_INSTALL AS SYSDBA;
Query the DBA_LOGSTDBY_PROGRESS view. For example:
SQL> SELECT APPLIED_SCN, NEWEST_SCN FROM DBA_LOGSTDBY_PROGRESS;

APPLIED_SCN NEWEST_SCN
----------- ----------
     144059     144059
Query the DBA_LOGSTDBY_LOG view. For example:
SQL> SELECT SUBSTR(FILE_NAME,1,25) FILE_NAME, SUBSTR(SEQUENCE#,1,4) "SEQ#", -
>    FIRST_CHANGE#, NEXT_CHANGE#, TO_CHAR(TIMESTAMP, 'HH:MM:SS') TIMESTAMP, -
>    DICT_BEGIN BEG, DICT_END END, SUBSTR(THREAD#,1,4) "THR#" -
>    FROM DBA_LOGSTDBY_LOG ORDER BY SEQUENCE#;

FILE_NAME                 SEQ# FIRST_CHANGE# NEXT_CHANGE# TIMESTAM BEG END THR#
------------------------- ---- ------------- ------------ -------- --- --- ----
/oracle/dbs/nq_sat_2.log  2           101579       101588 11:02:57 NO  NO  1
/oracle/dbs/nq_sat_3.log  3           101588       142065 11:02:01 NO  NO  1
/oracle/dbs/nq_sat_4.log  4           142065       142307 11:02:09 NO  NO  1
/oracle/dbs/nq_sat_5.log  5           142307       142739 11:02:47 YES YES 1
/oracle/dbs/nq_sat_6.log  6           142739       143973 12:02:09 NO  NO  1
/oracle/dbs/nq_sat_7.log  7           143973       144042 01:02:00 NO  NO  1
/oracle/dbs/nq_sat_8.log  8           144042       144051 01:02:00 NO  NO  1
/oracle/dbs/nq_sat_9.log  9           144051       144054 01:02:15 NO  NO  1
/oracle/dbs/nq_sat_10.log 10          144054       144057 01:02:20 NO  NO  1
/oracle/dbs/nq_sat_11.log 11          144057       144060 01:02:25 NO  NO  1
/oracle/dbs/nq_sat_13.log 13          144089       144147 01:02:40 NO  NO  1
Notice the gap in the sequence numbers in the SEQ# column; in this example, the gap indicates that the SAT database is missing archived redo log number 12.
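The gap the DBA spots by eye in the SEQ# column can also be computed from the registered sequence numbers. A small illustrative sketch (not an Oracle utility):

```python
def find_archive_gaps(sequences):
    """Return the log sequence numbers missing between the lowest and
    highest archived logs registered at a standby site (one thread)."""
    have = set(sequences)
    return [n for n in range(min(have), max(have) + 1) if n not in have]

# SEQ# values from the DBA_LOGSTDBY_LOG query on the SAT database:
print(find_archive_gaps([2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 13]))  # [12]
```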
SQL> CONNECT SYS/CHANGE_ON_INSTALL AS SYSDBA;
Query the DBA_LOGSTDBY_PROGRESS view:
SQL> SELECT APPLIED_SCN, NEWEST_SCN FROM DBA_LOGSTDBY_PROGRESS;

APPLIED_SCN NEWEST_SCN
----------- ----------
     143970     144146
SQL> SELECT SUBSTR(FILE_NAME,1,25) FILE_NAME, SUBSTR(SEQUENCE#,1,4) "SEQ#", -
>    FIRST_CHANGE#, NEXT_CHANGE#, TO_CHAR(TIMESTAMP, 'HH:MM:SS') TIMESTAMP, -
>    DICT_BEGIN BEG, DICT_END END, SUBSTR(THREAD#,1,4) "THR#" -
>    FROM DBA_LOGSTDBY_LOG ORDER BY SEQUENCE#;

FILE_NAME                 SEQ# FIRST_CHANGE# NEXT_CHANGE# TIMESTAM BEG END THR#
------------------------- ---- ------------- ------------ -------- --- --- ----
/oracle/dbs/hq_nyc_2.log  2           101579       101588 11:02:58 NO  NO  1
/oracle/dbs/hq_nyc_3.log  3           101588       142065 11:02:02 NO  NO  1
/oracle/dbs/hq_nyc_4.log  4           142065       142307 11:02:10 NO  NO  1
/oracle/dbs/hq_nyc_5.log  5           142307       142739 11:02:48 YES YES 1
/oracle/dbs/hq_nyc_6.log  6           142739       143973 12:02:10 NO  NO  1
/oracle/dbs/hq_nyc_7.log  7           143973       144042 01:02:11 NO  NO  1
/oracle/dbs/hq_nyc_8.log  8           144042       144051 01:02:01 NO  NO  1
/oracle/dbs/hq_nyc_9.log  9           144051       144054 01:02:16 NO  NO  1
/oracle/dbs/hq_nyc_10.log 10          144054       144057 01:02:21 NO  NO  1
/oracle/dbs/hq_nyc_11.log 11          144057       144060 01:02:26 NO  NO  1
/oracle/dbs/hq_nyc_12.log 12          144060       144089 01:02:30 NO  NO  1
/oracle/dbs/hq_nyc_13.log 13          144089       144147 01:02:41 NO  NO  1
%cp /net/nyc/oracle/dbs/hq_nyc_12.log /net/sat/oracle/dbs/hq_sat_12.log
Check the partially transmitted archived redo log on the SAT system, hq_sat_14.log:
% ls -l /net/sat/oracle/dbs/hq_sat_14.log
-rw-rw----   1 oracle   dbs   333280 Feb 12  1:03 hq_sat_14.log
SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
SQL> ALTER DATABASE REGISTER LOGICAL LOGFILE '/oracle/dbs/hq_sat_12.log';
SQL> ALTER DATABASE REGISTER LOGICAL LOGFILE '/oracle/dbs/hq_sat_14.log';
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY;
Check the corresponding partial archived redo log on the NYC system (hq_nyc_14). For example:
% ls -l /net/nyc/oracle/dbs/hq_nyc_14.log
-rw-rw----   1 oracle   dbs   333330 Feb 12  1:03 hq_nyc_14.log
SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
SQL> ALTER DATABASE REGISTER LOGICAL LOGFILE '/oracle/dbs/hq_nyc_14.log';
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY;
Query the DBA_LOGSTDBY_PROGRESS view to see if the value of the APPLIED_SCN column is equal to the value of the NEWEST_SCN column:
SQL> SELECT APPLIED_SCN, NEWEST_SCN FROM DBA_LOGSTDBY_PROGRESS;

APPLIED_SCN NEWEST_SCN
----------- ----------
     144200     144200
Because the SCN values match, you can be assured that there is no longer a delay (lag) between the primary database's current log and the last log received and applied by the SAT database.
Query the DBA_LOGSTDBY_PROGRESS view to see if the value of the APPLIED_SCN column is equal to the value of the NEWEST_SCN column:
SQL> SELECT APPLIED_SCN, NEWEST_SCN FROM DBA_LOGSTDBY_PROGRESS;

APPLIED_SCN NEWEST_SCN
----------- ----------
     144205     144205
Because the SCN values match, you can be assured that there is no longer a delay (lag) between the primary database's current log and the last log received and applied by the NYC database.
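When comparing candidate standby databases, the SCN figures gathered above translate into a simple preference: the standby that has received and applied the most redo loses the least data at failover time. A sketch of that comparison, using the example's names and SCN values (illustrative only, not an Oracle tool):

```python
def pick_failover_target(progress):
    """progress: {standby_name: (applied_scn, newest_scn)} taken from each
    standby's DBA_LOGSTDBY_PROGRESS view. Prefer the highest NEWEST_SCN
    (least data loss); break ties on APPLIED_SCN (least catch-up work)."""
    return max(progress, key=lambda name: (progress[name][1], progress[name][0]))

print(pick_failover_target({"SAT": (144200, 144200),
                            "NYC": (144205, 144205)}))  # NYC
```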
In most cases, the logical standby database you choose as a failover target should strike a balance between data loss and performance. As you analyze the information gathered above to decide on the best failover candidate in this scenario, consider how much redo data each standby database has received and applied.
Once you determine which standby database is going to be the failover target, initiate a failover operation by connecting to the standby database and issuing the following statements:
SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
SQL> ALTER DATABASE ACTIVATE LOGICAL STANDBY DATABASE;
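After the ACTIVATE statement completes, you can confirm that the role transition took effect by querying the V$DATABASE view on the activated database, as elsewhere in this chapter. This is a sketch of the check; assuming the failover succeeded, the DATABASE_ROLE column should now report PRIMARY:

```sql
SQL> SELECT NAME, DATABASE_ROLE FROM V$DATABASE;
```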
This scenario presents the requirements and procedures to ensure a successful switchover of roles between a primary database and a logical standby database. During a switchover operation, Data Guard takes measures to prevent data divergence during the role transition.
The scenario includes information about setting up initialization parameter files prior to initiating the switchover operation. In some cases, a switchover operation requires that you reconfigure the original primary database as a standby database to the new primary database. However, it is possible to eliminate this requirement in configurations with multiple standby databases without violating the database protection mode.
See Also:
Section 5.7 for more information about data protection modes
Performing a switchover operation without this requirement is the most likely scenario, because it allows a former primary database to undergo routine maintenance operations without downtime or data protection penalties.
sat_init.ora, to ensure that the SAT database can function as a primary database, as follows:

The following example shows the LOG_ARCHIVE_DEST_n and LOG_ARCHIVE_DEST_STATE_n initialization parameters in the sat_init.ora parameter file:
LOG_ARCHIVE_DEST_1 = 'LOCATION=SAT MANDATORY REOPEN=5'
LOG_ARCHIVE_DEST_2 = 'SERVICE=HQ_NET LGWR SYNC AFFIRM MANDATORY NOREGISTER'
LOG_ARCHIVE_DEST_STATE_1 = ENABLE
LOG_ARCHIVE_DEST_STATE_2 = DEFER
LOG_ARCHIVE_FORMAT = "_SAT_%s"
STANDBY_ARCHIVE_DEST = 'HQ'
hq_init.ora, for the HQ database, to ensure that the HQ database can function as a logical standby database. Modify this file as follows:
The following example shows the relevant parameters in the hq_init.ora initialization parameter file:
LOG_ARCHIVE_DEST_1 = 'LOCATION=HQ MANDATORY REOPEN=5'
LOG_ARCHIVE_DEST_2 = 'SERVICE=SAT_NET LGWR SYNC AFFIRM MANDATORY NOREGISTER'
LOG_ARCHIVE_DEST_STATE_1 = ENABLE
LOG_ARCHIVE_DEST_STATE_2 = ENABLE
LOG_ARCHIVE_FORMAT = "_HQ_%s"
STANDBY_ARCHIVE_DEST = 'SAT'
LOG_PARALLELISM = 1
PARALLEL_MAX_SERVERS = 9
SHARED_POOL_SIZE = 167772160
To switch over from the HQ database to the SAT database, perform the following steps:
V$DATABASE view. For example:

SQL> SELECT NAME, DATABASE_ROLE FROM V$DATABASE;

NAME      DATABASE_ROLE
--------- ----------------
HQ        PRIMARY
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO LOGICAL STANDBY;
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2 = DEFER;
SQL> SELECT NAME, SUBSTR(VALUE,1,30) "Value" -
> FROM SYSTEM.LOGSTDBY$PARAMETERS@SAT_LINK ORDER BY NAME;

NAME                           Value
------------------------------ ------------------------------
FIRST_SCN                      101579
LMNR_SID                       1
PRIMARY                        1457871294
TRANSACTION_CONSISTENCY        FULL
_SYNCPOINT_INTERVAL            0
SQL> EXECUTE DBMS_LOGSTDBY.APPLY_SET('TRANSACTION_CONSISTENCY', 'FULL');
COMPLETED_SESSION record to display in the DBA_LOGSTDBY_PARAMETERS table. This indicates that the last of the archived redo logs from the primary database has been applied. For example:
SQL> SELECT NAME, SUBSTR(VALUE,1,30) "VALUE" FROM -
> DBA_LOGSTDBY_PARAMETERS WHERE NAME='COMPLETED_SESSION';

NAME                           VALUE
------------------------------ ------------------------------
COMPLETED_SESSION              SWITCHOVER
V$DATABASE view, as follows:

SQL> SELECT NAME, DATABASE_ROLE FROM V$DATABASE;

NAME      DATABASE_ROLE
--------- ----------------
SAT       LOGICAL STANDBY
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2 = ENABLE;
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO LOGICAL PRIMARY;
V$DATABASE view:

SQL> SELECT NAME, DATABASE_ROLE FROM V$DATABASE;

NAME      DATABASE_ROLE
--------- ----------------
SAT       PRIMARY
On the HQ (logical standby) database, start log apply services on the new standby database using a database link to the new primary database, as follows:
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY NEW PRIMARY SAT_LINK;
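As a final check (a sketch, not part of the documented switchover procedure), you can verify that log apply services are running on the new standby database by querying the V$LOGSTDBY view, which lists the logical standby processes and their current activity:

```sql
SQL> SELECT TYPE, STATUS FROM V$LOGSTDBY;
```

If rows are returned for the coordinator and apply processes, log apply services are active on the new standby database.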
Copyright © 1999, 2002 Oracle Corporation. All Rights Reserved.