|Oracle® Real Application Clusters Administrator's Guide
10g Release 1 (10.1)
Part Number B10765-01
This chapter explains how to configure Recovery Manager (RMAN) for use in Oracle Real Application Clusters (RAC) environments. It also provides procedures for using RMAN for archiving in RAC environments and discusses considerations for redo logs and archived redo logs.
Recovery Manager (RMAN) enables you to back up, copy, restore, and recover datafiles, control files, server parameter files (SPFILEs) and archived redo logs. It is included with the Oracle server and does not require separate installation. You can run RMAN from the command line or use RMAN in the Backup Manager in Oracle Enterprise Manager.
The snapshot control file is a temporary copy of the control file that RMAN creates so that it can resynchronize from a read-consistent version of the control file. RMAN needs a snapshot control file only when resynchronizing with the recovery catalog or when making a backup of the current control file. In RAC, the snapshot control file is needed only on the nodes where RMAN performs backups.
You can specify a cluster file system file or a raw device destination for the location of your snapshot control file so that this file is shared across all nodes in the cluster. You must ensure that this file physically or logically exists on all nodes that are used for backup and restore activities. Run the following RMAN command to determine the configured location of the snapshot control file:
SHOW SNAPSHOT CONTROLFILE NAME;
If needed, you can change the configured location of the snapshot control file. For example, on UNIX-based systems you can specify the snapshot control file location as $ORACLE_HOME/dbs/scf/snap_prod.cf by entering the following at the RMAN prompt:
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '$ORACLE_HOME/dbs/scf/snap_prod.cf';
This command globally sets the configuration for the location of the snapshot control file throughout your cluster database. Therefore, ensure that the directory $ORACLE_HOME/dbs/scf exists on all nodes that perform backups.
The CONFIGURE command creates persistent settings across RMAN sessions. Therefore, you do not need to run this command again unless you want to change the location of the snapshot control file. Refer to Oracle Database Recovery Manager Reference for more information about configuring the snapshot control file.
If you set CONFIGURE CONTROLFILE AUTOBACKUP to ON, then RMAN automatically creates a control file and an SPFILE backup after you run the BACKUP or COPY commands. RMAN can also automatically restore an SPFILE if this is required to start an instance to perform recovery. This means that the default location for the SPFILE must be available to all nodes in your RAC database.
These features are important in disaster recovery because RMAN can restore the control file even without a recovery catalog. RMAN can restore an autobackup of the control file even after the loss of both the recovery catalog and the current control file. You can change the default name that RMAN gives to this file with the CONFIGURE CONTROLFILE AUTOBACKUP FORMAT command. Note that if you specify an absolute path name in this command, then this path must exist identically on all nodes that participate in backups.
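As a sketch, assuming a shared directory /ocfs/cf_backup that is visible to all nodes (a hypothetical path), you could enable autobackups and give them a cluster-wide location as follows:

```
# Run once from any node; CONFIGURE settings persist across RMAN sessions.
CONFIGURE CONTROLFILE AUTOBACKUP ON;

# %F is the RMAN substitution variable for autobackup names.
# /ocfs/cf_backup is a hypothetical shared path; it must exist
# identically on every node that participates in backups.
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/ocfs/cf_backup/%F';
```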
RMAN performs the control file autobackup on the first allocated channel. Therefore, when you allocate multiple channels with different parameters, especially when you allocate a channel with the CONNECT command, determine which channel will perform the control file autobackup, and always allocate the channel for that node first. Refer to the Oracle Database Backup and Recovery Advanced User's Guide for more information about using the control file autobackup feature.
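For example, a backup script along the following lines (the service names and the sbt device type are hypothetical) makes the autobackup channel explicit by allocating it first:

```
RUN
{
  # ch1 is allocated first, so it performs the control file autobackup;
  # the connect strings are assumptions for illustration only.
  ALLOCATE CHANNEL ch1 DEVICE TYPE sbt CONNECT '@node1_service';
  ALLOCATE CHANNEL ch2 DEVICE TYPE sbt CONNECT '@node2_service';
  BACKUP DATABASE;
}
```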
After configuring the RMAN snapshot control file location and enabling the RMAN control file autobackup feature, you can decide how to configure your environment to manage the archived redo logs that each node generates. When a node generates an archived redo log, Oracle always records the filename of the log in the control file of the target database. If you are using a recovery catalog, then RMAN also records the archived redo log filenames in the recovery catalog when a resynchronization occurs.
The archived redo log naming scheme that you use is important because when a node writes to a log with a specific filename on its file system, the file must be readable by any node that needs to access this archived redo log. For example, if node 1 archives a log to /oracle/arc_dest/log_1_100.arc, then node 2 can back up this archived redo log only if it can read /oracle/arc_dest/log_1_100.arc on its own file system.
The backup and recovery strategy that you choose depends on how you configure the archiving destinations for each node. Whether only one node performs archived redo log backups or all nodes perform archived redo log backups, you need to ensure that all archived redo logs are backed up. Because only one instance can perform recovery, the node of the instance performing recovery must have read access to all archived redo logs in your cluster.
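For instance, whichever node performs the archived redo log backups can use a command along these lines (the DELETE INPUT clause is illustrative, not required):

```
# Back up all archived redo logs that RMAN can read from this node,
# then delete the files that were backed up to reclaim disk space.
BACKUP ARCHIVELOG ALL DELETE INPUT;
```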
The primary consideration is to ensure that all archived redo logs can be read from every node during recovery, and if possible during backups. This section illustrates the archived redo log naming issues for configuring archiving in your cluster database. The scenario described here is for a non-cluster file system archiving scheme. Assume that the following conditions are met:
Configure each node to write to a local archiving directory that is named the same on each node.
Do not set up a cluster file system (in other words, each node can only read from and write to its own local file system). Refer to information about cluster file systems later in this chapter.
Do not use NFS or mapped drives to enable the nodes in the cluster to gain read/write access to one another.
Example 6-1 Example Configuration for the Initialization Parameter File

sid1.log_archive_dest_1 = (location=/arc_dest)
sid2.log_archive_dest_1 = (location=/arc_dest)
sid3.log_archive_dest_1 = (location=/arc_dest)
Assume that the filenames of the archived redo logs are recorded in the control file as follows:
/arc_dest/log_1_62.arc
/arc_dest/log_2_100.arc
/arc_dest/log_2_101.arc
/arc_dest/log_3_70.arc
/arc_dest/log_1_63.arc
Given this scenario, assume that your RAC database performs recovery. If node 1 tries to read the logs for recovery, then it searches its local /arc_dest directory for the filenames as they are recorded in the control file. Using this data, node 1 finds only the logs that it archived locally, for example /arc_dest/log_1_63.arc. However, node 1 cannot apply the other logs because their filenames are not readable on its local file system. Thus, the recovery stalls. Avoid this by implementing the naming conventions described in the next section, and then configure your cluster according to the scenarios described in "RMAN Archiving Configuration Scenarios".
For any archived redo log configuration, uniquely identify the archived redo logs with the LOG_ARCHIVE_FORMAT parameter. The format of this parameter is operating system-specific, and it can include text strings, one or more variables, and a filename extension.
Table 6-1 Archived Redo Log Filename Format Parameters

|Parameter||Description|
|%T||Thread number, left-zero-padded|
|%t||Thread number, not padded|
|%S||Log sequence number, left-zero-padded|
|%s||Log sequence number, not padded|
The thread parameters %t or %T are mandatory for RAC. For example, if the instance associated with redo thread number 1 sets LOG_ARCHIVE_FORMAT to log_%t_%s.arc, then its archived redo log files are named:
log_1_1000.arc
log_1_1001.arc
log_1_1002.arc
.
.
.
See Also: Oracle Database Administrator's Guide for information about specifying the archived redo log filename format and destination, and your Oracle platform-specific documentation for the default log archiving format and destination
This section describes the archiving scenarios for a RAC database. The two configuration scenarios in this chapter describe a three-node UNIX cluster for a RAC database. For both scenarios, the LOG_ARCHIVE_FORMAT that you specify for the instance performing recovery must be the same as the format that you specified for the instances that archived the files.
The preferred configuration for RAC is to use ASM for a recovery area with a different disk group for your recovery set than for your datafiles. Alternatively, you can use a cluster file system archiving scheme.
In this case, each node writes to a single cluster file system archived redo log destination and can read the archived redo log files of the other nodes. Read access is achieved for all nodes with a cluster file system. For example, if node 1 archives a log to /arc_dest/log_1_100.arc on the cluster file system, then any other node in the cluster can also read this file.
If you do not use a cluster file system, then the archived redo log files cannot be on raw devices. This is because raw devices do not allow sequential writing of consecutive archive log files.
Figure 6-1 Cluster File System Archiving Scheme
The advantage of this scheme is that none of the nodes uses the network to archive logs. If each node has a local tape drive, then you can distribute an archived redo log backup so that each node backs up local logs without accessing the network. Because the filename written by a node can be read by any node in the cluster, RMAN can back up all logs from any node in the cluster. Backup and restore scripts are simplified because each node has access to all archived redo logs.
In the cluster file system scheme, each node archives to a directory that is identified with the same name on all instances within the cluster database. To configure this, set values for the LOG_ARCHIVE_DEST_n parameter for each instance using the sid designator, as in the following example:

sid1.LOG_ARCHIVE_DEST_1="LOCATION=/arc_dest"
sid2.LOG_ARCHIVE_DEST_1="LOCATION=/arc_dest"
sid3.LOG_ARCHIVE_DEST_1="LOCATION=/arc_dest"
The following list shows archived redo log entry examples that would appear in the RMAN catalog or in the control file based on the previous example. Note that any node can archive logs using any of the threads:

/arc_dest/log_1_999.arc
/arc_dest/log_1_1000.arc
/arc_dest/log_1_1001.arc  <- thread 1 archived in node 3
/arc_dest/log_3_1563.arc  <- thread 3 archived in node 2
/arc_dest/log_2_753.arc   <- thread 2 archived in node 1
/arc_dest/log_2_754.arc
/arc_dest/log_3_1564.arc
In the non-cluster file system local archiving scheme, each node archives to a uniquely named local directory. If recovery is required, then you can configure the recovery node so that it can access directories on the other nodes remotely. For example, use NFS on UNIX-based systems or shared drives on Windows-based systems. Therefore, each node writes only to a local destination, but each node can also read archived redo log files in remote directories on the other nodes.
To enable RMAN to back up and recover a RAC database in one step, all archived redo logs must have uniquely identifiable names throughout the cluster. To do this, however, you cannot use the technique described in "Cluster File System Archiving Scheme" to have more than one node archive to a directory such as /arc_dest. In UNIX environments only, the archived redo log files cannot be on the shared disk because UNIX shared disks are raw devices that you cannot easily partition for use with archived redo logs.
The advantage of this scheme is that if each node has a local tape drive, then you can distribute an archived redo log backup so that each node backs up local logs without accessing the network. The disadvantage of this scheme is that for media recovery, you must configure the node performing recovery for remote access so that it can read the archived redo log files in the archiving directories on the other nodes.
If only one node has a local tape drive, then you cannot back up all logs from a single node without configuring NFS or manually transferring the logs. This scheme has a single point of failure. If one node fails after the most recent backup, then the archived redo logs on this node that were generated after the backup are lost.
If you are in a recovery situation and you do not have all of the available archived redo logs, then you must perform an incomplete recovery up to the first missing archived redo log sequence number. You do not have to use a specific configuration for this scheme. However, if you want to distribute the backup processing onto multiple nodes, then the easiest method is to configure channels as described in the backup scenarios in Chapter 7, "Managing Backup and Recovery".
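As a hedged sketch, if the first missing log were thread 2 sequence 754 (hypothetical numbers), an incomplete recovery up to that point might look like this:

```
# Stop recovery just before the first missing archived redo log.
RUN
{
  SET UNTIL SEQUENCE 754 THREAD 2;   # recovers up to, but not including, this log
  RESTORE DATABASE;
  RECOVER DATABASE;
}
# An incomplete recovery must be followed by:
# ALTER DATABASE OPEN RESETLOGS;
```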
You can set the archiving destination values as follows in the initialization parameter file:
sid1.LOG_ARCHIVE_DEST_1="LOCATION=/arc_dest_1"
sid2.LOG_ARCHIVE_DEST_1="LOCATION=/arc_dest_2"
sid3.LOG_ARCHIVE_DEST_1="LOCATION=/arc_dest_3"
The following list shows the possible archived redo log entries in the database control file. Note that any node is able to archive logs from any of the threads:
/arc_dest_1/log_1_1000.arc
/arc_dest_2/log_1_1001.arc  <- thread 1 archived in node 2
/arc_dest_2/log_3_1563.arc  <- thread 3 archived in node 2
/arc_dest_1/log_2_753.arc   <- thread 2 archived in node 1
/arc_dest_2/log_2_754.arc
/arc_dest_3/log_3_1564.arc
As illustrated in Table 6-2, each node has a directory containing the locally archived redo logs. Additionally, if you mount directories on the other nodes remotely through NFS or shared drives, then each node has two remote directories through which RMAN can read the archived redo log files that are archived by the remaining nodes.
Table 6-2 Location of Logs for Non-Cluster File System Local Archiving
|Node ...||Can read the archived redo log files in the directory ...||For logs archived by node ...|
|1||/arc_dest_1||1|
|1||/arc_dest_2||2 (through NFS)|
|1||/arc_dest_3||3 (through NFS)|
|2||/arc_dest_2||2|
|2||/arc_dest_1||1 (through NFS)|
|2||/arc_dest_3||3 (through NFS)|
|3||/arc_dest_3||3|
|3||/arc_dest_1||1 (through NFS)|
|3||/arc_dest_2||2 (through NFS)|
Because NFS is not required to perform backups, node 1 can back up its local logs to its tape drive, node 2 can back up its local logs to its tape drive, and so on. However, if you are performing recovery and a surviving instance must read all the logs that are on disk but not yet backed up, then you should configure NFS as shown in Table 6-3.
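For example, on node 1 the local archiving directories of the other nodes could be mounted read-only with /etc/fstab entries along these lines (the host names node2 and node3 are hypothetical):

```
# /etc/fstab on node 1: mount the other nodes' local archive
# directories read-only so a recovering instance can read all logs.
node2:/arc_dest_2  /arc_dest_2  nfs  ro  0 0
node3:/arc_dest_3  /arc_dest_3  nfs  ro  0 0
```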
Table 6-3 NFS Configuration for Shared Read Local Archiving
|Node||Directory ...||Is configured ...||And mounted on ...||On node ...|
To change the archiving mode in a RAC environment, the database must be mounted (but not open) by an exclusive instance. In other words, set the CLUSTER_DATABASE parameter to FALSE. After executing the ALTER DATABASE SQL statement to change the archiving mode, shut down the instance and restart it with the CLUSTER_DATABASE parameter reset to TRUE before you restart the other instances. When the database goes into ARCHIVELOG mode, the ARCH processes start automatically.
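The sequence of steps might be sketched as follows, run from a single instance (the SHUTDOWN and STARTUP steps for the other instances are implied):

```
-- Disable cluster mode so one instance can mount the database exclusively.
ALTER SYSTEM SET CLUSTER_DATABASE=FALSE SCOPE=SPFILE;
-- Shut down all instances, then on one instance:
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
-- Re-enable cluster mode, then restart this instance and the others.
ALTER SYSTEM SET CLUSTER_DATABASE=TRUE SCOPE=SPFILE;
SHUTDOWN IMMEDIATE;
```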
After your RMAN configuration is operative in your RAC environment, use the GV$ARCHIVE_PROCESSES and V$ARCHIVE_PROCESSES views to determine the status of the archiver processes. Depending on whether you query the global (GV$) or local (V$) view, these views display information for all database instances or for only the instance to which you are connected. Refer to Oracle Database Reference for more information about these views.
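For example, a cluster-wide check of the archiver processes might look like this (the column selection is illustrative):

```sql
-- GV$ARCHIVE_PROCESSES returns one row per archiver process per instance.
SELECT inst_id, process, status, log_sequence
FROM   gv$archive_processes
WHERE  status <> 'STOPPED';
```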