Sun Cluster Geographic Edition Data Replication Guide for Oracle Data Guard

Chapter 1 Replicating Data With Oracle Data Guard Software

This chapter describes how to configure data replication with Oracle Data Guard software in a Sun Cluster Geographic Edition environment.

This chapter covers the following topics:

This release of Sun Cluster Geographic Edition supports the following Oracle Data Guard standby database types:

Sun Cluster Geographic Edition software supports the use of Oracle Data Guard for data replication when used with Oracle Real Application Clusters (RAC) software. Before you can replicate data with Oracle Data Guard, you must be familiar with the Oracle Data Guard documentation. For information about installing and configuring the Oracle Data Guard software and its latest patches, see Oracle Data Guard documentation.


Note –

During data replication, data from a primary cluster is copied to a backup, or standby cluster. The standby cluster can be located at a site that is geographically separated from the primary cluster. The distance between the primary and standby clusters depends on the distance that your data replication product supports.


The example procedures in this chapter show how to configure Oracle Data Guard to replicate data between a primary and a standby database.

Replicating Data in an Oracle Data Guard Protection Group (Task Map)

The following table summarizes the steps for configuring Oracle Data Guard data replication in a protection group.

Table 1–1 Administration Tasks for Oracle Data Guard Data Replication

Task 

Description 

Perform an initial configuration of the Oracle Data Guard software. 

See Initially Configuring Oracle Data Guard Software.

Create a protection group that is configured for Oracle Data Guard data replication. 

See How to Create and Configure an Oracle Data Guard Protection Group.

Add a configuration that is controlled by Oracle Data Guard. 

See How to Add an Oracle Data Guard Broker Configuration to an Oracle Data Guard Protection Group.

Add an application resource group to the protection group. 

See How to Add an Application Resource Group to an Oracle Data Guard Protection Group.

Replicate the protection group configuration to a standby cluster. 

See How to Replicate the Oracle Data Guard Protection Group Configuration to a Partner Cluster.

Activate the protection group. 

See How to Activate an Oracle Data Guard Protection Group.

Check the runtime status of replication. 

See Checking the Runtime Status of Oracle Data Guard Data Replication.

Detect failure. 

See Detecting Cluster Failure on a System That Uses Oracle Data Guard Data Replication.

Migrate services by using a switchover. 

See Migrating Services That Use Oracle Data Guard With a Switchover.

Migrate services by using a takeover. 

See Forcing a Takeover on Systems That Use Oracle Data Guard.

Recover data after forcing a takeover. 

See Recovering Oracle Data Guard Data After a Takeover.

Overview of Oracle Data Guard Data Replication

This section provides an overview of the integration of Oracle Data Guard with Sun Cluster Geographic Edition and highlights the differences between support for Oracle Data Guard and other data replication products, such as Sun StorageTek™ Availability Suite software, Hitachi TrueCopy, and EMC SRDF.

Oracle Data Guard Shadow Resource Groups

You can add an Oracle Data Guard Broker configuration that is controlled by the Oracle Data Guard software to a protection group. The Sun Cluster Geographic Edition software creates a shadow RAC server proxy resource group for each Oracle Data Guard Broker configuration. The name of a shadow resource group conforms to the following format:

ODGconfigurationname-rac-proxy-svr-shadow-rg

For example, an Oracle Data Guard Broker configuration named sales that is controlled by the Oracle Data Guard software has a shadow RAC server proxy resource group named sales-rac-proxy-svr-shadow-rg. If, however, the configuration name contains one or more periods (.), the periods are converted to underscore characters (_) to construct the resource group name. Consequently, the configuration name mysales.com has a shadow resource group named mysales_com-rac-proxy-svr-shadow-rg.
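The naming rule can be sketched in shell. The use of tr here is illustrative only; the document specifies the resulting name, not how the software derives it.

```shell
# Derive the shadow resource-group name from a Broker configuration name.
# Periods in the configuration name become underscores in the group name.
config_name="mysales.com"
rg_name="$(printf '%s' "$config_name" | tr '.' '_')-rac-proxy-svr-shadow-rg"
echo "$rg_name"    # mysales_com-rac-proxy-svr-shadow-rg
```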

The shadow RAC server proxy resource group “shadows” the real RAC server proxy resource group that you created to manage and monitor the Oracle RAC databases that are under the control of Sun Cluster software.

Each shadow resource group contains a single resource: a SUNW.gds resource whose probe script reflects the status of the RAC server proxy resource group. The name of this resource conforms to the following format:


ODGconfigurationname-rac-proxy-svr-shadow-rs

For more information about RAC server proxy resource groups, see Sun Cluster Data Service for Oracle RAC Guide for Solaris OS.

A shadow RAC server proxy resource group is required because, unlike other Sun Cluster Geographic Edition replication products, the Oracle Data Guard software is an integral part of the Oracle RAC software. Oracle Data Guard requires that the Oracle RAC software be running and the databases be started before it can replicate data.

Consequently, putting the real RAC server proxy resource group under Sun Cluster Geographic Edition control would result in the Oracle RAC database being shut down on the standby cluster. In contrast, the shadow RAC server proxy resource group can be placed under Sun Cluster Geographic Edition control without disrupting the data replication process, while still allowing the configuration to conform to the usual Sun Cluster Geographic Edition structure for managing application resource groups.

The state of the shadow RAC server proxy resource group indicates whether the database that is monitored and controlled by the RAC server proxy resource group is the primary or the standby database. In other words, this state indicates whether the RAC server proxy resource group is online, on the primary cluster, or unmanaged, on the standby cluster. Furthermore, the status of the shadow RAC server proxy resource reflects both the status of the RAC server proxy resource and whether the database is the primary or the standby.

Oracle Data Guard Replication Resource Groups

When an Oracle Data Guard Broker configuration that is controlled by the Oracle Data Guard software is added to a protection group, the Sun Cluster Geographic Edition software creates a special replication resource for that configuration in the replication resource group. By monitoring these replication resources, the Sun Cluster Geographic Edition software is able to monitor the overall status of replication. For each protection group, one replication resource group is created, containing one replication resource for each Oracle Data Guard Broker configuration.

The name of the replication resource group conforms to the following format:

ODGProtectiongroupName-odg-rep-rg

The replication resource in the replication resource group monitors the replication status of the Oracle Data Guard Broker configuration on the local cluster, which is reported by the Oracle Data Guard Broker software.

The name of the replication resource conforms to the following format:

ODGBrokerConfigurationName-odg-rep-rs
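For a protection group named sales-pg with a Broker configuration named mysales.com (the example used later in this chapter), the replication resource group and resource could be inspected with the standard Sun Cluster status commands. This is a sketch: the names shown are assumptions derived from the formats above, including the period-to-underscore conversion described for shadow resource groups.

```shell
phys-paris-1# clresourcegroup status sales-pg-odg-rep-rg
phys-paris-1# clresource status mysales_com-odg-rep-rs
```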


Note –

An Oracle Data Guard data replication resource is enabled when the protection group is activated on the cluster. Consequently, on a cluster in which the protection group is deactivated, the data replication status appears as unknown.


Initially Configuring Oracle Data Guard Software

This section describes the initial steps that you need to perform to configure Oracle Data Guard replication in the Sun Cluster Geographic Edition product.


Note –

The steps in this document that describe how to use Oracle tools and commands, such as dgmgrl, are intended for illustration only. Consult your Oracle documentation to determine the detailed procedures that you need to follow to satisfy the particular needs of your environment.


The example protection group, sales-pg, in this section has been configured in a partnership that consists of two (partner) clusters, cluster-paris and cluster-newyork. An Oracle RAC database, which is managed and monitored by an individual RAC server proxy resource group on each cluster, is shadowed by the mysales_com-rac-proxy-svr-shadow-rg shadow RAC server proxy resource group. The application data is contained in the sales database and replicated by Oracle Data Guard as part of the mysales.com Oracle Data Guard Broker configuration.

The shadow RAC server proxy resource group, mysales_com-rac-proxy-svr-shadow-rg, and the Oracle Data Guard Broker configuration, mysales.com, are present on both the cluster-paris and the cluster-newyork clusters. However, the name of the RAC server proxy resource group that it shadows might differ between the cluster-paris and the cluster-newyork clusters. The sales-pg protection group protects the application data by managing the replication of data between the cluster-paris and the cluster-newyork clusters.

This section provides the following information:

Oracle Data Guard Broker Configurations

To define Oracle Data Guard Broker configurations, you need to determine the following information:

After you configure Oracle Data Guard between a pair of primary and standby databases, you create an Oracle Data Guard Broker configuration by using the ${ORACLE_HOME}/bin/dgmgrl command to define the properties of the named replication. You can use this command to set and to retrieve the previously listed Oracle Data Guard Broker properties.
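As an illustration, a Broker configuration for the sales and salesdr database pair used later in this chapter might be created as follows. This is a sketch only; the credentials are placeholders, and you should consult the Oracle Data Guard Broker documentation for the exact dgmgrl syntax for your Oracle release.

```shell
oracle (phys-paris-1)$ dgmgrl sys/sysdba_password@sales-svc
DGMGRL> CREATE CONFIGURATION 'mysales.com' AS
  PRIMARY DATABASE IS 'sales'
  CONNECT IDENTIFIER IS sales-svc;
DGMGRL> ADD DATABASE 'salesdr' AS
  CONNECT IDENTIFIER IS salesdr-svc
  MAINTAINED AS PHYSICAL;
DGMGRL> ENABLE CONFIGURATION;
```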

You also need to determine the names of the RAC server proxy resource groups that manage the Oracle RAC databases on each cluster. You configure these names by using the Data Service configuration wizard through the clsetup command, or by following the instructions in Appendix D, Command-Line Alternatives, in Sun Cluster Data Service for Oracle RAC Guide for Solaris OS.

Of the Oracle Data Guard Broker configuration properties that are listed in the following table, you can change only the Protection Mode property with the Sun Cluster Geographic Edition software. You cannot use the Sun Cluster Geographic Edition software to modify other Oracle Data Guard Broker properties in the configuration, such as the DelayMins, MaxFailure, MaxConnections, and NetTimeout properties. You need to adjust these properties manually by using the Oracle Data Guard Broker command-line interface (dgmgrl), or by modifying the appropriate database parameters that are held in the spfile server parameter file or the init${SID}.ora file through SQL*Plus.

Property 

Allowed Values 

Description 

Protection Mode

MaxPerformance, MaxAvailability, or MaxProtection

The data replication mode that is being used by Oracle, ranging from asynchronous (MaxPerformance) to synchronous (MaxProtection)

Standby type

physical or logical

The type of replication that is being performed, either Redo Apply (physical) or SQL Apply (logical), held as part of the primary database definition

Configuration name

 

The name for the Oracle Data Guard Broker configuration, which consists of a primary and a standby database 

Primary database

 

The name of the primary database, its net service name, and its standby type 

Secondary database

 

The name of the standby database and its net service name 

The Sun Cluster Geographic Edition software changes the database roles in the Oracle Data Guard Broker configuration during switchover and takeover operations.

For more information about the Oracle Data Guard Broker configuration, refer to the Oracle Data Guard Broker documentation.
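Broker properties that the Sun Cluster Geographic Edition software cannot modify, such as NetTimeout, can be adjusted manually through dgmgrl. The following is a sketch; the credentials and the value shown are placeholders for your own configuration.

```shell
oracle (phys-paris-1)$ dgmgrl sys/sysdba_password@sales-svc
DGMGRL> EDIT DATABASE 'sales' SET PROPERTY 'NetTimeout' = 30;
```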

ProcedureHow to Set Up Your Primary Database

In the following steps, the primary cluster is called cluster-paris (nodes phys-paris-1 and phys-paris-2), and the standby cluster is called cluster-newyork (phys-newyork-1 and phys-newyork-2). The suffix -crs is appended to the Oracle Clusterware virtual IP host names.

The primary database on cluster-paris is called sales and has instances sales1 and sales2. The standby database on cluster-newyork is called salesdr and has instances salesdr1 and salesdr2. The suffix -svc is appended to each net naming service name for each of the databases and individual instances, for example, sales-svc or sales1-svc.

Before You Begin

Ensure that you have edited your Oracle user .profile or .cshrc file to set the correct Oracle SID, ORACLE_HOME, and PATH environment variables for the local Oracle RAC database instance. Unless otherwise stated, you only need to run the commands from a node in the primary cluster that hosts a protected database instance.

  1. Verify that you can resolve the Oracle virtual IP addresses that are used by Oracle Clusterware on all primary and standby nodes.


    phys-paris-1# getent hosts phys-paris-1-crs
    10.11.112.41    phys-paris-1-crs
    
    
  2. Create a database on the primary cluster.

    Use either the Oracle Database Configuration Assistant (dbca) or the SQL*Plus utility.

  3. Verify that an Oracle password file exists for the primary database.


    oracle (phys-paris-1)$ cd ${ORACLE_HOME}/dbs
    oracle (phys-paris-1)$ ls -l orapwsales1
    lrwxrwxrwx   1 oracle   oinstall      25 November  2 02:06 orapwsales1
    	-> /oradata/SALES/orapwsales

    Oracle Data Guard needs a consistent Oracle password file on all participating nodes in the primary and standby clusters.

    If a password file does not exist, create one as follows:


    oracle (phys-paris-1)$ orapwd file=${ORACLE_HOME}/dbs/orapwsales1 \
    	password=sysdba_password
    

    You can then move this file to a location on shared storage and create a symbolic link to that file from each node. Change the file name to reflect the local SID on each node. Later, you will copy this file to the standby cluster (cluster-newyork).
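The move-and-link arrangement can be demonstrated as follows, with a scratch directory standing in for ${ORACLE_HOME}/dbs and the shared-storage path (/oradata/SALES in the earlier example); substitute your real paths in practice.

```shell
# Demonstration of the move-and-link arrangement for the password file,
# using a scratch directory in place of ${ORACLE_HOME}/dbs and shared storage.
scratch=$(mktemp -d)
dbs="$scratch/dbs"; shared="$scratch/shared"   # stand-ins for the real paths
mkdir -p "$dbs" "$shared"
touch "$dbs/orapwsales1"                       # the file created by orapwd
mv "$dbs/orapwsales1" "$shared/orapwsales"     # move it to shared storage
ln -s "$shared/orapwsales" "$dbs/orapwsales1"  # per-node link; phys-paris-2 would use orapwsales2
ls -l "$dbs/orapwsales1"
```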

  4. Ensure that the database is in logging mode by using the sqlplus command.


    oracle (phys-paris-1)$ sqlplus '/ as sysdba'
    SQL> alter database force logging;
    Database altered.
  5. Configure the Oracle Data Guard Broker configuration file locations.

    Run the sqlplus command as follows, substituting the two file names with ones that suit your configuration. Ensure that these files are located on shared storage that is visible to all cluster-paris nodes.


    oracle (phys-paris-1)$ sqlplus '/ as sysdba'
    SQL> alter system set dg_broker_config_file1='/oradata/SALES/dr1sales.dat'
       2 scope=both sid='*';
    System altered.
    SQL> alter system set dg_broker_config_file2='/oradata/SALES/dr2sales.dat'
       2 scope=both sid='*';
    System altered.
  6. Shut down all database instances.

  7. On the primary database, mount a single database instance and enable the Oracle database flashback capability.


    oracle (phys-paris-1)$ sqlplus '/ as sysdba'
    SQL> startup mount;
    ORACLE instance started.
    
    Total System Global Area  532676608 bytes
    Fixed Size                  2031416 bytes
    Variable Size             276824264 bytes
    Database Buffers          247463936 bytes
    Redo Buffers                6356992 bytes
    Database mounted.
    SQL> alter database archivelog;
    Database altered.
    SQL> alter database flashback on;
    Database altered.
    SQL> alter database open;
    Database altered.
  8. Restart the other database instances.

  9. Create database standby redo logs.

    Depending on your configuration, you might need to add a number of standby redo logs. The name, number, and size of these logs depend on a number of factors, including whether you use the Optimal Flexible Architecture (OFA), how many online redo log files you have, and the size of those log files. The following example shows how to configure a single 50-Mbyte standby redo log file, where the OFA naming scheme is being used. A default, two-node Oracle RAC database normally requires that you add six log files.


    oracle (phys-paris-1)$ sqlplus '/ as sysdba'
    SQL> alter database add standby logfile size 50m;
    Database altered.
  10. Configure the Oracle log archiving destinations.

    Depending on your configuration, you might need to alter or add one or more of the Oracle log archive destination parameters. These parameters have a number of tunable properties. Consult the Oracle documentation for details. The following example shows two log archive destinations being set, one for the local cluster and one for the standby cluster, where OFA naming is used.


    oracle (phys-paris-1)$ sqlplus '/ as sysdba'
    SQL> alter system set log_archive_dest_1='location=use_db_recovery_file_dest
       2 arch mandatory valid_for=(all_logfiles,all_roles)
       3 db_unique_name=sales' scope=both sid='*';
    System altered.
    
    SQL> alter system set log_archive_dest_2='service=salesdr-svc
      2 lgwr sync affirm valid_for=(online_logfiles,primary_role)
      3 db_unique_name=salesdr' scope=both sid='*';
    System altered.
    
    SQL> alter system set log_archive_dest_10='location=use_db_recovery_file_dest'
      2 scope=both sid='*';
    System altered.
    
    SQL> alter system set standby_file_management='AUTO' scope=both sid='*';
    System altered.
  11. Configure the Fetch Archive Log (FAL) parameters.

    For the database to know where to get missing archive redo logs on the server and where to send them on the client, you need to set the FAL system properties. These properties use the net service names of the source and destination databases. You run the following sqlplus command to set the parameters to the correct values for your configuration.


    oracle (phys-paris-1)$ sqlplus '/ as sysdba'
    SQL> alter system set fal_server='salesdr-svc' scope=both sid='*';
    System altered.
    
    SQL> alter system set fal_client='sales-svc' scope=both sid='*';
    System altered.

ProcedureHow to Configure the Primary Database Listener and Naming Service

  1. Create a static listener for Oracle Data Guard.


    Note –

    Perform this step on all cluster-paris nodes.


    Oracle Data Guard requires that you configure a static listener. The following example uses ${ORACLE_HOME}=/oracle/oracle/product/10.2.0/db_1 and shows where to add the entry for the static listener in the ${ORACLE_HOME}/network/admin/listener.ora file. The SID_LIST_LISTENER_PHYS-PARIS-1 and (SID_NAME = sales1) lines vary from node to node, while the (GLOBAL_DBNAME=sales_DGMGRL) differs on cluster-newyork. Later, you will add these entries on the cluster-newyork nodes.


    oracle (phys-paris-1)$ cat ${ORACLE_HOME}/network/admin/listener.ora
    SID_LIST_LISTENER_PHYS-PARIS-1 =
      (SID_LIST =
        (SID_DESC =
          (SID_NAME = PLSExtProc)
          (ORACLE_HOME = /oracle/oracle/product/10.2.0/db_1)
          (PROGRAM = extproc)
        )
        (SID_DESC =
          (SID_NAME = sales1)
          (GLOBAL_DBNAME=sales_DGMGRL)
          (ORACLE_HOME = /oracle/oracle/product/10.2.0/db_1)
        )
      )
    oracle (phys-paris-1)$
  2. Restart the listener.

    To enable the static entries, restart the Oracle listener processes on each of the nodes on cluster-paris.


    oracle (phys-paris-1)$ lsnrctl stop LISTENER_PHYS-PARIS-1
    LSNRCTL for Solaris: Version 10.2.0.3.0 - Production on 29-OCT-2008 02:04:56
    
    Copyright (c) 1991, 2006, Oracle.  All rights reserved.
    
    Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
    The command completed successfully
    oracle$ lsnrctl start LISTENER_PHYS-PARIS-1
    LSNRCTL for Solaris: Version 10.2.0.3.0 - Production on 29-OCT-2008 02:05:04
    
    Copyright (c) 1991, 2006, Oracle.  All rights reserved.
    
    Starting /oracle/oracle/product/10.2.0/db_1/bin/tnslsnr: please wait...
    
    TNSLSNR for Solaris: Version 10.2.0.3.0 - Production
    …
    Services Summary...
    Service "PLSExtProc" has 1 instance(s).
      Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
    Service "sales" has 2 instance(s).
      Instance "sales1", status READY, has 2 handler(s) for this service...
      Instance "sales2", status READY, has 1 handler(s) for this service...
    Service "salesXDB" has 2 instance(s).
      Instance "sales1", status READY, has 1 handler(s) for this service...
      Instance "sales2", status READY, has 1 handler(s) for this service...
    Service "sales_DGB" has 2 instance(s).
      Instance "sales1", status READY, has 2 handler(s) for this service...
      Instance "sales2", status READY, has 1 handler(s) for this service...
    Service "sales_DGMGRL" has 1 instance(s).
      Instance "sales1", status UNKNOWN, has 1 handler(s) for this service...
    Service "sales_XPT" has 2 instance(s).
      Instance "sales1", status READY, has 2 handler(s) for this service...
      Instance "sales2", status READY, has 1 handler(s) for this service...
    The command completed successfully
  3. Verify the net service naming entries for all database instances.

    Ensure that the naming service method that you are using, either tnsnames.ora or the directory service, has entries defined for all the Oracle database instances. The following example shows the type of entries that you include for the cluster-paris cluster only. Also, add entries for the standby (salesdr) database instances that you create later when you modify the pfile parameter file. In the example, the sales database dynamically registers a service name of sales with the listeners (see the database service_names initialization parameter).


    oracle (phys-paris-1)$ cat ${ORACLE_HOME}/network/admin/tnsnames.ora
    SALES1-SVC =
      (DESCRIPTION =
        (ADDRESS_LIST =
          (ADDRESS = (PROTOCOL = TCP)(HOST = phys-paris-1-crs)(PORT = 1521)
          (SEND_BUF_SIZE = 65535)(RECV_BUF_SIZE = 65535))
        )
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = sales)
          (INSTANCE_NAME = sales1)
        )
      )
    
    SALES2-SVC =
      (DESCRIPTION =
        (ADDRESS_LIST =
          (ADDRESS = (PROTOCOL = TCP)(HOST = phys-paris-2-crs)(PORT = 1521)
          (SEND_BUF_SIZE = 65535)(RECV_BUF_SIZE = 65535))
        )
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = sales)
          (INSTANCE_NAME = sales2)
        )
      )
    
    SALES-SVC =
      (DESCRIPTION =
        (ADDRESS_LIST =
          (ADDRESS = (PROTOCOL = TCP)(HOST = phys-paris-1-crs)(PORT = 1521)
          (SEND_BUF_SIZE = 65535)(RECV_BUF_SIZE = 65535))
          (ADDRESS = (PROTOCOL = TCP)(HOST = phys-paris-2-crs)(PORT = 1521)
          (SEND_BUF_SIZE = 65535)(RECV_BUF_SIZE = 65535))
          (LOAD_BALANCE = yes)
        )
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = sales)
        )
      )
    
    LISTENERS_SALES =
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = TCP)(HOST = phys-paris-1-crs)(PORT = 1521))
        (ADDRESS = (PROTOCOL = TCP)(HOST = phys-paris-2-crs)(PORT = 1521))
      )

ProcedureHow to Prepare Your Standby Database

  1. Create a backup of the primary database.

    The following example shows how to use the Oracle Recovery Manager (RMAN) utility to create a copy of the primary database that you can restore on the standby cluster-newyork cluster. The example also shows how to avoid performing a separate step to create a control file for the standby database. For more information about the options for completing this step, see your Oracle documentation.


    oracle (phys-paris-1)$ rman
    RMAN> connect target sys/DBA_password@sales-svc;
    RMAN> connect auxiliary /;
    RMAN> backup device type disk tag 'mybkup' database include current
    2> controlfile for standby;
    RMAN> backup device type disk tag 'mybkup' archivelog all not backed up;
    
  2. Copy the backup files to the standby system.

    Create the appropriate directory hierarchies on the cluster-newyork cluster and copy the database backup to this cluster. The actual locations that you specify for the files that are shown in the example depend on the specific choices that you made when you configured the database.


    oracle (phys-newyork-1)$ mkdir -p $ORACLE_BASE/admin/salesdr
    oracle (phys-newyork-1)$ cd $ORACLE_BASE/admin/salesdr
    oracle (phys-newyork-1)$ mkdir adump bdump cdump dpdump hdump pfile udump
    Make the directory for the database backup
    oracle (phys-newyork-1)$ mkdir -p /oradata/flash_recovery_area/SALES/backupset/date
    Copy over the files
    oracle (phys-newyork-1)$ cd /oradata/flash_recovery_area/SALES/backupset/date
    oracle (phys-newyork-1)$ scp oracle@phys-paris-1:`pwd`/\* .
    Make the base directory for new database files
    oracle (phys-newyork-1)$ mkdir -p /oradata/SALESDR
    
  3. Create a pfile parameter file.

    Create a suitable server initialization file for the standby (salesdr) database. The easiest way to create this file is to copy the parameters for the primary database and modify them. The following example shows how to create a pfile parameter file:


    oracle (phys-paris-1)$ sqlplus '/ as sysdba'
    SQL> CREATE PFILE='/tmp/initpfile_for_salesdr.ora' FROM SPFILE;
    File created.
    SQL> quit
    
  4. Modify the pfile parameter file.

    Change all entries that are particular to the primary cluster to entries that are suitable for the standby cluster, as shown in the following example. Modify entries that are prefixed by an Oracle SID, that is, sales1 or sales2, to use standby database instance SID names, that is, salesdr1 and salesdr2. Depending on your configuration, you might need to make additional changes.


    Note –

    Do not change the db_name parameter, as it must remain sales on both clusters.


    You created these directories previously
    *.audit_file_dest='/oracle/oracle/product/10.2.0/db_1/admin/salesdr/adump'
    *.background_dump_dest='/oracle/oracle/product/10.2.0/db_1/admin/salesdr/bdump'
    *.user_dump_dest='/oracle/oracle/product/10.2.0/db_1/admin/salesdr/udump'
    *.core_dump_dest='/oracle/oracle/product/10.2.0/db_1/admin/salesdr/cdump'
    
    Remove the following entry
    *.control_files='...list primary control files...'
    
    Add this entry
    *.db_unique_name='salesdr'
    
    *.dg_broker_config_file1='/oradata/SALESDR/dr1salesdr.dat'
    *.dg_broker_config_file2='/oradata/SALESDR/dr2salesdr.dat'
    
    *.dispatchers='(PROTOCOL=TCP) (SERVICE=salesdrXDB)'
    
    Switch the client and server entries around, as shown in the following entries
    *.fal_client='salesdr-svc'
    *.fal_server='sales-svc'
    
    *.remote_listener='LISTENERS_SALESDR'
    
    Switch the log archive destinations
    *.log_archive_dest_1='location=use_db_recovery_file_dest arch
    mandatory valid_for=(all_logfiles,all_roles) db_unique_name=salesdr'
    *.log_archive_dest_2='service=sales-svc lgwr sync affirm 
    valid_for=(online_logfiles,primary_role) db_unique_name=sales'
  5. Copy the pfile parameter file to the standby system.

  6. Start the standby database and convert the pfile parameter file to an spfile server parameter file.

    1. As the Oracle user, log in to one of the cluster-newyork nodes and convert the pfile parameter file to an spfile server parameter file.


      oracle (phys-newyork-1)$ ORACLE_SID=salesdr1 export ORACLE_SID
      oracle (phys-newyork-1)$ sqlplus '/ as sysdba'
      SQL> startup nomount pfile='/tmp/initpfile_for_salesdr.ora';
      SQL> create spfile='/oradata/SALESDR/spfilesalesdr.ora'
        2> from pfile='/tmp/initpfile_for_salesdr.ora';
      SQL> shutdown
      
    2. Create a ${ORACLE_HOME}/dbs/initsalesdr1.ora file on all cluster-newyork nodes and, in that file, insert the following entry:


      oracle (phys-newyork-1)$ cat ${ORACLE_HOME}/dbs/initsalesdr1.ora
      SPFILE='/oradata/SALESDR/spfilesalesdr.ora'
    3. Restart the database, on one node only, to prepare for restoring the backed-up primary database.


      oracle (phys-newyork-1)$ sqlplus '/ as sysdba'
      You are now starting from the spfile
      SQL> startup nomount
      ORACLE instance started.
      
      Total System Global Area  532676608 bytes
      Fixed Size                  2031416 bytes
      Variable Size             289407176 bytes
      Database Buffers          234881024 bytes
      Redo Buffers                6356992 bytes
  7. Copy the Oracle password file for the primary database for use by the standby database.

    Copy the Oracle password file that you created on the cluster-paris cluster and place it on shared storage on the cluster-newyork cluster. Then create links to this file from each of the cluster-newyork nodes, again changing the name of the symbolic link to reflect the Oracle SID on the local standby node.
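Those steps might look like the following. This is a sketch; the shared-storage paths are assumptions based on the earlier examples, and the second command must be repeated on each node with that node's SID in the link name.

```shell
oracle (phys-newyork-1)$ scp oracle@phys-paris-1:/oradata/SALES/orapwsales \
    /oradata/SALESDR/orapwsales
oracle (phys-newyork-1)$ ln -s /oradata/SALESDR/orapwsales \
    ${ORACLE_HOME}/dbs/orapwsalesdr1
# On phys-newyork-2, the link name is orapwsalesdr2.
```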

ProcedureHow to Configure the Standby Database Listener and Naming Service

  1. Create a static listener for Oracle Data Guard.


    Note –

    Perform this step on all cluster-newyork nodes.


    Oracle Data Guard requires that you configure a static listener. The following example uses ${ORACLE_HOME}=/oracle/oracle/product/10.2.0/db_1 and shows where to add the entry for the static listener in the ${ORACLE_HOME}/network/admin/listener.ora file. The SID_LIST_LISTENER_PHYS-NEWYORK-1 and (SID_NAME = salesdr1) lines vary from node to node, while the (GLOBAL_DBNAME=salesdr_DGMGRL) differs on cluster-paris.


    oracle (phys-newyork-1)$ cat ${ORACLE_HOME}/network/admin/listener.ora
    SID_LIST_LISTENER_PHYS-NEWYORK-1 =
      (SID_LIST =
        (SID_DESC =
          (SID_NAME = PLSExtProc)
          (ORACLE_HOME = /oracle/oracle/product/10.2.0/db_1)
          (PROGRAM = extproc)
        )
        (SID_DESC =
          (SID_NAME = salesdr1)
          (GLOBAL_DBNAME=salesdr_DGMGRL)
          (ORACLE_HOME = /oracle/oracle/product/10.2.0/db_1)
        )
      )
    oracle (phys-newyork-1)$
  2. Restart the listener.

    To enable the static entries, restart the Oracle listener processes on each of the nodes on cluster-newyork.


    oracle (phys-newyork-1)$ lsnrctl stop LISTENER_PHYS-NEWYORK-1
    LSNRCTL for Solaris: Version 10.2.0.3.0 - Production on 29-OCT-2008 02:04:56
    
    Copyright (c) 1991, 2006, Oracle.  All rights reserved.
    
    Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
    The command completed successfully
    oracle$ lsnrctl start LISTENER_PHYS-NEWYORK-1
    LSNRCTL for Solaris: Version 10.2.0.3.0 - Production on 29-OCT-2008 02:05:04
    
    Copyright (c) 1991, 2006, Oracle.  All rights reserved.
    
    Starting /oracle/oracle/product/10.2.0/db_1/bin/tnslsnr: please wait...
    
    TNSLSNR for Solaris: Version 10.2.0.3.0 - Production
    …
    Services Summary...
    Service "PLSExtProc" has 1 instance(s).
      Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
    Service "salesdr" has 2 instance(s).
      Instance "salesdr1", status READY, has 2 handler(s) for this service...
      Instance "salesdr2", status READY, has 1 handler(s) for this service...
    Service "salesdrXDB" has 2 instance(s).
      Instance "salesdr1", status READY, has 1 handler(s) for this service...
      Instance "salesdr2", status READY, has 1 handler(s) for this service...
    Service "salesdr_DGB" has 2 instance(s).
      Instance "salesdr1", status READY, has 2 handler(s) for this service...
      Instance "salesdr2", status READY, has 1 handler(s) for this service...
    Service "salesdr_DGMGRL" has 1 instance(s).
      Instance "salesdr1", status UNKNOWN, has 1 handler(s) for this service...
    Service "salesdr_XPT" has 2 instance(s).
      Instance "salesdr1", status READY, has 2 handler(s) for this service...
      Instance "salesdr2", status READY, has 1 handler(s) for this service...
    The command completed successfully
  3. Verify the net service naming entries for all database instances.

    Ensure that the naming service method that you are using, either tnsnames.ora or the directory service, has entries defined for all the Oracle database instances. The following example shows the type of entries that you include for the cluster-newyork cluster only. In the example, the salesdr database dynamically registers a service name of salesdr with the listeners (see the database service_names initialization parameter).


    oracle (phys-newyork-1)$ cat ${ORACLE_HOME}/network/admin/tnsnames.ora
    SALESDR1-SVC =
      (DESCRIPTION =
        (ADDRESS_LIST =
          (ADDRESS = (PROTOCOL = TCP)(HOST = phys-newyork-1-crs)(PORT = 1521)
          (SEND_BUF_SIZE = 65535)(RECV_BUF_SIZE = 65535))
        )
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = salesdr)
          (INSTANCE_NAME = salesdr1)
        )
      )
    
    SALESDR2-SVC =
      (DESCRIPTION =
        (ADDRESS_LIST =
          (ADDRESS = (PROTOCOL = TCP)(HOST = phys-newyork-2-crs)(PORT = 1521)
          (SEND_BUF_SIZE = 65535)(RECV_BUF_SIZE = 65535))
        )
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = salesdr)
          (INSTANCE_NAME = salesdr2)
        )
      )
    
    SALESDR-SVC =
      (DESCRIPTION =
        (ADDRESS_LIST =
          (ADDRESS = (PROTOCOL = TCP)(HOST = phys-newyork-1-crs)(PORT = 1521)
          (SEND_BUF_SIZE = 65535)(RECV_BUF_SIZE = 65535))
          (ADDRESS = (PROTOCOL = TCP)(HOST = phys-newyork-2-crs)(PORT = 1521)
          (SEND_BUF_SIZE = 65535)(RECV_BUF_SIZE = 65535))
          (LOAD_BALANCE = yes)
        )
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = salesdr)
        )
      )
    
    LISTENERS_SALESDR =
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = TCP)(HOST = phys-newyork-1-crs)(PORT = 1521))
        (ADDRESS = (PROTOCOL = TCP)(HOST = phys-newyork-2-crs)(PORT = 1521))
      )
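If you use tnsnames.ora rather than directory naming, a quick textual check can confirm that every expected net service name has an entry. The helper below is a sketch: the function name is hypothetical, and it only performs a simple line-oriented match against entries of the form `NAME =`.

```shell
# Sketch: verify that each named entry exists in a tnsnames.ora file.
# Usage: check_tns_entries $ORACLE_HOME/network/admin/tnsnames.ora \
#          SALESDR1-SVC SALESDR2-SVC SALESDR-SVC
check_tns_entries() {
  tnsfile=$1; shift
  missing=0
  for name in "$@"; do
    # An entry is a line beginning with the alias followed by "=".
    if ! grep -q "^${name}[[:space:]]*=" "$tnsfile"; then
      echo "missing entry: $name"
      missing=1
    fi
  done
  return "$missing"
}
```

A nonzero return flags the step for manual follow-up; directory naming lookups would need a different check.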
  4. Verify that the standby listener listener.ora and tnsnames.ora files have the correct entries, and restart the listener process.

    Ensure that these files include the static Oracle Data Guard listener entry and the naming service entries for the primary and standby cluster database service. If you are not using the Oracle directory naming service lookup, you need to include the entries in tnsnames.ora.


    oracle (phys-newyork-1)$ lsnrctl stop LISTENER_PHYS-NEWYORK-1
    LSNRCTL for Solaris: Version 10.2.0.3.0 - Production on 29-OCT-2008 02:04:56
    
    Copyright (c) 1991, 2006, Oracle.  All rights reserved.
    
    Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
    The command completed successfully
    oracle$ lsnrctl start LISTENER_PHYS-NEWYORK-1
    LSNRCTL for Solaris: Version 10.2.0.3.0 - Production on 29-OCT-2008 02:05:04
    
    Copyright (c) 1991, 2006, Oracle.  All rights reserved.
    
    Starting /oracle/oracle/product/10.2.0/db_1/bin/tnslsnr: please wait...
    
    TNSLSNR for Solaris: Version 10.2.0.3.0 - Production
    …
    Services Summary...
    Service "PLSExtProc" has 1 instance(s).
      Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
    Service "salesdr_DGMGRL" has 1 instance(s).
      Instance "salesdr1", status UNKNOWN, has 1 handler(s) for this service...
    The command completed successfully

How to Start and Recover Your Standby Database

  1. Restore the database backup.

    Continuing to work on the cluster-newyork cluster, you can now restore the data from the backup of the primary database to the standby database. The following example shows how to use the Oracle Recovery Manager (RMAN) utility.


    oracle (phys-newyork-1)$ rman
    RMAN> connect target sys/oracle@sales-svc;
    RMAN> connect auxiliary /;
    RMAN> duplicate target database for standby nofilenamecheck;
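If you script the duplication, you might confirm success by scanning the captured RMAN output. The following hypothetical helper is a sketch only: it assumes RMAN's usual "Finished Duplicate Db" completion message and the RMAN-/ORA- error prefixes, both of which can vary by release.

```shell
# Sketch: succeed only if captured RMAN output (stdin) contains the
# duplicate-completion message and no RMAN-nnnnn or ORA-nnnnn errors.
rman_duplicate_ok() {
  out=$(cat)
  printf '%s\n' "$out" | grep -q 'Finished Duplicate Db' \
    && ! printf '%s\n' "$out" | grep -Eq '(RMAN|ORA)-[0-9]+'
}
```

You could pipe a saved RMAN log through this check before proceeding to the standby redo log setup.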
  2. Add standby redo logs to the standby database.

    The exact requirements that you must meet depend on your configuration. The steps you follow are identical to those that you followed for the primary cluster.

  3. Enable flashback on the standby database.


    oracle (phys-newyork-1)$ sqlplus '/ as sysdba'
    SQL> alter database flashback on;
    Database altered.
    SQL> shutdown immediate;
    SQL> startup mount;
    ORACLE instance started.
    …
  4. Recover the standby database.


    oracle (phys-newyork-1)$ sqlplus '/ as sysdba'
    SQL> alter database recover managed standby database using current logfile disconnect;
    

How to Verify That Your Configuration Is Working Correctly

  1. Verify that the log file transmission is working.

    Log in to one of the database instances on the cluster-paris cluster and, at the SQL> prompt, perform a couple of log switches.


    oracle (phys-paris-1)$ sqlplus '/ as sysdba'
    SQL> alter system switch logfile;
    SQL> alter system switch logfile;
    
  2. Check the ${ORACLE_HOME}/admin/sales/bdump/alert_sales1.log file for any problems that might have prevented the logs from being archived.

    If there are errors, correct them. This process might take time. You can verify that the network connectivity is correct by using the following commands:


    oracle (phys-paris-1)$ tnsping salesdr-svc
    oracle (phys-newyork-1)$ tnsping sales-svc
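When tnsping succeeds, its final line is normally of the form "OK (n msec)". A scripted check might capture the output and test for that line; the helper below is a sketch with a hypothetical name, and it assumes that output format.

```shell
# Sketch: succeed only if captured `tnsping` output (stdin) ends with
# the usual "OK (n msec)" success line.
tnsping_ok() {
  tail -n 1 | grep -q '^OK ('
}
```

For example, `tnsping salesdr-svc | tnsping_ok || echo "connectivity problem"` could gate the rest of an automated setup.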
    

How to Complete Configuring and Integrating Your Standby Database

  1. Register the new database and instances with Oracle Clusterware.

    Place the standby database under Oracle Clusterware control and configure it to mount when Oracle Clusterware starts.


    oracle (phys-newyork-1)$ srvctl add database -d salesdr \
     -r PHYSICAL_STANDBY -o $ORACLE_HOME -s mount;
    oracle (phys-newyork-1)$ srvctl add instance -d salesdr \
     -i salesdr1 -n phys-newyork-1;
    oracle (phys-newyork-1)$ srvctl add instance -d salesdr \
     -i salesdr2 -n phys-newyork-2;
    
  2. Configure the Sun Cluster Oracle RAC manageability resources.

    Integrate the standby database with Sun Cluster. You can use either the Data Service configuration wizard that is available through the clsetup utility or the browser-based Sun Cluster Manager. Integrating the standby database allows it to be managed in the same way as the primary database, should a failover or takeover be necessary.


    Note –

    The resource and resource group that you create are used by the Sun Cluster Geographic Edition Oracle Data Guard integration.


  3. Enable Oracle Data Guard on both the primary and standby databases.

    You need to perform the following steps on only one node in each cluster (cluster-paris and cluster-newyork).


    oracle (phys-newyork-1)$ sqlplus '/ as sysdba'
    SQL> alter system set dg_broker_start=true scope=both sid='*';
    SQL> quit
    oracle (phys-paris-1)$ sqlplus '/ as sysdba'
    SQL> alter system set dg_broker_start=true scope=both sid='*';
    SQL> quit
    

How to Create and Enable an Oracle Data Guard Broker Configuration

To use Oracle Data Guard with Sun Cluster Geographic Edition, you need to create an Oracle Data Guard Broker configuration.

In the following example procedure, the Oracle Data Guard Broker configuration is called mysales.com. The salesdr database is a physical copy of the sales database.

  1. Create an Oracle Data Guard Broker configuration for the primary database.

    You use the dgmgrl command to create the Oracle Data Guard Broker configuration. You need to know the name of the Oracle Data Guard Broker configuration that you want to create, the name of the primary database, and the net service name through which to connect. You will need these values again when you specify the configuration to Sun Cluster Geographic Edition.


    oracle (phys-paris-1)$ dgmgrl sys/sysdba_password@sales-svc
    DGMGRL> create configuration mysales.com as primary
    DGMGRL> database is sales connect identifier is sales-svc;
    

    If errors occur when you connect to the Oracle Data Guard Broker, check the ${ORACLE_HOME}/admin/sales/bdump/alert_prim_sid.log file. You can verify that the configuration has been created by using the following command:


    oracle (phys-paris-1)$ dgmgrl sys/sysdba_password@sales-svc
    DGMGRL> show configuration;
    Configuration
      Name:                mysales.com
      Enabled:             NO
      Protection Mode:     MaxPerformance
      Fast-Start Failover: DISABLED
      Databases:
        sales   - Primary database
    
    Current status for "mysales.com":
    DISABLED
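When automating this check, you can extract the Enabled field from captured show configuration output instead of reading the listing by eye. The helper below is a sketch with a hypothetical name, and it assumes the output layout shown above.

```shell
# Sketch: print the value of the "Enabled:" field from captured
# DGMGRL `show configuration` output read from standard input.
config_enabled() {
  awk '$1 == "Enabled:" { print $2; exit }'
}
```

At this point in the procedure the helper would print NO; after you enable the configuration it should print YES.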
  2. Add the standby database to the Oracle Data Guard Broker configuration.

    You need to know the name of the standby database, the net service name through which to connect, and the type of standby (physical or logical).


    oracle (phys-paris-1)$ dgmgrl sys/sysdba_password@sales-svc
    DGMGRL> add database salesdr as connect identifier is 
     salesdr-svc maintained as physical;
    
  3. Configure the apply instance for the standby database.

    If the standby database is also a multiple-instance Oracle RAC database, you can specify the instance on which you prefer the transmitted archived redo logs to be applied. Before you enable the configuration, issue the following command:


    oracle$ dgmgrl sys/sysdba_password@sales-svc
    DGMGRL> edit database salesdr set property PreferredApplyInstance='salesdr1';
    
  4. To verify that the Oracle Data Guard Broker configuration is working correctly, enable the configuration.


    oracle (phys-paris-1)$ dgmgrl sys/sysdba_password@sales-svc
    DGMGRL> enable configuration;
    

    If you have successfully performed all steps, you can check the status of the configuration by using the following command:


    oracle$ dgmgrl sys/sysdba_password@sales-svc
    DGMGRL> show configuration;
    Configuration
      Name:                mysales.com
      Enabled:             YES
      Protection Mode:     MaxPerformance
      Fast-Start Failover: DISABLED
      Databases:
        sales   - Primary database
        salesdr - Physical standby database
    
    Current status for "mysales.com":
    SUCCESS
  5. Verify that the Oracle Data Guard Broker configuration can switch over.

    Before you add the Oracle Data Guard Broker configuration to Sun Cluster Geographic Edition, you need to verify that you can perform a switchover of the database from the primary to the standby and back again. If this switchover does not work, Sun Cluster Geographic Edition will not be able to perform this operation either.


    oracle (phys-paris-1)$ dgmgrl sys/sysdba_password@sales-svc
    DGMGRL> switchover to salesdr
    Performing switchover NOW, please wait...
    Operation requires shutdown of instance "sales1" on database "sales"
    Shutting down instance "sales1"...
    ORA-01109: database not open
    
    Database dismounted.
    ORACLE instance shut down.
    Operation requires shutdown of instance "salesdr1" on database "salesdr"
    Shutting down instance "salesdr1"...
    ORA-01109: database not open
    
    Database dismounted.
    ORACLE instance shut down.
    Operation requires startup of instance "sales1" on database "sales"
    Starting instance "sales1"...
    ORACLE instance started.
    Database mounted.
    Operation requires startup of instance "salesdr1" on database "salesdr"
    Starting instance "salesdr1"...
    ORACLE instance started.
    Database mounted.
    Switchover succeeded, new primary is "salesdr"
    
    DGMGRL> switchover to sales;
    Performing switchover NOW, please wait...
    Operation requires shutdown of instance "salesdr1" on database "salesdr"
    Shutting down instance "salesdr1"...
    ORA-01109: database not open
    
    Database dismounted.
    ORACLE instance shut down.
    Operation requires shutdown of instance "sales1" on database "sales"
    Shutting down instance "sales1"...
    ORA-01109: database not open
    
    Database dismounted.
    ORACLE instance shut down.
    Operation requires startup of instance "salesdr1" on database "salesdr"
    Starting instance "salesdr1"...
    ORACLE instance started.
    Database mounted.
    Operation requires startup of instance "sales1" on database "sales"
    Starting instance "sales1"...
    ORACLE instance started.
    Database mounted.
    Switchover succeeded, new primary is "sales"
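If you script this round-trip switchover test, the success line can be parsed to confirm which database became primary after each switchover. The following sketch uses a hypothetical helper name and assumes the DGMGRL output format shown above.

```shell
# Sketch: print the new primary database name from captured DGMGRL
# switchover output (stdin); prints nothing if no success line exists.
switchover_new_primary() {
  sed -n 's/^Switchover succeeded, new primary is "\(.*\)"$/\1/p'
}
```

A wrapper script could assert that the first switchover reports salesdr and the second reports sales before handing the configuration to Sun Cluster Geographic Edition.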