
Oracle® Solaris Cluster Geographic Edition Data Replication Guide for Oracle Data Guard


Updated: June 2017
 
 

How to Prepare Your Standby Database

Perform this task from the nodes where you want Oracle Data Guard to run. These can be nodes of an Oracle Solaris Cluster configuration or a server that is not under Oracle Solaris Cluster control.

  1. Create a backup of the primary database.

    The following example shows how to use the Oracle Recovery Manager (RMAN) utility to create a copy of the primary database that you can restore on the standby cluster, cluster-newyork. The example also shows how to avoid a separate step to create a control file for the standby database. For more information about the options for completing this step, see your Oracle Database documentation.

    oracle (phys-paris-1)$ rman
    RMAN> connect target sys/DBA_password@sales-svc;
    RMAN> connect auxiliary /;
    RMAN> backup device type disk tag 'mybkup' database include current
    2> controlfile for standby;
    RMAN> backup device type disk tag 'mybkup' archivelog all not backed up;
  2. Copy the backup files to the standby system.

    Create the appropriate directory hierarchies on the cluster-newyork cluster and copy the database backup to this cluster. The actual locations that you specify for the files that are shown in the example depend on the specific choices that you made when you configured the database.

    oracle (phys-newyork-1)$ mkdir -p $ORACLE_BASE/admin/salesdr
    oracle (phys-newyork-1)$ cd $ORACLE_BASE/admin/salesdr
    oracle (phys-newyork-1)$ mkdir adump bdump cdump dpdump hdump pfile udump
    Make the directory for the database backup
    oracle (phys-newyork-1)$ mkdir -p /oradata/flash_recovery_area/SALES/backupset/date
    Copy over the files
    oracle (phys-newyork-1)$ cd /oradata/flash_recovery_area/SALES/backupset/date
    oracle (phys-newyork-1)$ scp oracle@phys-paris-1:`pwd`/\* .
    Make the base directory for new database files
    oracle (phys-newyork-1)$ mkdir -p /oradata/SALESDR
  3. Create a pfile parameter file.

    Create a suitable server initialization file for the standby (salesdr) database. The easiest way to create this file is to copy the parameters for the primary database and modify them. The following example shows how to create a pfile parameter file:

    oracle (phys-paris-1)$ sqlplus '/ as sysdba'
    SQL> CREATE PFILE='/tmp/initpfile_for_salesdr.ora' FROM SPFILE;
    File created.
    SQL> quit
  4. Modify the pfile parameter file.

    Change all entries that are particular to the primary cluster to entries that are suitable for the standby cluster, as shown in the following example. Modify entries that are prefixed by an Oracle SID (sales1 or sales2) to use the standby database instance SID names (salesdr1 and salesdr2). Depending on your configuration, you might need to make additional changes.


    Note -  Do not change the db_name parameter, as it must remain sales on both clusters.
    You created these directories previously
    *.audit_file_dest='/oracle/oracle/product/10.2.0/db_1/admin/salesdr/adump'
    *.background_dump_dest='/oracle/oracle/product/10.2.0/db_1/admin/salesdr/bdump'
    *.user_dump_dest='/oracle/oracle/product/10.2.0/db_1/admin/salesdr/udump'
    *.core_dump_dest='/oracle/oracle/product/10.2.0/db_1/admin/salesdr/cdump'
    
    Remove the following entry
    *.control_files='...list primary control files...'
    
    Add this entry
    *.db_unique_name='salesdr'
    
    *.dg_broker_config_file1='/oradata/SALESDR/dr1salesdr.dat'
    *.dg_broker_config_file2='/oradata/SALESDR/dr2salesdr.dat'
    
    *.dispatchers='(PROTOCOL=TCP) (SERVICE=salesdrXDB)'
    
    Switch the client and server entries around, as shown in the following entries
    *.fal_client='salesdr-svc'
    *.fal_server='sales-svc'
    
    *.remote_listener='LISTENERS_SALESDR'
    
    Switch the log archive destinations
    *.log_archive_dest_1='location=use_db_recovery_file_dest arch
    mandatory valid_for=(all_logfiles,all_roles) db_unique_name=salesdr'
    *.log_archive_dest_2='service=sales-svc lgwr sync affirm
    valid_for=(online_logfiles,primary_role) db_unique_name=sales'
  5. Copy the pfile parameter file to the standby system.
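    The copy-and-verify pattern for this step can be sketched as follows. This is a local sandbox illustration only: the temporary directories stand in for /tmp on the two nodes, and in production you would use scp from phys-paris-1 to a cluster-newyork node (the destination path is an assumption).

    ```shell
    # Sandbox illustration: mimic copying the pfile and verifying that it
    # arrived intact. In production the copy would be, for example:
    #   scp /tmp/initpfile_for_salesdr.ora oracle@phys-newyork-1:/tmp/
    SRC=$(mktemp -d)    # stands in for /tmp on phys-paris-1
    DST=$(mktemp -d)    # stands in for /tmp on phys-newyork-1
    printf "*.db_unique_name='salesdr'\n" > "$SRC/initpfile_for_salesdr.ora"
    cp "$SRC/initpfile_for_salesdr.ora" "$DST/"
    # Verify the copy is byte-identical before using it on the standby.
    cmp -s "$SRC/initpfile_for_salesdr.ora" "$DST/initpfile_for_salesdr.ora" \
      && echo "pfile copy verified"
    ```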
  6. Start the standby database and convert the pfile parameter file to an spfile server parameter file.
    1. As the oracle user, log in to one of the cluster-newyork nodes and convert the pfile parameter file to an spfile server parameter file.
      oracle (phys-newyork-1)$ ORACLE_SID=salesdr1 export ORACLE_SID
      oracle (phys-newyork-1)$ sqlplus '/ as sysdba'
      SQL> startup nomount pfile='/tmp/initpfile_for_salesdr.ora';
      SQL> create spfile='/oradata/SALESDR/spfilesalesdr.ora'
      2> from pfile='/tmp/initpfile_for_salesdr.ora';
      SQL> shutdown
    2. Create an ${ORACLE_HOME}/dbs/initsalesdr1.ora file on all cluster-newyork nodes and, in that file, insert the following entry:
      oracle (phys-newyork-1)$ cat ${ORACLE_HOME}/dbs/initsalesdr1.ora
      SPFILE='/oradata/SALESDR/spfilesalesdr.ora'
    3. Restart the database, on one node only, to prepare for restoring the backed-up primary database.
      oracle (phys-newyork-1)$ sqlplus '/ as sysdba'
      You are now starting from the spfile
      SQL> startup nomount
      ORACLE instance started.
      
      Total System Global Area  532676608 bytes
      Fixed Size                  2031416 bytes
      Variable Size             289407176 bytes
      Database Buffers          234881024 bytes
      Redo Buffers                6356992 bytes
  7. Copy the Oracle password file for the primary database for use by the standby database.
    1. Copy the Oracle password file that you created on the cluster-paris cluster.

      Place the file on shared storage on the cluster-newyork cluster.

    2. Create links to this file from each of the cluster-newyork nodes.

      Change the name of each symbolic link to reflect the Oracle SID of the instance on the local standby node.
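
      The linking step above can be sketched as follows. This is a sandbox demonstration with assumed file names; Oracle conventionally looks for the password file as $ORACLE_HOME/dbs/orapw followed by the instance SID, and the shared-storage path is an assumption.

      ```shell
      # Sandbox sketch: link a node-specific password file name to the
      # shared copy. Real paths differ in production.
      DEMO=$(mktemp -d)
      mkdir -p "$DEMO/oradata/SALESDR" "$DEMO/dbs"  # stand-ins for shared storage and $ORACLE_HOME/dbs
      touch "$DEMO/oradata/SALESDR/orapwsales"      # password file copied from cluster-paris
      # On phys-newyork-1, where the instance SID is salesdr1:
      ln -s "$DEMO/oradata/SALESDR/orapwsales" "$DEMO/dbs/orapwsalesdr1"
      # On phys-newyork-2 the link would be named orapwsalesdr2 instead.
      ls -l "$DEMO/dbs/orapwsalesdr1"
      ```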