Oracle® TimesTen In-Memory Database TimesTen to TimesTen Replication Guide
Release 11.2.1

Part Number E13072-06

5 Administering an Active Standby Pair with Cache Groups

This chapter describes how to administer an active standby pair that replicates cache groups.

For information about managing failover and recovery automatically, see Chapter 6, "Using Oracle Clusterware to Manage Active Standby Pairs".

This chapter includes the following topics:

  • Active standby pairs with cache groups

  • Setting up an active standby pair with a read-only cache group

  • Setting up an active standby pair with an AWT cache group

  • Recovering from a failure of the active data store

  • Recovering from a failure of the standby data store

  • Recovering from the failure of a subscriber data store

  • Reversing the roles of the active and standby data stores

  • Detection of dual active data stores

  • Changing the configuration of an active standby pair with cache groups

  • Using a disaster recovery subscriber in an active standby pair

Active standby pairs with cache groups

An active standby pair that replicates a read-only cache group or an asynchronous writethrough (AWT) cache group can change the role of the cache group automatically as part of failover and recovery. This helps ensure high availability of cache instances with minimal data loss. See "Replicating an AWT cache group" and "Replicating a read-only cache group".

You can also create a special disaster recovery read-only subscriber when you set up active standby replication of an AWT cache group. This special subscriber, located at a remote disaster recovery site, can propagate updates to a second Oracle database, also located at the disaster recovery site. See "Using a disaster recovery subscriber in an active standby pair".

You cannot use an active standby pair to replicate synchronous writethrough (SWT) cache groups. If you are using an active standby pair to replicate a data store that contains SWT cache groups, you must either drop or exclude the SWT cache groups.

Setting up an active standby pair with a read-only cache group

This section describes how to set up an active standby pair that replicates cache tables in a read-only cache group. The active standby pair used as an example in this section is not a cache grid member.

To set up an active standby pair that replicates a local read-only cache group, complete the following tasks:

  1. Create a cache administration user in the Oracle database. See "Create users in the Oracle database" in Oracle In-Memory Database Cache User's Guide.

  2. Create a data store. See "Create a DSN for a TimesTen database" in Oracle In-Memory Database Cache User's Guide.

  3. Set the cache administration user ID and password by calling the ttCacheUidPwdSet built-in procedure. See "Set the cache administration user name and password in the TimesTen database" in Oracle In-Memory Database Cache User's Guide. For example:

    Command> call ttCacheUidPwdSet('orauser','orapwd');
    
  4. Start the cache agent on the data store. Use the ttCacheStart built-in procedure or the ttAdmin -cacheStart utility. For example:

    Command> call ttCacheStart;
    
  5. Use the CREATE CACHE GROUP statement to create the read-only cache group. For example:

    Command> CREATE READONLY CACHE GROUP readcache
           > AUTOREFRESH INTERVAL 5 SECONDS
           > FROM oratt.readtab
           > (keyval NUMBER NOT NULL PRIMARY KEY, str VARCHAR2(32));
    
  6. Ensure that the AUTOREFRESH STATE is set to PAUSED. The autorefresh state is PAUSED by default after cache group creation. You can verify the autorefresh state by executing the ttIsql cachegroups command:

    Command> cachegroups;
    
  7. Create the replication scheme using the CREATE ACTIVE STANDBY PAIR statement.

    For example, suppose master1 and master2 are defined as the master data stores. sub1 and sub2 are defined as the subscriber data stores. The data stores reside on node1, node2, node3, and node4. The return service is RETURN RECEIPT. The replication scheme can be specified as follows:

    Command> CREATE ACTIVE STANDBY PAIR master1 ON "node1", master2 ON "node2"
           > RETURN RECEIPT
           > SUBSCRIBER sub1 ON "node3", sub2 ON "node4"
           > STORE master1 ON "node1" PORT 21000 TIMEOUT 30
           > STORE master2 ON "node2" PORT 20000 TIMEOUT 30;
    
  8. Set up the replication agent policy for master1 and start the replication agent. See "Starting and stopping the replication agents".
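
    For example, one way to do this is to set the restart policy and start the agent with the ttAdmin utility (the manual policy shown here is an illustration; choose the policy appropriate for your site):

    ttAdmin -repPolicy manual -repStart master1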

  9. Set the replication state to ACTIVE by calling the ttRepStateSet built-in procedure on the active data store (master1). For example:

    Command> call ttRepStateSet('ACTIVE');
    
  10. Load the cache group by using the LOAD CACHE GROUP statement. This starts the autorefresh process. For example:

    Command> LOAD CACHE GROUP readcache COMMIT EVERY 256 ROWS;
    
  11. As the instance administrator, duplicate the active data store (master1) to the standby data store (master2). Use the ttRepAdmin -duplicate utility with the -keepCG option to preserve the cache group. Alternatively, you can use the ttRepDuplicateEx C function to duplicate the data store. See "Duplicating a data store". ttRepAdmin prompts for the values of -uid, -pwd, -cacheuid and -cachepwd.

    ttRepAdmin -duplicate -from master1 -host node1 -keepCG "DSN=master2;UID=;PWD="
    
  12. Set up the replication agent policy on master2 and start the replication agent. See "Starting and stopping the replication agents".

  13. The standby database enters the STANDBY state automatically. Wait for master2 to enter the STANDBY state. Call the ttRepStateGet built-in procedure to check the state of master2. For example:

    Command> call ttRepStateGet;
    
  14. Start the cache agent for master2 using the ttCacheStart built-in procedure or the ttAdmin -cacheStart utility. For example:

    Command> call ttCacheStart;
    
  15. As the instance administrator, duplicate the subscribers (sub1 and sub2) from the standby data store (master2). Use the -noKeepCG command line option with ttRepAdmin -duplicate to convert the cache tables to normal TimesTen tables on the subscribers. ttRepAdmin prompts for the values of -uid and -pwd. See "Duplicating a data store". For example:

    ttRepAdmin -duplicate -from master2 -host node2 -nokeepCG "DSN=sub1;UID=;PWD="
    
  16. Set up the replication agent policy on the subscribers and start the replication agent on each of the subscriber stores. See "Starting and stopping the replication agents".

Setting up an active standby pair with an AWT cache group

For detailed instructions for setting up an active standby pair with a global AWT cache group, see "Replicating cache tables" in Oracle In-Memory Database Cache User's Guide. The active standby pair in that section is a cache grid member.

Recovering from a failure of the active data store

This section includes the following topics:

  • Recovering when the standby data store is ready

  • Recovering when the standby data store is not ready

  • Failing back to the original nodes

Recovering when the standby data store is ready

This section describes how to recover the active data store when the standby data store is available and synchronized with the active data store. It includes the following topics:

  • When replication is return receipt or asynchronous

  • When replication is return twosafe

When replication is return receipt or asynchronous

Complete the following tasks:

  1. Stop the replication agent on the failed data store if it has not already been stopped.

  2. On the standby data store, execute ttRepStateSet('ACTIVE'). This changes the role of the data store from STANDBY to ACTIVE. If you are replicating a read-only cache group, this action automatically causes the AUTOREFRESH state to change from PAUSED to ON for this data store.

  3. On the new active data store, execute ttRepStateSave('FAILED', 'failed_store','host_name'), where failed_store is the former active data store that failed. This step is necessary for the new active data store to replicate directly to the subscriber data stores. During normal operation, only the standby data store replicates to the subscribers.
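
    For example, if the failed former active data store is master1 on host node1 (the names used in "Setting up an active standby pair with a read-only cache group"):

    Command> call ttRepStateSave('FAILED', 'master1', 'node1');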

  4. Stop the cache agent on the failed data store if it is not already stopped.

  5. Destroy the failed data store.

  6. Duplicate the new active data store to the new standby data store. You can use either the ttRepAdmin -duplicate utility or the ttRepDuplicateEx C function to duplicate a data store. Use the -keepCG -recoveringNode command line options with ttRepAdmin in order to preserve the cache group. See "Duplicating a data store".
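
    For example, assuming the new active data store is master2 on node2 and the new standby data store is master1:

    ttRepAdmin -duplicate -from master2 -host node2 -keepCG -recoveringNode "DSN=master1;UID=;PWD="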

  7. Set up the replication agent policy on the new standby data store and start the replication agent. See "Starting and stopping the replication agents".

  8. Start the cache agent on the new standby data store.

The standby data store contacts the active data store. The active data store stops sending updates to the subscribers. When the standby data store is fully synchronized with the active data store, it enters the STANDBY state and starts sending updates to the subscribers. The new standby data store takes over processing of the cache group automatically when it enters the STANDBY state.

Note:

You can verify that the standby data store has entered the STANDBY state by using the ttRepStateGet built-in procedure.

When replication is return twosafe

Complete the following tasks:

  1. On the standby data store, execute ttRepStateSet('ACTIVE'). This changes the role of the data store from STANDBY to ACTIVE. If you are replicating a read-only cache group, this action automatically causes the AUTOREFRESH state to change from PAUSED to ON for this data store.

  2. On the new active data store, execute ttRepStateSave('FAILED', 'failed_store','host_name'), where failed_store is the former active data store that failed. This step is necessary for the new active data store to replicate directly to the subscriber data stores. During normal operation, only the standby data store replicates to the subscribers.

  3. Connect to the failed data store. This triggers recovery from the local transaction logs. If data store recovery fails, you must continue from Step 5 of the procedure for recovering when replication is return receipt or asynchronous. See "When replication is return receipt or asynchronous". If you are replicating a read-only cache group, the autorefresh state is automatically set to PAUSED.

  4. Verify that the replication agent for the failed data store has restarted. If it has not restarted, then start the replication agent. See "Starting and stopping the replication agents".

  5. Verify that the cache agent for the failed data store has restarted. If it has not restarted, then start the cache agent.

When the active data store determines that it is fully synchronized with the standby data store, then the standby store enters the STANDBY state and starts sending updates to the subscribers. The new standby data store takes over processing of the cache group automatically when it enters the STANDBY state.

Note:

You can verify that the standby data store has entered the STANDBY state by using the ttRepStateGet built-in procedure.

Recovering when the standby data store is not ready

Consider the following scenarios:

  • The standby data store fails. The active data store fails before the standby comes back up or before the standby has been synchronized with the active data store.

  • The active data store fails. The standby data store becomes ACTIVE, and the rest of the recovery process begins. (See "Recovering from a failure of the active data store".) The new active data store fails before the new standby data store is fully synchronized with it.

In both scenarios, the subscribers may have had more changes applied than the standby data store.

When the active data store fails and the standby data store has not applied all of the changes that were last sent from the active data store, there are two choices for recovery:

  • Recover the active master data store from the local transaction logs.

  • Recover the standby master data store from the local transaction logs.

The choice depends on which data store is available and which is more up to date.

Recover the active data store

  1. Connect to the failed active data store. This triggers recovery from the local transaction logs. If you are replicating a read-only cache group, the autorefresh state is automatically set to PAUSED.

  2. Verify that the replication agent for the failed active data store has restarted. If it has not restarted, then start the replication agent. See "Starting and stopping the replication agents".

  3. Execute ttRepStateSet('ACTIVE') on the newly recovered store. If you are replicating a read-only cache group, this action automatically causes the AUTOREFRESH state to change from PAUSED to ON for this data store.

  4. Verify that the cache agent for the failed data store has restarted. If it has not restarted, then start the cache agent.

  5. Duplicate the active data store to the standby data store. You can use either the ttRepAdmin -duplicate utility or the ttRepDuplicateEx C function to duplicate a data store. Use the -keepCG command line option with ttRepAdmin in order to preserve the cache group. See "Duplicating a data store".

  6. Set up the replication agent policy on the standby data store and start the replication agent. See "Starting and stopping the replication agents".

  7. Wait for the standby data store to enter the STANDBY state. Use the ttRepStateGet procedure to check the state.

  8. Start the cache agent on the standby data store using the ttCacheStart procedure or the ttAdmin -cacheStart utility.

  9. Duplicate all of the subscribers from the standby data store. See "Copying a master data store to a subscriber". Use the -noKeepCG command line option with ttRepAdmin in order to convert the cache group to regular TimesTen tables on the subscribers.
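
    For example, assuming the standby data store is master2 on node2:

    ttRepAdmin -duplicate -from master2 -host node2 -nokeepCG "DSN=sub1;UID=;PWD="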

  10. Set up the replication agent policy on the subscribers and start the agent on each of the subscriber stores. See "Starting and stopping the replication agents".

Recover the standby data store

  1. Connect to the failed standby data store. This triggers recovery from the local transaction logs. If you are replicating a read-only cache group, the autorefresh state is automatically set to PAUSED.

  2. If the replication agent for the standby data store has automatically restarted, you must stop the replication agent. See "Starting and stopping the replication agents".

  3. If the cache agent has automatically restarted, stop the cache agent.

  4. Drop the replication configuration using the DROP ACTIVE STANDBY PAIR statement.
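
    For example:

    Command> DROP ACTIVE STANDBY PAIR;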

  5. Drop and re-create all cache groups using the DROP CACHE GROUP and CREATE CACHE GROUP statements.

  6. Re-create the replication configuration using the CREATE ACTIVE STANDBY PAIR statement.
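
    For example, assuming the store and host names used in "Setting up an active standby pair with a read-only cache group":

    Command> CREATE ACTIVE STANDBY PAIR master1 ON "node1", master2 ON "node2"
           > RETURN RECEIPT
           > SUBSCRIBER sub1 ON "node3", sub2 ON "node4";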

  7. Set up the replication agent policy and start the replication agent. See "Starting and stopping the replication agents".

  8. Execute ttRepStateSet('ACTIVE') on the master data store, giving it the ACTIVE role. If you are replicating a read-only cache group, this action automatically causes the AUTOREFRESH state to change from PAUSED to ON for this data store.

  9. Start the cache agent on the active data store.

  10. Duplicate the active data store to the standby data store. You can use either the ttRepAdmin -duplicate utility or the ttRepDuplicateEx C function to duplicate a data store. Use the -keepCG command line option with ttRepAdmin in order to preserve the cache group. See "Duplicating a data store".

  11. Set up the replication agent policy on the standby data store and start the replication agent. See "Starting and stopping the replication agents".

  12. Wait for the standby data store to enter the STANDBY state. Use the ttRepStateGet procedure to check the state.

  13. Start the cache agent for the standby data store using the ttCacheStart procedure or the ttAdmin -cacheStart utility.

  14. Duplicate all of the subscribers from the standby data store. See "Copying a master data store to a subscriber". Use the -noKeepCG command line option with ttRepAdmin in order to convert the cache group to regular TimesTen tables on the subscribers.

  15. Set up the replication agent policy on the subscribers and start the agent on each of the subscriber stores. See "Starting and stopping the replication agents".

Failing back to the original nodes

After a successful failover, you may wish to fail back so that the active data store and the standby data store are on their original nodes. See "Reversing the roles of the active and standby data stores" for instructions.

Recovering from a failure of the standby data store

To recover from a failure of the standby data store, complete the following tasks:

  1. Detect the standby data store failure.

  2. If return twosafe service is enabled, the failure of the standby data store may prevent a transaction in progress from being committed on the active data store, resulting in error 8170, "Receipt or commit acknowledgement not returned in the specified timeout interval". If so, then call the ttRepSyncSet procedure with a localAction parameter of 2 (COMMIT) and commit the transaction again. For example:

    call ttRepSyncSet( null, null, 2);
    commit;
    
  3. Execute ttRepStateSave('FAILED','standby_store','host_name') on the active data store. After this, as long as the standby data store is unavailable, updates to the active data store are replicated directly to the subscriber data stores. Subscriber stores may also be duplicated directly from the active.
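
    For example, if the failed standby data store is master2 on host node2 (the names used in "Setting up an active standby pair with a read-only cache group"):

    Command> call ttRepStateSave('FAILED', 'master2', 'node2');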

  4. If the replication agent for the standby data store has automatically restarted, stop the replication agent. See "Starting and stopping the replication agents".

  5. If the cache agent has automatically restarted, stop the cache agent.

  6. Recover the standby data store in one of the following ways:

    • Connect to the standby data store. This triggers recovery from the local transaction logs.

    • Duplicate the standby data store from the active data store. You can use either the ttRepAdmin -duplicate utility or the ttRepDuplicateEx C function to duplicate a data store. Use the -keepCG -recoveringNode command line options with ttRepAdmin in order to preserve the cache group. See "Duplicating a data store".

    The amount of time that the standby data store has been down and the amount of transaction logs that need to be applied from the active data store determine the method of recovery that you should use.

  7. Set up the replication agent policy and start the replication agent. See "Starting and stopping the replication agents".

  8. Start the cache agent.

The standby data store enters the STANDBY state after the active data store determines that the two master data stores have been synchronized.

Note:

You can verify that the standby data store has entered the STANDBY state by using the ttRepStateGet built-in procedure.

Recovering from the failure of a subscriber data store

If a subscriber data store fails, then you can recover it by one of the following methods:

  • If the standby data store is available, duplicate the subscriber from the standby data store.

  • If the standby data store is down or in recovery, then duplicate the subscriber from the active data store.

After the subscriber data store has been recovered, then set up the replication agent policy and start the replication agent. See "Starting and stopping the replication agents".

Reversing the roles of the active and standby data stores

To change the active data store's role to that of a standby data store and vice versa:

  1. Pause any applications that are generating updates on the current active data store.

  2. Execute ttRepSubscriberWait on the active data store, with the DSN and host of the current standby data store as input parameters. This ensures that all updates have been transmitted to the current standby data store.
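
    For example, assuming the current standby data store is master2 on host node2, and using a 120-second timeout:

    Command> call ttRepSubscriberWait(null, null, 'master2', 'node2', 120);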

  3. Stop the replication agent on the current active data store. See "Starting and stopping the replication agents".

  4. Execute ttGridDetach on the active data store to detach nodes from the cache grid.
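
    For example:

    Command> call ttGridDetach;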

  5. Stop the cache agent on the active data store.

  6. Execute ttRepDeactivate on the current active data store. This puts the store in the IDLE state. If you are replicating a read-only cache group, this action automatically causes the AUTOREFRESH state to change from ON to PAUSED for this data store.

  7. Execute ttRepStateSet('ACTIVE') on the current standby data store. This store now acts as the active data store in the active standby pair. If you are replicating a read-only cache group, this automatically causes the AUTOREFRESH state to change from PAUSED to ON for this data store.

  8. Configure the replication agent policy as needed and start the replication agent on the old active data store. See "Starting and stopping the replication agents". Use the ttRepStateGet procedure to determine when the data store's state has changed from IDLE to STANDBY. The data store now acts as the standby data store in the active standby pair.

  9. Start the cache agent on the old active data store.

  10. Execute ttGridAttach on the active data store to re-attach nodes to the cache grid.

  11. Resume any applications that were paused in Step 1.

Detection of dual active data stores

See "Detection of dual active data stores". There is no difference for active standby pairs that replicate cache groups.

Changing the configuration of an active standby pair with cache groups

You can change an active standby pair by:

  • Adding or dropping a subscriber data store

  • Altering store attributes (only the PORT and TIMEOUT attributes can be altered)

  • Including tables or cache groups in the replication scheme

  • Excluding tables or cache groups from the replication scheme

Make these changes on the active data store. After you have changed the replication scheme on the active data store, it no longer replicates updates to the standby data store or to the subscribers. You must re-create the standby data store and the subscribers and restart the replication agents.

Use the ALTER ACTIVE STANDBY PAIR statement to change the active standby pair.

To change an active standby pair, complete the following tasks:

  1. Stop the replication agent on the active data store. See "Starting and stopping the replication agents".

  2. Stop the cache agent on the active data store.

  3. Use the ALTER ACTIVE STANDBY PAIR statement to make changes to the replication scheme.

  4. Start the replication agent on the active data store. See "Starting and stopping the replication agents".

  5. Start the cache agent on the active data store.

  6. Destroy the standby data store and the subscribers.

  7. Duplicate the active data store to the standby data store. You can use either the ttRepAdmin -duplicate utility or the ttRepDuplicateEx C function to duplicate a data store. Use the -keepCG command line option with ttRepAdmin in order to preserve the cache group. See "Duplicating a data store".

  8. Set up the replication agent policy on the standby data store and start the replication agent. See "Starting and stopping the replication agents".

  9. Wait for the standby data store to enter the STANDBY state. Use the ttRepStateGet procedure to check the state.

  10. Start the cache agent for the standby data store using the ttCacheStart procedure or the ttAdmin -cacheStart utility.

  11. Duplicate all of the subscribers from the standby data store. See "Copying a master data store to a subscriber". Use the -noKeepCG command line option with ttRepAdmin in order to convert the cache group to regular TimesTen tables on the subscribers. See "Duplicating a data store".

  12. Set up the replication agent policy on the subscribers and start the agent on each of the subscriber stores. See "Starting and stopping the replication agents".

Example 5-1 Adding a subscriber to an active standby pair

Add a subscriber data store to the active standby pair.

ALTER ACTIVE STANDBY PAIR
ADD SUBSCRIBER sub1;

Example 5-2 Dropping subscribers from an active standby pair

Drop subscriber data stores from the active standby pair.

ALTER ACTIVE STANDBY PAIR
DROP SUBSCRIBER sub1
DROP SUBSCRIBER sub2;

Example 5-3 Changing the PORT and TIMEOUT settings for subscribers

Alter the PORT and TIMEOUT settings for subscribers sub1 and sub2.

ALTER ACTIVE STANDBY PAIR
ALTER STORE sub1 SET PORT 23000 TIMEOUT 180
ALTER STORE sub2 SET PORT 23000 TIMEOUT 180;

Example 5-4 Adding tables and a cache group to an active standby pair

Add two tables and a cache group to the active standby pair.

ALTER ACTIVE STANDBY PAIR
INCLUDE TABLE tab1, tab2
INCLUDE CACHE GROUP cg0;

Using a disaster recovery subscriber in an active standby pair

TimesTen active standby pair replication provides high availability by allowing for fast switching between data stores within a data center. This includes the ability to automatically change which data store propagates changes to an Oracle database using AWT cache groups. However, for additional high availability across data centers, you may require the ability to recover from a failure of an entire site, which can include a failure of both TimesTen master data stores in the active standby pair as well as the Oracle database used for the cache groups.

You can recover from a complete site failure by creating a special disaster recovery read-only subscriber as part of the active standby pair replication scheme. The standby data store sends updates to cache group tables on the read-only subscriber. This special subscriber is located at a remote disaster recovery site and can propagate updates to a second Oracle database, also located at the disaster recovery site. The disaster recovery subscriber can take over as the active in a new active standby pair at the disaster recovery site if the primary site suffers a complete failure. Any applications may then connect to the disaster recovery site and continue operating, with minimal interruption of service.

Requirements for using a disaster recovery subscriber with an active standby pair

To use a disaster recovery subscriber, you must:

  • Use an active standby pair configuration with AWT cache groups at the primary site. The active standby pair can also include read-only cache groups in the replication scheme. The read-only cache groups are converted to regular tables on the disaster recovery subscriber. The AWT cache group tables remain AWT cache group tables on the disaster recovery subscriber.

  • Have a continuous WAN connection from the primary site to the disaster recovery site. This connection should have at least enough bandwidth to guarantee that the normal volume of transactions can be replicated to the disaster recovery subscriber at a reasonable pace.

  • Configure an Oracle database at the disaster recovery site to include tables with the same schema as the database at the primary site. Note that this database is intended only for capturing the replicated updates from the primary site, and if any data exists in tables written to by the cache groups when the disaster recovery subscriber is created, that data is deleted.

  • Have the same cache group administrator user ID and password at both the primary and the disaster recovery site.

Though it is not absolutely required, you should have a second TimesTen data store configured at the disaster recovery site. This data store can take on the role of a standby data store, in the event that the disaster recovery subscriber is promoted to an active data store after the primary site fails.

Rolling out a disaster recovery subscriber

To create a disaster recovery subscriber, follow these steps:

  1. Create an active standby pair with AWT cache groups at the primary site. The active standby pair can also include read-only cache groups. The read-only cache groups are converted to regular tables when the disaster recovery subscriber is rolled out.

  2. Create the disaster recovery subscriber at the disaster recovery site using the ttRepAdmin utility with the -duplicate and -cacheInitDR options. You must also specify the cache group administrator and password for the Oracle database at the disaster recovery site using the -cacheUid and -cachePwd options.

    If your data store includes multiple cache groups, you may improve the efficiency of the duplicate operation by using the -nThreads option to specify the number of threads that are spawned to flush the cache groups in parallel. Each thread flushes an entire cache group to Oracle and then moves on to the next cache group, if any remain to be flushed. If a value is not specified for -nThreads, only one flushing thread is spawned.

    For example, duplicate the standby data store mast2, on the system with the host name primary and the cache user ID system and password manager, to the disaster recovery subscriber drsub, and using two cache group flushing threads. ttRepAdmin prompts for the values of -uid, -pwd, -cacheUid and -cachePwd.

    ttRepAdmin -duplicate -from mast2 -host primary -cacheInitDR -nThreads 2 "DSN=drsub;UID=;PWD=;"
    

    If you use the ttRepDuplicateEx function in C, you must set the TT_REPDUPE_INITCACHEDR flag in ttRepDuplicateExArg.flags and may optionally specify a value for ttRepDuplicateExArg.nThreads4InitDR:

    int                 rc;
    ttUtilHandle        utilHandle;
    ttRepDuplicateExArg arg;
    memset( &arg, 0, sizeof( arg ) );
    arg.size = sizeof( ttRepDuplicateExArg );
    arg.flags = TT_REPDUPE_INITCACHEDR;
    arg.nThreads4InitDR = 2;
    arg.uid = "ttuser";
    arg.pwd = "ttuser";
    arg.cacheuid = "system";
    arg.cachepwd = "manager";
    arg.localHost = "disaster";
    rc = ttRepDuplicateEx( utilHandle, "DSN=drsub",
                           "mast2", "primary", &arg );
    

    After the subscriber is duplicated, TimesTen automatically configures the asynchronous writethrough replication scheme that propagates updates from the cache groups to the Oracle database, truncates the tables in the Oracle database that correspond to the cache groups in TimesTen, and then flushes all of the data in the cache groups to the Oracle database.

  3. If you wish to set the failure threshold for the disaster recovery subscriber, call the ttCacheAWTThresholdSet built-in procedure and specify the number of transaction log files that can accumulate before the disaster recovery subscriber is considered either dead or too far behind to catch up.
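
    For example, to allow up to 10 transaction log files to accumulate before the subscriber is considered failed (the threshold value shown is an illustration; choose a value based on your transaction log volume):

    Command> call ttCacheAWTThresholdSet(10);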

    If one or both master data stores had a failure threshold configured before the disaster recovery subscriber was created, then the disaster recovery subscriber inherits the failure threshold value when it is created with the ttRepAdmin -duplicate -cacheInitDR command. If the master data stores have different failure thresholds, then the higher value is used for the disaster recovery subscriber.

    For more information about the failure threshold, see "Setting the log failure threshold".

  4. Start the replication agent for the disaster recovery subscriber using the ttRepStart procedure or the ttAdmin command with the option -repstart. For example:

    ttAdmin -repstart drsub
    

    Updates are now replicated from the standby data store to the disaster recovery subscriber, which then propagates the updates to the Oracle database at the disaster recovery site.

Switching over to the disaster recovery site

When the primary site has failed, you can switch over to the disaster recovery site in one of two ways. If your goal is to minimize risk of data loss at the disaster recovery site, you may roll out a new active standby pair using the disaster recovery subscriber as the active data store. If the goal is to absolutely minimize the downtime of your applications, at the risk of data loss if the disaster recovery data store later fails, you may instead choose to drop the replication scheme from the disaster recovery subscriber and use it as a single non-replicating data store. You may deploy an active standby pair at the disaster recovery site later.

Creating a new active standby pair after switching to the disaster recovery site

  1. Any read-only applications may be redirected to the disaster recovery subscriber immediately. Applications that make updates to the data store must not be redirected until the new active data store is ready (Step 9).

  2. Ensure that all of the recent updates to the cache groups have been propagated to the Oracle database using the ttRepSubscriberWait procedure or the ttRepAdmin command with the -wait option.

    ttRepSubscriberWait( null, null, '_ORACLE', null, 600 );
    

    If ttRepSubscriberWait returns 0x01, indicating a timeout, you may need to investigate to determine why the cache groups are not finished propagating before continuing to Step 3.

  3. Stop the replication agent on the disaster recovery subscriber using the ttRepStop procedure or the ttAdmin command with the -repstop option. For example, to stop the replication agent for the subscriber drsub, use:

    call ttRepStop;
    
  4. Drop the active standby pair replication scheme on the subscriber using the DROP ACTIVE STANDBY PAIR statement. For example:

    DROP ACTIVE STANDBY PAIR;
    
  5. If there are tables on the disaster recovery subscriber that were converted from read-only cache group tables on the active data store, drop the tables on the disaster recovery subscriber.

  6. Create the read-only cache groups on the disaster recovery subscriber. Ensure that the autorefresh state is set to PAUSED.
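
    For example, assuming a hypothetical cache group named readcache over an Oracle table oratt.readtab (all names and column definitions here are illustrative), the statement might look like this:

    CREATE READONLY CACHE GROUP readcache
      AUTOREFRESH MODE INCREMENTAL INTERVAL 5 SECONDS STATE PAUSED
      FROM oratt.readtab
        (keyval NUMBER NOT NULL PRIMARY KEY,
         str    VARCHAR2(32));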

  7. Create a new active standby pair replication scheme using the CREATE ACTIVE STANDBY PAIR statement, specifying the disaster recovery subscriber as the active data store. For example, to create a new active standby pair with the former subscriber drsub as the active and the new data store drstandby as the standby, and using the return twosafe return service, use:

    CREATE ACTIVE STANDBY PAIR drsub, drstandby RETURN TWOSAFE;
    
  8. Set the new active standby data store to the ACTIVE state using the ttRepStateSet procedure. For example, on the data store drsub in this example, execute:

    call ttRepStateSet( 'ACTIVE' );
    
  9. Any applications that must write to the TimesTen data store may now be redirected to the new active data store.

  10. If you are replicating a read-only cache group, load the cache group using the LOAD CACHE GROUP statement to begin the autorefresh process. You may also load the cache group if you are replicating an AWT cache group, although it is not required.
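
    For example, to load a hypothetical read-only cache group named readcache, committing periodically during the load:

    LOAD CACHE GROUP readcache COMMIT EVERY 256 ROWS;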

  11. Duplicate the active data store to the standby data store. You can use either the ttRepAdmin -duplicate utility or the ttRepDuplicateEx C function to duplicate a data store. Use the -keepCG command line option with ttRepAdmin in order to preserve the cache group. See "Duplicating a data store".
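
    For example, assuming the active data store drsub resides on a host named drhost1, the new standby DSN is drstandby, and the TimesTen and cache administration credentials shown are illustrative, the duplicate operation might look like this:

    ttRepAdmin -duplicate -from drsub -host drhost1 -uid ttuser -pwd ttpwd
      -keepCG -cacheUid cacheuser -cachePwd cachepwd "DSN=drstandby"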

  12. Set up the replication agent policy on the standby data store and start the replication agent. See "Starting and stopping the replication agents".
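
    For example, to set the replication restart policy to manual and then start the replication agent on a hypothetical standby DSN drstandby:

    ttAdmin -repPolicy manual drstandby
    ttAdmin -repStart drstandby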

  13. Wait for the standby data store to enter the STANDBY state. Use the ttRepStateGet procedure to check the state.
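
    For example, from a ttIsql session connected to the standby data store:

    call ttRepStateGet;

    The call returns STANDBY once the data store has entered the STANDBY state.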

  14. Start the cache agent for the standby data store using the ttCacheStart procedure or the ttAdmin -cacheStart utility.
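
    For example, to start the cache agent for a hypothetical standby DSN drstandby:

    ttAdmin -cacheStart drstandby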

  15. Duplicate all of the subscribers from the standby data store. See "Copying a master data store to a subscriber". Use the -noKeepCG command line option with ttRepAdmin in order to convert the cache group to regular TimesTen tables on the subscribers.
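
    For example, assuming the standby data store drstandby on a host named drhost2 and an illustrative subscriber DSN sub1:

    ttRepAdmin -duplicate -from drstandby -host drhost2 -uid ttuser -pwd ttpwd
      -noKeepCG "DSN=sub1"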

  16. Set up the replication agent policy on the subscribers and start the agent on each of the subscriber stores. See "Starting and stopping the replication agents".
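
    For example, on the illustrative subscriber DSN sub1:

    ttAdmin -repPolicy manual sub1
    ttAdmin -repStart sub1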

Switching over to a single data store

  1. Any read-only applications may be redirected to the disaster recovery subscriber immediately. Applications that make updates to the data store cannot be redirected until Step 7.

  2. Stop the replication agent on the disaster recovery subscriber using the ttRepStop procedure or the ttAdmin command with the -repstop option. For example, to stop the replication agent for the subscriber drsub, use:

    call ttRepStop;
    
  3. Drop the active standby pair replication scheme on the subscriber using the DROP ACTIVE STANDBY PAIR statement. For example:

    DROP ACTIVE STANDBY PAIR;
    
  4. If there are tables on the disaster recovery subscriber that were converted from read-only cache group tables on the active data store, drop the tables on the disaster recovery subscriber.

  5. Create the read-only cache groups on the disaster recovery subscriber.
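
    For example, a hypothetical read-only cache group (all names and column definitions here are illustrative) might be recreated as follows; when no STATE clause is specified, the autorefresh state defaults to PAUSED:

    CREATE READONLY CACHE GROUP readcache
      AUTOREFRESH MODE INCREMENTAL INTERVAL 5 SECONDS
      FROM oratt.readtab
        (keyval NUMBER NOT NULL PRIMARY KEY,
         str    VARCHAR2(32));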

  6. Although there is no longer an active standby pair configured, AWT cache groups require the replication agent to be started. Start the replication agent on the data store using the ttRepStart procedure or the ttAdmin command with the -repstart option. For example, to start the replication agent for the data store drsub, use:

    call ttRepStart;
    
  7. Any applications that must write to a TimesTen data store may now be redirected to this data store.

    Note:

    You may choose to roll out an active standby pair at the disaster recovery site at a later time. You may do this by following the steps in "Creating a new active standby pair after switching to the disaster recovery site", starting at Step 2 and skipping Step 4.

Returning to the original configuration at the primary site

When the primary site is usable again, you may wish to move the working active standby pair from the disaster recovery site back to the primary site. You can do this with a minimal interruption of service by reversing the process that was used to create and switch over to the original disaster recovery site. Follow these steps:

  1. Destroy the original active data store at the primary site, if necessary, using the ttDestroy utility. For example, to destroy a data store called mast1, use:

    ttDestroy mast1
    
  2. Create a disaster recovery subscriber at the primary site, following the steps detailed in "Rolling out a disaster recovery subscriber". Use the original active data store for the new disaster recovery subscriber.

  3. Switch over to the new disaster recovery subscriber at the primary site, as detailed in "Switching over to the disaster recovery site". Roll out the standby data store as well.

  4. Roll out a new disaster recovery subscriber at the disaster recovery site, as detailed in "Rolling out a disaster recovery subscriber".