Oracle® TimesTen In-Memory Database Replication Guide
11g Release 2 (11.2.2)
This chapter describes how to set up and start replication. All of the topics in this chapter apply to replication schemes that are not active standby pairs. Some of the topics in this chapter also apply to active standby pairs.
To set up an active standby pair, see:
This chapter includes the following topics:
This section applies to both active standby pairs and other replication schemes. It describes some of the issues to consider when replicating TimesTen data over a network. The topics include:
The network bandwidth required for TimesTen replication depends on the bulk and frequency of the data being replicated. This discussion explores the types of transactions that characterize the high and low ends of the data range and the network bandwidth required to replicate the data between TimesTen databases.
Table 10-1 provides guidelines for calculating the size of replicated records.

Record type | Size
Update | fixed record overhead + 18 bytes per column updated + size of old column values + size of new column values + size of the primary key or unique key
Delete | fixed record overhead + size of the primary key or unique key
Insert | fixed record overhead + size of the primary key or unique key + size of inserted row
Transactions are sent between replicated databases in batches. A batch is created whenever there is no more data in the transaction log buffer in the master database, or when the current batch is roughly 256K bytes. See "Copying updates between databases" for more information.
TimesTen replication uses the TCP/IP protocol, which is not optimized for a WAN environment. You can improve replication performance over a WAN by installing a third-party "TCP stack" product. If replacing the TCP stack is not a feasible solution, you can reduce the amount of network traffic that the TCP/IP protocol has to deal with by setting the
COMPRESS TRAFFIC attribute in the
CREATE ACTIVE STANDBY PAIR or
CREATE REPLICATION statement. See "Compressing replicated traffic" for details.
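For illustration, traffic compression is specified per store in the STORE clause of the replication scheme. The following is a sketch only; the database and host names are placeholders:

CREATE REPLICATION r1
ELEMENT e1 DATASTORE
MASTER repdb1 ON "host1"
SUBSCRIBER repdb2 ON "host2"
STORE repdb1 ON "host1" COMPRESS TRAFFIC ON;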
See installation information for your platform in Oracle TimesTen In-Memory Database Installation Guide for information about changing TCP/IP kernel parameters for better performance.
In a replication scheme, you need to identify the name of the host on which your database resides. The operating system translates this host name to one or more IP addresses. This section describes how to configure replication so that it uses the correct host names and IP addresses for each host.
This section includes these topics:
When specifying the host for a database in a replication element, always use the name returned by the
hostname command, as replication uses this host name to verify that the current host is involved in the replication scheme. You cannot create a replication scheme that does not include the current host.
If a host contains multiple network interfaces (with different IP addresses), specify which interfaces replication should use with the
ROUTE clause. You must specify a priority for each interface. Replication first tries to connect using the address with the highest priority; if a connection cannot be established, it tries the remaining addresses in order of priority until a connection is established. If a connection to a host fails while using one IP address, replication attempts to reconnect (or fall back) to another IP address, if more than one address has been specified in the ROUTE clause.
Note: Addresses for the ROUTE clause may be specified as either host names or IP addresses. However, if your host has more than one IP address configured for a given host name, configure the ROUTE clause using the IP addresses only, to ensure that replication uses only the IP addresses that you intend.
See "Configuring network operations" for more information.
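For illustration, the following sketch gives each host two interfaces, listed in priority order in the ROUTE clause. All names and addresses are placeholders:

CREATE REPLICATION r1
ELEMENT e1 DATASTORE
MASTER repdb1 ON "host1"
SUBSCRIBER repdb2 ON "host2"
ROUTE MASTER repdb1 ON "host1" SUBSCRIBER repdb2 ON "host2"
MASTERIP "192.168.1.100" PRIORITY 1
MASTERIP "10.10.98.100" PRIORITY 2
SUBSCRIBERIP "192.168.1.200" PRIORITY 1
SUBSCRIBERIP "10.10.98.200" PRIORITY 2;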
When possible, you should use the
ROUTE clause of a replication scheme to identify database hosts and the network interfaces to use for replication. However, if you have a legacy replication configuration that does not use the
ROUTE clause, this section explains how to configure operating system and DNS files for a replication host with multiple network interfaces.
If a host contains multiple network interfaces (with different IP addresses) and replication is not configured with a
ROUTE clause, TimesTen replication tries to connect to the IP addresses in the same order as returned by the
gethostbyname call. It will try to connect using the first address; if a connection cannot be established, it tries the remaining addresses in order until a connection is established. TimesTen replication uses this same sequence each time it establishes a new connection to a host. If a connection to a host fails on one IP address, TimesTen replication attempts to re-connect (or fall back) to another IP address for the host in the same manner described above.
There are two basic ways you can configure a host to use multiple IP addresses on UNIX platforms: DNS or the /etc/hosts file.
Note: If you have multiple network interface cards (NICs), be sure that "multi on" is specified in the /etc/host.conf file. Otherwise,
gethostbyname will not return multiple addresses.
For example, if your machine has two NICs, use the following syntax for your /etc/hosts file:
127.0.0.1 localhost
IP_address_for_NIC_1 official_hostname optional_alias
IP_address_for_NIC_2 official_hostname optional_alias
The host name
official_hostname is the name returned by the hostname command.
When editing the
/etc/hosts file, keep in mind that:
You must log in as
root to change the /etc/hosts file.
There should only be one line per IP address.
There can be multiple alias names on each line.
When there are multiple IP addresses for the same host name, they must be on consecutive lines.
The host name can be up to 30 characters long.
For example, the following entry in the
/etc/hosts file on a UNIX platform describes a server named
Host1 with two IP addresses:
127.0.0.1 localhost
10.10.98.102 Host1
192.168.1.102 Host1
To specify the same configuration for DNS, your entry in the domain zone file would look like:
Host1 IN A 10.10.98.102
      IN A 192.168.1.102
In either case, you only need to specify
Host1 as the host name in your replication scheme and replication will use the first available IP address when establishing a connection.
In an environment in which multiple IP addresses are used, you can also assign multiple host names to a single IP address in order to restrict a replication connection to a specific IP address. For example, you might have an entry in your
/etc/hosts file that looks like:
127.0.0.1 localhost
10.10.98.102 Host1
192.168.1.102 Host1 RepHost1
or a DNS zone file that looks like:
Host1 IN A 10.10.98.102
      IN A 192.168.1.102
RepHost1 IN A 192.168.1.102
If you want to restrict replication connections to IP address 192.168.1.102 for this host, you can specify
RepHost1 as the host name in your replication scheme. Another option is to simply specify the IP address as the host name in the
CREATE REPLICATION statement used to configure your replication scheme.
If a replication configuration is specified using host names rather than IP addresses, replication must be able to translate host names of peers into IP addresses. For this to happen efficiently on Windows, make sure each Windows machine is set up to query either a valid WINS server or a valid DNS server that has correct information about the hosts on the network. In the absence of such servers, static host-to-IP entries can be entered in either:
Without any of these options, a Windows machine resorts to broadcasting to detect peer nodes, which is extremely slow.
You may also encounter extremely slow host name resolution if the Windows machine cannot communicate with the defined WINS servers or DNS servers, or if the host name resolution set up is incorrect on those servers. Use the
ping command to test whether a host can be efficiently located. The
ping command responds immediately if host name resolution is set up properly.
Note: You must be consistent in identifying a database host in a replication scheme. Do not identify a host using its IP address for one database and then use its host name for the same or another database.
By default, the TimesTen main daemon, all subdaemons and all agents use any available address to listen on a socket for requests. You can modify the
ttendaemon.options file to specify an address for communication among the agents and daemons by including a
-listenaddr option. See "Managing TimesTen daemon options" in Oracle TimesTen In-Memory Database Operations Guide for details.
Suppose that your machine has two NICs whose addresses are 10.10.10.100 and 10.10.11.200. The loopback address is 127.0.0.1. Then keep in mind the following as it applies to the replication agent:
If you do not set the
-listenaddr option in the
ttendaemon.options file, then any process can talk to the daemons and agents.
If you set
-listenaddr to 10.10.10.100, then any process on the local host or the 10.10.10 net can talk to daemons and agents on 10.10.10.100. No processes on the 10.10.11 net can talk to the daemons and agents on 10.10.10.100.
If you set
-listenaddr to 127.0.0.1, then only processes on the local host can talk to the daemons and agents. No processes on other hosts can talk to the daemons and agents.
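For illustration, an entry restricting daemon and agent communication to the loopback interface might look like the following sketch; see "Managing TimesTen daemon options" for the authoritative syntax of the ttendaemon.options file:

-listenaddr 127.0.0.1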
Ordinarily, TimesTen replication is able to identify the hosts involved in a replication configuration using normal operating system host name resolution methods. However, in some rare instances, if the host has an unusual host name configuration, TimesTen is unable to determine that the local host matches the host name as specified in the replication scheme. When this occurs, you receive error 8191, "This store is not involved in a replication scheme," when attempting to start replication. In this instance, use the
ttHostNameSet built-in procedure to explicitly indicate to TimesTen that the current database is in fact the database specified in the replication scheme. See "ttHostNameSet" in Oracle TimesTen In-Memory Database Reference for more information.
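For illustration, the built-in procedure can be called from ttIsql; host1 here is a placeholder for the host name used in your replication scheme:

Command> call ttHostNameSet('host1');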
The topics related to setting up your replication environment include:
You can replicate one or more tables on any existing database. If the database you want to replicate does not yet exist, you must first create one, as described in "Managing TimesTen Databases" in Oracle TimesTen In-Memory Database Operations Guide.
After you have identified or created the master database, create a DSN definition for the subscriber database on the target host. Set the connection attributes for the master and subscriber databases as described in "Connection attributes for replicated databases".
After you have defined the DSN for the subscriber, you can populate the subscriber database with the tables to be replicated from the master in one of two ways:
Connect to the database and use SQL statements to create new tables in the subscriber database that match those to be replicated from the master.
Use the ttRepAdmin -duplicate utility to copy the entire contents of the master database to the subscriber. See "Duplicating a master database to a subscriber".
If you wish to configure parallel replication, see "Configuring parallel replication" for information about setting the
ReplicationParallelism and ReplicationApplyOrdering data store attributes.
See "Setting connection attributes for logging" for recommendations for managing the replication log files.
It is possible to replicate between databases with different settings for the
TypeMode data store attribute. However, you must make sure that the underlying data type for each replicated column is the same on each node. See "TypeMode" in Oracle TimesTen In-Memory Database Reference for more information.
In an active standby pair, use the
ReceiverThreads first connection attribute to increase the number of threads that apply changes from the active database to the standby database from 1 to 2. If you set
ReceiverThreads to 2 on the standby, you should also set it to 2 on the active to maintain increased throughput if there is a failover.
You can also set
ReceiverThreads to 2 on one or more read-only subscribers in an active standby pair to increase replication throughput from the standby database.
Databases must be hosted on systems that have two or more CPUs to take advantage of setting this attribute to 2.
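For illustration, the attribute is set in the DSN definition of the databases involved; a sketch of an odbc.ini fragment (the DSN name is a placeholder, and the other attributes the DSN needs are omitted):

[standbyDSN]
ReceiverThreads=2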
By default, replication is performed with a single thread where the nodes in a replication scheme have one log reader, or transmitter thread, on the source database, and one applier, or receiving thread, on the target database. You can increase your performance by configuring parallel replication, which configures multiple threads for sending updates from the source database to the target database and for applying the updates on the target database.
Note: If you enable parallel replication, you cannot execute both DDL and DML statements in the same transaction.
There are two types of parallel replication, each of which is configured with the
ReplicationApplyOrdering and ReplicationParallelism data store creation attributes; both must be set when the database is created. Because
ReplicationApplyOrdering and ReplicationParallelism are data store attributes, they cannot be modified after database creation.
Note: All databases within the replication scheme that use parallel replication must be configured identically, with the same type of parallel replication and the same number of threads or tracks.
The only time you can have different values for parallel replication attributes is during an upgrade. For details, see "Performing an upgrade on databases that use parallel replication" in the Oracle TimesTen In-Memory Database Installation Guide.
The following sections describe both options for parallel replication:
Automatic parallel replication enables you to configure multiple threads that act in parallel to replicate and apply transactional changes to nodes in a replication scheme. Automatic parallel replication enforces transactional dependencies and applies changes in commit order.
Automatic parallel replication is enabled by default with
ReplicationApplyOrdering=0. To configure parallel replication, set ReplicationParallelism to a number from 2 to 32. The number cannot exceed half the value of
LogBufParallelism. This number indicates the number of transmitter threads on the source database and the number of receiver threads on the target database. The default for
ReplicationParallelism is 1, which indicates single-threaded replication.
If ReplicationParallelism is greater than 1, the
LogBufParallelism first connection attribute must be an integral multiple of ReplicationParallelism.
If the replication scheme is an active standby pair that replicates AWT cache groups, the settings for
ReplicationParallelism and the
CacheAwtParallelism data store attributes determine how many threads are used to apply changes in the TimesTen cache tables to the corresponding Oracle tables. See "Configuring parallel propagation to Oracle tables" in Oracle In-Memory Database Cache User's Guide for more information.
If your application has predictable transactional dependencies and does not require the commit order on the target database be the same as the order on the source database, you can increase replication throughput by using user-defined parallel replication, which enables the user to manually divide work across different tracks.
User-defined parallel replication configures multiple threads for sending updates from the source database to the target database and for applying the updates on the target database. The application assigns transactions to tracks. The application specifies which track a transaction belongs to when the transaction starts on the source database. The transactions in each track are applied in the order in which they are received on the target database, but commit order is not maintained for transactions across the different tracks.
Note:Use caution in assigning tracks to transactions that affect tables with foreign key relationships. If transactions on related tables are assigned to different tracks, one of the transactions can be lost because the transactions may be applied out of commit order.
In general, transactions that modify the same table should be assigned to the same replication track. In addition, updates that should be applied in order on the receiving side should use the same track. However, if all transactions insert to a particular table, they can be assigned to different tracks to increase replication throughput. You can split the workload for a table across multiple tracks with a key that ties a row to the same track.
Enable user-defined parallel replication by setting these data store attributes at database creation time:
Set ReplicationApplyOrdering to 1.
Set ReplicationParallelism to a number from 2 to 64. This number indicates the number of transmitter threads on the source database and the number of receiver threads on the target database. The default is 1, which indicates a single thread; parallel replication requires at least two threads.
In addition, assign each transaction to a replication track in one of these ways:
Set the ReplicationTrack general connection attribute to a non-zero number. All transactions issued by the connection are assigned to this track. The value can be any number; TimesTen maps the
ReplicationTrack number for the connection to one of the available parallel replication threads. Thus, the application can use any number to group transactions that should be applied in order. See "ReplicationTrack" in Oracle TimesTen In-Memory Database Reference.
Use the ALTER SESSION SQL statement to set the replication track number for the current connection. See "ALTER SESSION" in Oracle TimesTen In-Memory Database SQL Reference.
Use the TT_REPLICATION_TRACK ODBC connection option for the
SQLSetConnectOption ODBC function. See "Features for use with replication" in Oracle TimesTen In-Memory Database C Developer's Guide.
Use the setReplicationTrack() method of the
TimesTenConnection JDBC class. See "Features for use with replication" in Oracle TimesTen In-Memory Database Java Developer's Guide.
Use the ttConfiguration built-in procedure to return the replication track number for the current connection. Use the
ttLogHolds built-in procedure to verify that multiple tracks are being used.
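For illustration, a connection might assign its transactions to a track and then check the setting from ttIsql; the track number 3 is an arbitrary example:

Command> ALTER SESSION SET replication_track = 3;
Command> call ttConfiguration('ReplicationTrack');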
Do not configure user-defined parallel replication for tables that have an aging policy defined.
Databases configured for user-defined parallel replication cannot contain cache groups.
A database cannot be defined as a propagator when user-defined parallel replication is configured.
User-defined parallel replication is not supported for synchronous replication, including databases with the
RETURN RECEIPT and
RETURN TWOSAFE attributes.
Cross-release replication and migration from a database that does not have user-defined parallel replication enabled to a database that does have user-defined parallel replication enabled is not supported from release 11.2.2.0.0 until 11.2.2.8.0. It is supported from releases earlier than 11.2.2.0.0 and from 11.2.2.8.0 and later. Users of releases from 11.2.2.0.0 up to (but not including) 11.2.2.8.0 can perform an upgrade by first applying an in-place patch release upgrade to 11.2.2.8.0. For details, see "Performing an upgrade when using parallel replication" in the Oracle TimesTen In-Memory Database Installation Guide.
This section includes these topics:
A dedicated subdaemon thread writes the contents of the log buffer to disk periodically. These writes may be synchronous or buffered. The subdaemon thread ensures that the system I/O buffer never fills up with more transaction log data than the value of the
LogFileSize first connection attribute without being synchronized to the log buffer.
If the database is configured with
LogFlushMethod=2, then all writes to the disk are synchronous writes and the data is durably written to disk before the write call returns. If the database is configured with
LogFlushMethod=1, then the writes are buffered unless there is a specific request from an application for synchronous writes.
In addition to the periodic writes, an application can also trigger the subdaemon thread to write the buffer contents to disk. The following are cases where the application triggers a synchronous write to the disk:
When a transaction that requested a durable commit is committed. A transaction can request a durable commit by calling the
ttDurableCommit built-in procedure or by having the
DurableCommits connection attribute set to 1.
When the replication agent sends a batch of transactions to a subscriber and the master has been configured for replication with the
TRANSMIT DURABLE attribute (the default).
When the replication agent periodically executes a durable commit, whether the master database is configured with
TRANSMIT DURABLE or not.
Transactions are also written to disk durably when durable commits are configured as part of the return service failure policies and a failure has occurred.
The size of the log buffer has no influence on the ability of TimesTen to write data to disk under any of the circumstances listed above.
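For illustration, the durable-commit case above can be exercised from ttIsql; note that calling ttDurableCommit does not itself commit the transaction, it only causes the eventual commit to be durable (a sketch assuming autocommit is off):

Command> call ttDurableCommit;
Command> commit;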
In databases that do not use replication, Transaction Log API (XLA), cache groups or incremental backup, unneeded records in the log buffer and unneeded transaction log files are purged each time a checkpoint is initiated, either by the automatic background checkpointing thread or by an application's call to the
ttCkpt or ttCkptBlocking built-in procedures. In a replicated database, transactions remain in the log buffer and transaction log files until the master replication agent confirms they have been fully processed by the subscriber. Only then can the master consider purging them from the log buffer and transaction log files.
A master database transaction log can grow much larger than it would on an unreplicated database if there are changes to its subscriber state. When the subscriber is in the
start state, the master can purge logged data after it receives confirmation that the information has been received by the subscriber. However, if a subscriber becomes unavailable or is in the
pause state, the log on the master database cannot be flushed and the space used for logging can be exhausted. When the log space is exhausted, subsequent updates on the master database are aborted. Use the
ttLogHolds built-in procedure to get information about replication log holds.
For more information about transaction log growth, see "Monitoring accumulation of transaction log files" in Oracle TimesTen In-Memory Database Operations Guide.
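For illustration, the log holds can be inspected from ttIsql; the returned rows identify each hold, including replication holds, and its position in the transaction log:

Command> call ttLogHolds;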
LogBufMB specifies the maximum size of the in-memory log buffer in megabytes. This buffer is flushed to a transaction log file on disk when it becomes full. The minimum size for
LogBufMB is 8 times the value of LogBufParallelism.
You need to establish enough disk space for the transaction log files. There are two settings that control the amount of disk space used by the log:
The LogFileSize setting in the DSN specifies the maximum size of a transaction log file. If logging requirements exceed this value, additional transaction log files with the same maximum size are created. If you set the
LogFileSize to a smaller value than
LogBufMB, TimesTen automatically increases the
LogFileSize to match
LogBufMB. For best performance, set
LogBufMB and LogFileSize to their maximum values.
The log failure threshold setting specifies the maximum number of transaction log files allowed to accumulate before the master assumes a subscriber has failed. The threshold value is the number of transaction log files between the most recently written to transaction log file and the earliest transaction log file being held for the subscriber. For example, if the last record successfully received by all subscribers was in Log File 1 and the last log record written to disk is at the beginning of Log File 4, then replication is at least 2 transaction log files behind (the contents of Log Files 2 and 3). If the threshold value is 2, then the master sets the subscriber to the
failed state after detecting the threshold value had been exceeded. This may take up to 10 seconds. See "Setting the log failure threshold" for more information.
Because transactions are logged to disk, you can use bookmarks to detect the log record identifiers of the update records that have been replicated to subscribers and those that have been written to disk. To view the location of the bookmarks for the subscribers associated with
masterDSN, use the
ttBookmark built-in procedure, as described in "Show replicated log records".
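For illustration, the bookmarks can be displayed from ttIsql while connected to the master; the result shows the disk and replication bookmark positions in the transaction log:

Command> call ttBookmark;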
If a subscriber goes down and then comes back up before the threshold is reached, then replication automatically "catches up" as the committed transactions in the transaction log files following the bookmark are automatically transmitted. However, if the threshold is exceeded, the master sets the subscriber to the
failed state. A failed subscriber must use
ttRepAdmin -duplicate to copy the master database and start over, as described in Chapter 11, "Managing Database Failover and Recovery".
See Oracle TimesTen In-Memory Database Reference for more information about TimesTen connection attributes, built-in procedures and utilities.
Define the replication scheme as described in Chapter 9, "Defining Replication Schemes". Save the
CREATE REPLICATION statement in a SQL file.
After you have described the replication scheme in a SQL file, you can execute the SQL on the database using the
-f option to the
ttIsql utility. The syntax is:
ttIsql -f schemefile.sql -connstr connection_string
If your replication scheme is described in a file called
repscheme.sql, you can execute the file on a DSN, called
masterDSN, by entering:
> ttIsql -f repscheme.sql -connstr "dsn=masterDSN"
Under most circumstances, you should apply the same scheme to all of the replicated databases. You must invoke a separate
ttIsql command on each host to apply the replication scheme.
If your scheme includes the databases
masterDSN on host S1,
subscriber1DSN on host S2, and
subscriber2DSN on host
S3, do the following:
> ttIsql -f repscheme.sql -connstr "dsn=masterDSN"
> ttIsql -f repscheme.sql -connstr "dsn=subscriber1DSN"
> ttIsql -f repscheme.sql -connstr "dsn=subscriber2DSN"
You can also execute the SQL file containing your replication scheme from the
ttIsql command line after connecting to a database. For example:
Command> run repscheme.sql;
The simplest method for populating a subscriber database is to duplicate the contents of the master database. Duplicating a database in this manner is also essential when recovering a failed database, as described in Chapter 11, "Managing Database Failover and Recovery". You can use the
ttRepAdmin -duplicate utility or the
ttRepDuplicateEx C function to duplicate a database.
To duplicate a database, these conditions must be fulfilled:
The instance administrator performs the duplicate operation.
The instance administrator user name must be the same on both instances involved in the duplication.
You must provide the user name and password for a user with the
ADMIN privilege on the source database.
The target DSN cannot include client/server attributes.
To duplicate the contents of a master database to a subscriber database, complete these tasks:
Apply the replication scheme to the master database. See "Applying a replication scheme to a database".
Start the replication agent for the master database. See "Starting and stopping the replication agents".
On the source database (the master), create a user and grant the
ADMIN privilege to the user:
CREATE USER ttuser IDENTIFIED BY ttuser;
User created.
GRANT admin TO ttuser;
Assume the user name of the instance administrator is
timesten. Logged in as
timesten on the target host (the subscriber), duplicate the database masterDSN on host1 to subscriber1DSN:
ttRepAdmin -duplicate -from masterDSN -host host1 subscriber1DSN
Enter internal UID at the remote datastore with ADMIN privileges: ttuser
Enter password of the internal Uid at the remote datastore:
Enter the password ttuser when prompted for the password of the internal user at the remote database.
Note: The host entry can be identified with either the name of the remote host or its TCP/IP address. If you identify hosts using TCP/IP addresses, you must identify the address of the local host (host1 in this example) by using the -localhost option.
You can specify the local and remote network interfaces for the source and target hosts by using the
-localIP and -remoteIP options of
ttRepAdmin -duplicate. If you do not specify one or both network interfaces, TimesTen chooses them.
For details, see "ttRepAdmin" in Oracle TimesTen In-Memory Database Reference.
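For illustration, the earlier duplicate example can be extended with explicit interfaces; the addresses here are placeholders:

ttRepAdmin -duplicate -from masterDSN -host host1 -localIP 192.168.1.200 -remoteIP 192.168.1.100 subscriber1DSN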
Start the replication agent on the subscriber database.
A replication scheme can include up to 128 subscribers. A replication scheme with propagator databases can have up to 128 propagators, and each propagator can have up to 128 subscribers. An active standby pair replication scheme can include up to 127 read-only subscribers. If you are planning a replication scheme that includes a large number of subscribers, then ensure the following:
The log buffer size should result in the value of
LOG_FS_READS in the
SYS.MONITOR table being 0 or close to 0. This ensures that the replication agent does not have to read any log records from disk. If the value of
LOG_FS_READS is increasing, then increase the log buffer size.
CPU resources are adequate. The replication agent on the master database spawns a thread for every subscriber database. Each thread reads and processes the log independently and needs adequate CPU resources to transmit to the subscriber database.
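For illustration, the counter described above can be checked from ttIsql with a query such as:

Command> SELECT log_fs_reads FROM sys.monitor;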
Replication functions across releases only if the database of the more recent version of TimesTen was upgraded using
ttMigrate from a database of the older version of TimesTen. A database created in the current version of TimesTen is not guaranteed to replicate correctly with the older version.
For example, replication between a database created in TimesTen release 6.0 and a database created in TimesTen release 11.2.1 is not supported. However, if one database was created in TimesTen release 6.0, and the peer database was created in TimesTen release 6.0 and then upgraded to TimesTen release 11.2.1, replication between them is supported.
See "Database Upgrades" in Oracle TimesTen In-Memory Database Installation Guide.
After you have defined a replication scheme, you can start the replication agents for each database involved in the replication scheme. You must have the
ADMIN privilege to start or stop a replication agent.
You can start and stop a replication agent by using the
ttAdmin utility with the
-repStart or -repStop option. You can also use the
ttRepStart and ttRepStop built-in procedures to start and stop a replication agent from the
ttIsql command line.
To start the replication agents for the DSNs named masterDSN and subscriberDSN, enter:
ttAdmin -repStart masterDSN
ttAdmin -repStart subscriberDSN
To stop the replication agents, enter:
ttAdmin -repStop masterDSN
ttAdmin -repStop subscriberDSN
To start and stop the replication agent for the DSN named masterDSN, enter:
> ttIsql masterDSN
Command> call ttRepStart;
Command> call ttRepStop;
You can also use the
ttAdmin utility to set the replication restart policy. By default the policy is
manual, which enables you to start and stop the replication agents as described above. Alternatively, you can set the replication restart policy for a database to always or norestart.
Restart Policy | Start replication agent when the TimesTen daemon starts | Restart replication agent on errors or invalidation
always | Yes | Yes
manual | No | Yes
norestart | No | No
Note: The TimesTen daemon manages the replication agents. It must be running to start or stop the replication agents.
When the restart policy is
always, the replication agent is automatically started when the database is loaded into memory. See "Specifying a RAM policy" in Oracle TimesTen In-Memory Database Operations Guide to determine when a database is loaded into memory.
For example, to use ttAdmin to set the replication restart policy to always, enter:
ttAdmin -repPolicy always DSN
To reset the policy back to manual, enter:
ttAdmin -repPolicy manual DSN
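The restart policies can be summarized as a small lookup. This sketch simply encodes the Yes/No behavior described above for illustration; it is not a TimesTen API, and TimesTen itself enforces these rules.

```python
# Sketch: behavior implied by each replication restart policy
# (illustrative only; values mirror the restart-policy table).
RESTART_POLICY = {
    # policy:       starts with daemon,    restarts on error/invalidation
    "always":    {"start_on_daemon": True,  "restart_on_error": True},
    "manual":    {"start_on_daemon": False, "restart_on_error": True},
    "norestart": {"start_on_daemon": False, "restart_on_error": False},
}

def agent_restarts_after_invalidation(policy):
    """Whether the replication agent restarts automatically after invalidation."""
    return RESTART_POLICY[policy]["restart_on_error"]
```

This makes the tradeoff concrete: only norestart prevents the automatic restart (and hence the potentially long first-connect recovery) after an invalidation.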
Following a database invalidation, both the manual and
always policies cause the replication agent to be automatically restarted. When the agent restarts automatically, it is often the first connection to the database. This happens after a fatal error that, for example, requires all applications to disconnect. The first connection to a database usually has to load the most recent checkpoint file and often needs to do recovery. For a very large database, this process may take several minutes. During this period, all activity on the database is blocked so that new connections cannot take place and any old connections cannot finish disconnecting. This may also result in two copies of the database existing at the same time because the old one stays around until all applications have disconnected. For very large databases for which the first-connect time may be significant, you may want to wait for the old database to become inactive first before starting up the new one. You can do this by setting the restart policy to
norestart to specify that the replication agent is not to be automatically restarted. For more information on setting policies that control when the database is loaded into memory, see "Specifying a RAM policy" in Oracle TimesTen In-Memory Database Operations Guide.
The state of a subscriber replication agent is described by its master database. When recovering a failed subscriber database, you must reset the replication state of the subscriber database with respect to the master database it communicates with in a replication scheme. You can reset the state of a subscriber database from either the command line or your program:
See "Monitoring Replication" for information about querying the state of a database.
A master database can set a subscriber database to either the start, pause, or stop state. The database state appears as an integer value in the
STATE column in the
TTREP.REPPEERS table, as shown in Table 10-2.
|State|Value|Description|
|start|0|Replication updates are collected and transmitted to the subscriber database as soon as possible. If replication for the subscriber database is not operational, the updates are saved in the transaction log files until they can be sent.|
|pause|1|Replication updates are retained in the log with no attempt to transmit them. Transmission begins when the state is changed to start.|
|stop|2|Replication updates are discarded without being sent to the subscriber database. Placing a subscriber database in the stop state discards any pending updates from the master's transaction log.|
|failed|4|Replication to a subscriber is considered failed because the threshold limit (log data) has been exceeded. This is a transitional state, set by the system before it sets the state to stop.|
When a master database sets one of its subscribers to the
start state, updates for the subscriber are retained in the master's log. When a subscriber is in the
stop state, updates intended for it are discarded.
When a subscriber is in the
pause state, updates for it are retained in the master's log, but are not transmitted to the subscriber database. When a master transitions a subscriber from
start, the backlog of updates stored in the master's log is transmitted to the subscriber. (There is an exception to this, which is described in Chapter 11, "Managing Database Failover and Recovery".) If a master database is unable to establish a connection to a started subscriber, the master periodically attempts to establish a connection until successful.
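The effect of each subscriber state on the master's handling of updates can be modeled as a toy queue: start transmits, pause retains, stop discards, and a pause-to-start transition flushes the retained backlog. This is an illustration of the rules above, not TimesTen code.

```python
# Toy model of how a master treats updates per subscriber state.
# Illustrative only; TimesTen keeps the backlog in its transaction log.
class SubscriberModel:
    def __init__(self):
        self.state = "start"
        self.retained = []     # backlog held in the master's log
        self.transmitted = []  # updates delivered to the subscriber

    def apply_update(self, update):
        if self.state == "start":
            self.transmitted.append(update)
        elif self.state == "pause":
            self.retained.append(update)   # kept in the log, not sent
        elif self.state == "stop":
            pass                           # discarded without being sent

    def set_state(self, new_state):
        # On pause -> start, the backlog is transmitted to the subscriber.
        if self.state == "pause" and new_state == "start":
            self.transmitted.extend(self.retained)
            self.retained.clear()
        self.state = new_state
```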
Use ttRepAdmin from the command line to direct the
masterds master database to set the state of the
subscriberds subscriber database to the stop state:
ttRepAdmin -dsn masterds -receiver -name subscriberds -state stop
Note: If you have multiple subscribers with the same name on different hosts, use the
-host option of the
ttRepAdmin utility to identify the host for the subscriber that you want to modify.
On the master database, call the
ttRepSubscriberStateSet built-in procedure to set the state of the subscriber database (
subscriberds ON system1) in the
repscheme replication scheme (owned by repl) to the stop state:
Command> CALL ttRepSubscriberStateSet('repscheme', 'repl', 'subscriberds', 'system1', 2);
Only the ttRepSubscriberStateSet built-in procedure can be used to set all of the subscribers of a master to a particular state. The
ttRepAdmin utility does not have any equivalent functionality.
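A small, hypothetical helper can format the CALL statement shown above for use in a script. The scheme, owner, subscriber, and host names are placeholders taken from the example; the helper only builds the statement text, and a real program would submit it through ttIsql or an ODBC/JDBC connection.

```python
# Sketch: format a ttRepSubscriberStateSet call. State 2 = stop,
# per the STATE values in TTREP.REPPEERS. All names are placeholders.
def subscriber_state_call(scheme, owner, subscriber, host, state):
    """Build the CALL statement text for ttRepSubscriberStateSet."""
    return ("CALL ttRepSubscriberStateSet('%s', '%s', '%s', '%s', %d);"
            % (scheme, owner, subscriber, host, state))
```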