11 Setting Up a Replicated System

This chapter describes how to set up and start replication.

Setting up the replication environment

The following sections describe how to set up your replication environment:

Establishing the databases

You can replicate one or more tables on any existing database. If the database you want to replicate does not yet exist, you must first create one, as described in "Managing TimesTen Databases" in Oracle TimesTen In-Memory Database Operations Guide.

After you have identified or created the master database, create a DSN definition for the subscriber database on the target host. Set the connection attributes for the master and subscriber databases as described in "Connection attributes for replicated databases".

After you have defined the DSN for the subscriber, you can populate the subscriber database with the tables to be replicated from the master in one of two ways:

  • Connect to the database and use SQL statements to create new tables in the subscriber database that match those to be replicated from the master (see the sketch after this list).

  • Use the ttRepAdmin -duplicate utility to copy the entire contents of the master database to the subscriber. See "Duplicating a master database to a subscriber".
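
For example, to follow the first approach, you could connect to the subscriber with ttIsql and create a table whose definition matches the corresponding table on the master. This is a minimal sketch; the DSN name subscriber1DSN and the employees table are illustrative:

ttIsql subscriber1DSN
Command> CREATE TABLE employees (emp_id NUMBER NOT NULL PRIMARY KEY, emp_name VARCHAR2(100));

The owner, table name, and column definitions of the table on the subscriber must match those of the table being replicated from the master.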

Connection attributes for replicated databases

Databases that replicate to each other must have the same DatabaseCharacterSet data store attribute. TimesTen does not perform any character set conversion between replicated databases.
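
For example, the DSN definitions for the master and subscriber could both specify the same character set in sys.odbc.ini. This is an illustrative sketch; the DSN names, data store paths, and sizes are placeholders:

[masterDSN]
DataStore=/disk1/databases/masterDSN
DatabaseCharacterSet=AL32UTF8
PermSize=512

[subscriber1DSN]
DataStore=/disk1/databases/subscriber1DSN
DatabaseCharacterSet=AL32UTF8
PermSize=512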

If you want to configure parallel replication, see "Configuring parallel replication" for information about setting the ReplicationParallelism and ReplicationApplyOrdering data store attributes.

See "Setting connection attributes for logging" for recommendations for managing the replication log files.

It is possible to replicate between databases with different settings for the TypeMode data store attribute. However, you must make sure that the underlying data type for each replicated column is the same on each node. See "TypeMode" in Oracle TimesTen In-Memory Database Reference for more information.

In an active standby pair, use the ReceiverThreads first connection attribute to increase the number of threads that apply changes from the active database to the standby database from 1 to 2. If you set ReceiverThreads to 2 on the standby, you should also set it to 2 on the active to maintain increased throughput if there is a failover.

You can also set ReceiverThreads to 2 on one or more read-only subscribers in an active standby pair to increase replication throughput from the standby database.

Databases must be hosted on systems that have two or more CPUs to take advantage of setting this attribute to 2.
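
For example, on multiprocessor hosts you might set ReceiverThreads=2 in the DSN definitions of both the active and the standby databases. The DSN names and other values in this sketch are illustrative:

[activeDSN]
DataStore=/disk1/databases/activeDSN
DatabaseCharacterSet=AL32UTF8
ReceiverThreads=2

[standbyDSN]
DataStore=/disk1/databases/standbyDSN
DatabaseCharacterSet=AL32UTF8
ReceiverThreads=2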

Configuring parallel replication

By default, replication is performed with a single thread where the nodes in a replication scheme have one log reader, or transmitter thread, on the source database, and one applier, or receiving thread, on the target database. You can increase your performance by configuring parallel replication, which configures multiple threads for sending updates from the source database to the target database and for applying the updates on the target database. These threads act in parallel to replicate and apply transactional changes to nodes in a replication scheme. By default, parallel replication enforces transactional dependencies and applies changes in commit order; however, you can disable enforcement of the commit order.

Note:

If you enable parallel replication, you cannot execute both DDL and DML statements in the same transaction.

Parallel replication options:

  • Automatic parallel replication: Parallel replication over multiple threads that automatically enforces transactional dependencies and applies all changes in commit order. This is the default.

  • Automatic parallel replication with disabled commit dependencies: Parallel replication over multiple threads that automatically enforces transactional dependencies, but does not enforce transactions to be committed in the same order on the subscriber database as on the master database.

  • User-defined parallel replication: For applications that use a classic replication scheme, have very predictable transactional dependencies, and do not require that the commit order on the receiver is the same as that on the originating database. You can specify the number of transaction tracks and apply specific transactions to each track. All tracks are read, transmitted and applied in parallel.

These options are configured with the ReplicationApplyOrdering and ReplicationParallelism data store creation attributes, which must be set when the database is created.

Note:

All databases within the replication scheme that use parallel replication must be configured identically with the same type of parallel replication and the same number of threads or tracks.

The only time you can have different values for parallel replication attributes is during an upgrade. For details, see "Upgrades when using parallel replication" in the Oracle TimesTen In-Memory Database Installation Guide.

The following sections describe the options for parallel replication:

Configuring automatic parallel replication

Automatic parallel replication enables you to configure multiple threads that act in parallel to replicate and apply transactional changes to nodes in either a classic or an active standby pair replication scheme. Automatic parallel replication enforces transactional dependencies and applies changes in commit order.

Enable automatic parallel replication by setting these data store attributes at database creation time:

  • Set ReplicationApplyOrdering=0, which is also the default.

  • Set ReplicationParallelism to a number from 2 to 32. The number cannot exceed half the value of LogBufParallelism. This number indicates the number of transmitter threads on the source database and the number of receiver threads on the target database. However, if you are using single-threaded replication, set ReplicationParallelism to 1, which is also the default.

Note:

If ReplicationParallelism is greater than 1, the LogBufParallelism first connection attribute must be an integral multiple of ReplicationParallelism.
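
For example, a database created with the following DSN attributes uses automatic parallel replication with four transmitter threads on the source and four receiver threads on the target. In this sketch (the DSN name, path, and values are illustrative), ReplicationParallelism=4 does not exceed half the value of LogBufParallelism=8, and 8 is an integral multiple of 4:

[masterDSN]
DataStore=/disk1/databases/masterDSN
DatabaseCharacterSet=AL32UTF8
LogBufParallelism=8
ReplicationApplyOrdering=0
ReplicationParallelism=4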

If the replication scheme is an active standby pair that replicates AWT cache groups, the settings for ReplicationApplyOrdering, ReplicationParallelism, and the CacheAWTParallelism data store attributes determine how many threads are used to apply changes in the TimesTen cache tables to the corresponding Oracle database tables. See "Configuring parallel propagation to the Oracle database tables" in Oracle TimesTen Application-Tier Database Cache User's Guide for more information.

For more information on these data store attributes, see "ReplicationParallelism", "ReplicationApplyOrdering", and "LogBufParallelism" in the Oracle TimesTen In-Memory Database Reference.

Configuring automatic parallel replication with disabled commit dependencies

In order to enforce transactional dependencies and ensure that changes are applied in commit order, automatic parallel replication normally tracks:

  • Begin dependencies: Operations that force one transaction to execute before another, such as an insert of a row followed by a delete of that same row.

  • Commit dependencies: Order in which transactions are committed so that they are committed on a subscriber in the same order as on the master database.

While parallel replication improves performance by utilizing multiple threads, you can further increase throughput performance when using automatic parallel replication if the transactions do not require enforcement of commit dependencies. That is, if your application has predictable transactional dependencies and does not require the commit order on the target database be the same as the order on the source database, you can relax the enforcement of the commit dependencies and still maintain transactional correctness. For example, if separate transactions are working on separate tables, you do not need to enforce commit dependencies.

By relaxing the requirement for tracking commit dependencies, you can improve the performance for automatic parallel replication. When you do not enforce commit dependencies, all non-DDL transactions may commit on the subscribers in a different order than how they were originally executed on the master. Begin dependencies are always enforced to prevent apply order anomalies.

You can only disable commit dependencies for automatic parallel replication for an active standby pair that uses asynchronous replication and does not contain cache groups. You can set the following data store attributes at database creation time:

  • Set ReplicationApplyOrdering=2 when you create the TimesTen database.

  • Set ReplicationParallelism to a number from 2 to 32. The number cannot exceed half the value of LogBufParallelism. This number indicates the number of transmitter threads on the source database and the number of receiver threads on the target database. However, if you are using single-threaded replication, set ReplicationParallelism to 1, which is also the default.

Note:

If ReplicationParallelism is greater than 1, the LogBufParallelism first connection attribute must be an integral multiple of ReplicationParallelism.

While this option improves performance, it requires additional space in the temporary region: 16 MB plus an additional 16 MB for each configured replication track (the ReplicationParallelism connection attribute setting). For example, if the ReplicationParallelism connection attribute is set to 10, then this feature requires 16 MB + 160 MB = 176 MB of additional temporary region space.
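
For example, an active standby pair database that disables commit dependencies might be created with attributes such as the following (the DSN name, path, and values are illustrative). With ReplicationParallelism=4, plan for roughly 16 MB + (4 x 16 MB) = 80 MB of additional temporary region space:

[activeDSN]
DataStore=/disk1/databases/activeDSN
DatabaseCharacterSet=AL32UTF8
LogBufParallelism=8
ReplicationApplyOrdering=2
ReplicationParallelism=4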

When you use automatic parallel replication and disable commit dependencies, some of the tracks may move ahead of the others. This is known as drift between the parallel replication tracks. You can limit the amount of drift between the replication tracks by setting the ParReplMaxDrift configuration parameter with the ttDBConfig built-in procedure:

Command> call ttDBConfig('ParReplMaxDrift', '30');

The example sets the allowed drift to 30 seconds between the replication tracks, after which TimesTen causes all replication tracks to catch up to each other. For more details, see "ttDBConfig" in the Oracle TimesTen In-Memory Database Reference.

Specifying replication tracks within an automatic parallel replication environment

In general, automatic parallel replication decides over which thread (track) to replicate each transaction from the master to the subscriber. Thus, you do not have to manually decide how to divide work across different tracks.

However, with dependent transactions, you may achieve better performance by manually assigning them to the same track. The application can specify which track a transaction belongs to when the transaction starts on the source database, using either the ReplicationTrack connection attribute or the ALTER SESSION SET REPLICATION_TRACK statement; after that, all transactions for this connection use this track. The transactions for this track are applied in the order in which they are received on the target database, but commit order is not maintained for transactions across the different tracks. If you decide to specify the track for a transaction, make sure that you distribute the workload evenly across the tracks.

Updates that should be applied in order on the receiving side should use the same track. You can spread operations on a table across separate tracks based on key values. For example, in a telecom billing application, you can use a hash of the account number to set the track, so that all transactions for a given account are sent on the same track while the accounts are spread across the available tracks.

TimesTen still computes and enforces dependencies to make sure that dependent transactions are applied in the correct order on the receiving side.

The application assigns transactions to tracks by one of these methods:

  • Set the ReplicationTrack general connection attribute to a nonzero number. All transactions issued by the connection are assigned to this track. The value can be any number. TimesTen maps the ReplicationTrack number for this connection to one of the available parallel replication threads. Thus, the application can use any number to group transactions that should be applied in order. See "ReplicationTrack" in Oracle TimesTen In-Memory Database Reference.

  • Use the ALTER SESSION SQL statement to set the replication track number for the current connection. See "ALTER SESSION" in Oracle TimesTen In-Memory Database SQL Reference.

  • Use the TT_REPLICATION_TRACK ODBC connection option for the SQLSetConnectOption ODBC function. See "Features for use with replication" in Oracle TimesTen In-Memory Database C Developer's Guide.

  • Use the setReplicationTrack() method of the TimesTenConnection JDBC class. See "Features for use with replication" in Oracle TimesTen In-Memory Database Java Developer's Guide.

Use the ttConfiguration built-in procedure to return the replication track number for the current connection. Use the ttLogHolds built-in procedure to verify that multiple tracks are being used.
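
For example, a connection that should keep its transactions on one track could set the track number for its session and then confirm the setting. This ttIsql sketch uses an arbitrary track number of 5 and an illustrative DSN name:

ttIsql masterDSN
Command> ALTER SESSION SET REPLICATION_TRACK = 5;
Command> call ttConfiguration;

The ttConfiguration output lists the connection attributes, including the current ReplicationTrack value. Alternatively, the track could be assigned for the life of the connection by including ReplicationTrack=5 in the connection string or DSN definition.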

Configuring user-defined parallel replication for classic replication schemes

If your application uses a classic replication scheme, has predictable transactional dependencies, and does not require the commit order on the target database be the same as the order on the source database, then you can increase replication throughput by using user-defined parallel replication to manually divide work across different tracks.

User-defined parallel replication configures multiple threads for sending updates from the source database to the target database and for applying the updates on the target database. The application always assigns transactions to tracks. The application specifies which track a transaction belongs to when the transaction starts on the source database. The transactions in each track are applied in the order in which they are received on the target database, but commit order is not maintained for transactions across the different tracks.

Note:

Use caution in assigning tracks to transactions that affect tables with foreign key relationships. If transactions on related tables are assigned to different tracks, one of the transactions can be lost because the transactions may be applied out of commit order.

In general, transactions that modify the same table should be assigned to the same replication track. In addition, updates that should be applied in order on the receiving side should use the same track. However, if all transactions insert to a particular table, they can be assigned to different tracks to increase replication throughput. You can split the workload for a table across multiple tracks with a key that ties a row to the same track.

User-defined parallel replication is used only for classic replication schemes.

Enable user-defined parallel replication by setting these data store attributes at database creation time:

  • Set ReplicationApplyOrdering to 1.

  • Set ReplicationParallelism to a number from 1 to 64. This number indicates the number of transmitter threads on the source database and the number of receiver threads on the target database. For single-threaded replication, set it to 1 (the default); to use parallel replication, set it to a number from 2 to 64.

In addition, the application needs to assign transactions to tracks with the same method as described in "Specifying replication tracks within an automatic parallel replication environment". The only difference is that a user-defined parallel replication scheme always specifies the track to use for the transactions of the connection.
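
For example, a database intended for user-defined parallel replication with four tracks might be created with the following attributes (the DSN name, path, and values are illustrative); each connection then assigns its own track, for instance with ALTER SESSION SET REPLICATION_TRACK or the ReplicationTrack connection attribute:

[masterDSN]
DataStore=/disk1/databases/masterDSN
DatabaseCharacterSet=AL32UTF8
ReplicationApplyOrdering=1
ReplicationParallelism=4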

Restrictions on user-defined parallel replication
  • Do not configure user-defined parallel replication for tables that have an aging policy defined.

  • Databases configured for user-defined parallel replication cannot contain cache groups.

  • A database cannot be defined as a propagator when user-defined parallel replication is configured.

  • User-defined parallel replication is not supported for use within an active standby pair replication scheme.

  • User-defined parallel replication is not supported for synchronous replication, including databases with the RETURN RECEIPT and RETURN TWOSAFE attributes.

  • Cross-release replication and migration from a database that does not have user-defined parallel replication enabled to a database that does have user-defined parallel replication enabled is not supported from release 11.2.1.6.0 until 11.2.1.8.0. It is supported from releases earlier than 11.2.1.6.0 and from 11.2.1.8.0 and later. Users of releases from 11.2.1.6.0 to 11.2.1.8.0 can perform an upgrade by first applying an in-place patch release upgrade to 11.2.1.8.0. For details, see "Upgrades when using parallel replication" in the Oracle TimesTen In-Memory Database Installation Guide.

Managing the transaction log on a replicated database

This section includes these topics:

About log buffer flushing

A dedicated subdaemon thread writes the contents of the log buffer to disk periodically. These writes may be synchronous or buffered. The subdaemon thread ensures that the system I/O buffer never fills up with more transaction log data than the value of the LogFileSize first connection attribute without being synchronized to the log buffer.

If the database is configured with LogFlushMethod=2, then all writes to the disk are synchronous writes and the data is durably written to disk before the write call returns. If the database is configured with LogFlushMethod=1, then the writes are buffered unless there is a specific request from an application for synchronous writes.

In addition to the periodic writes, an application can also trigger the subdaemon thread to write the buffer contents to disk. The following are cases where the application triggers a synchronous write to the disk:

  • When a transaction that requested a durable commit is committed. A transaction can request a durable commit by calling the ttDurableCommit built-in procedure or by having the DurableCommits connection attribute set to 1.

  • When the replication agent sends a batch of transactions to a subscriber and the master has been configured for replication with the TRANSMIT DURABLE attribute (the default).

  • When the replication agent periodically executes a durable commit, whether the master database is configured with TRANSMIT DURABLE or not.

Transactions are also written to disk durably when durable commits are configured as part of the return service failure policies and a failure has occurred.

The size of the log buffer has no influence on the ability of TimesTen to write data to disk under any of the circumstances listed above.
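
As an illustration of the first case listed above, an application can request a durable commit for a single transaction with the ttDurableCommit built-in procedure, which does not itself commit the transaction but makes the next commit of the current transaction durable. The following ttIsql sketch uses a hypothetical accounts table; alternatively, setting the DurableCommits connection attribute to 1 makes every commit on the connection durable:

Command> autocommit 0;
Command> UPDATE accounts SET balance = balance - 100 WHERE acct_id = 1;
Command> call ttDurableCommit;
Command> commit;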

About transaction log growth on a master database

In databases that do not use replication, Transaction Log API (XLA), cache groups or incremental backup, unneeded records in the log buffer and unneeded transaction log files are purged each time a checkpoint is initiated, either by the automatic background checkpointing thread or by an application's call to the ttCkpt or ttCkptBlocking built-in procedures. In a replicated database, transactions remain in the log buffer and transaction log files until the master replication agent confirms they have been fully processed by the subscriber. Only then can the master consider purging them from the log buffer and transaction log files.

A master database transaction log can grow much larger than it would on an unreplicated database if there are changes to its subscriber state. When the subscriber is in the start state, the master can purge logged data after it receives confirmation that the information has been received by the subscriber. However, if a subscriber becomes unavailable or is in the pause state, the log on the master database cannot be flushed and the space used for logging can be exhausted. When the log space is exhausted, subsequent updates on the master database are aborted. Use the ttLogHolds built-in procedure to get information about replication log holds.
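
For example, you can call the ttLogHolds built-in procedure from ttIsql on the master; replication holds are listed together with the subscriber that is holding them, which helps identify a paused or unavailable subscriber that is preventing transaction log files from being purged:

Command> call ttLogHolds;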

Note:

For more information about transaction log growth, see "Monitoring accumulation of transaction log files" in Oracle TimesTen In-Memory Database Operations Guide. For more details on the ttLogHolds built-in procedure, see "Monitor replication with the ttLogHolds built-in procedure" or "ttLogHolds" in the Oracle TimesTen In-Memory Database Reference.

Setting connection attributes for logging

LogBufMB specifies the maximum size of the in-memory log buffer in megabytes. This buffer is flushed to a transaction log file on the disk when it becomes full. The minimum size for LogBufMB is 8 times the value of LogBufParallelism.

You need to establish enough disk space for the transaction log files. There are two settings that control the amount of disk space used by the log:

  • The LogFileSize setting in the DSN specifies the maximum size of a transaction log file. If logging requirements exceed this value, additional transaction log files with the same maximum size are created. For best performance, set LogBufMB and LogFileSize to their maximum values.

  • The log failure threshold setting specifies the maximum number of transaction log files allowed to accumulate before the master assumes a subscriber has failed. The threshold value is the number of transaction log files between the transaction log file most recently written to and the earliest transaction log file being held for the subscriber. For example, if the last record successfully received by all subscribers was in Log File 1 and the last log record written to disk is at the beginning of Log File 4, then replication is at least 2 transaction log files behind (the contents of Log Files 2 and 3). If the threshold value is 2, then the master sets the subscriber to the failed state after detecting that the threshold value has been exceeded. This may take up to 10 seconds. See "Setting the transaction log failure threshold" for more information.
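
For example, the log buffer and transaction log file sizes might be set in the DSN definition as follows (the DSN name, path, and sizes are illustrative; with LogBufParallelism=8, LogBufMB must be at least 64):

[masterDSN]
DataStore=/disk1/databases/masterDSN
DatabaseCharacterSet=AL32UTF8
LogBufParallelism=8
LogBufMB=1024
LogFileSize=1024

The log failure threshold, by contrast, is not a connection attribute; it is specified in the replication scheme itself, for example as FAILTHRESHOLD 10 in the STORE clause for the master database. See "Setting the transaction log failure threshold".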

Because transactions are logged to disk, you can use bookmarks to detect the log record identifiers of the update records that have been replicated to subscribers and those that have been written to disk. To view the location of the bookmarks for the subscribers associated with masterDSN, use the ttBookmark built-in procedure, as described in "Show replicated log records".
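
For example, from ttIsql connected to the master database:

Command> call ttBookmark;

The result includes the most recently written log record, the most recent log record forced to disk, and the replication hold, which is the oldest log record still needed by a subscriber.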

If a subscriber goes down and then comes back up before the threshold is reached, then replication automatically "catches up" as the committed transactions in the transaction log files following the bookmark are automatically transmitted. However, if the threshold is exceeded, the master sets the subscriber to the failed state. A failed subscriber must use ttRepAdmin -duplicate to copy the master database and start over, as described in Chapter 15, "Managing Database Failover and Recovery".

See Oracle TimesTen In-Memory Database Reference for more information about TimesTen connection attributes, built-in procedures and utilities.

Duplicating a master database to a subscriber

The simplest method for populating a subscriber database is to duplicate the contents of the master database. Duplicating a database in this manner is also essential when recovering a failed database, as described in Chapter 15, "Managing Database Failover and Recovery". You can use the ttRepAdmin -duplicate utility or the ttRepDuplicateEx C function to duplicate a database.

To duplicate a database, these conditions must be fulfilled:

  • The instance administrator performs the duplicate operation.

  • The instance administrator user name must be the same on both instances involved in the duplication.

  • You must provide the user name and password for a user with the ADMIN privilege on the source database.

  • The target DSN cannot include client/server attributes.

To duplicate the contents of a master database to a subscriber database, complete these tasks:

  1. Create or alter a replication scheme to include the new subscriber database and its host. See "Defining a classic replication scheme" or "Creating and adding a subscriber database to a classic replication scheme".

  2. Apply the replication scheme to the master database. See "Applying a classic replication scheme to a database".

  3. Start the replication agent for the master database. See "Starting and stopping the replication agents".

  4. On the source database (the master), create a user and grant the ADMIN privilege to the user:

    CREATE USER ttuser IDENTIFIED BY ttuser;
    User created.
    
    GRANT admin TO ttuser;
    
  5. Assume the user name of the instance administrator is timesten. While logged in as timesten on the target host (the subscriber), duplicate the database masterDSN on host1 to subscriber1DSN:

    ttRepAdmin -duplicate -from masterDSN -host host1 subscriber1DSN
    
    Enter internal UID at the remote datastore with ADMIN privileges: ttuser 
    Enter password of the internal Uid at the remote datastore:
    

    Enter ttuser when prompted for the password of the internal user at the remote database.

    Note:

    The host entry can be identified with either the name of the remote host or its TCP/IP address. If you identify hosts using TCP/IP addresses, you must identify the address of the local host (host1 in this example) by using the -localhost option.

    You can specify the local and remote network interfaces for the source and target hosts by using the -localIP and -remoteIP options of ttRepAdmin -duplicate. If you do not specify one or both network interfaces, TimesTen chooses them.

    For details, see "ttRepAdmin" in Oracle TimesTen In-Memory Database Reference.

  6. Start the replication agent on the subscriber database.

Configuring a large number of subscribers

A replication scheme can include up to 128 subscribers. A replication scheme with propagator databases can have up to 128 propagators, and each propagator can have up to 128 subscribers. An active standby pair replication scheme can include up to 127 read-only subscribers. If you are planning a replication scheme that includes a large number of subscribers, then ensure the following:

  • The log buffer size should result in the value of LOG_FS_READS in the SYS.MONITOR table being 0 or close to 0. This ensures that the replication agent does not have to read any log records from disk. If the value of LOG_FS_READS is increasing, then increase the log buffer size (see the query after this list).

  • CPU resources are adequate. The replication agent on the master database spawns a thread for every subscriber database. Each thread reads and processes the log independently and needs adequate CPU resources to transmit to the subscriber database.
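
For example, to check the first point, you can query the cumulative number of file system reads performed by the log manager; a value that keeps growing indicates that the log buffer is too small:

Command> SELECT log_fs_reads FROM sys.monitor;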

Replicating databases across releases

Replication functions across releases only if the database of the more recent version of TimesTen was upgraded using ttMigrate from a database of the older version of TimesTen. A database created in the current version of TimesTen is not guaranteed to replicate correctly with the older version.

For example, replication between a database created in TimesTen release 6.0 and a database created in TimesTen release 11.2.1 is not supported. However, if one database was created in TimesTen release 6.0, and the peer database was created in TimesTen release 6.0 and then upgraded to TimesTen release 11.2.1, replication between them is supported.

See "TimesTen Upgrades" in Oracle TimesTen In-Memory Database Installation Guide.

Starting and stopping the replication agents

After you have defined a replication scheme, you can start the replication agents for each database involved in the replication scheme. You must have the ADMIN privilege to start or stop a replication agent.

You can start and stop a replication agent by using the ttAdmin utility with the -repStart or -repStop option. You can also use the ttRepStart and ttRepStop built-in procedures to start and stop a replication agent from the ttIsql command line.

Example 11-1 Starting and stopping the replication agent with ttAdmin

To start the replication agents for the DSNs named masterDSN and subscriberDSN, enter:

ttAdmin -repStart masterDSN
ttAdmin -repStart subscriberDSN

To stop the replication agents, enter:

ttAdmin -repStop masterDSN
ttAdmin -repStop subscriberDSN

Example 11-2 Starting and stopping the replication agent from ttIsql

To start and stop the replication agent for the DSN named masterDSN, enter:

> ttIsql masterDSN
Command> call ttRepStart;
Command> call ttRepStop;

You can also use the ttAdmin utility to set the replication restart policy. By default the policy is manual, which enables you to start and stop the replication agents as described above. Alternatively, you can set the replication restart policy for a database to always or norestart.

Restart Policy   Start replication agent when     Restart replication agent on
                 the TimesTen daemon starts       errors or invalidation

always           Yes                              Yes
manual           No                               Yes
norestart        No                               No

Note:

The TimesTen daemon manages the replication agents. It must be running to start or stop the replication agents.

When the restart policy is always, the replication agent is automatically started when the database is loaded into memory. See "Specifying a RAM policy" in Oracle TimesTen In-Memory Database Operations Guide to determine when a database is loaded into memory.

Example 11-3 Using ttAdmin to set the restart policy

To use ttAdmin to set the replication restart policy to always, enter:

ttAdmin -repPolicy always DSN

To reset the policy back to manual, enter:

ttAdmin -repPolicy manual DSN

Following a database invalidation, both the manual and always policies cause the replication agent to be automatically restarted. When the agent restarts automatically, it is often the first connection to the database. This happens after a fatal error that, for example, requires all applications to disconnect. The first connection to a database usually has to load the most recent checkpoint file and often needs to perform recovery. For a very large database, this process may take several minutes. During this period, all activity on the database is blocked: new connections cannot be made and old connections cannot finish disconnecting. Two copies of the database may also exist at the same time, because the old one remains until all applications have disconnected.

For very large databases for which the first-connect time may be significant, you may want to wait for the old database to become inactive before starting the new one. To do this, set the restart policy to norestart so that the replication agent is not restarted automatically. For more information on setting policies that control when a database is reloaded into memory, see "Specifying a RAM policy" in Oracle TimesTen In-Memory Database Operations Guide.

Setting the replication state of subscribers

The state of a subscriber replication agent is described by its master database. When recovering a failed subscriber database, you must reset the replication state of the subscriber database with respect to the master database it communicates with in a replication scheme. You can reset the state of a subscriber database from either the command line or your program:

  • From the command line, use ttRepAdmin -state to direct a master database to reset the replication state of one of its subscriber databases.

  • From ttIsql, call the ttRepSubscriberStateSet built-in procedure to direct a master database to reset the replication state of one or all of its subscriber databases.

See Chapter 12, "Monitoring Replication" for information about querying the state of a database.

A master database can set a subscriber database to either the start, pause, or stop states. The database state appears as an integer value in the STATE column in the TTREP.REPPEERS table, as shown in Table 11-1.

Table 11-1 Database states

start (STATE value: 0)

Replication updates are collected and transmitted to the subscriber database as soon as possible. If replication for the subscriber database is not operational, the updates are saved in the transaction log files until they can be sent.

pause (STATE value: 1)

Replication updates are retained in the log with no attempt to transmit them. Transmission begins when the state is changed to start.

stop (STATE value: 2)

Replication updates are discarded without being sent to the subscriber database. Placing a subscriber database in the stop state discards any pending updates from the master's transaction log.

WARNING: If you are planning on restarting this subscriber, updates are not stored between the stop and the restart. Therefore, when you restart, the subscriber does not contain all of the updates from the master. If you are planning to restart, pause the subscriber instead of stopping it.

failed (STATE value: 4)

Replication to a subscriber is considered failed because the threshold limit (log data) has been exceeded. This state is set by the system and is a transitional state before the system sets the state to stop. Applications that connect to a failed database receive a warning. See "General failover and recovery procedures" for more information.


When a master database sets one of its subscribers to the start state, updates for the subscriber are retained in the master's log. When a subscriber is in the stop state, updates intended for it are discarded.

When a subscriber is in the pause state, updates for it are retained in the master's log, but are not transmitted to the subscriber database. When a master transitions a subscriber from pause to start, the backlog of updates stored in the master's log is transmitted to the subscriber. (There is an exception to this, which is described in Chapter 15, "Managing Database Failover and Recovery".) If a master database is unable to establish a connection to a started subscriber, the master periodically attempts to establish a connection until successful.

Example 11-4 Using ttRepAdmin to set the subscriber state

To use ttRepAdmin from the command line to direct the masterds master database to set the state of the subscriberds subscriber database to stop:

ttRepAdmin -dsn masterds -receiver -name subscriberds -state stop

Note:

If you have multiple subscribers with the same name on different hosts, use the -host option of the ttRepAdmin utility to identify the host for the subscriber that you want to modify.

Example 11-5 Using ttRepSubscriberStateSet to set the subscriber state

On the master database, call the ttRepSubscriberStateSet built-in procedure to set the state of the subscriber database (subscriberds ON system1) in the repscheme replication scheme to stop:

Command> CALL ttRepSubscriberStateSet('repscheme', 'repl',
          'subscriberds', 'system1', 2);

Only ttRepSubscriberStateSet can be used to set all of the subscribers of a master to a particular state. The ttRepAdmin utility does not have any equivalent functionality.