Managing the Transaction Log on a Replicated Database

On a replicated database, the transaction log requires additional management because log records must be retained until replication has processed them.

This section includes these topics:

  • About Log Buffer Flushing

  • About Transaction Log Growth on a Master Database

  • Setting Connection Attributes for Logging

About Log Buffer Flushing

A dedicated subdaemon thread writes the contents of the log buffer to the file system periodically. These write operations may be synchronous or buffered.

The subdaemon thread ensures that the system I/O buffer never holds more transaction log data than the value of the LogFileSize first connection attribute without that data being synchronized to the file system.

If the database is configured with LogFlushMethod=2, then all write operations to the file system are synchronous write operations and the data is durably written to the file system before the write call returns. If the database is configured with LogFlushMethod=1, then the write operations are buffered unless there is a specific request from an application for synchronous write operations.
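For example, LogFlushMethod is set as a first connection attribute in the DSN definition. A minimal sketch, in which the DSN name and path are illustrative rather than taken from this document:

```
[masterdb]
# Hypothetical sys.odbc.ini entry; name and path are illustrative.
DataStore=/disk1/databases/masterdb
LogFlushMethod=2    # 2 = synchronous writes; 1 = buffered writes (the default)
```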

In addition to the periodic write operations, an application can also trigger the subdaemon thread to write the buffer contents to the file system. The following are cases where the application triggers a synchronous write operation to the file system:

  • When a transaction that requested a durable commit is committed. A transaction can request a durable commit by calling the ttDurableCommit built-in procedure or by having the DurableCommits connection attribute set to 1.

  • When the replication agent sends a batch of transactions to a subscriber and the master has been configured for replication with the TRANSMIT DURABLE attribute (the default).

  • When the replication agent periodically runs a durable commit, whether the master database is configured with TRANSMIT DURABLE or not.

Transactions are also written to the file system durably when durable commits are configured as part of the return service failure policies and a failure has occurred.

The size of the log buffer has no influence on the ability of TimesTen to write data to the file system under any of the circumstances listed above.
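The durable-commit case above can be sketched in SQL. Assuming a connection with autocommit disabled, an application can request that its next commit be written durably by calling the ttDurableCommit built-in procedure (the table and column names here are illustrative):

```sql
-- Make a change that must survive a failure.
UPDATE accounts SET balance = balance - 100 WHERE acct_id = 1;

-- Request a durable commit for this transaction. ttDurableCommit does not
-- itself commit; it marks the next commit as durable.
CALL ttDurableCommit;

-- This commit triggers a synchronous write of the log buffer.
COMMIT;
```

Alternatively, setting the DurableCommits connection attribute to 1 makes every commit on that connection durable.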

About Transaction Log Growth on a Master Database

In databases that do not use replication, the Transaction Log API (XLA), cache groups, or incremental backup, unneeded records in the log buffer and unneeded transaction log files are purged each time a checkpoint is initiated.

The unneeded transaction log files are purged either by the automatic background checkpointing thread or by an application's call to the ttCkpt or ttCkptBlocking built-in procedures.
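For example, an application can initiate a checkpoint explicitly with either built-in procedure; a sketch in SQL:

```sql
-- Initiate a fuzzy checkpoint; does not block other transactions.
CALL ttCkpt;

-- Or initiate a blocking (transaction-consistent) checkpoint.
CALL ttCkptBlocking;
```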

In a replicated database, transactions remain in the log buffer and transaction log files until the master replication agent confirms they have been fully processed by the subscriber. Only then can the master consider purging them from the log buffer and transaction log files.

A master database transaction log can grow much larger than it would on an unreplicated database if there are changes to its subscriber state. When the subscriber is in the start state, the master can purge logged data after it receives confirmation that the information has been received by the subscriber. However, if a subscriber becomes unavailable or is in the pause state, the log on the master database cannot be flushed and the space used for logging can be exhausted. When the log space is exhausted, subsequent updates on the master database are aborted. Select from the SYS.GV$LOG_HOLDS or SYS.V$LOG_HOLDS system views or call the ttLogHolds built-in procedure to get information about replication log holds.

Note:

See Monitoring Accumulation of Transaction Log Files in Oracle TimesTen In-Memory Database Operations Guide.

See Monitor Replication From the Replication Log Holds in this book, SYS.GV$LOG_HOLDS or SYS.V$LOG_HOLDS in the Oracle TimesTen In-Memory Database System Tables and Views Reference, or ttLogHolds in the Oracle TimesTen In-Memory Database Reference.
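For example, an administrator can check which operations are holding the transaction log by querying the view or calling the built-in procedure. Because the output columns vary by release, this sketch selects all columns:

```sql
-- List all current log holds, including replication holds.
SELECT * FROM SYS.V$LOG_HOLDS;

-- Equivalent information from the built-in procedure.
CALL ttLogHolds;
```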

Setting Connection Attributes for Logging

LogBufMB specifies the maximum size of the in-memory log buffer in megabytes. This buffer is flushed to a transaction log file when it becomes full.

The minimum size for LogBufMB is 8 times the value of LogBufParallelism.
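These attributes are set together in the DSN definition. A minimal sketch, with illustrative values:

```
[masterdb]
# Hypothetical sys.odbc.ini entry; values are illustrative.
LogBufParallelism=4   # minimum LogBufMB would be 8 x 4 = 32
LogBufMB=1024
LogFileSize=1024
```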

You need to establish enough space for the transaction log files. There are two settings that control the amount of space used by the log:

  • The LogFileSize setting in the DSN specifies the maximum size of a transaction log file. If logging requirements exceed this value, additional transaction log files with the same maximum size are created. For best performance, set LogBufMB and LogFileSize to their maximum values.

  • The log failure threshold setting specifies the maximum number of transaction log files allowed to accumulate before the master assumes a subscriber has failed. The threshold value is the number of transaction log files between the most recently written transaction log file and the earliest transaction log file being held for the subscriber. For example, if the last record successfully received by all subscribers was in Log File 1 and the last log record written to the file system is at the beginning of Log File 4, then replication is at least two transaction log files behind (the contents of Log Files 2 and 3). If the threshold value is 2, then the master sets the subscriber to the failed state after detecting that the threshold has been exceeded, which may take up to 10 seconds. See Setting the Transaction Log Failure Threshold.
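The failure threshold is specified with the FAILTHRESHOLD clause of the STORE attributes in a replication scheme. A sketch under the assumption of a simple two-node scheme; the database names and host names are illustrative:

```sql
-- Master fails the subscriber once the replication log backlog exceeds
-- two transaction log files.
CREATE REPLICATION r1
  ELEMENT e1 DATASTORE
    MASTER masterds ON "host1"
    SUBSCRIBER subscriberds ON "host2"
  STORE masterds ON "host1" FAILTHRESHOLD 2;
```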

Because transactions are logged to the file system, you can use bookmarks to detect the log record identifiers of the update records that have been replicated to subscribers and those that have been written to the file system. To view the location of the bookmarks for the subscribers associated with masterDSN, use the ttBookmark built-in procedure, as described in Show Replicated Log Records.
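For example, the bookmark locations can be retrieved with a single call; see the Oracle TimesTen In-Memory Database Reference for the exact result columns:

```sql
-- Returns log file numbers and offsets for the most recently written log
-- record, the last record forced to the file system, and the replication
-- hold position.
CALL ttBookmark;
```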

If a subscriber goes down and then comes back up before the threshold is reached, then replication automatically "catches up" as the committed transactions in the transaction log files following the bookmark are automatically transmitted. However, if the threshold is exceeded, the master sets the subscriber to the failed state. A failed subscriber must use ttRepAdmin -duplicate to copy the master database and start over, as described in Managing Database Failover and Recovery.
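Recovering a failed subscriber with ttRepAdmin -duplicate might look like the following, run on the subscriber host. The database names, host name, and credentials are illustrative:

```shell
ttRepAdmin -duplicate -from masterds -host host1 \
           -uid scott -pwd tiger \
           subscriberds
```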

See Connection Attributes in the Oracle TimesTen In-Memory Database Reference.