7 Transaction Management

TimesTen supports transactions that provide atomic, consistent, isolated and durable (ACID) access to data. The following sections describe how you can configure transaction features.

Transaction overview

All operations on a TimesTen database, even those that do not modify or access application data, are executed within a transaction. If an operation is executed when there is no outstanding transaction, one is started automatically on behalf of the application. Transactions are completed by an explicit or implicit commit or rollback. When a transaction is completed, resources (such as locks and result sets) that were acquired or opened by the transaction are released and freed.

When using SQL statements or supported APIs (such as ODBC and JDBC) to commit or roll back your transaction:

  • When you commit the current transaction, updates made in the transaction are made available to concurrent transactions.

  • When you roll back the current transaction, all updates made in the transaction are undone.

Read-only transactions hold minimal resources, so you do not need to commit every read operation. When executing write operations, commit transactions to release locks. When possible, keep write transactions short in duration. Long-running transactions can reduce concurrency and decrease throughput because locks are held for a longer period of time, which blocks concurrent transactions. Also, long-running transactions can prevent transaction log files from being purged, causing these files to accumulate on the file system.

A connection can have only one outstanding transaction at any time and cannot be explicitly closed if it has an open transaction.
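For example, the following JDBC sketch (assuming a hypothetical DSN named sampledb, a hypothetical employees table, and the TimesTen JDBC driver on the classpath) shows an application taking explicit control of a transaction and completing it with a commit or rollback:

import java.sql.*;

public class TxnExample {
    public static void main(String[] args) throws SQLException {
        Connection conn = DriverManager.getConnection("jdbc:timesten:direct:DSN=sampledb");
        conn.setAutoCommit(false);          // take explicit control of transaction boundaries
        try (Statement stmt = conn.createStatement()) {
            // A transaction is started implicitly by the first operation.
            stmt.executeUpdate("UPDATE employees SET salary = salary * 1.1 WHERE id = 100");
            conn.commit();                  // make the updates visible to concurrent transactions
        } catch (SQLException e) {
            conn.rollback();                // undo all updates made in the transaction
            throw e;
        } finally {
            conn.close();                   // close only after the transaction is completed
        }
    }
}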

Transaction implicit commit behavior

The following sections describe how to configure whether the application uses implicit commit behavior or requires explicit commits for DML and DDL statements:

Transaction autocommit behavior

Autocommit configures whether TimesTen issues an implicit commit after DML or DDL statements. Some database APIs (such as ODBC and JDBC) support autocommit, which is enabled by default when using those APIs. Other APIs (such as OCI) do not provide an autocommit feature.

When autocommit is on, the following behavior occurs:

  • An implicit commit is issued immediately after a statement executes successfully.

  • An implicit rollback is issued immediately after a statement execution fails, such as a primary key violation.

  • If the statement generates a result set, the automatic commit is not issued until that result set and any other open result set in the transaction have been explicitly closed. Any statements executed while a result set is open are not committed until all result sets have been closed.

    Fetching all rows of a result set does not automatically close the result set. After the result set has been processed, the result set must be explicitly closed if using the read committed isolation level or the transaction must be explicitly committed or rolled back if using serializable isolation level.

    Note:

    Even with durable commits and autocommit enabled, you could lose work if there is a failure or the application exits without closing result sets.
  • If you are using ODBC or JDBC batch operations to INSERT, UPDATE or DELETE several rows in one call when autocommit is on, a commit occurs after the entire batch operation has completed. If an error occurs due to a problem on a particular row, only the successfully modified rows that precede the problem row are committed in this transaction. The pirow parameter to the ODBC SQLParamOptions function contains the number of the row in the batch that had a problem.

Commits can be costly for performance and intrusive if they are implicitly executed after every statement. TimesTen recommends that you disable autocommit so that all commits are intentional. Disabling autocommit gives you control over transactional boundaries, enables multiple statements to be executed within a single transaction, and improves performance, since there is no implicit commit after every statement.

If autocommit is disabled, transactions must be explicitly completed with a commit or rollback after any of the following:

  • Completing all the work that was to be done in the transaction.

  • Issuing a transaction-consistent (blocking) checkpoint request.

  • Updating column and table statistics to be used by the query optimizer.

  • Calling a TimesTen built-in procedure that does not generate a result set, such as the ttLockWait procedure, so that the new setting specified in the procedure takes effect.

You must establish a connection to a database before changing the autocommit setting. To disable autocommit, perform one of the following:

  • In ODBC-based applications, call the SQLSetConnectOption function with the SQL_AUTOCOMMIT option set to SQL_AUTOCOMMIT_OFF.

  • In JDBC applications, call the Connection.setAutoCommit(false) method.

  • When running ttIsql, issue the autocommit 0 command.
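For example, the following JDBC sketch (hypothetical DSN and orders table) disables autocommit and issues one intentional commit for an entire batch:

import java.sql.*;

public class BatchExample {
    public static void main(String[] args) throws SQLException {
        Connection conn = DriverManager.getConnection("jdbc:timesten:direct:DSN=sampledb");
        conn.setAutoCommit(false);   // equivalent to the ttIsql "autocommit 0" command
        try (PreparedStatement ps =
                conn.prepareStatement("INSERT INTO orders VALUES (?, ?)")) {
            for (int i = 1; i <= 3; i++) {
                ps.setInt(1, i);
                ps.setString(2, "item" + i);
                ps.addBatch();
            }
            ps.executeBatch();       // with autocommit off, nothing is committed yet
            conn.commit();           // one intentional commit for the entire batch
        } catch (SQLException e) {
            conn.rollback();
            throw e;
        } finally {
            conn.close();
        }
    }
}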

TimesTen DDL commit behavior

The TimesTen database issues an implicit COMMIT before and after any DDL statement. A durable commit is performed after the execution of each DDL statement. This behavior is consistent with that of the Oracle database.

DDL statements include the following:

  • CREATE, ALTER and DROP statements for any database object, including tables, views, users, procedures and indexes.

  • TRUNCATE

  • GRANT and REVOKE

There are certain things to take into consideration:

  • DDL changes cannot be rolled back.

  • DDL statements delete records from global temporary tables unless the tables were created with the ON COMMIT PRESERVE ROWS clause.

  • Tables created with the CREATE TABLE . . . AS SELECT statement are visible immediately.

  • TRUNCATE statements are committed automatically. However, parent and child tables must be truncated in separate transactions, with the child table truncated first. You cannot truncate a parent table unless the child table is empty.

Ensuring ACID semantics

As a relational database, TimesTen is ACID compliant:

  • Atomic: All TimesTen transactions are atomic: Either all database operations in a single transaction occur or none of them occur.

  • Consistent: Each transaction brings the database from one consistent state to another.

  • Isolated: Transactions are isolated from each other. TimesTen has two isolation levels: read committed and serializable, which together with row-level locking provide multi-user concurrency control.

  • Durable: Once a transaction has been committed, it remains committed.

The following sections detail how TimesTen ensures ACID semantics for transactions:

Transaction atomicity, consistency, and isolation

Locking and transaction logs are used to ensure ACID semantics as a transaction modifies data in a database as follows:

  • Locking: TimesTen acquires locks on data items that the transaction writes and, depending on the transaction isolation level, data items that the transaction reads. See "Concurrency control through isolation and locking" for details.

  • Transaction logging: All TimesTen transactions are atomic. Either all or none of the effects of the transaction are applied to the database. Modifications to the database are recorded in a transaction log. Atomicity is implemented by using the transaction log to undo the effects of a transaction if it is rolled back. Rollback can be caused explicitly by the application or during database recovery because the transaction was not committed at the time of failure. See "Transaction logging" for details.

The following list shows how TimesTen uses locks and transaction logs in each situation:

  • Transaction is terminated successfully (committed):

    • The transaction log is written to the file system (depending on the durability connection attribute). See "Transaction consistency and durability" for more information.

    • Locks that were acquired on behalf of the transaction are released, and the corresponding data becomes available to other transactions to read and modify.

    • All open result sets in the transaction are automatically closed.

  • Transaction is rolled back:

    • The transaction log is used to undo the effects of the transaction and to restore any modified data items to the state they were in before the transaction began.

    • Locks that were acquired on behalf of the transaction are released.

    • All open result sets in the transaction are automatically closed.

  • System fails (data not committed):

    • On first connect, TimesTen automatically performs database recovery by reading the latest checkpoint image and applying the transaction log to restore the database to its most recent transactionally consistent state. See "Checkpoint operations".

  • Application fails:

    • All outstanding transactions are rolled back.

Transaction consistency and durability

TimesTen provides consistency and durability with a combination of checkpointing and transaction logging.

  • A checkpoint operation writes the current in-memory database image to a checkpoint file on the file system.

    • For TimesTen Classic, a successful checkpoint operation makes all transactions that have been committed at the time of the checkpoint operation consistent and durable.

    • In TimesTen Scaleout, the data in your database is distributed across elements. Each element keeps its own checkpoint and transaction log files. As a result, the data stored in each element is independently durable.

      Each data instance in a grid manages one element of a database. In the event of a failure, a data instance can automatically recover the data stored in its element from its checkpoint and transaction log files while the remaining data instances continue to service applications. If the K-safety value is 2 or greater, then a failed element can be recovered from another element in its replica set.

  • All transactions are logged to an in-memory transaction log buffer, which is written to the file system, whether transactions are durable or nondurable. See "Durability options" for more details.

Note:

Checkpointing and logging are further described in "Checkpoint operations" and "Transaction logging".

Concurrency control through isolation and locking

TimesTen transactions support ANSI Serializable and ANSI Read Committed levels of isolation. ANSI Serializable isolation is the most stringent transaction isolation level. ANSI Read Committed allows greater concurrency. Read Committed is the default and is an appropriate isolation level for most applications.

The following sections describe transaction isolation and locking levels:

Transaction isolation levels

Transaction isolation enables each active transaction to operate as if there were no other transactions active in the system. Isolation levels determine if row-level locks are acquired when performing read operations. When a statement is issued to update a table, locks are acquired to prevent other transactions from modifying the same data until the updating transaction commits or rolls back and then releases its locks.

The Isolation connection attribute sets the isolation level for a connection. The isolation level cannot be changed in the middle of a transaction.

TimesTen supports the following two transaction isolation levels:

  • ANSI Read Committed isolation: The read committed isolation level is the recommended mode of operation for most applications, and is the default mode. It enables transactions that are reading data to execute concurrently with a transaction that is updating the same data. TimesTen makes multiple versions of data items to allow non-serializable read and write operations to proceed in parallel.

    Read operations do not block write operations and write operations do not block read operations, even when they read and write the same data. Read operations do not acquire locks on scanned rows. Write operations acquire locks that are held until the transaction commits or rolls back. Readers share a committed copy of the data, whereas a writer has its own uncommitted version. Therefore, when a transaction reads an item that is being updated by another in-progress transaction, it sees the committed version of that item. It cannot see an uncommitted version of an in-progress transaction.

    Read committed isolation level provides better concurrency at the expense of decreased isolation because of the possibility of non-repeatable reads or phantom rows within a transaction. If an application executes the same query multiple times within the same transaction, the commit of an update from another transaction may cause subsequent executions of that query to return different results. A phantom row can appear (in modified form) in two different read operations within the same transaction, as the result of an early release of read locks during the transaction.

    Read committed is the default isolation level, so you need to set it explicitly only if the level has previously been changed. To set the read committed isolation level, do one of the following:

    • Connect with the Isolation connection attribute set to 1. You can also modify this value with the ALTER SESSION SQL statement.

      ALTER SESSION SET ISOLATION_LEVEL=READ COMMITTED;
      

      See "Isolation" in the Oracle TimesTen In-Memory Database Reference or "ALTER SESSION" in Oracle TimesTen In-Memory Database SQL Reference for more information.

    • When using ttIsql, execute isolation 1 or isolation READ_COMMITTED.

    • ODBC applications can execute the SQLSetConnectOption ODBC function with the SQL_TXN_ISOLATION flag set to SQL_TXN_READ_COMMITTED. See "Option support for ODBC 2.5 SQLSetConnectOption and SQLGetConnectOption" in the Oracle TimesTen In-Memory Database C Developer's Guide for more information.

    • JDBC applications can call the setTransactionIsolation method of the Connection object with TRANSACTION_READ_COMMITTED, as shown in the sketch following this list.

  • ANSI Serializable isolation: All locks acquired within a transaction by a read or write operation are held until the transaction commits or rolls back. Read operations block write operations, and write operations block read operations. But read operations do not block other read operations.

    • If an initial transaction reads a row, a second transaction can also read the row.

    • A row that has been read by an initial transaction cannot be updated or deleted by a second transaction until the initial transaction commits or rolls back. However, the second transaction can insert rows that are not in the range locked by the first transaction. For example, if the first transaction is reading rows 1 through 10, then the second transaction cannot update rows 1 through 10. However, the second transaction can insert rows greater than 10.

    • A row that has been inserted, updated or deleted by an initial transaction cannot be accessed in any way by a second transaction until the initial transaction either commits or rolls back.

    Serializable isolation level provides for repeatable reads and increased isolation at the expense of decreased concurrency. A transaction that executes the same query multiple times within the same transaction is guaranteed to see the same result set each time. Other transactions cannot update or delete any of the returned rows, nor can they insert a new row that satisfies the query predicate.

    To set the isolation level to serializable isolation, do one of the following:

    • Connect with the Isolation connection attribute set to 0. You can also modify this value with the ALTER SESSION SQL statement.

      ALTER SESSION SET ISOLATION_LEVEL=SERIALIZABLE;
      

      See "Isolation" in the Oracle TimesTen In-Memory Database Reference or "ALTER SESSION" in Oracle TimesTen In-Memory Database SQL Reference for more information.

    • When using ttIsql, execute isolation 0 or isolation SERIALIZABLE.

    • ODBC applications can execute the SQLSetConnectOption ODBC function with the SQL_TXN_ISOLATION flag set to SQL_TXN_SERIALIZABLE.

    • JDBC applications can call the setTransactionIsolation method of the Connection object with TRANSACTION_SERIALIZABLE.

    To ensure that materialized views are always in a consistent state, all view maintenance operations are performed under serializable isolation, even when the transaction is in read committed isolation. This means that the transaction obtains read locks for any data items read during view maintenance. However, the transaction releases the read locks at the end of the INSERT, UPDATE or CREATE VIEW statement that triggered the view maintenance, instead of holding them until the end of the transaction.
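The following JDBC sketch (hypothetical DSN; the isolation constants are standard JDBC) shows both settings. Because the isolation level cannot be changed in the middle of a transaction, the level is changed only after the previous transaction has been completed:

import java.sql.*;

public class IsolationExample {
    public static void main(String[] args) throws SQLException {
        Connection conn = DriverManager.getConnection("jdbc:timesten:direct:DSN=sampledb");
        conn.setAutoCommit(false);

        // Read committed (the TimesTen default): readers and writers do not block each other.
        conn.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
        // ... read-mostly work here ...
        conn.commit();

        // Serializable: read and write locks are held until commit or rollback,
        // providing repeatable reads at the expense of concurrency.
        conn.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
        // ... work that requires repeatable reads here ...
        conn.commit();

        conn.close();
    }
}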

Note:

The ttXactAdmin utility generates a report showing lock holds and lock waits for all outstanding transactions. It can be used to troubleshoot lock contention problems where operations are being blocked, or are encountering lock timeout or deadlock errors. It can also be used to roll back a specified transaction.

Locking granularities

TimesTen supports row-level locks, table-level locks and database-level locks.

Different connections can coexist with different levels of locking, but the presence of even one connection using database-level locking leads to reduced concurrency. For performance information, see "Choose the best method of locking".

Note:

When multiple locks have been obtained within the same transaction, the locks are released sequentially when the transaction ends.
  • Row-level locking: Transactions usually obtain locks on the individual rows that they access. Row-level locking is the recommended mode of operation because it provides the finest granularity of concurrency control. It allows concurrent transactions to update different rows of the same table. However, row-level locking requires space in the database's temporary memory region to store lock information.

    Row-level locking is the default. However, if it has been modified to another type of locking and you want to re-enable row-level locking, do one of the following:

    • Set the LockLevel connection attribute to 0.

    • In TimesTen Classic, call the ttLockLevel built-in procedure with the lockLevel parameter set to Row. This procedure changes the lock level between row-level and database-level locking on the next transaction and for all subsequent transactions for this connection.

    • Execute the ttOptSetFlag built-in procedure to set the RowLock parameter to 1, which enables the optimizer to consider using row locks.

    Note:

    See "LockLevel", "ttLockLevel", and "ttOptSetFlag" in the Oracle TimesTen In-Memory Database Reference for more information.
  • Table-level locking: Table-level locking is recommended when concurrent transactions access different tables or a transaction accesses most of the rows of a particular table. Table-level locking provides better concurrency than database-level locking. Row-level locking provides better concurrency than table-level locking. Table-level locking requires only a small amount of space in the temporary memory region to store lock information.

    Table-level locking provides the best performance for the following:

    • Queries that access a significant number of rows of a table

    • When there are very few concurrent transactions that access a table

    • When temporary space is inadequate to contain all row locks that an operation, such as a large insert or a large delete, might acquire

    To enable table-level locking, execute the ttOptSetFlag built-in procedure to set the TblLock parameter to 1, which enables the optimizer to consider using table locks. In addition, set RowLock to 0 so that the optimizer does not consider row-level locks. (See the sketch following this list.)

    If both table-level and row-level locking are disabled, TimesTen defaults to row-level locking. If both table-level and row-level locking are enabled, TimesTen chooses the locking scheme that is more likely to have better performance. Even though table-level locking provides better performance than row-level locking because of reduced locking overhead, the optimizer often chooses row-level locking for better concurrency. For more information, see "ttOptSetFlag" in the Oracle TimesTen In-Memory Database Reference.

  • Database-level locking: Database-level locking serializes all transactions, which effectively allows no concurrency on the database. When a transaction is started, it acquires an exclusive lock on the database, which ensures that there is no more than one active transaction in the database at any given time. It releases the lock when the transaction is completed.

    Database-level locking can provide better performance than row-level locking, due to reduced locking overhead. In addition, it provides higher throughput than row-level locking when running a single stream of transactions such as a bulk load operation. However, its applicability is limited to applications that never execute multiple concurrent transactions. With database-level locking, every transaction effectively runs in ANSI Serializable isolation, since concurrent transactions are disallowed.

    To enable database-level locking, do one of the following:

    • Set the LockLevel connection attribute to 1.

    • In TimesTen Classic, call the ttLockLevel built-in procedure with the lockLevel parameter set to DS. This procedure changes the lock level between row-level and database-level locking on the next transaction and for all subsequent transactions for this connection.
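For example, the following JDBC sketch (hypothetical DSN; ttOptSetFlag and ttLockLevel are the built-in procedures named above) prepares a connection for a bulk operation with table-level locks and then switches the connection to database-level locking:

import java.sql.*;

public class LockingExample {
    public static void main(String[] args) throws SQLException {
        Connection conn = DriverManager.getConnection("jdbc:timesten:direct:DSN=sampledb");
        conn.setAutoCommit(false);

        // Let the optimizer consider table locks and not row locks.
        try (CallableStatement cs = conn.prepareCall("{ call ttOptSetFlag('TblLock', 1) }")) {
            cs.execute();
        }
        try (CallableStatement cs = conn.prepareCall("{ call ttOptSetFlag('RowLock', 0) }")) {
            cs.execute();
        }
        conn.commit();   // commit so the built-in procedure settings take effect

        // ... perform the bulk insert or delete here ...

        // Alternatively, switch this connection to database-level locking,
        // effective from the next transaction (TimesTen Classic).
        try (CallableStatement cs = conn.prepareCall("{ call ttLockLevel('DS') }")) {
            cs.execute();
        }
        conn.commit();
        conn.close();
    }
}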

Setting wait time for acquiring a lock

Set the LockWait connection attribute to the maximum amount of time that a statement waits to acquire a lock before it times out. The default is 10 seconds. If a statement within a transaction waits for a lock and the lock wait interval has elapsed, an error is returned. After receiving the error, the application can reissue the statement.

Lock wait intervals are imprecise because lock timeouts are detected by the database's managing subdaemon process, which runs on its own schedule. This imprecision does not apply to zero-second timeouts, which are always reported immediately. The lock wait interval does not apply to blocking checkpoints.

The database's managing subdaemon process checks every two seconds to see if there is a deadlock in the database among concurrent transactions. If a deadlock occurs, an error is returned to one of the transactions involved in the deadlock cycle. The transaction that receives the error must roll back to allow the other transactions involved in the deadlock to proceed.

When running a workload of high lock-contention potential, consider setting the LockWait connection attribute to a smaller value for faster return of control to the application, or setting LockWait to a larger value to increase the successful lock grant ratio (with a risk of decreased throughput).

For more information, see "LockWait" in Oracle TimesTen In-Memory Database Reference.
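The following JDBC sketch (hypothetical DSN; it assumes connection attributes can be appended to the connection URL) sets a lock wait interval at connect time and shortens it later with the ttLockWait built-in procedure:

import java.sql.*;

public class LockWaitExample {
    public static void main(String[] args) throws SQLException {
        // Start with a 2-second lock wait interval.
        Connection conn = DriverManager.getConnection(
            "jdbc:timesten:direct:DSN=sampledb;LockWait=2");
        conn.setAutoCommit(false);

        // Shorten the interval for a high-contention phase of the workload.
        try (CallableStatement cs = conn.prepareCall("{ call ttLockWait(1) }")) {
            cs.execute();
        }
        conn.commit();   // commit so the new setting takes effect

        conn.close();
    }
}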

Checkpoint operations

A checkpoint operation saves the in-memory image of a database to files, known as checkpoint files. By default, TimesTen performs background checkpoints at regular intervals. Checkpointing may generate a large amount of I/O activity and have a long execution time depending on the size of the database and the number of database changes since the most recent checkpoint.

Note:

Applications can programmatically initiate checkpoint operations. See "Setting and managing checkpoints" for more details.

The following sections describe checkpoint operations and how you can manage them:

Purpose of checkpoints

A checkpoint operation has two primary purposes:

  • Decreases the amount of time required for database recovery, because it provides a more up-to-date database image on which recovery can begin.

  • Makes a portion of the transaction log unneeded for any future database recovery operation, typically allowing one or more transaction log files to be deleted.

Both of these functions are very important to TimesTen applications. The reduction in recovery time is important, as the amount of a transaction log needed to recover a database has a direct impact on the amount of downtime seen by an application after a system failure. The removal of unneeded transaction log files is important because it frees file system space that can be used for new transaction log files. If these files were never purged, they would eventually consume all available space in the transaction log files directory, causing database operations to fail due to log space exhaustion.

Usage of checkpoint files

TimesTen creates two checkpoint files for each database, named dsname.ds0 and dsname.ds1, where dsname is the database path name and file name prefix specified in the database DSN. During a checkpoint operation, TimesTen determines which checkpoint file contains the most recent consistent image and then writes the next in-memory image of the database to the other file. Thus, the two files contain the two most recent database images.

  • In TimesTen Classic, the database maintains one set of checkpoint and transaction log files.

  • In TimesTen Scaleout, each element maintains its own independent set of checkpoint and transaction log files. See "Understanding Distributed Transactions in TimesTen Scaleout" in the Oracle TimesTen In-Memory Database Scaleout User's Guide for more information.

TimesTen uses the most recent checkpoint file and transaction log files to recover the database to its most recent transaction-consistent state after a database shutdown or system failure. (The most recent transaction log files are those written since the checkpoint was done.) If any errors occur during this process, or if the more recent checkpoint image is incomplete, then recovery restarts using the other checkpoint file. For more information on transaction logging, see "Transaction logging".

Types of checkpoints

TimesTen uses the most recent checkpoint file to recover the database to its transaction-consistent state at the time of the last successfully completed checkpoint operation. It uses the transaction log files to recover the database to its most recent transaction-consistent state after a database shutdown or system failure.

TimesTen supports two types of database checkpoint operations:

Fuzzy or non-blocking checkpoints

Fuzzy checkpoints, or non-blocking checkpoints, allow transactions to execute against the database while the checkpoint is in progress. Fuzzy checkpoints do not obtain locks, and therefore have a minimal impact on other database activity. Because transactions may modify the database while a checkpoint operation is in progress, the resulting checkpoint file may contain both committed and uncommitted transactions. Furthermore, different portions of the checkpoint image may reflect different points in time. For example, one portion may have been written before a given transaction committed, while another portion was written afterward. The term "fuzzy checkpoint" derives its name from this fuzzy state of the database image.

To recover the database when the checkpoint files were generated from fuzzy checkpoint operations, TimesTen requires the most recent checkpoint file and the transaction log to bring the database into its most recent transaction-consistent state.

Blocking checkpoints

Blocking checkpoints obtain an exclusive lock on the database for a portion of the checkpoint operation, which blocks all access to the database during that time. The resulting checkpoint image contains all transactions that committed before the checkpoint operation acquired the exclusive lock on the database. Because no transactions can be active while the database lock is held, no modifications made by in-progress transactions are included in the checkpoint image.

Requesting a blocking checkpoint in TimesTen Classic

In TimesTen Classic, an application uses the ttCkptBlocking built-in procedure to request a blocking checkpoint. The actual checkpoint is delayed until the requesting transaction commits or rolls back. If a blocking checkpoint is requested for a database for which both checkpoint files are already up to date, the checkpoint request is ignored.

Setting and managing checkpoints

The following sections describe the default checkpoint behavior and how to manage checkpointing:

Programmatically performing a checkpoint in TimesTen Classic

By default, TimesTen performs periodic fuzzy checkpoints in the background, so applications rarely need to issue manual checkpoints. However, if an application needs to issue a manual checkpoint against a TimesTen Classic database, it can call the ttCkpt built-in procedure to request a fuzzy checkpoint or the ttCkptBlocking built-in procedure to request a blocking checkpoint. Manual checkpoints are generally not recommended. See "ttCkpt" and "ttCkptBlocking" in the Oracle TimesTen In-Memory Database Reference for more information.
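For the rare cases where a manual checkpoint is useful, the following JDBC sketch (hypothetical DSN) requests a fuzzy checkpoint:

import java.sql.*;

public class CheckpointExample {
    public static void main(String[] args) throws SQLException {
        Connection conn = DriverManager.getConnection("jdbc:timesten:direct:DSN=sampledb");
        conn.setAutoCommit(false);
        // Use ttCkptBlocking instead to request a blocking checkpoint;
        // that checkpoint is delayed until this transaction commits or rolls back.
        try (CallableStatement cs = conn.prepareCall("{ call ttCkpt }")) {
            cs.execute();
        }
        conn.commit();
        conn.close();
    }
}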

Configuring or turning off background checkpointing

Using connection attributes or built-in procedures, you can configure TimesTen to checkpoint either when the transaction log files contain a certain amount of data or at a specific frequency.

To configure background checkpointing, set the CkptFrequency and CkptLogVolume connection attributes as follows:

  • The CkptFrequency connection attribute controls how often, in seconds, TimesTen performs a background checkpoint. The default is 600 seconds. Set CkptFrequency to 0 if you want to control background checkpointing with the CkptLogVolume connection attribute.

  • The CkptLogVolume connection attribute controls how much data, in megabytes, collects in the transaction log files between background checkpoints. By increasing this amount, you can decrease the frequency of checkpoints. The default is 0.

In most cases, it is recommended to use CkptLogVolume rather than CkptFrequency, since CkptFrequency does not take the rate of database transactions into account. If both the CkptFrequency and CkptLogVolume attributes are set to a value greater than 0, a checkpoint is performed when either of the two conditions becomes true.

Alternatively, you can configure background checkpointing or turn it off by calling the ttCkptConfig built-in procedure. The values set by ttCkptConfig take precedence over those set with the connection attributes.
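For example, the following JDBC sketch (hypothetical DSN; the ttCkptConfig parameter order of ckptFrequency, ckptLogVolume, and ckptRate is assumed here and should be verified against "ttCkptConfig" in the Oracle TimesTen In-Memory Database Reference) disables the checkpoint timer and triggers checkpoints by log volume instead:

import java.sql.*;

public class CkptConfigExample {
    public static void main(String[] args) throws SQLException {
        Connection conn = DriverManager.getConnection("jdbc:timesten:direct:DSN=sampledb");
        conn.setAutoCommit(false);
        // Checkpoint after every 128 MB of transaction log data; a frequency of 0
        // disables the timer and a rate of 0 leaves the checkpoint rate unlimited.
        try (CallableStatement cs =
                conn.prepareCall("{ call ttCkptConfig(0, 128, 0) }")) {
            cs.execute();
        }
        conn.commit();
        conn.close();
    }
}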

Note:

For information on default values and usage, see "CkptFrequency", "CkptLogVolume", and "ttCkptConfig" in the Oracle TimesTen In-Memory Database Reference.

Displaying checkpoint history and status

Select from the SYS.GV$CKPT_HISTORY or SYS.V$CKPT_HISTORY system views to display information on the most recent eight checkpoints and checkpoint attempts.

Note:

You can also use the ttCkptHistory built-in procedure to display this information.

For example, you can confirm the following information:

  • The amount of time a checkpoint takes (subtract the start time of the checkpoint from its completion time).

  • The type of the checkpoint: fuzzy, blocking, static or none. See "Types of checkpoints" for details.

  • The status of the checkpoint: in-progress, completed, or failed. When examining checkpoint history, verify whether any recent checkpoints failed (indicated by a status of FAILED). If a checkpoint has failed, the reason is displayed in the error or additional details columns.

  • The initiator of the checkpoint: a user-level application (including TimesTen utilities), a background checkpoint request, or the managing subdaemon of the database.

  • The reason why the checkpoint occurred. The most common reasons are after database creation, after recovery, a final checkpoint after shutdown, after the user runs a built-in procedure, or after a flush operation.

  • The reason for a failure if a checkpoint fails.

  • The amount of data (total number of bytes) written by a typical checkpoint.

  • The checkpoint rate. See "Setting the checkpoint rate" for details on how to calculate the checkpoint rate from the data provided.

  • The percentage of an in-progress checkpoint that has been completed.

  • The number of transaction log files purged by this checkpoint. A zero in this column does not mean that no log records were purged; log records can be purged continually within a transaction log file. The column displays the number of whole transaction log files purged, which shows when file system space is freed.

    TimesTen may not be able to purge some transaction log files if there is a hold set on the transaction log. For example, this value may be zero when the checkpoint cannot purge transaction log files due to a long-running transaction or a replication bookmark located far back in the transaction log files.

  • The bookmark name (the reason for the transaction log hold) that is preventing the removal of one or more transaction logs. This name can help identify why transaction log files are remaining on the file system longer than expected. See "Log holds by TimesTen components or operations" for details on the different bookmark (transaction log hold) names.

    If this information does not provide enough context on why transaction logs are not purged, you can also run the ttXactAdmin utility for more details on the transaction logs.

See "SYS.GV$CKPT_HISTORY", "SYS.V$CKPT_HISTORY", "SYS.GV$LOG_HOLDS" or "SYS.V$LOG_HOLDS" in the Oracle TimesTen In-Memory Database System Tables and Views Reference for details on these views. For more information on the built-in procedures, see "ttCkptHistory", "ttLogHolds" or "ttXactAdmin" in the Oracle TimesTen In-Memory Database Reference.

Example 7-1 Checkpoint in progress

This example shows a checkpoint in progress:

Command> SELECT * FROM SYS.V$CKPT_HISTORY;

< 2019-02-05 16:56:34.169520, <NULL>, 
Fuzzy           , In Progress     , User            , 
BuiltIn         , <NULL>, 
0, <NULL>, <NULL>, <NULL>, <NULL>, <NULL>, <NULL>, 
<NULL>, <NULL>, <NULL>, 13, 6, 0, <NULL>, <NULL> >
 
< 2019-02-05 16:55:47.703199, 2019-02-05 16:55:48.188764, 
Fuzzy           , Completed       , Checkpointer    , 
Background      , <NULL>, 
1, 0, 8964304, 294, 33554432, 291, 5677288, 27, 1019512, 
1065408, <NULL>, 5, 0, Checkpoint, <NULL> >
 
< 2019-02-05 16:54:47.106110, 2019-02-05 16:54:47.723379, 
Static          , Completed       , Subdaemon       , 
FinalCkpt       , <NULL>, 
0, 0, 8960328, 294, 33554432, 291, 5677288, 256, 33157172, 
5321548, <NULL>, 4, 0, Checkpoint, <NULL> >
 
< 2019-02-05 16:54:41.633792, 2019-02-05 16:54:42.568469, 
Blocking        , Completed       , User            , 
BuiltIn         , <NULL>, 
1, 0, 8958160, 294, 33554432, 291, 5677288, 31, 1162112, 
6604976, <NULL>, 3, 0, Checkpoint, <NULL> >
 
< 2019-02-05 16:54:37.438827, 2019-02-05 16:54:37.977301, 
Static          , Completed       , User            ,
DbCreate        , <NULL>, 
0, 0, 1611984, 93, 33554432, 92, 1853848, 93, 33554432, 
1854052, <NULL>, 2, 0, Checkpoint, <NULL> >
 
< 2019-02-05 16:54:36.861728, 2019-02-05 16:54:37.438376, 
Static          , Completed       , User            , 
DbCreate        , <NULL>, 
1, 0, 1609936, 93, 33554432, 92, 1853848, 93, 33554432, 
1854052, <NULL>, 1, 0, Checkpoint, <NULL> >

This example shows that an error occurred during the most recent checkpoint attempt, which was a user-initiated checkpoint:

Command> SELECT * FROM SYS.V$CKPT_HISTORY;

< 2019-02-05 16:57:14.476860, 2019-02-05 16:57:14.477957, 
Fuzzy           , Failed , User            , 
BuiltIn         , 847, 
1, <NULL>, <NULL>, 0, 0, 0, 0, 0, 0, 0, <NULL>, 7, 0, <NULL>, 
Errors   1: TT0847: 16:57:14 (2019-02-05) >
 
< 2019-02-05 16:56:34.169520, 2019-02-05 16:56:59.715451, 
Fuzzy           , Completed       , User            , 
BuiltIn         , <NULL>, 
0, 0, 8966472, 294, 33554432, 291, 5677288, 5, 522000, 
532928, <NULL>, 6, 0, Checkpoint, <NULL> >
 
< 2019-02-05 16:55:47.703199, 2019-02-05 16:55:48.188764, 
Fuzzy           , Completed       , Checkpointer    , 
Background      , <NULL>, 
1, 0, 8964304, 294, 33554432, 291, 5677288, 27, 1019512, 
1065408, <NULL>, 5, 0, Checkpoint, <NULL> >
 
< 2019-02-05 16:54:47.106110, 2019-02-05 16:54:47.723379, 
Static          , Completed       , Subdaemon       , 
FinalCkpt       , <NULL>, 
0, 0, 8960328, 294, 33554432, 291, 5677288, 256, 33157172, 
5321548, <NULL>, 4, 0, Checkpoint, <NULL> >
 
< 2019-02-05 16:54:41.633792, 2019-02-05 16:54:42.568469, 
Blocking        , Completed       , User            , 
BuiltIn         , <NULL>, 
1, 0, 8958160, 294, 33554432, 291, 5677288, 31, 1162112, 
6604976, <NULL>, 3, 0, Checkpoint, <NULL> >
 
< 2019-02-05 16:54:37.438827, 2019-02-05 16:54:37.977301, 
Static          , Completed       , User            ,
DbCreate        , <NULL>, 
0, 0, 1611984, 93, 33554432, 92, 1853848, 93, 33554432, 
1854052, <NULL>, 2, 0, Checkpoint, <NULL> >
 
< 2019-02-05 16:54:36.861728, 2019-02-05 16:54:37.438376, 
Static          , Completed       , User            , 
DbCreate        , <NULL>, 
1, 0, 1609936, 93, 33554432, 92, 1853848, 93, 33554432, 
1854052, <NULL>, 1, 0, Checkpoint, <NULL> >

Example 7-2 Understanding checkpoint history output

This example uses the ttCkptHistory built-in procedure to select specific columns from the checkpoint history:

Command> SELECT type, reason, bookmarkname, logsPurged FROM ttCkptHistory;

< Fuzzy           , BuiltIn         , Oldest Transaction Undo, 0 >
< Static          , FinalCkpt       , Checkpoint, 6 >
< Blocking        , BuiltIn         , Checkpoint, 0 >
< Blocking        , BuiltIn         , Checkpoint, 0 >
< Blocking        , BuiltIn         , Checkpoint, 0 >
< Blocking        , BuiltIn         , Backup, 5 >
< Blocking        , BuiltIn         , Backup, 0 >
< Blocking        , BuiltIn         , Backup, 0 >

The output from this example shows that the oldest checkpoints (the last rows displayed) did not purge any transaction log files (indicated by a 0 in the logsPurged column) because there was a log hold set by an incremental backup. The backup caused a transaction log file accumulation. However, eventually, the log hold was removed and five transaction log files could be purged.

A checkpoint operation is started on the fourth row from the bottom. The checkpoint places a log hold that prevents transaction log files from being purged. Six transaction log files were purged by the final checkpoint (FinalCkpt) operation.

The most recent checkpoint shows that it is a fuzzy checkpoint with the log hold of Oldest Transaction Undo. This hold marks a point for the transaction that the checkpoint log purge operation cannot pass. If you see this message over several consecutive rows, it may indicate a long-running transaction that is causing transaction log files to accumulate.

Setting the checkpoint rate

By default, there is no limit to the rate at which checkpoint data is written to the file system. You can use the CkptRate connection attribute or the ttCkptConfig built-in procedure to set the maximum rate at which background checkpoint data is written to the file system. Checkpoints taken during recovery and final checkpoints do not honor this rate; in those situations, the rate is unlimited.

Note:

See "CkptRate" and "ttCkptConfig" in the Oracle TimesTen In-Memory Database Reference for details on using these features.

Setting a rate too low can cause checkpoints to take an excessive amount of time and cause the following problems:

  • Delay the purging of unneeded transaction log files.

  • Delay the start of backup operations.

Setting a rate too high can cause checkpoints to consume too much of the file system buffer cache bandwidth, which could result in the following:

  • Reduce overall database transaction throughput, as transaction log data cannot be written to the file system as quickly as it normally would be.

  • Cause contention with other file system I/O operations.

When choosing a rate, you should take into consideration the amount of data written by a typical checkpoint and the amount of time checkpoints usually take. Both of these pieces of information are available through the SYS.GV$CKPT_HISTORY or SYS.V$CKPT_HISTORY system views or the ttCkptHistory built-in procedure.

If a running checkpoint appears to be progressing too slowly, you can evaluate its progress with the Percent_Complete column of the SYS.GV$CKPT_HISTORY or SYS.V$CKPT_HISTORY system views. You can increase the rate by calling the ttCkptConfig built-in procedure. If a call to ttCkptConfig changes the rate, the new rate takes effect immediately, affecting even the running checkpoint.

Calculate the checkpoint rate as follows, using data from the SYS.GV$CKPT_HISTORY or SYS.V$CKPT_HISTORY system views or the ttCkptHistory built-in procedure:

  1. For any given checkpoint, subtract the starttime from the endtime.

  2. Divide the number of bytes written by this elapsed time in seconds to get the number of bytes per second.

  3. Divide this number by 1024*1024 to get the number of megabytes per second.
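As a worked example with hypothetical values: if a checkpoint started at 16:55:47.70, ended at 16:55:48.19, and wrote 10,485,760 bytes, the elapsed time is 0.49 seconds, so the rate is 10,485,760 / 0.49 ≈ 21,399,510 bytes per second, and 21,399,510 / (1024*1024) ≈ 20.4 MB per second.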

When setting the checkpoint rate, you should consider the following:

  • The specified checkpoint rate is only approximate. The actual rate of the checkpoint may be below the specified rate, depending on the hardware, system load and other factors.

  • The above method may underestimate the actual checkpoint rate, because the starttime and endtime interval includes other checkpoint activities in addition to the writing of dirty blocks to the checkpoint file.

  • The Percent_Complete field may show 100 percent before the checkpoint is actually complete. The Percent_Complete field shows only the progress of the writing of dirty blocks and does not include additional bookkeeping at the end of the checkpoint.

  • When adjusting the checkpoint rate, you may also need to adjust the checkpoint frequency, as a slower rate makes checkpoints take longer, which effectively increases the minimum time between checkpoint beginnings.

Setting the number of checkpoint file read threads

By default, TimesTen reads checkpoint files serially with a single thread. Use the CkptReadThreads connection attribute to set the number of threads that TimesTen uses to read the checkpoint files when loading the database into memory.

When using n threads, TimesTen divides the checkpoint file into n portions of equal size. Each thread concurrently reads a portion of the file into memory. Once all threads have successfully read their portions of the checkpoint file, TimesTen checks the database for consistency.
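A minimal JDBC sketch (hypothetical DSN; CkptReadThreads is assumed to be passed as a first-connection attribute on the connection that loads the database into memory):

import java.sql.*;

public class CkptReadThreadsExample {
    public static void main(String[] args) throws SQLException {
        // Four threads read the checkpoint files when this connection
        // causes the database to be loaded into memory.
        Connection conn = DriverManager.getConnection(
            "jdbc:timesten:direct:DSN=sampledb;CkptReadThreads=4");
        conn.close();
    }
}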

Note:

For more information, see "Set CkptReadThreads" in this book, and "CkptReadThreads" in the Oracle TimesTen In-Memory Database Reference.

Transaction logging

TimesTen maintains a transaction log for each database to track all updates made within each transaction, so that those updates can be undone if the transaction is rolled back. After a system failure, TimesTen recovers transactions using the most recent checkpoint file together with the transaction log records written since the time of that checkpoint operation.

A transaction log record is created for each database modification, commit, and rollback. However, transaction log records are not generated for read-only transactions. Log records are first written to the transaction log buffer, which resides in the same shared memory segment as the database. The contents of the log buffer are subsequently flushed to the latest transaction log file.

The transaction log is a logically ordered sequence of transaction log records, but physically consists of a set of one or more transaction log files and the in-memory log buffer within the TimesTen database. This log is shared by all concurrent connections.

The following sections describe how to manage and monitor the transaction log buffers and files:

Managing transaction log buffers and files

The following describes how to configure transaction log buffers and files:

  • Transaction log buffers: There is one transaction log buffer for each database, and the size of the transaction log buffer can be configured using the LogBufMB DSN connection attribute.

    Strands divide the transaction log buffer's available memory into a number of regions, which can be accessed concurrently by different connections. The number of transaction log buffer strands is configured with the LogBufParallelism connection attribute. Each connection can then execute data-independent DML statements in parallel, using those strands as if each connection had its own transaction log buffer. See "LogBufParallelism" in the Oracle TimesTen In-Memory Database Reference for more details.

  • Transaction log files: The transaction log files are created in the location that the LogDir connection attribute specifies.

    Note:

    The LogDir connection attribute is optional; if it is not specified, the transaction log files are created in the same directory as the checkpoint files. However, for best performance, TimesTen recommends that you use the LogDir connection attribute to place the transaction log files on a different physical device from the checkpoint files. If they are separated, I/O operations for checkpoints do not block I/O operations to the transaction log, and vice versa.

    If the database is already loaded into RAM and the transaction log files and checkpoint files for your database are on the same physical device, TimesTen writes a message to the daemon log file.

    See "LogDir" in the Oracle TimesTen In-Memory Database Reference and "Check transaction log file use of file system space" in the Oracle TimesTen In-Memory Database Troubleshooting Guide for more details.

    You can configure the maximum size for the transaction log files with the LogFileSize DSN connection attribute. The default value is 64 MB. See "LogFileSize" in the Oracle TimesTen In-Memory Database Reference for more details.

    The transaction log file names have the form dsname.logn. The dsname is the database path name that is specified by the DataStore DSN connection attribute and is provided within the database's DSN. The suffix n is the transaction log file number, starting at zero.

    When the database is created, TimesTen creates reserve files named dsname.res0, dsname.res1, and dsname.res2. These files contain pre-allocated space that serves as reserved transaction log space. Reserved transaction log space allows for a limited continuation of transaction logging if the file system that holds the transaction log files becomes full. If the file system that contains the transaction logs becomes full, the database does not allow any new transactions to begin. Transactions that are in progress are allowed to continue by using the reserve files until they either commit or roll back. If transactions are still in progress and the reserve files are all consumed, all database operations that generate log records are blocked.
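As an illustration, the following JDBC sketch (hypothetical DSN and paths; LogDir, LogFileSize, and LogBufMB are first-connection attributes normally set in the DSN definition, and passing them on the URL is shown only as an assumption for a direct connection that loads the database) separates the transaction log files from the checkpoint files:

import java.sql.*;

public class LogConfigExample {
    public static void main(String[] args) throws SQLException {
        // Place transaction log files on a device separate from the
        // checkpoint files, with 256 MB log files and a 256 MB log buffer.
        Connection conn = DriverManager.getConnection(
            "jdbc:timesten:direct:DSN=sampledb;LogDir=/disk2/ttlogs;LogFileSize=256;LogBufMB=256");
        conn.close();
    }
}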

Using NFS-mounted systems for checkpoint and transaction log files

To enable TimesTen checkpoint and transaction log files to be located on NFS-mounted file systems (on Linux platforms only), ensure that allow_network_files=1 is set in the timesten.conf file. The allow_network_files=1 configuration is the default setting.

See "Optional attributes" in Chapter 1, "TimesTen Instance Configuration File" in the Oracle TimesTen In-Memory Database Reference for full details.

Monitoring accumulation of transaction log files

It is important to verify at frequent intervals that there are no transaction log holds that could result in an excessive accumulation of transaction log files. If too many transaction log files accumulate and fill up available file system space, new transactions in the TimesTen database cannot begin until the transaction log hold is advanced and transaction log files are purged by the next checkpoint operation.

The following sections describe transaction log operations, log holds, and accumulation of log files:

Log holds by TimesTen components or operations

Several TimesTen components or operations can create transaction log holds. A transaction log hold prevents log files beyond a certain point from being purged until they are no longer needed. In normal circumstances, the log hold position advances regularly and log files are purged appropriately. However, if operations are not functioning properly and the hold position does not advance, log files can accumulate beyond the hold position and can no longer be purged, eventually filling available file system space.

Before purging a transaction log file, TimesTen verifies the following:

  • All transactions writing log records to the transaction log file have been committed or rolled back. These can be either local database transactions or XA transactions.

  • Changes recorded in the transaction log file have been written to both checkpoint files.

In addition, the following components and operations can create transaction log holds:

  • Replication: There is a transaction log hold until the transmitting replication agent confirms that the log files have been fully processed by the receiving host.

    Possible failure modes include the following:

    • The network is down or there is a standby crash, and replication is unable to deliver data to one or more subscribers. If necessary, the application can direct that logs no longer be held, and then duplicate the master database to the standby when normal operations resume. Criteria for when to do this include the amount of time required to duplicate, the amount of available file system space on the master for log files, and the transaction log growth rate.

    • The overall database transaction rate exceeds the ability of replication to keep the active and standby databases synchronized. An application can reduce the application transaction rate or the number of replicated tables.

    For more information, see "Improving Replication Performance" in the Oracle TimesTen In-Memory Database Replication Guide and "Troubleshooting Replication" in the Oracle TimesTen In-Memory Database Troubleshooting Guide.

  • XLA: There is a transaction log hold until the XLA bookmark advances.

    A possible failure mode occurs when the bookmark becomes stuck, which can occur if an XLA application terminates unexpectedly or if it disconnects without first deleting its bookmark or disabling change-tracking. If a bookmark gets too far behind, the application can delete it. If the XLA reader process is still active, it must first be terminated, so that another XLA process can connect and delete the bookmark.

  • Active standby pairs that replicate AWT cache groups: There is a transaction log hold until the replication agent confirms that the transaction corresponding to the log hold has been committed on the Oracle Database. With an active standby pair, the active database typically receives the confirmation from the standby database. If the standby database is down, the replication agent receives confirmation from Oracle Database directly.

    Possible failure modes include the following:

    • Oracle Database is down or there is a lock or resource contention.

    • The network is down, slow, or saturated.

    • With an active standby pair, replication to the standby database falls behind. Check log holds on the standby database.

    • The transaction rate to TimesTen exceeds the maximum sustainable rate that TimesTen can propagate to Oracle Database.

    For more information, see "Monitoring AWT cache groups" in the Oracle TimesTen Application-Tier Database Cache User's Guide and "Troubleshooting AWT Cache Groups" in the Oracle TimesTen In-Memory Database Troubleshooting Guide.

  • Cache groups configured with AUTOREFRESH: There is a transaction log hold until the replication agent on the active database confirms the log files have been fully processed by the standby database.

    Possible failure modes include the following:

    • Replication from the active database to the standby database is impacted because the standby database falls behind due to large workloads resulting from AUTOREFRESH mode.

    • The standby database is down or recovering, but has not been marked as FAILED through a call, initiated by either the user application or Oracle Clusterware, to the ttRepStateSave built-in procedure. The active database does not take over propagation to the Oracle Database until the state of the standby database is marked as FAILED. While the standby database is down or recovering, transaction log files are held for the Oracle Database.

    For more information, see "Monitoring autorefresh cache groups" in the Oracle TimesTen In-Memory Database Troubleshooting Guide.

  • Incremental TimesTen backup: There is a log hold until you manually run an incremental backup and until that backup completes.

    The most likely failure occurs if you configure incremental backups (which causes TimesTen to start holding log files) but do not actually run an incremental backup, which would free the log hold.

    Another possible failure mode can occur if the incremental backup falls too far behind the most recent entries in the transaction log. For example, ensure that an unexpected burst of transaction activity cannot fill up available transaction log file system space due to the backup holding a log file that is too old. An application can perform another incremental backup to work around this situation.

  • Long-running transaction or XA transaction: There is a transaction log hold until the transaction commits or rolls back.

    A possible failure mode can occur if an application transaction does not commit or rollback for a long time, so that it becomes necessary for the application to terminate the long-running transaction.

    Alternatively, you can roll back a transaction using the ttXactAdmin utility with the -xactIdRollback option. See "ttXactAdmin" in the Oracle TimesTen In-Memory Database Reference.

  • Oldest transaction undo: Each transaction that has not yet committed or rolled back establishes a log hold. This hold starts with the first (oldest) log record of the transaction and marks a point for the transaction that the checkpoint log purge operation cannot pass. You can retrieve this log hold information from the SYS.GV$CKPT_HISTORY or SYS.V$CKPT_HISTORY system views or from the output of the ttCkptHistory built-in procedure.

Purging transaction log files

Any transaction log file is kept until TimesTen determines it can be purged. Under normal TimesTen operating conditions, unneeded transaction log files are purged each time a checkpoint is initiated. A checkpoint can be initiated either through a configurable time interval with the CkptFrequency connection attribute or a configurable log volume with the CkptLogVolume connection attribute. In TimesTen Classic, you can also initiate a checkpoint (either manually or in a background checkpointing application thread) with the ttCkpt built-in procedure.

If you are running out of file system space because log files are accumulating, use the CkptLogVolume connection attribute instead of the CkptFrequency connection attribute. In addition, you can tell whether log file purging is blocked by frequently selecting from the SYS.GV$LOG_HOLDS or SYS.V$LOG_HOLDS system views or calling the ttLogHolds built-in procedure.

Note:

To improve performance, locate your log files on a separate physical device from the one on which the checkpoint files are located. See "Managing transaction log buffers and files" for more details.

See "Checkpointing" in the Oracle TimesTen In-Memory Database Introduction for general information on checkpointing. See "Configuring or turning off background checkpointing" for more details on CkptFrequency and CkptLogVolume connection attributes.

For more information on log holds, see "SYS.GV$LOG_HOLDS" or "SYS.V$LOG_HOLDS" in the Oracle TimesTen In-Memory Database System Tables and Views Reference or "ttLogHolds" in the Oracle TimesTen In-Memory Database Reference.

Also, see the sections for "CkptFrequency", "CkptLogVolume", and "ttCkpt" in the Oracle TimesTen In-Memory Database Reference.

Monitoring log holds and log file accumulation

Options for periodic monitoring of excessive transaction log accumulation include the following:

  • For details of all log holds, select from the SYS.GV$LOG_HOLDS or SYS.V$LOG_HOLDS system views or call the ttLogHolds built-in procedure. The information includes the following, as applicable:

    • Log file number, the offset of the hold position, and the type of hold, which can be checkpoint, replication, backup, XLA, long-running transaction, or long-running XA transaction

    • Name of the checkpoint file for a checkpoint hold

    • Name of the subscriber and the parallel track ID it uses for replication

    • Backup path for a backup hold

    • Name of the persistent subscription and process ID of the last process to open it for XLA

    • Transaction ID for a long-running transaction

    • XA XID for a long-running XA transaction

    For more information, see "SYS.GV$LOG_HOLDS" or "SYS.V$LOG_HOLDS" in the Oracle TimesTen In-Memory Database SQL Reference or "ttLogHolds" in the Oracle TimesTen In-Memory Database Reference.

  • Select from the SYS.GV$CKPT_HISTORY or SYS.V$CKPT_HISTORY system views or call the ttCkptHistory built-in procedure to check the last several checkpoints to confirm none of the returned rows has a status of FAILED.

    For more information, see "SYS.GV$CKPT_HISTORY" or "SYS.V$CKPT_HISTORY" in the Oracle TimesTen In-Memory Database SQL Reference or "ttCkptHistory" in the Oracle TimesTen In-Memory Database Reference.

  • Check the SYS.SYSTEMSTATS table for operational metrics. Each transaction log file has a unique sequence number, which starts at 0 for the first log file and increments by 1 for each subsequent log file. The number of the current log file is available in SYS.SYSTEMSTATS.log.file.latest. The number of the oldest log file not yet purged is available in SYS.SYSTEMSTATS.log.file.earliest. Your monitoring should raise a warning or error if the difference between these sequence numbers exceeds a threshold appropriate for your application.

    For more information, see "SYS.SYSTEMSTATS" in the Oracle TimesTen In-Memory Database System Tables and Views Reference.

  • For XLA, check the SYS.TRANSACTION_LOG_API table, which provides bookmark information, such as the process ID of the connected application; this can help diagnose why a bookmark is stuck or lagging.

    For more information, see "SYS.TRANSACTION_LOG_API" in the Oracle TimesTen In-Memory Database System Tables and Views Reference.
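
For example, a monitoring script might issue the following statements from ttIsql. This is a sketch only: it assumes the NAME and VALUE columns of SYS.SYSTEMSTATS and a STATUS column in the checkpoint history view, and the alert threshold is left to the application.

  CALL ttLogHolds;                 -- one row per hold: log file number, offset, hold type, description

  -- confirm that no recent checkpoint failed
  SELECT * FROM SYS.V$CKPT_HISTORY WHERE status = 'FAILED';

  -- number of transaction log files not yet purged; compare against your threshold
  SELECT MAX(CASE WHEN name = 'log.file.latest' THEN value END)
       - MAX(CASE WHEN name = 'log.file.earliest' THEN value END) AS log_files_held
    FROM SYS.SYSTEMSTATS
   WHERE name IN ('log.file.latest', 'log.file.earliest');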

Durability options

Databases in TimesTen are persistent across power failures and crashes. TimesTen accomplishes durability by periodically saving to disk:

  • Checkpoint files: A checkpoint operation writes the current database image to a checkpoint file on the file system, which has the effect of making all transactions that committed before the checkpoint durable.

  • Transaction log files: For transactions that committed after the last checkpoint, TimesTen uses conventional logging techniques to make them durable. As each transaction progresses, it records its database modifications in an in-memory transaction log. At commit time, the relevant portion of the transaction log is flushed to the file system. This log flush operation makes that transaction, and all previously-committed transactions, durable.

    Control returns to the application after the transaction log data has been durably written to the file system. A durably committed transaction is not lost even in the event of a system failure.

Any recovery uses the last checkpoint image together with the transaction log to reconstruct the latest transaction-consistent state of the database.

When committing transactions, TimesTen reduces the performance cost for durable commits with a group commit of multiple concurrently executing transactions. TimesTen performs a single file system write operation to commit a group of concurrent transactions durably. Group commit does not improve the response time of any given commit operation, as each durable commit must wait for the file system write operation to complete, but it can significantly improve the throughput of a series of concurrent transactions.

The following sections describe the durability implementation options for TimesTen Scaleout and TimesTen Classic:

Durability for TimesTen Scaleout

In TimesTen Scaleout, the data in your database is distributed into elements. Each element keeps its own checkpoint and transaction log files. As a result, the data stored in each element is independently durable. Each data instance in a grid manages one element of a database. In the event of a failure, a data instance can automatically recover the data stored in its element from the checkpoint and transaction log files while the remaining data instances continue to service applications.

TimesTen Scaleout also enables you to keep multiple copies of your data to increase durability and fault tolerance. If the K-safety value is 2 or greater, then a failed element can be recovered from another element in its replica set. For more information on K-safety, see "High availability and fault tolerance" and "Creating a grid" in the Oracle TimesTen In-Memory Database Scaleout User's Guide.

In addition, you can change the durability settings of a database according to your performance and data durability needs. For example, you can choose whether data is flushed to disk with every commit or periodically in batches in order to operate at a higher performance level. The following sections describe these durability settings:

Guaranteed durability for TimesTen Scaleout

In TimesTen Scaleout, you configure how durable your transactions are with the Durability connection attribute. If you set the Durability attribute to 1, participants write durable prepare-to-commit log records and nondurable commit log records for distributed transactions. Setting the Durability attribute to 1 ensures that committed transactions are recoverable in the case of a failure. See "Durability settings" in the Oracle TimesTen In-Memory Database Scaleout User's Guide for more information.
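
For example, a minimal sketch of the attribute as it might appear among the connection attributes of a Scaleout database definition (the surrounding definition and its file name are hypothetical and omitted here):

  Durability=1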

Non-durable distributed transactions for TimesTen Scaleout

Set the Durability connection attribute to 0 so that participants write nondurable prepare-to-commit and commit log records for distributed transactions.

K-safety provides a method for failure recovery. K-safety enables you to achieve near-continuous availability or workload distribution by providing two copies of the data. Thus, if one copy fails, another copy of the data exists.

If any element of a replica set has failed, the other element in the replica set writes durable prepare-to-commit log records until the failed element recovers. This ensures that transactions in a grid with K-safety set to 2 and Durability=0 are durable under normal conditions. A transaction may become nondurable only if both elements of the replica set fail simultaneously.

TimesTen Scaleout periodically promotes transactions to epoch transactions. An epoch transaction and all transactions committed before it are more resilient to catastrophic failures, since you can recover a database to the globally consistent point marked by the epoch transaction. Each epoch commit log record is associated with a specific checkpoint file on every element. In the case of an unexpected failure of an element, the recovery process must use the checkpoint file on each element that is associated with the latest epoch commit log record, which is not necessarily the latest checkpoint available on the element.

By default, TimesTen Scaleout generates one epoch transaction every second. You can configure how often epoch transactions are automatically generated with the EpochInterval first connection attribute. In addition, you can use the ttEpochCreate or ttDurableCommit built-in procedures to manually promote a transaction to an epoch transaction, as in the sketch below.
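
For example, the following sketch manually promotes a single transaction to an epoch transaction from ttIsql. The table and values are hypothetical, autocommit is turned off so the statements form one transaction, and it assumes (as with ttDurableCommit) that the built-in is called within the transaction before its commit:

  Command> autocommit 0;
  Command> UPDATE orders SET status = 'SHIPPED' WHERE order_id = 42;
  Command> CALL ttEpochCreate;     -- promote the current transaction to an epoch transaction
  Command> COMMIT;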

See "Durability set to 0" in the Oracle TimesTen In-Memory Database Scaleout User's Guide for more information on epoch transactions and how to promote a transaction to an epoch transaction.

Durability for TimesTen Classic

TimesTen Classic provides durability by using the last checkpoint image together with the transaction log to reconstruct the latest transaction-consistent state of the database. Transaction log records are written to disk either asynchronously or synchronously with the completion of the transaction, as controlled by the application at the transaction level.

  • You can create an environment for guaranteed durability with no loss of data for those cases where data integrity must be preserved.

  • You can create an environment for nondurable transactions for those cases where maximum throughput is paramount and the application can tolerate minimal exposure to data loss.

TimesTen Classic replication makes data highly available to applications with minimal performance impact. Replication enables you to achieve near-continuous availability or workload distribution by sending updates between two or more hosts. A master host is configured to asynchronously send updates and a subscriber host is configured to receive them.

Note:

For the highest availability for TimesTen Classic, use the active standby pair replication scheme configuration. See "Active standby pair with read-only subscribers" in the Oracle TimesTen In-Memory Database Replication Guide for details on active standby pair replication schemes.

With an Asynchronous WriteThrough (AWT) cache group, TimesTen Cache transmits committed updates between the TimesTen cache tables and the cached Oracle Database tables to keep these tables in the two databases synchronized. See "Asynchronous WriteThrough (AWT) cache group" in the Oracle TimesTen Application-Tier Database Cache User's Guide for details on asynchronous writethrough cache groups.

Guaranteed durability for TimesTen Classic

Guaranteed durability ensures that there is no loss of data (at the cost of some performance) for those cases where data integrity must be preserved.

To guarantee durability for all transactions, you should do at least one of the following. (There is very little advantage in using both of these options.)

  • Use replication enabled with the return twosafe service to replicate transactions. The return twosafe service provides a fully synchronous option by blocking the application until replication confirms that the update has been both received and committed on the subscriber before it is committed on the master. This gives applications a higher level of confidence that the effects of a committed transaction are durable, even if the master subsequently fails. This option is normally faster than durable commits. See "Return twosafe replication" in the Oracle TimesTen In-Memory Database Replication Guide for more details.

  • Configure durable commit operations by setting the DurableCommits connection attribute to 1, which ensures that the transaction is synchronously written to the transaction log file while the application waits. See "DurableCommits" in the Oracle TimesTen In-Memory Database Reference for more information on this connection attribute.
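
For example, a minimal sketch of enabling durable commits at connection time from ttIsql (the DSN mydb is a hypothetical placeholder):

  Command> connect "DSN=mydb;DurableCommits=1";

With this setting, each commit on the connection returns only after the relevant log records have been written durably to the transaction log files.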

Note:

If most of your transactions commit durably, you may want to set the LogFlushMethod first connection attribute to 2. This connection attribute configures how TimesTen writes and synchronizes log data to the transaction log files. For more information, see "Use durable commits appropriately".

Nondurable transactions for TimesTen Classic

Nondurable transactions execute considerably faster than durable transactions. Connections that use nondurable transactions can coexist with connections that use guaranteed durability.

As with guaranteed durability, each transaction enters records into the in-memory transaction log buffer as it makes modifications to the database. However, when a transaction commits in nondurable mode, it does not wait for the transaction log buffer to be written to the file system before returning control to the application. Thus, unless you have configured asynchronous replication or an asynchronous writethrough cache group, a nondurable transaction may be lost in the event of a database failure. Transactions are eventually flushed from the in-memory transaction log buffer to the transaction log files by the database subdaemon process, or sooner if the buffer fills up.

If your transactions require extremely high throughput with only minimal exposure to data loss, consider doing both of the following:

  • Use one of these methods to asynchronously write your data to another database (either a TimesTen database or an Oracle database):

    • Use the default asynchronous replication that replicates committed transactions from the master to the subscriber. With the default asynchronous replication, an application updates a master database and continues working without waiting for the updates to be received and applied by the subscribers. While asynchronous replication provides the best performance, it does not provide the application with confirmation that the replicated updates were committed on subscriber databases. See "Copying updates between databases" in the Oracle TimesTen In-Memory Database Replication Guide for more details.

    • Set up an Asynchronous WriteThrough (AWT) cache group to cause committed updates on TimesTen cache tables to be automatically and asynchronously propagated to the cached Oracle Database tables. A transaction commit operation on the TimesTen database occurs asynchronously from the commit on the Oracle database. This enables an application to continue issuing transactions on the TimesTen database without waiting for the Oracle Database transaction to complete. However, your application cannot ensure when the transactions are completed on the Oracle database. See "Asynchronous WriteThrough (AWT) cache group" in the Oracle TimesTen Application-Tier Database Cache User's Guide for details on AWT cache groups.

  • Configure for nondurable transactions by setting the DurableCommits connection attribute to 0, which asynchronously writes nondurable commit log records to the transaction log file.

    See "DurableCommits" in the Oracle TimesTen In-Memory Database Reference for more information.

The only transactions vulnerable to loss in the event of a system failure are those that committed non-durably after the last durable commit and were not replicated prior to the failure.

However, if you have critical transactions that cannot be exposed to any data loss, you can either:

  • Replicate those transactions synchronously from the master database to the subscriber database by running the ttRepSubscriberWait built-in procedure.

    Running the ttRepSubscriberWait built-in procedure on the master database using the DSN and host of the subscriber database causes the caller to wait until all transactions that committed before the call have been transmitted to the subscriber. See "ttRepSubscriberWait" in the Oracle TimesTen In-Memory Database Reference for more information.

  • Force the individual transactions to commit durably to the transaction log files with the ttDurableCommit built-in procedure. Committing a transaction durably makes that transaction and all previous transactions durable. However, this does not wait for the transaction to be replicated.

    When a durable commit is needed, the application can call the ttDurableCommit built-in procedure before committing. The ttDurableCommit built-in procedure does not actually commit the transaction; it merely causes the commit to be durable when it occurs.

    This maintains a smaller window of vulnerability to transaction loss than committing all transactions nondurably. By committing only every nth transaction durably, or performing a durable commit every n seconds, an application can achieve quicker response times while keeping the window of vulnerability to transaction loss small. A user can elect to perform a durable commit of a critical transaction that cannot be vulnerable to loss, such as one that deals with a financial exchange.

    See "ttDurableCommit" in the Oracle TimesTen In-Memory Database Reference for more information.

Transaction reclaim operations

After a transaction is marked by TimesTen as committed, there is a reclaim phase of the commit.

About reclaim operations

TimesTen resource cleanup occurs during the reclaim phase of a transaction commit. Consider a transaction with DELETE operations, for example. The SQL operation marks the deleted rows as deleted, but the space occupied by these rows is not actually freed until the reclaim phase of the transaction commit.

During reclaim, TimesTen reexamines all the transaction log records starting from the beginning of the transaction to determine the reclaim operations that must be performed, then performs those operations.

To improve performance, a number of transaction log records can be cached to reduce the need to access the transaction log files. This cache is referred to as the commit buffer and its size is configurable, as described in the next section, "Configuring the commit buffer for reclaim operations".

Notes:

  • The reclaim phase occurs as part of commit processing. Thus, once the reclaim phase has begun, the transaction is considered to be committed and can no longer be rolled back.

  • If an application is terminated during the reclaim phase, the cleanup operation completes the reclaim.

Configuring the commit buffer for reclaim operations

The reclaim phase of a large transaction commit results in a large amount of processing and is very resource intensive. (For this reason, smaller transactions are generally recommended.) You can improve performance, however, by increasing the maximum size of the commit buffer, which is the cache of transaction log records used during reclaim operations.

Note:

Performing an extremely large transaction commit (such as one affecting a million rows) is not recommended, as it is very resource intensive. If you are committing an extremely large transaction, do not cancel it: canceling a transaction before it completes forces TimesTen to perform rollback operations, which can cause the transaction to take even longer to complete.

You can use the TimesTen CommitBufferSizeMax connection attribute to specify the maximum size of the commit buffer, in megabytes. This setting has the scope of your current session. For efficiency, initial memory allocation will be significantly less than the maximum, but will automatically increase as needed in order to fit all the relevant log records into the commit buffer, until the allocation reaches the maximum. The allocation is then reduced back to the initial allocation after each reclaim phase. By default, the maximum is 128 KB with an initial allocation of 16 KB. See "CommitBufferSizeMax" in Oracle TimesTen In-Memory Database Reference for more information.

Be aware that an increase in the maximum size of the commit buffer may result in a corresponding increase in temporary space consumption. There is no particular limit to the maximum size you can specify, aside from the maximum value of an integer, but exceeding the available temporary space will result in an error.

Note the following related features (a combined usage sketch follows this list):

  • During the course of a session, you can use ALTER SESSION to change the maximum size of the commit buffer as follows, where n is the desired maximum, in megabytes. See "ALTER SESSION" in Oracle TimesTen In-Memory Database SQL Reference for more information.

    ALTER SESSION SET COMMIT_BUFFER_SIZE_MAX = n
    
  • You can use the ttCommitBufferStats built-in procedure to gather statistics for your connection to help you tune the commit buffer maximum size. This built-in takes no parameters and returns the total number of commit buffer overflows and the highest amount of memory used by reclaim operations for transaction log records, in bytes. If there are buffer overflows, you may consider increasing the commit buffer maximum size. If there are no overflows and the highest amount of memory usage is well under the commit buffer maximum size, you may consider decreasing the maximum size.

    In TimesTen Classic, you can use the ttCommitBufferStatsReset built-in procedure to reset these statistics to 0 (zero). This is useful, for example, if you have set a new value for the commit buffer maximum size and want to restart the statistics.

    See "ttCommitBufferStats" and "ttCommitBufferStatsReset" in Oracle TimesTen In-Memory Database Reference for more information.

  • The system-wide number of commit buffer overflows is also recorded in the TimesTen statistic txn.commits.buf.overflowed in the SYS.SYSTEMSTATS table. See "SYS.SYSTEMSTATS" in Oracle TimesTen In-Memory Database System Tables and Views Reference for more information.

  • You can check the current setting of CommitBufferSizeMax by calling the ttConfiguration built-in procedure.
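
For example, a tuning session might look like the following sketch from ttIsql (the 500 MB maximum is an arbitrary illustrative value):

  Command> ALTER SESSION SET COMMIT_BUFFER_SIZE_MAX = 500;
  Command> CALL ttCommitBufferStats;          -- overflow count and peak memory used by reclaim, in bytes
  Command> CALL ttCommitBufferStatsReset;     -- TimesTen Classic: reset the statistics after retuning
  Command> CALL ttConfiguration('CommitBufferSizeMax');   -- confirm the current setting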

Recovery with checkpoint and transaction log files

If a database becomes invalid or corrupted by a system or process failure, every connection to the database is invalidated. When an application reconnects to a failed database, the subdaemon allocates a new memory segment for the database and recovers its data from the checkpoint and transaction log files.

During recovery, the latest checkpoint file is read into memory. All transactions that have been committed since the last checkpoint and whose log records are on the file system are rolled forward from the appropriate transaction log files. Note that such transactions include all transactions that were committed durably as well as all transactions whose log records aged out of the in-memory log buffer. Uncommitted or rolled-back transactions are not recovered. See "Checkpoint operations", "Transaction logging" and "Error, warning, and informational messages" for details on checkpoint and transaction log files.