TimesTen supports transactions that provide atomic, consistent, isolated and durable (ACID) access to data. The following sections describe how you can configure transaction features.
All operations on a TimesTen database, even those that do not modify or access application data, are executed within a transaction. If an operation is executed when there is no outstanding transaction, one is started automatically on behalf of the application. Transactions are completed by an explicit or implicit commit or rollback. On completion, resources that were acquired or opened by the transaction, such as locks and cursors, are released and freed.
Use the following SQL statements to commit or roll back your transaction:
The COMMIT statement commits the current transaction. Updates made in the transaction become available to concurrent transactions.
The ROLLBACK statement rolls back the current transaction. All updates made in the transaction are undone.
Note: For the syntax of the COMMIT and ROLLBACK statements, see "SQL Statements" in the Oracle TimesTen In-Memory Database SQL Reference.
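The two statements can be sketched as follows. The table and column names are hypothetical, and the sketch assumes autocommit is disabled so that transaction boundaries are explicit:

```sql
-- Hypothetical employees table; autocommit assumed off.
UPDATE employees SET salary = salary * 1.05 WHERE emp_id = 100;
COMMIT;    -- make the update visible to concurrent transactions

DELETE FROM employees WHERE emp_id = 200;
ROLLBACK;  -- undo the delete; the row is restored
```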
Read-only transactions do not require a commit. When executing write operations, complete transactions to release locks. When possible, keep write transactions short in duration. Any long-running transactions can reduce concurrency and decrease throughput because locks are held for a longer period of time, which blocks concurrent transactions. Also, long-running transactions can prevent transaction log files from being purged, causing these files to accumulate on disk.
A connection can have only one outstanding transaction at any time and cannot be explicitly closed if it has an open transaction.
The following sections describe how you can configure whether the application enables implicit commit behavior or requires explicit commit behavior for DML or DDL statements:
Autocommit configures whether TimesTen issues an implicit commit after DML or DDL statements. By default, autocommit is enabled, following the ODBC and JDBC specifications.
When autocommit is on, the following behavior occurs:
An implicit commit is issued immediately after a statement executes successfully.
An implicit rollback is issued immediately after a statement execution fails, such as on a primary key violation.
If the statement generates a result set that opens a cursor, the automatic commit is not issued until that cursor and any other open cursors in the transaction have been explicitly closed. Any statements executed while a cursor is open are not committed until all cursors have been closed.
Fetching all rows of a result set does not automatically close its cursor. After the result set has been processed, its cursor must be explicitly closed if using the read committed isolation level or the transaction must be explicitly committed or rolled back if using Serializable isolation level.
Note: Even with durable commits and autocommit enabled, you could lose work if there is a failure or the application exits without closing cursors.
If you are using ODBC or JDBC batch operations to DELETE several rows in one call when autocommit is on, a commit occurs after the entire batch operation has completed. If an error occurs due to a problem on a particular row, only the successfully modified rows preceding the row with the error are committed in this transaction. The pirow parameter to the ODBC SQLParamOptions function contains the number of the row in the batch that had a problem.
Commits can be costly for performance and intrusive if they are implicitly executed after every statement. TimesTen recommends you disable autocommit so that all commits are intentional. Disabling autocommit provides control over transactional boundaries, enables multiple statements to be executed within a single transaction, and improves performance, since there is no implicit commit after every statement.
If autocommit is disabled, transactions must be explicitly completed with a commit or roll back after any of the following:
Completing all the work that was to be done in the transaction.
Issuing a transaction-consistent (blocking) checkpoint request.
Updating column and table statistics to be used by the query optimizer.
Calling a TimesTen built-in procedure that does not generate a result set, in order for the new setting specified in the procedure to take effect.
You must establish a connection to a database before changing the autocommit setting. To disable autocommit, perform one of the following:
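As a sketch, autocommit can be disabled in ttIsql with its autocommit command; ODBC and JDBC applications use their standard mechanisms (the SQL_AUTOCOMMIT connection option and Connection.setAutoCommit(false), respectively). The table below is hypothetical:

```sql
-- ttIsql sketch (assumes an established connection)
autocommit 0;
UPDATE accounts SET balance = balance - 100 WHERE acct_id = 1;
UPDATE accounts SET balance = balance + 100 WHERE acct_id = 2;
COMMIT;  -- both updates commit together as one transaction
```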
The TimesTen database issues an implicit COMMIT before and after any DDL statement. A durable commit is performed after the execution of each DDL statement. This behavior is the same as in the Oracle database.
This applies to DDL statements such as CREATE, ALTER and DROP statements for any database object, including tables, views, users, procedures and indexes.
There are certain things to keep in mind:
DDL changes cannot be rolled back.
DDL statements delete records from global temporary tables unless the tables were created with the ON COMMIT PRESERVE ROWS clause.
Tables created with the CREATE TABLE ... AS SELECT statement are visible immediately.
TRUNCATE statements are committed automatically. However, parent and child tables must be truncated in separate transactions, with the child table truncated first; you cannot truncate a parent table unless the child table is empty.
As a relational database, TimesTen is ACID compliant:
Atomic: All TimesTen transactions are atomic: Either all database operations in a single transaction occur or none of them occur.
Consistent: Every transaction takes the database from one consistent state to another.
Isolated: Transactions can be isolated. TimesTen has two isolation levels: read committed and serializable, which together with row level locking provide multi-user concurrency control.
Durable: Once a transaction has been committed, it remains committed.
The following sections detail how TimesTen ensures ACID semantics for transactions:
Locking and transaction logs are used to ensure ACID semantics as a transaction modifies data in a database as follows:
Locking: TimesTen acquires locks on data items that the transaction writes and, depending on the transaction isolation level, data items that the transaction reads. See "Concurrency control through isolation and locking".
Transaction logging: All TimesTen transactions are atomic: either all or none of the effects of the transaction are applied to the database. Modifications to the database are recorded in a transaction log, and atomicity is implemented by using the transaction log to undo the effects of a transaction if it is rolled back. A rollback can be initiated explicitly by the application or can occur during database recovery because the transaction was not committed at the time of failure. See "Transaction logging".
A transaction ends in one of three ways: it is terminated successfully (committed), it is rolled back, or the system fails before the transaction commits.
A checkpoint operation writes the current in-memory database image to a checkpoint file on disk that has the effect of making all transactions that have been committed at the time of the checkpoint operation consistent and durable.
All transactions are logged to an in-memory transaction log buffer, which is written to disk in one of the following ways:
TimesTen transactions support ANSI Serializable and ANSI Read Committed levels of isolation. ANSI Serializable isolation is the most stringent transaction isolation level. ANSI Read Committed allows greater concurrency. Read Committed is the default and is an appropriate isolation level for most applications.
The following sections describe transaction isolation and locking levels:
Transaction isolation enables each active transaction to operate as if there were no other transactions active in the system. Isolation levels determine if row-level locks are acquired when performing read operations. When a statement is issued to update a table, locks are acquired to prevent other transactions from modifying the same data until the updating transaction completes and releases its locks.
The Isolation connection attribute sets the isolation level for a connection. Isolation levels have no effect when using database-level locking, because transactions cannot run concurrently. The isolation level cannot be changed in the middle of a transaction.
TimesTen supports the following two transaction isolation levels:
ANSI Read Committed isolation: The read committed isolation level is the recommended mode of operation for most applications, and is the default mode. It enables transactions that are reading data to execute concurrently with a transaction that is updating the same data. TimesTen makes multiple versions of data items to allow non-serializable read and write operations to proceed in parallel.
Read operations do not block write operations and write operations do not block read operations, even when they read and write the same data. Read operations do not acquire locks on scanned rows. Write operations acquire locks that are held until the transaction commits or rolls back. Readers share a committed copy of the data, whereas a writer has its own uncommitted version. Therefore, when a transaction reads an item that is being updated by another in-progress transaction, it sees the committed version of that item. It cannot see an uncommitted version of an in-progress transaction.
Read committed isolation level provides better concurrency at the expense of decreased isolation, because of the possibility of non-repeatable reads or phantom rows within a transaction. If an application executes the same query multiple times within the same transaction, the commit of an update from another transaction may cause the query to return different results. A phantom row is a row that appears, in modified form, in two different reads within the same transaction, due to the early release of read locks during the transaction.
Read committed is the default isolation level, so you only need to set it if it was previously changed. To set the read committed isolation level, do one of the following:
In ODBC applications, execute the SQLSetConnectOption ODBC function with the SQL_TXN_ISOLATION flag set to SQL_TXN_READ_COMMITTED.
Set isolation=1 in the connection string.
In ttIsql, execute the isolation 1 or isolation READ_COMMITTED command.
ANSI Serializable isolation: All locks acquired within a transaction by a read or write operation are held until the transaction commits or rolls back. Read operations block write operations, and write operations block read operations. As a result, a row that has been read by one transaction cannot be updated or deleted by another transaction until the original transaction terminates. Similarly, a row that has been inserted, updated or deleted by one transaction cannot be accessed in any way by another transaction until the original transaction terminates.
Serializable isolation level provides for repeatable reads and increased isolation at the expense of decreased concurrency. A transaction that executes the same query multiple times within the same transaction is guaranteed to see the same result set each time. Other transactions cannot update or delete any of the returned rows, nor can they insert a new row that satisfies the query predicate.
To set the isolation level to Serializable, do one of the following:
In ODBC applications, execute the SQLSetConnectOption ODBC function with the SQL_TXN_ISOLATION flag set to SQL_TXN_SERIALIZABLE.
Set isolation=0 in the connection string.
In ttIsql, execute the isolation 0 or isolation SERIALIZABLE command.
To ensure that materialized views are always in a consistent state, all view maintenance operations are performed under Serializable isolation, even when the transaction is in read committed isolation. This means that the transaction obtains read locks for any data items read during view maintenance. However, the transaction releases the read locks at the end of the statement that triggered the view maintenance (such as a CREATE VIEW statement), instead of holding them until the end of the transaction.
The ttXactAdmin utility generates a report showing lock holds and lock waits for all outstanding transactions. It can be used to troubleshoot lock contention problems where operations are being blocked, or are encountering lock timeout or deadlock errors. It can also be used to roll back a specified transaction.
Note: Different connections can coexist with different levels of locking, but the presence of even one connection using database-level locking leads to reduced concurrency. For performance information, see "Choose the best method of locking".
Row-level locking: Transactions usually obtain locks on the individual rows that they access. Row-level locking is the recommended mode of operation because it provides the finest granularity of concurrency control. It allows concurrent transactions to update different rows of the same table. However, row-level locking requires space in the database's temporary memory region to store lock information.
Row-level locking is the default. However, if it has been modified to another type of locking and you want to re-enable row-level locking, do one of the following:
Set the LockLevel connection attribute to 0.
Call the ttLockLevel built-in procedure with the lockLevel parameter set to Row. This procedure changes the lock level between row-level and database-level locking on the next transaction and for all subsequent transactions for this connection.
Call the ttOptSetFlag built-in procedure to set the RowLock parameter to 1, which enables the optimizer to consider using row locks.
Table-level locking: Table-level locking is recommended when concurrent transactions access different tables or a transaction accesses most of the rows of a particular table. Table-level locking provides better concurrency than database-level locking. Row-level locking provides better concurrency than table-level locking. Table-level locking requires only a small amount of space in the temporary memory region to store lock information.
Table-level locking provides the best performance for the following:
Queries that access a significant number of rows of a table
When there are very few concurrent transactions that access a table
When temporary space is inadequate to contain all row locks that an operation, such as a large insert or a large delete, might acquire
To enable table-level locking, execute the ttOptSetFlag built-in procedure to set the TblLock parameter to 1, which enables the optimizer to consider using table locks. In addition, set RowLock to 0 so that the optimizer does not consider row-level locks.
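For example, a connection could hint the optimizer toward table locks before a large scan or bulk delete. This is a sketch using the built-in procedures described above:

```sql
-- Prefer table-level locks for subsequent statements on this connection
call ttOptSetFlag('TblLock', 1);
call ttOptSetFlag('RowLock', 0);
-- ... run the large scan or bulk delete ...
```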
If both table-level and row-level locking are disabled, TimesTen defaults to row-level locking. If both table-level and row-level locking are enabled, TimesTen chooses the locking scheme that is more likely to have better performance. Even though table-level locking provides better performance than row-level locking because of reduced locking overhead, the optimizer often chooses row-level locking for better concurrency. For more information, see "ttOptSetFlag" in the Oracle TimesTen In-Memory Database Reference.
Note:When multiple locks have been obtained within the same transaction, the locks are released sequentially when the transaction ends.
Database-level locking: Database-level locking serializes all transactions, which effectively allows no concurrency on the database. When a transaction is started, it acquires an exclusive lock on the database, which ensures that there is no more than one active transaction in the database at any given time. It releases the lock when the transaction is completed.
Database-level locking often provides better performance than row-level locking, due to reduced locking overhead. In addition, it provides higher throughput than row-level locking when running a single stream of transactions such as a bulk load operation. However, its applicability is limited to applications that never execute multiple concurrent transactions. With database-level locking, every transaction effectively runs in ANSI Serializable isolation, since concurrent transactions are disallowed.
To enable database-level locking, do one of the following:
Set the LockLevel connection attribute to 1.
Call the ttLockLevel built-in procedure with the lockLevel parameter set to DS. This procedure changes the lock level between row-level and database-level locking on the next transaction and for all subsequent transactions for this connection.
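A single-stream bulk load could switch its connection to database-level locking and back, for example. This is a sketch; note that the change takes effect on the next transaction:

```sql
call ttLockLevel('DS');   -- database-level locking from the next transaction on
-- ... single-stream bulk load ...
COMMIT;
call ttLockLevel('Row');  -- return to row-level locking
COMMIT;
```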
Set the LockWait connection attribute to the maximum amount of time that a statement waits to acquire a lock before it times out. The default is 10 seconds. If a statement within a transaction waits for a lock and the lock wait interval has elapsed, an error is returned. After receiving the error, the application can reissue the statement.
Lock wait intervals are imprecise due to the scheduling of the database's managing subdaemon process to detect lock timeouts. This imprecision does not apply to zero-second timeouts, which are always immediately reported. The lock wait interval does not apply to blocking checkpoints.
The database's managing subdaemon process checks every two seconds to see if there is a deadlock in the database among concurrent transactions. If a deadlock occurs, an error is returned to one of the transactions involved in the deadlock cycle. The transaction that receives the error must roll back in order to allow the other transactions involved in the deadlock to proceed.
When running a workload with high lock-contention potential, consider setting the LockWait connection attribute to a smaller value for faster return of control to the application, or to a larger value to increase the successful lock grant ratio (with a risk of decreased throughput).
For more information, see "LockWait" in Oracle TimesTen In-Memory Database Reference.
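The lock wait interval can also be changed at runtime for the current connection with the ttLockWait built-in procedure (a sketch, assuming that built-in is available in your release):

```sql
call ttLockWait(2);  -- wait at most ~2 seconds for a lock before returning an error
```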
A checkpoint operation saves the in-memory image of a database to disk files, known as checkpoint files. By default, TimesTen performs background checkpoints at regular intervals. Checkpointing may generate a large amount of I/O activity and have a long execution time depending on the size of the database and the number of database changes since the most recent checkpoint.
Note:Applications can programmatically initiate checkpoint operations. See "Setting and managing checkpoints" for more details.
The following sections describe checkpoint operations and how you can manage them:
A checkpoint operation has two primary purposes:
Decreases the amount of time required for database recovery, because it provides a more up-to-date database image on which recovery can begin.
Makes a portion of the transaction log unneeded for any future database recovery operation, typically allowing one or more transaction log files to be deleted.
Both of these functions are very important to TimesTen applications. The reduction in recovery time is important, as the amount of a transaction log needed to recover a database has a direct impact on the amount of downtime seen by an application after a system failure. The removal of unneeded transaction log files is important because it frees disk space that can be used for new transaction log files. In addition, the fewer transaction log files you have, the less time is required to load a database into memory. If these files were never purged, they would eventually consume all available space in the transaction log directory's file system, causing database operations to fail due to log space exhaustion.
Each TimesTen database has two checkpoint files, named dsname.ds0 and dsname.ds1, where dsname is the database path name and file name prefix specified in the database DSN. During a checkpoint operation, TimesTen determines which checkpoint file contains the most recent consistent image and then writes the next in-memory image of the database to the other file. Thus, the two files contain the two most recent database images.
TimesTen uses the most recent consistent checkpoint and the transaction log files to recover the database to its most recent transaction-consistent state after a database shutdown or system failure. (The most recent transaction log files are those written since the checkpoint was done.) If any errors occur during this process, or if the more recent checkpoint image is incomplete, then recovery restarts using the other checkpoint file. For more information on transaction logging, see "Transaction logging".
TimesTen supports two types of database checkpoint operations:
Fuzzy checkpoints, or non-blocking checkpoints, allow transactions to execute against the database while the checkpoint is in progress. Fuzzy checkpoints do not obtain locks of any kind, and therefore have a minimal impact on other database activity. Because transactions may modify the database while a checkpoint operation is in progress, the resulting checkpoint file may contain both committed and uncommitted transactions. Furthermore, different portions of the checkpoint image may reflect different points in time. For example, one portion may have been written before a given transaction committed, while another portion was written afterward. The term "fuzzy checkpoint" derives its name from this fuzzy state of the database image.
To recover the database when the checkpoint files were generated from fuzzy checkpoint operations, TimesTen requires the most recent consistent checkpoint file and the transaction log to bring the database into its most recent transaction-consistent state.
Transaction-consistent checkpoints, also known as blocking checkpoints, obtain an exclusive lock on the database for a portion of the checkpoint operation, blocking all access to the database during that time. The resulting checkpoint image contains all committed transactions prior to the time the checkpoint operations acquired the exclusive lock on the database. Because no transactions can be active while the database lock is held, no modifications made by in-progress transactions are included in the checkpoint image.
TimesTen uses the most recent consistent checkpoint file to recover the database to transaction-consistent state at the time of the last successful checkpoint operation completed. It uses the transaction log files to recover the database to its most recent transaction-consistent state after a database shutdown or system failure.
To request a transaction-consistent checkpoint, an application uses the ttCkptBlocking built-in procedure. The actual checkpoint is delayed until the requesting transaction commits or rolls back. If a transaction-consistent checkpoint is requested for a database for which both checkpoint files are already up to date, then the checkpoint request is ignored.
The default behavior for TimesTen checkpoints is as follows:
TimesTen performs periodic fuzzy checkpoints in the background. You can modify this behavior. See "Configuring or turning off background checkpointing" for more information.
TimesTen performs a transaction-consistent checkpoint operation of a database just before the database is unloaded from memory. See "Transaction-consistent checkpoints".
You can manage and monitor checkpoints with the following connection attributes and built-in procedures:
CkptFrequency connection attribute
CkptLogVolume connection attribute
CkptRate connection attribute
CkptReadThreads connection attribute
ttCkpt built-in procedure
ttCkptBlocking built-in procedure
ttCkptConfig built-in procedure
ttCkptHistory built-in procedure
The following sections describe how to manage checkpointing:
By default, TimesTen performs periodic fuzzy checkpoints in the background. Therefore, applications rarely need to issue manual checkpoints. However, if an application wishes to issue a manual checkpoint, it can call the ttCkpt built-in procedure to request a fuzzy checkpoint or the ttCkptBlocking built-in procedure to request a transaction-consistent checkpoint. This is not recommended. See "ttCkpt" and "ttCkptBlocking" in the Oracle TimesTen In-Memory Database Reference for more information.
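For example, a manual checkpoint request is simply a built-in procedure call (a sketch; as noted above, routine manual checkpoints are unnecessary):

```sql
call ttCkpt;          -- request a fuzzy (non-blocking) checkpoint
call ttCkptBlocking;  -- or request a transaction-consistent checkpoint
```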
Using connection attributes or built-in procedures, you can configure TimesTen to checkpoint either when the transaction log files contain a certain amount of data or at a specific frequency.
To configure checkpointing in TimesTen, set the CkptFrequency and CkptLogVolume connection attributes as follows:
The CkptFrequency connection attribute controls how often, in seconds, TimesTen performs a background checkpoint. The default is 600 seconds. Set CkptFrequency to 0 if you want to control background checkpointing with the CkptLogVolume connection attribute instead.
The CkptLogVolume connection attribute controls how much data, in megabytes, can collect in the transaction log files between background checkpoints. By increasing this amount, you can delay checkpoints. The default is 0. In most cases, it is recommended to use CkptLogVolume rather than CkptFrequency, because CkptFrequency does not take into account the rate of database transactions.
To turn off background checkpointing, set both the CkptFrequency and CkptLogVolume connection attributes to 0.
Alternatively, you can configure background checkpointing or turn it off by calling the
ttCkptConfig built-in procedure. The values set by
ttCkptConfig take precedence over those set with the connection attributes.
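For example, to drive background checkpoints by log volume rather than by time, a sketch might be (parameter order assumed to be checkpoint frequency, then log volume):

```sql
-- Checkpoint whenever roughly 64 MB of transaction log has accumulated,
-- rather than on the default 600-second timer.
call ttCkptConfig(0, 64);
```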
The ttCkptHistory built-in procedure returns information about recent checkpoints. For example, you can confirm:
The type of the checkpoint: fuzzy, blocking, static or none. See "Types of checkpoints" for details.
The status of the checkpoint: in-progress, completed, or failed. When examining checkpoint history, verify whether any recent checkpoints failed (indicated with a status of Failed). If a checkpoint has failed, the reason is displayed in the error or additional details columns.
The reason why the checkpoint occurred. The most common reasons are: a user executed a built-in procedure, a background checkpoint request, database recovery, and a log-based catch-up recovery process (the log-based catch-up recovery process applies only to TimesTen Scaleout).
The amount of time a checkpoint takes (subtract the completion time of the checkpoint from the start time).
The amount of data (total number of bytes) written by a typical checkpoint.
The checkpoint rate. See "Setting the checkpoint rate" for details on how to calculate the checkpoint rate from the data provided by the
ttCkptHistory built-in procedure.
The percentage of an in-progress checkpoint that has been completed. See "ttCkptHistory" in the Oracle TimesTen In-Memory Database Reference for more details.
The number of whole transaction log files purged by this checkpoint. A zero in this column does not mean that no log records were purged; log records can be purged continually within a transaction log file. This column displays the number of complete transaction log files purged, to show when disk space is freed. TimesTen may not be able to purge some transaction log files if there is a hold set on the transaction log; for example, this value may be zero when the checkpoint cannot purge transaction log files due to a long-running transaction or a replication bookmark located far back in the transaction log files.
The bookmark name (the reason for the transaction log hold) that is preventing the removal of one or more transaction logs. This name can help identify why transaction log files are remaining on disk longer than expected. See "Log holds by TimesTen components or operations" in this guide and "ttLogHolds" in the Oracle TimesTen In-Memory Database Reference for details on the different bookmark (transaction log hold) names.
If this information does not provide enough context on why transaction logs are not purged, then you can also execute the
ttXactAdmin built-in procedure for more details on the transaction logs.
Results are ordered by start time, with the most recent first.
This example shows that a
TT0847 error occurred during the most recent checkpoint attempt, which was a user-initiated checkpoint. The earlier checkpoints completed successfully.
call ttCkptHistory;
< 2011-04-14 16:57:14.476860, 2011-04-14 16:57:14.477957, Fuzzy , Failed , User , BuiltIn , 847, 1, <NULL>, <NULL>, 0, 0, 0, 0, 0, 0, 0, <NULL>, 7, 0, <NULL>, Errors 1: TT0847: 16:57:14 (2011-04-14) >
< 2011-04-14 16:56:34.169520, 2011-04-14 16:56:59.715451, Fuzzy , Completed , User , BuiltIn , <NULL>, 0, 0, 8966472, 294, 33554432, 291, 5677288, 5, 522000, 532928, <NULL>, 6, 0, Checkpoint, <NULL> >
< 2011-04-14 16:55:47.703199, 2011-04-14 16:55:48.188764, Fuzzy , Completed , Checkpointer , Background , <NULL>, 1, 0, 8964304, 294, 33554432, 291, 5677288, 27, 1019512, 1065408, <NULL>, 5, 0, Checkpoint, <NULL> >
< 2011-04-14 16:54:47.106110, 2011-04-14 16:54:47.723379, Static , Completed , Subdaemon , FinalCkpt , <NULL>, 0, 0, 8960328, 294, 33554432, 291, 5677288, 256, 33157172, 5321548, <NULL>, 4, 0, Checkpoint, <NULL> >
< 2011-04-14 16:54:41.633792, 2011-04-14 16:54:42.568469, Blocking , Completed , User , BuiltIn , <NULL>, 1, 0, 8958160, 294, 33554432, 291, 5677288, 31, 1162112, 6604976, <NULL>, 3, 0, Checkpoint, <NULL> >
< 2011-04-14 16:54:37.438827, 2011-04-14 16:54:37.977301, Static , Completed , User , DbCreate , <NULL>, 0, 0, 1611984, 93, 33554432, 92, 1853848, 93, 33554432, 1854052, <NULL>, 2, 0, Checkpoint, <NULL> >
< 2011-04-14 16:54:36.861728, 2011-04-14 16:54:37.438376, Static , Completed , User , DbCreate , <NULL>, 1, 0, 1609936, 93, 33554432, 92, 1853848, 93, 33554432, 1854052, <NULL>, 1, 0, Checkpoint, <NULL> >
7 rows found.
This example selects specific columns from the checkpoint history:
SELECT type, reason, bookmarkname, logsPurged FROM ttCkptHistory;
< Fuzzy , BuiltIn , Oldest Transaction Undo, 0 >
< Static , FinalCkpt , Checkpoint, 6 >
< Blocking , BuiltIn , Checkpoint, 0 >
< Blocking , BuiltIn , Checkpoint, 0 >
< Blocking , BuiltIn , Checkpoint, 0 >
< Blocking , BuiltIn , Backup, 5 >
< Blocking , BuiltIn , Backup, 0 >
< Blocking , BuiltIn , Backup, 0 >
The output from this example shows that the oldest checkpoints (the last rows displayed) did not purge any transaction log files (indicated by a
0 in the
logsPurged column) because there was a log hold set by a backup. The backup caused a transaction log file accumulation. However, eventually, the log hold was removed and five transaction log files could be purged.
A checkpoint operation is started on the fourth row from the bottom. The checkpoint places a log hold that prevents transaction log files from being purged. Six transaction log files were purged by the final checkpoint (the Static row with reason FinalCkpt).
The most recent checkpoint is a fuzzy checkpoint with a log hold of Oldest Transaction Undo. This hold marks a point for the transaction that the checkpoint log purge operation cannot pass. If you see this hold over several consecutive rows, it may indicate a long-running transaction that is causing transaction log files to accumulate.
By default, there is no limit to the rate at which checkpoint data is written to disk. You can use the
CkptRate connection attribute or the
ttCkptConfig built-in procedure to set the maximum rate at which background checkpoint data is written to disk. Checkpoints taken during recovery and final checkpoints do not honor this rate; in those situations, the rate is unlimited.
Note:See "CkptRate" and "ttCkptConfig" in the Oracle TimesTen In-Memory Database Reference for details on using these features.
Setting a rate too low can cause checkpoints to take an excessive amount of time and cause the following problems:
Delay the purging of unneeded transaction log files.
Delay the start of backup operations.
Setting a rate too high can cause checkpoints to consume too much of the file system buffer cache bandwidth, which could result in the following:
Reduce overall database transaction throughput, as transaction logs are prevented from writing to disk as quickly as they normally would.
Cause contention with other file system I/O operations.
When choosing a rate, you should take into consideration the amount of data written by a typical checkpoint and the amount of time checkpoints usually take. Both of these pieces of information are available through the
ttCkptHistory built-in procedure.
If a running checkpoint appears to be progressing too slowly when you evaluate the progress of this checkpoint with the
Percent_Complete column of the
ttCkptHistory result set, the rate can be increased by calling the
ttCkptConfig built-in procedure. If a call to
ttCkptConfig changes the rate, the new rate takes effect immediately, affecting even the running checkpoint.
Perform the following to calculate the checkpoint rate:
Call the ttCkptHistory built-in procedure.
For any given checkpoint, subtract the starttime from the endtime.
Divide the number of bytes written by this elapsed time in seconds to get the number of bytes per second.
Divide this number by 1024*1024 to get the number of megabytes per second.
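The steps above amount to simple arithmetic. The following Python sketch is a hypothetical illustration; the ckpt_rate_mb_per_sec helper and the sample timestamps and byte count stand in for the starttime, endtime, and bytes-written values of a ttCkptHistory row.

```python
from datetime import datetime

# Sketch: compute the checkpoint write rate in MB/s from one ttCkptHistory row.
# The inputs are hypothetical stand-ins for the values returned by the
# ttCkptHistory built-in procedure.

def ckpt_rate_mb_per_sec(starttime, endtime, bytes_written):
    elapsed = (endtime - starttime).total_seconds()  # step 2: endtime - starttime
    bytes_per_sec = bytes_written / elapsed          # step 3: bytes per second
    return bytes_per_sec / (1024 * 1024)             # step 4: megabytes per second

start = datetime(2024, 1, 1, 12, 0, 0)
end = datetime(2024, 1, 1, 12, 0, 40)                # a 40-second checkpoint
rate = ckpt_rate_mb_per_sec(start, end, 838860800)   # 800 MB written
print(round(rate, 1))  # 20.0
```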
When setting the checkpoint rate, you should consider the following:
The specified checkpoint rate is only approximate. The actual rate of the checkpoint may be below the specified rate, depending on the hardware, system load and other factors.
The above method may underestimate the actual checkpoint rate, because the starttime and endtime interval includes other checkpoint activities in addition to the writing of dirty blocks to the checkpoint file.
The Percent_Complete field of the
ttCkptHistory call may show 100 percent before the checkpoint is actually complete. The
Percent_Complete field shows only the progress of the writing of dirty blocks and does not include additional bookkeeping at the end of the checkpoint.
When adjusting the checkpoint rate, you may also need to adjust the checkpoint frequency, as a slower rate makes checkpoints take longer, which effectively increases the minimum time between checkpoint beginnings.
By default, TimesTen reads checkpoint files serially with a single thread. Use the
CkptReadThreads connection attribute to set the number of threads that TimesTen uses to read the checkpoint files when loading the database into memory.
If you set CkptReadThreads to n, TimesTen divides each checkpoint file into
n portions of equal size. Each thread concurrently reads a portion of the file into memory. Once all threads have successfully read their portions of the checkpoint file, TimesTen checks the database for consistency.
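Conceptually, the division into portions works like the following Python sketch. The split_portions helper is a hypothetical illustration of splitting a file into n roughly equal byte ranges; it is not a description of TimesTen internals.

```python
# Sketch: split a checkpoint file of file_size bytes into n roughly equal
# portions, one per reader thread (as CkptReadThreads=n does conceptually).
# The offsets and splitting scheme are illustrative assumptions.

def split_portions(file_size, n):
    """Return (offset, length) pairs covering file_size bytes with n portions."""
    base, extra = divmod(file_size, n)
    portions, offset = [], 0
    for i in range(n):
        length = base + (1 if i < extra else 0)  # spread the remainder
        portions.append((offset, length))
        offset += length
    return portions

print(split_portions(10, 4))  # [(0, 3), (3, 3), (6, 2), (8, 2)]
```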
TimesTen maintains a transaction log for each database to track all updates made within a transaction, so that those updates can be undone if the transaction is rolled back. After a system failure, TimesTen recovers the database using the most recent checkpoint file together with the transaction log records of transactions committed since the last checkpoint operation.
A transaction log record is created for each database update, commit, and roll back. However, transaction log records are not generated for read-only transactions. Log records are first written to the transaction log buffer, which resides in the same shared memory segment as the database. The contents of the log buffer are then subsequently flushed to the latest transaction log file on disk.
The transaction log is a logically ordered sequence of transaction log records, but physically consists of a set of one or more transaction log files on disk and the in-memory log buffer within the TimesTen database. This log is shared by all concurrent connections.
The following sections describe how to manage and monitor the transaction log buffers and file:
Transaction log buffers: There is one transaction log buffer for each database and the size of the transaction log buffer can be configured using the
LogBufMB DSN connection attribute. Each transaction log buffer can have multiple strands. The number of transaction log buffer strands is configured with the
LogBufParallelism connection attribute.
The LogDir connection attribute is optional; if it is not specified, the transaction log files are created in the same directory as the checkpoint files. However, for best performance, TimesTen recommends that you use the
LogDir connection attribute to place the transaction log files on a different physical device from the checkpoint files. If separated, I/O operations for checkpoints do not block I/O operations to the transaction log and vice versa.
If the database is already loaded into RAM and the transaction log files and checkpoint files for your database are on the same physical device, TimesTen writes a message to the daemon log file.
See "Configure the checkpoint files and transaction log files to be on a different physical device" in this book and "Check transaction log file use of disk space" in the Oracle TimesTen In-Memory Database Troubleshooting Guide for more details.
The maximum size for the transaction log files is configured with the
LogFileSize DSN connection attribute.
The transaction log file names have the form ds_name.logn, where
ds_name is the database path name that is specified by the
DataStore DSN connection attribute and is provided within the database's DSN. The suffix
n is the transaction log file number, starting at zero.
When the database is created, TimesTen creates reserve files named ds_name.res0, ds_name.res1, and
ds_name.res2. These files contain pre-allocated space that serves as reserved transaction log space. Reserved transaction log space allows for a limited continuation of transaction logging if the file system that holds the transaction log files becomes full. If the file system that contains the transaction logs becomes full, the database does not allow any new transactions to begin. Transactions that are in progress are allowed to continue by using the reserve files until they either commit or roll back. If transactions are still in progress after the reserve files are consumed, all database operations that generate log records are blocked.
It is important to verify at frequent intervals that there are no transaction log holds that could result in an excessive accumulation of transaction log files. If too many transaction log files accumulate and fill up available disk space, new transactions in the TimesTen database cannot begin until the transaction log hold is advanced and transaction log files are purged by the next checkpoint operation.
The following sections describe transaction log operations, log holds, and accumulation of log files:
Any transaction log file is kept until TimesTen determines it can be purged, which can occur under the following conditions:
Transactions writing log records to the file have been committed or rolled back. These can be either local database transactions or XA transactions.
Changes recorded in the file have been written to both checkpoint files.
Changes recorded in the file have been replicated, if replication is enabled.
Changes recorded in the file have been propagated to the Oracle database, if TimesTen Cache is used and configured for that behavior.
Changes recorded in the file have been reported to XLA, if XLA is used.
Under normal TimesTen operating conditions, unneeded transaction log files are purged each time a checkpoint is initiated. A checkpoint can be initiated either through a configurable time interval with the
CkptFrequency connection attribute, a configurable log volume with the
CkptLogVolume connection attribute, or by calling the
ttCkpt built-in procedure, which can be called either manually or in a background checkpointing application thread.
If you are running out of disk space because of log files accumulating, use the
CkptLogVolume connection attribute instead of the
CkptFrequency connection attribute. In addition, if you execute the
ttLogHolds built-in procedure frequently, you can determine whether log reclamation is blocked.
Note: To improve performance, locate your log files on a separate physical device from the one on which the checkpoint files are located. The
LogDir connection attribute determines where log files are stored. For more information, see "Managing transaction log buffers and files" in this book or "LogDir" in the Oracle TimesTen In-Memory Database Reference.
See "Checkpointing" in the Oracle TimesTen In-Memory Database Introduction for general information. See "Configuring or turning off background checkpointing" for more details on
CkptLogVolume. Also, see the sections for "CkptFrequency", "CkptLogVolume", "ttCkpt", and "ttLogHolds" in the Oracle TimesTen In-Memory Database Reference.
Several TimesTen components or operations can cause transaction log holds. A transaction log hold prevents log files, beyond a certain point, from being purged until they are no longer needed. In normal circumstances, the log hold position is regularly advanced and log files are purged appropriately. However, if operations are not functioning properly and the hold position does not advance, there can be an excessive accumulation of log files beyond the hold position that can no longer be purged, which eventually fills available disk space.
These components and operations include the following:
Replication: There is a transaction log hold until the transmitting replication agent confirms that the log files have been fully processed by the receiving host.
Possible failure modes include the following:
The network is down or there is a standby crash and replication is unable to deliver data to one or more subscribers. If necessary, the application can direct that logs no longer be held, then duplicate the master database to the standby when normal operations resume. Criteria for when to do this includes the amount of time required to duplicate, the amount of available disk space on the master for log files, and the transaction log growth rate.
The overall database transaction rate exceeds the ability of replication to keep the active and standby databases synchronized. An application can reduce the application transaction rate or the number of replicated tables.
For more information, see "Improving Replication Performance" in the Oracle TimesTen In-Memory Database Replication Guide and "Troubleshooting Replication" in the Oracle TimesTen In-Memory Database Troubleshooting Guide.
XLA: There is a transaction log hold until the XLA bookmark advances.
A possible failure mode occurs when the bookmark becomes stuck, which can occur if an XLA application terminates unexpectedly or if it disconnects without first deleting its bookmark or disabling change-tracking. If a bookmark gets too far behind, the application can delete it. If the XLA reader process is still active, it must first be terminated, so that another XLA process can connect and delete the bookmark.
Active standby pairs that replicate AWT cache groups: There is a transaction log hold until the replication agent confirms that the transaction corresponding to the log hold has been committed on the Oracle Database. With an active standby pair, the active database typically receives the confirmation from the standby database. If the standby database is down, the replication agent receives confirmation from Oracle Database directly.
Possible failure modes include the following:
Oracle Database is down or there is a lock or resource contention.
The network is down, slow, or saturated.
With an active standby pair, replication to the standby database falls behind. Check log holds on the standby database.
The transaction rate to TimesTen exceeds the maximum sustainable rate that TimesTen can propagate to Oracle Database.
For more information, see "Monitoring AWT cache groups" in the Oracle TimesTen Application-Tier Database Cache User's Guide and "Troubleshooting AWT Cache Groups" in the Oracle TimesTen In-Memory Database Troubleshooting Guide.
Cache groups configured with
AUTOREFRESH: There is a transaction log hold until the replication agent on the active database confirms the log files have been fully processed by the standby database.
Possible failure modes include the following:
Replication from the active database to the standby database is impacted because the standby database falls behind due to large workloads resulting from autorefresh operations.
The standby database is down or recovering, but has not been marked as
FAILED through a call, initiated by either the user application or Oracle Clusterware, to the
ttRepStateSave built-in procedure. The active database does not take over propagation to the Oracle Database until the state of the standby database is marked as
FAILED. While the standby database is down or recovering, transaction log files are held for the Oracle Database.
For more information, see "Monitoring autorefresh cache groups" in the Oracle TimesTen In-Memory Database Troubleshooting Guide.
Incremental TimesTen backup: There is a transaction log hold until the backup completes.
A possible failure mode can occur if the incremental backup falls too far behind the most recent entries in the transaction log. For example, an unexpected burst of transaction activity could fill up the available transaction log disk space because the backup is holding a log file that is too old. An application can perform another incremental backup to work around this situation.
Long-running transaction or XA transaction: There is a transaction log hold until the transaction completes.
A possible failure mode can occur if an application transaction does not commit or roll back for a long time, making it necessary for the application to terminate the long-running transaction.
If necessary, you can roll back a transaction using the
ttXactAdmin utility with the
-xactIdRollback option. See "ttXactAdmin" in Oracle TimesTen In-Memory Database Reference.
Oldest transaction undo: A transaction log hold on the oldest log record of the transaction, which is used to roll back the transaction or undo its work after a crash (if necessary). This hold marks a point for the transaction that the checkpoint log purge operation cannot pass. This log hold information is only displayed within the output from the
ttCkptHistory built-in procedure.
Options for periodic monitoring of excessive transaction log accumulation include the following:
Call the ttLogHolds built-in procedure, which returns the following information for each log hold:
Log file number, the offset of the hold position, and the type of hold, which can be checkpoint, replication, backup, XLA, long-running transaction, or long-running XA transaction
Name of the checkpoint file for a checkpoint hold
Name of the subscriber and the parallel track ID it uses for replication
Backup path for a backup hold
Name of the persistent subscription and process ID of the last process to open it for XLA
Transaction ID for a long-running transaction
XA XID for a long-running XA transaction
For more information, see "ttLogHolds" in the Oracle TimesTen In-Memory Database Reference.
Call the ttCkptHistory built-in procedure to check whether recent checkpoints have been purging transaction log files. For more information, see "ttCkptHistory" in the Oracle TimesTen In-Memory Database Reference.
Check the SYS.SYSTEMSTATS table for operational metrics. Each transaction log file has a unique sequence number, which starts at 0 for the first log file and increments by 1 for each subsequent log file. The number of the current log file is available in
SYS.SYSTEMSTATS.log.file.latest. The number of the oldest log file not yet purged is available in
SYS.SYSTEMSTATS.log.file.earliest. You should raise an error or warning if the difference between these sequence numbers exceeds a threshold that you consider excessive.
For more information, see "SYS.SYSTEMSTATS" in the Oracle TimesTen In-Memory Database System Tables and Views Reference.
For XLA, check the
SYS.TRANSACTION_LOG_API table, which provides bookmark information, such as the process ID of the connected application; this can help diagnose why a bookmark is stuck or lagging.
For more information, see "SYS.TRANSACTION_LOG_API" in the Oracle TimesTen In-Memory Database System Tables and Views Reference.
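As an illustration of the SYS.SYSTEMSTATS check described above, the following Python sketch compares the two log file sequence numbers against a threshold. The log_accumulation_alert helper and the sample values are hypothetical; real values would be read from SYS.SYSTEMSTATS.log.file.latest and SYS.SYSTEMSTATS.log.file.earliest.

```python
# Sketch: alert when too many transaction log files have accumulated, based
# on the gap between the latest log file sequence number and the earliest
# not-yet-purged one. The inputs and threshold are illustrative stand-ins.

def log_accumulation_alert(latest, earliest, threshold=100):
    """Return (gap, alarm): number of unpurged log files and whether to alert."""
    gap = latest - earliest
    return gap, gap > threshold

gap, alarm = log_accumulation_alert(latest=512, earliest=380, threshold=100)
print(gap, alarm)  # 132 True
```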
The following sections describe durability options for logging transactions:
Durability is implemented with a combination of checkpointing and logging.
Checkpoint files: A checkpoint operation writes the current database image to a checkpoint file on disk, which has the effect of making all transactions that committed before the checkpoint durable.
Transaction log files: For transactions that committed after the last checkpoint, TimesTen uses conventional logging techniques to make them durable. As each transaction progresses, it records its database modifications in an in-memory transaction log. At commit time, the relevant portion of the transaction log is flushed to disk. This log flush operation makes that transaction, and all previously-committed transactions, durable.
Control returns to the application after the transaction log data has been durably written to disk. A durably committed transaction is not lost even in the event of a system failure.
To enable guaranteed durability, set the
DurableCommits connection attribute to 1.
Any recovery uses the last checkpoint image together with the transaction log to reconstruct the latest transaction-consistent state of the database.
Note: Committing a transaction durably makes that transaction and all previously committed transactions durable. Those earlier non-durable transactions are then no longer subject to loss in the event of a database failure, just as if they had originally been committed durably.
If most of your transactions commit durably, you may want to set the
LogFlushMethod first connection attribute to 2. This connection attribute configures how TimesTen writes and synchronizes log data to transaction log files. For more information, see "Use durable commits appropriately".
In delayed durability mode, as in guaranteed durability mode, each transaction enters records into the in-memory transaction log as it makes modifications to the database. However, when a transaction commits in delayed durability mode, it does not wait for the transaction log to be posted to disk before returning control to the application. Thus, a non-durable transaction may be lost in the event of a database failure. However, non-durable transactions execute considerably faster than durable transactions. Eventually, transactions are flushed to disk by the database's subdaemon process or when the in-memory log buffer is full.
Applications request delayed durability mode by setting the
DurableCommits connection attribute to 0. This is the default and the recommended option. Connections that use delayed durability can coexist with connections that use guaranteed durability.
Applications that wish to take advantage of the performance benefits of delayed durability mode, but can only tolerate the loss of a small number of transactions, can perform periodic durable commits in a background process. Only those transactions that committed non-durably after the last durable commit are vulnerable to loss in the event of a system failure.
The performance cost for durable commits can be reduced with a group commit of multiple concurrently executing transactions. Many threads executing at the same time, if they are short transactions, may commit at almost the same time. Then, a single disk write commits a group of concurrent transactions durably. Group commit does not improve the response time of any given commit operation, as each durable commit must wait for a disk write to complete, but it can significantly improve the throughput of a series of concurrent transactions.
When durable commits are used frequently, TimesTen can support more connections than there are CPUs, as long as transactions are short, because each connection spends more time waiting to commit than it spends using the CPU. In contrast, applications that perform infrequent durable commits cause each connection to be very CPU-intensive for the TimesTen portion of its workload.
Applications that do not require optimal response time and can tolerate some transaction loss may elect to perform periodic durable commits. This maintains a smaller window of vulnerability to transaction loss as opposed to all transactions being committed non-durably. By committing only every nth transaction durably or performing a durable commit every n seconds, an application can achieve a quicker response time while maintaining a small window of vulnerability to transaction loss. A user can elect to perform a durable commit of a critical transaction, such as one that deals with financial exchange, that cannot be vulnerable to loss.
To enable periodic durable commits, an application does the following:
Connects with the connection attribute DurableCommits=0. This causes transactions to commit non-durably.
When a durable commit is needed, the application can call the
ttDurableCommit built-in procedure before committing. The
ttDurableCommit built-in procedure does not actually commit the transaction; it merely causes the commit to be durable when it occurs.
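The periodic durable commit pattern can be sketched as follows. The DurableCommitPolicy class is a hypothetical illustration of the decision logic only; a real application would call the ttDurableCommit built-in procedure before each commit for which should_commit_durably returns True.

```python
import time

# Sketch: decide when to commit durably, either every nth transaction or when
# the last durable commit is older than max_age seconds. The class name and
# parameters are illustrative assumptions, not part of the TimesTen API.

class DurableCommitPolicy:
    def __init__(self, every_n=100, max_age=5.0):
        self.every_n = every_n
        self.max_age = max_age
        self.count = 0
        self.last_durable = time.monotonic()

    def should_commit_durably(self):
        """Call once per commit; True means this commit should be durable."""
        self.count += 1
        now = time.monotonic()
        if self.count >= self.every_n or now - self.last_durable >= self.max_age:
            self.count = 0
            self.last_durable = now
            return True
        return False

policy = DurableCommitPolicy(every_n=3, max_age=60.0)
print([policy.should_commit_durably() for _ in range(6)])
# [False, False, True, False, False, True]
```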
After a transaction is marked by TimesTen as committed, there is a reclaim phase of the commit during which database resources are reclaimed. This section discusses these reclaim operations, covering the following topics:
TimesTen resource cleanup occurs during the reclaim phase of a transaction commit. Consider a transaction with
DELETE operations, for example. The SQL operation marks the deleted rows as deleted, but the space and resources occupied by these rows are not actually freed until the reclaim phase of the transaction commit.
During reclaim, TimesTen reexamines all the transaction log records starting from the beginning of the transaction to determine the reclaim operations that must be performed, then performs those operations.
To improve performance, a number of transaction log records can be cached to reduce the need to access the transaction log on disk. This cache is referred to as the commit buffer and its size is configurable, as described in the next section, "Configuring the commit buffer for reclaim operations".
Once the reclaim phase has begun, the transaction is considered to be committed and can no longer be rolled back.
If a process is terminated during the reclaim phase, the cleanup operation will complete the reclaim.
The reclaim phase of a large transaction commit results in a large amount of processing and is very resource intensive. (For this reason, smaller transactions are generally recommended.) You can improve performance, however, by increasing the maximum size of the commit buffer, which is the cache of transaction log records used during reclaim operations.
Note: Committing an extremely large transaction (for example, one that affects a million rows) is not recommended because it is very resource intensive. If you do commit an extremely large transaction, do not cancel it, because cancellation can cause the operation to take even longer to complete: if you cancel a transaction before it completes, TimesTen must still perform reclaim operations.
You can use the TimesTen
CommitBufferSizeMax connection attribute to specify the maximum size of the commit buffer, in megabytes. This setting has the scope of your current session. For efficiency, initial memory allocation will be significantly less than the maximum, but will automatically increase as needed in order to fit all the relevant log records into the commit buffer, until the allocation reaches the maximum. The allocation is then reduced back to the initial allocation after each reclaim phase. By default, the maximum is 128 KB with an initial allocation of 16 KB. (Also see "CommitBufferSizeMax" in Oracle TimesTen In-Memory Database Reference.)
Be aware that an increase in the maximum size of the commit buffer may result in a corresponding increase in temporary space consumption. There is no particular limit to the maximum size you can specify, aside from the maximum value of an integer, but exceeding the available temporary space will result in an error.
Note the following related features:
During the course of a session, you can use
ALTER SESSION to change the maximum size of the commit buffer as follows, where
n is the desired maximum, in megabytes. (Also see "ALTER SESSION" in Oracle TimesTen In-Memory Database SQL Reference.)
ALTER SESSION SET COMMIT_BUFFER_SIZE_MAX = n
You can use the
ttCommitBufferStats built-in procedure to gather statistics for your connection to help you tune the commit buffer maximum size. This built-in takes no parameters and returns the total number of commit buffer overflows and the highest amount of memory used by reclaim operations for transaction log records, in bytes. If there are buffer overflows, you may consider increasing the commit buffer maximum size. If there are no overflows and the highest amount of memory usage is well under the commit buffer maximum size, you may consider decreasing the maximum size.
ttCommitBufferStatsReset built-in procedure resets these statistics to 0 (zero). This is useful, for example, if you have set a new value for the commit buffer maximum size and want to restart the statistics.
The system-wide number of commit buffer overflows is also recorded in the TimesTen statistic
txn.commits.buf.overflowed in the
SYS.SYSTEMSTATS table. (Also see "SYS.SYSTEMSTATS" in Oracle TimesTen In-Memory Database System Tables and Views Reference.)
You can check the current setting of
CommitBufferSizeMax by calling the
ttConfiguration built-in procedure.
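As an illustration of how the ttCommitBufferStats numbers described above might drive tuning, the following Python sketch maps the overflow count and peak reclaim memory usage to a suggested direction for CommitBufferSizeMax. The tune_commit_buffer helper and its 50-percent threshold are illustrative assumptions.

```python
# Sketch: interpret ttCommitBufferStats-style numbers to suggest a tuning
# direction for the commit buffer maximum size. The inputs are hypothetical
# stand-ins for the overflow count and peak memory returned by the built-in.

def tune_commit_buffer(overflows, peak_bytes, max_bytes):
    if overflows > 0:
        return "increase"            # buffer overflowed: raise the maximum
    if peak_bytes < max_bytes * 0.5:
        return "decrease"            # peak usage well under the maximum
    return "keep"

print(tune_commit_buffer(overflows=4, peak_bytes=0, max_bytes=128 * 1024))
print(tune_commit_buffer(overflows=0, peak_bytes=20 * 1024, max_bytes=128 * 1024))
```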
If a database becomes invalid or corrupted by a system or process failure, every connection to the database is invalidated. When an application reconnects to a failed database, the subdaemon allocates a new memory segment for the database and recovers its data from the checkpoint and transaction log files.
During recovery, the latest checkpoint file is read into memory. All transactions that have been committed since the last checkpoint and whose log records are on disk are rolled forward from the appropriate transaction log files. Note that such transactions include all transactions that were committed durably as well as all transactions whose log records aged out of the in-memory log buffer. Uncommitted or rolled-back transactions are not recovered. For details on checkpoint and transaction log files, see "Checkpoint operations" and "Transaction logging".
When critical events occur, TimesTen collects the daemon log entries at the moment of the critical event to assist in diagnosing critical failures. Examples of critical events include a database invalidation or a log-based catch up failure. Database invalidation occurs when TimesTen detects that the data in the database is corrupt and therefore unusable.
TimesTen generates critical event files containing a snippet of the daemon log file that was collected at the moment of the critical event. Critical event log files are created and stored in the same directory as the daemon log files (specified by the
supportlog configuration attribute in the
timesten.conf file). Critical event log files are named with a timestamp suffix (for example, ttmesg.log.ts_1568160480.gz), where the
timestamp is the current time on the host.
When a critical event occurs, TimesTen records in the daemon log file that critical event information was collected and the name of the critical event log file.
If a database invalidation occurs, then TimesTen records in the
.inval file that critical event information was collected at the time of the database invalidation, along with the name of the critical event log file. The
.inval file is located in the same directory configured as the support log.
If more than one critical event occurs in quick succession, TimesTen collects only new information for each subsequent event in the next critical event log file.
You can configure the duration of the daemon log collection with the
daemon_log_snippet_interval_in_mins configuration attribute in the
timesten.conf file. As with all changes to the
timesten.conf file, you must restart the main daemon for any change to take effect. See the "TimesTen Instance Configuration File" in the Oracle TimesTen In-Memory Database Reference for details on this configuration attribute.
The occurrence of a critical event and the name of the critical event log file are noted in the daemon log file. The following example shows the message you would see in the daemon log file if two critical events occur in quick succession.
17:08:00.224 Err : : 12532: A critical event has happened. Saving last 600 seconds snippet of daemon log at /timesten/instance/diag/ttmesg.log.ts_1568160480.gz.
17:08:00.643 Err : : 12532: Read 543878 bytes from daemon log file and wrote into the daemon log snippet file located at /timesten/instance/diag/ttmesg.log.ts_1568160480.gz
17:11:12.657 Err : : 12532: A critical event has happened. Saving last 600 seconds snippet of daemon log at /timesten/instance/diag/ttmesg.log.ts_1568160672.gz.
17:11:12.657 Err : : 12532: Part of the daemon log snippet has already been recorded in a prior snippet file ending with timestamp 1568160480.gz
17:11:13.077 Err : : 12532: Read 2678582 bytes from daemon log file and wrote into the daemon log snippet file located at /timesten/instance/diag/ttmesg.log.ts_1568160672.gz
After which, you can evaluate the collected log records in the specified critical event log file.
Any database invalidation event and the name of the critical event log file are noted in the
.inval file. The following example shows the messages you could see in the
.inval file if two critical events occur in quick succession.
2019-09-10 17:08:00.752
Hostname: myhost
Invalidated data store: /timesten/instance/datastores/mydb
Data store created: 2019/09/10 17:04:25
TimesTen Release 18.104.22.168.0 (Linux x86-64, 64-bit dbg) (myhost)
Data store created by Release 22.214.171.124.0 (Linux x86-64, 64-bit dbg) (myhost)
Source: Data store marked invalid by master daemon: grid LBCU pre-condition check failed
A critical event has happened. Saving last 600 seconds snippet of daemon log at /timesten/instance/diag/ttmesg.log.ts_1568160480.gz.
Read 953765 bytes from daemon log file and wrote into the daemon log snippet file located at /timesten/instance/diag/ttmesg.log.ts_1568160480.gz
2019-09-10 17:11:13.905
Hostname: myhost
Invalidated data store: /timesten/instance/datastores/mydb
Data store created: 2019/09/10 17:04:25
TimesTen Release 126.96.36.199.0 (Linux x86-64, 64-bit dbg)
Data store created by Release 188.8.131.52.0 (Linux x86-64, 64-bit dbg)
Source: Data store marked invalid by master daemon: subdaemon managing database exited or died
A critical event has happened. Saving last 600 seconds snippet of daemon log at /timesten/instance/diag/ttmesg.log.ts_1568160673.gz.
Part of the daemon log snippet has already been recorded in a prior snippet file ending with timestamp ts_1568160480.gz
Read 781026 bytes from daemon log file and wrote into the daemon log snippet file located at /timesten/instance/diag/ttmesg.log.ts_1568160673.gz
After which, you can evaluate the collected log records in the specified critical event log file.