Transaction Tuning
You can increase performance by tuning the way your application uses transactions.
Locate Checkpoint and Transaction Log Files on a Separate Physical Device
To improve performance, locate your transaction log files on a separate physical
device from the one on which the checkpoint files are located. The LogDir
connection attribute determines where log files are stored.
See Managing Transaction Log Buffers and Files in this book or LogDir in the Oracle TimesTen In-Memory Database Reference.
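For example, assuming a hypothetical DSN named mydb whose checkpoint files reside under /disk1, a first connection could direct the transaction log files to a second device. This is a minimal ttIsql sketch; the DSN and paths are illustrative, and LogDir can also be set in the DSN definition itself.

    connect "DSN=mydb;LogDir=/disk2/ttlogs";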
Size Transactions Appropriately
Performance impact: Large
Every transaction that modifies the database (through an INSERT, DELETE, or UPDATE statement) incurs a file system write operation when the transaction commits. File system I/O affects response time and may affect throughput, depending on how effective group commit is.
Performance-sensitive applications should avoid unnecessary file system write operations at commit. Use a performance analysis tool to measure the amount of time your application spends in file system write operations (versus CPU time). If there seems to be an excessive amount of I/O, there are two steps you can take to avoid writes at commit:
- Adjust the transaction size, as illustrated in the sketch later in this section.
- Adjust whether file system write operations are performed at transaction commit. See Use Durable Commits Appropriately.
Long transactions perform fewer file system write operations per unit of time than short transactions. However, long transactions also can reduce concurrency. See Transaction Management.
- If only one connection is active on a database, longer transactions could improve performance. However, long transactions may have some disadvantages, such as longer rollback operations.
- If there are multiple connections, there is a trade-off between transaction log I/O delays and locking delays. In this case, transactions are best kept to the natural length, as determined by requirements for atomicity and durability.
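As an illustration of adjusting transaction size, the following ttIsql sketch groups several statements into a single transaction instead of committing each one individually. The orders table and its values are hypothetical.

    autocommit 0;
    INSERT INTO orders VALUES (1001, 'pending');
    INSERT INTO orders VALUES (1002, 'pending');
    INSERT INTO orders VALUES (1003, 'pending');
    COMMIT;   -- one commit, and therefore one log write, for all three inserts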
Use Durable Commits Appropriately
Non-durable commits do everything that a durable commit does except write the transaction log to the file system. Locks are released and cursors are closed, but no file system write operation is performed.
Note:
Some drivers only write data into cache memory or write to the file system some time after the operating system receives the write completion notice. In these cases, a power failure may cause some information that you thought was durably committed to be lost. To avoid this loss of data, configure your file system to write to the recording media before reporting completion or use an uninterruptible power supply.
The advantage of non-durable commits is a potential reduction in response time and increase in throughput. The disadvantage is that some transactions may be lost in the event of system failure. An application can force the transaction log to be persisted to the file system by performing an occasional durable commit or checkpoint, thereby decreasing the amount of potentially lost data. In addition, TimesTen itself periodically flushes the transaction log to the file system when internal buffers fill up, limiting the amount of data that could be lost.
Commits can be made durable or non-durable on a connection-by-connection basis. Applications can force a durable commit of a specific transaction by calling the ttDurableCommit built-in procedure.
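For example, a connection that normally commits non-durably (DurableCommits=0 is assumed here) can make its next commit durable. A minimal ttIsql sketch:

    CALL ttDurableCommit;   -- marks the current transaction to commit durably
    COMMIT;                 -- this commit is written to the file system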
Applications that use durable commits can benefit from using synchronous writes in place of write and flush. To turn on synchronous writes, set the first connection attribute LogFlushMethod=1. The LogFlushMethod attribute controls how TimesTen writes and synchronizes data to transaction log files. Although application performance can be affected by this attribute, transaction durability is not affected. See LogFlushMethod in the Oracle TimesTen In-Memory Database Reference.
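For example, a hypothetical DSN could be opened with durable commits and synchronous log writes enabled; the DSN name is illustrative, and LogFlushMethod takes effect when the database is first loaded:

    connect "DSN=mydb;DurableCommits=1;LogFlushMethod=1";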
The txn.commits.durable column of the SYS.SYSTEMSTATS table indicates the number of transactions that were durably committed.
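You can read this counter, and the related counters mentioned later in this section, with a simple query. This sketch assumes the NAME and VALUE columns of SYS.SYSTEMSTATS and the exact counter names shown:

    SELECT name, value FROM SYS.SYSTEMSTATS
    WHERE name IN ('txn.commits.durable', 'txn.commits.count', 'txn.rollbacks');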
Avoid Manual Checkpoints
Performance impact: Large
Applications should generally let TimesTen perform checkpoints in the background, configured as needed with the ttCkptConfig built-in procedure, rather than issuing manual checkpoints. Background checkpoints are less obtrusive and use non-blocking (or "fuzzy") checkpoints.
Manual checkpoints apply only to TimesTen Classic; they are not supported in TimesTen Scaleout.
If a manual checkpoint is deemed necessary, it is generally better to call ttCkpt to perform a non-blocking (or "fuzzy") checkpoint than to call ttCkptBlocking to perform a blocking checkpoint. Blocking (or "transaction-consistent") checkpoints can have a significant performance impact because they require exclusive access to the database. Non-blocking checkpoints may take longer, but they permit other transactions to operate against the database at the same time and thus impose less overall overhead. You can increase the interval between successive checkpoints (the ckptLogVolume parameter of the ttCkptConfig built-in procedure) by increasing the amount of file system space available for accumulating transaction log files.
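When a manual checkpoint is unavoidable, prefer the non-blocking form, as in the ttIsql sketch below. The ttCkptConfig call uses illustrative values only; verify the parameter order and units in the ttCkptConfig documentation for your release.

    CALL ttCkpt;               -- non-blocking (fuzzy) checkpoint
    CALL ttCkptConfig(0, 128); -- e.g., request a checkpoint after ~128 MB of log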
As the transaction log increases in size (if the interval between checkpoints is large), recovery time increases accordingly. If reducing recovery time after a system crash or application failure is important, tune the ttCkptConfig built-in procedure appropriately. The ckpt.completed column of the SYS.SYSTEMSTATS table indicates how often checkpoints have successfully completed.
See ttCkptConfig in the Oracle TimesTen In-Memory Database Reference.
Turn Off Autocommit Mode
Performance impact: Large
AUTOCOMMIT mode forces a commit after each statement, and is enabled by default. Committing each statement after execution, however, can significantly degrade performance. For this reason, it is generally advisable to disable AUTOCOMMIT, using the appropriate API for your programming environment.
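In ttIsql, for example, autocommit is toggled with a single command; other APIs provide an equivalent setting on the connection. The inventory table below is hypothetical.

    autocommit 0;   -- subsequent statements form explicit transactions
    UPDATE inventory SET qty = qty - 1 WHERE item_id = 42;
    COMMIT;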
The txn.commits.count column of the SYS.SYSTEMSTATS table indicates the number of transaction commits.
If you do not include any explicit commits in your application, the application can hold important resources unnecessarily, including memory and locks. All applications should commit periodically.
Avoid Transaction Rollback
Performance impact: Large
Applications should avoid unnecessary rollback operations. This may mean designing the application to avoid contention and checking application or input data for potential errors in advance, if possible. The txn.rollbacks column of the SYS.SYSTEMSTATS table indicates the number of transactions that were rolled back.
Avoid Large DELETE Statements
Performance impact: Large
Avoid DELETE FROM Statements
If you attempt to delete a large number of rows (100,000 or more) from a table with a single SQL statement, the operation can take a long time.
TimesTen logs each row deleted, in case the operation needs to be rolled back, and writing all of the transaction log records to the file system can be very time-consuming.
Another problem with such a large delete operation is that other database operations will be slowed down while the write-intensive delete transaction is occurring. Deleting millions of rows in a single transaction can take minutes to complete.
Another problem with deleting millions of rows at once occurs when the table is being replicated. Because replication transmits only committed transactions, the replication agent can be slowed down by transmitting a single transaction of several hundred megabytes or even gigabytes. TimesTen replication is optimized for many small transactions and performs poorly when millions of rows are deleted in a single transaction.
See DELETE in the Oracle TimesTen In-Memory Database SQL Reference.
Consider Using the DELETE FIRST Clause
If you want to delete a large number of rows and TRUNCATE TABLE is not appropriate, consider using the DELETE FIRST NumRows clause to delete rows from a table in batches.
The DELETE FIRST NumRows syntax allows you to change "DELETE FROM TableName WHERE ..." into a sequence of "DELETE FIRST 10000 FROM TableName WHERE ..." operations.
Splitting a large DELETE operation into a batch of smaller operations deletes the rows much faster and does not affect the overall concurrency of the system or replication.
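For example, the following ttIsql sketch deletes old rows in batches of 10,000; the orders table and the date predicate are hypothetical. Repeat the two statements until the DELETE reports that zero rows were deleted.

    DELETE FIRST 10000 FROM orders WHERE order_date < DATE '2020-01-01';
    COMMIT;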
For more information about the DELETE FIRST clause, see DELETE in the Oracle TimesTen In-Memory Database SQL Reference.
Increase the Commit Buffer Cache Size
Performance impact: Large
The reclaim phase of a large transaction commit results in a large amount of processing and is very resource intensive. You can improve performance, however, by increasing the maximum size of the commit buffer, which is the cache of transaction log records used during reclaim operations.
You can use the TimesTen CommitBufferSizeMax connection attribute to specify the maximum size of the commit buffer, in megabytes. This setting has the scope of your current session.
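For example, a connection could raise the limit at connect time; the DSN name and the 500 MB value here are illustrative only:

    connect "DSN=mydb;CommitBufferSizeMax=500";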