System and Database Tuning

This section provides tips for tuning your system and databases.

Provide Enough Memory

Performance impact: Large

Configure your system so that the entire database fits in main memory. The use of virtual memory substantially decreases performance.

If a performance monitoring tool shows excessive paging or virtual memory activity, the database or working set does not fit in main memory. You may have to add physical memory or configure the system software so that a large amount of shared memory can be allocated to your process(es).

For Linux systems, you must configure shared memory so that the maximum size of a single shared memory segment is large enough to hold the database's total shared memory segment.

In TimesTen Classic, the entire database resides in a single shared memory segment. In TimesTen Scaleout, you configure the shared memory segment on each host. Set the appropriate operating system kernel parameters for shmmax, shmall, memlock, and HugePages. See Operating System Prerequisites in the Oracle TimesTen In-Memory Database Installation, Migration, and Upgrade Guide and Configuring Linux Kernel Parameters in the Oracle TimesTen In-Memory Database Scaleout User's Guide.

TimesTen includes the ttSize utility to help you estimate the size of tables in your database. For more information, see ttSize in the Oracle TimesTen In-Memory Database Reference.

Avoid Paging of the Memory Used by the Database

Performance impact: Variable

On Linux, the MemoryLock connection attribute specifies whether the real memory used by the database should be locked while the database is loaded into memory.

Set MemoryLock to 4 to avoid paging of the memory used by the database. If MemoryLock is not set to 4, parts of the shared memory segment used by the database may be paged out to the swap area, resulting in poor database performance. See MemoryLock in the Oracle TimesTen In-Memory Database Reference.
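For example, with a direct connection from Java, MemoryLock can be supplied in the connection URL of the first connection, since it is a first-connection attribute. The following is a minimal sketch; the DSN name sampledb is an assumption, and locking memory may also require configuring the memlock kernel limit as described above.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class MemoryLockExample {
    public static void main(String[] args) throws SQLException {
        // MemoryLock takes effect on the connection that loads the
        // database into memory. "sampledb" is a hypothetical DSN.
        String url = "jdbc:timesten:direct:DSN=sampledb;MemoryLock=4";
        try (Connection conn = DriverManager.getConnection(url)) {
            System.out.println("Connected with MemoryLock=4");
        }
    }
}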

Size Your Database Correctly

Performance impact: Variable

When you create a database, you are required to specify a database size. Specifically, you specify sizes for the permanent and temporary memory regions of the database.

For details on how to size the database and shared memory, see Specifying the Memory Region Sizes of a Database in this book and Configure Shmmax and Shmall in the Oracle TimesTen In-Memory Database Installation, Migration, and Upgrade Guide.
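For example, the permanent and temporary region sizes are set with the PermSize and TempSize first-connection attributes. The following minimal JDBC sketch passes them in a direct-connection URL; the DSN name sampledb and the sizes shown are assumptions.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class RegionSizes {
    public static void main(String[] args) throws SQLException {
        // PermSize and TempSize are in MB; the values are illustrative only.
        String url = "jdbc:timesten:direct:DSN=sampledb;PermSize=512;TempSize=128";
        try (Connection conn = DriverManager.getConnection(url)) {
            System.out.println("Connected with PermSize=512, TempSize=128");
        }
    }
}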

Calculate Shared Memory Size for PL/SQL Runtime

Performance impact: Variable

The PL/SQL runtime system uses an area of shared memory to hold metadata about PL/SQL objects in TimesTen and the executable code for PL/SQL program units that are currently running or have recently run. The size of this shared memory area is controlled by the PLSQL_MEMORY_SIZE first connect attribute.

When a new PL/SQL program unit is prepared for execution, it is loaded into shared memory. If shared memory space is not available, recently run program units are discarded from the cache until sufficient space is available. If all of the PL/SQL shared memory is being used by currently running program units, attempts by a new connection to run PL/SQL may result in out-of-space errors, such as ORA-04031. If this happens, increase the value of PLSQL_MEMORY_SIZE.

Even if such out-of-space errors do not occur, PLSQL_MEMORY_SIZE may still be too small. Running a PL/SQL procedure that is cached in shared memory is less expensive in CPU time than running one that is not. In a production application, the goal should be for PLSQL_MEMORY_SIZE to be large enough that frequently run PL/SQL units are always cached. The TimesTen built-in procedure ttPLSQLMemoryStats reports how often this occurs (see the sketch after this list). The PinHitRatio value it returns is a real number between 0 and 1:

  • 1.0: Every PL/SQL execution was satisfied from the cache.

  • 0.0: Every execution required the program unit to be loaded into shared memory.
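The following minimal JDBC sketch connects with a larger PLSQL_MEMORY_SIZE and prints the statistics returned by ttPLSQLMemoryStats, including PinHitRatio; the DSN name and the 256 MB value are assumptions.

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;

public class PlsqlMemoryCheck {
    public static void main(String[] args) throws SQLException {
        // PLSQL_MEMORY_SIZE is a first-connect attribute (value in MB).
        String url = "jdbc:timesten:direct:DSN=sampledb;PLSQL_MEMORY_SIZE=256";
        try (Connection conn = DriverManager.getConnection(url);
             CallableStatement cs = conn.prepareCall("{CALL ttPLSQLMemoryStats()}");
             ResultSet rs = cs.executeQuery()) {
            ResultSetMetaData md = rs.getMetaData();
            while (rs.next()) {
                // Print every column; look for PinHitRatio in the output.
                for (int i = 1; i <= md.getColumnCount(); i++) {
                    System.out.println(md.getColumnName(i) + " = " + rs.getString(i));
                }
            }
        }
    }
}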

The proper value of PLSQL_MEMORY_SIZE for a given application depends on the application. If only a small number of PL/SQL program units repeatedly run, then the size requirements can be small. If the application uses hundreds of PL/SQL program units, memory requirements increase.

Performance increases dramatically as the PinHitRatio rises. In one set of experiments, an application repeatedly ran a large number of PL/SQL stored procedures. With a larger value for PLSQL_MEMORY_SIZE, the application achieved a PinHitRatio of around 0.90, and the average runtime for a PL/SQL procedure was 0.02 seconds. With a smaller value for PLSQL_MEMORY_SIZE, there was more contention for the cache, resulting in a PinHitRatio of 0.66 and an average runtime of 0.26 seconds.

The default value for PLSQL_MEMORY_SIZE is 128 MB on Linux and UNIX systems. This should be sufficient for several hundred PL/SQL program units of reasonable complexity to run. After running a production workload for some time, check the value of PinHitRatio. If it is less than 0.90, consider increasing PLSQL_MEMORY_SIZE.

Set Appropriate Limit on the Number of Open Files

Performance impact: Variable

For multithreaded applications that maintain many concurrent TimesTen database connections, the default number of open files permitted to each process by the operating system may be too low.

See Limits on Number of Open Files in the Oracle TimesTen In-Memory Database Reference.

Configure Log Buffer and Log File Size Parameters

Performance impact: Large

Increasing the values of the LogBufMB and LogFileSize connection attributes can have a substantial positive impact on performance.

LogBufMB is related to the LogFileSize connection attribute: for best performance, set LogFileSize to the same value as LogBufMB or to an integral multiple of it.

Transaction log buffer waits occur when application processes cannot insert transaction data into the transaction log buffer and must stall until transaction log buffer space is freed. The usual reason is that the log flusher thread has not cleared out data fast enough. This may indicate that transaction log buffer space is insufficient, the file system transfer rate is insufficient, writing to the file system is taking too long, or the log flusher is CPU-bound.

  • A small LogBufMB could slow down write transactions when these transactions have to wait for a log flush operation to make space for redo logging. An oversized log buffer has no impact other than consuming extra memory. The value of LOG_BUFFER_WAITS shows the number of times a thread was delayed while trying to insert a transaction log record into the log buffer because the log buffer was full. Generally, if this value is increasing, it indicates that the log buffer is too small.

  • The LogFileSize attribute specifies the maximum size of each transaction log file in megabytes. If LogFileSize is too small, TimesTen has to create multiple log files within a single transaction log flush operation. The overhead of file creation often leads to LOG_BUFFER_WAITS events, which can significantly impact performance.

You can observe the LOG_BUFFER_WAITS metric (using either the SYS.MONITOR table or the ttIsql monitor command) to see whether it increases while running a workload. If it does, increase the value of LogBufMB to improve database performance.
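For example, you can sample LOG_BUFFER_WAITS before and after a workload run and compare the values. A minimal JDBC sketch, assuming a DSN named sampledb:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class LogBufferWaitsCheck {
    // Reads the current LOG_BUFFER_WAITS value from SYS.MONITOR.
    static long logBufferWaits(Connection conn) throws SQLException {
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT log_buffer_waits FROM sys.monitor")) {
            rs.next();
            return rs.getLong(1);
        }
    }

    public static void main(String[] args) throws SQLException {
        String url = "jdbc:timesten:direct:DSN=sampledb"; // hypothetical DSN
        try (Connection conn = DriverManager.getConnection(url)) {
            long before = logBufferWaits(conn);
            // ... run the workload here ...
            long after = logBufferWaits(conn);
            System.out.println("Log buffer waits during run: " + (after - before));
        }
    }
}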

The trade-off of increasing LogBufMB is that more transactions are buffered in memory and may be lost if the process crashes. If durable commits are enabled (the DurableCommits connection attribute), increasing the value of LogBufMB does not improve performance.

If log buffer waits still persist after increasing the LogBufMB and LogFileSize values, then review the other possible issues mentioned above.

The LogBufParallelism connection attribute specifies the number of transaction log buffer strands. This attribute improves transaction throughput when multiple connections run write transactions concurrently. Set it to the lesser of the number of CPU cores on the system and the number of concurrent connections that run write transactions.
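A minimal sketch of that sizing rule, assuming a DSN named sampledb and a known number of concurrent writer connections:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class LogStrands {
    public static void main(String[] args) throws SQLException {
        int writerConnections = 8; // expected concurrent write connections (assumption)
        int cores = Runtime.getRuntime().availableProcessors();
        // Lesser of CPU cores and concurrent writers, per the guideline above.
        int strands = Math.min(cores, writerConnections);
        // LogBufParallelism is a first-connection attribute.
        String url = "jdbc:timesten:direct:DSN=sampledb;LogBufParallelism=" + strands;
        try (Connection conn = DriverManager.getConnection(url)) {
            System.out.println("Connected with LogBufParallelism=" + strands);
        }
    }
}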

See LogBufMB, LogFileSize, and LogBufParallelism in the Oracle TimesTen In-Memory Database Reference.

Avoid Connection Overhead for TimesTen Classic Databases

Performance impact: Large

An idle database is a database with no connections. By default, TimesTen loads an idle database into memory when a first connection is made to it. When the final application disconnects from a database, there is a delay while the database is written to the file system.

If applications are continually connecting to and disconnecting from a database, the database may be loaded into and unloaded from memory continuously, resulting in excessive file system I/O and poor performance. Similarly, if you are using a very large database, you may want to preload the database into memory before any applications attempt to use it.

To avoid the latency of loading a database into memory, you can change the RAM policy of a TimesTen Classic database to enable databases to always remain in memory. The trade-off is that since the TimesTen Classic database is never unloaded from memory, a final disconnect checkpoint never occurs. So, applications should configure frequent checkpoints with the ttCkptConfig built-in procedure. See ttCkptConfig in the Oracle TimesTen In-Memory Database Reference.
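For example, a connection can request more frequent background checkpoints through ttCkptConfig. This is a sketch only: the argument values are assumptions, and the exact parameters of ttCkptConfig are described in the Reference.

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class CheckpointConfig {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:timesten:direct:DSN=sampledb"; // hypothetical DSN
        try (Connection conn = DriverManager.getConnection(url);
             // The arguments here (an assumed checkpoint frequency in seconds
             // and a log volume in MB that triggers a checkpoint) are illustrative.
             CallableStatement cs = conn.prepareCall("{CALL ttCkptConfig(300, 32)}")) {
            cs.execute();
        }
    }
}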

Alternatively, you can specify that the TimesTen Classic database remain in memory for a specified interval of time and accept new connections. If no new connections occur in this interval, TimesTen Classic unloads the database from memory and checkpoints it. You can also specify a setting to enable a system administrator to load and unload the database from memory manually.

To change the RAM policy of a TimesTen Classic database, use the ttAdmin utility. For more details on the RAM policy and the ttAdmin utility, see Specifying a RAM Policy in this book and the ttAdmin section in the Oracle TimesTen In-Memory Database Reference.

Load the Database Into RAM When Duplicating

Performance impact: Large

Load a TimesTen Classic database into memory with the ttAdmin -ramLoad command before you duplicate this database with the ttRepAdmin -duplicate utility. This places the database in memory, available for connections, instead of unloading it with a blocking checkpoint.

See Avoid Connection Overhead for TimesTen Classic Databases in this book and the ttAdmin and ttRepAdmin sections in the Oracle TimesTen In-Memory Database Reference.

Prevent Reloading of the Database After Automatic Recovery Fails

Performance impact: Large

When TimesTen Classic recovers after a database is invalidated, a new database is reloaded. However, the invalidated database is only unloaded after all connections to this database are closed. Thus, both the invalidated database and the recovered database could exist in RAM at the same time.

Reloading a large database into memory when an invalidated database still exists in memory can fill up available RAM. See Preventing an Automatic Reload of the Database After Failure on how to stop automatic reloading of the database.

Reduce Contention

Performance impact: Large

Database contention can substantially impede application performance.

If your application suffers a decrease in performance because of lock contention and a lack of concurrency, reducing contention is an important first step in improving performance.

The lock.locks_granted.immediate, lock.locks_granted.wait, and lock.timeouts statistics in the SYS.SYSTEMSTATS table provide some information on lock contention (see the query sketch after this list):

  • lock.locks_granted.immediate counts how often a lock was available and was immediately granted at lock request time.

  • lock.locks_granted.wait counts how often a lock request was granted after the requestor had to wait for the lock to become available.

  • lock.timeouts counts how often a lock request was not granted because the requestor did not want to wait for the lock to become available.
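A minimal JDBC sketch that reads these statistics, assuming a DSN named sampledb and that SYS.SYSTEMSTATS exposes NAME and VALUE columns:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class LockContentionCheck {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:timesten:direct:DSN=sampledb"; // hypothetical DSN
        String sql = "SELECT name, value FROM sys.systemstats "
                   + "WHERE name LIKE 'lock.locks_granted.%' OR name = 'lock.timeouts'";
        try (Connection conn = DriverManager.getConnection(url);
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(sql)) {
            while (rs.next()) {
                System.out.println(rs.getString(1) + " = " + rs.getLong(2));
            }
        }
    }
}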

If limited concurrency results in a lack of throughput, or if response time is an issue, an application can reduce the number of threads or processes making API (JDBC, ODBC, or OCI) calls. Using fewer threads requires some queuing and scheduling on the part of the application, which has to trade off some CPU time for a decrease in contention and wait time. The result is higher performance for low-concurrency applications that spend the bulk of their time in the database.

Consider Special Options for Maintenance

Performance impact: Medium

During special operations such as initial loading, you can choose different options than you would use during standard operations.

In particular, consider using database-level locking for bulk loading, for example with ttBulkCp or ttMigrate. This choice can improve loading performance by a factor of two.

An alternative to database-level locking is to exploit concurrency: run multiple copies of ttBulkCp -i with the -notblLock option. For optimal batching, also add the -xp 256 option.
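A minimal sketch that launches parallel loads from Java; the DSN, table names, and data file names are assumptions, while the -i, -notblLock, and -xp 256 options are as described above.

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class ParallelBulkLoad {
    public static void main(String[] args) throws IOException, InterruptedException {
        // One ttBulkCp process per table; table and file names are hypothetical.
        String[][] loads = { {"sales", "sales.dump"}, {"orders", "orders.dump"} };
        List<Process> procs = new ArrayList<>();
        for (String[] load : loads) {
            procs.add(new ProcessBuilder(
                "ttBulkCp", "-i", "-notblLock", "-xp", "256",
                "sampledb", load[0], load[1]).inheritIO().start());
        }
        for (Process p : procs) {
            p.waitFor(); // wait for all loads to complete
        }
    }
}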

See ttBulkCp and ttMigrate in the Oracle TimesTen In-Memory Database Reference.

Check Your Driver

Performance impact: Large

The TimesTen direct driver: If you only use direct connections and have no need to use both direct and client-server connections from the same application process, then link directly with the TimesTen direct driver for maximum performance.

The TimesTen client driver on Windows: Use the TimesTen client driver in your DSN when using the Windows ODBC driver manager. If you do not need the functionality of the Windows ODBC driver manager, link directly with the TimesTen client driver for improved performance.

The TimesTen data manager driver on Linux and UNIX platforms: There are two versions of the TimesTen data manager driver for Linux and UNIX platforms: a debug version and a production version. Unless you are debugging, use the production version. The debug library can be significantly slower.

The TimesTen driver manager on Linux and UNIX platforms: If you need to use direct and client/server connections from the same application process to a TimesTen database, then you should use the TimesTen driver manager (as opposed to a generic ODBC driver manager) to minimize the performance impact. If you do not want to use the TimesTen driver manager, you can use an open source or commercial driver manager, such as unixODBC or iODBC. However, generic driver managers could result in significant performance penalties and functionality limitations.
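For JDBC, the choice of driver is visible in the connection URL. A minimal sketch, assuming DSNs named sampledb (direct) and sampledbCS (client/server):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class DriverChoice {
    public static void main(String[] args) throws SQLException {
        // Direct driver: the application links with the database libraries
        // for the fastest path.
        String directUrl = "jdbc:timesten:direct:DSN=sampledb";
        // Client driver: connects through the TimesTen client/server protocol.
        String clientUrl = "jdbc:timesten:client:DSN=sampledbCS";
        try (Connection conn = DriverManager.getConnection(directUrl)) {
            System.out.println("Connected through the direct driver");
        }
        try (Connection conn = DriverManager.getConnection(clientUrl)) {
            System.out.println("Connected through the client driver");
        }
    }
}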

See Creating a DSN on Linux and UNIX for TimesTen Classic and Connecting to TimesTen with ODBC and JDBC Drivers.

Enable Tracing Only as Needed

Performance impact: Large

Both ODBC and JDBC provide a trace facility to help debug applications. For performance runs, make sure that tracing is disabled except when debugging applications.

To turn JDBC tracing on, use the DriverManager.setLogStream method:

DriverManager.setLogStream(new PrintStream(System.out, true));

By default, tracing is off. You must call this method before you load a JDBC driver. Once tracing is turned on, you cannot turn it off for the rest of the application's execution.

Use Metrics to Evaluate Performance

Performance impact: Variable

You can use the ttStats utility to collect and display database metrics.

The ttStats utility can perform the following functions:

  • Monitor and display database performance metrics in real time, calculating rates of change during each preceding interval.

  • Collect and store snapshots of metrics in the database, then produce reports with values and rates of change from specified pairs of snapshots. (These functions are performed through calls to the TT_STATS PL/SQL package.)

The ttStats utility gathers metrics from TimesTen system tables, views, and built-in procedures. In reports, this includes information such as a summary of memory usage, connections, and load profile, followed by metrics (as applicable) for SQL statements, transactions, PL/SQL memory, replication, logs and log holds, checkpoints, cache groups, latches, locks, XLA, and TimesTen connection attributes.

Call the ttStatsConfig built-in procedure to modify statistics collection parameters that affect the ttStats utility. See ttStatsConfig in the Oracle TimesTen In-Memory Database Reference.

See ttStats in the Oracle TimesTen In-Memory Database Reference. See TT_STATS in the Oracle TimesTen In-Memory Database PL/SQL Packages Reference.

Migrate Data With Character Set Conversions

Performance impact: Variable

If character set conversion is requested when migrating databases, performance may be slower than if character set conversion is not requested.

Use TimesTen Native Integer Data Types If Appropriate

Performance impact: Variable

Declare the column types of a table with the TimesTen native integer data types (TT_TINYINT, TT_SMALLINT, TT_INTEGER, or TT_BIGINT) instead of the NUMBER data type to save space and improve the performance of calculations.

TimesTen internally maps the SMALLINT, INTEGER, INT, DECIMAL, NUMERIC, and NUMBER data types to the NUMBER data type. When you define the column types of a table, ensure that you choose the appropriate native integer data type for your column to avoid overflow. For more information on native integer data types and the NUMBER data type, see Numeric Data Types in the Oracle TimesTen In-Memory Database SQL Reference.
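For example, the following sketch creates a table that uses native integer types where the value range is known, keeping NUMBER only where decimal precision is required; the table and column names are assumptions.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class NativeIntTypes {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:timesten:direct:DSN=sampledb"; // hypothetical DSN
        try (Connection conn = DriverManager.getConnection(url);
             Statement st = conn.createStatement()) {
            // TT_INTEGER and TT_SMALLINT are fixed-size native integers.
            st.executeUpdate(
                "CREATE TABLE readings ("
              + "  id        TT_INTEGER NOT NULL PRIMARY KEY,"
              + "  sensor_id TT_SMALLINT,"
              + "  reading   NUMBER(10,2))");
        }
    }
}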

If you use a TimesTen native integer data type when you create a TimesTen cache group that reads and writes to an Oracle database, ensure that the data type of the columns of the TimesTen cache group and the Oracle database are compatible. For more information about the mapping between Oracle Database and TimesTen data types, see Mappings Between Oracle Database and TimesTen Data Types in the Oracle TimesTen In-Memory Database Cache Guide.

Configure the Checkpoint Files and Transaction Log Files to be on a Different Physical Device

Performance impact: Large

For best performance, TimesTen recommends that you use the LogDir connection attribute to place the transaction log files on a different physical device from the checkpoint files. This is important if your application needs to have consistent response times and throughput.

TimesTen places the checkpoint files in the location specified by the DataStore connection attribute, which is the full path name of the database and the file name prefix. If the transaction log files and checkpoint files are on different physical devices, I/O operations for checkpoints do not block I/O operations to the transaction log and vice versa. For more information on transaction logging, see Transaction Logging.

See LogDir and DataStore in the Oracle TimesTen In-Memory Database Reference.
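LogDir and DataStore are data store attributes that are normally set in the DSN definition. The following sketch shows the idea in a direct-connection string that creates a database; all path names, the size, and the character set are assumptions, with /disk1 and /disk2 standing for two different physical devices.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class SeparateLogDevice {
    public static void main(String[] args) throws SQLException {
        // Checkpoint files go under DataStore; transaction log files under LogDir.
        String url = "jdbc:timesten:direct:"
                   + "DataStore=/disk1/tt/sampledb;"
                   + "LogDir=/disk2/tt_logs;"
                   + "PermSize=512;"
                   + "DatabaseCharacterSet=AL32UTF8";
        try (Connection conn = DriverManager.getConnection(url)) {
            System.out.println("Checkpoints and logs are on separate devices");
        }
    }
}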

Note:

If the database is already loaded into RAM and the transaction log files and checkpoint files for your database are on the same physical device, TimesTen writes a message to the daemon log file.