For best performance, all HADB processes (clu_xxx_srv) must fit in physical memory. They should not be paged or swapped. The same applies for shared memory segments in use.
You can configure the size of some of the shared memory segments. If these segments are too small, performance suffers and user transactions are delayed or even aborted. If they are too large, physical memory is wasted.
You can configure the following parameters:
The HADB stores data on data devices, which are allocated on disks. The data must be in the main memory before it can be processed. The HADB node allocates a portion of shared memory for this purpose. If the allocated database buffer is small compared to the data being processed, then disk I/O will waste significant processing capacity. In a system with write-intensive operations (for example, frequently updated session states), the database buffer must be big enough that the processing capacity used for disk I/O does not hamper request processing.
The database buffer is similar to a cache in a file system. For good performance, the cache must be used as much as possible, so there is no need to wait for a disk read operation. The best performance is when the entire database contents fits in the database buffer. However, in most cases, this is not feasible. Aim to have the “working set” of the client applications in the buffer.
Also monitor disk I/O. If HADB performs many disk read operations, the database is low on buffer space. The database buffer is partitioned into blocks of 16 KB, the same block size used on disk. HADB schedules multiple blocks for reading and writing in one I/O operation.
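The relationship between buffer size and block count is simple arithmetic. The following sketch is illustrative only (it is not part of the product); the 32 MB figure is an example value matching the sample output later in this section, not a recommendation:

```python
# Number of 16 KB blocks that fit in a database buffer of a given size.
BLOCK_SIZE_KB = 16

def blocks_in_buffer(buffer_mb: int) -> int:
    """Return how many 16 KB blocks a buffer of buffer_mb megabytes holds."""
    return buffer_mb * 1024 // BLOCK_SIZE_KB

# Example: a 32 MB buffer (hypothetical size) holds 2048 blocks.
print(blocks_in_buffer(32))
```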
Use the hadbm deviceinfo command to monitor disk use. For example, hadbm deviceinfo --details will produce output similar to this:
NodeNo  TotalSize  FreeSize  Usage
0       512        504       1%
1       512        504       1%
The columns in the output are:
TotalSize: size of device in MB.
FreeSize: free size in MB.
Usage: percent used.
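The Usage column can be derived from the other two. A minimal sketch (the figures mirror the sample output above; the helper name is ours, not part of hadbm):

```python
def usage_percent(total_mb: int, free_mb: int) -> int:
    """Percent of the device in use, truncated to a whole percent
    as in the hadbm deviceinfo sample output."""
    return (total_mb - free_mb) * 100 // total_mb

# From the sample output: 512 MB total, 504 MB free -> 1% used.
print(usage_percent(512, 504))
```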
Use the hadbm resourceinfo command to monitor resource usage, for example the following command displays data buffer pool information:
%hadbm resourceinfo --databuf
NodeNo  Avail  Free  Access     Misses   Copy-on-write
0       32     0     205910260  8342738  400330
1       32     0     218908192  8642222  403466
The columns in the output are:
Avail: Size of buffer, in MB.
Free: Free size, when the data volume is larger than the buffer. (The entire buffer is used at all times.)
Access: Number of times blocks have been accessed in the buffer.
Misses: Number of block requests that “missed the cache” (the user had to wait for a disk read).
Copy-on-write: Number of times the block has been modified while it is being written to disk.
For a well-tuned system, the number of misses (and hence the number of disk reads) must be very small compared to the number of accesses. The example numbers above show a miss rate of about 4% (roughly 200 million accesses and 8 million misses). Whether these figures are acceptable depends on the client application requirements.
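The miss rate above is simply misses divided by accesses. A small sketch using the node 0 figures from the sample output (the helper is illustrative, not part of hadbm):

```python
def miss_rate(accesses: int, misses: int) -> float:
    """Fraction of buffer accesses that required a disk read."""
    return misses / accesses

# Node 0 figures from the sample output: roughly 4%, as stated in the text.
rate = miss_rate(205910260, 8342738)
print(f"{rate:.1%}")
```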
To change the size of the database buffer, use the following command:
hadbm set DataBufferPoolSize
This command restarts all the nodes, one by one, for the change to take effect. For more information on using this command, see Configuring HADB in Sun Java System Application Server Enterprise Edition 8.2 High Availability Administration Guide.
Before it executes them, HADB logs all operations that modify the database, such as inserting, deleting, and updating data. It places log records describing the operations in a portion of shared memory referred to as the (tuple) log buffer. HADB uses these log records to undo operations when transactions are aborted, to recover from a node crash, and to replicate between mirror nodes.
The log records remain in the buffer until they have been processed locally and shipped to the mirror node, and until the outcome (commit or abort) of the transaction is certain. If the HADB node runs low on tuple log space, user transactions are delayed, and possibly timed out.
Begin with the default value. Look for HIGH LOAD informational messages in the history files. The relevant messages contain the term tuple log (or simply log) and a description of the internal resource contention that occurred.
Under normal operation, the log is reported as 70 to 80% full, because space reclamation is “lazy”: HADB keeps as much data in the log as possible, in order to recover from a possible node crash.
Use the following command to display information on log buffer size and use:
hadbm resourceinfo --logbuf
For example, output might look like this:
Node No.  Avail  Free Size
0         44     42
1         44     42
The columns in the output are:
Node No.: The node number.
Avail: Size of buffer, in megabytes.
Free Size: Free size, in MB, when the data volume is larger than the buffer. The entire buffer is used at all times.
Change the size of the log buffer with the following command:
hadbm set LogbufferSize
This command restarts all the nodes, one by one, for the change to take effect. For more information on using this command, see Configuring HADB in Sun Java System Application Server Enterprise Edition 8.2 High Availability Administration Guide.
The node internal log (nilog) contains information about physical (as opposed to logical, row-level) operations at the local node. For example, it records disk block allocations and deallocations, and B-tree block splits. This buffer is maintained in shared memory, and is also checkpointed to disk (a separate log device) at regular intervals. The page size of this buffer, and of the associated data device, is 4096 bytes.
Large BLOBs necessarily allocate many disk blocks, and thus create a high load on the node internal log. This is normally not a problem, since each entry in the nilog is small.
Begin with the default value. Look out for HIGH LOAD informational messages in the history files. The relevant messages contain nilog, and a description of the internal resource contention that occurred.
Use the following command to display node internal log buffer information:
hadbm resourceinfo --nilogbuf
For example, the output might look something like this:
Node No.  Avail  Free Size
0         11     11
1         11     11
To change the size of the nilog buffer, use the following command:
hadbm set InternalLogbufferSize
This command restarts all the nodes, one by one, for the change to take effect. For more information on using this command, see Configuring HADB in Sun Java System Application Server Enterprise Edition 8.2 High Availability Administration Guide.
If the size of the nilog buffer is changed, the associated log device (located in the same directory as the data devices) also changes. The size of the internal log buffer must equal the size of the internal log device; the hadbm set InternalLogbufferSize command ensures this. It stops a node, increases the InternalLogbufferSize, reinitializes the internal log device, and brings up the node. This sequence is performed on all nodes.
Each row-level operation requires a lock in the database. Locks are held until a transaction commits or rolls back. Locks are set at the row (BLOB chunk) level, which means that a large session state requires many locks. Locks are needed for both primary and mirror node operations. Hence, a BLOB operation allocates the same number of locks on two HADB nodes.
When a table is refragmented, HADB needs extra lock resources. Thus, ordinary user transactions can acquire only half of the allocated locks.
If the HADB node has no lock objects available, errors are written to the log file. For more information, see Chapter 14, HADB Error Messages, in Sun Java System Application Server Enterprise Edition 8.2 Error Message Reference.
To calculate the number of locks needed, estimate the following parameters:
Number of concurrent users that request session data to be stored in HADB (one session record per user)
Maximum size of the BLOB session
Persistence scope (maximum session data size for the session and modified session scopes, or maximum number of attributes for the modified attribute scope). The modified attribute scope requires setAttribute() to be called every time the session data is modified.
If:
x is the maximum number of concurrent users, that is, x session data records are present in the HADB, and
y is the session size (for session/modified session) or attribute size (for modified attribute),
Then the number of records written to HADB is:
xy/7000 + 2x
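The record-count formula above can be checked numerically. In the sketch below, the user count and session size are made-up example values, not recommendations:

```python
def records_written(x: int, y: int) -> float:
    """Number of records written to HADB for x concurrent users with
    session (or attribute) size y bytes: x*y/7000 + 2*x."""
    return x * y / 7000 + 2 * x

# Hypothetical example: 1000 concurrent users, 7000-byte sessions
# gives 1000 + 2000 = 3000 records.
print(records_written(1000, 7000))
```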
Record operations such as insert, delete, update, and read use one lock per record.
Locks are held for both primary records and hot-standby records. Hence, for insert, update and delete operations a transaction will need twice as many locks as the number of records. Read operations need locks only on the primary records. During refragmentation and creation of secondary indices, log records for the involved table are also sent to the fragment replicas being created. In that case, a transaction needs four times as many locks as the number of involved records. (Assuming all queries are for the affected table.)
If refragmentation is performed, the number of locks to be configured is:
Nlocks = 4x (y/7000 + 2) = 2xy/3500 + 8x
Otherwise, the number of locks to be configured is:
Nlocks = 2x (y/7000 + 2) = xy/3500 + 4x
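The two lock formulas differ only in the multiplier (4 during refragmentation, 2 otherwise, applied to the record count). A small sketch with made-up example figures:

```python
def locks_needed(x: int, y: int, refragmentation: bool = False) -> float:
    """Locks to configure for x concurrent users with session/attribute
    size y bytes. Each involved record needs a lock on both the primary
    and the mirror node (x2); refragmentation doubles this again (x4)."""
    records_per_user = y / 7000 + 2
    factor = 4 if refragmentation else 2
    return factor * x * records_per_user

# Hypothetical figures: 1000 users, 7000-byte sessions.
print(locks_needed(1000, 7000))                        # 2 * 1000 * 3 locks
print(locks_needed(1000, 7000, refragmentation=True))  # 4 * 1000 * 3 locks
```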
Start with the default value. Look for exceptions with the indicated error codes in the Application Server log files. Remember that under normal operations (no ongoing refragmentation) only half of the locks might be acquired by the client application.
To get information on allocated locks and locks in use, use the following command:
hadbm resourceinfo --locks
For example, the output displayed by this command might look something like this:
Node No.  Avail  Free   Waits
0         50000  50000  na
1         50000  50000  na
Avail: Number of locks available.
Free: Number of locks currently free (not in use).
Waits: Number of transactions that have waited for a lock; “na” (not applicable) if all locks are available.
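Because ordinary user transactions can acquire only half of the configured locks while a refragmentation is in progress, a quick headroom check can be sketched as follows (this helper is hypothetical, not part of hadbm):

```python
def locks_sufficient(configured: int, needed_normal: int) -> bool:
    """True if the configured NumberOfLocks leaves enough headroom:
    during refragmentation, user transactions may acquire only half
    of the configured locks."""
    return configured // 2 >= needed_normal

# Hypothetical check against the 50000 locks in the sample output.
print(locks_sufficient(50000, 6000))
print(locks_sufficient(50000, 30000))
```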
To change the number of locks, use the following command:
hadbm set NumberOfLocks
This command restarts all the nodes, one by one, for the change to take effect. For more information on using this command, see Configuring HADB in Sun Java System Application Server Enterprise Edition 8.2 High Availability Administration Guide.
This section describes some of the timeout values that affect performance.
These values govern how much time the server waits for a connection from the pool before it times out. In most cases, the default values work well. For detailed tuning information, see Tuning JDBC Connection Pools.
Some values that may affect performance are:
response-timeout-in-seconds - The time the load balancer plug-in waits for a response before it declares an instance dead and fails over to the next instance in the cluster. Make this value large enough to accommodate the maximum latency of a request to the server instance under the worst (high-load) conditions.
health checker: interval-in-seconds - Determines how frequently the load balancer pings an instance to see whether it is healthy. The default value is 30 seconds. If response-timeout-in-seconds is optimally tuned and the server does not have too much traffic, the default value works well.
health checker: timeout-in-seconds - How long the load balancer waits for a response to a health-check ping before considering the instance unhealthy. The default value is 100 seconds.
The combination of the health checker’s interval-in-seconds and timeout-in-seconds values determine how much additional traffic goes from the load balancer plug-in to the server instances.
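Under the simplifying assumption that the load balancer sends one health-check ping per instance per interval, the additional traffic can be estimated as follows (the helper and figures are illustrative only):

```python
def health_check_pings_per_minute(num_instances: int,
                                  interval_seconds: int) -> float:
    """Extra pings per minute the load balancer plug-in generates,
    assuming one ping per instance per interval (simplified model)."""
    return num_instances * 60 / interval_seconds

# Hypothetical example: 4 instances with the default 30-second interval.
print(health_check_pings_per_minute(4, 30))
```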
For more information on configuring the load balancer plug-in, see Configuring the Load Balancer in Sun Java System Application Server Enterprise Edition 8.2 High Availability Administration Guide.
The sql_client timeout value may affect performance.