Before running EIM, consult an experienced DBA to evaluate the amount of space necessary to store the data to be inserted into the EIM tables and the Siebel base tables. On Oracle, for example, you can then verify that the extent sizes of those tables and indexes are defined accordingly.
Avoiding excessive extensions and keeping the number of extents small for tables and indexes is important, because extent allocation and deallocation activities (triggered by commands such as TRUNCATE or DROP) can be demanding on CPU resources.
When purging data from an EIM table, use the TRUNCATE command rather than the DELETE command. TRUNCATE releases the data blocks and resets the high-water mark, while DELETE does not, which causes additional empty blocks to be read during subsequent processing. Also, be sure to drop and re-create the indexes on the EIM table to release their empty blocks.
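As a sketch of the purge sequence described above, on Oracle it might look like the following. The EIM_ACCOUNT table is a standard Siebel EIM table, but the index name and column list here are illustrative assumptions, not taken from the original text:

```sql
-- Release the data blocks and reset the high-water mark
-- (DELETE would leave the high-water mark in place).
TRUNCATE TABLE EIM_ACCOUNT;

-- Drop and re-create the indexes to release their empty blocks.
-- Index name and column list are illustrative only.
DROP INDEX EIM_ACCOUNT_U1;
CREATE UNIQUE INDEX EIM_ACCOUNT_U1
    ON EIM_ACCOUNT (IF_ROW_BATCH_NUM, ROW_ID);
```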
Multiple EIM processes can be run against the same EIM table, provided they all use different batches or batch ranges. The concern is possible contention for locks on common objects. To run multiple jobs in parallel against the same EIM table, make sure the FREELISTS storage parameter is set appropriately for the tables and indexes used in the EIM processing.
This includes the EIM tables and their indexes, as well as the base tables and their indexes. The value of this parameter specifies the number of block IDs stored in memory that are available for record insertion. Generally, set it to at least half the intended number of parallel jobs to be run against the same EIM table (for example, a FREELISTS setting of 10 should permit up to 20 parallel jobs against the same EIM table).
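A sketch of setting this through Oracle's storage clause follows. The FREELISTS value of 5 applies the guideline above for up to ten parallel jobs; the table and index names are illustrative, and depending on the Oracle release, FREELISTS may be settable only at table creation time rather than with ALTER:

```sql
-- Allow multiple concurrent inserts without freelist contention.
-- FREELISTS 5 assumes up to 10 parallel EIM jobs against this
-- table; object names are illustrative only.
ALTER TABLE EIM_ACCOUNT STORAGE (FREELISTS 5);
ALTER INDEX EIM_ACCOUNT_U1 STORAGE (FREELISTS 5);
```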
Another way to improve performance is to cache small, frequently accessed tables. The BUFFER_POOL_KEEP initialization parameter determines the portion of the buffer cache that is not flushed by the LRU algorithm. This lets you keep certain tables in memory, which improves performance when accessing them, and ensures that after a table is read for the first time it stays in memory. Otherwise, the table may be pushed out of memory and require disk access the next time it is used.
Be aware that the amount of memory allocated to the keep area is subtracted from the overall buffer cache memory (defined by DB_BLOCK_BUFFERS). A good candidate for this type of operation is the S_LST_OF_VAL table. The syntax for keeping a table in the cache is as follows:
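The original example is not reproduced here; as a sketch, Oracle's storage clause for assigning a table to the KEEP buffer pool is BUFFER_POOL KEEP, so for the S_LST_OF_VAL table mentioned above it might be:

```sql
-- Assign the table to the KEEP buffer pool so its blocks are not
-- aged out by the LRU algorithm. The KEEP pool itself is sized by
-- the BUFFER_POOL_KEEP initialization parameter, and that memory
-- is subtracted from the overall buffer cache.
ALTER TABLE S_LST_OF_VAL STORAGE (BUFFER_POOL KEEP);
```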
When an UPDATE statement contains 255 or more NVL functions, Oracle updates the wrong data because of a hash key overflow. This is an Oracle-specific issue. To avoid the problem, use fewer than 255 NVL functions in an UPDATE statement.
|Performance Tuning Guide|