Oracle Enterprise Manager Event Test Reference Manual
Release 9.0.1

Part Number A89872-02

2 Oracle Database Event Tests

The Oracle Enterprise Manager Advanced Event Tests for the Oracle Database are divided into a series of categories that enable you to find the event test you want to register.

Summary of Database Event Tests

The Oracle Advanced Event Tests for the database service type are grouped into the following categories:

  • Audit Management

  • Fault Management

  • Performance Management

  • Resource Management

The full descriptions of the individual event tests follow the tables. The event tests are in alphabetical order.

Table 2-1 Database Audit Management Event Test  
Event Test  Description 

User Audit 

This test monitors specified database user connections. An alert is displayed when a connection by the database user specified in the User name argument is detected. 

Table 2-2 Database Fault Management Event Tests  
Event Test  Description 

Alert 

This event test signifies that the database being monitored has generated errors to the ALERT log file since the last sample time. The ALERT log file is a special trace file containing a chronological log of messages and errors. An alert event is triggered when Oracle Exception (ORA-006xx), deadlock detected (ORA-00060), or data block corrupted (ORA-01578) messages are written to the ALERT log file. A warning is displayed when other ORA messages are written to the ALERT log file. 

Archiver Hung 

This event test signifies that the archiver of the database being monitored has been temporarily suspended since the last sample time.

If the database is running in ARCHIVELOG mode, an alert is displayed when archiving hung (ORA-00257) messages are written to the ALERT file. The ALERT file is a special trace file containing a chronological log of messages and errors. If the database is not running in ARCHIVELOG mode, this test will not register. 

Broken Jobs 

The Oracle server job queue is a database table that stores information about local jobs, such as the PL/SQL call to execute for a job and when to run a job. Database replication is also managed through Oracle's job queue mechanism, which uses jobs to push deferred transactions to remote master sites, to purge applied transactions from the deferred transaction queue, and to refresh snapshot refresh groups.

A job can be broken in two ways:

  • Oracle has failed to successfully execute the job after sixteen attempts.

  • The job has been explicitly marked as broken by using the procedure DBMS_JOB.BROKEN.

This event test checks for broken DBMS jobs. An alert is generated if the number of broken jobs exceeds the value specified by the threshold argument. 
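
The exact query used by the event test is internal to Enterprise Manager, but a comparable count can be taken manually from the DBA_JOBS view, which flags broken jobs in its BROKEN column. A minimal sketch:

    -- Count jobs currently marked as broken (manual check;
    -- the event test's own query may differ).
    SELECT COUNT(*) AS broken_jobs
      FROM dba_jobs
     WHERE broken = 'Y';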

Data Block Corruption 

This event test signifies that the database being monitored has generated a corrupted block error to the ALERT file since the last sample time. The ALERT file is a special trace file containing a chronological log of messages and errors. An alert event is triggered when data block corrupted messages (ORA-01578, ORA-27048, and ORA-01157) are written to the ALERT file. 

Deferred Transactions 

Oracle uses deferred transactions to propagate data-level changes asynchronously among master sites in an advanced replication system as well as from an updatable snapshot to its master table. This event test checks for the number of deferred transactions. An alert is generated if the number of deferred transactions exceeds the value specified by the threshold argument. 

Error Transactions 

Oracle uses deferred transactions to propagate data-level changes asynchronously among master sites in an advanced replication system, as well as from an updatable snapshot to its master table. If a transaction is not successfully propagated to the remote site, Oracle rolls back the transaction and logs it in the SYS.DEFERROR view in the remote destination database. This test checks the number of transactions in the SYS.DEFERROR view and raises an alert if it exceeds the value specified by the threshold argument. 
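
A comparable figure can be obtained manually at the destination database from the DEFERROR view (the public synonym for the error queue). A sketch, assuming one row per errored transaction entry:

    -- Count error transactions logged at the remote destination
    -- (a manual approximation of what the event test monitors).
    SELECT COUNT(*) AS error_transactions
      FROM deferror;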

Failed Jobs 

The Oracle server job queue is a database table that stores information about local jobs, such as the PL/SQL call to execute for a job and when to run a job. Database replication is also managed through the Oracle job queue mechanism, which uses jobs to push deferred transactions to remote master sites, to purge applied transactions from the deferred transaction queue, and to refresh snapshot refresh groups.

If a job returns an error while Oracle is attempting to execute it, the job fails. Oracle repeatedly tries to execute the job, doubling the interval between attempts. If the job fails sixteen times, Oracle automatically marks it as broken and no longer tries to execute it. This test checks for failed DBMS jobs. An alert is generated if the number of failed jobs exceeds the value specified by the threshold argument. 

Probe 

This event test checks whether a new connection can be established to a database. If the maximum number of users is exceeded or the listener is down, this test is triggered.

Note: Choose the user credentials for the Probe event test carefully. If the preferred user has the RESTRICTED SESSION privilege, the user can connect to a database even if the LICENSE_MAX_SESSIONS limit has been reached. 

Session Terminated 

This test signifies that a session terminated unexpectedly since the last sample time. The ALERT file is a special trace file containing a chronological log of messages and errors. An alert is displayed when session unexpectedly terminated (ORA-00603) messages are written to the ALERT file. 

Unscheduled Jobs 

The Oracle server job queue is a database table that stores information about local jobs. This event test checks for unscheduled DBMS jobs. An alert is generated when the number of jobs, whose execution time has exceeded the value specified by the Job Completion Time argument, exceeds the value specified in the Critical Threshold. A job's completion date/time is calculated by using the NEXT_DATE value in the SYS.DBA_JOBS view plus the approximate time it takes to complete a job as specified by the job completion time argument. 

User Blocks 

This event test signifies that a database user is blocking at least one other user from performing an action, such as updating a table. An alert is generated if the number of consecutive blocking occurrences reaches the specified value.

Note: The catblock.sql script needs to be run on the managed database prior to using the User Blocks test. This script creates some additional tables, views, and public synonyms that are required by the User Blocks test. 
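
For reference, catblock.sql ships in the rdbms/admin directory under the Oracle home (path per the standard installation layout). A typical installation step and a manual check of one of the lock views it creates might look like this; the event test's own query may differ:

    -- Run once as a suitably privileged user (? is SQL*Plus
    -- shorthand for ORACLE_HOME):
    -- SQL> @?/rdbms/admin/catblock.sql

    -- Sessions currently holding locks that other sessions wait on:
    SELECT holding_session
      FROM dba_blockers;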

Table 2-3 Database Performance Management Event Tests - Application Activity  
Event Test  Description 

Average File Read Time 

This data item represents the average time spent performing a read from this datafile during the sample period. This value will always be 0 unless the TIMED_STATISTICS parameter is TRUE.

The value of this item is reported in 100ths of a second. Therefore a value of 100 would mean on average that one second of time was spent per physical read to this file during the last sample period.

There is a drilldown chart available from this chart called Timed Statistics Chart. This chart shows the current value for the TIMED_STATISTICS parameter. Use the Turn On Timed Statistics drilldown to turn on timed statistics for the instance.

This test checks the average time spent performing a read from a file specified by the File Name(s) parameter during this sample period. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.  
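
Although the event test gathers this data internally, the same figure can be approximated from V$FILESTAT, which records cumulative physical reads and read time (in 100ths of a second) per datafile. A sketch, assuming TIMED_STATISTICS is TRUE:

    -- Cumulative average read time per datafile, in 100ths of a second.
    -- The event test works on per-sample deltas; this shows the
    -- averages since instance startup instead.
    SELECT d.name,
           ROUND(f.readtim / GREATEST(f.phyrds, 1), 2) AS avg_read_time
      FROM v$filestat f, v$datafile d
     WHERE f.file# = d.file#;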

Average File Write Time 

This data item represents the average time spent performing a write to this datafile during the sample period. This value will always be 0 unless the TIMED_STATISTICS parameter is TRUE.

The value of this item is reported in 100ths of a second. Therefore a value of 100 would indicate on average that one second of time was spent per physical write to this file during the last sample period.

There is a drilldown chart available from this chart called Timed Statistics Chart. This chart shows the current value for the TIMED_STATISTICS parameter. Use the Turn On Timed Statistics drilldown to turn on timed statistics for the instance.

This test checks the average time spent performing a write to a file specified by the File Name(s) parameter during this sample period. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.  

Average Redo Write Size Per Second 

This data item represents the amount of redo, in bytes, generated per second during this sample period.

The redo log buffer is a circular buffer in the SGA that holds information about changes made to the database. This information is stored in redo entries. Redo entries contain the information necessary to reconstruct, or redo, changes made to the database by INSERT, UPDATE, DELETE, CREATE, ALTER or DROP operations. Redo entries can be used for database recovery if necessary.

This test checks the amount of redo in bytes generated per second. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.  

Average Rows Per Sort 

This data item represents the average number of rows per sort during this sample period.

This test checks the average number of rows per sort during sample period. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.  

Commits Per Second 

This data item represents the number of user commits performed, per second during the sample period. When a user commits a transaction, the redo generated that reflects the changes made to database blocks must be written to disk. Commits often represent the closest thing to a user transaction rate.

This test checks the number of user commits per second. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.  

Disk I/O 

This event test monitors the real time database physical I/O rate (requests/seconds) against the values specified by the threshold arguments. If the Disk I/O rate exceeds the threshold values entered for the specified number of occurrences, then a warning or critical alert is generated.

Note: The Disk I/O event test is provided for backward compatibility. Oracle recommends that you use the File Read Rate and File Write Rate event tests. 

Disk Sorts Per Second 

This data item represents the number of sorts going to disk per second for this sample period.

For best performance, most sorts should occur in memory, because sorts to disks are expensive to perform. If the sort area is too small, extra sort runs will be required during the sort operation. This increases CPU and I/O resource consumption.

This test checks the number of sorts performed to disk per second. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated. 

Executes without Parses % 

This data item represents the percentage of statement executions that do not require a corresponding parse. A perfect system would parse all statements once and then execute the parsed statement over and over without reparsing. This ratio provides an indication as to how often the application is parsing statements as compared to their overall execution rate. A higher number is better.

This test checks the percentage of executes that do not require parses. If the value is less than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.  

Logical Reads Per Second 

This data item represents the number of logical reads per second during the sample period. A logical read is a read request for a data block from the SGA. Logical reads may result in a physical read if the requested block does not reside in the buffer cache.

This test checks the logical (db block gets + consistent gets) reads per second. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.  
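The underlying counters are visible in V$SYSSTAT. A sketch of the cumulative figure (the test itself samples deltas between sample points):

    -- Cumulative logical reads since instance startup.
    SELECT SUM(value) AS logical_reads
      FROM v$sysstat
     WHERE name IN ('db block gets', 'consistent gets');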

Logons Per Second 

This data item represents the number of logons per second during the sample period.

This test checks the number of logons that occurred per second during the sample period. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.  

Network Bytes Per Second 

This data item represents the total number of bytes sent and received through the SQL*Net layer to and from the database.

This test checks the network read/write per second. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated. 

Parses (Hard) Per Second 

This data item represents the number of hard parses per second during this sample period. A hard parse occurs when a SQL statement has to be loaded into the shared pool. In this case, the Oracle Server has to allocate memory in the shared pool and parse the statement.

Each time a particular SQL cursor is parsed, this count will increase by one. There are certain operations which will cause a SQL cursor to be parsed. Parsing a SQL statement breaks it down into atomic steps which the optimizer will evaluate when generating an execution plan for the cursor.

This test checks the number of parses of statements that were not already in the cache. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.  

Parses (Total) Per Second 

This number reflects the total number of parses per second, both hard and soft. A hard parse occurs when a SQL statement has to be loaded into the shared pool. In this case, the Oracle Server has to allocate memory in the shared pool and parse the statement. A soft parse is recorded when the Oracle Server checks the shared pool for a SQL statement and finds a version of the statement that it can reuse.

Each time a particular SQL cursor is parsed, this count will increase by one. There are certain operations which will cause a SQL cursor to be parsed. Parsing a SQL statement breaks it down into atomic steps which the optimizer will evaluate when generating an execution plan for the cursor.

This test checks the number of parse calls per second. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.  

Physical Reads Per Second 

This data item represents the number of data blocks read from disk per second during this sample period. When a user performs a SQL query, Oracle tries to retrieve the data from the database buffer cache (memory) first, then searches the disk if it is not already in memory. Reading data blocks from disk is much more expensive than reading them from memory. The goal with Oracle should always be to maximize memory utilization.

This test checks the data blocks read from disk per second. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.  

Physical Writes Per Second 

This data item represents the number of disk writes per second during the sample period. This statistic represents the rate of database blocks written from the SGA buffer cache to disk by the DBWR background process, and from the PGA by processes performing direct writes.

This test checks the data blocks written to disk per second. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.  

Recursive Calls Per Second 

This data item represents the number of recursive calls, per second during the sample period.

Sometimes, to execute a SQL statement issued by a user, the Oracle Server must issue additional statements. Such statements are called recursive calls or recursive SQL statements. For example, if you insert a row into a table that does not have enough space to hold that row, the Oracle Server makes recursive calls to allocate the space dynamically if dictionary managed tablespaces are being used. Recursive calls are also generated:

  • when data dictionary information is not available in the data dictionary cache and must be retrieved from disk

  • in the firing of database triggers

  • in the execution of DDL statements

  • in the execution of SQL statements within stored procedures, functions, packages and anonymous PL/SQL blocks

  • in the enforcement of referential integrity constraints

This test checks the number of recursive SQL calls per second. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.  

Redo Writes Per Second 

This data item represents the number of redo write operations per second during this sample period.

The redo log buffer is a circular buffer in the SGA that holds information about changes made to the database. This information is stored in redo entries. Redo entries contain the information necessary to reconstruct, or redo, changes made to the database by INSERT, UPDATE, DELETE, CREATE, ALTER or DROP operations. Redo entries can be used for database recovery if necessary.

The log writer process (LGWR) is responsible for redo log buffer management; that is, writing the redo log buffer to a redo log file on disk.

This test checks the number of writes by LGWR to the redo log files per second. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.  

Response Time Per Execution 

Using only statistics available within the database, this data item gives the best approximation of response time, in seconds, per SQL statement execution. This statistic may be more valid than response time per transaction as it shows accurate values even for read-only access.

This test checks the response time, in seconds, per SQL statement execution during sample period. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated. 

Rollbacks Per Second 

This data item represents the number of times per second, during the sample period, that users manually issued the ROLLBACK statement or an error occurred during a user's transaction.

This test checks the number of rollbacks per second. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated. 

Soft Parse % 

A soft parse is recorded when the Oracle Server checks the shared pool for a SQL statement and finds a version of the statement that it can reuse.

This data item represents the percentage of parse requests where the cursor was already in the cursor cache compared to the number of total parses. This ratio provides an indication as to how often the application is parsing statements that already reside in the cache as compared to hard parses of statements that are not in the cache.

This test checks the percentage of soft parse requests to total parse requests. If the value is less than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.  
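
The ratio can be reproduced from the V$SYSSTAT parse counters. A sketch of the cumulative version (the test uses per-sample deltas):

    -- Soft parse percentage since instance startup:
    -- (total parses - hard parses) / total parses * 100.
    SELECT ROUND((t.value - h.value) / GREATEST(t.value, 1) * 100, 2)
             AS soft_parse_pct
      FROM v$sysstat t, v$sysstat h
     WHERE t.name = 'parse count (total)'
       AND h.name = 'parse count (hard)';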

Table Scans (Long) Per Second 

This data item represents the number of long table scans per second during sample period. A table is considered 'long' if the table is not cached and if its high-water mark is greater than 5 blocks.

This test checks the long table scans per second. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.  

Table Scans (Total) Per Second 

This data item represents the number of long and short table scans per second during the sample period. A table is considered 'long' if the table is not cached and if its high-water mark is greater than 5 blocks.

This test checks the number of long and short table scans per second. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.  

User Call % 

This data item represents the percentage of user calls to recursive calls.

Occasionally, to execute a SQL statement issued by a user, the Oracle Server must issue additional statements. Such statements are called recursive calls or recursive SQL statements. For example, if you insert a row into a table that does not have enough space to hold that row, the Oracle Server makes recursive calls to allocate the space dynamically if dictionary managed tablespaces are being used. Recursive calls are also generated:

  • When data dictionary information is not available in the data dictionary cache and must be retrieved from disk

  • In the firing of database triggers

  • In the execution of DDL statements

  • In the execution of SQL statements within stored procedures, functions, packages and anonymous PL/SQL blocks

  • In the enforcement of referential integrity constraints

This test checks the percentage of user calls to recursive calls. If the value is less than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.  

User Calls Per Second 

This data item represents the number of logins, parses, or execute calls per second during the sample period.

This test checks the number of logins, parses, or execute calls per second. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.  

Table 2-4 Database Performance Management Event Tests - Instance Activity  
Event Test  Description 

% CPU Time 

This data item represents the percentage of time, instance-wide, spent executing instructions by the CPU during this sample period.

This test checks the percentage time spent executing instructions by the CPU, instance-wide, for resources or objects during this sample period. If the % CPU Time is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated. 

% Shared Pool Free 

This data item represents the percentage of the Shared Pool that is currently marked as free.

This test checks the percentage of Shared Pool that is currently free. If the value is less than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.  

% Wait Time 

This data item represents the percentage of time spent waiting, instance-wide, for resources or objects during this sample period.

This test checks the percentage time spent waiting, instance-wide, for resources or objects during this sample period. If the % Wait Time is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated. 

Buffer Cache Hit % 

The data block buffer cache efficiency, as measured by the hit ratio, records the percentage of times the data block requested by the query is in memory.

Effective use of the buffer cache can greatly reduce the I/O load on the database. If the buffer cache is too small, frequently accessed data will be flushed from the buffer cache too quickly which forces the information to be re-fetched from disk. Since disk access is much slower than memory access, application performance will suffer. In addition, the extra burden imposed on the I/O subsystem could introduce a bottleneck at one or more devices which would further degrade performance.

This event test monitors the buffer cache hit ratio (percentage of success) against the values specified by the threshold arguments. If the hit ratio falls below the threshold values for the specified number of occurrences, then a warning or critical alert is generated.

Note: The DB_BLOCK_BUFFERS initialization parameter determines the number of database buffers available in the buffer cache. It is one of the primary parameters that contribute to the total memory requirements of the SGA on the instance. The DB_BLOCK_BUFFERS parameter, together with the DB_BLOCK_SIZE parameter, controls the total size of the buffer cache. Because DB_BLOCK_SIZE can only be specified when the database is first created, the size of the buffer cache is normally controlled using the DB_BLOCK_BUFFERS parameter. 
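
The hit ratio the test monitors can be approximated from V$SYSSTAT using the classic formula, shown here as a cumulative sketch (the event test samples deltas):

    -- Buffer cache hit ratio: 1 - physical reads / logical reads.
    SELECT ROUND((1 - p.value / GREATEST(db.value + c.value, 1)) * 100, 2)
             AS buffer_cache_hit_pct
      FROM v$sysstat p, v$sysstat db, v$sysstat c
     WHERE p.name  = 'physical reads'
       AND db.name = 'db block gets'
       AND c.name  = 'consistent gets';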

Commit % 

This data item represents the percentage of transactions that ended as commits rather than rollbacks during this sample period.

This test checks the percentage of transactions that end as commits, as opposed to rollbacks. If the value is less than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated. 

Data Dictionary Hit % 

This data item represents dictionary cache efficiency as measured by the percentage of requests against the dictionary data that were already in memory. It is important to determine whether the misses on the data dictionary are actually affecting the performance of the Oracle Server.

The shared pool is an area in the SGA that contains the library cache of shared SQL requests, the dictionary cache, and the other cache structures that are specific to a particular instance configuration.

Misses on the data dictionary cache are to be expected in some cases. Upon instance startup, the data dictionary cache contains no data, so any SQL statement issued is likely to result in cache misses. As more data is read into the cache, the likelihood of cache misses should decrease. Eventually the database should reach a steady state in which the most frequently used dictionary data is in the cache. At this point, very few cache misses should occur. To tune the cache, examine its activity only after your application has been running.

This test checks the percentage of requests against the data dictionary that were found in the Shared Pool. If the value is less than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated. 
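
A cumulative version of this ratio can be computed from V$ROWCACHE, which tracks gets and misses per dictionary cache area (the event test works on per-sample deltas):

    -- Data dictionary cache hit percentage since instance startup.
    SELECT ROUND((1 - SUM(getmisses) / GREATEST(SUM(gets), 1)) * 100, 2)
             AS dictionary_hit_pct
      FROM v$rowcache;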

Data Dictionary Miss % 

The shared pool is an area in the SGA that contains the library cache of shared SQL requests, the dictionary cache and the other cache structures that are specific to a particular instance configuration.

The dictionary cache efficiency, as measured by the miss ratio, records the percentage of times the dictionary data was not already in memory.

The shared pool mechanism can greatly reduce system resource consumption in at least three ways:

  • Parse time is avoided if the SQL statement is already in the shared pool.

  • Application memory overhead is reduced, since all applications utilize the same pool of shared SQL statements and dictionary resources.

  • I/O resources are saved, since dictionary elements that are in the shared pool do not require disk access.

If the shared pool is too small, users will consume additional resources to complete a database operation. For dictionary cache access, the overhead is primarily the additional I/O since the dictionary cache references that have been displaced from the cache will need to be re-fetched from disk.

This event test monitors the data dictionary cache miss ratio (percentage of failures) against the values specified by the threshold arguments. If the miss ratio exceeds the threshold values for the specified number of occurrences, then a warning or critical alert is generated.

Note: The Data Dictionary Miss % event test is provided for backward compatibility. Oracle recommends that you use the Data Dictionary Hit % event test. 

DBWR Checkpoints Per Second 

This data item represents the number of times per second, during this sample period, that DBWn was asked to scan the cache and write all blocks marked for a checkpoint.

The database writer process (DBWn) writes the contents of buffers to datafiles. The DBWn processes are responsible for writing modified (dirty) buffers in the database buffer cache to disk.

When a buffer in the database buffer cache is modified, it is marked dirty. The primary job of the DBWn process is to keep the buffer cache clean by writing dirty buffers to disk. As buffers are dirtied by user processes, the number of free buffers diminishes. If the number of free buffers drops too low, user processes that must read blocks from disk into the cache are not able to find free buffers. DBWn manages the buffer cache so that user processes can always find free buffers.

When the Oracle Server process cannot find a clean reusable buffer after scanning a threshold of buffers, it signals DBWn to write. When this request to make free buffers is received, DBWn writes the least recently used (LRU) buffers to disk. By writing the least recently used dirty buffers to disk, DBWn improves the performance of finding free buffers while keeping recently used buffers resident in memory. For example, blocks that are part of frequently accessed small tables or indexes are kept in the cache so that they do not need to be read in again from disk. The LRU algorithm keeps more frequently accessed blocks in the buffer cache so that when a buffer is written to disk, it is unlikely to contain data that may be useful soon.

Additionally, DBWn periodically writes buffers to advance the checkpoint which is the position in the redo log from which crash or instance recovery would need to begin.

This test checks the number of times DBWR was asked to advance the checkpoint. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.  

Free Buffer Waits 

Database writer process (DBWR) bottlenecks can be detected by monitoring occurrences of the free buffer waits test over time. If the database environment is in a steady state, there should not be any free buffer waits. However, an occasional absolute increase in free buffer waits is not a problem. Only consistent occurrences of an increase should be of concern.

As a result, this test maintains a history of free buffer waits samples, specified by the number of samples parameter, and monitors for a percentage of these samples where an increase was detected. This percentage is then compared against the values specified by the threshold arguments. If the percentage of samples (where an increase in free buffer waits is detected) exceeds the threshold arguments, then a warning or critical alert is generated.

Example: If 10 has been specified for the number of samples, then during the first 9 times the test condition is checked, the test is merely building up the history of free buffer waits samples. On the 10th interval and from that point on, the test monitors how many of those samples showed an increase in free buffer waits. If 2 of the samples showed an increase, the percentage of samples showing an increase is 20%. 
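
The cumulative counter behind this test can be inspected directly: free buffer waits appear as a wait event in V$SYSTEM_EVENT. A manual check, not the test's own query:

    -- Cumulative free buffer waits since instance startup.
    SELECT total_waits
      FROM v$system_event
     WHERE event = 'free buffer waits';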

In Memory Sort % 

The sort efficiency is measured by the percentage of times sorts were performed in memory as opposed to going to disk.

For best performance, most sorts should occur in memory as sorts to disks are expensive to perform. If the sort area is too small, extra sort runs will be required during the sort operation. This increases CPU and I/O resource consumption.

This event test monitors the in-memory sort hit ratio, which equals the number of sorts performed in memory divided by the total number of sorts performed. If the ratio is smaller than the threshold values for the specified number of occurrences, then a warning or critical alert is generated. 
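
A cumulative sketch of the same ratio from V$SYSSTAT (the event test samples deltas):

    -- In-memory sort percentage: memory sorts / (memory + disk sorts).
    SELECT ROUND(m.value / GREATEST(m.value + d.value, 1) * 100, 2)
             AS in_memory_sort_pct
      FROM v$sysstat m, v$sysstat d
     WHERE m.name = 'sorts (memory)'
       AND d.name = 'sorts (disk)';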

Library Cache Hit % 

This data item represents the library cache efficiency, as measured by the percentage of times the fully parsed or compiled representation of PL/SQL blocks and SQL statements are already in memory.

The shared pool is an area in the SGA that contains the library cache of shared SQL requests, the dictionary cache and the other cache structures that are specific to a particular instance configuration.

The shared pool mechanism can greatly reduce system resource consumption in at least three ways:

  • Parse time is avoided if the SQL statement is already in the shared pool.

  • Application memory overhead is reduced, since all applications use the same pool of shared SQL statements and dictionary resources.

  • I/O resources are saved, since dictionary elements that are in the shared pool do not require disk access.

If the shared pool is too small, users will consume additional resources to complete a database operation. For library cache access, the overhead is primarily the additional CPU resources required to re-parse the SQL statement.

This test checks the percentage of parse requests where the cursor was already in the cache. If the value is less than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated. 
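
The corresponding cumulative ratio is available from V$LIBRARYCACHE (a manual approximation; the test works on per-sample deltas):

    -- Library cache hit percentage, weighted across all namespaces.
    SELECT ROUND(SUM(pinhits) / GREATEST(SUM(pins), 1) * 100, 2)
             AS library_cache_hit_pct
      FROM v$librarycache;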

Library Cache Miss % 

The shared pool is an area in the SGA that contains the library cache of shared SQL requests, the dictionary cache and the other cache structures that are specific to a particular instance configuration.

The library cache efficiency, as measured by the miss ratio, records the percentage of times the fully parsed or compiled representation of PL/SQL blocks and SQL statements are not already in memory.

The shared pool mechanism can greatly reduce system resource consumption in at least three ways:

  • Parse time is avoided if the SQL statement is already in the shared pool.

  • Application memory overhead is reduced, since all applications utilize the same pool of shared SQL statements and dictionary resources.

  • I/O resources are saved, since dictionary elements that are in the shared pool do not require disk access.

If the shared pool is too small, users will consume additional resources to complete a database operation. For library cache access, the overhead is primarily the additional CPU resources required to re-parse the SQL statement.

This event test monitors the library cache miss ratio (percentage of failures) against the values specified by the threshold arguments. If the miss ratio exceeds the threshold values for the specified number of occurrences, then a warning or critical alert is generated.

Note: The Library Cache Miss % event test is provided for backward compatibility. Oracle recommends that you use the Library Cache Hit % event test. 

Redo Log Allocation Hit 

Redo log entries contain a record of changes that have been made to the database block buffers. The log writer (LGWR) process writes redo log entries from the log buffer to a redo log file. The log buffer should be sized so that space is available in the log buffer for new entries, even when access to the redo log is heavy. When the log buffer is undersized, user processes will be delayed as they wait for LGWR to free space in the redo log buffer.

The redo log buffer efficiency, as measured by the hit ratio, records the percentage of times users did not have to wait for the log writer to free space in the redo log buffer.

This event test monitors the redo log buffer hit ratio (percentage of success) against the values specified by the threshold arguments. If the hit ratio falls below the threshold values for the specified number of occurrences, then a warning or critical alert is generated.

Note: The Redo Log Allocation Hit event test is provided for backward compatibility. Oracle recommends that you use the Redo NoWait Ratio event test. 

Redo No Wait % 

Redo log entries contain a record of changes that have been made to the database block buffers. The log writer (LGWR) process writes redo log entries from the log buffer to a redo log file. The log buffer should be sized so that space is available in the log buffer for new entries, even when access to the redo log is heavy. When the log buffer is undersized, user processes will be delayed as they wait for LGWR to free space in the redo log buffer.

This data item represents the redo log buffer efficiency, as measured by the percentage of times users did not have to wait for the log writer to free space in the redo log buffer.

This test checks the percentage of times redo entries are allocated without having to wait. If the value is less than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.  
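
One common approximation of this ratio uses the V$SYSSTAT counters for redo entries and redo buffer allocation retries; the event test's exact formula is not spelled out here, so treat this as a sketch:

    -- Approximate redo nowait percentage:
    -- 1 - (redo buffer allocation retries / redo entries).
    SELECT ROUND((1 - r.value / GREATEST(e.value, 1)) * 100, 2)
             AS redo_nowait_pct
      FROM v$sysstat r, v$sysstat e
     WHERE r.name = 'redo buffer allocation retries'
       AND e.name = 'redo entries';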

Rollback Contention 

Rollback segments are portions of the database that record the actions of transactions in case a transaction is rolled back. Rollback segments are used to provide read consistency, support rollback transactions, and recover a database.

Proper allocation of rollback segments makes for optimal database performance. Using a sufficient number of rollback segments distributes rollback segment contention across many segments and improves performance.

Contention for rollback segments is reflected by contention for buffers that contain rollback segment blocks.

This event test monitors the rollback segment miss ratio (percentage) against the values specified by the threshold arguments. If the miss ratio is greater than the values specified, then a warning or critical alert is generated. 

SysStat Table 

You can monitor any system statistic available in the database with this event test. A warning or critical alert will be generated if the value of the selected V$SYSSTAT parameter exceeds the values specified by the threshold arguments.

To view the V$SYSSTAT parameter names and values, connect to the database with SQL Worksheet and execute SELECT NAME, VALUE FROM V$SYSSTAT. 

SysStat Table Delta 

You can monitor any system statistic available in the database with this event test. The threshold values are compared to the difference between the last sample point and the current sample point of the V$SYSSTAT parameter. A warning or critical alert is generated if the calculated difference exceeds the values specified by the threshold arguments.

To view the V$SYSSTAT parameter names and values, connect to the database with SQL Worksheet and execute SELECT NAME, VALUE FROM V$SYSSTAT. 

Transactions Per Second 

This data item represents the total number of commits and rollbacks performed during this sample period.

This test checks the number of commits and rollbacks performed during sample period. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.  

Table 2-5 Database Performance Management Event Tests - Transaction Activity  
Event Test  Description 

Average Redo Write Size Per Transaction 

This data item represents the amount of redo, in bytes, generated per transaction during this sample period.

The redo log buffer is a circular buffer in the SGA that holds information about changes made to the database. This information is stored in redo entries. Redo entries contain the information necessary to reconstruct, or redo, changes made to the database by INSERT, UPDATE, DELETE, CREATE, ALTER or DROP operations. Redo entries are used for database recovery, if necessary.

The value of this statistic is zero if there have been no write or update transactions committed or rolled back during the last sample period. If the bulk of the activity to the database is read only, the corresponding "per second" data item of the same name will be a better indicator of current performance.

This test checks the amount of redo in bytes generated per transaction. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.  

Commits Per Transaction 

This data item represents the number of user commits performed, per transaction during the sample period. When a user commits a transaction, the redo generated that reflects the changes made to database blocks must be written to disk. Commits often represent the closest thing to a user transaction rate.

The value of this statistic will be zero if there have not been any write or update transactions committed or rolled back during the last sample period. If the bulk of the activity to the database is read only, the corresponding "per second" data item of the same name will be a better indicator of current performance.

This test checks the number of user commits per transaction. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.  

Disk Sorts Per Transaction 

This data item represents the number of sorts going to disk per transaction for this sample period.

For best performance, most sorts should occur in memory, because sorts to disks are expensive to perform. If the sort area is too small, extra sort runs will be required during the sort operation. This increases CPU and I/O resource consumption.

The value of this statistic will be zero if there have not been any write or update transactions committed or rolled back during the last sample period. If the bulk of the activity to the database is read only, the corresponding "per second" data item of the same name will be a better indicator of current performance.

This test checks the number of sorts performed to disk per transaction. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated. 

Logical Reads Per Transaction 

This data item represents the number of logical reads per transaction during the sample period.

The value of this statistic is zero if there have not been any write or update transactions committed or rolled back during the last sample period. If the bulk of the activity to the database is read only, the corresponding per second data item of the same name will be a better indicator of current performance.

This test checks the logical (db block gets + consistent gets) reads per transaction. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.  

Logons Per Transaction 

This data item represents the number of logons per transaction during the sample period.

The value of this statistic will be zero if there have not been any write or update transactions committed or rolled back during the last sample period. If the bulk of the activity to the database is read only, the corresponding "per second" data item of the same name will be a better indicator of current performance.

This test checks the number of logons that occurred per transaction. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.  

Parses (Hard) Per Transaction 

This data item represents the number of hard parses per transaction during this sample period. A hard parse occurs when a SQL statement has to be loaded into the shared pool. In this case, the Oracle Server has to allocate memory in the shared pool and parse the statement.

Each time a particular SQL cursor is parsed, this count will increase by one. There are certain operations which will cause a SQL cursor to be parsed. Parsing a SQL statement breaks it down into atomic steps which the optimizer will evaluate when generating an execution plan for the cursor.

The value of this statistic will be zero if there have not been any write or update transactions committed or rolled back during the last sample period. If the bulk of the activity to the database is read only, the corresponding "per second" data item of the same name will be a better indicator of current performance.

This test checks the number of hard parses per transaction during this sample period. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.  

Parses (Total) Per Transaction 

This number reflects the total number of parses per transaction, both hard and soft. A hard parse occurs when a SQL statement has to be loaded into the shared pool. In this case, the Oracle Server has to allocate memory in the shared pool and parse the statement. A soft parse is recorded when the Oracle Server checks the shared pool for a SQL statement and finds a version of the statement that it can reuse.

Each time a particular SQL cursor is parsed, this count will increase by one. There are certain operations which will cause a SQL cursor to be parsed. Parsing a SQL statement breaks it down into atomic steps which the optimizer will evaluate when generating an execution plan for the cursor.

This test checks the number of parse calls per transaction. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.  

Physical Reads Per Transaction 

This data item represents the number of disk reads per transaction during the sample period. When a user performs a SQL query, Oracle tries to retrieve the data from the database buffer cache (memory) first, then goes to disk if it is not in memory already. Reading data blocks from disk is much more expensive than reading the data blocks from memory. The goal with Oracle should always be to maximize memory utilization.

The value of this statistic will be zero if there have not been any write or update transactions committed or rolled back during the last sample period. If the bulk of the activity to the database is read only, the corresponding "per second" data item of the same name will be a better indicator of current performance.

This test checks the data blocks read from disk per transaction. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.  

Physical Writes Per Transaction 

This data item represents the number of disk writes per transaction during the sample period.

The value of this statistic is zero if there have not been any write or update transactions committed or rolled back during the last sample period. If the bulk of the activity to the database is read only, the corresponding "per second" data item of the same name is a better indicator of current performance.

This test checks the data blocks written to disk per transaction. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.  

Recursive Calls Per Transaction 

This data item represents the number of recursive calls per transaction during the sample period.

Sometimes, to execute a SQL statement issued by a user, the Oracle Server must issue additional statements. Such statements are called recursive calls or recursive SQL statements. For example, if you insert a row into a table that does not have enough space to hold that row, the Oracle Server makes recursive calls to allocate the space dynamically if dictionary managed tablespaces are being used. Recursive calls are also generated:

  • when data dictionary information is not available in the data dictionary cache and must be retrieved from disk

  • in the firing of database triggers

  • in the execution of DDL statements

  • in the execution of SQL statements within stored procedures, functions, packages and anonymous PL/SQL blocks

  • in the enforcement of referential integrity constraints

The value of this statistic will be zero if there have not been any write or update transactions committed or rolled back during the last sample period. If the bulk of the activity to the database is read only, the corresponding "per second" data item of the same name will be a better indicator of current performance.

This test checks the number of calls that result in changes to internal tables. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.  

Redo Writes Per Transaction 

This data item represents the number of redo write operations per transaction during this sample period.

The redo log buffer is a circular buffer in the SGA that holds information about changes made to the database. This information is stored in redo entries. Redo entries contain the information necessary to reconstruct, or redo, changes made to the database by INSERT, UPDATE, DELETE, CREATE, ALTER or DROP operations. Redo entries are used for database recovery, if necessary.

The log writer process (LGWR) is responsible for redo log buffer management; that is, writing the redo log buffer to a redo log file on disk.

This test checks the number of writes by LGWR to the redo log files per transaction. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.  

Response Time Per Transaction 

Using only statistics available within the database, this data item gives the best approximation of response time, in seconds, per transaction during this sample period.

This test checks the response time in seconds, per transaction during this sample period. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated. 

Rollbacks Per Transaction 

This data item represents the number of times per transaction, during the sample period, that users manually issued the ROLLBACK statement or an error occurred during a user's transaction.

The value of this statistic will be zero if there have not been any write or update transactions committed or rolled back during the last sample period. If the bulk of the activity to the database is read only, the corresponding "per second" data item of the same name will be a better indicator of current performance.

This test checks the number of rollbacks per transaction. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.  

Table Scan (Long) Per Transaction 

This data item represents the number of long table scans per transaction during sample period. A table is considered 'long' if the table is not cached and if its high-water mark is greater than 5 blocks.

The value of this statistic will be zero if there have not been any write or update transactions committed or rolled back during the last sample period. If the bulk of the activity to the database is read only, the corresponding "per second" data item of the same name will be a better indicator of current performance.

This test checks the number of long table scans per transaction. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.  

Table Scans (Total) Per Transaction 

This data item represents the number of long and short table scans per transaction during the sample period. A table is considered 'long' if the table is not cached and if its high-water mark is greater than 5 blocks.

This test checks the number of long and short table scans per transaction. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.  

User Calls Per Transaction 

This data item represents the number of logins, parses, or execute calls per transaction during the sample period.

The value of this statistic will be zero if there have not been any write or update transactions committed or rolled back during the last sample period. If the bulk of the activity to the database is read only, the corresponding "per second" data item of the same name will be a better indicator of current performance.

This test checks the number of logins, parses, or execute calls per transaction. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.  

Table 2-6 Database Performance Management Event Tests - Wait Activity  
Event Test  Description 

Wait by Session Count 

This data item represents the number of sessions currently waiting on this event.

This test checks the number of sessions currently waiting for the event specified by the Wait Event(s) parameter. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.  

Wait by Time 

This data item represents the length of time, in seconds, spent waiting for the event during the last sample period. This value will always be 0 unless the TIMED_STATISTICS parameter is TRUE.

This test checks the length of time, in seconds, spent waiting for the event specified by the Wait Event(s) parameter during the last sample period. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.  
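
As a point of reference, the figures behind both wait tests can be inspected by hand in the V$SESSION_WAIT and V$SYSTEM_EVENT dynamic performance views. The following is a sketch for manual checking, not necessarily the statement the Intelligent Agent issues:

-- Sessions currently waiting, grouped by wait event
-- (in V$SESSION_WAIT, WAIT_TIME = 0 indicates a wait in progress)
SELECT event, COUNT(*) AS sessions_waiting
  FROM v$session_wait
 WHERE wait_time = 0
 GROUP BY event
 ORDER BY sessions_waiting DESC;

-- Cumulative time waited per event, in 100ths of a second
-- (populated only when TIMED_STATISTICS = TRUE)
SELECT event, time_waited
  FROM v$system_event
 ORDER BY time_waited DESC;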

Table 2-7 Database Resource Management Event Tests  
Event Test  Description 

Datafile Limit 

The DB_FILES initialization parameter specifies the maximum number of database files that can be opened for this database.

This event test checks for the utilization of the datafile resource against the values (percentages) specified by the threshold arguments. If the percentage of data files currently used to the limit set in the DB_FILES initialization parameter exceeds the values specified in the threshold arguments, then a warning or critical alert is generated.

Example: If 30 data files are used and the value of DB_FILES is 40, the percentage is 75% (30/40 x 100). This value is compared against the specified thresholds. 
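
As a manual cross-check (a sketch, not the exact query the event test runs), the utilization can be computed from V$DATAFILE and V$PARAMETER:

-- Percentage of the DB_FILES limit currently in use
SELECT ROUND(df.cnt / p.lim * 100, 2) AS pct_used
  FROM (SELECT COUNT(*) cnt FROM v$datafile) df,
       (SELECT TO_NUMBER(value) lim FROM v$parameter
         WHERE name = 'db_files') p;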

Lock Limit 

The DML_LOCKS initialization parameter specifies the maximum number of DML locks. The purpose of DML locks is to guarantee the integrity of data being accessed concurrently by multiple users. DML locks prevent destructive interference of simultaneous conflicting DML and/or DDL operations.

This event test checks for the utilization of the lock resource against the values (percentage) specified by the threshold arguments. If the percentage of all active DML locks to the limit set in the DML_LOCKS initialization parameter exceeds the values specified in the threshold arguments, then a warning or critical alert is generated.

If DML_LOCKS is 0, this test fails to register. A value of 0 indicates that enqueues are disabled.

Example: If 40 DML locks are active and the value of DML_LOCKS is 60, the percentage is 67% (40/60 x 100). This value is compared against the specified thresholds. 

Process Limit 

The PROCESSES initialization parameter specifies the maximum number of operating system user processes that can simultaneously connect to the database. This number also includes the background processes utilized by the instance.

This event test checks for the utilization of the process resource against the values (percentage) specified by the threshold arguments. If the percentage of all current processes to the limit set in the PROCESSES initialization parameter exceeds the values specified in the threshold arguments, then a warning or critical alert is generated.

Example: If 40 processes are currently connected and the value of PROCESSES is 50, the percentage is 80% (40/50 x 100). This value is compared against the specified thresholds. 

Session Limit 

The SESSIONS initialization parameter specifies the maximum number of concurrent connections that the database will allow.

This event test checks for the utilization of the session resource against the values (percentage) specified by the threshold arguments. If the percentage of the number of sessions, including background processes, to the limit set in the SESSIONS initialization parameter exceeds the values specified in the threshold arguments, then a warning or critical alert is generated.

Example: If there are 20 sessions and the value of SESSIONS is 25, the percentage is 80% (20/25 x 100). This value is compared against the specified thresholds. 
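
For the Lock Limit, Process Limit, and Session Limit tests, the V$RESOURCE_LIMIT view offers a convenient manual cross-check; a sketch, not the exact query the event tests run:

-- Current utilization versus configured limit for several resources
SELECT resource_name, current_utilization, max_utilization, limit_value
  FROM v$resource_limit
 WHERE resource_name IN ('processes', 'sessions', 'dml_locks');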

User Limit 

The LICENSE_MAX_SESSIONS initialization parameter specifies the maximum number of concurrent user sessions allowed.

This event test checks whether the number of users logged on is reaching the license limit. If the percentage of the number of concurrent user sessions to the limit set in the LICENSE_MAX_SESSIONS initialization parameter exceeds the values specified in the threshold arguments, then a warning or critical alert is generated. If LICENSE_MAX_SESSIONS is not explicitly set to a value, the test does not trigger.

Example: If there are 15 concurrent user sessions and the value of LICENSE_MAX_SESSIONS is 20, the percentage is 75% (15/20 x 100). This value is compared against the specified thresholds.

Note: This test is most useful when session licensing is enabled. Refer to the Oracle Server Reference Manual for more information on LICENSE_MAX_SESSIONS and LICENSE_MAX_USERS parameters. 
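
Session licensing activity can also be reviewed by hand in V$LICENSE; a sketch, assuming session licensing is enabled:

-- Current, high-water, and maximum licensed sessions (0 means no limit is set)
SELECT sessions_current, sessions_highwater, sessions_max, sessions_warning
  FROM v$license;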

Table 2-8 Database Space Management Event Tests  
Event Test  Description 

Alert File Large 

The ALERT file is a special trace file containing a chronological log of messages and errors. Oracle always appends to the file. To control the size of an ALERT file you must manually delete the file when you no longer need it.

This event test checks for file size of the ALERT file. If the file is greater than the values specified in the threshold arguments, then a warning or critical alert is generated.

Note: The ALERT file can be safely deleted while the instance is running, although you might want to make an archived copy of it first. 

Archive Full 

When running a database in ARCHIVELOG mode, the archiving of the online redo log is enabled. Filled groups of the online redo log are archived, by default, to the destination specified by the LOG_ARCHIVE_DEST initialization parameter. If this destination device becomes full, the database operation is temporarily suspended until disk space is available.

If the database is running in ARCHIVELOG mode, this test checks for available redo log destination devices. If the space available is less than the threshold value given in the threshold arguments, then a warning or critical alert is generated.

If the database is not running in ARCHIVELOG mode, or all archive destinations are standby databases for Oracle8i, this test fails to register.

Note: If you have more than one number for the amount of free space available, this means you have more than one destination. Check the amount of free space for all destinations. 

Archive Full (%) 

The Archive Full (%) event test monitors the same destination device as the Archive Full event test. The Archive Full (%) event test, however, returns the percentage of free space remaining on the log destination.

If the space available is less than the threshold value given in the threshold arguments, then a warning or critical alert is generated.

If the database is not running in ARCHIVELOG mode or all archive destinations are standby databases for Oracle8i, this test fails to register.

Note: If you have more than one number for the amount of free space available, this means you have more than one destination. Check the amount of free space for all destinations. 

Chained/Migrated Row 

In two circumstances the data for a row in a table may be too large to fit into a single data block. This results in row fragmentation.

In the first case, the row is too large to fit into one data block when it is first inserted. In this case, the Oracle Server stores the data for the row in a chain of data blocks reserved for that segment. Row chaining (or continuation) most often occurs with large rows, such as rows that contain a column of data type LONG or LONG RAW. Row chaining in these cases is unavoidable without increasing the DB_BLOCK_SIZE.

In the second case, however, a row that originally fit into one data block is updated so that the overall row length increases and the block's free space is already completely filled. In this case, Oracle migrates the data for the entire row to a new data block, assuming the entire row can fit into a new block. Oracle preserves the original row piece of a migrated row to point to the new block containing the migrated row.

When a row is chained or migrated, I/O performance associated with this row decreases because Oracle must scan more than one data block to retrieve the information for the row.

This event test monitors whether continued rows are found in the segments specified by the Segment name, Segment owner, and Segment type parameters. If continued rows are found, an alert is generated.

Note: This test is CPU-intensive. You may want to schedule the test for once a day at non-business hours. 

Chunk Small 

The Oracle Server allocates space for segments in units of one extent. When the existing extents of a segment are full, the Oracle Server allocates another extent for that segment. In order to do so, Oracle searches through the free space in the tablespace containing the segment for the first free, contiguous set of data blocks sufficient to meet the required extent's size. If sufficient space is not found, an error is returned by the Oracle Server.

This event test checks for the largest chunk free space in the tablespace specified by the Tablespace name, Segment name, and Segment type parameters. If any table, index, cluster or rollback segments within the tablespace cannot allocate the additional number of extents specified in the thresholds, then a warning or critical alert is generated.

Example: If the largest chunk of free space in the specified tablespace can only contain 2 extents, then 2 is compared to the threshold values. If 3 extents are specified for an alert, the alert is triggered because 3 extents cannot be allocated in the tablespace. 

Dump Full 

Each server and background process can write to an associated trace file in order to log messages and errors. Trace files for the background processes, as well as the ALERT file, are written to the destination specified by BACKGROUND_DUMP_DEST.

Trace files for server processes are written to the destination specified by USER_DUMP_DEST.

This event test checks for available free space on these dump destination devices. If the space available is less than the threshold value given in the threshold arguments, then a warning or critical alert is generated. 
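
The destinations being monitored can be confirmed by querying V$PARAMETER; a manual sketch:

-- Locations of background and user trace files
-- (the ALERT file resides in background_dump_dest)
SELECT name, value
  FROM v$parameter
 WHERE name IN ('background_dump_dest', 'user_dump_dest');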

Dump Full (%) 

This event test monitors the same dump destinations as the Dump Full event test. The Dump Full (%) event test, however, returns the percentage of free space remaining on the dump destinations.

If the space available is less than the threshold value given in the threshold arguments, then a warning or critical alert is generated. 

Fast Segment Growth 

A segment is a collection of extents that make up a single table, index, temporary or rollback segment. The Oracle Server offers a practical method of space allocation to segments as they are required to grow. Oracle allows a segment to have multiple extents, which the server allocates automatically when they are needed. For any segment that grows continuously, it is important to carefully monitor that segment's growth pattern. Storage values for the database should be chosen to ensure new extents are not frequently allocated.

This event test checks whether any of the segments specified by the Tablespace name, Segment name, and Segment type parameters are allocating extents too quickly. If, for any segment, the number of extents allocated since the last event check is greater than the threshold values specified in the threshold arguments, then a warning or critical alert is generated. 

Index Rebuild 

When an indexed value is updated in the table, the old value is deleted from the index and the new value is inserted into a separate part of the index. The space released by the old value can never be used again. As indexed values are updated or deleted, the amount of unusable space within the index increases, a condition called index stagnation. Because a stagnated index contains a mixture of data and empty areas, scans of the index will be less efficient.

This event test monitors whether indexes specified by the Index name, Index owner, Indexed object name, and Indexed object owner parameters suffer from index stagnation. If an index has stagnation, an alert is generated. 
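
One common way to gauge stagnation by hand (a sketch using a hypothetical index SCOTT.EMP_IDX, not necessarily the method the event test uses) is to validate the index and inspect INDEX_STATS:

-- Populate INDEX_STATS for one index, then check the deleted leaf-row ratio
ANALYZE INDEX scott.emp_idx VALIDATE STRUCTURE;

SELECT lf_rows, del_lf_rows,
       ROUND(del_lf_rows / GREATEST(lf_rows, 1) * 100, 2) AS pct_deleted
  FROM index_stats;

-- If the deleted percentage is high, rebuild the index
ALTER INDEX scott.emp_idx REBUILD;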

Maximum Extents 

A segment is a collection of extents that make up a single table, cluster, index, temporary or rollback segment. The MAXEXTENTS segment storage parameter specifies the maximum number of extents that can be allocated to the segment. Once a segment has filled the maximum number of extents, any row insertion will fail with an ORA-01631 error message.

This event test checks whether any of the segments specified by the Tablespace name, Segment name, and the Segment type parameters are approaching their maximum extents. If for any segment the maximum number of extents minus the number of existing extents is less than the threshold values specified in the threshold arguments, then a warning or critical alert is generated.

Example: If the maximum number of extents for a segment is 20 and the number of existing extents is 16, then 4 is compared against the specified threshold values. If 3 is specified for a critical alert and 5 is specified for a warning, a warning is triggered because only 4 extents are available. 
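
A manual approximation of this check (a sketch; the 5-extent margin is illustrative):

-- Segments within 5 extents of their MAXEXTENTS limit
SELECT owner, segment_name, segment_type, extents, max_extents
  FROM dba_segments
 WHERE max_extents - extents <= 5;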

Multiple Extents 

A segment is a collection of extents that make up a single table, cluster, index, temporary or rollback segment. The Oracle Server allows a segment to have multiple extents, which the server allocates automatically when additional space is required.

There is no performance degradation for a segment with multiple extents that is never full-scanned (table and temporary segments only), provided the extents are the same size and are also an integral multiple of the multiblock read batch size. No performance degradation is found where extents are 100 or more times larger than the read batch size. Oracle administrators may, however, choose to monitor the number of extents in a segment.

This event test checks whether any of the segments specified by the Tablespace name, Segment name, and Segment type parameters have multiple extents. If the number of extents is greater than the threshold values specified in the threshold arguments, then a warning or critical alert is generated.

Note: The only time multiple extents may cause a performance problem is when a segment is fully scanned and that segment's extent size is not a multiple of the multiblock read size. 

Snapshot Log Full 

A master table's snapshot log keeps track of fast refresh data for all corresponding snapshots. When a snapshot log is created for a master table, Oracle creates an underlying table to support the snapshot log. Oracle automatically tracks which rows in a snapshot log have been used during the refreshes of snapshots, and purges those rows from the log. Oracle does not delete rows from the log until all snapshots have used them. As a result, in certain situations a snapshot log can grow indefinitely when multiple snapshots are based on the same master table. It is best to always try to keep a snapshot log as small as possible to minimize the database space that it uses.

This event test checks whether a snapshot log is too large. In order to do this, the test determines the number of snapshot log tables containing more rows than specified by the Snapshot log's table size parameter. If this number is greater than the threshold value specified in the threshold argument, then an alert is generated. 
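
Snapshot logs and their underlying tables can be listed from DBA_SNAPSHOT_LOGS and each log table counted individually; a sketch, where MLOG$_EMP stands in for an actual log table name:

-- List snapshot logs and the tables that implement them
SELECT log_owner, master, log_table
  FROM dba_snapshot_logs;

-- Count the rows in one snapshot log table (name taken from the query above)
SELECT COUNT(*) FROM mlog$_emp;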

Tablespace Full 

As segments within a tablespace grow, the free space within that tablespace decreases. Should free space become insufficient, the creation of new segments or the extension of existing segments will fail.

This event test checks for the total free space in the tablespace specified by the Tablespace name. If the percentage of used space is greater than the values specified in the threshold arguments, then a warning or critical alert is generated. 
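
The used-space percentage can be approximated by hand from DBA_DATA_FILES and DBA_FREE_SPACE; a sketch, not the exact query the event test runs:

-- Percentage of space used per tablespace
SELECT d.tablespace_name,
       ROUND((1 - NVL(f.free_bytes, 0) / d.total_bytes) * 100, 2) AS pct_used
  FROM (SELECT tablespace_name, SUM(bytes) total_bytes
          FROM dba_data_files GROUP BY tablespace_name) d,
       (SELECT tablespace_name, SUM(bytes) free_bytes
          FROM dba_free_space GROUP BY tablespace_name) f
 WHERE d.tablespace_name = f.tablespace_name (+);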

Table 2-9 Database Specialized Management Event Tests - Advanced Queuing  
Event Test  Description 

AQ Expired Messages Count 

This data item contains the number of messages for the current queue that are in the 'EXPIRED' state.

This test checks the number of messages in the 'EXPIRED' state for the queues specified by the Queue Name(s) parameter. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated. 

AQ Ready Messages Count 

This data item contains the number of messages for the current queue in the 'READY' state.

This test checks the number of messages in the 'READY' state for the queues specified by the Queue Name(s) parameter. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.  

AQ Waiting Messages Count 

This data item contains the number of messages for the current queue in the 'WAITING' state.

This test checks the number of messages in the 'WAITING' state for the queues specified by the Queue Name(s) parameter. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated. 

Table 2-10 Database Specialized Management Event Tests - Cluster Databases  
Event Test  Description 

Global Cache Converts 

This data item represents the average convert time, in seconds, during this sample period. This test checks the Global Cache Converts for the instance specified by the Instance Name(s) parameter. If the value is greater than the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.  

Global Cache Convert Timeouts 

This data item represents the number of times per second, for each Oracle server instance, that lock converts in the global cache timed out during this sample period.

This test checks the Global Cache Convert Timeouts per second for the instance specified by the Instance Name(s) parameter. If the value is greater than the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.  

Global Cache CR Request 

This data item represents the average time for a consistent-read (CR) block to be received during this sample period. This test checks the Global Cache CR Request for the instance specified by the Instance Name(s) parameter. If the value is greater than the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.  

Global Cache CR Timeouts 

This data item represents the number of times per second, for each cluster database instance, that a foreground process requested a consistent-read (CR) block and the request timed out during this sample period. This test checks the Global Cache CR Timeouts for the instance specified by the Instance Name(s) parameter. If the value is greater than the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.  

Global Cache FreeList Waits 

This data item represents the number of times per second, for each cluster database instance, that a foreground process had to wait for a lock element during this sample period. This test checks the Global Cache Freelist Waits per second for the instance specified by the Instance Name(s) parameter. If the value is greater than the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.  

Global Cache Gets 

This data item represents the average get time, in seconds, during this sample period. This test checks the Global Cache Gets for the instance specified by the Instance Name(s) parameter. If the value is greater than the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.  

Block Class Pings 

This data item represents the ping count per second for a block class during this sample period. This test checks the Ping count per second for the block class specified by the Class Name(s) parameter. If the value is greater than the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.  

Instance Pings 

This data item represents the ping ratio during this sample period. This test checks the Ping Ratio for the instance specified by the Instance Name(s) parameter. If the value is greater than the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.  

Total Pings 

This data item represents the ping count per second for a cluster database during this sample period. This test checks the total number of pings per second. If the value is greater than the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.  
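
On a cluster database, the cumulative statistics behind these tests can be inspected across all instances through GV$SYSSTAT; a sketch for manual review:

-- Global cache statistics for every instance in the cluster
SELECT inst_id, name, value
  FROM gv$sysstat
 WHERE name LIKE 'global cache%'
 ORDER BY inst_id, name;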

Table 2-11 Database Specialized Management Event Tests - Data Guard  
Event Test  Description 

Data Guard - Actual Apply Delay 

The difference (in number of archived redo logs) between the current log at the primary site and the last log applied on the standby site. 

Data Guard - Data Not Applied 

The time difference (in minutes) between the last archived redo log received and the last log applied on the standby. 

Data Guard - Logs Not Applied 

The difference (in number of archived redo logs) between the last log received and the last log applied on the standby. 

Data Guard - Logs Not Shipped 

The difference (in number of archived redo logs) between the current log on the primary site and the last log shipped to the standby site. 

Data Guard - Potential Data Loss 

The time difference (in minutes) between the current redo log on the primary site and the last log received on the standby site. 

Data Guard - Status 

This event test checks the status of the Data Guard configuration. Note: If the status is not SUCCESS, then this event test is triggered. 

Descriptions of Database Event Tests

The Oracle Database Event Tests are listed in alphabetical order.

% CPU Time

Description

This data item represents the percentage of time, instance-wide, that the CPU spent executing instructions during this sample period.

This test checks the percentage of time spent executing instructions by the CPU, instance-wide, during this sample period. If the % CPU Time is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.

Data Source

DeltaCpuTime / (DeltaTotalWait + DeltaCpuTime)
where:

DeltaCpuTime: difference in total CPU time consumed between the current and previous sample

DeltaTotalWait: difference in total wait time between the current and previous sample
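
The Delta values are differences between two samples of cumulative statistics. Assuming the standard dynamic performance views (not necessarily what the Intelligent Agent queries), the cumulative figures can be read as follows:

-- Cumulative CPU consumed, in 100ths of a second
SELECT value FROM v$sysstat WHERE name = 'CPU used by this session';

-- Cumulative wait time, in 100ths of a second
-- (idle events should be excluded for a fair total)
SELECT SUM(time_waited) FROM v$system_event;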

Parameters

For information about setting appropriate threshold values, see Baselining Threshold Values.

Output

Instance name and percentage of time spent by CPU.

Recommended Frequency

5 minutes

User Action

When CPU is the largest contributor to total response time, it must be broken down into its components to properly understand the problem.

Selecting this resource and drilling down leads to the CPU Breakdown Chart which shows the breakdown of database CPU usage into its various components.

% Shared Pool Free

Description

This data item represents the percentage of the Shared Pool that is currently marked as free.

This test checks the percentage of Shared Pool that is currently free. If the value is less than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.

Data Source

((Free/Total)*100)
where:

Free: current amount of free memory in the shared pool

Total: total size of the shared pool
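
Free and Total can be read directly from V$SGASTAT; a manual sketch:

-- Free memory in the shared pool versus its total size
SELECT f.bytes AS free_bytes, t.bytes AS total_bytes,
       ROUND(f.bytes / t.bytes * 100, 2) AS pct_free
  FROM (SELECT bytes FROM v$sgastat
         WHERE pool = 'shared pool' AND name = 'free memory') f,
       (SELECT SUM(bytes) bytes FROM v$sgastat
         WHERE pool = 'shared pool') t;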

Parameters

For information about setting appropriate threshold values, see Baselining Threshold Values.

Output

Instance name and shared pool free percentage.

Recommended Frequency

5 minutes

User Action

If the percentage of Free Memory in the Shared Pool rises above 50%, too much memory has been allocated to the shared pool. This extra memory could be better utilized by other applications on the machine. In this case the size of the Shared Pool should be decreased. This can be accomplished by modifying the SHARED_POOL_SIZE initialization parameter.

To view the current shared pool statistics use the SGA Overview Chart. This chart shows the current SGA parameter settings as well as the size of various components of the SGA.

% Wait Time

Description

This data item represents the percentage of time spent waiting, instance-wide, for resources or objects during this sample period.

This test checks the percentage of time spent waiting, instance-wide, for resources or objects during this sample period. If the % Wait Time is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.

Data Source

DeltaTotalWait / (DeltaTotalWait + DeltaCpuTime)
where:

DeltaTotalWait: difference in total wait time between the current and previous sample

DeltaCpuTime: difference in total CPU time consumed between the current and previous sample

Parameters

For information about setting appropriate threshold values, see Baselining Threshold Values.

Output

Instance name and percentage of time spent waiting.

Recommended Frequency

5 minutes

User Action

Investigate further to determine which specific wait events are responsible for the bulk of the wait time. Individual wait events may identify unique problems within the database. Diagnosis will be tailored where appropriate through drilldowns specific to individual wait events. Selecting this resource and drilling down leads to the Wait Analysis Overview Chart, which shows the breakdown of wait time into specific wait events.

Alert File Large

Description

The ALERT file is a special trace file containing a chronological log of messages and errors. Oracle always appends to the file. To control the size of an ALERT file you must manually delete the file when you no longer need it.

This event test checks for file size of the ALERT file. If the file is greater than the values specified in the threshold arguments, then a warning or critical alert is generated.

Parameters
Output

Current size of ALERT file in kilobytes

Recommended Frequency

10 minutes

User Action

Delete the ALERT file to recover disk space. Note this file can be safely deleted while the instance is running, although you might want to make an archived copy of it first.

AQ Expired Messages Count

Description

This data item contains the number of messages for the current queue that are in the 'EXPIRED' state.

This test checks the number of messages in the 'EXPIRED' state for the queues specified by the Queue Name(s) parameter. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.

Data Source

SELECT name, expired FROM gv$aq g, all_queues d WHERE g.qid = d.qid ORDER BY name

Parameters

For information about setting appropriate threshold values, see Baselining Threshold Values.

Output

Queue name and number of messages in the 'EXPIRED' state.

Recommended Frequency

5 minutes

User Action

The threshold for the Expired Queue may have been reached because the dequeuing process is not fast enough to process the messages within the specified time. In the case of propagation, the propagation process may have died; if so, messages expire before they are propagated.

AQ Ready Messages Count

Description

This data item contains the number of messages for the current queue in the 'READY' state.

This test checks the number of messages in the 'READY' state for the queues specified by the Queue Name(s) parameter. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.

Data Source

SELECT name, ready FROM gv$aq g, all_queues d WHERE g.qid = d.qid ORDER BY name

Parameters

For information about setting appropriate threshold values, see Baselining Threshold Values.

Output

Queue name and number of messages in the 'READY' state.

Recommended Frequency

5 minutes

User Action

Reaching the threshold of the Ready Queue indicates that messages are being dequeued more slowly than they are being enqueued. Consider increasing the processing speed of the dequeuer processes, either by giving more resources to the dequeuing process or by spawning more dequeuing processes.

AQ Waiting Messages Count

Description

This data item contains the number of messages for the current queue in the 'WAITING' state.

This test checks the number of messages in the 'WAITING' state for the queues specified by the Queue Name(s) parameter. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.

Data Source

SELECT name, waiting FROM gv$aq g, all_queues d WHERE g.qid = d.qid ORDER BY name

Parameters

For information about setting appropriate threshold values, see Baselining Threshold Values.

Output

Queue name and number of messages in the 'WAITING' state.

Recommended Frequency

5 minutes

User Action

This data item reports the number of messages in the queue that are not meant to be processed yet, but are waiting for a certain time period to elapse before they are allowed to be processed. When the required waiting period expires, the messages move into the 'READY' state and become available to consumers.

Archive Full

Description

When running a database in ARCHIVELOG mode, the archiving of the online redo log is enabled. Filled groups of the online redo log are archived, by default, to the destination specified by the LOG_ARCHIVE_DEST initialization parameter. If this destination device becomes full, the database operation is temporarily suspended until disk space is available.

If the database is running in ARCHIVELOG mode, this event test checks for available redo log destination devices. If the space available is less than the threshold value given in the threshold arguments, then a warning or critical alert is generated.

If the database is not running in ARCHIVELOG mode, or all archive destinations are standby databases for Oracle8i, this event test fails to register.

Parameters
Output
Recommended Frequency

10 minutes

User Action

Verify the device specified in the initialization parameter LOG_ARCHIVE_DEST is set up properly for archiving.

If the LOG_ARCHIVE_DEST initialization parameter is set up correctly and this event test triggers, then free up more space in the destination specified by the archive destination parameters.

Archive Full (%)

Description

The Archive Full (%) event test monitors the same destination device as the Archive Full event test. The Archive Full (%) event test, however, returns the percentage of free space remaining on the log destination.

If the space available is less than the threshold value given in the threshold arguments, then a warning or critical alert is generated.

If the database is not running in ARCHIVELOG mode or all archive destinations are standby databases for Oracle8i, this event test fails to register.

Parameters
Output

For releases of the Intelligent Agent prior to 9i: Percentage of free space available on the destination drive. Note: If you have more than one number for the percentage of free space available, this means you have more than one destination. Check the percentage of free space for all destinations.

For the 9i release of the Intelligent Agent: Destination and its currently used space, as a percentage

Recommended Frequency

10 minutes

User Action

Verify the device specified in the initialization parameter LOG_ARCHIVE_DEST is set up properly for archiving.

For Oracle7, verify that the LOG_ARCHIVE_DEST initialization parameter is set up properly for archiving.

For Oracle8, verify that the LOG_ARCHIVE_DEST and LOG_ARCHIVE_DUPLEX_DEST initialization parameters are set up properly for archiving.

For Oracle8i, there are two methods you can use to specify archive destinations. The first method is to use the LOG_ARCHIVE_DEST_n parameter (where n is an integer from 1 to 5) to specify from one to five different destinations for archival. Each numerically-suffixed parameter uniquely identifies an individual destination, for example, LOG_ARCHIVE_DEST_1, LOG_ARCHIVE_DEST_2, and so on. The second method, which allows you to specify a maximum of two locations, is to use the LOG_ARCHIVE_DEST parameter to specify a primary archive destination and the LOG_ARCHIVE_DUPLEX_DEST parameter to determine an optional secondary location.

If the LOG_ARCHIVE_DEST initialization parameter is set up correctly and this event test triggers, then free up more space in the destination specified by the archive destination parameters.
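
To review the archiving configuration by hand, a sketch (V$ARCHIVE_DEST in this form applies to Oracle8i and later):

-- Confirm the database is in ARCHIVELOG mode
SELECT log_mode FROM v$database;

-- Status of each configured archive destination
SELECT dest_id, status, destination FROM v$archive_dest;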

Archiver Hung

Description

This event test signifies that the archiver of the database being monitored has been temporarily suspended since the last sample time.

If the database is running in ARCHIVELOG mode, an alert is displayed when archiving is hung (ORA-00257) messages are written to the ALERT file. The ALERT file is a special trace file containing a chronological log of messages and errors.

If the database is not running in ARCHIVELOG mode, this test will not register.

Parameters

None

Output

ALERT log error messages since last sample time

Recommended Frequency

30 seconds

User Action

Examine ALERT log and archiver trace file for additional information; however, the most likely cause of this message is that the destination device is out of space to store the redo log file. Verify the device specified in the initialization parameter LOG_ARCHIVE_DEST is set up properly for archiving. Note: This event does not automatically clear since there is no automatic way of determining when the problem has been resolved. Hence, you need to manually clear the event once the problem is fixed.

Average File Read Time

Description

This data item represents the average time spent performing a read from this datafile during the sample period. This value will always be 0 unless the TIMED_STATISTICS parameter is TRUE.

The value of this item is reported in 100ths of a second. Therefore a value of 100 would mean on average that one second of time was spent per physical read to this file during the last sample period.

There is a drilldown chart available from this chart called Timed Statistics Chart. This chart shows the current value for the TIMED_STATISTICS parameter. Use the Turn On Timed Statistics drilldown to turn on timed statistics for the instance.

This test checks the average time spent performing a read for a file specified by File Name(s) parameter during this sample period. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.

Data Source

DeltaReadTime / DeltaPhysicalReads
where:

DeltaReadTime: difference in total time spent reading from the datafile between the current and previous sample

DeltaPhysicalReads: difference in the number of physical reads from the datafile between the current and previous sample
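
The Delta values are differences between two samples of cumulative per-file statistics. Assuming the standard V$FILESTAT and V$DATAFILE views, the cumulative figures behind the calculation can be sampled like this:

-- Cumulative reads and read time (100ths of a second) per datafile since startup
SELECT df.name, fs.phyrds, fs.readtim,
       ROUND(fs.readtim / GREATEST(fs.phyrds, 1), 2) AS avg_read_time
  FROM v$filestat fs, v$datafile df
 WHERE fs.file# = df.file#;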

Parameters

For information about setting appropriate threshold values, see Baselining Threshold Values.

Output

File name and average file read time in 100ths of a second.

Recommended Frequency

5 minutes

User Action

This statistic shows the average read time which is the average amount of time spent for each read against the datafile. This number may be as important as the number of reads against the file. Comparing read times across multiple datafiles shows you which datafiles are slower than others. Read times may be improved if contention is reduced on the datafile, although read times may be high due to the file residing on a slow disk. You need to identify whether the SQL accessing the file can be tuned, as well as the underlying characteristics of the hardware device.

Use SQL tuning to first reduce the IO rates to this file. Often high physical read rates are attributed to queries that are performing full table scans. Full table scans, especially of large tables, should be avoided whenever possible. To identify the SQL statements that are causing the most physical reads use the Top SQL (Physical Reads) Chart. This quickly identifies the SQL statements that are prime candidates for tuning.

If your SQL statements are already adequately tuned for I/O, you can use the information in this chart to determine whether some tablespaces within the file must be separated in order to spread the I/O load more evenly across the available disks.

Average File Write Time

Description

This data item represents the average time spent performing a write to this datafile during the sample period. This value will always be 0 unless the TIMED_STATISTICS parameter is TRUE.

The value of this item is reported in 100ths of a second. Therefore a value of 100 would indicate on average that one second of time was spent per physical write to this file during the last sample period.

There is a drilldown chart available from this chart called Timed Statistics Chart. This chart shows the current value for the TIMED_STATISTICS parameter. Use the Turn On Timed Statistics drilldown to turn on timed statistics for the instance.

This test checks the average time spent performing a write for a file specified by File Name(s) parameter during this sample period. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.

Data Source

DeltaWriteTime / DeltaPhysicalWrites
where:

DeltaWriteTime: difference in total time spent writing to the datafile between the current and previous sample

DeltaPhysicalWrites: difference in the number of physical writes to the datafile between the current and previous sample

Parameters

For information about setting appropriate threshold values, see Baselining Threshold Values.

Output

File name and average file write time in 100ths of a second.

Recommended Frequency

5 minutes

User Action

A large value for average write time to a particular file might suggest that either the underlying hardware is slow or that there is enough contention on the disk to slow it down. If a particular datafile is experiencing a slow average write time, you can further determine what tablespaces are located within the file.

If the file contains the TEMP tablespace, you can view the Top Sessions by Disk Sorts Chart to determine which sessions are performing the most sorts. The SQL statements being executed by these sessions should be examined to see if their sorting can be tuned or reduced. Increasing the SORT_AREA_SIZE initialization parameter may help move more of these sorts into memory and off the disk.

If the physical writes are caused by inserts or modifications into multiple tables within the file you may want to further investigate separating the tables into individual datafiles. Ideally these datafiles would reside on separate disks.

Average Redo Write Size Per Second

Description

This data item represents the amount of redo, in bytes, generated per second during this sample period.

The redo log buffer is a circular buffer in the SGA that holds information about changes made to the database. This information is stored in redo entries. Redo entries contain the information necessary to reconstruct, or redo, changes made to the database by INSERT, UPDATE, DELETE, CREATE, ALTER or DROP operations. Redo entries can be used for database recovery if necessary.

This test checks the amount of redo in bytes generated per second. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.

Data Source

DeltaRedoSize / Seconds
where:

DeltaRedoSize: difference in total redo size generated between the current and previous sample

Seconds: number of seconds in the sample period
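
DeltaRedoSize is the change in a cumulative V$SYSSTAT counter; a sketch of reading it (sample twice and divide the difference by the elapsed seconds):

-- Cumulative redo generated, in bytes, since instance startup
SELECT value FROM v$sysstat WHERE name = 'redo size';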

Parameters

For information about setting appropriate threshold values, see Baselining Threshold Values.

Output

Instance name and redo size in bytes per second.

Recommended Frequency

5 minutes

User Action

The LOG_BUFFER initialization parameter determines the amount of memory that is used when redo entries are buffered to the redo log file.

Consider increasing the LOG_BUFFER initialization parameter to increase the size of the redo log buffer should waiting be a problem. Redo log entries contain a record of the changes that have been made to the database block buffers. The log writer process (LGWR) writes redo log entries from the log buffer to a redo log. The redo log buffer should be sized so space is available in the log buffer for new entries, even when access to the redo log is heavy.

Average Redo Write Size Per Transaction

Description

This data item represents the amount of redo, in bytes, generated per transaction during this sample period.

The redo log buffer is a circular buffer in the SGA that holds information about changes made to the database. This information is stored in redo entries. Redo entries contain the information necessary to reconstruct, or redo, changes made to the database by INSERT, UPDATE, DELETE, CREATE, ALTER or DROP operations. Redo entries are used for database recovery, if necessary.

The value of this statistic is zero if there have been no write or update transactions committed or rolled back during the last sample period. If the bulk of the activity to the database is read only, the corresponding "per second" data item of the same name will be a better indicator of current performance.

This test checks the amount of redo in bytes generated per transaction. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.

Data Source

DeltaRedoSize / DeltaTransactions
where:

DeltaRedoSize: difference in total redo size generated between the current and previous sample

DeltaTransactions: difference in the number of commits and rollbacks between the current and previous sample

Parameters

For information about setting appropriate threshold values, see Baselining Threshold Values.

Output

Instance name and amount of redo in bytes generated per transaction.

Recommended Frequency

5 minutes

User Action

The LOG_BUFFER initialization parameter determines the amount of memory that is used when buffering redo entries to the redo log file.

Consider increasing the LOG_BUFFER initialization parameter to increase the size of the redo log buffer should waiting be a problem. Redo log entries contain a record of the changes that have been made to the database block buffers. The log writer process (LGWR) writes redo log entries from the log buffer to a redo log. The redo log buffer should be sized so space is available in the log buffer for new entries, even when access to the redo log is heavy.

Average Rows Per Sort

Description

This data item represents the average number of rows per sort during this sample period.

This test checks the average number of rows per sort during sample period. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.

Data Source

(DeltaSortRows / (DeltaDiskSorts + DeltaMemorySorts)) * 100
where:

DeltaSortRows: difference in the total number of rows sorted between the current and previous sample

DeltaDiskSorts: difference in the number of disk sorts between the current and previous sample

DeltaMemorySorts: difference in the number of memory sorts between the current and previous sample
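
The three Delta values are changes in cumulative sort statistics; assuming V$SYSSTAT, they can be sampled like this:

-- Cumulative sort statistics since instance startup
SELECT name, value
  FROM v$sysstat
 WHERE name IN ('sorts (rows)', 'sorts (disk)', 'sorts (memory)');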

Parameters

For information about setting appropriate threshold values, see Baselining Threshold Values.

Output

Instance name and the number of rows per sort.

Recommended Frequency

5 minutes

User Action

This statistic displays the average number of rows that are being processed per sort. The size provides information about the sort size of the database. This can help you to set the SORT_AREA_SIZE initialization parameter appropriately. If the rows per sort are high, investigate the sessions and SQL performing the most sorts to see whether those SQL statements can be tuned to reduce the size of the sort sample set.

The sessions that are performing the most sorts should be identified, such that the SQL they are executing can be further identified. The sort area sizes for the database may be sized correctly and the application SQL may be performing unwanted or excessive sorts. The sessions performing the most sorts are available through the Top Sessions by Disk Sorts drilldown.

Further drilldown into the session performing the most disk sorts with the Current SQL Chart displays the SQL statement responsible for the disk sorts. Further investigation on how to best tune a particular SQL statement can be done by using the Explain Plan drilldown, or if you are on NT, the Tune SQL Statement drilldown as well.

The Top SQL (Sorts) Chart provides a mechanism to quickly display the SQL statements in the cache presented in sorted order by their number of sort operations. This is an alternative to viewing the sort activity of current sessions. It allows you to view sort activity via SQL statements and contains cumulative statistics for all executions of that statement.

If the top sessions and their associated SQL statements seem to be okay, you can drill down to see a more complete breakdown of sort statistics and initialization parameters with the Sort Statistics At A Glance Chart.

If excessive sorts are taking place on disk and the queries are correct, consider increasing the SORT_AREA_SIZE initialization parameter to increase the size of the sort area. A larger sort area allows the Oracle Server to keep sorts in memory, reducing the number of I/O operations required to do an equivalent amount of work using the current sort area size.

Block Class Pings

Description

This data item represents the ping count per second for a block class during this sample period.

This test checks the Ping count per second for the block class specified by the Class Name(s) parameter. If the value is greater than the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.

Parameters

For information about setting appropriate threshold values, see Baselining Threshold Values.

Output

Class name and pings per second.

Recommended Frequency

5 minutes

Broken Jobs

Description

The Oracle server job queue is a database table that stores information about local jobs, such as the PL/SQL call to execute for a job and when to run the job. Database replication is also managed by using Oracle's job queue mechanism, using jobs to push deferred transactions to remote master sites, to purge applied transactions from the deferred transaction queue, or to refresh snapshot refresh groups.

A job can be broken in two ways:

Oracle has failed to successfully execute the job after sixteen attempts

The job has been explicitly marked as broken by using the procedure DBMS_JOB.BROKEN

This event test checks for broken DBMS jobs. A critical alert is generated if the number of broken jobs exceeds the value specified by the threshold argument.

Parameters

Critical threshold: Threshold for critical alert (number of jobs). Default is 0 jobs.

Output

Job identifiers of broken DBMS jobs

Recommended Frequency

60 seconds

User Action

Check the ALERT log and trace files for error information. Correct the problem that is preventing the job from running. Force immediate re-execution of the job by calling DBMS_JOB.RUN.
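
A manual sketch of finding and restarting broken jobs (the job number 42 is purely illustrative):

-- List broken jobs with their call text and failure counts
SELECT job, what, failures
  FROM dba_jobs
 WHERE broken = 'Y';

-- Force immediate re-execution of one job (SQL*Plus syntax)
EXECUTE DBMS_JOB.RUN(42);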

Buffer Cache Hit %

Description

This data item represents the data block buffer cache efficiency, as measured by the percentage of times the data block requested by the query is in memory.

Effective use of the buffer cache can greatly reduce the I/O load on the database. If the buffer cache is too small, frequently accessed data will be flushed from the buffer cache too quickly which forces the information to be re-fetched from disk. Since disk access is much slower than memory access, application performance will suffer. In addition, the extra burden imposed on the I/O subsystem could introduce a bottleneck at one or more devices which would further degrade performance.

This test checks the percentage of buffer requests that were already in buffer cache. If the value is less than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.

Data Source

((DeltaLogicalGets - (DeltaPhysicalReads - DeltaPhysicalReadsDirect)) / DeltaLogicalGets) * 100
where:

DeltaLogicalGets: difference in logical gets (db block gets plus consistent gets) between the current and previous sample

DeltaPhysicalReads: difference in the number of physical reads between the current and previous sample

DeltaPhysicalReadsDirect: difference in the number of physical reads that bypass the buffer cache between the current and previous sample
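
The Delta values are changes in cumulative V$SYSSTAT counters between samples; a sketch of reading the counters that feed the formula:

-- Cumulative counters behind the buffer cache hit calculation
-- (logical gets = 'db block gets' + 'consistent gets')
SELECT name, value
  FROM v$sysstat
 WHERE name IN ('db block gets', 'consistent gets',
                'physical reads', 'physical reads direct');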

Parameters

For information about setting appropriate threshold values, see Baselining Threshold Values.

Output

Instance Name and buffer cache hit percentage.

Recommended Frequency

5 minutes

User Action

A low buffer cache hit ratio means that the server must often go to disk to retrieve the buffers required to satisfy a query. The queries that perform the most physical reads lower the numerical value of this statistic. Typically queries that perform full table scans force large amounts of buffers into the cache, aging out other buffers that may be required by other queries later. The Top Sessions by Physical Reads Chart will show the sessions performing the most reads and through further drilldown their associated queries can be identified. Similarly, the Top SQL (Physical Reads) Chart shows which SQL statements are performing the most physical reads. The statements performing the most I/O should be looked at for tuning.

The difference between the two is that the Top Sessions chart shows the sessions that are responsible for the physical reads at any given moment. The Top SQL view shows all SQL that is still in the cache. The top statement may not be executing currently, and thus not responsible for the current poor buffer cache hit ratio.

If the queries seem to be well tuned, the size of the buffer cache also determines how often buffers need to be fetched from disk. The DB_BLOCK_BUFFERS initialization parameter determines the number of database buffers available in the buffer cache. It is one of the primary parameters which contribute to the total memory requirements of the SGA on the instance. The DB_BLOCK_BUFFERS parameter, together with the DB_BLOCK_SIZE parameter, controls the total size of the buffer cache. Since DB_BLOCK_SIZE can only be specified when the database is first created, normally the size of the buffer cache is controlled using the DB_BLOCK_BUFFERS parameter.

Consider increasing the DB_BLOCK_BUFFERS initialization parameter to increase the size of the buffer cache. This increase allows the Oracle Server to keep more information in memory, thus reducing the number of I/O operations required to do an equivalent amount of work using the current cache size.

Chained/Migrated Row

Description

In two circumstances the data for a row in a table may be too large to fit into a single data block. This results in row fragmentation.

In the first case, the row is too large to fit into one data block when it is first inserted. In this case, the Oracle Server stores the data for the row in a chain of data blocks reserved for that segment. Row chaining (or continuation) most often occurs with large rows, such as rows that contain a column of data type LONG or LONG RAW. Row chaining in these cases is unavoidable without increasing the DB_BLOCK_SIZE.

In the second case, however, a row that originally fit into one data block is updated so that the overall row length increases and the block's free space is already completely filled. In this case, Oracle migrates the data for the entire row to a new data block, assuming the entire row can fit into a new block. Oracle preserves the original row piece of a migrated row to point to the new block containing the migrated row.

When a row is chained or migrated, I/O performance associated with this row decreases because Oracle must scan more than one data block to retrieve the information for the row.

This event test monitors whether continued rows are found in the segments specified by the Segment name, Segment owner, and Segment type parameters. If continued rows are found, an alert is generated.

Parameters

Note: All filters must include SQL syntax, for example, = 'ABC', in ('XYZ', 'ABC'), like '%ABC'. There are higher resource requirements if there is a large number of objects being monitored at high frequencies, for example, checking the space for all 200 segments every 2 minutes. Where possible, Oracle recommends that you use the filters to narrow the scope of the objects being monitored. Also, set the polling schedule to a value that is appropriate to your environment. For example, segments that do not grow rapidly in size may be checked every 2 days instead of every 5 minutes.

Output

Names of segments containing chained or migrated rows.

Recommended Frequency

1 day

Note: This event test is CPU-intensive. You may want to schedule the test for once a day at non-business hours.

User Action

If a segment containing fragmented rows has been detected, there are two ways to solve the problem. If rows are not likely to continue growing, rebuild the table. Row fragmentation is eliminated as rows are tightly packed into each database block during re-creation.

If rows are likely to continue growing through updates, consider increasing the segment's PCTFREE value to reduce the likelihood of future row fragmentation.

Note: To determine what needs to be done, the Chained/Migrated Row event test gathers statistics using the ANALYZE command. Running this command may be a resource-intensive operation. Therefore, Oracle recommends running the Chained/Migrated Row event test during off-peak periods.
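
A manual sketch of the same check using the ANALYZE command (SCOTT.EMP is illustrative; the CHAINED_ROWS table is created by the utlchain.sql script shipped with the server):

-- Record the ROWIDs of chained or migrated rows for one table
ANALYZE TABLE scott.emp LIST CHAINED ROWS INTO chained_rows;

-- Count what was found
SELECT COUNT(*) FROM chained_rows WHERE table_name = 'EMP';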

Chunk Small

Description

The Oracle Server allocates space for segments in units of one extent. When the existing extents of a segment are full, the Oracle Server allocates another extent for that segment. In order to do so, Oracle searches through the free space in the tablespace containing the segment for the first free, contiguous set of data blocks sufficient to meet the required extent's size. If sufficient space is not found, an error is returned by the Oracle Server.

This event test checks for the largest chunk free space in the tablespace specified by the Tablespace name, Segment name, and Segment type parameters. If any table, index, cluster or rollback segments within the tablespace cannot allocate the additional number of extents specified in the thresholds, then a warning or critical alert is generated.

Example

If the largest chunk of free space in the specified tablespace can only contain 2 extents, then 2 is compared to the threshold values. If 3 extents are specified for a critical alert, the critical alert is triggered because 3 extents cannot be allocated in the tablespace.

Parameters

Note: All filters must include SQL syntax, for example, = 'ABC', in ('XYZ', 'ABC'), like '%ABC'. There are higher resource requirements if there is a large number of objects being monitored at high frequencies, for example, checking the space for all 200 segments every 2 minutes. Where possible, Oracle recommends that you use the filters to narrow the scope of the objects being monitored. Also, set the polling schedule to a value that is appropriate to your environment. For example, segments that do not grow rapidly in size may be checked every 2 days instead of every 5 minutes.

Output
Recommended Frequency

10 minutes

User Action

Increase the size of the tablespace by enabling automatic extension for one of its existing data files, by manually resizing one of its existing data files, or by adding a new data file.

Alternatively, if the tablespace is suffering from free space fragmentation, consider reorganizing the entire tablespace by dropping and re-creating all segments within it. When reorganizing a tablespace, size the extents as integral divisors of the usable size of the data files in which they reside, and try to limit the tablespace to no more than 2 or 3 distinct extent sizes. Ensure extents within a segment are the same size, or multiples of each other, by specifying STORAGE parameters where NEXT=INITIAL and PCTINCREASE = 0. For segments that are linearly scanned, choose an extent size that is a multiple of the number of blocks read during each multiblock read. The following sketch illustrates the first approach; the data file name and sizes are hypothetical.
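
    -- Find the largest contiguous free chunk in each tablespace
    select tablespace_name, max(bytes) as largest_chunk
      from dba_free_space
     group by tablespace_name;

    -- Enlarge an existing data file, or let it grow automatically
    alter database datafile '/u01/oradata/users01.dbf' resize 500M;
    alter database datafile '/u01/oradata/users01.dbf'
      autoextend on next 10M maxsize 2000M;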

Note: Running the Chunk Small event test may be a resource-intensive operation. Therefore, Oracle recommends running the Chunk Small event test during off-peak periods.

Commit %

Description

This data item represents the percentage of transactions that ended as commits rather than rollbacks during this sample period.

This test checks the percentage of transactions that end as commits, as opposed to rollbacks. If the value is less than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.

Data Source

(DeltaCommits / (DeltaCommits + DeltaRollbacks)) * 100
where:

DeltaCommits: difference in the 'user commits' statistic (from V$SYSSTAT) between the end and the start of this sample period

DeltaRollbacks: difference in the 'user rollbacks' statistic (from V$SYSSTAT) between the end and the start of this sample period

Parameters

For information about setting appropriate threshold values, see Baselining Threshold Values.

Output

Instance name and commit percentage.

Recommended Frequency

5 minutes

User Action

This statistic is an indication of how often transactions are completing successfully. A low percentage means that users are issuing the ROLLBACK statement or encountering errors in their transactions. You should investigate further to determine whether the rollbacks are part of some faulty application logic or due to errors occurring through database access.
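
The underlying counters can also be sampled directly from V$SYSSTAT; differencing two samples taken at the start and end of an interval yields the same percentage the test computes. A minimal sketch:

    -- Cumulative commit and rollback counts since instance startup
    select name, value
      from v$sysstat
     where name in ('user commits', 'user rollbacks');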

Commits Per Second

Description

This data item represents the number of user commits performed per second during the sample period. When a user commits a transaction, the redo generated that reflects the changes made to database blocks must be written to disk. Commits often represent the closest thing to a user transaction rate.

This test checks the number of user commits per second. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.

Data Source

DeltaCommits / Seconds
where:

DeltaCommits: difference in the 'user commits' statistic (from V$SYSSTAT) between the end and the start of this sample period

Seconds: number of seconds in this sample period

Parameters

For information about setting appropriate threshold values, see Baselining Threshold Values.

Output

Instance name and the number of commits per second.

Recommended Frequency

5 minutes

User Action

This statistic is an indication of how much work is being accomplished within the database. A spike in the transaction rate may not necessarily be bad. If response times stay close to normal, it means your system can handle the added load. In fact, a drop in transaction rates combined with an increase in response time may be an indicator of problems. Depending upon the application, transaction loads may vary widely across different times of the day.

Commits Per Transaction

Description

This data item represents the number of user commits performed per transaction during the sample period. When a user commits a transaction, the redo generated that reflects the changes made to database blocks must be written to disk. Commits often represent the closest thing to a user transaction rate.

The value of this statistic will be zero if there have not been any write or update transactions committed or rolled back during the last sample period. If the bulk of the activity to the database is read only, the corresponding "per second" data item of the same name will be a better indicator of current performance.

This test checks the number of user commits per transaction. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.

Data Source

DeltaCommits / Transactions
where:

DeltaCommits: difference in the 'user commits' statistic (from V$SYSSTAT) between the end and the start of this sample period

Transactions: number of transactions in this sample period (DeltaCommits + DeltaRollbacks)

Parameters

For information about setting appropriate threshold values, see Baselining Threshold Values.

Output

Instance name and the number of commits per transaction.

Recommended Frequency

5 minutes

User Action

This statistic is an indication of how much work is being accomplished within the database. A spike in the transaction rate may not necessarily be bad. If response times stay close to normal, it means your system can handle the added load. In fact, a drop in transaction rates combined with an increase in response time may be an indicator of problems. Depending upon the application, transaction loads may vary widely across different times of the day.

Data Block Corruption

Description

This event test signifies that the database being monitored has generated a corrupted block error to the ALERT file since the last sample time. The ALERT file is a special trace file containing a chronological log of messages and errors. An alert event is triggered when data block corrupted messages are written to the ALERT file.

Parameters

None

Output
Recommended Frequency

30 seconds

User Action

Examine the ALERT log for additional information. Note: This event does not automatically clear since there is no automatic way of determining when the problem has been resolved. Hence, you need to manually clear the event once the problem is fixed.

Data Dictionary Hit %

Description

This data item represents dictionary cache efficiency as measured by the percentage of requests against the dictionary data that were already in memory. It is important to determine whether the misses on the data dictionary are actually affecting the performance of the Oracle Server.

The shared pool is an area in the SGA that contains the library cache of shared SQL requests, the dictionary cache, and the other cache structures that are specific to a particular instance configuration.

Misses on the data dictionary cache are to be expected in some cases. Upon instance startup, the data dictionary cache contains no data, so any SQL statement issued is likely to result in cache misses. As more data is read into the cache, the likelihood of cache misses should decrease. Eventually the database should reach a steady state in which the most frequently used dictionary data is in the cache. At this point, very few cache misses should occur. To tune the cache, examine its activity only after your application has been running.

This test checks the percentage of requests against the data dictionary that were found in the Shared Pool. If the value is less than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.

Data Source

((Gets - Misses) / Gets) * 100
where:

Gets: difference in the sum of the GETS column of V$ROWCACHE between the end and the start of this sample period

Misses: difference in the sum of the GETMISSES column of V$ROWCACHE between the end and the start of this sample period

Parameters

For information about setting appropriate threshold values, see Baselining Threshold Values.

Output

Instance name and data dictionary hit percentage.

Recommended Frequency

5 minutes

User Action

If the percentage of gets is below 85% to 90%, consider increasing the value of the SHARED_POOL_SIZE initialization parameter to increase the memory available to the cache and to decrease the frequency with which dictionary data is flushed from the shared pool to make room for new data.
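
The cumulative equivalent of this percentage can be computed from V$ROWCACHE, as in the following sketch:

    -- Overall dictionary cache hit percentage since instance startup
    select (1 - sum(getmisses) / sum(gets)) * 100 as dict_hit_pct
      from v$rowcache;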

Data Dictionary Miss Ratio

Description

The shared pool is an area in the SGA that contains the library cache of shared SQL requests, the dictionary cache and the other cache structures that are specific to a particular instance configuration.

The dictionary cache efficiency, as measured by the miss ratio, records the percentage of times the dictionary data was not already in memory.

The shared pool mechanism can greatly reduce system resource consumption in at least three ways:

Parse time is avoided if the SQL statement is already in the shared pool.

Application memory overhead is reduced, since all applications use the same pool of shared SQL statements and dictionary resources.

I/O resources are saved, since dictionary elements which are in the shared pool do not require disk access.

If the shared pool is too small, users will consume additional resources to complete a database operation. For dictionary cache access, the overhead is primarily the additional I/O since the dictionary cache references that have been displaced from the cache will need to be re-fetched from disk.

This event test monitors the data dictionary cache miss ratio (percentage of failures) against the values specified by the threshold arguments. If the number of occurrences exceeds the values specified, then a warning or critical alert is generated.

Parameters
Output

Current ratio

Recommended Frequency

30 seconds

User Action

The SHARED_POOL_SIZE initialization parameter controls the total size of the shared pool. Consider increasing SHARED_POOL_SIZE to decrease the frequency with which dictionary data is flushed from the shared pool to make room for new data.

Note: For Oracle Intelligent Agent release 9i, this event test has been obsoleted. It is recommended that you use the Data Dictionary Hit % event test. This event test is kept for backward compatibility with older versions of the Intelligent Agent.

Database Alert

Description

This event test signifies that the database being monitored has generated errors to the ALERT log file since the last sample time. The ALERT log file is a special trace file containing a chronological log of messages and errors. An alert event is triggered when Oracle Exception (ORA-006xx), deadlock detected (ORA-00060), or data block corrupted (ORA-01578) messages are written to the ALERT log file. A warning is displayed when other ORA messages are written to the ALERT log file.

Parameters

None

Output

Alert log error messages since last sample time

Recommended Frequency

60 seconds

User Action

Examine the ALERT log for additional information. Note: This event does not automatically clear since there is no automatic way of determining when the problem has been resolved. Hence, you need to manually clear the event once the problem is fixed.

Note: This event is valid only for releases of the Intelligent Agent prior to 9i.

Datafile Limit

Description

The DB_FILES initialization parameter specifies the maximum number of database files that can be opened for this database.

This event test checks for the utilization of the datafile resource against the values (percentages) specified by the threshold arguments. If the percentage of data files currently used to the limit set in the DB_FILES initialization parameter exceeds the values specified in the threshold arguments, then a warning or critical alert is generated.

Example

If 30 data files are used and the value of DB_FILES is 40, the percentage is 75% (30/40 x 100). This value is compared against the specified thresholds.

Parameters
Output

Current value and the limit specified by DB_FILES

User Action

Verify the current number of data files in use by the database. If the current value of DB_FILES is less than MAXDATAFILES, increase the DB_FILES instance parameter.
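
The following sketch compares the number of data files currently in use with the DB_FILES limit:

    select (select count(*) from v$datafile) as files_in_use,
           (select value from v$parameter
             where name = 'db_files') as db_files_limit
      from dual;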

DBWR Checkpoints

Description

This data item represents the number of times per second during this sample period that DBWn was asked to scan the cache and write all blocks marked for a checkpoint.

The database writer process (DBWn) writes the contents of buffers to datafiles. The DBWn processes are responsible for writing modified (dirty) buffers in the database buffer cache to disk.

When a buffer in the database buffer cache is modified, it is marked dirty. The primary job of the DBWn process is to keep the buffer cache clean by writing dirty buffers to disk. As buffers are dirtied by user processes, the number of free buffers diminishes. If the number of free buffers drops too low, user processes that must read blocks from disk into the cache are not able to find free buffers. DBWn manages the buffer cache so that user processes can always find free buffers.

When the Oracle Server process cannot find a clean reusable buffer after scanning a threshold of buffers, it signals DBWn to write. When this request to make free buffers is received, DBWn writes the least recently used (LRU) buffers to disk. By writing the least recently used dirty buffers to disk, DBWn improves the performance of finding free buffers while keeping recently used buffers resident in memory. For example, blocks that are part of frequently accessed small tables or indexes are kept in the cache so that they do not need to be read in again from disk. The LRU algorithm keeps more frequently accessed blocks in the buffer cache so that when a buffer is written to disk, it is unlikely to contain data that may be useful soon.

Additionally, DBWn periodically writes buffers to advance the checkpoint which is the position in the redo log from which crash or instance recovery would need to begin.

This test checks the number of times DBWR was asked to advance the checkpoint. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.

Data Source

DeltaCheckpoints / Seconds
where:

DeltaCheckpoints: difference in the 'DBWR checkpoints' statistic (from V$SYSSTAT) between the end and the start of this sample period

Seconds: number of seconds in this sample period

Parameters

For information about setting appropriate threshold values, see Baselining Threshold Values.

Output

Instance name and DBWR checkpoints per second.

Recommended Frequency

5 minutes

User Action

A checkpoint tells the DBWR to write out modified buffers to disk. This write operation is different from the make free request in that the modified buffers are not marked as free by the DBWR process. Dirty buffers may also be written to disk at this time and freed.

The write size is dictated by the _db_block_checkpoint_batch parameter. If writing, and subsequently waiting for, checkpoints to complete is a problem, the checkpoint completed event displays in the Top Waits by Time Waited Chart or the Sessions Waiting for this Event Chart.

If the database is often waiting for checkpoints to complete, you may want to increase the time between checkpoints. Check the current setting of the init.ora parameter db_block_checkpoint_batch:

    select name, value, isdefault
      from v$parameter
     where name = 'db_block_checkpoint_batch';

The value should be large enough to take advantage of parallel writes. The DBWR uses a write batch that is calculated as follows:

    ('db_files' * 'db_file_simultaneous_writes') / 2

The write batch is also limited by other factors; in particular, db_block_checkpoint_batch is always smaller than or equal to _db_block_write_batch. You can also consider enabling the checkpoint process (CKPT).

Deferred Transactions

Description

Oracle uses deferred transactions to propagate data-level changes asynchronously among master sites in an advanced replication system as well as from an updatable snapshot to its master table.

This event test checks for the number of deferred transactions. An alert is generated if the number of deferred transactions exceeds the value specified by the threshold argument.

Parameters

Threshold for alert (number of transactions). Default is 100 transactions.

Output

Number of deferred transactions

Recommended Frequency

30 seconds

User Action

When the advanced replication facility pushes a deferred transaction to a remote site, it uses a distributed transaction to ensure that the transaction has been properly committed at the remote site before the transaction is removed from the queue at the local site. If transactions are not being pushed to a given remote site, verify that the destination for the transaction was correctly specified. If you specify a destination database when calling DBMS_DEFER_SYS.SCHEDULE_EXECUTION using the DBLINK parameter, or DBMS_DEFER_SYS.EXECUTE using the DESTINATION parameter, make sure the full database link is provided.

Wrong view destinations can lead to erroneous deferred transaction behavior. Verify that the DEFCALLDEST and DEFTRANDEST views are the definitions from CATREPC.SQL, not the ones from CATDEFER.SQL.

Disk I/O

Description

This event test monitors the real time database physical I/O rate (requests/seconds) against the values specified by the threshold arguments. If the Disk I/O rate exceeds the threshold values entered for the specified number of occurrences, then a warning or critical alert is generated.

Parameters
Output

Current rate in requests/second

Recommended Frequency

30 seconds

User Action

Determine whether the I/O rate is having a negative impact on performance by investigating the disk queue lengths for high I/O devices. It may be necessary to move data files around to balance any identified bottlenecks. Other tuning efforts, such as adjusting indexes to reduce the number of full table scans, can also reduce I/O load.

If no bottlenecks are evident, increase the I/O rate threshold values.

Note: For Oracle Intelligent Agent release 9i, this event test has been obsoleted. It is recommended that you use the File Read Rate and File Write Rate event tests. This event test is kept for backward compatibility with older versions of the Intelligent Agent.

Disk Sorts Per Second

Description

This data item represents the number of sorts going to disk per second for this sample period.

For best performance, most sorts should occur in memory, because sorts to disks are expensive to perform. If the sort area is too small, extra sort runs will be required during the sort operation. This increases CPU and I/O resource consumption.

This test checks the number of sorts performed to disk per second. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.

Data Source

DeltaDiskSorts / Seconds
where:

DeltaDiskSorts: difference in the 'sorts (disk)' statistic (from V$SYSSTAT) between the end and the start of this sample period

Seconds: number of seconds in this sample period

Parameters

For information about setting appropriate threshold values, see Baselining Threshold Values.

Output

Instance name and disk sorts per second.

Recommended Frequency

5 minutes

User Action

The sessions that are performing the most sorts should be identified so that the SQL they are executing can be further examined. The sort area sizes for the database may be sized correctly, but the application SQL may be performing unwanted or excessive sorts. The sessions performing the most sorts are available through the Top Sessions by Disk Sorts drilldown.

Further drilldown into the session performing the most disk sorts with the Current SQL Chart will show you the SQL statement responsible for the disk sorts. Further investigation on how to best tune a particular SQL statement can be done by using the Explain Plan drilldown, or if you are on NT, the Tune SQL Statement drilldown as well.

The Top SQL (Sorts) Chart provides a mechanism to quickly display the SQL statements in the cache, presented in sorted order by their number of sort operations. This is an alternative to viewing a sort of current sessions. It allows you to view sort activity via SQL statements and contains cumulative statistics for all executions of that statement.

If the top sessions and their associated SQL statements seem to be okay, you can drilldown to see a more complete breakdown of sort statistics and initialization parameters with the Sort Statistics At A Glance Chart.

If excessive sorts are taking place on disk and the queries are correct, consider increasing the SORT_AREA_SIZE initialization parameter to increase the size of the sort area. A larger sort area allows the Oracle Server to keep sorts in memory, reducing the number of I/O operations required to do an equivalent amount of work using the current sort area size.
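
A sketch of both steps follows; the 1 MB sort area value is illustrative only and should be chosen to suit your workload and available memory:

    -- Compare cumulative in-memory and on-disk sort counts
    select name, value
      from v$sysstat
     where name in ('sorts (memory)', 'sorts (disk)');

    -- Raise the sort area for the current session
    alter session set sort_area_size = 1048576;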

Disk Sorts Per Transaction

Description

This data item represents the number of sorts going to disk per transaction for this sample period.

For best performance, most sorts should occur in memory, because sorts to disks are expensive to perform. If the sort area is too small, extra sort runs will be required during the sort operation. This increases CPU and I/O resource consumption.

The value of this statistic will be zero if there have not been any write or update transactions committed or rolled back during the last sample period. If the bulk of the activity to the database is read only, the corresponding "per second" data item of the same name will be a better indicator of current performance.

This test checks the number of sorts performed to disk per transaction. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.

Data Source

DeltaDiskSorts / Transactions
where:

DeltaDiskSorts: difference in the 'sorts (disk)' statistic (from V$SYSSTAT) between the end and the start of this sample period

Transactions: number of transactions in this sample period

Parameters

For information about setting appropriate threshold values, see Baselining Threshold Values.

Output

Instance name and disk sorts per transaction.

Recommended Frequency

5 minutes

User Action

The sessions that are performing the most sorts should be identified so that the SQL they are executing can be further examined. The sort area sizes for the database may be sized correctly, but the application SQL may be performing unwanted or excessive sorts. The sessions performing the most sorts are available through the Top Sessions by Disk Sorts drilldown.

Further drilldown into the session performing the most disk sorts with the Current SQL Chart will show you the SQL statement responsible for the disk sorts. Further investigation on how to best tune a particular SQL statement can be done by using the Explain Plan drilldown, or if you are on NT, the Tune SQL Statement drilldown as well.

The Top SQL (Sorts) Chart provides a mechanism to quickly display the SQL statements in the cache, presented in sorted order by their number of sort operations. This is an alternative to viewing a sort of current sessions. It allows you to view sort activity via SQL statements and contains cumulative statistics for all executions of that statement.

If the top sessions and their associated SQL statements seem to be okay, you can drilldown to see a more complete breakdown of sort statistics and initialization parameters with the Sort Statistics At A Glance Chart.

If excessive sorts are taking place on disk and the queries are correct, consider increasing the SORT_AREA_SIZE initialization parameter to increase the size of the sort area. A larger sort area allows the Oracle Server to keep sorts in memory, reducing the number of I/O operations required to do an equivalent amount of work using the current sort area size.

Dump Full

Description

Each server and background process can write to an associated trace file in order to log messages and errors. Trace files for the background processes, as well as the ALERT file, are written to the destination specified by BACKGROUND_DUMP_DEST.

Trace files for server processes are written to the destination specified by USER_DUMP_DEST.

This event test checks for available free space on these dump destination devices. If the space available is less than the threshold value given in the threshold arguments, then a warning or critical alert is generated.

Parameters
Output

Dump destination device and space available in kilobytes

Recommended Frequency

10 minutes

User Action

Verify that the devices specified by the initialization parameters BACKGROUND_DUMP_DEST and USER_DUMP_DEST are set up properly. If the BACKGROUND_DUMP_DEST and USER_DUMP_DEST initialization parameters are set up correctly and this event test triggers, then free up more space in the destinations specified by the dump destination parameters.
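
The current dump destinations can be confirmed with a query such as the following sketch:

    select name, value
      from v$parameter
     where name in ('background_dump_dest', 'user_dump_dest');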

Dump Full (%)

Description

This event test monitors the same dump destinations as the Dump Full event test. The Dump Full (%) event test, however, returns the percentage of free space remaining on the dump destinations.

If the space available is less than the threshold value given in the threshold arguments, then a warning or critical alert is generated.

Parameters
Output

Dump destination device and percentage of free space available

Recommended Frequency

10 minutes

User Action

Verify that the devices specified by the initialization parameters BACKGROUND_DUMP_DEST and USER_DUMP_DEST are set up properly. If the BACKGROUND_DUMP_DEST and USER_DUMP_DEST initialization parameters are set up correctly and this event test triggers, then free up more space in the destinations specified by the dump destination parameters.

Error Transactions

Description

Oracle uses deferred transactions to propagate data-level changes asynchronously among master sites in an advanced replication system as well as from an updatable snapshot to its master table. If a transaction is not successfully propagated to the remote site, Oracle rolls back the transaction and logs it in the SYS.DEFERROR view in the remote destination database.

This event test checks the number of transactions in the SYS.DEFERROR view and raises an alert if it exceeds the value specified by the threshold argument.

Parameters

Threshold for alert (number of error transactions). Default is 0 transactions.

Output

Number of transactions that could not be applied

Recommended Frequency

30 seconds

User Action

An error in applying a deferred transaction may be the result of a database problem, such as a lack of available space in the table to be updated, or may be the result of an unresolved insert, update, or delete conflict. The SYS.DEFERROR view provides the ID of the transaction that could not be applied. Use this ID to locate the queued calls associated with the transaction. These calls are stored in the SYS.DEFCALL view. You can use the procedures in the DBMS_DEFER_QUERY package to determine the arguments to the procedures listed in the SYS.DEFCALL view.
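
For example, the failed transactions and their error messages can be listed from the DEFERROR view. This is a sketch only; the column list may need adjusting for your release:

    select deferred_tran_id, destination, error_msg
      from deferror;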

Executes without Parses %

Description

This data item represents the percentage of statement executions that do not require a corresponding parse. A perfect system would parse all statements once and then execute the parsed statement over and over without reparsing. This ratio provides an indication as to how often the application is parsing statements as compared to their overall execution rate. A higher number is better.

This test checks the percentage of executes that do not require parses. If the value is less than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.

Data Source

((DeltaExecuteCount - DeltaParseCountTotal) / DeltaExecuteCount) * 100
where:

DeltaExecuteCount: difference in the 'execute count' statistic (from V$SYSSTAT) between the end and the start of this sample period

DeltaParseCountTotal: difference in the 'parse count (total)' statistic (from V$SYSSTAT) between the end and the start of this sample period

Parameters

For information about setting appropriate threshold values, see Baselining Threshold Values.

Output

Instance name and executes without parses percentage.

Recommended Frequency

5 minutes

User Action

An execute to parse ratio of less than 70% indicates that the application may be parsing statements more often than it should. Reparsing the statement, even if it is a soft parse, requires a network round trip from the application to the database, as well as requiring the processing time to locate the previously compiled statement in the cache. Reducing network round trips and unnecessary processing improves application performance.

Use the Top Sessions by Total Parse Count to identify the sessions responsible for the bulk of the parse activity within the database. Start with these sessions to determine whether the application could be modified to make more efficient use of its cursors.

To see the actual values of the underlying statistics used to compute this resource, you can use the Parse Statistics Chart. This chart shows the Parse, Execute and Hard Parse rates per second.
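
The raw counters behind this ratio can also be inspected directly, as in this sketch:

    -- Cumulative execute and parse counts since instance startup
    select name, value
      from v$sysstat
     where name in ('execute count', 'parse count (total)',
                    'parse count (hard)');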

Failed Job

Description

The Oracle server job queue is a database table that stores information about local jobs, such as the PL/SQL call to execute for a job and when to run a job. Database replication is also managed by using the Oracle job queue mechanism, using jobs to push deferred transactions to remote master sites, to purge applied transactions from the deferred transaction queue, or to refresh snapshot refresh groups.

If a job returns an error while Oracle is attempting to execute it, the job fails. Oracle repeatedly tries to execute the job, doubling the interval between attempts. If the job fails sixteen times, Oracle automatically marks the job as broken and no longer tries to execute it.

This event test checks for failed DBMS jobs. An alert is generated if the number of failed jobs exceeds the value specified by the threshold argument.

Parameters

Critical threshold: Threshold for critical alert (number of jobs). Default is 0 jobs.

Output
Recommended Frequency

30 seconds

User Action

Check the ALERT log and trace files for error information. Correct the problem that is preventing the job from running.
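
Failing and broken jobs can be listed from the job queue catalog view, as in this sketch:

    select job, what, failures, broken, last_date, next_date
      from dba_jobs
     where failures > 0 or broken = 'Y';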

Fast Segment Growth

Description

A segment is a collection of extents that make up a single table, index, temporary or rollback segment. The Oracle Server offers a practical method of space allocation to segments as they are required to grow. Oracle allows a segment to have multiple extents, which the server allocates automatically when they are needed. For any segment that grows continuously, it is important to carefully monitor that segment's growth pattern. Storage values for the database should be chosen to ensure new extents are not frequently allocated.

This event test checks whether any of the segments specified by the Tablespace name, Segment name, and Segment type parameters are allocating extents too quickly. If, for any segment, the number of extents allocated since the last event check is greater than the threshold values specified in the threshold arguments, then a warning or critical alert is generated.

Parameters

Note: All filters must include SQL syntax, for example, = 'ABC', in ('XYZ', 'ABC'), like '% ABC'. There are higher resource requirements if there is a large number of objects being monitored at high frequencies, for example, checking the space for all 200 segments every 2 minutes. Where possible, Oracle recommends that you use the filters to narrow the scope of the objects being monitored. Also, set the polling schedule to a value that is appropriate to your environment. For example, segments that do not grow rapidly in size may be checked every 2 days instead of every 5 minutes.

Output
Recommended Frequency

1 day

User Action

Consider increasing the value of the segment's NEXT storage parameter value so that extents are allocated less frequently.
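
For example (the object name and size are illustrative):

    -- Allocate larger extents from now on, so allocations happen less often
    alter table scott.orders storage (next 10M);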

Note: Running the Fast Segment Growth event test may be a resource-intensive operation. Therefore, Oracle recommends running the Fast Segment Growth event test during off-peak periods.

Free Buffer Waits

Description

Database writer process (DBWR) bottlenecks can be detected by monitoring occurrences of the free buffer waits test over time. If the database environment is in a steady state, there should not be any free buffer waits. However, an occasional absolute increase in free buffer waits is not a problem. Only consistent occurrences of an increase should be of concern.

As a result, this event test maintains a history of free buffer waits samples, specified by the number of samples parameter, and monitors for a percentage of these samples where an increase was detected. This percentage is then compared against the values specified by the threshold arguments. If the percentage of samples (where an increase in free buffer waits is detected) is greater than the threshold arguments, then a warning or critical alert is generated.

Example: If 10 has been specified for the number of samples, then during the first 9 times the test condition is checked, the test is merely building up the history of free buffer waits samples. On the 10th interval, and from that point on, the test monitors how many of those samples showed an increase in free buffer waits. If 2 of the samples showed an increase, then the percentage of samples showing an increase is 20%.

Parameters
Output
Recommended Frequency

60 seconds

User Action

When users are having to wait for free buffers, either DB_FILE_SIMULTANEOUS_WRITES needs to be increased or the number of DBWR processes needs to be increased.

The DB_FILE_SIMULTANEOUS_WRITES initialization parameter determines the number of simultaneous writes to each database file when written by DBWR. This parameter is also used to determine the number of reads per file in the redo read-ahead when reading redo during recovery. This parameter impacts the number of simultaneous I/Os, not just the number of simultaneous writes.

Consider increasing the DB_FILE_SIMULTANEOUS_WRITES initialization parameter in order to increase the speed at which the DBWR writes dirty buffers, which in turn decreases the number of times sessions need to wait for free buffers.

The DB_WRITER_PROCESSES initialization parameter controls the number of DBWR processes that are activated at instance startup. It is a platform-specific parameter which is used to avoid DBWR bottlenecks on operating systems which do not support asynchronous I/O. The DBWR process is responsible for writing dirty buffers in batches from the buffer cache back to the data files.

DBWR bottlenecks are most likely on systems which have a high insert, update or delete rate and a large number of disk devices. Since database writes are not serial, there can be benefit to having multiple DBWR processes, even in a single CPU database environment.
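
The cumulative wait statistics for this event can be checked with a query such as the following sketch:

    select event, total_waits, time_waited
      from v$system_event
     where event = 'free buffer waits';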

Global Cache Convert Timeouts

Description

This data item represents the number of times per second that lock converts in the global cache timed out, for each Oracle server instance, during this sample period.

This test checks the Global Cache Convert Timeouts for an instance per second for the instance specified by the Instance Name(s) parameter. If the value is greater than the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.

Data Source

ConvertTimeout / Seconds
where:

ConvertTimeout: difference in the 'global cache convert timeouts' statistic (from V$SYSSTAT) between the end and the start of this sample period

Seconds: number of seconds in this sample period

Parameters

For information about setting appropriate threshold values, see Baselining Threshold Values.

Output

Instance name and convert timeouts per second.

Recommended Frequency

5 minutes

Global Cache Converts

Description

This data item represents the average global cache convert time, in milliseconds, during this sample period.

This test checks the Global Cache Converts for the instance specified by the Instance Name(s) parameter. If the value is greater than the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.

Data Source

ConvertTime * 10 / Converts
where:

ConvertTime: difference in the 'global cache convert time' statistic (from V$SYSSTAT, in hundredths of a second; the factor of 10 converts it to milliseconds) between the end and the start of this sample period

Converts: difference in the 'global cache converts' statistic between the end and the start of this sample period

Parameters

For information about setting appropriate threshold values, see Baselining Threshold Values.

Output

Instance name and global cache converts time in milliseconds.

Recommended Frequency

5 minutes

Global Cache CR Request

Description

This data item represents the average time, in milliseconds, that it took to receive a consistent-read (CR) block during this sample period.

This test checks the Global Cache CR Request for the instance specified by the Instance Name(s) parameter. If the value is greater than the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.

Data Source

BlockReceiveTime * 10 / BlocksReceived
where:

BlockReceiveTime: difference in the 'global cache cr block receive time' statistic (from V$SYSSTAT, in hundredths of a second; the factor of 10 converts it to milliseconds) between the end and the start of this sample period

BlocksReceived: difference in the 'global cache cr blocks received' statistic between the end and the start of this sample period

Parameters

For information about setting appropriate threshold values, see Baselining Threshold Values.

Output

Instance name and global cache CR request time in milliseconds.

Recommended Frequency

5 minutes

Global Cache CR Timeouts

Description

This data item represents the number of times per second that a foreground process requested a consistent-read (CR) block and the request timed out, for each Oracle cluster database instance, during this sample period.

This test checks the Global Cache CR Timeouts for the instance specified by the Instance Name(s) parameter. If the value is greater than the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.

Data Source

CR_Timeout / Seconds
where:

CR_Timeout: difference in the 'global cache cr timeouts' statistic (from V$SYSSTAT) between the end and the start of this sample period

Seconds: number of seconds in this sample period

Parameters

For information about setting appropriate threshold values, see Baselining Threshold Values.

Output

Instance name and CR timeout per second.

Recommended Frequency

5 minutes

Global Cache FreeList Waits

Description

This data item represents the number of times per second that a foreground process had to wait for a lock element, for each cluster database instance, during this sample period.

This test checks the Global Cache FreeList Waits for an instance per second for the instance specified by the Instance Name(s) parameter. If the value is greater than the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.

Data Source

FreelistWait / Seconds
where:

FreelistWait: difference in the 'global cache freelist waits' statistic (from V$SYSSTAT) between the end and the start of this sample period

Seconds: number of seconds in this sample period

Parameters

For information about setting appropriate threshold values, see Baselining Threshold Values.

Output

Instance name and freelist waits per second.

Recommended Frequency

5 minutes

Global Cache Gets

Description

This data item represents the average global cache get time, in milliseconds, during this sample period.

This test checks the Global Cache Gets for the instance specified by the Instance Name(s) parameter. If the value is greater than the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.

Data Source

GetTime * 10 / Gets
where:

GetTime: difference in the 'global cache get time' statistic (from V$SYSSTAT, in hundredths of a second; the factor of 10 converts it to milliseconds) between the end and the start of this sample period

Gets: difference in the 'global cache gets' statistic between the end and the start of this sample period

Parameters

For information about setting appropriate threshold values, see Baselining Threshold Values.

Output

Instance name and global cache gets time in milliseconds.

Recommended Frequency

5 minutes

In Memory Sort %

Description

This data item represents the sort efficiency as measured by the percentage of times sorts were performed in memory as opposed to going to disk.

For best performance, most sorts should occur in memory because sorts to disks are less efficient. If the sort area is too small, extra sort runs will be required during the sort operation. This increases CPU and I/O resource consumption.

This test checks the percentage of sorts performed in memory rather than to disk. If the value is less than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.

Data Source

(DeltaMemorySorts / (DeltaDiskSorts + DeltaMemorySorts)) * 100
where:

DeltaMemorySorts: difference in the 'sorts (memory)' statistic (from V$SYSSTAT) between the end and the start of this sample period

DeltaDiskSorts: difference in the 'sorts (disk)' statistic (from V$SYSSTAT) between the end and the start of this sample period

Parameters

For information about setting appropriate threshold values, see Baselining Threshold Values.

Output

Instance name and in memory sort percentage.

Recommended Frequency

5 minutes

User Action

The sessions that are performing the most sorts should be identified so that the SQL they are executing can be further examined. The sort area sizes for the database may be sized correctly, but the application SQL may be performing unwanted or excessive sorts. The sessions performing the most sorts are available through the Top Sessions by Disk Sorts drilldown.

Further drilldown into the session performing the most disk sorts with the Current SQL Chart shows you the SQL statement responsible for the disk sorts. Further investigation on how to best tune a particular SQL statement can be done by using the Explain Plan drilldown, or if you are on NT, the Tune SQL Statement drilldown as well.

The Top SQL (Sorts) Chart provides a mechanism to quickly display the SQL statements in the cache, presented in sorted order by their number of sort operations. This is an alternative to viewing a sort of current sessions. It allows you to view sort activity via SQL statements and contains cumulative statistics for all executions of that statement.

If the top sessions and their associated SQL statements seem to be okay, you can drill down to see a more complete breakdown of sort statistics and initialization parameters with the Sort Statistics At A Glance Chart.

If excessive sorts are taking place on disk and the queries are correct, consider increasing the SORT_AREA_SIZE initialization parameter to increase the size of the sort area. A larger sort area allows the Oracle Server to maintain sorts in memory, reducing the number of I/O operations required to do an equivalent amount of work using the current sort area size.

For more detailed sort statistics, the Sort Statistics At A Glance Chart is available which displays the current sort-related initialization parameter values as well as a breakdown of the individual sort statistics.

Index Rebuild

Description

When an indexed value is updated in the table, the old value is deleted from the index and the new value is inserted into a separate part of the index. The space released by the old value can never be used again. As indexed values are updated or deleted, the amount of unusable space within the index increases, a condition called index stagnation. Because a stagnated index contains a mixture of data and empty areas, scans of the index will be less efficient.

This event test monitors whether indexes specified by the Index name, Index owner, Indexed object name, and Indexed object owner parameters suffer from index stagnation. If an index has stagnation, an alert is generated.

Parameters

Note: All filters must include SQL syntax, for example, = 'ABC', in ('XYZ', 'ABC'), like '% ABC'. There are higher resource requirements if there is a large number of objects being monitored at high frequencies, for example, checking the space for all 200 indexes every 2 minutes. Where possible, Oracle recommends that you use the filters to narrow the scope of the objects being monitored. Also, set the polling schedule to a value that is appropriate to your environment. For example, indexes that do not grow rapidly in size may be checked every 2 days instead of every 5 minutes.

Output

Index name where index stagnation is detected.

Recommended Frequency

1 day

User Action

Consider rebuilding the index to enhance performance. An index rebuild can be accomplished by using either the ALTER INDEX REBUILD statement or the CREATE INDEX statement.
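
A sketch of the full cycle follows; the index name scott.orders_idx is illustrative:

    -- Validate the index and populate INDEX_STATS for this session
    analyze index scott.orders_idx validate structure;

    -- A high proportion of deleted leaf rows suggests stagnation
    select name, del_lf_rows, lf_rows
      from index_stats;

    -- Rebuild the index to reclaim the unusable space
    alter index scott.orders_idx rebuild;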

Note: To determine whether or not an index should be rebuilt, the Index Rebuild event test gathers statistics using the ANALYZE...INDEX VALIDATE STRUCTURE command. Running this command may be a resource-intensive operation. Therefore, Oracle recommends running the Index Rebuild event test during off-peak periods.

Instance Pings

Description

This data item represents the ping ratio during this sample period.

This test checks the Ping Ratio for the instance specified by the Instance Name(s) parameter. If the value is greater than the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.

Data Source

Pings / Writes
where:

Pings: number of blocks written to disk because another instance requested them (pings) during this sample period

Writes: total number of blocks written to disk during this sample period

Parameters

For information about setting appropriate threshold values, see Baselining Threshold Values.

Output

Instance name and ping ratio.

Recommended Frequency

5 minutes

Library Cache Hit %

Description

This data item represents the library cache efficiency, as measured by the percentage of times the fully parsed or compiled representation of PL/SQL blocks and SQL statements are already in memory.

The shared pool is an area in the SGA that contains the library cache of shared SQL requests, the dictionary cache and the other cache structures that are specific to a particular instance configuration.

The shared pool mechanism can greatly reduce system resource consumption in at least three ways:

Parse time is avoided if the SQL statement is already in the shared pool.

Application memory overhead is reduced, since all applications use the same pool of shared SQL statements and dictionary resources.

I/O resources are saved, since dictionary elements which are in the shared pool do not require disk access.

If the shared pool is too small, users will consume additional resources to complete a database operation. For library cache access, the overhead is primarily the additional CPU resources required to re-parse the SQL statement.

This test checks the percentage of parse requests where the cursor was already in the cache. If the value is less than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.

Data Source

(DeltaPinHits / DeltaPins) * 100
where:

DeltaPinHits: difference in the sum of the PINHITS column of V$LIBRARYCACHE between the end and the start of this sample period

DeltaPins: difference in the sum of the PINS column of V$LIBRARYCACHE between the end and the start of this sample period

Parameters

For information about setting appropriate threshold values, see Baselining Threshold Values.

Output

Instance name and library cache hit percentage.

Recommended Frequency

5 minutes

User Action

The Top Sessions by Hard Parse Count chart lists the sessions incurring the most hard parses. Hard parses occur when the server parses a query and cannot find an exact match for the query in the library cache. You can avoid hard parses by sharing SQL statements efficiently. The use of bind variables instead of literals in queries is one method to increase sharing.

By showing you which sessions are incurring the most hard parses, this chart can identify the application or programs that are the best candidates for SQL rewrites.

Also, examine SQL statements which can be modified to optimize shared SQL pool memory use and avoid unnecessary statement reparsing. This type of problem is commonly caused when similar SQL statements are written which differ in space, case, or some combination of the two. You may also consider using bind variables rather than explicitly specified constants in your statements whenever possible.

To identify potentially similar SQL statements, use the Similar SQL Statements Chart. This shows you which statements are similar in the first 'n' characters and how many versions of that statement segment are in the cache. Hard parses can also be forced by the aging of SQL statements out of the cache due to an insufficiently sized shared pool. The shared pool size information allows you to see your current shared pool allocation so you can potentially increase it.

The SHARED_POOL_SIZE initialization parameter controls the total size of the shared pool. Consider increasing the SHARED_POOL_SIZE to decrease the frequency in which SQL requests are being flushed from the shared pool to make room for new requests.

To take advantage of the additional memory available for shared SQL areas, you may also need to increase the number of cursors permitted per session. You can increase this limit by increasing the value of the initialization parameter OPEN_CURSORS.
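
The cumulative equivalent of this percentage can be computed from V$LIBRARYCACHE, as in this sketch:

    select sum(pinhits) / sum(pins) * 100 as library_cache_hit_pct
      from v$librarycache;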

Library Cache Miss %

Description

The shared pool is an area in the SGA that contains the library cache of shared SQL requests, the dictionary cache and the other cache structures that are specific to a particular instance configuration.

The library cache efficiency, as measured by the miss ratio, records the percentage of times the fully parsed or compiled representation of PL/SQL blocks and SQL statements are not already in memory.

The shared pool mechanism can greatly reduce system resource consumption in at least three ways:

Parse time is avoided if the SQL statement is already in the shared pool.

Application memory overhead is reduced, since all applications use the same pool of shared SQL statements and dictionary resources.

I/O resources are saved, since dictionary elements which are in the shared pool do not require disk access.

If the shared pool is too small, users will consume additional resources to complete a database operation. For library cache access, the overhead is primarily the additional CPU resources required to re-parse the SQL statement.

This event test monitors the library cache miss ratio (percentage of failures) against the values specified by the threshold arguments. If the number of occurrences exceeds the values specified, then a warning or critical alert is generated.

Parameters
Output

Current ratio

Recommended Frequency

30 seconds

User Action

The SHARED_POOL_SIZE initialization parameter controls the total size of the shared pool. Consider increasing SHARED_POOL_SIZE to decrease the frequency with which SQL requests are flushed from the shared pool to make room for new requests.

To take advantage of the additional memory available for shared SQL areas, you may also need to increase the number of cursors permitted per session. You can increase this limit by increasing the value of the initialization parameter OPEN_CURSORS.

Also examine SQL statements which can be modified to optimize shared SQL pool memory use and avoid unnecessary statement reparsing. This type of problem is commonly caused when similar SQL statements are written which differ in space, case, or some combination of the two. Consider using bind variables rather than explicitly specified constants in your statements whenever possible.

Note: For Oracle Intelligent Agent release 9i, this event test has been obsoleted. It is recommended that you use the Library Cache Hit % event test. This event test is kept for backward compatibility with older versions of the Intelligent Agent.

Lock Limit

Description

The DML_LOCKS initialization parameter specifies the maximum number of DML locks. The purpose of DML locks is to guarantee the integrity of data being accessed concurrently by multiple users. DML locks prevent destructive interference of simultaneous conflicting DML and/or DDL operations.

This event test checks for the utilization of the lock resource against the values (percentage) specified by the threshold arguments. If the percentage of all active DML locks to the limit set in the DML_LOCKS initialization parameter exceeds the values specified in the threshold arguments, then a warning or critical alert is generated.

If DML_LOCKS is 0, this test will not register. A value of 0 indicates that enqueues are disabled.

Example

If 40 DML locks are active and the value of DML_LOCKS is 60, the percentage is 67% (40/60 x 100). This value is compared against the specified thresholds.

Parameters
Output

Current value and the limit specified by DML_LOCKS

Recommended Frequency

30 seconds

User Action

Increase the DML_LOCKS instance parameter by 10%.
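
Current DML (TM) lock usage can be compared with the limit, as in this sketch:

    select (select count(*) from v$lock where type = 'TM') as dml_locks_in_use,
           (select value from v$parameter
             where name = 'dml_locks') as dml_locks_limit
      from dual;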

Logical Reads Per Second

Description

This data item represents the number of logical reads per second during the sample period. A logical read is a read request for a data block from the SGA. Logical reads may result in a physical read if the requested block does not reside within the buffer cache.

This test checks the logical (db block gets + consistent gets) reads per second. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.

Data Source

LogicalReads / Seconds
where:

LogicalReads: difference in the sum of the 'db block gets' and 'consistent gets' statistics (from V$SYSSTAT) between the end and the start of this sample period

Seconds: number of seconds in this sample period

Parameters

For information about setting appropriate threshold values, see Baselining Threshold Values.

Output

Instance name and logical reads per second.

Recommended Frequency

5 minutes

User Action

Excessive logical reads, even if they do not result in physical reads, can still represent an area that should be considered for performance tuning. Typically large values for this statistic indicate that full table scans are being performed. To identify the SQL that is performing the most logical reads (buffer gets), use the Top SQL (Buffer Gets) chart. This quickly identifies the SQL responsible for the bulk of the logical reads. You can further investigate these SQL statements via drilldowns. Tuning these SQL statements will reduce your buffer cache access.
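
A sketch of identifying the heaviest statements directly from the shared SQL area:

    -- Top 10 SQL statements by cumulative buffer gets
    select *
      from (select sql_text, buffer_gets, executions
              from v$sql
             order by buffer_gets desc)
     where rownum <= 10;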

Logical Reads Per Transaction

Description

This data item represents the number of logical reads per transaction during the sample period.

The value of this statistic is zero if there have not been any write or update transactions committed or rolled back during the last sample period. If the bulk of the activity to the database is read only, the corresponding per second data item of the same name will be a better indicator of current performance.

This test checks the logical (db block gets + consistent gets) reads per transaction. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.

Data Source

DeltaReads / Transactions
where:

DeltaReads: difference in the sum of the 'db block gets' and 'consistent gets' statistics (from V$SYSSTAT) between the end and the start of this sample period

Transactions: number of transactions in this sample period

Parameters

For information about setting appropriate threshold values, see Baselining Threshold Values.

Output

Instance name and logical reads per transaction.

Recommended Frequency

5 minutes

User Action

Excessive logical reads, even if they do not result in physical reads, can still represent an area that should be considered for performance tuning. Typically large values for this statistic indicate that full table scans are being performed. To identify the SQL that is performing the most logical reads (buffer gets) use the Top SQL (Buffer Gets) chart. This quickly identifies the SQL responsible for the bulk of the logical reads. Further investigation on how to best tune a particular SQL statement can be done by using the Explain Plan drilldown, or if you are on NT, the Tune SQL Statement drilldown as well.

Logons Per Second

Description

This data item represents the number of logons per second during the sample period.

This test checks the number of logons that occurred per second during the sample period. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.

Data Source

DeltaLogons / Seconds
where:

DeltaLogons: difference in the 'logons cumulative' statistic (from V$SYSSTAT) between the end and the start of this sample period

Seconds: number of seconds in this sample period

Parameters

For information about setting appropriate threshold values, see Baselining Threshold Values.

Output

Instance name and logons per second.

Recommended Frequency

5 minutes

User Action

A high logon rate may indicate that an application is inefficiently accessing the database. Database logons are a costly operation. If an application is performing a logon for every SQL access, that application will experience poor performance as well as affect the performance of other applications on the database. If there is a high logon rate, try to identify the application that is performing the logons to determine whether it could be redesigned so that session connections could be pooled, reused, or shared.

The Transaction Based Execution Rates Chart allows you to quickly determine the ratio of logons to transactions, showing the average amount of transactional work being done per logon.

Quick analysis of the database's CPU consumption can be done by using the CPU Breakdown Chart. This chart breaks the database CPU consumption into 3 parts, and further analysis into the largest portion of the CPU time will lead you towards reducing your CPU consumption.

Logons Per Transaction

Description

This data item represents the number of logons per transaction during the sample period.

The value of this statistic will be zero if there have not been any write or update transactions committed or rolled back during the last sample period. If the bulk of the activity to the database is read only, the corresponding "per second" data item of the same name will be a better indicator of current performance.

This test checks the number of logons that occurred per transaction. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.

Data Source

DeltaLogons / Transactions
where:

DeltaLogons: difference in the 'logons cumulative' statistic (from V$SYSSTAT) between the end and the start of this sample period

Transactions: number of transactions in this sample period

Parameters

For information about setting appropriate threshold values, see Baselining Threshold Values.

Output

Instance name and logons per transaction.

Recommended Frequency

5 minutes

User Action

A high logon rate may indicate that an application is inefficiently accessing the database. Database logons are a costly operation. If an application is performing a logon for every SQL access, that application will experience poor performance as well as affect the performance of other applications on the database. If there is a high logon rate, try to identify the application that is performing the logons to determine whether it could be redesigned so that session connections could be pooled, reused, or shared.

Maximum Extents

Description

A segment is a collection of extents that make up a single table, cluster, index, temporary or rollback segment. The MAXEXTENTS segment storage parameter specifies the maximum number of extents that can be allocated to the segment. Once a segment has filled the maximum number of extents, any row insertion will fail with an ORA-01631 error message.

This event test checks whether any of the segments specified by the Tablespace name, Segment name, and the Segment type parameters are approaching their maximum extents. If for any segment the maximum number of extents minus the number of existing extents is less than the threshold values specified in the threshold arguments, then a warning or critical alert is generated.

Example

If the maximum number of extents for a segment is 20 and the number of existing extents is 16, then 4 is compared against the specified threshold values. If 3 is specified for a critical alert and 5 is specified for a warning alert, a warning alert is triggered because only 4 extents are available.
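
A query along these lines (a sketch; the remaining-extent threshold of 5 is illustrative) identifies segments that are approaching their limit:

SELECT owner, segment_name, segment_type, extents, max_extents
  FROM DBA_SEGMENTS
 WHERE max_extents - extents < 5;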

Parameters

Note: All filters must include SQL syntax, for example, = 'ABC', in ('XYZ', 'ABC'), like '% ABC'. There are higher resource requirements if there is a large number of objects being monitored at high frequencies, for example, checking the space for all 200 segments every 2 minutes. Where possible, Oracle recommends that you use the filters to narrow the scope of the objects being monitored. Also, set the polling schedule to a value that is appropriate to your environment. For example, segments that do not grow rapidly in size may be checked every 2 days instead of every 5 minutes.

Output
Recommended Frequency

10 minutes

User Action

If possible, increase the value of the segment's MAXEXTENTS storage parameter. Otherwise, rebuild the segment with a larger extent size ensuring the extents within a segment are the same size by specifying STORAGE parameters where NEXT=INITIAL and PCTINCREASE = 0. For segments that are linearly scanned, choose an extent size that is a multiple of the number of blocks read during each multiblock read. This will ensure that Oracle's multiblock read capability is used efficiently.
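
For illustration only (the object name and the new limit are hypothetical), the storage parameter of an existing table could be raised with:

ALTER TABLE app.orders STORAGE (MAXEXTENTS 500);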

Note: Running the Maximum Extents event test may be a resource-intensive operation. Therefore, Oracle recommends running the Maximum Extents event test during off-peak periods.

Multiple Extents

Description

A segment is a collection of extents that make up a single table, cluster, index, temporary or rollback segment. The Oracle Server allows a segment to have multiple extents, which the server allocates automatically when additional space is required.

Having multiple extents causes no performance degradation for segments that are never fully scanned (only table and temporary segments are full-scanned), provided the extents are the same size and an integral multiple of the multiblock read batch size. Nor is any performance degradation found where extents are 100 or more times larger than the read batch size. Oracle administrators may, however, choose to monitor the number of extents in a segment.

This event test checks whether any of the segments specified by the Tablespace name, Segment name, and Segment type parameters have multiple extents. If the number of extents is greater than the threshold values specified in the threshold arguments, then a warning or critical alert is generated.

Note: The only time multiple extents may cause a performance problem is when a segment is fully scanned and that segment's extent size is not a multiple of the multiblock read size.

Parameters

Note: All filters must include SQL syntax, for example, = 'ABC', in ('XYZ', 'ABC'), like '% ABC'. There are higher resource requirements if there is a large number of objects being monitored at high frequencies, for example, checking the space for all 200 segments every 2 minutes. Where possible, Oracle recommends that you use the filters to narrow the scope of the objects being monitored. Also, set the polling schedule to a value that is appropriate to your environment. For example, segments that do not grow rapidly in size may be checked every 2 days instead of every 5 minutes.

Output
Recommended Frequency

10 minutes

User Action

If the segment may be linearly scanned, make sure the multiple extents are the same size. Choose an extent size that is an integral multiple of the multiblock read batch size, or make the extents 100 or more times larger than the read batch size, in order to achieve the highest efficiency from the server's multiblock read capability.

For all other segments, no action is required unless the number of extent allocations is approaching the segment's maximum number of extents. In this case, increase the value of the segment's MAXEXTENTS storage parameter if possible.

Otherwise, rebuild the segment with a larger extent size ensuring that the extents within a segment are the same size by specifying STORAGE parameters where NEXT=INITIAL and PCTINCREASE = 0.

Note: Running the Multiple Extents event test may be a resource-intensive operation. Therefore, Oracle recommends running the Multiple Extents event test during off-peak periods.

Network Reads Per Second

Description

This data item represents the total number of bytes sent and received through the SQL*Net layer to and from the database.

This test checks the network bytes read and written per second. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.

Data Source

(DeltaBytesFromClient+DeltaBytesFromDblink+DeltaBytesToClient+DeltaBytesToDblink) / Seconds
where each Delta value is the difference in the corresponding SQL*Net bytes statistic between sample points and Seconds is the elapsed time, in seconds, between samples.

Parameters

For information about setting appropriate threshold values, see Baselining Threshold Values.

Output

Instance name and network bytes per second.

Recommended Frequency

5 minutes

User Action

This data item represents the amount of network traffic in and out of the database. This number may only be useful when compared to historical levels to understand network traffic usage related to a specific database.

Parses (Hard) Per Second

Description

This data item represents the number of hard parses per second during this sample period. A hard parse occurs when a SQL statement has to be loaded into the shared pool. In this case, the Oracle Server has to allocate memory in the shared pool and parse the statement.

Each time a particular SQL cursor is parsed, this count will increase by one. There are certain operations which will cause a SQL cursor to be parsed. Parsing a SQL statement breaks it down into atomic steps which the optimizer will evaluate when generating an execution plan for the cursor.

This test checks the number of parses of statements that were not already in the cache. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.

Data Source

DeltaParses / Seconds
where DeltaParses is the difference in the 'parse count (hard)' statistic between sample points and Seconds is the elapsed time, in seconds, between samples.

Parameters

For information about setting appropriate threshold values, see Baselining Threshold Values.

Output

Instance name and hard parses per second.

Recommended Frequency

5 minutes

User Action

If there appears to be excessive time spent parsing, evaluate SQL statements to determine which can be modified to optimize shared SQL pool memory use and avoid unnecessary statement reparsing. This type of problem is commonly caused when similar SQL statements are written which differ in space, case, or some combination of the two. You may also consider using bind variables rather than explicitly specified constants in your statements whenever possible.

The Top Sessions by Hard Parse Count chart will show you which sessions are incurring the most hard parses. Hard parses happen when the server parses a query and cannot find an exact match for the query in the library cache. Hard parses can be avoided by sharing SQL statements efficiently. The use of bind variables instead of literals in queries is one method to increase sharing.

By showing you which sessions are incurring the most hard parses, this chart may lead you to the application or programs that are the best candidates for SQL rewrites.

Also, examine SQL statements which can be modified to optimize shared SQL pool memory use and avoid unnecessary statement reparsing. This type of problem is commonly caused when similar SQL statements are written which differ in space, case, or some combination of the two. You may also consider using bind variables rather than explicitly specified constants in your statements whenever possible.

To identify potential similar SQL statements, use the Similar SQL Statements Chart. This will show you which statements are similar in the first 'n' characters and how many versions of that statement segment are in the cache. Hard parses can also be forced by SQL statements aging out of the cache because the shared pool area is too small. The Shared Pool Size Chart lets you see your current shared pool allocation and potentially increase it.

The SHARED_POOL_SIZE initialization parameter controls the total size of the shared pool. Consider increasing the SHARED_POOL_SIZE to decrease the frequency in which SQL requests are being flushed from the shared pool to make room for new requests.

To take advantage of the additional memory available for shared SQL areas, you may also need to increase the number of cursors permitted per session. You can increase this limit by increasing the value of the initialization parameter OPEN_CURSORS.
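
As a minimal illustration of the bind variable recommendation (the table and column names are hypothetical), the first form below creates a new cursor, and potentially a hard parse, for every distinct literal, while the second form shares a single cursor:

-- one cursor per literal value; each new value can force a hard parse
SELECT order_total FROM orders WHERE order_id = 1001;
SELECT order_total FROM orders WHERE order_id = 1002;

-- bind variable form; one shared cursor is reused for all values
SELECT order_total FROM orders WHERE order_id = :order_id;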

Parses (Hard) Per Transaction

Description

This data item represents the number of hard parses per transaction during this sample period. A hard parse occurs when a SQL statement has to be loaded into the shared pool. In this case, the Oracle Server has to allocate memory in the shared pool and parse the statement.

Each time a particular SQL cursor is parsed, this count will increase by one. There are certain operations which will cause a SQL cursor to be parsed. Parsing a SQL statement breaks it down into atomic steps which the optimizer will evaluate when generating an execution plan for the cursor.

The value of this statistic will be zero if there have not been any write or update transactions committed or rolled back during the last sample period. If the bulk of the activity to the database is read only, the corresponding "per second" data item of the same name will be a better indicator of current performance.

This test checks the number of hard parses per transaction during this sample period. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.

Data Source

DeltaParses / Transactions
where DeltaParses is the difference in the 'parse count (hard)' statistic between sample points and Transactions is the number of commits and rollbacks during the sample period.

Parameters

For information about setting appropriate threshold values, see Baselining Threshold Values.

Output

Instance name and hard parses per transaction.

Recommended Frequency

5 minutes

User Action

If there appears to be excessive time spent parsing, evaluate SQL statements to determine which can be modified to optimize shared SQL pool memory use and avoid unnecessary statement reparsing. This type of problem is commonly caused when similar SQL statements are written which differ in space, case, or some combination of the two. You may also consider using bind variables rather than explicitly specified constants in your statements whenever possible.

The Top Sessions by Hard Parse Count chart will show you which sessions are incurring the most hard parses. Hard parses happen when the server parses a query and cannot find an exact match for the query in the library cache. Hard parses can be avoided by sharing SQL statements efficiently. The use of bind variables instead of literals in queries is one method to increase sharing.

By showing you which sessions are incurring the most hard parses, this chart may lead you to the application or programs that are the best candidates for SQL rewrites.

Also, examine SQL statements which can be modified to optimize shared SQL pool memory use and avoid unnecessary statement reparsing. This type of problem is commonly caused when similar SQL statements are written which differ in space, case, or some combination of the two. You may also consider using bind variables rather than explicitly specified constants in your statements whenever possible.

To identify potential similar SQL statements, use the Similar SQL Statements Chart. This will show you which statements are similar in the first 'n' characters and how many versions of that statement segment are in the cache. Hard parses can also be forced by SQL statements aging out of the cache because the shared pool area is too small. The Shared Pool Size Chart lets you see your current shared pool allocation and potentially increase it.

The SHARED_POOL_SIZE initialization parameter controls the total size of the shared pool. Consider increasing the SHARED_POOL_SIZE to decrease the frequency in which SQL requests are being flushed from the shared pool to make room for new requests.

To take advantage of the additional memory available for shared SQL areas, you may also need to increase the number of cursors permitted per session. You can increase this limit by increasing the value of the initialization parameter OPEN_CURSORS.

Parses (Total) Per Second

Description

This number reflects the total number of parses per second, both hard and soft. A hard parse occurs when a SQL statement has to be loaded into the shared pool. In this case, the Oracle Server has to allocate memory in the shared pool and parse the statement. A soft parse is recorded when the Oracle Server checks the shared pool for a SQL statement and finds a version of the statement that it can reuse.

Each time a particular SQL cursor is parsed, this count will increase by one. There are certain operations which will cause a SQL cursor to be parsed. Parsing a SQL statement breaks it down into atomic steps which the optimizer will evaluate when generating an execution plan for the cursor.

This test checks the number of parse calls per second. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.

Data Source

DeltaParses / Seconds
where DeltaParses is the difference in the 'parse count (total)' statistic between sample points and Seconds is the elapsed time, in seconds, between samples.

Parameters

For information about setting appropriate threshold values, see Baselining Threshold Values.

Output

Instance name and number of parse calls per second.

Recommended Frequency

5 minutes

User Action

If there appears to be excessive time spent parsing, evaluate SQL statements to determine which can be modified to optimize shared SQL pool memory use and avoid unnecessary statement reparsing. This type of problem is commonly caused when similar SQL statements are written which differ in space, case, or some combination of the two. You may also consider using bind variables rather than explicitly specified constants in your statements whenever possible.

The Top Sessions by Hard Parse Count chart will show you which sessions are incurring the most hard parses. Hard parses happen when the server parses a query and cannot find an exact match for the query in the library cache. Hard parses can be avoided by sharing SQL statements efficiently. The use of bind variables instead of literals in queries is one method to increase sharing.

By showing you which sessions are incurring the most hard parses, this chart may lead you to the application or programs that are the best candidates for SQL rewrites.

Also, examine SQL statements which can be modified to optimize shared SQL pool memory use and avoid unnecessary statement reparsing. This type of problem is commonly caused when similar SQL statements are written which differ in space, case, or some combination of the two. You may also consider using bind variables rather than explicitly specified constants in your statements whenever possible.

To identify potential similar SQL statements, use the Similar SQL Statements Chart. This will show you which statements are similar in the first 'n' characters and how many versions of that statement segment are in the cache. Hard parses can also be forced by SQL statements aging out of the cache because the shared pool area is too small. The Shared Pool Size Chart lets you see your current shared pool allocation and potentially increase it.

The SHARED_POOL_SIZE initialization parameter controls the total size of the shared pool. Consider increasing the SHARED_POOL_SIZE to decrease the frequency in which SQL requests are being flushed from the shared pool to make room for new requests.

To take advantage of the additional memory available for shared SQL areas, you may also need to increase the number of cursors permitted per session. You can increase this limit by increasing the value of the initialization parameter OPEN_CURSORS.

Parses (Total) Per Transaction

Description

This number reflects the total number of parses per transaction, both hard and soft. A hard parse occurs when a SQL statement has to be loaded into the shared pool. In this case, the Oracle Server has to allocate memory in the shared pool and parse the statement. A soft parse is recorded when the Oracle Server checks the shared pool for a SQL statement and finds a version of the statement that it can reuse.

Each time a particular SQL cursor is parsed, this count will increase by one. There are certain operations which will cause a SQL cursor to be parsed. Parsing a SQL statement breaks it down into atomic steps which the optimizer will evaluate when generating an execution plan for the cursor.

This test checks the number of parse calls per transaction. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.

Data Source

DeltaParses / Transactions
where DeltaParses is the difference in the 'parse count (total)' statistic between sample points and Transactions is the number of commits and rollbacks during the sample period.

Parameters

For information about setting appropriate threshold values, see Baselining Threshold Values.

Output

Instance name and number of parse calls per transaction.

Recommended Frequency

5 minutes

User Action

If there appears to be excessive time spent parsing, evaluate SQL statements to determine which can be modified to optimize shared SQL pool memory use and avoid unnecessary statement reparsing. This type of problem is commonly caused when similar SQL statements are written which differ in space, case, or some combination of the two. You may also consider using bind variables rather than explicitly specified constants in your statements whenever possible.

The Top Sessions by Hard Parse Count chart will show you which sessions are incurring the most hard parses. Hard parses happen when the server parses a query and cannot find an exact match for the query in the library cache. Hard parses can be avoided by sharing SQL statements efficiently. The use of bind variables instead of literals in queries is one method to increase sharing.

By showing you which sessions are incurring the most hard parses, this chart may lead you to the application or programs that are the best candidates for SQL rewrites.

Also, examine SQL statements which can be modified to optimize shared SQL pool memory use and avoid unnecessary statement reparsing. This type of problem is commonly caused when similar SQL statements are written which differ in space, case, or some combination of the two. You may also consider using bind variables rather than explicitly specified constants in your statements whenever possible.

To identify potential similar SQL statements, use the Similar SQL Statements Chart. This will show you which statements are similar in the first 'n' characters and how many versions of that statement segment are in the cache. Hard parses can also be forced by SQL statements aging out of the cache because the shared pool area is too small. The Shared Pool Size Chart lets you see your current shared pool allocation and potentially increase it.

The SHARED_POOL_SIZE initialization parameter controls the total size of the shared pool. Consider increasing the SHARED_POOL_SIZE to decrease the frequency in which SQL requests are being flushed from the shared pool to make room for new requests.

To take advantage of the additional memory available for shared SQL areas, you may also need to increase the number of cursors permitted per session. You can increase this limit by increasing the value of the initialization parameter OPEN_CURSORS.

Physical Reads Per Second

Description

This data item represents the number of data blocks read from disk per second during this sample period. When a user performs a SQL query, Oracle tries to retrieve the data from the database buffer cache (memory) first, and then reads from disk if it is not already in memory. Reading data blocks from disk is much more expensive than reading them from memory. The goal with Oracle should always be to maximize memory utilization.

This test checks the data blocks read from disk per second. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.

Data Source

DeltaPhysicalReads / Seconds
where DeltaPhysicalReads is the difference in the 'physical reads' statistic between sample points and Seconds is the elapsed time, in seconds, between samples.

Parameters

For information about setting appropriate threshold values, see Baselining Threshold Values.

Output

Instance name and physical reads per second.

Recommended Frequency

5 minutes

User Action

Block reads are inevitable so the aim should be to minimize unnecessary IO. This is best achieved by good application design and efficient execution plans. Changes to execution plans can yield profound changes in performance. Tweaking at system level usually only achieves percentage gains.

To view I/O on a per session basis to determine which sessions are responsible for your physical reads, you should use the Top Sessions by Physical Reads Chart. This approach allows you to identify problematic sessions and then drill down to their current SQL statement and perform tuning from there.

To identify the SQL that is responsible for the largest portion of physical reads, use the Top SQL (Physical Reads) Chart. This chart allows you to quickly determine which SQL statements are causing your I/O activity. From this chart you can view the full text of the SQL statement. Further investigation on how to best tune a particular SQL statement can be done by using the Explain Plan drilldown, or if you are on NT, the Tune SQL Statement drilldown as well.

The difference between the two methods for identifying problematic SQL is that the Top Sessions view displays sessions that are performing the most physical reads at the moment. The Top SQL view displays the SQL statements that are still in the SQL cache that have performed the most I/O over their lifetime. A SQL statement could show up in the Top SQL view that is not currently being executed.

If the SQL statements are properly tuned and optimized, consider the following suggestions. A larger buffer cache may help - test this by actually increasing DB_BLOCK_BUFFERS. Do not use DB_BLOCK_LRU_EXTENDED_STATISTICS, as this may introduce other performance issues. Never increase the SGA size if it may induce additional paging or swapping on the system.
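
Because DB_BLOCK_BUFFERS is a static parameter, a change means editing the initialization parameter file and restarting the instance. The current setting can first be checked with a query such as this sketch:

SELECT name, value
  FROM V$PARAMETER
 WHERE name = 'db_block_buffers';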

A less obvious issue which can affect the I/O rates is how well data is clustered physically. For example, assume that you frequently fetch rows from a table where a column is between two values via an index scan. If there are 100 rows in each index block, then the two extremes are:

1. Each of the table rows is in a different physical block (100 blocks need to be read for each index block).

2. The table rows are all located in the few adjacent blocks (a handful of blocks need to be read for each index block).

Pre-sorting or reorganizing data can improve this situation in severe cases as well.

To view more detailed I/O statistics, the I/O at a Glance Chart displays some overall I/O statistics as well as a breakdown of the top files being accessed within the database.

For a view of I/O statistics on a per transaction basis, the I/O Rates per Transaction Chart shows you the average amount of I/O incurred by each transaction. These transaction-based rates can provide you with a more consistent view of the I/O activity within the database. The transaction-based statistics may not fluctuate during the day as much as the per second numbers do.

Physical Reads Per Transaction

Description

This data item represents the number of disk reads per transaction during the sample period. When a user performs a SQL query, Oracle tries to retrieve the data from the database buffer cache (memory) first, then goes to disk if it is not in memory already. Reading data blocks from disk is much more expensive than reading the data blocks from memory. The goal with Oracle should always be to maximize memory utilization.

The value of this statistic will be zero if there have not been any write or update transactions committed or rolled back during the last sample period. If the bulk of the activity to the database is read only, the corresponding "per second" data item of the same name will be a better indicator of current performance.

This test checks the data blocks read from disk per transaction. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.

Data Source

DeltaReads / Transactions
where DeltaReads is the difference in the 'physical reads' statistic between sample points and Transactions is the number of commits and rollbacks during the sample period.

Parameters

For information about setting appropriate threshold values, see Baselining Threshold Values.

Output

Instance name and physical reads per transaction.

Recommended Frequency

5 minutes

User Action

Block reads are inevitable so the aim should be to minimize unnecessary IO. This is best achieved by good application design and efficient execution plans. Changes to execution plans can yield orders of magnitude changes in performance. Tweaking at system level usually only achieves percentage gains.

To identify the SQL that is responsible for the largest portion of physical reads, use the Top SQL (Physical Reads) Chart. This chart will allow you to quickly determine which SQL statements are causing the I/O activity. From this chart you can view the full text of the SQL statement. Further investigation on how to best tune a particular SQL statement can be done by using the Explain Plan drilldown, or if you are on NT, the Tune SQL Statement drilldown as well.

To view I/O on a per session basis to determine which sessions are responsible for your physical reads, you can use the Top Sessions by Physical Reads Chart. This approach allows you to identify problematic sessions and then drill down to their current SQL statement to perform tuning.

If the SQL statements are properly tuned and optimized the following suggestions may help. A larger buffer cache may help - test this by actually increasing DB_BLOCK_BUFFERS and not by using DB_BLOCK_LRU_EXTENDED_STATISTICS. Never increase the SGA size if it will induce additional paging or swapping on the system.

A less obvious issue which can affect the I/O rates is how well data is clustered physically. For example, assume that you frequently fetch rows from a table where a column is between two values via an index scan. If there are 100 rows in each index block, then the two extremes are:

1. Each of the table rows is in a different physical block (100 blocks need to be read for each index block).

2. The table rows are all located in the few adjacent blocks (a handful of blocks need to be read for each index block).

Pre-sorting or reorganizing data can help to address this in severe cases as well.

Physical Writes Per Second

Description

This data item represents the number of disk writes per second during the sample period. This statistic represents the rate of database blocks written from the SGA buffer cache to disk by the DBWR background process, and from the PGA by processes performing direct writes.

This test checks the data blocks written to disk per second. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.

Data Source

DeltaWrites / Seconds
where DeltaWrites is the difference in the 'physical writes' statistic between sample points and Seconds is the elapsed time, in seconds, between samples.

Parameters

For information about setting appropriate threshold values, see Baselining Threshold Values.

Output

Instance name and physical writes per second.

Recommended Frequency

5 minutes

User Action

Because this statistic shows both DBWR writes as well as direct writes by sessions, you should view the 'physical writes direct' statistic to determine where the write activity is actually occurring. If the physical writes direct value comprises a large portion of the writes, then there are probably many sorts or writes to temporary tablespaces occurring. You can investigate further into sort activity by viewing the Top Sessions by Disk Sorts Chart.

If the majority of the writes are not direct, they are being performed by the DBWR process. This is only a problem if log writer or redo waits are showing up in the Sessions Waiting for this Event Chart or the Top Waits by Time Waited Chart. These charts can be found at the bottom of the database overview chart.

You can also view DBWR performance and health in the DBWR Analysis Chart. This chart comprises a number of charts that break down the DBWR activity into categories that determine whether it is functioning efficiently.

To immediately analyze where the I/O operations are actually taking place, view the File Write Operations Chart. This chart lists the top data files by the number of writes per second. Similarly, to determine which files have the slowest access times use the Average Cost of I/O Chart. This chart lists which files are the slowest to access resulting in bottlenecks in your system.

Lastly, to view Background Process I/O statistics on a per transaction basis the Background Process I/O - Per Transaction Chart displays the amount of I/O attributed to each transaction on average.

Physical Writes Per Transaction

Description

This data item represents the number of disk writes per transaction during the sample period.

The value of this statistic is zero if there have not been any write or update transactions committed or rolled back during the last sample period. If the bulk of the activity to the database is read only, the corresponding "per second" data item of the same name is a better indicator of current performance.

This test checks the data blocks written to disk per transaction. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.

Data Source

DeltaWrites / Transactions
where DeltaWrites is the difference in the 'physical writes' statistic between sample points and Transactions is the number of commits and rollbacks during the sample period.

Parameters

For information about setting appropriate threshold values, see Baselining Threshold Values.

Output

Instance name and physical writes per transaction.

Recommended Frequency

5 minutes

User Action

Because this statistic shows both DBWR writes as well as direct writes by sessions, you should view the 'physical writes direct' statistic to determine where the write activity is really occurring. If the physical writes direct value comprises a large portion of the writes, then there are likely many sorts or writes to temporary tablespaces occurring. Further investigation into sort activity can be done with the Top Sessions by Disk Sorts Chart.

If the majority of the writes are not direct, they are being performed by the DBWR process. This will typically only be a problem if log writer or redo waits are showing up in the Sessions Waiting for this Event Chart or the Top Waits by Time Waited Chart.

Look for DBWR performance and health in the DBWR Analysis Chart. This chart comprises a number of charts that break down the DBWR activity and the statistics that determine whether it is functioning efficiently.

Probe

Description

This event test checks whether a new connection can be established to a database. If the maximum number of users is exceeded or the listener is down, this test is triggered.

Parameters

None

Output

None

Recommended Frequency

10 minutes

User Action

Check the status of the listener to make sure it is running on the node where the event was triggered. If the listener is running, check to see if the number of users is at the session limit.

Note: The choice of user credentials for the Probe event test should be considered carefully. If the preferred user has the RESTRICTED SESSION privilege, the user will be able to connect to a database even if the LICENSE_MAX_SESSIONS limit is reached.

Process Limit

Description

The PROCESSES initialization parameter specifies the maximum number of operating system user processes that can simultaneously connect to a database. This number also includes the background processes utilized by the instance.

This event test checks for the utilization of the process resource against the values (percentage) specified by the threshold arguments. If the percentage of all current processes to the limit set in the PROCESSES initialization parameter exceeds the values specified in the threshold arguments, then a warning or critical alert is generated.

Example

If 40 processes are currently connected and the value of PROCESSES is 50, the percentage is 80% (40/50 x 100). This value is compared against the specified thresholds.

Parameters
Output

Current value and the limit specified by PROCESSES

Recommended Frequency

30 seconds

User Action

Verify that the current PROCESSES instance parameter setting has not exceeded the operating system-dependent maximum. Increase the number of processes to be at least 6 + the maximum number of concurrent users expected to log in to the instance.
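
To compare the current process count against the limit, a query such as the following sketch can be used:

SELECT p.cnt AS current_processes, v.value AS processes_limit
  FROM (SELECT COUNT(*) cnt FROM V$PROCESS) p,
       (SELECT value FROM V$PARAMETER WHERE name = 'processes') v;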

Recursive Calls Per Second

Description

This data item represents the number of recursive calls per second during the sample period.

Sometimes, to execute a SQL statement issued by a user, the Oracle Server must issue additional statements. Such statements are called recursive calls or recursive SQL statements. For example, if you insert a row into a table that does not have enough space to hold that row, the Oracle Server makes recursive calls to allocate the space dynamically if dictionary managed tablespaces are being used. Recursive calls are also generated:

• When data dictionary information is not available in the data dictionary cache and must be retrieved from disk

• In the firing of database triggers

• In the execution of DDL statements

• In the execution of SQL statements within stored procedures, functions, packages and anonymous PL/SQL blocks

• In the enforcement of referential integrity constraints

This test checks the number of recursive SQL calls per second. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.

Data Source

DeltaRecursiveCalls / Seconds
where DeltaRecursiveCalls is the difference in the 'recursive calls' statistic between sample points and Seconds is the elapsed time, in seconds, between samples.

Parameters

For information about setting appropriate threshold values, see Baselining Threshold Values.

Output

Instance name and recursive calls per second.

Recommended Frequency

5 minutes

User Action

If the Oracle Server appears to be making excessive recursive calls while your application is running, determine what activity is causing these recursive calls. If you determine that the recursive calls are caused by dynamic extension, either reduce the frequency of extension by allocating larger extents or, if you are using Oracle 8i, consider taking advantage of locally managed tablespaces.
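
As a minimal sketch of the locally managed tablespace suggestion (the tablespace name, file name, and sizes are hypothetical), uniform extents can be allocated with:

CREATE TABLESPACE app_data
  DATAFILE '/u01/oradata/app_data01.dbf' SIZE 100M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;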

Recursive Calls Per Transaction

Description

This data item represents the number of recursive calls per transaction during the sample period.

Sometimes, to execute a SQL statement issued by a user, the Oracle Server must issue additional statements. Such statements are called recursive calls or recursive SQL statements. For example, if you insert a row into a table that does not have enough space to hold that row, the Oracle Server makes recursive calls to allocate the space dynamically if dictionary managed tablespaces are being used. Recursive calls are also generated:

• When data dictionary information is not available in the data dictionary cache and must be retrieved from disk

• In the firing of database triggers

• In the execution of DDL statements

• In the execution of SQL statements within stored procedures, functions, packages and anonymous PL/SQL blocks

• In the enforcement of referential integrity constraints

The value of this statistic will be zero if there have not been any write or update transactions committed or rolled back during the last sample period. If the bulk of the activity to the database is read only, the corresponding "per second" data item of the same name will be a better indicator of current performance.

This test checks the number of calls that result in changes to internal tables. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.

Data Source

DeltaRecursiveCalls / Transactions
where DeltaRecursiveCalls is the difference in the 'recursive calls' statistic between sample points and Transactions is the number of commits and rollbacks during the sample period.

Parameters

For information about setting appropriate threshold values, see Baselining Threshold Values.

Output

Instance name and recursive calls per transaction.

Recommended Frequency

5 minutes

User Action

If the Oracle Server appears to be making excessive recursive calls while your application is running, determine what activity is causing these recursive calls. If you determine that the recursive calls are caused by dynamic extension, either reduce the frequency of extension by allocating larger extents or, if you are using Oracle 8i, consider taking advantage of locally managed tablespaces.

Redo Log Allocation Hit

Description

Redo log entries contain a record of changes that have been made to the database block buffers. The log writer (LGWR) process writes redo log entries from the log buffer to a redo log file. The log buffer should be sized so that space is available in the log buffer for new entries, even when access to the redo log is heavy. When the log buffer is undersized, user processes will be delayed as they wait for the LGWR to free space in the redo log buffer.

The redo log buffer efficiency, as measured by the hit ratio, records the percentage of times users did not have to wait for the log writer to free space in the redo log buffer.

This event test monitors the redo log buffer hit ratio (percentage of success) against the values specified by the threshold arguments. If the hit ratio is smaller than the values specified, then a warning or critical alert is generated.

Parameters
Output
Recommended Frequency

30 seconds

User Action

The LOG_BUFFER initialization parameter determines the amount of memory that is used when buffering redo entries to the redo log file.

Consider increasing the LOG_BUFFER initialization parameter in order to increase the size of the redo log buffer. Redo log entries contain a record of the changes that have been made to the database block buffers. The log writer process (LGWR) writes redo log entries from the log buffer to a redo log. The redo log buffer should be sized so space is available in the log buffer for new entries, even when access to the redo log is heavy.
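
LOG_BUFFER is a static parameter, so a change requires editing the initialization parameter file and restarting the instance. The current setting can be checked with a query such as this sketch:

SELECT name, value
  FROM V$PARAMETER
 WHERE name = 'log_buffer';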

Note: For Oracle Intelligent Agent release 9i, this event test has been obsoleted. It is recommended that you use the Redo NoWait Ratio event test. This event test is kept for backward compatibility with older versions of the Intelligent Agent.

Redo No Wait %

Description

Redo log entries contain a record of changes that have been made to the database block buffers. The log writer (LGWR) process writes redo log entries from the log buffer to a redo log file. The log buffer should be sized so that space is available in the log buffer for new entries, even when access to the redo log is heavy. When the log buffer is undersized, user processes will be delayed as they wait for the LGWR to free space in the redo log buffer.

This data item represents the redo log buffer efficiency, as measured by the percentage of times users did not have to wait for the log writer to free space in the redo log buffer.

This test checks the percentage of times redo entries are allocated without having to wait. If the value is less than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.

Data Source

((DeltaRedoEntries - DeltaRedoLogSpaceRequests) / DeltaRedoEntries) * 100
where DeltaRedoEntries and DeltaRedoLogSpaceRequests are the differences in the 'redo entries' and 'redo log space requests' statistics between sample points.

Parameters

For information about setting appropriate threshold values, see Baselining Threshold Values.

Output

Instance name and redo no wait percentage.

Recommended Frequency

5 minutes

User Action

The LOG_BUFFER initialization parameter determines the amount of memory that is used when buffering redo entries to the redo log file.

Consider increasing the LOG_BUFFER initialization parameter to increase the size of the redo log buffer if waiting is a problem. Redo log entries contain a record of the changes that have been made to the database block buffers. The log writer process (LGWR) writes redo log entries from the log buffer to a redo log. The redo log buffer should be sized so space is available in the log buffer for new entries, even when access to the redo log is heavy.
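
To observe the statistics behind this ratio directly, a query along these lines can be used (the values are cumulative since instance startup rather than the per-sample deltas the event test uses):

SELECT name, value
  FROM V$SYSSTAT
 WHERE name IN ('redo entries', 'redo log space requests');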

Redo Write Per Second

Description

This data item represents the number of redo write operations per second during this sample period.

The redo log buffer is a circular buffer in the SGA that holds information about changes made to the database. This information is stored in redo entries. Redo entries contain the information necessary to reconstruct, or redo, changes made to the database by INSERT, UPDATE, DELETE, CREATE, ALTER or DROP operations. Redo entries can be used for database recovery if necessary.

The log writer process (LGWR) is responsible for redo log buffer management; that is, writing the redo log buffer to a redo log file on disk.

This test checks the number of writes by LGWR to the redo log files per second. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.

Data Source

DeltaRedoWrites / Seconds
where DeltaRedoWrites is the difference in the 'redo writes' statistic between sample points and Seconds is the elapsed time, in seconds, between samples.

Parameters

For information about setting appropriate threshold values, see Baselining Threshold Values.

Output

Instance name and redo writes per second.

Recommended Frequency

5 minutes

User Action

The LOG_BUFFER initialization parameter determines the amount of memory that is used when redo entries are buffered to the redo log file.

Should waiting be a problem, consider increasing the LOG_BUFFER initialization parameter to increase the size of the redo log buffer. Redo log entries contain a record of the changes that have been made to the database block buffers. The log writer process (LGWR) writes redo log entries from the log buffer to a redo log. The redo log buffer should be sized so space is available in the log buffer for new entries, even when access to the redo log is heavy.

A drilldown is available to the Redo Analysis Chart for further analysis of the LGWR process. This event indicates that the LGWR process needs to be sped up, which is usually constrained by disk throughput. Do not place redo logs on RAID 5, which is inefficient for writes; instead, use multiplexed redo logs on separate spindles or on mirrored disks.

Lastly, if you would like to view Background Process I/O statistics on a per transaction basis the Background Process I/O - Per Transaction Chart displays the amount of I/O attributed to each transaction on average.

Redo Write Per Transaction

Description

This data item represents the number of redo write operations per transaction during this sample period.

The redo log buffer is a circular buffer in the SGA that holds information about changes made to the database. This information is stored in redo entries. Redo entries contain the information necessary to reconstruct, or redo, changes made to the database by INSERT, UPDATE, DELETE, CREATE, ALTER or DROP operations. Redo entries are used for database recovery, if necessary.

The log writer process (LGWR) is responsible for redo log buffer management; that is, writing the redo log buffer to a redo log file on disk.

This test checks the number of writes by LGWR to the redo log files per transaction. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.

Data Source

DeltaRedoWrites /(DeltaCommits+DeltaRollbacks)
where DeltaRedoWrites is the difference in the 'redo writes' statistic between sample points and DeltaCommits and DeltaRollbacks are the differences in the 'user commits' and 'user rollbacks' statistics.

Parameters

For information about setting appropriate threshold values, see Baselining Threshold Values.

Output

Instance name and redo writes per transaction.

Recommended Frequency

5 minutes

User Action

The LOG_BUFFER initialization parameter determines the amount of memory that is used when buffering redo entries to the redo log file.

If waiting is a problem, consider increasing the LOG_BUFFER initialization parameter to increase the size of the redo log buffer. Redo log entries contain a record of the changes that have been made to the database block buffers. The log writer process (LGWR) writes redo log entries from the log buffer to a redo log. The redo log buffer should be sized so space is available in the log buffer for new entries, even when access to the redo log is heavy.

Response Time Per Execute

Description

Using only statistics available within the database, this data item gives the best approximation of response time, in seconds, per SQL statement execution. This statistic may be more valid than response time per transaction as it shows accurate values even for read-only access.

This test checks the response time, in seconds, per SQL statement execution during the sample period. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.

Data Source

(DeltaTotalWait + DeltaCpuTime) / (DeltaUserCalls + DeltaRecursiveCalls)
where DeltaTotalWait is the difference in total wait time between sample points, DeltaCpuTime is the difference in CPU time consumed, and DeltaUserCalls and DeltaRecursiveCalls are the differences in the 'user calls' and 'recursive calls' statistics.

Parameters

For information about setting appropriate threshold values, see Baselining Threshold Values.

Output

Instance name and average response time in seconds per execute.

Recommended Frequency

5 minutes

User Action

Investigate further into which component, waits or CPU, is responsible for the majority of the response time and then continue diagnosis.

Response Time Per Transaction

Description

Using only statistics available within the database, this data item gives the best approximation of response time, in seconds, per transaction during this sample period.

This test checks the response time, in seconds, per transaction during this sample period. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.

Data Source

(DeltaTotalWait + DeltaCpuTime) / (DeltaCommits + DeltaRollbacks)
where DeltaTotalWait is the difference in total wait time between sample points, DeltaCpuTime is the difference in CPU time consumed, and DeltaCommits and DeltaRollbacks are the differences in the 'user commits' and 'user rollbacks' statistics.

Parameters

For information about setting appropriate threshold values, see Baselining Threshold Values.

Output

Instance name and average response time in seconds per transaction.

Recommended Frequency

5 minutes

User Action

Investigate further into which component, waits or CPU, is responsible for the majority of the response time and then continue diagnosis.

Rollback Contention

Description

Rollback segments are portions of the database that record the actions of transactions in case a transaction is rolled back. Rollback segments are used to provide read consistency, support rollback transactions, and recover a database.

Proper allocation of rollback segments makes for optimal database performance. Using a sufficient number of rollback segments distributes rollback segment contention across many segments and improves performance.

Contention for rollback segments is reflected by contention for buffers that contain rollback segment blocks.

This event test monitors the rollback segment miss ratio (percentage) against the values specified by the threshold arguments. If the miss ratio is greater than the values specified, then a warning or critical alert is generated.

Parameters
Output
Recommended Frequency

30 seconds

User Action

To reduce contention for buffers containing rollback segment blocks, create additional rollback segments. The general guideline for choosing how many rollback segments to allocate is based on the number and type of concurrently active transactions on your database. Allocate one rollback segment for each 10 concurrent OLTP (online transaction processing) transactions and one rollback segment for each concurrent batch job.

In addition, when creating a rollback segment keep extents within a rollback the same size by specifying STORAGE parameters where NEXT=INITIAL.

It is also recommended that you set the MINEXTENTS parameter value to 20. Because rollback segments are logically regarded as a circular queue of extents, they are required to have a MINEXTENTS value of at least 2. The probability that a rollback segment will require a new extent allocation depends on how likely the next extents are to contain active undo. The more extents the rollback segment has, the less likely it is that the segment will require an extent allocation that could have been avoided. Administrators should therefore create rollback segments with many extents. Naturally, there is a point of diminishing returns: the probability of extending declines rapidly as the number of extents increases, and beyond 20 extents the incremental decrease fails to justify the cost of the additional extent.
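
For illustration only (the segment name, tablespace, and extent sizes are hypothetical), a rollback segment following these guidelines could be created with:

CREATE ROLLBACK SEGMENT rbs01
  TABLESPACE rbs
  STORAGE (INITIAL 512K NEXT 512K MINEXTENTS 20);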

Rollbacks Per Second

Description

This data item represents the number of times per second, during the sample period, that users manually issued the ROLLBACK statement or an error occurred during a user's transactions.

This test checks the number of rollbacks per second. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.

Data Source

DeltaRollbacks / Seconds
where DeltaRollbacks is the difference in the 'user rollbacks' statistic between sample points and Seconds is the elapsed time, in seconds, between samples.

Parameters

For information about setting appropriate threshold values, see Baselining Threshold Values.

Output

Instance name and the number of rollbacks per second.

Recommended Frequency

5 minutes

User Action

This value shows how often users are issuing the ROLLBACK statement or encountering errors in their transactions. Further investigation should be made to determine if the rollbacks are part of some faulty application logic or due to errors occurring through database access.

Rollbacks Per Transaction

Description

This data item represents the number of times per transaction, during the sample period, that users manually issued the ROLLBACK statement or an error occurred during a user's transactions.

The value of this statistic will be zero if there have not been any write or update transactions committed or rolled back during the last sample period. If the bulk of the activity to the database is read only, the corresponding "per second" data item of the same name will be a better indicator of current performance.

This test checks the number of rollbacks per transaction. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.

Data Source

DeltaRollbacks / Transactions
where DeltaRollbacks is the difference in the 'user rollbacks' statistic between sample points and Transactions is the number of commits and rollbacks during the sample period.

Parameters

For information about setting appropriate threshold values, see Baselining Threshold Values.

Output

Instance name and the number of rollbacks per transaction.

Recommended Frequency

5 minutes

User Action

This value shows how often users are issuing the ROLLBACK statement or encountering errors in their transactions. Further investigation should be made to determine if the rollbacks are part of some faulty application logic or due to errors occurring through database access.

Session Limit

Description

The SESSIONS initialization parameter specifies the maximum number of concurrent connections that the database will allow.

This event test checks for the utilization of the session resource against the values (percentage) specified by the threshold arguments. If the percentage of the number of sessions, including background processes, to the limit set in the SESSIONS initialization parameter exceeds the values specified in the threshold arguments, then a warning or critical alert is generated.

Example

If there are 20 sessions and the value of SESSIONS is 25, the percentage is 80% (20/25 x 100). This value is compared against the specified thresholds.

Parameters
Output

Current value and the limit specified by SESSIONS

Recommended Frequency

30 seconds

User Action

Increase the SESSIONS instance parameter. For XA environments, confirm that SESSIONS is at least 2.73 * PROCESSES. For shared server environments, confirm that SESSIONS is at least 1.1 * maximum number of connections.
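
To compare the current session count against the limit, a query such as the following sketch can be used:

SELECT s.cnt AS current_sessions, v.value AS sessions_limit
  FROM (SELECT COUNT(*) cnt FROM V$SESSION) s,
       (SELECT value FROM V$PARAMETER WHERE name = 'sessions') v;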

Session Terminated

Description

This event test signifies that a session terminated unexpectedly since the last sample time. The ALERT file is a special trace file containing a chronological log of messages and errors. An alert is displayed when session unexpectedly terminated (ORA-00603) messages are written to the ALERT file.

Parameters

None

Output

ALERT log error messages since the last sample time

Recommended Frequency

30 seconds

User Action

Examine the ALERT log and the session trace file for additional information. Note: This event does not automatically clear since there is no automatic way of determining when the problem has been resolved. Hence, you need to manually clear the event once the problem is fixed.

Snapshot Log Full

Description

A master table's snapshot log keeps track of fast refresh data for all corresponding snapshots. When a snapshot log is created for a master table, Oracle creates an underlying table to support the snapshot log. Oracle automatically tracks which rows in a snapshot log have been used during the refreshes of snapshots, and purges those rows from the log. Oracle does not delete rows from the log until all snapshots have used them. As a result, in certain situations a snapshot log can grow indefinitely when multiple snapshots are based on the same master table. It is best to always try to keep a snapshot log as small as possible to minimize the database space that it uses.

This event test checks whether a snapshot log is too large. In order to do this, the test determines the number of snapshot log tables containing more rows than specified by the Snapshot log's table size parameter. If this number is greater than the threshold value specified in the threshold argument, then an alert is generated.

Parameters
Output

Snapshot log table and its size

Recommended Frequency

30 seconds

User Action

To remove rows from a snapshot log and free up space for newer log records, you can refresh the snapshots associated with the log so that Oracle can purge rows from it, or you can manually purge records in the log by deleting the rows required only by the n least recently refreshed snapshots.

To manually purge rows from a snapshot log, execute the PURGE_LOG stored procedure of the DBMS_SNAPSHOT package at the database that contains the log.
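
As an illustration (the master table name and argument values are hypothetical), the rows needed only by the single least recently refreshed snapshot could be purged with:

EXECUTE DBMS_SNAPSHOT.PURGE_LOG(master => 'SCOTT.EMP', num => 1, flag => 'DELETE');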

Note: Running the Snapshot Log Full event test may be a resource-intensive operation. Therefore, Oracle recommends running the Snapshot Log Full event test during off-peak periods.

Soft Parse %

Description

A soft parse is recorded when the Oracle Server checks the shared pool for a SQL statement and finds a version of the statement that it can reuse.

This data item represents the percentage of parse requests where the cursor was already in the cursor cache compared to the number of total parses. This ratio provides an indication as to how often the application is parsing statements that already reside in the cache as compared to hard parses of statements that are not in the cache.

This test checks the percentage of soft parse requests to total parse requests. If the value is less than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.

Data Source

((DeltaParseCountTotal - DeltaParseCountHard) / DeltaParseCountTotal) * 100
where:
DeltaParseCountTotal: difference in the 'parse count (total)' statistic between the current and previous sample
DeltaParseCountHard: difference in the 'parse count (hard)' statistic between the current and previous sample
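
As a point of comparison, the same ratio can be computed manually from the cumulative values in V$SYSSTAT (cumulative since instance startup, rather than deltas). A minimal sketch, assuming SELECT privilege on V$SYSSTAT:

select round(100 * (tot.value - hard.value) / tot.value, 2) as soft_parse_pct
  from v$sysstat tot, v$sysstat hard
 where tot.name = 'parse count (total)'
   and hard.name = 'parse count (hard)';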

Parameters

For information about setting appropriate threshold values, see Baselining Threshold Values.

Output

Instance Name and soft parse percentage.

Recommended Frequency

5 minutes

User Action

Soft parses consume fewer resources than hard parses, so the larger the value for this item, the better. But a high volume of soft parses can still indicate that the application is using SQL inefficiently. Reparsing a statement, even as a soft parse, requires a network round trip from the application to the database, as well as processing time to locate the previously compiled statement in the cache. Reducing network round trips and unnecessary processing will improve application performance.

If this data item is below 80%, you should look at the Top Sessions by Hard Parse Count Chart. This chart lists the sessions that are currently performing the most hard parses. Starting with these sessions and the SQL statements they are executing will indicate which applications and corresponding SQL statements are being used inefficiently.

A faster way to identify SQL statements that might be candidates for bind variable replacement is the Similar SQL Statements Chart. This identifies which statements are similar in the first 'n' characters and how many versions of that statement segment are in the cache. Further drilldown will show the full text of the SQL statements and allow you to see if the statements are truly viable candidates for re-write.

If the data item is currently showing a high value, the expensive hard parses are not occurring, but the application can still be tuned by reducing the number of soft parses. Use the Top SQL (Parses) Chart to identify the SQL statements that have been parsed most often. This will allow you to quickly identify SQL that is being re-parsed unnecessarily. You should investigate these statements first for possible application logic changes so that cursors are opened once and executed or fetched from many times.

To see the actual values of the underlying statistics used to compute this resource you can use the Parse Statistics Chart. This chart shows the Parse, Execute, and Hard Parse rates per second.

SysStat Table

Description

You can monitor any system statistic available in the database with this event test. A warning or critical alert will be generated if the value of the selected V$SYSSTAT parameter exceeds the values specified by the threshold arguments.

To view the V$SYSSTAT parameter names and values, connect to the database with SQL Worksheet and execute SELECT NAME, VALUE FROM V$SYSSTAT.
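
For example, to inspect a single statistic by name (the statistic shown is illustrative):

select name, value from v$sysstat where name = 'user commits';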

Parameters
Output

Parameter's current value

Recommended Frequency

30 seconds

User Action

The user action for the event is dependent on the statistic that is being monitored.

SysStat Table Delta

Description

You can monitor any system statistic available in the database with this event test. The threshold values are compared to the difference between the last sample point and the current sample point of the V$SYSSTAT parameter. A warning or critical alert is generated if the calculated difference exceeds the values specified by the threshold arguments.

To view the V$SYSSTAT parameter names and values, connect to the database with SQL Worksheet and execute SELECT NAME, VALUE FROM V$SYSSTAT.
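
The delta computation this test performs can be approximated manually by sampling the statistic twice. A minimal PL/SQL sketch, assuming SELECT access on V$SYSSTAT, EXECUTE privilege on DBMS_LOCK, and server output enabled (the statistic and interval are illustrative):

declare
  v_before number;
  v_after  number;
begin
  select value into v_before from v$sysstat where name = 'user calls';
  dbms_lock.sleep(30);  -- wait one 30-second sample interval
  select value into v_after from v$sysstat where name = 'user calls';
  dbms_output.put_line('user calls delta: ' || (v_after - v_before));
end;
/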

Parameters
Output

Parameter name and change in parameter's value

Recommended Frequency

30 seconds

User Action

The user action for the event is dependent upon the statistic that is being monitored.

Table Scans (Long) Per Second

Description

This data item represents the number of long table scans per second during the sample period. A table is considered 'long' if the table is not cached and if its high-water mark is greater than 5 blocks.

This test checks the long table scans per second. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.

Data Source

DeltaScans / Seconds
where:
DeltaScans: difference in the 'table scans (long tables)' statistic between the current and previous sample
Seconds: number of seconds in the sample period

Parameters

For information about setting appropriate threshold values, see Baselining Threshold Values.

Output

Instance name and the number of long table scans per second.

Recommended Frequency

5 minutes

User Action

A table scan means that the entire table is being scanned record by record in order to satisfy the query. For small tables that can easily be read into and kept in the buffer cache this may be advantageous. But for larger tables this will force a lot of physical reads and potentially push other needed buffers out of the cache. SQL statements with large physical read and logical read counts are candidates for table scans. They can be identified either through the Top SQL (Physical Reads) Chart, or through the Top Sessions by Physical Reads Chart, with a drilldown to the current SQL for a session.
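
SQL statements performing the heaviest physical I/O, which are often the statements driving long table scans, can also be listed directly from the shared SQL area. A minimal sketch, assuming SELECT privilege on V$SQLAREA:

select *
  from (select sql_text, disk_reads, executions
          from v$sqlarea
         order by disk_reads desc)
 where rownum <= 10;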

Table Scans (Long) Per Transaction

Description

This data item represents the number of long table scans per transaction during the sample period. A table is considered 'long' if the table is not cached and if its high-water mark is greater than 5 blocks.

The value of this statistic will be zero if there have not been any write or update transactions committed or rolled back during the last sample period. If the bulk of the activity to the database is read only, the corresponding "per second" data item of the same name will be a better indicator of current performance.

This test checks the number of long table scans per transaction. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.

Data Source

DeltaScans / Transactions
where:
DeltaScans: difference in the 'table scans (long tables)' statistic between the current and previous sample
Transactions: number of transactions in the sample period, that is, the difference in 'user commits' plus 'user rollbacks' between the current and previous sample

Parameters

For information about setting appropriate threshold values, see Baselining Threshold Values.

Output

Instance name and the number of long table scans per transaction.

Recommended Frequency

5 minutes

User Action

A table scan means that the entire table is being scanned record by record in order to satisfy the query. For small tables that can easily be read into and kept in the buffer cache this may be advantageous. But for larger tables this will force a lot of physical reads and potentially push other needed buffers out of the cache. SQL statements with large physical read and logical read counts are candidates for table scans. They can be identified either through the Top SQL (Physical Reads) Chart, or through the Top Sessions by Physical Reads Chart, with a drilldown to the current SQL for a session.

Table Scans (Total) Per Second

Description

This data item represents the number of long and short table scans per second during the sample period. A table is considered 'long' if the table is not cached and if its high-water mark is greater than 5 blocks.

This test checks the number of long and short table scans per second. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.

Data Source

(DeltaLongScans + DeltaShortScans) / Seconds
where:
DeltaLongScans: difference in the 'table scans (long tables)' statistic between the current and previous sample
DeltaShortScans: difference in the 'table scans (short tables)' statistic between the current and previous sample
Seconds: number of seconds in the sample period

Parameters

For information about setting appropriate threshold values, see Baselining Threshold Values.

Output

Instance name and total table scans per second.

Recommended Frequency

5 minutes

User Action

A table scan indicates that the entire table is being scanned record by record in order to satisfy the query. For small tables that can easily be read into and kept in the buffer cache, this may be advantageous. But larger tables will force many physical reads and potentially push other required buffers out of the cache. SQL statements with large physical read and logical read counts are candidates for table scans. They can be identified through two different methods. The Top Sessions by Physical Reads Chart displays sessions that are responsible for the current I/O activity. The Top SQL (Physical Reads) Chart lists the SQL statements in the cache by the amount of I/O they have performed. Some of these SQL statements may have high I/O numbers, but they may not be contributing to the current I/O load.

The Table Scans per Transaction Chart shows table scan rates on a per-transaction basis. This chart provides an understanding of what type of table scan activity each transaction is performing. The per-transaction figures may be easier to use for comparisons to determine whether application performance has improved or degraded. While the transaction rate may change from time to time, the amount of work the transactions do may stay the same, thus giving you a more concrete number for comparisons.

Table Scans (Total) Per Transaction

Description

This data item represents the number of long and short table scans per transaction during the sample period. A table is considered 'long' if the table is not cached and if its high-water mark is greater than 5 blocks.

This test checks the number of long and short table scans per transaction. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.

Data Source

(DeltaLongScans + DeltaShortScans) / Transactions
where:
DeltaLongScans: difference in the 'table scans (long tables)' statistic between the current and previous sample
DeltaShortScans: difference in the 'table scans (short tables)' statistic between the current and previous sample
Transactions: number of transactions in the sample period, that is, the difference in 'user commits' plus 'user rollbacks' between the current and previous sample

Parameters

For information about setting appropriate threshold values, see Baselining Threshold Values.

Output

Instance name and the number of total table scans per transaction.

Recommended Frequency

5 minutes

User Action

A table scan indicates that the entire table is being scanned record by record in order to satisfy the query. For small tables that can easily be read into and kept in the buffer cache, this may be advantageous. But larger tables will force many physical reads and potentially push other required buffers out of the cache. SQL statements with large physical read and logical read counts are candidates for table scans. They can be identified through two different methods. The Top Sessions by Physical Reads Chart displays sessions that are responsible for the current I/O activity. The Top SQL (Physical Reads) Chart lists the SQL statements in the cache by the amount of I/O they have performed. Some of these SQL statements may have high I/O numbers, but they may not be contributing to the current I/O load.

The Table Scans per Transaction Chart shows the individual table scan rates on a per-transaction basis. This chart provides an understanding of what type of table scan activity each transaction is performing. The per-transaction figures may be easier to use for comparisons to determine whether application performance has improved or degraded. While the transaction rate may change from time to time, the amount of work the transactions do may stay the same, thus giving you a more concrete number for comparisons.

Tablespace Full

Description

As segments within a tablespace grow, the free space within that tablespace decreases. Should free space become insufficient, the creation of new segments or the extension of existing segments will fail.

This event test checks for the total free space in the tablespace specified by the Tablespace name. If the percentage of used space is greater than the values specified in the threshold arguments, then a warning or critical alert is generated.
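
Free space in each tablespace can also be checked directly. A minimal sketch, assuming SELECT privilege on DBA_FREE_SPACE:

select tablespace_name, round(sum(bytes)/1024/1024) as free_mb
  from dba_free_space
 group by tablespace_name;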

Parameters
Output
Recommended Frequency

30 seconds

User Action

Increase the size of the tablespace by enabling automatic extension for one of its data files, manually resizing one of its data files, or adding a new datafile.
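
For example (a sketch; the file names, sizes, and tablespace name are illustrative):

-- enable automatic extension for an existing data file
alter database datafile '/u01/oradata/orcl/users01.dbf'
  autoextend on next 10M maxsize 500M;

-- or manually resize an existing data file
alter database datafile '/u01/oradata/orcl/users01.dbf' resize 250M;

-- or add a new data file to the tablespace
alter tablespace users add datafile '/u01/oradata/orcl/users02.dbf' size 100M;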

Note: If the tablespace you are monitoring has Autoextend ON, then the percentage of space used is calculated as follows.

Variables are: A = maximum size of tablespace; B = current size of tablespace; C = used size of datafile; D = free space

If you specify a maximum size for the tablespace and free space is greater than the maximum size of tablespace minus the current size of the tablespace (D > A - B), then the usage is calculated by dividing the used size of the datafile by the maximum size of the tablespace (C / A). For example, if D = 300M, A = 200M, B = 100M, and C = 50M, then this tablespace is 25% full (50M / 200M). Otherwise, the usage is calculated as [C / (B + D)]. For example, if D = 50M, A = 200M, B = 100M, and C = 30M, then this tablespace is 20% full [30M / (100M + 50M)].

If you specify a maximum size of unlimited, then the total available free space is taken into account [C / (B + D)]. For example, if you have a 100M tablespace with 50M used and the physical disk has 400M of free space, then the usage is: [50M / (100M + 400M)] = 10%.

Note: Running the Tablespace Full event test may be a resource-intensive operation. Therefore, Oracle recommends running the Tablespace Full event test during off-peak periods.

Total Pings

Description

This data item represents the ping count per second for a Real Application Clusters database during this sample period.

This test checks the total number of pings per second. If the value is greater than the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.

Data Source

DeltaPings / Seconds
where:
DeltaPings: difference in the total ping count between the current and previous sample
Seconds: number of seconds in the sample period

Parameters

For information about setting appropriate threshold values, see Baselining Threshold Values.

Output

Total number of pings per second.

Recommended Frequency

5 minutes

User Action

Investigate which instances or files contribute most of the database pings by drilling down to the Instance Ping Chart and the Ping By File Chart.

Transactions Per Second

Description

This data item represents the number of commits and rollbacks performed per second during the sample period.

This test checks the number of commits and rollbacks performed per second during the sample period. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.

Data Source

(DeltaCommits + DeltaRollbacks) / Seconds
where:
DeltaCommits: difference in the 'user commits' statistic between the current and previous sample
DeltaRollbacks: difference in the 'user rollbacks' statistic between the current and previous sample
Seconds: number of seconds in the sample period
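
The cumulative transaction count (since instance startup) underlying these deltas can be read from V$SYSSTAT. A minimal sketch:

select sum(value) as total_transactions
  from v$sysstat
 where name in ('user commits', 'user rollbacks');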

Parameters

For information about setting appropriate threshold values, see Baselining Threshold Values.

Output

Instance name and transaction count per second.

Recommended Frequency

5 minutes

User Action

This statistic is an indication of how much work is being accomplished within the database. A spike in the transaction rate is not necessarily bad; if response times stay close to normal, your system can handle the added load. In fact, a drop in transaction rate combined with an increase in response time may indicate a problem. Depending upon the application, transaction loads may vary widely across different times of the day.

The Transaction Based Execution Rates Chart will allow you to quickly determine the amount of work or user activity being performed per transaction.

Quick analysis of the database's CPU consumption can be done by using the CPU Breakdown Chart. This chart breaks the database CPU consumption into 3 parts, and further analysis into the largest portion of the CPU time will lead you towards reducing your CPU consumption.

Unscheduled Jobs

Description

The Oracle server job queue is a database table that stores information about local jobs.

This event test checks for unscheduled DBMS jobs. An alert is generated when the number of jobs whose expected completion time has passed exceeds the value specified in the Alert Threshold. A job's expected completion date/time is calculated as the NEXT_DATE value in the SYS.DBA_JOBS view plus the approximate time it takes to complete the job, as specified by the Job Completion Time argument.

Parameters
Output

Job identifiers of jobs that are not rescheduled for execution

Recommended Frequency

30 seconds

User Action

Check the FAILURES and BROKEN values in the SYS.DBA_JOBS view.
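
For example (a sketch, assuming SELECT privilege on SYS.DBA_JOBS):

select job, what, next_date, failures, broken
  from sys.dba_jobs
 order by next_date;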

If the job failed to execute, check the ALERT log and trace files for error information and fix the error.

If the job was never executed, there may be a problem with the availability of SNP background processes. Check the initialization parameter JOB_QUEUE_PROCESSES to determine the maximum number of background processes available and JOB_QUEUE_INTERVAL to determine how frequently each background process wakes up.

UpDown Database

Description

This event test checks whether the database being monitored is running. If this test is triggered, other database events are ignored.

Parameters

None

User Action

The Startup Database job task can be set up as a fixit job for automatically correcting the problem.

Note: If the listener serving a database is down, this event may be triggered because the Intelligent Agent uses the listener to communicate with the database. This note applies to Intelligent Agents released before 8.0.5.

User Audit

Description

This event test monitors specified database user connections. For example, an alert is displayed when a particular database user connection, specified by the User name filter argument, has been detected.

Parameters

User Name filter: Filters the user names to be monitored, or * for all users. Default is ='SYS'. The filter must use SQL syntax, for example: = 'ABC', in ('XYZ', 'ABC'), like '%ABC'.

Note: The user name is case sensitive. By default the user name is all upper case. To define a name in lower case, the name must be in double quotes, for example, "abc". The user name must be exactly as it appears in the all_users view.
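
The connections this test detects correspond to rows in V$SESSION. A minimal sketch of an equivalent manual check, assuming SELECT privilege on V$SESSION (the user name shown is illustrative):

select username, osuser, machine, count(*) as session_count
  from v$session
 where username = 'SYS'
 group by username, osuser, machine;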

Output
Recommended Frequency

5 minutes

User Action

User actions may vary depending on the user connection that is detected.

User Blocks

Description

This event test signifies that a database user is blocking at least one other user from performing an action, such as updating a table. An alert is generated if the number of consecutive blocking occurrences reaches the specified value.

Note: The catblock.sql script needs to be run on the managed database prior to using the User Blocks test. This script creates some additional tables, views, and public synonyms that are required by the User Blocks test.
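
Once catblock.sql has been run, blocking and waiting sessions can be examined through the views it creates. A minimal sketch, assuming the DBA_WAITERS view is available:

select waiting_session, holding_session, lock_type, mode_held
  from dba_waiters;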

Parameters

Number of occurrences: Number of consecutive occurrences a user can be blocked before an alert is generated. Default is three.

Output

Session Id of the user who is blocking other users

Recommended Frequency

30 seconds

User Action

Either have the user who is blocking other users roll back the transaction, or wait until the blocking transaction has been committed.

User Call %

Description

This data item represents user calls as a percentage of all calls (user calls plus recursive calls).

Occasionally, to execute a SQL statement issued by a user, the Oracle Server must issue additional statements. Such statements are called recursive calls or recursive SQL statements. For example, if you insert a row into a table that does not have enough space to hold that row, the Oracle Server makes recursive calls to allocate the space dynamically if dictionary-managed tablespaces are being used. Recursive calls are also generated when data dictionary information is not available in the data dictionary cache and must be retrieved from disk.

This test checks the percentage of user calls to recursive calls. If the value is less than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.

Data Source

(DeltaUserCalls / (DeltaRecursiveCalls + DeltaUserCalls)) * 100
where:
DeltaUserCalls: difference in the 'user calls' statistic between the current and previous sample
DeltaRecursiveCalls: difference in the 'recursive calls' statistic between the current and previous sample

Parameters

For information about setting appropriate threshold values, see Baselining Threshold Values.

Output

Instance Name and user call percentage.

Recommended Frequency

5 minutes

User Action

A low value for this data item means that the Oracle Server is making a large number of recursive calls. If the Oracle Server appears to be making excessive recursive calls while your application is running, determine what activity is causing these recursive calls. If you determine that the recursive calls are caused by dynamic extension, either reduce the frequency of extension by allocating larger extents or, if you are using Oracle 8i, consider taking advantage of locally managed tablespaces.

User Calls Per Second

Description

This data item represents the number of logins, parses, or execute calls per second during the sample period.

This test checks the number of logins, parses, or execute calls. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.

Data Source

DeltaUserCalls / Seconds
where:
DeltaUserCalls: difference in the 'user calls' statistic between the current and previous sample
Seconds: number of seconds in the sample period

Parameters

For information about setting appropriate threshold values, see Baselining Threshold Values.

Output

Instance name and user calls per second.

Recommended Frequency

5 minutes

User Action

This statistic is a reflection of how much activity is going on within the database. Spikes in the total user call rate should be investigated to determine which of the underlying calls is actually increasing. Parse, execute and logon calls each signify different types of user or application actions and should be addressed individually. User Calls is an overall activity level monitor.

The Transaction Based Execution Rates Chart will allow you to quickly determine the amount of work or user activity being performed per transaction.

Quick analysis of the database's CPU consumption can be done by using the CPU Breakdown Chart. This chart breaks the database CPU consumption into 3 parts, and further analysis into the largest portion of the CPU time will lead you towards reducing your CPU consumption.

User Calls Per Transaction

Description

This data item represents the number of logins, parses, or execute calls per transaction during the sample period.

The value of this statistic will be zero if there have not been any write or update transactions committed or rolled back during the last sample period. If the bulk of the activity to the database is read only, the corresponding "per second" data item of the same name will be a better indicator of current performance.

This test checks the number of logins, parses, or execute calls per transaction. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.

Data Source

DeltaUserCalls / Transactions
where:
DeltaUserCalls: difference in the 'user calls' statistic between the current and previous sample
Transactions: number of transactions in the sample period, that is, the difference in 'user commits' plus 'user rollbacks' between the current and previous sample

Parameters

For information about setting appropriate threshold values, see Baselining Threshold Values.

Output

Instance name and user calls per transaction.

Recommended Frequency

5 minutes

User Action

This statistic is a reflection of how much activity is going on within the database. Spikes in the total user call rate should be investigated to determine which of the underlying calls is actually increasing. Parse, execute and logon calls each signify different types of user or application actions and should be addressed individually. User Calls is an overall activity level monitor.

User Limit

Description

The LICENSE_MAX_SESSIONS initialization parameter specifies the maximum number of concurrent user sessions allowed.

This event test checks whether the number of users logged on is reaching the license limit. If the percentage of the number of concurrent user sessions to the limit set in the LICENSE_MAX_SESSIONS initialization parameter exceeds the values specified in the threshold arguments, then a warning or critical alert is generated. If LICENSE_MAX_SESSIONS is not explicitly set to a value, the test does not trigger.

Note: This event test is most useful when session licensing is enabled. Refer to the Oracle Server Reference Manual for more information on LICENSE_MAX_SESSIONS and LICENSE_MAX_USERS.

Example

If there are 15 concurrent user sessions and the value of LICENSE_MAX_SESSIONS is 20, the percentage is 75% (15/20 x 100). This value is compared against the specified thresholds.
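
Current session counts and the configured limit can be read from V$LICENSE. A minimal sketch, assuming SELECT privilege on V$LICENSE (SESSIONS_MAX is 0 when no limit is set):

select sessions_current, sessions_highwater, sessions_max
  from v$license;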

Parameters
Output

Current value and the limit specified by LICENSE_MAX_SESSIONS

Recommended Frequency

30 seconds

User Action

This typically indicates that the license limit for the database has been reached. The user will need to acquire additional licenses, then increase LICENSE_MAX_SESSIONS to reflect the new value.

User-Defined SQL Event Test

Description

The User-Defined SQL event test allows you to define your own SQL script that evaluates an event test. The event tests you define should be written as queries (that is, SELECT statements) that return the condition values for which you are monitoring. These values are checked against the Critical threshold and Warning threshold limits you specify, and the event is triggered if the threshold limits are reached.

Example

You have a custom application that runs against the Oracle database. Each time it finds an application error, it creates an entry into a table called "error_log". Using the "User-Defined SQL Test", you can write an event test that notifies you when it finds at least 50 errors. Specifically, you define the following SQL statement:

select count(*) from error_log

This returns the number of rows in the error_log table. Since you want a critical alert raised when it reaches at least 50, you specify the Operator " >= ", a Critical Threshold value of 50, and perhaps a Warning Threshold value of 30.

Support for PL/SQL Functions

If your query for the event condition requires more complex processing than is allowed in a single SELECT statement, you can first create a PL/SQL function that contains the extra processing steps, and then use the PL/SQL function with the User-Defined SQL event test. Your PL/SQL function must still return a value that can be compared against the Critical and Warning thresholds.

Example

You need to trigger a critical alert whenever an employee's salary is $500 higher than the highest manager's salary. You first define a PL/SQL function as follows:

create or replace function overpaid_emp return number is
  max_mgr_sal number;
  max_emp_sal number;
begin
  select max(sal) into max_mgr_sal from scott.emp
    where job = 'MANAGER' or job = 'PRESIDENT';
  select max(sal) into max_emp_sal from scott.emp
    where job != 'MANAGER' and job != 'PRESIDENT';
  return (max_emp_sal - max_mgr_sal);
end;
/

This PL/SQL function returns the difference between the highest employee salary and the highest manager salary. If the difference is a positive number, then an employee is paid more than the highest-paid manager. If the difference is more than 500, then a critical alert needs to be triggered.

When defining this event using the User-Defined SQL event test, you define the SQL statement as follows:

select overpaid_emp from dual

Then use the Operator ">", a Warning threshold of 100, and a Critical threshold of 500.

Note that roles are not enabled within PL/SQL functions, so any privileges granted via roles will not work from within the function. You may need to grant the privileges directly to the database user account that is used for the event. (The database user account used for the event is either the Preferred Credentials user for the database, or the credentials specified to override the preferred credentials.)
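
For example (a sketch; the table and grantee are illustrative), granting the required privilege directly rather than through a role:

grant select on scott.emp to em_monitor;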

Parameters
Output

Value returned by the SQL script

Recommended Frequency

60 seconds

User Action

The action depends on the SQL script and hence Oracle cannot make any recommendations.

Wait by Session Count

Description

This data item represents the number of sessions currently waiting on this event.

This test checks the number of sessions currently waiting for the event specified by the Wait Event(s) parameter. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.

Data Source

SELECT event, COUNT(*) FROM v$session_wait WHERE wait_time = 0 GROUP BY event

Parameters

For information about setting appropriate threshold values, see Baselining Threshold Values.

Output

Wait event and wait by session count.

Recommended Frequency

5 minutes

User Action

The wait event that has the largest number of sessions waiting for it may show the largest area of contention. The event may also show where delays for resources are occurring. Further investigation into the details of these wait events will help determine the exact reason for waits.

Wait by Time

Description

This data item represents the length of time, in seconds, spent waiting for the event during the last sample period. This value will always be 0 unless the TIMED_STATISTICS parameter is TRUE.

This test checks the length of time, in seconds, spent waiting for the event specified by the Wait Event(s) parameter during the last sample period. If the value is greater than or equal to the threshold values specified by the threshold arguments, and the number of occurrences exceeds the value specified in the "Number of Occurrences" parameter, then a warning or critical alert is generated.

Data Source

DeltaTimeWaited
where:
DeltaTimeWaited: difference in the time waited for the event, in seconds, between the current and previous sample

Parameters

For information about setting appropriate threshold values, see Baselining Threshold Values.

Output

Event name and the time waited.

Recommended Frequency

5 minutes

User Action

Time spent waiting is time that could have been spent processing. Attacking the wait event with the largest wait time will probably produce the largest gain. Drilldowns are available to charts that are tailored to help diagnose and improve the performance of individual wait events.

If this data item's value is 0 and the wait count is non-zero, then the database parameter TIMED_STATISTICS is currently set to FALSE. There is a drilldown chart available for all wait events called Timed Statistics Chart. This chart shows the current value for the TIMED_STATISTICS parameter. Use the Turn On Timed Statistics drilldown to turn on timed statistics for the instance.
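
Timed statistics can also be enabled directly; for example (TIMED_STATISTICS is dynamically modifiable):

-- instance-wide
alter system set timed_statistics = true;

-- or for the current session only
alter session set timed_statistics = true;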

