Improving Performance for Autorefresh Operations

The following best practice recommendations can improve performance for autorefresh operations.

Minimizing Delay for Cached Data With Continuous Autorefresh

You can specify continuous autorefresh with an autorefresh interval of 0 milliseconds. With continuous autorefresh, the next autorefresh cycle is scheduled as soon as possible after the last autorefresh cycle has ended.
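For example, a cache group definition along the following lines requests continuous autorefresh by specifying an interval of 0 milliseconds. This is a sketch; the cache group name, table name, and columns are illustrative.

```sql
CREATE READONLY CACHE GROUP cgOrders
  AUTOREFRESH MODE INCREMENTAL INTERVAL 0 MILLISECONDS
FROM orders
    ( order_id  NUMBER(6) PRIMARY KEY
    , status    VARCHAR2(10)
    );
```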

Continuous autorefresh could result in higher resource usage when the workload rate on the Oracle database is low, since the cache agent could be performing unnecessary round-trips to the Oracle database.

See CREATE CACHE GROUP and ALTER CACHE GROUP in the Oracle TimesTen In-Memory Database SQL Reference.

Reducing Contention for Dynamic Read-Only Cache Groups With Incremental Autorefresh

Most autorefresh and dynamic load operations coordinate their access to the Oracle database for correctness. In extreme cases, the default TimesTen coordination behavior can result in contention between autorefresh and dynamic load operations.

If you have dynamic read-only cache groups with incremental autorefresh, then:

  • Multiple dynamic load operations could be blocked by autorefresh operations.

  • Autorefresh operations could be frequently delayed while waiting for dynamic load operations to complete.

Enabling the DynamicLoadReduceContention database system parameter changes the way that autorefresh and dynamic load operations coordinate, which reduces contention between them for dynamic cache groups:

  • Dynamic load operations are never blocked by autorefresh operations (due to additional synchronization).

  • Autorefresh operations are not completely delayed by dynamic load operations. Instead, autorefresh operations will wait a short while for concurrently executing dynamic load operations to be notified that a new autorefresh operation is starting. This enables dynamic load operations to synchronize in tandem with concurrently executing autorefresh operations.

Note:

You cannot change the value of the DynamicLoadReduceContention database system parameter if there are any dynamic read-only cache groups or if the cache or replication agents are running. To change the value of this parameter, you must unload and drop (and later recreate) any existing dynamic read-only cache groups, then stop the cache and replication agents.
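A sketch of that workflow, assuming a dynamic read-only cache group named cgOrders and a DSN named myDSN (both illustrative):

```sql
-- In ttIsql, remove the dynamic cache group:
UNLOAD CACHE GROUP cgOrders;
DROP CACHE GROUP cgOrders;

-- From the shell, stop the agents:
--   ttAdmin -cacheStop myDSN
--   ttAdmin -repStop myDSN

-- Then change the parameter:
call ttDbConfig('DynamicLoadReduceContention','1');

-- Afterwards, restart the agents and recreate the cache group.
```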

The following example sets DynamicLoadReduceContention=1:

call ttDbConfig('DynamicLoadReduceContention','1');

You can query the current value of the DynamicLoadReduceContention parameter:

call ttDbConfig('DynamicLoadReduceContention');

Note:

See ttDBConfig in the Oracle TimesTen In-Memory Database Reference.

Requirements for Setting DynamicLoadReduceContention

There are requirements when using the DynamicLoadReduceContention database system parameter.

To use the DynamicLoadReduceContention database system parameter, the following requirements must be met:

  • Required Oracle Database privileges: You must grant two additional Oracle Database privileges to the cache administration user:

    • EXECUTE ON SYS.DBMS_FLASHBACK

    • SELECT ANY TRANSACTION

    These are granted to the cache administration user when you execute the grantCacheAdminPrivileges.sql and initCacheAdminSchema.sql scripts.

  • Support for Oracle Database: This feature requires the use of Oracle Database Flashback Transaction Queries. With Oracle Database 12.2.0.1 with the Multitenant option, Flashback Transaction Queries support only Local Undo; you cannot use this feature with the Oracle Database 12.2.0.1 Multitenant option with Shared Undo.

  • Required settings for active standby pair replication scheme:

    • If you are replicating between active and standby masters where each is installed with a different TimesTen version, this parameter cannot be enabled if either TimesTen version does not support this feature.

    • The DynamicLoadReduceContention database system parameter must be set to the same value on both the active and standby masters.

    Otherwise, an error is written to the daemon log. Replication will not progress until the settings and TimesTen versions conform on both the active and standby masters.
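The two Oracle Database privileges listed above correspond to standard GRANT statements; if you need to verify or grant them manually rather than through the scripts, they take the following form (the user name cacheadmin is illustrative):

```sql
GRANT EXECUTE ON SYS.DBMS_FLASHBACK TO cacheadmin;
GRANT SELECT ANY TRANSACTION TO cacheadmin;
```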

Reducing Lock Contention for Read-Only Cache Groups With Autorefresh and Dynamic Load

Your application can time out because of a lock contention between autorefresh and dynamic load requests.

An autorefresh operation automatically loads committed changes on cached Oracle Database tables into the cache tables in TimesTen. A dynamic load operation requests data from the Oracle database (originating from a SELECT statement) and inserts the rows into the cache group. Both the autorefresh and dynamic load operations require access to the cache metadata, which could cause a lock contention.

At the end of an autorefresh operation, TimesTen updates the metadata to track the autorefresh progress. If you have requested guaranteed durability by setting the DurableCommits connection attribute to 1, then the autorefresh updates to the metadata are always durably committed. If you have requested delayed durability by setting the DurableCommits connection attribute to 0 (the default), then TimesTen must ensure that the autorefresh updates to the metadata are durably committed before the garbage collector can clean up the autorefresh tracking tables stored in the Oracle database.

When a durable commit is initiated for the metadata, any previous non-durable committed transactions in the transaction log buffer that have not been flushed to the file system are also a part of the durable commit. On hosts with busy or slow file systems, the durable commit could be slow enough to lock out dynamic load requests for an undesirable amount of time.

If you notice that your application is timing out because of a lock contention between autorefresh and dynamic load requests, you can set the CacheCommitDurable cache configuration parameter to 0 with the ttCacheConfig built-in procedure. This reduces the occurrence of lock contention between autorefresh and dynamic load requests in the same application by:

  • Running a non-durable commit of the autorefresh changes made to the metadata.

  • Using a separate thread in the cache agent to durably commit the autorefresh changes before the garbage collector cleans up the autorefresh tracking tables stored in the Oracle database. This results in a slight performance cost as garbage collection is delayed until after the durable commit completes.

The lock is removed after the non-durable commit of the autorefresh changes to the metadata, after which there is no longer a lock held on the metadata and any dynamic load requests for the recently refreshed tables can continue processing without waiting. However, if there is an error and database recovery starts, autorefresh may need to reapply any committed transactions that were not flushed to the file system before the failure.

The following example sets CacheCommitDurable=0:

call ttCacheConfig('CacheCommitDurable',,,'0');

You can query the current value of the CacheCommitDurable parameter:

call ttCacheConfig('CacheCommitDurable');

See ttCacheConfig in the Oracle TimesTen In-Memory Database Reference.

Options for Reducing Contention Between Autorefresh and Dynamic Load Operations

There are two methods to reduce contention between autorefresh and dynamic load operations.

You can enable either or both of these options:

  • If you see error messages indicating lock contention between autorefresh and dynamic load operations, then enable the DynamicLoadReduceContention database system parameter by setting the value to 1 with the ttDbConfig built-in procedure. See Reducing Contention for Dynamic Read-Only Cache Groups With Incremental Autorefresh.

  • If you notice that commit operations for autorefresh are taking an unusually long time, then look for a TT47087 informational message in the support log. Locate the tt1stXactCommitTime and tt2ndXactCommitTime entries within this message. If the time indicated for either or both of these entries is unusually high or is a major portion of the time indicated in the Duration entry, this may indicate that the durable commit of transaction logs is slow. In this case, you have the option to set the CacheCommitDurable cache configuration parameter to 0 with the ttCacheConfig built-in procedure. For more details on the CacheCommitDurable cache configuration parameter, see Reducing Lock Contention for Read-Only Cache Groups With Autorefresh and Dynamic Load.

Enable both options if there is a small autorefresh interval in conjunction with a high number of dynamic load requests.

Improving Performance When Reclaiming Memory During Autorefresh Operations

As described in Transaction Reclaim Operations in the Oracle TimesTen In-Memory Database Operations Guide, TimesTen resource cleanup occurs during the reclaim phase of a transaction commit.

To improve performance, a number of transaction log records are cached in memory in the commit buffer to reduce the need to access the transaction log file. However, TimesTen must access the transaction log file if the transaction is larger than the reclaim buffer.

When you are using autorefresh for your cache groups, the cache agent has its own reclaim buffer to manage the transactions that are committed within autorefresh operations. If the cache agent reclaim buffer is too small, commit operations during autorefresh can take longer than expected, since the cache agent must access the transaction log file. To avoid performance issues, you can configure a larger reclaim buffer for the cache agent so that it can handle larger transactions in memory at reclaim time.

When using an active standby pair replication scheme to replicate autorefresh operations, the replication agent applies the same autorefresh operations as part of the replication. Thus, the replication agents on both the active and standby nodes have their own reclaim buffers that should be configured to be the same size or greater than the cache agent reclaim buffer.

The ttDbConfig built-in procedure provides the following parameters for setting the maximum size of the reclaim buffers for both the cache agent and the replication agent. (The memory for the reclaim buffers is allocated out of temporary memory.)

  • CacheAgentCommitBufSize sets the maximum size for the reclaim buffer for the cache agent.

  • RepAgentCommitBufSize sets the maximum size for the reclaim buffer for the replication agent. You should configure the maximum size for the reclaim buffer on both the active and standby nodes. It is recommended, but not required, that you set the reclaim buffers to the same size on both nodes.
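For example, both maximum sizes can be set with the ttDbConfig built-in procedure. The value shown here is illustrative; the units and valid range for these parameters are described in the Oracle TimesTen In-Memory Database Reference.

```sql
call ttDbConfig('CacheAgentCommitBufSize','100');
call ttDbConfig('RepAgentCommitBufSize','100');
```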

Note:

For more details, see ttDBConfig in the Oracle TimesTen In-Memory Database Reference.

To determine whether you should increase the size of the cache agent reclaim buffer, evaluate the CommitBufMaxReached and CommitBufNumOverflows statistics provided by the ttCacheAutorefIntervalStatsGet built-in procedure. See Retrieving Statistics on Autorefresh Transactions.

Running Large Transactions With Incremental Autorefresh Read-Only Cache Groups

At certain times, you may run large transactions, such as end-of-month, end-of-quarter, or end-of-year transactions. You may also have situations where you modify or add a large amount of data in the Oracle database over a short period of time.

For read-only cache groups with incremental autorefresh, TimesTen could run out of permanent space when an autorefresh operation applies these changes. For such situations, you can configure an autorefresh transaction limit, where the large amount of data is broken up, applied, and committed over several smaller transactions.

Note:

The autorefresh transaction limit can only be set for static read-only cache groups.

The ttCacheAutorefreshXactLimit built-in procedure enables you to direct autorefresh to commit after running a specific number of operations. This option applies to all incremental autorefresh read-only cache groups that are configured with the same autorefresh interval.

Since the single transaction is broken up into several smaller transactions, transactional consistency cannot be maintained while autorefresh is in progress. Once the autorefresh cycle completes, the data is transactionally consistent. To protect instance consistency, we recommend that you set the autorefresh transaction limit only on cache groups that contain a single table, since instance consistency between parent and child tables is not guaranteed. When the autorefresh transaction limit is turned on, TimesTen does not enforce the foreign key relationship that protects instance consistency. Once you turn off the autorefresh transaction limit for incremental autorefresh read-only cache groups, both instance and transactional consistency are maintained again.

Note:

If you are using an active standby pair, you must call the ttCacheAutorefreshXactLimit built-in procedure with the same values on both the active and standby masters.

The following sections describe how to configure an autorefresh transaction limit.

Using ttCacheAutorefreshXactLimit

Note:

See ttCacheAutorefreshXactLimit in the Oracle TimesTen In-Memory Database Reference.

For month end processing, there can be a large number of updates in a single transaction for the Oracle tables that are cached in cache groups with autorefresh. To ensure that the large transaction does not fill up permanent memory, you can enable autorefresh to commit after every 256 (or any other user-specified number of) operations with the ttCacheAutorefreshXactLimit built-in procedure.

Turn on an autorefresh transaction limit for incremental autorefresh read-only cache groups before a large transaction with the ttCacheAutorefreshXactLimit built-in procedure where the value is set to ON or to a specific number of operations. Then, when autorefresh finishes updating the cached tables in TimesTen, turn off the autorefresh transaction limit for incremental autorefresh read-only cache groups with the ttCacheAutorefreshXactLimit built-in procedure.

The following example sets up the transaction limit to commit after every 256 operations (the default when the value is ON) for all incremental autorefresh read-only cache groups that are defined with an interval value of 10 seconds.

call ttCacheAutorefreshXactLimit('10000', 'ON');

After the month end process has completed and the incremental autorefresh read-only cache groups are refreshed, disable the transaction limit for incremental autorefresh read-only cache groups that are defined with the interval value of 10 seconds.

call ttCacheAutorefreshXactLimit('10000', 'OFF');

To enable the transaction limit for incremental autorefresh read-only cache groups to commit after every 1024 operations, provide 1024 as the value as follows:

call ttCacheAutorefreshXactLimit('10000', '1024');

Example of Potential Transactional Inconsistency

This example shows how to create two incremental autorefresh read-only cache groups.

The following example uses the employees and departments tables, where the department_id column of the employees table is a foreign key that references the department_id column of the departments table.

The following example creates two incremental autorefresh read-only cache groups, where each table is in its own cache group. The autorefresh transaction limit is enabled with ttCacheAutorefreshXactLimit before a large transaction and is disabled after it completes.

  1. Before you initiate the large transaction, invoke ttCacheAutorefreshXactLimit to set the interval value and the number of operations after which to automatically commit. The following sets the number of operations to three (which is intentionally low to show a brief example) for all incremental autorefresh read-only cache groups with a two second interval.

    CALL ttCacheAutorefreshXactLimit('2000', '3');
    < 2000, 3 >
    1 row found.
  2. Create the incremental autorefresh read-only cache groups with an interval of two seconds. This example creates two static (non-dynamic) read-only cache groups, where each contains a single table.

    CREATE READONLY CACHE GROUP cgDepts AUTOREFRESH MODE INCREMENTAL 
     INTERVAL 2 SECONDS 
    FROM departments
        ( department_id    NUMBER(4) PRIMARY KEY
        , department_name  VARCHAR2(30) NOT NULL
        , manager_id       NUMBER(6)
        , location_id      NUMBER(4)
        );
     
    CREATE READONLY CACHE GROUP cgEmpls AUTOREFRESH MODE INCREMENTAL 
     INTERVAL 2 SECONDS 
    FROM employees
        ( employee_id    NUMBER(6) PRIMARY KEY
        , first_name     VARCHAR2(20)
        , last_name      VARCHAR2(25) NOT NULL
        , email          VARCHAR2(25) NOT NULL UNIQUE
        , phone_number   VARCHAR2(20)
        , hire_date      DATE NOT NULL
        , job_id         VARCHAR2(10) NOT NULL
        , salary         NUMBER(8,2)
        , commission_pct NUMBER(2,2)
        , manager_id     NUMBER(6)
        , department_id  NUMBER(4)
        );
  3. Run a LOAD CACHE GROUP statement for both cache groups with autorefresh.

    LOAD CACHE GROUP cgDepts COMMIT EVERY 256 ROWS;
    27 cache instances affected.
     
    LOAD CACHE GROUP cgEmpls COMMIT EVERY 256 ROWS;
    107 cache instances affected.

You can have inconsistency within the table during an autorefresh as shown with the employees table.

  1. On TimesTen, select the minimum and maximum salary of all employees.

    SELECT MIN(salary), MAX(salary) FROM employees;
    < 2100, 24000 >
    1 row found.
  2. On the Oracle database, add 100,000 to everyone's salary.

    UPDATE employees SET salary = salary + 100000;
    107 rows updated.
  3. On TimesTen, when you run the SELECT again (while the autorefresh transactions are committed after every 3 records), it shows that while the maximum salary has updated, the minimum salary is still the old value.

    SELECT MIN(salary), MAX(salary) FROM employees;
    < 2100, 124000 >
    1 row found.
  4. However, once the autorefresh completes, transactional consistency is maintained. For this example, once the autorefresh process completes, all salaries have increased by 100,000.

    SELECT MIN(salary), MAX(salary) FROM employees;
    < 102100, 124000 >
    1 row found.
  5. The large transaction is complete, so disable the transaction limit for cache groups with a 2 second interval autorefresh.

    call ttCacheAutorefreshXactLimit('2000', 'OFF');

You can have transactional inconsistency between cache groups if you run a SQL statement while the autorefresh process is progressing. The following SELECT statement example runs against the employees and departments tables in the cgEmpls and cgDepts autorefresh cache groups. In this example, since the foreign key is not enforced on TimesTen and the autorefresh process applies several transactions, the employee table updates may be inserted before the department updates.

In addition, not all of the updates for both tables are applied until the autorefresh cycle has completed. In the following example, the SELECT statement is performed before the autorefresh process is complete. Thus, the results do not show all of the expected data: several employees (some of the lawyers in the new legal department 1000) are missing, and the department name could also be missing if the departments table had not yet been refreshed.

SELECT e.department_id, d.DEPARTMENT_NAME, e.FIRST_NAME, e.LAST_NAME  
       FROM employees e, departments d         
       WHERE e.DEPARTMENT_ID  = d.DEPARTMENT_ID (+) 
       AND e.department_id >= 1000 ORDER BY 1,2,3,4;
< 1000, Legal, Alec, Dunkle >
< 1000, Legal, Barry, Strong >
< 1000, Legal, Leigh, Harrison >
3 rows found.

However, after the autorefresh process completes, transactional consistency is maintained. The following shows the same SELECT statement performed after the autorefresh is complete. All of the expected data, including the department information and all of the new lawyers, is present.

SELECT e.department_id, d.DEPARTMENT_NAME, e.FIRST_NAME, e.LAST_NAME  
       FROM employees e, departments d         
       WHERE e.DEPARTMENT_ID  = d.DEPARTMENT_ID (+) 
       AND e.department_id >= 1000 ORDER BY 1,2,3,4;
< 1000, Legal, Alec, Dunkle >
< 1000, Legal, Barry, Strong >
< 1000, Legal, Leigh, Harrison >
< 1000, Legal, John, Crust >
< 1000, Legal, Robert, Wright >
< 1000, Legal, Robert, Smith >
6 rows found.

For cache groups with autorefresh that have more than one table, you can also experience transactional inconsistency if you run SQL statements while the autorefresh process is in progress.

  1. Initiate the transaction limit for incremental cache groups with a two second autorefresh interval with the ttCacheAutorefreshXactLimit built-in procedure, and create a single autorefresh cache group with two tables: the employees and departments tables.
    CALL ttCacheAutorefreshXactLimit('2000', '3');
    < 2000, 3 >
    1 row found.
     
    CREATE READONLY CACHE GROUP cgDeptEmpls AUTOREFRESH MODE INCREMENTAL
     INTERVAL 2 SECONDS 
    FROM departments
         ( department_id    NUMBER(4) PRIMARY KEY
         , department_name  VARCHAR2(30) NOT NULL
         , manager_id       NUMBER(6)
         , location_id      NUMBER(4)
         )
       , employees
         ( employee_id    NUMBER(6) PRIMARY KEY
         , first_name     VARCHAR2(20)
         , last_name      VARCHAR2(25) NOT NULL
         , email          VARCHAR2(25) NOT NULL UNIQUE
         , phone_number   VARCHAR2(20)
         , hire_date      DATE NOT NULL
         , job_id         VARCHAR2(10) NOT NULL
         , salary         NUMBER(8,2)
         , commission_pct NUMBER(2,2)
         , manager_id     NUMBER(6)
         , department_id  NUMBER(4)
         , foreign key(department_id) references departments(department_id)
         );
  2. Manually load the cache group.
    LOAD CACHE GROUP cgDeptEmpls COMMIT EVERY 256 ROWS;
    27 cache instances affected.
  3. Run a SELECT statement on TimesTen that displays the number of employees in each department.
    SELECT e.department_id, d.department_name, count(*)
           FROM employees e, departments d         
           WHERE e.department_id  = d.department_id (+) 
           GROUP BY e.department_id, d.department_name
           ORDER BY 1 desc;
    < 110, Accounting, 2 >
    < 100, Finance, 6 >
    < 90, Executive, 3 >
    < 80, Sales, 34 >
    < 70, Public Relations, 1 >
    < 60, IT, 5 >
    < 50, Shipping, 45 >
    < 40, Human Resources, 1 >
    < 30, Purchasing, 6 >
    < 20, Marketing, 2 >
    < 10, Administration, 1 >
    11 rows found.
  4. On Oracle, insert a new legal department, numbered 1000, with 6 new lawyers in both the employee and department tables.
  5. When performing a SELECT statement on TimesTen during the autorefresh process, only data on two of the lawyers in department 1000 has been uploaded into TimesTen.
    SELECT e.department_id, d.department_name, count(*)
           FROM employees e, departments d         
           WHERE e.department_id  = d.department_id (+) 
           GROUP BY e.department_id, d.department_name
           ORDER BY 1 desc;
    < 1000, Legal, 2 >
    < 110, Accounting, 2 >
    < 100, Finance, 6 >
    < 90, Executive, 3 >
    < 80, Sales, 34 >
    < 70, Public Relations, 1 >
    < 60, IT, 5 >
    < 50, Shipping, 45 >
    < 40, Human Resources, 1 >
    < 30, Purchasing, 6 >
    < 20, Marketing, 2 >
    < 10, Administration, 1 >
    12 rows found.
  6. However, after the autorefresh process completes, all 6 employees (lawyers) in the legal department have been uploaded to TimesTen. Now, the data is transactionally consistent.
    SELECT e.department_id, d.department_name, COUNT(*)
           FROM employees e, departments d         
           WHERE e.department_id  = d.department_id (+) 
           GROUP BY e.department_id, d.department_name
           ORDER BY 1 desc;
    < 1000, Legal, 6 >
    < 110, Accounting, 2 >
    < 100, Finance, 6 >
    < 90, Executive, 3 >
    < 80, Sales, 34 >
    < 70, Public Relations, 1 >
    < 60, IT, 5 >
    < 50, Shipping, 45 >
    < 40, Human Resources, 1 >
    < 30, Purchasing, 6 >
    < 20, Marketing, 2 >
    < 10, Administration, 1 >
    12 rows found.
  7. The large transaction is complete, so disable the transaction limit for cache groups with a 2 second autorefresh interval.
    call ttCacheAutorefreshXactLimit('2000', 'OFF');

Retrieving Statistics to Evaluate Performance When a Transaction Limit is Set

To see how an autorefresh transaction limit for a particular autorefresh interval is performing, you can retrieve statistics for the last 10 incremental autorefresh transactions for this autorefresh interval with the ttCacheAutorefIntervalStatsGet built-in procedure.

See Retrieving Statistics on Autorefresh Transactions.

Configuring a Select Limit for Incremental Autorefresh for Read-Only Cache Groups

To facilitate incremental autorefresh for read-only cache groups, TimesTen runs a join query on both the Oracle database base table and its corresponding change log table to retrieve the incremental changes. However, if both tables are very large, the join query can be slow. In addition, if the Oracle database base table is continuously updated while the join query is processing, you may receive the ORA-01555 "Snapshot too old" error from a long-running autorefresh query.

To avoid this situation, you can configure incremental autorefresh with a select limit for static read-only cache groups, which joins the Oracle database base table with a limited number of rows from the autorefresh change log table. You can configure a select limit with the ttCacheAutorefreshSelectLimit built-in procedure.

Note:

The select limit can only be set for static read-only cache groups. To protect instance consistency, we recommend that you set the select limit only on cache groups that contain a single table.

Autorefresh continues to apply changes to the cached table incrementally until all the rows in the autorefresh change log table have been applied. When there are no rows left to apply, the autorefresh thread sleeps for the rest of the interval period.

Note:

See ttCacheAutorefreshSelectLimit in the Oracle TimesTen In-Memory Database Reference.

For example, before a large transaction, you can call the ttCacheAutorefreshSelectLimit built-in procedure to set a select limit of 1000 rows for cache groups with incremental autorefresh where the interval value is 10 seconds. The following example sets the value to ON, which specifies this default limit of 1000 rows.

Command> call ttCacheAutorefreshSelectLimit('10000', 'ON');
< 10000, ON >
1 row found.

The following example sets a select limit of 2000 rows for cache groups with incremental autorefresh where the interval value is 7 seconds.

Command> call ttCacheAutorefreshSelectLimit('7000', '2000');
< 7000, 2000 >
1 row found.

You can disable any select limit for cache groups with incremental autorefresh where the interval value is 10 seconds by setting the value to OFF.

Command> call ttCacheAutorefreshSelectLimit('10000', 'OFF');
< 10000, OFF >
1 row found.

The following sections describe details when configuring a select limit for static read-only cache groups with incremental autorefresh.

How to Determine Which Intervals Have a Particular Select Limit

To determine the interval for a cache group, use ttIsql and run the cachegroups command.

> cachegroups cgowner.cgname;

This returns all attributes for the cgowner.cgname cache group including the interval.

To determine which intervals have a select limit, you can run the following query on the Oracle database where <cacheAdminUser> is the cache administrator, <hostName> is the host name of the machine where the TimesTen database is located, <databaseFileName> is the database path taken from the DataStore attribute, and substitute the version number (such as 07) for the xx.

SELECT * FROM <cacheAdminUser>.tt_xx_arinterval_params
 WHERE param='AutorefreshSelectEveryN'
   AND host='<hostName>'
   AND database like '%<databaseFileName>%'
 ORDER BY arinterval;

For example, if the cache administrator user name is pat, the host name is myhost, the database file name is myTtDb, and 07 (the TimesTen minor release number) is substituted for xx, then the query is:

SELECT * FROM pat.tt_07_arinterval_params
 WHERE param='AutorefreshSelectEveryN'
   AND host='myhost'
   AND database like '%myTtDb%'
 ORDER BY arinterval;

The interval is stored in milliseconds.

Retrieving Statistics to Evaluate Performance When Using a Select Limit

To see how a select limit for a particular autorefresh interval is performing, you can retrieve statistics for incremental autorefresh transactions for this autorefresh interval with the ttCacheAutorefIntervalStatsGet built-in procedure.

See Retrieving Statistics on Autorefresh Transactions.