2 SQL Processing for Application Developers

This chapter explains what application developers must know about how Oracle Database processes SQL statements. Before reading this chapter, read the basic information about SQL processing in Oracle Database Concepts.


Description of SQL Statement Processing

This topic provides an example of what happens during the execution of a SQL statement in each stage of processing. While this example specifically processes a DML statement, you can generalize it for other types of SQL statements. For information about how execution of other types of SQL statements might differ from this description, see Processing Other Types of SQL Statements.

Assume that you are using a Pro*C program to increase the salary for all employees in a department. The program you are using has connected to Oracle Database and you are connected to the proper schema to update the employees table. You can embed the following SQL statement in your program:

EXEC SQL UPDATE employees SET salary = 1.10 * salary 
    WHERE department_id = :department_id; 

department_id is a program variable containing a value for the department number. When the SQL statement is run, the value of department_id supplied by the application program is used.
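The same update can also be issued interactively. For example, in SQL*Plus a bind variable can be declared and given a value before the statement runs (a sketch, assuming the standard hr sample schema):

```sql
VARIABLE department_id NUMBER
EXEC :department_id := 20

UPDATE employees
   SET salary = 1.10 * salary
 WHERE department_id = :department_id;
```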

Stages of SQL Statement Processing

The following are the stages necessary for each type of statement processing. (For a flowchart of this process, see Oracle Database Concepts.)

  1. Open or create a cursor.

    A program interface call opens or creates a cursor. The cursor is created independent of any SQL statement: it is created in expectation of a SQL statement. In most applications, cursor creation is automatic. However, in precompiler programs, cursor creation can either occur implicitly or be explicitly declared.

  2. Parse the statement.

    During parsing, the SQL statement is passed from the user process to Oracle Database, and a parsed representation of the SQL statement is loaded into a shared SQL area. Many errors can be caught during this stage of statement processing.

    See Also:

    Oracle Database Concepts for more information about parsing
  3. Determine if the statement is a query.

    This stage determines if the SQL statement is a query.

  4. If the statement is a query, describe its results.

    This stage is necessary only if the characteristics of a query's result are not known; for example, when a query is entered interactively by a user. In this case, the describe stage determines the characteristics (data types, lengths, and names) of a query's result.

  5. If the statement is a query, define its output.

    In this stage, you specify the location, size, and data type of variables defined to receive each fetched value. These variables are called define variables. Oracle Database performs data type conversion if necessary.

    See Also:

    Oracle Database Concepts for information about the DEFINE stage
  6. Bind any variables.

    At this point, Oracle Database knows the meaning of the SQL statement but still does not have enough information to run the statement. Oracle Database needs values for any variables listed in the statement; in the example, Oracle Database needs a value for department_id. The process of obtaining these values is called binding variables.

    A program must specify the location (memory address) where the value can be found. End users of applications may be unaware that they are specifying bind variables, because the Oracle Database utility can simply prompt them for a new value.

    Because you specify the location (binding by reference), you need not rebind the variable before reexecution. You can change its value and Oracle Database looks up the value on each execution, using the memory address.

    You must also specify a data type and length for each value (unless they are implied or defaulted) if Oracle Database must perform data type conversion.

  7. (Optional) Parallelize the statement.

    Oracle Database can parallelize queries and some DDL operations such as index creation, creating a table with a subquery, and operations on partitions. Parallelization causes multiple server processes to perform the work of the SQL statement so it can complete faster.

    See Also:

    Oracle Database Concepts for an overview of parallel execution
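    For example, parallel execution can be requested explicitly in DDL or through an optimizer hint. This is a sketch; the employees table and degree 4 are arbitrary choices:

```sql
-- Parallelized index creation (DDL):
CREATE INDEX emp_dept_ix ON employees (department_id) PARALLEL 4;

-- Parallelized query, requested with a hint:
SELECT /*+ PARALLEL(employees, 4) */ AVG(salary)
  FROM employees;
```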
  8. Run the statement.

    At this point, Oracle Database has all necessary information and resources, so the statement is run. If the statement is a query or an INSERT statement, no rows need to be locked because no data is being changed. If the statement is an UPDATE or DELETE statement, however, all rows that the statement affects are locked until the next COMMIT, ROLLBACK, or SAVEPOINT for the transaction. This ensures data integrity.

    For some statements you can specify a number of executions to be performed. This is called array processing. Given n number of executions, the bind and define locations are assumed to be the beginning of an array of size n.
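    In PL/SQL, array processing is exposed through bulk binding. The FORALL statement below is a minimal sketch in which the bind location is an array and the UPDATE executes once per element:

```sql
DECLARE
  TYPE dept_ids_t IS TABLE OF employees.department_id%TYPE;
  dept_ids dept_ids_t := dept_ids_t(10, 20, 30);  -- n = 3 executions
BEGIN
  FORALL i IN 1 .. dept_ids.COUNT
    UPDATE employees
       SET salary = 1.10 * salary
     WHERE department_id = dept_ids(i);
END;
/
```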

  9. If the statement is a query, fetch its rows.

    In the fetch stage, rows are selected and ordered (if requested by the query), and each successive fetch retrieves another row of the result until the last row has been fetched.

  10. Close the cursor.

    The final stage of processing a SQL statement is closing the cursor.

Shared SQL Areas

Oracle Database automatically notices when applications send similar SQL statements to the database. The SQL area used to process the first occurrence of the statement is shared—that is, used for processing subsequent occurrences of that same statement. Therefore, only one shared SQL area exists for a unique statement. Because shared SQL areas are shared memory areas, any Oracle Database process can use a shared SQL area. The sharing of SQL areas reduces memory use on the database server, thereby increasing system throughput.
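For example, statements that differ only in literal values have different text and therefore occupy different shared SQL areas, while a bind variable keeps the text identical across executions (a sketch):

```sql
-- Different statement text: two shared SQL areas.
SELECT last_name FROM employees WHERE employee_id = 101;
SELECT last_name FROM employees WHERE employee_id = 102;

-- Same statement text for every value: one shared SQL area is reused.
SELECT last_name FROM employees WHERE employee_id = :emp_id;
```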

In evaluating whether statements are similar or identical, Oracle Database considers SQL statements issued directly by users and applications as well as recursive SQL statements issued internally by a DDL statement.

Processing Other Types of SQL Statements

The following topics discuss how DDL, Transaction Control, and other SQL statements can differ from the process just described in Description of SQL Statement Processing:

DDL Statement Processing

The execution of DDL statements differs from the execution of DML statements and queries, because the success of a DDL statement requires write access to the data dictionary. For these statements, parsing (Stage 2) actually includes parsing, data dictionary lookup, and execution.

Transaction Control Processing

In general, only application designers using the programming interfaces to Oracle Database are concerned with the types of actions that are grouped together as one transaction. Transactions must be defined so that work is accomplished in logical units and data is kept consistent. A transaction consists of all of the necessary parts for one logical unit of work, no more and no less.

  • Data in all referenced tables should be in a consistent state before the transaction begins and after it ends.

  • Transactions should consist of only the SQL statements that make one consistent change to the data.

For example, a transfer of funds between two accounts (the transaction or logical unit of work) should include the debit to one account (one SQL statement) and the credit to another account (one SQL statement). Both actions should either fail or succeed together as a unit of work; the credit should not be committed without the debit. Other unrelated actions, such as a new deposit to one account, should not be included in the transfer of funds transaction.

Other Processing Types

Transaction management, session management, and system management SQL statements are processed using the parse and run stages. To rerun them, simply perform another execute.

Grouping Operations into Transactions


Deciding How to Group Operations in Transactions

In general, deciding how to group operations in transactions is the concern of application designers who use the programming interfaces to Oracle Database. When deciding how to group transactions:

  • Define transactions such that work is accomplished in logical units and data remains consistent.

  • Ensure that data in all referenced tables is in a consistent state before the transaction begins and after it ends.

  • Ensure that each transaction consists only of the SQL statements or PL/SQL blocks that comprise one consistent change to the data.

For example, suppose that you write a Web application that enables users to transfer funds between accounts. The transaction must include the debit to one account, which is executed by one SQL statement, and the credit to another account, which is executed by a second SQL statement. Both statements must fail or succeed together as a unit of work; the credit must not be committed without the debit. Other unrelated actions, such as a new deposit to one account, must not be included in the same transaction.
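The transfer described above can be sketched as a single transaction; the accounts table, account numbers, and amount are hypothetical:

```sql
UPDATE accounts
   SET balance = balance - 500
 WHERE account_id = 3209;   -- debit one account

UPDATE accounts
   SET balance = balance + 500
 WHERE account_id = 3208;   -- credit the other account

COMMIT;                     -- both changes become permanent together
-- On any error before the COMMIT, issue ROLLBACK so neither change persists.
```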

Improving Transaction Performance

As an application developer, you must consider whether you can improve performance. Consider the following performance enhancements when designing and writing your application:

  • Use the SET TRANSACTION statement with the USE ROLLBACK SEGMENT clause to explicitly assign a transaction to a rollback segment. This technique can eliminate the need to allocate additional extents dynamically, which can reduce system performance. This clause is valid only if you use rollback segments for undo. If you use automatic undo management, then Oracle Database ignores this clause.

  • Establish standards for writing SQL statements so that you can take advantage of shared SQL areas. Oracle Database recognizes identical SQL statements and enables them to share memory areas. This reduces memory usage on the database server and increases system throughput.

  • Collect statistics that can be used by Oracle Database to implement a cost-based approach to SQL statement optimization. You can supply additional "hints" to the optimizer as needed.

    For the collection of most statistics, use the DBMS_STATS package, which lets you collect statistics in parallel, collect global statistics for partitioned objects, and fine tune your statistics collection in other ways. For more information about this package, see Oracle Database PL/SQL Packages and Types Reference.

    For statistics collection not related to the cost-based optimizer (such as collecting information about freelist blocks), use the SQL statement ANALYZE. For more information about this statement, see Oracle Database SQL Language Reference.

  • Invoke the DBMS_APPLICATION_INFO.SET_ACTION procedure before beginning a transaction to register and name a transaction for later use when measuring performance across an application. Specify which type of activity a transaction performs so that the system tuners can later see which transactions are taking up the most system resources.

  • Increase user productivity and query efficiency by including user-written PL/SQL functions in SQL expressions as described in Invoking Stored PL/SQL Functions from SQL Statements.

  • Create explicit cursors when writing a PL/SQL application.

  • Reduce the frequency of parsing and improve performance in precompiler programs by increasing the number of cursors with the MAXOPENCURSORS precompiler option.

  • Use the SET TRANSACTION statement with the ISOLATION LEVEL set to SERIALIZABLE to get ANSI/ISO serializable transactions.
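For example, a transaction can be registered for performance measurement and made serializable before any DML is issued. This is a sketch; the module and action names are hypothetical:

```sql
BEGIN
  DBMS_APPLICATION_INFO.SET_MODULE(
    module_name => 'payroll',
    action_name => 'monthly_raise');
END;
/

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
```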

Committing Transactions

To commit a transaction, use the COMMIT statement. The following two statements are equivalent and commit the current transaction:

COMMIT;
COMMIT WORK;
The COMMIT statement lets you include a COMMENT clause, with a comment that provides information about the transaction being committed. This option is useful for including information about the origin of the transaction when you commit distributed transactions:

COMMIT COMMENT 'Dallas/Accts_pay/Trans_type 10B';

Managing Commit Redo Action

When a transaction updates the database, it generates a redo entry corresponding to this update. Oracle Database buffers this redo in memory until the completion of the transaction. When the transaction commits, the log writer process (LGWR) writes redo for the commit, along with the accumulated redo of all changes in the transaction, to disk. By default, Oracle Database writes the redo to disk before the call returns to the client. This action introduces a latency in the commit because the application must wait for the redo to be persisted on disk.

Suppose that you are writing an application that requires very high transaction throughput. If you are willing to trade commit durability for lower commit latency, then you can change the default COMMIT options so that the application need not wait for the database to write data to the online redo logs.

Oracle Database enables you to change the handling of commit redo depending on the needs of your application. You can change the commit action in the following locations:

  • COMMIT_WRITE initialization parameter at the system or session level

  • COMMIT statement

The options in the COMMIT statement override the current settings in the initialization parameter. Table 2-1 describes redo persistence options that you can set in either location.


With the NOWAIT option of COMMIT or COMMIT_WRITE, a failure that occurs after the commit message is received, but before the redo log record(s) are written, can falsely indicate to a transaction that its changes are persistent.

Table 2-1 Options of COMMIT Statement and COMMIT_WRITE Initialization Parameter

Option Effect

WAIT (default)

Ensures that the commit returns only after the corresponding redo information is persistent in the online redo log. When the client receives a successful return from this COMMIT statement, the transaction has been committed to durable media.

A failure that occurs after a successful write to the log might prevent the success message from returning to the client, in which case the client cannot tell whether or not the transaction committed.

NOWAIT

The commit returns to the client whether or not the write to the redo log has completed. This behavior can increase transaction throughput.

BATCH

The redo information is buffered to the redo log, along with other concurrently executing transactions. When sufficient redo information is collected, a disk write of the redo log is initiated. This behavior is called group commit, as redo information for multiple transactions is written to the log in a single I/O operation.

IMMEDIATE (default)

LGWR writes the transaction's redo information to the log. Because this option forces a disk I/O, it can reduce transaction throughput.

The following example shows how to set the commit action to BATCH and NOWAIT in the initialization parameter file:

COMMIT_WRITE = BATCH, NOWAIT
You can change the commit action at the system level by executing ALTER SYSTEM as in the following example:

ALTER SYSTEM SET COMMIT_WRITE = 'BATCH, NOWAIT';

After the initialization parameter is set, a COMMIT statement with no options conforms to the options specified in the parameter. Alternatively, you can override the current initialization parameter setting by specifying options directly on the COMMIT statement as in the following example:

COMMIT WRITE BATCH NOWAIT;

In either case, your application specifies that log writer does not have to write the redo for the commit immediately to the online redo logs and need not wait for confirmation that the redo was written to disk.


You cannot change the default IMMEDIATE and WAIT action for distributed transactions.

If your application uses OCI, then you can modify redo action by setting the following flags in the OCITransCommit function within your application:

  • OCI_TRANS_WRITEIMMED

  • OCI_TRANS_WRITEBATCH

  • OCI_TRANS_WRITEWAIT

  • OCI_TRANS_WRITENOWAIT

There is a potential for silent transaction loss when you use OCI_TRANS_WRITENOWAIT. Transaction loss occurs silently with shutdown termination, startup force, and any instance or node failure. On a RAC system asynchronously committed changes might not be immediately available to read on other instances.

Specifying the NOWAIT and BATCH options creates a small window of vulnerability in which Oracle Database can roll back a transaction that your application views as committed. Your application must be able to tolerate the following scenarios:

  • The database host fails, which causes the database to lose redo that was buffered but not yet written to the online redo logs.

  • A file I/O problem prevents log writer from writing buffered redo to disk. If the redo logs are not multiplexed, then the commit is lost.

See Also:

Rolling Back Transactions

To roll back an entire transaction, or to roll back part of a transaction to a savepoint, use the ROLLBACK statement. For example, either of the following statements rolls back the entire current transaction:


The WORK option of the ROLLBACK statement has no function.

To roll back to a savepoint defined in the current transaction, use the TO option of the ROLLBACK statement. For example, either of the following statements rolls back the current transaction to the savepoint named POINT1:


Defining Transaction Savepoints

To define a savepoint in a transaction, use the SAVEPOINT statement. The following statement creates the savepoint named ADD_EMP1 in the current transaction:


If you create a second savepoint with the same identifier as an earlier savepoint, the earlier savepoint is erased. After creating a savepoint, you can roll back to the savepoint.

There is no limit on the number of active savepoints for each session. An active savepoint is one that was specified since the last commit or rollback.

Table 2-2 shows a series of SQL statements that illustrates the use of COMMIT, SAVEPOINT, and ROLLBACK statements within a transaction.


SQL Statement Results

SAVEPOINT a;

First savepoint of this transaction

DELETE ...;

First DML statement of this transaction

SAVEPOINT b;

Second savepoint of this transaction

INSERT ...;

Second DML statement of this transaction

SAVEPOINT c;

Third savepoint of this transaction

UPDATE ...;

Third DML statement of this transaction

ROLLBACK TO c;

UPDATE statement is rolled back; savepoint c remains defined

ROLLBACK TO b;

INSERT statement is rolled back; savepoint c is lost, savepoint b remains defined

INSERT ...;

New DML statement in this transaction

COMMIT;

Commits all actions performed by the first DML statement (the DELETE statement) and the last DML statement (the second INSERT statement)

All other statements (the second and the third statements) of the transaction were rolled back before the COMMIT. Savepoint a is no longer active.

Ensuring Repeatable Reads with Read-Only Transactions

By default, the consistency model for Oracle Database guarantees statement-level read consistency, but does not guarantee transaction-level read consistency (repeatable reads). If you want transaction-level read consistency, and if your transaction does not require updates, then you can specify a read-only transaction. After indicating that your transaction is read-only, you can execute as many queries as you like against any database table, knowing that the results of each query in the read-only transaction are consistent with respect to a single point in time.

A read-only transaction does not acquire any additional data locks to provide transaction-level read consistency. The multi-version consistency model used for statement-level read consistency is used to provide transaction-level read consistency; all queries return information with respect to the system change number (SCN) determined when the read-only transaction begins. Because no data locks are acquired, other transactions can query and update data being queried concurrently by a read-only transaction.

Long-running queries sometimes fail because undo information required for consistent read (CR) operations is no longer available. This happens when committed undo blocks are overwritten by active transactions. Automatic undo management provides a way to explicitly control when undo space can be reused; that is, how long undo information is retained. Your database administrator can specify a retention period by using the parameter UNDO_RETENTION.

See Also:

Oracle Database Administrator's Guide for information about long-running queries and resumable space allocation

For example, if UNDO_RETENTION is set to 30 minutes, then all committed undo information in the system is retained for at least 30 minutes. This setting ensures that all queries running for 30 minutes or less, under usual circumstances, do not encounter the error ORA-01555 ("snapshot too old").
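UNDO_RETENTION is specified in seconds, so a 30-minute retention period can be set as follows:

```sql
ALTER SYSTEM SET UNDO_RETENTION = 1800;  -- 1800 seconds = 30 minutes
```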

A read-only transaction is started with a SET TRANSACTION statement that includes the READ ONLY option. For example:

SET TRANSACTION READ ONLY;
The SET TRANSACTION statement must be the first statement of a new transaction. If any statement except a DDL statement precedes a SET TRANSACTION READ ONLY statement, an error is returned. After a SET TRANSACTION READ ONLY statement successfully executes, the transaction can include only SELECT (without a FOR UPDATE clause), COMMIT, ROLLBACK, or non-DML statements (such as SET ROLE, ALTER SYSTEM, LOCK TABLE). Otherwise, an error is returned. A COMMIT, ROLLBACK, or DDL statement terminates the read-only transaction; a DDL statement causes an implicit commit of the read-only transaction and commits in its own transaction.
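A complete read-only report might look like the following sketch; the queries are arbitrary, and the final COMMIT simply ends the read-only transaction:

```sql
COMMIT;  -- end any prior transaction so that SET TRANSACTION comes first

SET TRANSACTION READ ONLY;

SELECT COUNT(*) FROM employees;
SELECT SUM(salary) FROM employees;  -- consistent with the first query's SCN

COMMIT;  -- terminates the read-only transaction
```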

Using Cursors

PL/SQL implicitly declares a cursor for all SQL data manipulation statements, including queries that return only one row. For queries that return more than one row, you can explicitly declare a cursor to process the rows individually.
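A minimal explicit cursor in PL/SQL processes a multirow query one row at a time. This is a sketch assuming the hr sample schema:

```sql
DECLARE
  CURSOR emp_cur IS
    SELECT last_name, salary
      FROM employees
     WHERE department_id = 20;
BEGIN
  FOR emp_rec IN emp_cur LOOP  -- cursor FOR loop: implicit OPEN, FETCH, CLOSE
    DBMS_OUTPUT.PUT_LINE(emp_rec.last_name || ': ' || emp_rec.salary);
  END LOOP;
END;
/
```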

A cursor is a handle to a specific private SQL area. In other words, a cursor can be thought of as a name for a specific private SQL area. A PL/SQL cursor variable enables the retrieval of multiple rows from a stored subprogram. Cursor variables enable you to pass cursors as parameters in your 3GL application. Cursor variables are described in Oracle Database PL/SQL Language Reference.

Although most Oracle Database users rely on the automatic cursor handling of the database utilities, the programmatic interfaces offer application designers more control over cursors. In application development, a cursor is a named resource available to a program, which can be specifically used for parsing SQL statements embedded within the application.


How Many Cursors Can a Session Have?

There is no absolute limit on the total number of cursors one session can have open at one time; however, two constraints apply:

  • Each cursor requires virtual memory, so a session's total number of cursors is limited by the memory available to that process.

  • A systemwide limit on the number of open cursors for each session is set by the initialization parameter OPEN_CURSORS, found in the parameter file (such as INIT.ORA).

    See Also:

    Oracle Database Reference for more information about OPEN_CURSORS

Explicitly creating cursors for precompiler programs has advantages in tuning those applications. For example, increasing the number of cursors can reduce the frequency of parsing and improve performance. If you know how many cursors might be required at a given time, you can open that many cursors simultaneously.

Using a Cursor to Reexecute a Statement

After each stage of execution, the cursor retains enough information about the SQL statement to reexecute the statement without starting over, as long as no other SQL statement was associated with that cursor. The statement can be reexecuted without including the parse stage.

By opening several cursors, the parsed representation of several SQL statements can be saved. Repeated execution of the same SQL statements can thus begin at the describe, define, bind, or execute step, saving the repeated cost of opening cursors and parsing.

To understand the performance characteristics of a cursor, a DBA can retrieve the text of the query represented by the cursor using the V$SQL dynamic performance view. Because the results of EXPLAIN PLAN on the original query might differ from the way the query is actually processed, a DBA can get more precise information by examining the following dynamic performance views:

View Description
V$SQL_PLAN Execution plan information for each child cursor loaded in the library cache.
V$SQL_PLAN_STATISTICS Execution statistics at the row source level for each child cursor.
V$SQL_PLAN_STATISTICS_ALL Memory usage statistics for row sources that use SQL memory (sort or hash join). This view concatenates information in V$SQL_PLAN with execution statistics from V$SQL_PLAN_STATISTICS and V$SQL_WORKAREA.
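For example, a DBA might retrieve the plan of a cursor already in the library cache with a query such as the following sketch (the sql_id value is a placeholder to be supplied):

```sql
SELECT id, operation, options, object_name
  FROM V$SQL_PLAN
 WHERE sql_id = :sql_id
 ORDER BY id;
```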

See Also:

Oracle Database Reference for details of the preceding dynamic performance views

Closing a Cursor

Closing a cursor means that the information currently in the associated private area is lost and its memory is deallocated. Once a cursor is opened, it is not closed until one of the following events occurs:

  • The user program terminates its connection to the server.

  • If the user program is an OCI program or precompiler application, then it explicitly closes any open cursor during the execution of that program. (However, when this program terminates, any cursors remaining open are implicitly closed.)

Canceling a Cursor

Canceling a cursor frees resources from the current fetch. The information currently in the associated private area is lost, but the cursor remains open, parsed, and associated with its bind variables.


You cannot cancel cursors using Pro*C/C++ or PL/SQL.

See Also:

Oracle Call Interface Programmer's Guide for information about canceling a cursor with the OCIStmtFetch2() call

Locking Tables Explicitly

Oracle Database always performs necessary locking to ensure data concurrency, integrity, and statement-level read consistency. You can override these default locking mechanisms. For example, you might want to override the default locking of Oracle Database if:

  • You want transaction-level read consistency or "repeatable reads"—where transactions query a consistent set of data for the duration of the transaction, knowing that the data was not changed by any other transactions. This level of consistency can be achieved by using explicit locking, read-only transactions, serializable transactions, or overriding default locking for the system.

  • A transaction requires exclusive access to a resource. To proceed with its statements, the transaction with exclusive access to a resource does not have to wait for other transactions to complete.

The automatic locking mechanisms can be overridden at the transaction level. Transactions including the following SQL statements override Oracle Database's default locking:


  • The LOCK TABLE statement

  • SELECT with the FOR UPDATE clause


Locks acquired by these statements are released after the transaction is committed or rolled back.

The following topics describe each option available for overriding the default locking of Oracle Database. The initialization parameter DML_LOCKS determines the maximum number of DML locks.

See Also:

Oracle Database Reference for more information about DML_LOCKS

Although the default value is usually enough, you might need to increase it if you use additional manual locks.


If you override the default locking of Oracle Database at any level, be sure that the overriding locking subprograms operate correctly: Ensure that data integrity is guaranteed, data concurrency is acceptable, and deadlocks are either impossible or appropriately handled.


Privileges Required

You can automatically acquire any type of table lock on tables in your schema. To acquire a table lock on a table in another schema, you must have the LOCK ANY TABLE system privilege or any object privilege (for example, SELECT or UPDATE) for the table.

Choosing a Locking Strategy

A transaction explicitly acquires the specified table locks when a LOCK TABLE statement is executed. A LOCK TABLE statement manually overrides default locking. When a LOCK TABLE statement is issued on a view, the underlying base tables are locked. The following statement acquires exclusive table locks for the emp_tab and dept_tab tables on behalf of the containing transaction:

LOCK TABLE emp_tab, dept_tab
    IN EXCLUSIVE MODE NOWAIT;

You can specify several tables or views to lock in the same mode; however, only a single lock mode can be specified for each LOCK TABLE statement.


When a table is locked, all rows of the table are locked. No other user can modify the table.

In the LOCK TABLE statement, you can also indicate how long you want to wait for the table lock:

  • If you do not want to wait, specify either NOWAIT or WAIT 0.

    You acquire the table lock only if it is immediately available; otherwise, an error notifies you that the lock is not available at this time.

  • If you want to wait up to n seconds to acquire the table lock, specify WAIT n, where n is greater than 0 and less than or equal to 100000.

    If the table lock is still unavailable after n seconds, an error notifies you that the lock is not available at this time.

  • If you want to wait indefinitely to acquire the lock, specify neither NOWAIT nor WAIT.

    The database waits indefinitely until the table is available, locks it, and returns control to you. When the database is executing DDL statements concurrently with DML statements, a timeout or deadlock can sometimes result. The database detects such timeouts and deadlocks and returns an error.
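The three wait behaviors look like this in SQL (the dept_tab table and SHARE mode are arbitrary choices):

```sql
LOCK TABLE dept_tab IN SHARE MODE NOWAIT;   -- fail immediately if unavailable
LOCK TABLE dept_tab IN SHARE MODE WAIT 10;  -- wait up to 10 seconds
LOCK TABLE dept_tab IN SHARE MODE;          -- wait indefinitely
```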

For the syntax of the LOCK TABLE statement, see Oracle Database SQL Language Reference.



When to Lock with ROW SHARE MODE and ROW EXCLUSIVE MODE

ROW SHARE and ROW EXCLUSIVE table locks offer the highest degree of concurrency. You might use these locks if:

  • Your transaction must prevent another transaction from acquiring an intervening share, share row, or exclusive table lock for a table before your transaction can update that table.

    If another transaction acquires an intervening share, share row, or exclusive table lock, no other transactions can update the table until the locking transaction commits or rolls back.

  • Your transaction must prevent a table from being altered or dropped before your transaction can modify that table.

When to Lock with SHARE MODE

SHARE table locks are rather restrictive data locks. You might use these locks if:

  • Your transaction only queries the table, and requires a consistent set of the table data for the duration of the transaction.

  • Other transactions that try to update the locked table are held up until all transactions that hold SHARE locks on the table either commit or roll back.

  • Other transactions might acquire concurrent SHARE table locks on the same table, also giving them the option of transaction-level read consistency.


    Your transaction might or might not update the table later in the same transaction. However, if multiple transactions concurrently hold share table locks for the same table, no transaction can update the table (even if row locks are held as the result of a SELECT FOR UPDATE statement). Therefore, if concurrent share table locks on the same table are common, updates cannot proceed and deadlocks are common. In this case, use share row exclusive or exclusive table locks instead.

Scenario: Tables employees and budget_tab require a consistent set of data in a third table, departments. For a given department number, you want to update the information in employees and budget_tab, and ensure that no new members are added to the department between these two transactions.

Solution: Lock the departments table in SHARE MODE, as shown in Example 2-1. Because the departments table is rarely updated, locking it probably does not cause many other transactions to wait long.

Example 2-1 LOCK TABLE with SHARE MODE

SQL> -- Create and populate table:
SQL> DROP TABLE budget_tab;
DROP TABLE budget_tab
ERROR at line 1:
ORA-00942: table or view does not exist
SQL> CREATE TABLE budget_tab (
  2    sal     NUMBER(8,2),
  3    deptno  NUMBER(4));
Table created.
SQL> INSERT INTO budget_tab
  2    SELECT salary, department_id
  3      FROM employees;
107 rows created.
SQL> -- Lock departments and update employees and budget_tab:
SQL> LOCK TABLE departments IN SHARE MODE;
Table(s) Locked.
SQL> UPDATE employees
  2    SET salary = salary * 1.1
  3      WHERE department_id IN
  4        (SELECT department_id FROM departments WHERE location_id = 1700);
18 rows updated.
SQL> UPDATE budget_tab
  2    SET sal = sal * 1.1
  3      WHERE deptno IN
  4        (SELECT department_id FROM departments WHERE location_id = 1700);
18 rows updated.
SQL> -- COMMIT releases lock


When to Lock with SHARE ROW EXCLUSIVE MODE

You might use a SHARE ROW EXCLUSIVE table lock if:

  • Your transaction requires both transaction-level read consistency for the specified table and the ability to update the locked table.

  • You do not care if other transactions acquire explicit row locks (using SELECT FOR UPDATE), which might make UPDATE and INSERT statements in the locking transaction wait and might cause deadlocks.

  • You want only a single transaction to have this action.
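A share row exclusive lock is requested with the LOCK TABLE statement. A minimal sketch, using the departments table from the earlier scenario:

```sql
-- Only one transaction at a time can hold a share row exclusive lock
-- on the table. Other transactions can query it or lock specific rows
-- with SELECT ... FOR UPDATE, but cannot update it or lock it in
-- SHARE MODE until this transaction commits or rolls back.
LOCK TABLE departments IN SHARE ROW EXCLUSIVE MODE;
```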

When to Lock with EXCLUSIVE MODE

You might use an EXCLUSIVE table lock if:

  • Your transaction requires immediate update access to the locked table. When your transaction holds an exclusive table lock, other transactions cannot lock specific rows in the locked table.

  • Your transaction also ensures transaction-level read consistency for the locked table until the transaction is committed or rolled back.

  • You are not concerned about low levels of data concurrency, making transactions that request exclusive table locks wait in line to update the table sequentially.
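An exclusive lock is also requested with the LOCK TABLE statement. A minimal sketch:

```sql
-- The most restrictive table lock: other transactions can only query
-- the table until this transaction commits or rolls back.
LOCK TABLE departments IN EXCLUSIVE MODE;

-- Add NOWAIT to raise an error immediately, instead of waiting, if
-- another session already holds a conflicting lock:
LOCK TABLE departments IN EXCLUSIVE MODE NOWAIT;
```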

Letting Oracle Database Control Table Locking

Letting Oracle Database control table locking means your application needs less programming logic, but also has less control, than if you manage the table locks yourself.

Issuing the statement SET TRANSACTION ISOLATION LEVEL SERIALIZABLE or ALTER SESSION SET ISOLATION_LEVEL = SERIALIZABLE preserves ANSI serializability without changing the underlying locking protocol. This technique gives concurrent access to the table while providing ANSI serializability. Acquiring table locks greatly reduces concurrency.

Note:

Change the settings for these parameters only when an instance is shut down. If multiple instances are accessing a single database, then all instances must use the same setting for these parameters.

Explicitly Acquiring Row Locks

You can override default locking with a SELECT statement that includes the FOR UPDATE clause. This statement acquires exclusive row locks for selected rows (as an UPDATE statement does), in anticipation of updating the selected rows in a subsequent statement.

You can use a SELECT FOR UPDATE statement to lock a row without actually changing it. For example, several triggers in Oracle Database PL/SQL Language Reference show how to implement referential integrity. In the EMP_DEPT_CHECK trigger, the row that contains the referenced parent key value is locked to guarantee that it remains for the duration of the transaction; if the parent key were updated or deleted, referential integrity would be violated.

SELECT FOR UPDATE statements are often used by interactive programs that enable a user to modify fields of one or more specific rows (which might take some time); row locks are acquired so that only a single interactive program user is updating the rows at any given time.
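For example, an interactive program might lock one row while the user edits it. A sketch (the employee_id value and new salary are illustrative):

```sql
-- Lock the row now; other sessions can still read it, but cannot
-- update, delete, or SELECT ... FOR UPDATE it until this transaction ends.
SELECT employee_id, salary
  FROM employees
 WHERE employee_id = 100
   FOR UPDATE;

-- ... user edits the field ...

UPDATE employees
   SET salary = 8500
 WHERE employee_id = 100;

COMMIT;  -- releases the row lock
```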

If a SELECT FOR UPDATE statement is used when defining a cursor, the rows in the return set are locked when the cursor is opened (before the first fetch) rather than being locked as they are fetched from the cursor. Locks are only released when the transaction that opened the cursor is committed or rolled back, not when the cursor is closed.

Each row in the return set of a SELECT FOR UPDATE statement is locked individually; if another transaction holds a conflicting lock on a row, the SELECT FOR UPDATE statement waits until that transaction releases the lock. If a SELECT FOR UPDATE statement locks many rows in a table, and the table experiences much update activity, it might be faster to acquire an EXCLUSIVE table lock instead.


The return set for a SELECT FOR UPDATE might change while the query is running; for example, if columns selected by the query are updated or rows are deleted after the query started. When this happens, SELECT FOR UPDATE acquires locks on the rows that did not change, gets a new read-consistent snapshot of the table using these locks, and then restarts the query to acquire the remaining locks.

This can cause a deadlock between sessions querying the table concurrently with DML operations when rows are locked in a nonsequential order. To prevent such deadlocks, design your application so that any concurrent DML on the table does not affect the return set of the query. If this is not feasible, you might want to serialize queries in your application.

By default, the transaction waits until the requested row lock is acquired. If you are not willing to wait to acquire the row lock, use either the NOWAIT clause of the LOCK TABLE statement (see Choosing a Locking Strategy) or the SKIP LOCKED clause of the SELECT FOR UPDATE statement.

If you can lock some of the requested rows, but not all of them, the SKIP LOCKED option skips the rows that you cannot lock and locks the rows that you can lock.
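Sketches of the nonblocking variants (the WAIT clause, which bounds the wait in seconds, is also available):

```sql
-- Raise ORA-00054 immediately if any selected row is already locked:
SELECT employee_id FROM employees
 WHERE department_id = 30
   FOR UPDATE NOWAIT;

-- Wait at most 5 seconds for the conflicting locks to be released:
SELECT employee_id FROM employees
 WHERE department_id = 30
   FOR UPDATE WAIT 5;

-- Lock the available rows and silently skip the locked ones:
SELECT employee_id FROM employees
 WHERE department_id = 30
   FOR UPDATE SKIP LOCKED;
```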

See Also:

Oracle Database SQL Language Reference for information about the SELECT FOR UPDATE statement and an example of the SKIP LOCKED clause

Using Oracle Lock Management Services

You can use Oracle Lock Management services (user locks) for your applications by invoking subprograms in the DBMS_LOCK package. You can request a lock of a specific mode, give it a unique name recognizable in another subprogram in the same or another instance, change the lock mode, and release it. Because a reserved user lock is the same as an Oracle Database lock, it has all the features of a database lock, such as deadlock detection. Be certain that any user locks used in distributed transactions are released upon COMMIT, or an undetected deadlock can occur.
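A minimal PL/SQL sketch of this sequence (the lock name and timeout are illustrative):

```sql
DECLARE
  lock_handle  VARCHAR2(128);
  status       INTEGER;
BEGIN
  -- Turn an application-chosen lock name into a lock handle.
  DBMS_LOCK.ALLOCATE_UNIQUE(lockname   => 'print_lock',
                            lockhandle => lock_handle);

  -- Request the lock in exclusive mode; 0 = success, 1 = timeout,
  -- 2 = deadlock, 4 = already owned.
  status := DBMS_LOCK.REQUEST(lockhandle => lock_handle,
                              lockmode   => DBMS_LOCK.X_MODE,
                              timeout    => 10);
  IF status = 0 THEN
    -- ... perform the work that must be serialized ...
    status := DBMS_LOCK.RELEASE(lock_handle);
  END IF;
END;
/
```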

See Also:

Oracle Database PL/SQL Packages and Types Reference for detailed information about the DBMS_LOCK package


When to Use User Locks

User locks can help to:

  • Provide exclusive access to a device, such as a terminal

  • Provide application-level enforcement of read locks

  • Detect when a lock is released and clean up after the application

  • Synchronize applications and enforce sequential processing

The following Pro*COBOL precompiler example shows how locks can be used to ensure that there are no conflicts when multiple people must access a single device.

Example 2-2 User Locks

* Print Check                                                    * 
* Any cashier may issue a refund to a customer returning goods.  * 
* Refunds under $50 are given in cash, more than $50 by check.   * 
* This code prints the check. One printer is opened by all       * 
* the cashiers to avoid the overhead of opening and closing it   * 
* for every check, meaning that lines of output from multiple    * 
* cashiers can become interleaved if you do not ensure exclusive * 
* access to the printer. The DBMS_LOCK package is used to        * 
* ensure exclusive access.                                       * 
*    Get the lock "handle" for the printer lock. 
      EXEC SQL EXECUTE 
        BEGIN DBMS_LOCK.ALLOCATE_UNIQUE('print_lock', :LOCK-HANDLE); 
      END; END-EXEC. 
*   Lock the printer in exclusive mode (default mode).
      EXEC SQL EXECUTE 
        BEGIN :RET := DBMS_LOCK.REQUEST(:LOCK-HANDLE); 
      END; END-EXEC. 
*   You now have exclusive use of the printer, print the check. 
*   Unlock the printer so other people can use it 
      EXEC SQL EXECUTE 
        BEGIN :RET := DBMS_LOCK.RELEASE(:LOCK-HANDLE); 
      END; END-EXEC.

Viewing and Monitoring Locks

Table 2-3 describes the Oracle Database facilities that display locking information for ongoing transactions within an instance.

Table 2-3 Ways to Display Locking Information

Tool Description

Oracle Enterprise Manager 10g Database Control

From the Additional Monitoring Links section of the Database Performance page, click Database Locks to display user blocks, blocking locks, or the complete list of all database locks. See Oracle Database 2 Day DBA for more information.


The UTLLOCKT.SQL script

The UTLLOCKT.SQL script displays a simple character lock wait-for graph in tree-structured fashion. Using any SQL tool (such as SQL*Plus) to execute the script, it prints the sessions in the system that are waiting for locks and the corresponding blocking locks. The location of this script file is operating system dependent. (You must have run the CATBLOCK.SQL script before using UTLLOCKT.SQL.)

Using Serializable Transactions for Concurrency Control

By default, Oracle Database permits concurrently executing transactions to modify, add, or delete rows in the same table, and in the same data block. Changes made by one transaction are not seen by another concurrent transaction until the transaction that made the changes commits.

If a transaction A attempts to update or delete a row that has been locked by another transaction B (by way of a DML or SELECT FOR UPDATE statement), then A's DML statement blocks until B commits or rolls back. Once B commits, transaction A can see changes that B has made to the database.

For most applications, this concurrency model is the appropriate one, because it provides higher concurrency and thus better performance. But some rare cases require transactions to be serializable. Serializable transactions must execute in such a way that they appear to be executing one at a time (serially), rather than concurrently. Concurrent transactions executing in serialized mode can make only the database changes that they could make if the transactions ran one after the other.


The ANSI/ISO SQL standard SQL92 defines three possible kinds of transaction interaction, and four levels of isolation that provide increasing protection against these interactions. These interactions and isolation levels are summarized in Table 2-4.

Table 2-4 Summary of ANSI Isolation Levels

Isolation Level     Dirty Read (Footnote 1)   Unrepeatable Read (Footnote 2)   Phantom Read (Footnote 3)

READ UNCOMMITTED    Possible                  Possible                         Possible

READ COMMITTED      Not possible              Possible                         Possible

REPEATABLE READ     Not possible              Not possible                     Possible

SERIALIZABLE        Not possible              Not possible                     Not possible

Footnote 1 A transaction can read uncommitted data changed by another transaction.

Footnote 2 A transaction rereads data committed by another transaction and sees the new data.

Footnote 3 A transaction can execute a query again, and discover new rows inserted by another committed transaction.

The action of Oracle Database with respect to these isolation levels is summarized in Table 2-5.

Table 2-5 ANSI Isolation Levels and Oracle Database

Isolation Level     Description

READ UNCOMMITTED    Oracle Database never permits "dirty reads." Although some other database products use this undesirable technique to improve throughput, it is not required for high throughput with Oracle Database.

READ COMMITTED      Oracle Database meets the READ COMMITTED isolation standard. This is the default mode for all Oracle Database applications. Because an Oracle Database query only sees data that was committed at the beginning of the query (the snapshot time), Oracle Database actually offers more consistency than is required by the ANSI/ISO SQL92 standards for READ COMMITTED isolation.

REPEATABLE READ     Oracle Database does not normally support this isolation level, except as provided by SERIALIZABLE.

SERIALIZABLE        Oracle Database does not normally support this isolation level, except as provided by SERIALIZABLE.


How Serializable Transactions Interact

Figure 2-1 shows how a serializable transaction (Transaction B) interacts with another transaction (A, which can be either SERIALIZABLE or READ COMMITTED).

When a serializable transaction fails with ORA-08177, the application can take any of several actions:

  • Commit the work executed to that point

  • Execute additional, different, statements, perhaps after rolling back to a prior savepoint in the transaction

  • Roll back the entire transaction and try it again

Oracle Database stores control information in each data block to manage access by concurrent transactions. To use the SERIALIZABLE isolation level, you must use the INITRANS clause of the CREATE TABLE or ALTER TABLE statement to set aside storage for this control information. To use serializable mode, INITRANS must be set to at least 3.
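For example (a sketch, using the employees table from earlier examples):

```sql
-- Reserve space in each data block of this table for at least three
-- concurrent transaction entries, as serializable mode requires:
ALTER TABLE employees INITRANS 3;
```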

Figure 2-1 Time Line for Two Transactions

Time Line for Two Transactions
Description of "Figure 2-1 Time Line for Two Transactions"

Setting the Isolation Level of a Serializable Transaction

You can change the isolation level of a transaction using the ISOLATION LEVEL clause of the SET TRANSACTION statement, which must be the first statement issued in a transaction.

Use the ALTER SESSION statement to set the transaction isolation level on a session-wide basis.
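Sketches of both forms:

```sql
-- Per transaction (must be the first statement of the transaction):
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

-- Per session (applies to every subsequent transaction in the session):
ALTER SESSION SET ISOLATION_LEVEL = SERIALIZABLE;
```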

Note:

Oracle Database stores control information in each data block to manage access by concurrent transactions. Therefore, if you set the transaction isolation level to SERIALIZABLE, then you must use the ALTER TABLE statement to set INITRANS to at least 3. This parameter causes Oracle Database to allocate sufficient storage in each block to record the history of recent transactions that accessed the block. Use higher values for tables that will undergo many transactions updating the same blocks.

Referential Integrity and Serializable Transactions

Because Oracle Database does not use read locks, even in SERIALIZABLE transactions, data read by one transaction can be overwritten by another. Transactions that perform database consistency checks at the application level must not assume that the data they read will not change during the execution of the transaction (even though such changes are not visible to the transaction). Database inconsistencies can result unless such application-level consistency checks are coded carefully, even when using SERIALIZABLE transactions.


Examples in this topic apply to both READ COMMITTED and SERIALIZABLE transactions.

Figure 2-2 shows two different transactions that perform application-level checks to maintain the referential integrity parent/child relationship between two tables. One transaction checks that a row with a specific primary key value exists in the parent table before inserting corresponding child rows. The other transaction checks to see that no corresponding detail rows exist before deleting a parent row. In this case, both transactions assume (but do not ensure) that data they read will not change before the transaction completes.

Figure 2-2 Referential Integrity Check

Referential Integrity Check
Description of "Figure 2-2 Referential Integrity Check"

The read issued by transaction A does not prevent transaction B from deleting the parent row, and transaction B's query for child rows does not prevent transaction A from inserting child rows. This scenario leaves a child row in the database with no corresponding parent row. This result occurs even if both A and B are SERIALIZABLE transactions, because neither transaction prevents the other from making changes in the data it reads to check consistency.

As this example shows, sometimes you must take steps to ensure that the data read by one transaction is not concurrently written by another. This requires a greater degree of transaction isolation than defined by SQL92 SERIALIZABLE mode.

Fortunately, it is straightforward in Oracle Database to prevent the anomaly described:

  • Transaction A can use SELECT FOR UPDATE to query and lock the parent row and thereby prevent transaction B from deleting the row.

  • Transaction B can prevent Transaction A from gaining access to the parent row by reversing the order of its processing steps. Transaction B first deletes the parent row, and then rolls back if its subsequent query detects the presence of corresponding rows in the child table.

Referential integrity can also be enforced in Oracle Database using database triggers, instead of a separate query as in Transaction A. For example, an INSERT into the child table can fire a BEFORE INSERT row-level trigger to check for the corresponding parent row. The trigger queries the parent table using SELECT FOR UPDATE, ensuring that the parent row (if it exists) remains in the database for the duration of the transaction inserting the child row. If the corresponding parent row does not exist, the trigger rejects the insert of the child row.
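A hypothetical sketch of such a trigger (the trigger name and error number are illustrative, not the EMP_DEPT_CHECK trigger itself):

```sql
CREATE OR REPLACE TRIGGER emp_dept_check
  BEFORE INSERT ON employees
  FOR EACH ROW
DECLARE
  dummy  departments.department_id%TYPE;
BEGIN
  -- Lock the parent row so it cannot be deleted, and its key cannot
  -- be updated, until the inserting transaction ends.
  SELECT department_id
    INTO dummy
    FROM departments
   WHERE department_id = :NEW.department_id
     FOR UPDATE;
EXCEPTION
  WHEN NO_DATA_FOUND THEN
    RAISE_APPLICATION_ERROR(-20000,
      'Invalid department_id: ' || :NEW.department_id);
END;
/
```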

SQL statements issued by a database trigger execute in the context of the SQL statement that caused the trigger to fire. All SQL statements executed within a trigger see the database in the same state as the triggering statement. Thus, in a READ COMMITTED transaction, the SQL statements in a trigger see the database as of the beginning of the triggering statement execution, and in a transaction executing in SERIALIZABLE mode, the SQL statements see the database as of the beginning of the transaction. In either case, the use of SELECT FOR UPDATE by the trigger correctly enforces referential integrity.


Oracle Database gives you a choice of two transaction isolation levels with different characteristics. Both the READ COMMITTED and SERIALIZABLE isolation levels provide a high degree of consistency and concurrency. Both levels reduce contention, and are designed for deploying real-world applications. The rest of this topic compares the two isolation modes and provides information helpful in choosing between them.


Transaction Set Consistency

A useful way to describe the READ COMMITTED and SERIALIZABLE isolation levels in Oracle Database is to consider:

  • A collection of database tables (or any set of data)

  • A sequence of reads of rows in those tables

  • The set of transactions committed at any moment

An operation (a query or a transaction) is transaction set consistent if its read operations all return data written by the same set of committed transactions. When an operation is not transaction set consistent, some reads reflect the changes of one set of transactions, and other reads reflect changes made by other transactions. Such an operation sees the database in a state that reflects no single set of committed transactions.

Oracle Database transactions executing in READ COMMITTED mode are transaction set consistent on an individual-statement basis, because all rows read by a query must be committed before the query begins.

Oracle Database transactions executing in SERIALIZABLE mode are transaction set consistent on an individual-transaction basis, because all statements in a SERIALIZABLE transaction execute on an image of the database as of the beginning of the transaction.

In other database systems, a single query run in READ COMMITTED mode provides results that are not transaction set consistent. The query is not transaction set consistent, because it might see only a subset of the changes made by another transaction. For example, a join of a master table with a detail table can see a master record inserted by another transaction, but not the corresponding details inserted by that transaction, or vice versa. Oracle Database's READ COMMITTED mode avoids this problem, and so provides a greater degree of consistency than read-locking systems.

In read-locking systems, at the cost of preventing concurrent updates, SQL92 REPEATABLE READ isolation provides transaction set consistency at the statement level, but not at the transaction level. The absence of phantom protection means two queries issued by the same transaction can see data committed by different sets of other transactions. Only the throughput-limiting and deadlock-susceptible SERIALIZABLE mode in these systems provides transaction set consistency at the transaction level.

Comparison of READ COMMITTED and SERIALIZABLE Transactions

Table 2-6 summarizes key similarities and differences between READ COMMITTED and SERIALIZABLE transactions.

Table 2-6 Read Committed and Serializable Transactions

Operation                                    Read Committed     Serializable

Dirty write                                  Not possible       Not possible

Dirty read                                   Not possible       Not possible

Unrepeatable read                            Possible           Not possible

Phantom read                                 Possible           Not possible

Compliant with ANSI/ISO SQL 92               Yes                Yes

Read snapshot time                           Statement          Transaction

Transaction set consistency                  Statement level    Transaction level

Row-level locking                            Yes                Yes

Readers block writers                        No                 No

Writers block readers                        No                 No

Different-row writers block writers          No                 No

Same-row writers block writers               Yes                Yes

Waits for blocking transaction               Yes                Yes

Subject to "cannot serialize access" error   No                 Yes

Error after blocking transaction terminates  No                 No

Error after blocking transaction commits     No                 Yes



Choosing an Isolation Level for Transactions

Choose an isolation level that is appropriate to the specific application and workload. You might choose different isolation levels for different transactions. The choice depends on performance and consistency needs, and consideration of application coding requirements.

For environments with many concurrent users rapidly submitting transactions, you must assess transaction performance against the expected transaction arrival rate and response time demands, and choose an isolation level that provides the required degree of consistency while performing well. Frequently, for high performance environments, you must make a trade-off between consistency and concurrency (transaction throughput).

Both Oracle Database isolation modes provide high levels of consistency and concurrency (and performance) through the combination of row-level locking and Oracle Database's multi-version concurrency control system. Because readers and writers do not block one another in Oracle Database, while queries still see consistent data, both READ COMMITTED and SERIALIZABLE isolation provide a high level of concurrency for high performance, without the need for reading uncommitted ("dirty") data.

READ COMMITTED isolation can provide considerably more concurrency with a somewhat increased risk of inconsistent results (due to phantoms and unrepeatable reads) for some transactions. The SERIALIZABLE isolation level provides somewhat more consistency by protecting against phantoms and unrepeatable reads, and might be important where a read/write transaction executes a query more than once. However, SERIALIZABLE mode requires applications to check for the "cannot serialize access" error, and can significantly reduce throughput in an environment with many concurrent transactions accessing the same data for update. Application logic that checks database consistency must take into account the fact that reads do not block writes in either mode.

Application Tips for Transactions

When a transaction runs in serializable mode, any attempt to change data that was changed by another transaction since the beginning of the serializable transaction causes ORA-08177.

When you get this error, roll back the current transaction and execute it again. The transaction gets a new transaction snapshot, and the operation is likely to succeed.

To minimize the performance overhead of rolling back transactions and executing them again, try to put DML statements that might conflict with other concurrent transactions near the beginning of your transaction.
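A retry loop for ORA-08177 might be sketched as follows (the DML statement and retry limit are illustrative):

```sql
DECLARE
  cannot_serialize  EXCEPTION;
  PRAGMA EXCEPTION_INIT(cannot_serialize, -8177);
BEGIN
  FOR attempt IN 1 .. 3 LOOP
    BEGIN
      SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
      UPDATE employees
         SET salary = salary * 1.1
       WHERE department_id = 20;
      COMMIT;
      EXIT;        -- success: stop retrying
    EXCEPTION
      WHEN cannot_serialize THEN
        ROLLBACK;  -- discard the old snapshot and try again
    END;
  END LOOP;
END;
/
```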

Autonomous Transactions

This topic gives a brief overview of autonomous transactions and what you can do with them.

See Also:

Oracle Database PL/SQL Language Reference for detailed information about autonomous transactions

At times, you might want to commit or roll back some changes to a table independently of a primary transaction's final outcome. For example, in a stock purchase transaction, you might want to commit a customer's information regardless of whether the overall stock purchase actually goes through. Or, while running that same transaction, you might want to log error messages to a debug table even if the overall transaction rolls back. Autonomous transactions enable you to do such tasks.

An autonomous transaction (AT) is an independent transaction started by another transaction, the main transaction (MT). It lets you suspend the main transaction, do SQL operations, commit or roll back those operations, then resume the main transaction.
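For example, the error-logging case mentioned above might be sketched like this (the debug_log table and log_message procedure are hypothetical):

```sql
CREATE TABLE debug_log (
  logged_at  DATE,
  message    VARCHAR2(4000));

CREATE OR REPLACE PROCEDURE log_message (p_message VARCHAR2) IS
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  INSERT INTO debug_log VALUES (SYSDATE, p_message);
  COMMIT;  -- ends the autonomous transaction; the caller's
           -- uncommitted work is unaffected and can still roll back
END;
/
```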

An autonomous (independent) transaction executes within an autonomous scope. An autonomous scope is a routine you mark with the pragma (compiler directive) AUTONOMOUS_TRANSACTION. The pragma instructs the PL/SQL compiler to mark a routine as autonomous.

Figure 2-3 shows how control flows from the main routine (MT) to an autonomous routine (AT) and back again. As you can see, the autonomous routine can commit more than one transaction (AT1 and AT2) before control returns to the main routine.

Figure 2-3 Transaction Control Flow

Transaction Control Flow
Description of "Figure 2-3 Transaction Control Flow"

When you enter the executable section of an autonomous routine, the main routine suspends. When you exit the routine, the main routine resumes. COMMIT and ROLLBACK end the active autonomous transaction but do not exit the autonomous routine. As Figure 2-3 shows, when one transaction ends, the next SQL statement begins another transaction.

A few more characteristics of autonomous transactions:

  • The changes autonomous transactions effect do not depend on the state or the eventual disposition of the main transaction. For example:

    • An autonomous transaction does not see any changes made by the main transaction.

    • When an autonomous transaction commits or rolls back, it does not affect the outcome of the main transaction.

  • The changes an autonomous transaction effects are visible to other transactions as soon as that autonomous transaction commits. Therefore, users can access the updated information without having to wait for the main transaction to commit.

  • Autonomous transactions can start other autonomous transactions.

Figure 2-4 illustrates some of the possible sequences autonomous transactions can follow.

Figure 2-4 Possible Sequences of Autonomous Transactions

Possible Sequences of Autonomous Transactions
Description of "Figure 2-4 Possible Sequences of Autonomous Transactions"

Examples of Autonomous Transactions

As these examples illustrate, there are four possible outcomes when you use autonomous and main transactions (see Table 2-7). There is no dependency between the outcome of an autonomous transaction and that of a main transaction.

Table 2-7 Possible Transaction Outcomes

Autonomous Transaction    Main Transaction

Commits                   Commits

Commits                   Rolls back

Rolls back                Commits

Rolls back                Rolls back

Ordering a Product

In the example illustrated by Figure 2-5, a customer orders a product. The customer's information (such as name, address, phone) is committed to a customer information table—even though the sale does not go through.

Figure 2-5 Example: A Buy Order

Example: A Buy Order
Description of "Figure 2-5 Example: A Buy Order"

Withdrawing Money from a Bank Account

In this example, a customer tries to withdraw money from a bank account. In the process, a main transaction invokes one of two autonomous transaction scopes (AT Scope 1 or AT Scope 2).

The possible scenarios for this transaction are:

Scenario 1: Sufficient Funds

There are sufficient funds to cover the withdrawal, so the bank releases the funds (see Figure 2-6).

Figure 2-6 Bank Withdrawal—Sufficient Funds

Sufficient Funds
Description of "Figure 2-6 Bank Withdrawal—Sufficient Funds"

Scenario 2: Insufficient Funds with Overdraft Protection

There are insufficient funds to cover the withdrawal, but the customer has overdraft protection, so the bank releases the funds (see Figure 2-7).

Figure 2-7 Bank Withdrawal—Insufficient Funds with Overdraft Protection

Insufficient Funds with Overdraft Protection
Description of "Figure 2-7 Bank Withdrawal—Insufficient Funds with Overdraft Protection"

Scenario 3: Insufficient Funds Without Overdraft Protection

There are insufficient funds to cover the withdrawal and the customer does not have overdraft protection, so the bank withholds the requested funds (see Figure 2-8).

Figure 2-8 Bank Withdrawal—Insufficient Funds Without Overdraft Protection

Insufficient Funds Without Overdraft Protection
Description of "Figure 2-8 Bank Withdrawal—Insufficient Funds Without Overdraft Protection"

Defining Autonomous Transactions

To define autonomous transactions, use PRAGMA AUTONOMOUS_TRANSACTION, which instructs the PL/SQL compiler to mark the routine as autonomous.

In Example 2-3, the function balance is autonomous.

Example 2-3 Marking a Packaged Subprogram as Autonomous

SQL> -- Create table that package will use:
SQL> DROP TABLE accounts;
DROP TABLE accounts
ERROR at line 1:
ORA-00942: table or view does not exist
SQL> CREATE TABLE accounts (account INTEGER, balance REAL);
Table created.
SQL> -- Create package:
SQL> CREATE OR REPLACE PACKAGE banking AS
  2    FUNCTION balance (acct_id INTEGER) RETURN REAL;
  3    -- Additional functions and packages
  4  END banking;
  5  /
Package created.
SQL> -- Create package body:
SQL> CREATE OR REPLACE PACKAGE BODY banking AS
  2    FUNCTION balance (acct_id INTEGER) RETURN REAL IS
  3      PRAGMA AUTONOMOUS_TRANSACTION;
  4      my_bal  REAL;
  5    BEGIN
  6      SELECT balance INTO my_bal FROM accounts
  7        WHERE account=acct_id;
  8      RETURN my_bal;
  9    END;
 10    -- Additional functions and packages
 11  END banking;
 12  /
Package body created.

See Also:

Oracle Database PL/SQL Language Reference for more information about autonomous transactions

Resuming Execution After Storage Allocation Error

When a long-running transaction is interrupted by an out-of-space error condition, your application can suspend the statement that encountered the problem and resume it after the space problem is corrected. This capability is known as resumable storage allocation. It lets you avoid time-consuming rollbacks. It also lets you avoid splitting the operation into smaller pieces and writing code to track its progress.

What Operations Can Be Resumed After an Error Condition?

Queries, DML operations, and certain DDL operations can all be resumed if they encounter an out-of-space error. The capability applies if the operation is performed directly by a SQL statement, or if it is performed within a stored subprogram, anonymous PL/SQL block, SQL*Loader, or an OCI call such as OCIStmtExecute.

Operations can be resumed after these kinds of error conditions:

  • Out of space errors, such as ORA-01653.

  • Space limit errors, such as ORA-01628.

  • Space quota errors, such as ORA-01536.

Certain storage errors cannot be handled using this technique. In dictionary-managed tablespaces, you cannot resume an operation if you run into the limit for rollback segments, or the maximum number of extents while creating an index or a table. Use locally managed tablespaces and automatic undo management in combination with this feature.

Handling Suspended Storage Allocation

When a statement is suspended, your application does not receive the usual error code. Therefore, it must do any logging or notification by coding a trigger to detect the AFTER SUSPEND event and invoke functions in the DBMS_RESUMABLE package to get information about the problem.

Within the body of the trigger, you can perform any notifications, such as sending e-mail to alert an operator to the space problem.

Alternatively, the DBA can periodically check for suspended statements using the static data dictionary view DBA_RESUMABLE and the dynamic performance view V$SESSION_WAIT.
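For example (a sketch of such a check):

```sql
-- Show statements currently suspended while waiting for space:
SELECT session_id, name, sql_text, error_msg
  FROM DBA_RESUMABLE
 WHERE status = 'SUSPENDED';
```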

When the space condition is corrected (usually by the DBA), the suspended statement automatically resumes execution. If not corrected before the timeout period expires, the statement raises a SERVERERROR exception.

To reduce the chance of out-of-space errors within the trigger itself, declare it as an autonomous transaction, so that it uses a rollback segment in the SYSTEM tablespace. If the trigger encounters a deadlock condition because of locks held by the suspended statement, the trigger terminates and your application receives the original error condition, as if the statement was never suspended. If the trigger encounters an out-of-space condition, both the trigger and the suspended statement are rolled back. You can prevent the rollback through an exception handler in the trigger, and wait for the statement to be resumed.

The trigger in Example 2-4 handles storage errors within the database. For some kinds of errors, it terminates the statement and alerts the DBA by e-mail that this has happened. For other errors, which might be temporary, it specifies that the statement waits for eight hours before resuming, with the expectation that the storage problem will be fixed by then. To execute this example, you must be logged in as SYSDBA.

Example 2-4 Resumable Storage Allocation

-- Create the table used by the trigger body.
-- (If rbs_error does not yet exist, the DROP fails with
-- ORA-00942: table or view does not exist, which can be ignored.)
DROP TABLE rbs_error;

CREATE TABLE rbs_error (
  SQL_TEXT      VARCHAR2(64),
  ERROR_MSG     VARCHAR2(64),
  SUSPEND_TIME  VARCHAR2(64)
);

-- Create the AFTER SUSPEND trigger that performs the
-- resumable storage allocation handling.
CREATE OR REPLACE TRIGGER resumable_default_timeout
AFTER SUSPEND
ON DATABASE
DECLARE
  cur_sid           NUMBER;
  cur_inst          NUMBER;
  err_type          VARCHAR2(64);
  object_owner      VARCHAR2(64);
  object_type       VARCHAR2(64);
  table_space_name  VARCHAR2(64);
  object_name       VARCHAR2(64);
  sub_object_name   VARCHAR2(64);
  msg_body          VARCHAR2(64);
  ret_value         BOOLEAN;
  error_txt         VARCHAR2(64);
  mail_conn         UTL_SMTP.CONNECTION;
BEGIN
  -- Identify the suspended session and instance
  SELECT DISTINCT(SID) INTO cur_sid FROM V$MYSTAT;
  cur_inst := USERENV('instance');
  -- Retrieve details about the space error
  ret_value := DBMS_RESUMABLE.SPACE_ERROR_INFO
                 (err_type,
                  object_owner,
                  object_type,
                  table_space_name,
                  object_name,
                  sub_object_name);
  IF object_type = 'ROLLBACK SEGMENT' THEN
    -- Log the error
    INSERT INTO rbs_error
      (SELECT SQL_TEXT, ERROR_MSG, SUSPEND_TIME
           FROM DBA_RESUMABLE
             WHERE SESSION_ID = cur_sid
               AND INSTANCE_ID = cur_inst);
    SELECT ERROR_MSG INTO error_txt
      FROM DBA_RESUMABLE
          WHERE SESSION_ID = cur_sid
            AND INSTANCE_ID = cur_inst;
    -- Notify an operator by e-mail through the UTL_SMTP package
    msg_body :=
    'Space error occurred: Space limit reached for rollback segment '
    || object_name || ' on ' || to_char(SYSDATE, 'Month dd, YYYY, HH:MIam')
    || '. Error message was: ' || error_txt;
    mail_conn := UTL_SMTP.OPEN_CONNECTION('localhost', 25);
    UTL_SMTP.HELO(mail_conn, 'localhost');
    UTL_SMTP.MAIL(mail_conn, 'sender@localhost');
    UTL_SMTP.RCPT(mail_conn, 'recipient@localhost');
    UTL_SMTP.DATA(mail_conn, msg_body);
    UTL_SMTP.QUIT(mail_conn);
    -- Terminate the suspended statement
    DBMS_RESUMABLE.ABORT(cur_sid);
  ELSE
    -- Wait up to eight hours (28800 seconds) for the problem to be fixed
    DBMS_RESUMABLE.SET_TIMEOUT(28800);
  END IF;
  COMMIT;
END;
/