|Oracle® Database Advanced Application Developer's Guide
11g Release 2 (11.2)
Part Number E17125-03
Edition Attribute of Database Service
Before Release 11.2.0.2, you could not specify your initial session edition when using a database service to connect to Oracle Database. If you wanted to use edition-based redefinition for hot rollover, where some database clients use the pre-upgrade edition while others use the post-upgrade edition, then you had to change the client code.
As of Release 11.2.0.2, you can specify the initial session edition as an attribute of a database service, which makes it easier to ensure that each session uses the desired edition during hot rollover. For more information, see "Your Initial Session Edition".
As of Release 11.2.0.2, each
*_SERVICES static data dictionary view has an
EDITION column that shows the default initial session edition. For more information, see "Displaying Information About Editions, Editioning Views, and Crossedition Triggers".
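For example, the new column can be queried directly. This is a minimal sketch; the EDITION column of DBA_SERVICES is as described above, and the service names returned depend on your configuration:

```sql
-- Show each database service and the default initial session edition
-- that a session connecting through that service receives.
SELECT name, edition
  FROM dba_services
 ORDER BY name;
```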
The Oracle Database features for Release 11.2 are:
Flashback Transaction Foreign Key Dependency Tracking
Flashback Transaction (the
TRANSACTION_BACKOUT procedure) with the
CASCADE option rolls back a transaction and its dependent transactions while the database remains online.
Before Release 11.2, Flashback Transaction did not track foreign key dependencies. Therefore, if you tried to use Flashback Transaction with the
CASCADE option to roll back a transaction that had foreign key dependencies, you could get a foreign key violation error. The workaround was to include the foreign-key-dependent transactions in the list of transactions to roll back.
As of Release 11.2, when using Flashback Transaction with the
CASCADE option, you do not have to include any dependent transactions in the list of transactions to be rolled back.
Foreign key dependency tracking for Flashback Transaction requires that you enable foreign key supplemental logging. For instructions, see "Configuring Your Database for Flashback Transaction". For information about Flashback Transaction, see "Using Flashback Transaction".
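Setup and invocation can be sketched as follows. This is an illustrative sketch, not the definitive procedure: the XID literal is hypothetical, and the supplemental logging statements must be run (as a suitably privileged user) before the transactions to be backed out occur.

```sql
-- One-time setup: enable (foreign key) supplemental logging.
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (FOREIGN KEY) COLUMNS;

-- Back out one transaction and its dependent transactions.
DECLARE
  trans_arr SYS.XID_ARRAY := SYS.XID_ARRAY('030003000D040000');  -- hypothetical XID
BEGIN
  DBMS_FLASHBACK.TRANSACTION_BACKOUT(
    numtxns => 1,
    xids    => trans_arr,
    options => DBMS_FLASHBACK.CASCADE);
END;
/
```

Transaction IDs (XIDs) for the array can be found in views such as FLASHBACK_TRANSACTION_QUERY.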
Fine-Grained Invalidation for Triggers
The Release 11.1 feature "Fine-Grained Invalidation" has been extended to triggers.
Edition-Based Redefinition
Edition-based redefinition enables you to upgrade the database component of an application while it is in use, thereby minimizing or eliminating down time.
To upgrade an application while it is in use, you copy the database objects that comprise the application and redefine the copied objects in isolation. Your changes do not affect users of the application—they continue to run the unchanged application. When you are sure that your changes are correct, you make the upgraded application available to all users.
Using edition-based redefinition means using one or more of its component features. The features you use, and the down time, depend on these factors:
What kind of database objects you redefine
How available the database objects must be to users while you are redefining them
Whether you make the upgraded application available to some users while others continue to use the older version of the application
You always use the edition feature to copy the database objects and redefine the copied objects in isolation.
If you change the structure of one or more tables, you also use editioning views.
If other users must be able to change data in the tables while you are changing their structure, you also use forward crossedition triggers. If the pre- and post-upgrade applications will be in ordinary use at the same time (hot rollover), you also use reverse crossedition triggers. Crossedition triggers are not a permanent part of the application—you drop them when all users are using the post-upgrade application.
For more information, see Chapter 19, "Edition-Based Redefinition."
The body of a forward crossedition trigger must handle data transformation collisions. If your collision-handling strategy depends on why the trigger is running, you can determine the reason with the function
APPLYING_CROSSEDITION_TRIGGER, which is defined in the package DBMS_STANDARD.
For more information, see "Handling Data Transformation Collisions".
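The collision-handling pattern just described might look like this minimal sketch. The contacts table, its columns, and the chosen strategy are all hypothetical; the FORWARD CROSSEDITION clause and the APPLYING_CROSSEDITION_TRIGGER function are as described above:

```sql
CREATE OR REPLACE TRIGGER contacts_fwd_xed
  BEFORE INSERT OR UPDATE ON contacts
  FOR EACH ROW
  FORWARD CROSSEDITION
  DISABLE
BEGIN
  IF APPLYING_CROSSEDITION_TRIGGER THEN
    -- Fired while the bulk transform is applied to existing rows:
    -- keep any value the application already wrote (illustrative strategy).
    :NEW.phone_e164 := NVL(:NEW.phone_e164, '+1' || :NEW.phone);
  ELSE
    -- Fired by ordinary application DML: always transform.
    :NEW.phone_e164 := '+1' || :NEW.phone;
  END IF;
END;
/
```

The trigger is created DISABLE and enabled only after it compiles without errors, as recommended for crossedition triggers.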
IGNORE_ROW_ON_DUPKEY_INDEX Hint
When a statement of the form INSERT INTO target subquery runs, a unique key for some rows to be inserted might collide with existing rows. Suppose that your application must ignore such collisions and insert the rows that do not collide with existing rows.
Before Release 11.2, you had to write a PL/SQL program which, in a block with a
NULL handler for the
DUP_VAL_ON_INDEX exception, selected the source rows and then inserted them, one at a time, into the target.
As of Release 11.2, you do not have to write a PL/SQL program. You can use the
IGNORE_ROW_ON_DUPKEY_INDEX hint in an
INSERT statement, which is easier to write and runs much faster. This hint is especially helpful when implementing crossedition triggers.
For more information, see "Handling Data Transformation Collisions".
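A sketch of the hint in use (the table, columns, and index name are hypothetical):

```sql
-- Rows whose unique key collides with an existing row in target are
-- silently skipped instead of raising ORA-00001.
INSERT /*+ IGNORE_ROW_ON_DUPKEY_INDEX(target, target_pk_ix) */
  INTO target (id, name)
  SELECT id, name FROM source;
```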
CHANGE_DUPKEY_ERROR_INDEX Hint
When an UPDATE statement runs, a unique key might collide with existing rows.
Before Release 11.2, the collision caused error ORA-00001. You could tell that a collision had occurred, but you could not tell where.
As of Release 11.2, you can use the
CHANGE_DUPKEY_ERROR_INDEX hint in an
UPDATE statement, specifying that when a unique key violation occurs for a specified index or set of columns, ORA-38911 is reported instead of ORA-00001. This hint is especially helpful when implementing crossedition triggers.
For more information, see "Handling Data Transformation Collisions".
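A sketch of the hint in use (the table, column, and index name are hypothetical):

```sql
-- A unique-key violation on target_uk_ix now raises ORA-38911 instead of
-- ORA-00001, so an exception handler can tell which index collided.
UPDATE /*+ CHANGE_DUPKEY_ERROR_INDEX(target, target_uk_ix) */ target
   SET id = id + 1000;
```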
DBMS_PARALLEL_EXECUTE Package
The DBMS_PARALLEL_EXECUTE package enables you to incrementally update the data in a large table in parallel, in two high-level steps:
Group sets of rows in the table into smaller chunks.
Apply the desired
UPDATE statement to the chunks in parallel, committing each time you have finished processing a chunk.
This technique improves performance, reduces rollback space consumption, and reduces the number of row locks held. The
DBMS_PARALLEL_EXECUTE package is recommended whenever you are updating a lot of data; for example, when you are applying forward crossedition triggers.
For more information, see "Transforming Data from Pre- to Post-Upgrade Representation".
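The two high-level steps above can be sketched as follows. The task name, table, and UPDATE statement are hypothetical; the chunk SQL must use the :start_id and :end_id placeholders supplied by the package:

```sql
BEGIN
  -- Step 1: group rows of the table into rowid-range chunks of ~10,000 rows.
  DBMS_PARALLEL_EXECUTE.CREATE_TASK('upgrade_task');
  DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_ROWID(
    task_name   => 'upgrade_task',
    table_owner => USER,
    table_name  => 'CONTACTS',
    by_row      => TRUE,
    chunk_size  => 10000);

  -- Step 2: apply the UPDATE to the chunks in parallel,
  -- committing after each chunk.
  DBMS_PARALLEL_EXECUTE.RUN_TASK(
    task_name      => 'upgrade_task',
    sql_stmt       => 'UPDATE contacts
                          SET phone_e164 = ''+1'' || phone
                        WHERE rowid BETWEEN :start_id AND :end_id',
    language_flag  => DBMS_SQL.NATIVE,
    parallel_level => 4);

  DBMS_PARALLEL_EXECUTE.DROP_TASK('upgrade_task');
END;
/
```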
Internet Protocol version 6 (IPv6) Support
Internet Protocol version 6 (IPv6) supports a much larger address space than IPv4 does. An IPv6 address has 128 bits, while an IPv4 address has only 32 bits.
Applications that use network addresses might need small changes, and recompilation, to accommodate IPv6 addresses. For more information, see "Performing Network Operations in PL/SQL Subprograms".
The agent control utility,
agtctl, which starts a multithreaded
extproc agent, now accepts IPv6 addresses. For more information, see "Configuration Parameters for Multithreaded extproc Agent Control".
See Also:Oracle Database Net Services Administrator's Guide for detailed information about IPv6 support in Oracle Database
The application development features for Release 11.1 are:
WAIT Option for Data Definition Language (DDL) Statements
DDL statements require exclusive locks on internal structures. If these locks are unavailable when a DDL statement is issued, the DDL statement fails, though it might have succeeded if it had been issued subseconds later. The
WAIT option of the SQL statement
LOCK TABLE enables a DDL statement to wait for its locks for a specified period before failing.
For more information, see "Choosing a Locking Strategy".
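For example (the table name is hypothetical):

```sql
-- Wait up to 10 seconds for the exclusive lock
-- instead of failing immediately.
LOCK TABLE orders IN EXCLUSIVE MODE WAIT 10;
```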
Binary XML Support for Oracle XML Database
Binary XML is a third way to represent an XML document. Binary XML complements, rather than replaces, the existing object-relational storage and
CLOB storage representations. Binary XML has two significant benefits:
XML operations can be significantly optimized, whether or not an XML schema is available.
The internal representation of XML is the same on disk, in memory, and on the wire.
As with other storage mechanisms, the details of binary XML storage are transparent to you. You continue to use
XMLType and its associated methods and operators.
For more information, see "Representing XML".
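A sketch of choosing binary XML storage at table creation (the table and column names are hypothetical):

```sql
-- Store the XMLType column as binary XML rather than as a CLOB
-- or in object-relational storage.
CREATE TABLE purchase_orders
  (po_id NUMBER, doc XMLTYPE)
  XMLTYPE doc STORE AS BINARY XML;
```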
See Also:Oracle XML DB Developer's Guide
Metadata for SQL Built-In Functions
Metadata for SQL built-in functions is accessible through dynamic performance (
V$) views. Third-party tools can leverage built-in SQL functions without maintaining their metadata in the application layer.
For more information, see "Metadata for SQL Built-In Functions".
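For instance, a tool could discover the argument counts of built-in functions with a query like this. This is a sketch; V$SQLFN_METADATA is the relevant view, and the exact column list can vary by release:

```sql
SELECT name, minargs, maxargs, datatype
  FROM v$sqlfn_metadata
 WHERE name LIKE 'REGEXP%';
```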
Enhancements to Regular Expression Built-in Functions
The regular expression built-in functions
REGEXP_INSTR and REGEXP_SUBSTR have increased functionality. A new regular expression built-in function,
REGEXP_COUNT, returns the number of times a pattern appears in a string. These functions act the same in SQL and PL/SQL.
For more information, see "Oracle Database Implementation of Regular Expressions".
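For example:

```sql
-- Count non-overlapping occurrences of the pattern 'an'.
SELECT REGEXP_COUNT('banana', 'an') FROM DUAL;  -- returns 2
```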
See Also:Oracle Database SQL Language Reference
Invisible Indexes
An invisible index is maintained by Oracle Database for every data manipulation language (DML) statement, but is ignored by the optimizer unless you explicitly set the initialization parameter OPTIMIZER_USE_INVISIBLE_INDEXES to TRUE at the session or system level.
Making an index invisible is an alternative to making it unusable or dropping it. Using invisible indexes, you can:
Test the removal of an index before dropping it
Create invisible indexes temporarily for specialized, nonstandard operations, such as online application upgrades, without affecting the behavior of existing applications
For more information, see "Drop Unused Indexes".
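A sketch of the workflow (the index and table names are hypothetical):

```sql
-- Create an index that the optimizer ignores by default.
CREATE INDEX emp_name_ix ON employees (last_name) INVISIBLE;

-- Test it in this session only, before exposing it to all sessions.
ALTER SESSION SET OPTIMIZER_USE_INVISIBLE_INDEXES = TRUE;

-- Make it visible everywhere (or drop it if the test fails).
ALTER INDEX emp_name_ix VISIBLE;
```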
PL/SQL Function Result Cache
Before Release 11.1, if you wanted your PL/SQL application to cache the results of a function, you had to design and code the cache and cache-management subprograms. If multiple sessions ran your application, each session had to have its own copy of the cache and cache-management subprograms. Sometimes each session had to perform the same expensive computations.
As of Release 11.1, PL/SQL provides a function result cache. Because the function result cache is stored in the system global area (SGA), it is available to any session that runs your application.
For more information, see "PL/SQL Function Result Cache".
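A minimal sketch of a result-cached function. The departments table is from the sample HR schema; the RELIES_ON clause names the table whose changes invalidate cached results:

```sql
-- Results are cached in the SGA and shared across sessions.
CREATE OR REPLACE FUNCTION get_dept_name (p_dept_id NUMBER)
  RETURN VARCHAR2
  RESULT_CACHE RELIES_ON (departments)
IS
  v_name departments.department_name%TYPE;
BEGIN
  SELECT department_name INTO v_name
    FROM departments
   WHERE department_id = p_dept_id;
  RETURN v_name;
END;
/
```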
Sequences in PL/SQL Expressions
The sequence pseudocolumns CURRVAL and NEXTVAL make writing PL/SQL source code easier for you and improve run-time performance and scalability. You can use CURRVAL and NEXTVAL wherever you can use a NUMBER expression. See Example 6-6.
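As a sketch (order_seq is a hypothetical sequence):

```sql
DECLARE
  new_id NUMBER;
BEGIN
  -- Before Release 11.1 this required:
  --   SELECT order_seq.NEXTVAL INTO new_id FROM DUAL;
  new_id := order_seq.NEXTVAL;
END;
/
```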
PL/Scope
PL/Scope is a compiler-driven tool that collects and organizes data about user-defined identifiers from PL/SQL source code. Because PL/Scope is a compiler-driven tool, you use it through interactive development environments (such as SQL Developer and JDeveloper), rather than directly.
PL/Scope enables the development of powerful and effective PL/SQL source code browsers that increase PL/SQL developer productivity by minimizing time spent browsing and understanding source code.
For a detailed description of PL/Scope, see Chapter 7, "Using PL/Scope."
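Although IDEs are the usual consumers, the collected identifier data is also visible in static data dictionary views. A sketch (the procedure name is hypothetical):

```sql
-- Collect identifier data while compiling.
ALTER SESSION SET PLSCOPE_SETTINGS = 'IDENTIFIERS:ALL';
ALTER PROCEDURE my_proc COMPILE;

-- Browse the collected data.
SELECT name, type, usage, line
  FROM user_identifiers
 WHERE object_name = 'MY_PROC'
 ORDER BY line;
```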
PL/SQL Hierarchical Profiler
Nonhierarchical (flat) profilers record the time that a program spends within each subprogram—the function time or self time of each subprogram. Function time is helpful, but often inadequate. For example, it is helpful to know that a program spends 40% of its time in the subprogram
INSERT_ORDER, but it is more helpful to know which subprograms call
INSERT_ORDER often and the total time the program spends under
INSERT_ORDER (including its descendent subprograms). Hierarchical profilers provide such information.
The PL/SQL hierarchical profiler does this:
Reports the dynamic execution profile of your PL/SQL program, organized by subprogram calls
Accounts for SQL and PL/SQL execution times separately
Requires no special source or compile-time preparation
Stores results in database tables (hierarchical profiler tables) for custom report generation by integrated development environment (IDE) tools (such as SQL Developer and third-party tools)
To generate simple HTML reports from raw profiler output, you can use the
plshprof command-line utility.
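Collecting the raw profiler output can be sketched as follows. PLSHPROF_DIR is a hypothetical directory object, and my_app_entry_point is a hypothetical program to profile:

```sql
BEGIN
  DBMS_HPROF.START_PROFILING('PLSHPROF_DIR', 'myrun.trc');
  my_app_entry_point;
  DBMS_HPROF.STOP_PROFILING;
END;
/
```

From the operating system prompt, a command such as `plshprof -output myrun myrun.trc` then turns the raw trace into the HTML reports described below.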
Each subprogram-level summary in the dynamic execution profile includes information such as:
Number of calls to the subprogram
Time spent in the subprogram itself (function time or self time)
Time spent in the subprogram itself and in its descendent subprograms (subtree time)
Detailed parent-children information, for example:
All callers of a given subprogram (parents)
All subprograms that a given subprogram called (children)
How much time was spent in subprogram x when called from y
How many calls to subprogram x were from y
You can browse the generated HTML reports in any browser. The browser's navigational capabilities, combined with well chosen links, provide a powerful way to analyze performance of large applications, improve application performance, and lower development costs.
For a detailed description of PL/SQL hierarchical profiler, see Chapter 8, "Using the PL/SQL Hierarchical Profiler."
Query Result Change Notification
Before Release 11.1, Continuous Query Notification (CQN) published only object change notifications, which result from DML or DDL changes to the objects associated with the registered queries.
As of Release 11.1, CQN can also publish query result change notifications, which result from DML or DDL changes to the result set associated with the registered queries. New static data dictionary views enable you to see which queries are registered for result-set-change notifications (see "Querying CQN Registrations").
For more information, see Chapter 11, "Using Continuous Query Notification (CQN)."
Flashback Transaction
The DBMS_FLASHBACK.TRANSACTION_BACKOUT procedure rolls back a transaction and its dependent transactions while the database remains online. This recovery operation uses undo data to create and run the compensating transactions that return the affected data to its original state.
For more information, see "Using Flashback Transaction".
Flashback Data Archive (Oracle Total Recall)
A Flashback Data Archive provides the ability to store and track transactional changes to a record over its lifetime. It is no longer necessary to build this intelligence into the application. A Flashback Data Archive is useful for compliance with record stage policies and audit reports.
For more information, see "Using Flashback Data Archive (Oracle Total Recall)".
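A sketch of setting up an archive (the archive, tablespace, and table names are hypothetical):

```sql
-- Create an archive that keeps five years of history,
-- then start tracking changes to a table in it.
CREATE FLASHBACK ARCHIVE hr_hist
  TABLESPACE hist_ts
  RETENTION 5 YEAR;

ALTER TABLE employees FLASHBACK ARCHIVE hr_hist;
```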
XA API Available Within PL/SQL
The XA interface functionality that supports transactions involving multiple resource managers, such as databases and queues, is now available within PL/SQL. You can use PL/SQL to switch and share transactions across SQL*Plus sessions and across processes.
For more information, see "Using the DBMS_XA Package".
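A minimal sketch of starting and suspending a global transaction from PL/SQL; the global transaction ID (123) and the table are hypothetical:

```sql
DECLARE
  rc PLS_INTEGER;
BEGIN
  rc := DBMS_XA.XA_START(DBMS_XA_XID(123), DBMS_XA.TMNOFLAGS);
  UPDATE accounts SET balance = balance - 100 WHERE id = 1;
  -- Detach; another session can later resume global transaction 123.
  rc := DBMS_XA.XA_END(DBMS_XA_XID(123), DBMS_XA.TMSUSPEND);
END;
/
```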
Support for XA/JTA in Oracle Real Application Clusters (Oracle RAC) Environment
An XA transaction now spans Oracle RAC instances by default, enabling any application that uses XA to take full advantage of the Oracle RAC environment, enhancing the availability and scalability of the application.
For more information, see "Using Oracle XA with Oracle Real Application Clusters (Oracle RAC)".
Identity Code Package
The Identity Code Package provides tools to store, retrieve, encode, decode, and translate between various product or identity codes, including Electronic Product Code (EPC), in Oracle Database. The Identity Code Package provides new data types, metadata tables and views, and PL/SQL packages for storing EPC standard RFID tags or new types of RFID tags in a user table.
The Identity Code Package enables Oracle Database to recognize EPC coding schemes, to support efficient storage and component-level retrieval of EPC data, and to meet the EPCglobal Tag Data Translation 1.0 (TDT) standard that defines how to decode, encode, and translate between various EPC RFID tag representations.
The Identity Code Package also provides an extensible framework that enables you to use pre-existing coding schemes with applications that are not included in the EPC standard and adapt Oracle Database both to these older systems and to evolving identity codes that might become part of a future EPC standard.
The Identity Code Package also lets you create your own identity codes by first registering the new encoding category, registering the new encoding type, and then registering the new components associated with each new encoding type.
For more information, see Chapter 17, "Using the Identity Code Package."
Enhanced Online Index Creation and Rebuilding
Online index creation and rebuilding no longer requires a DML-blocking lock.
Before Release 11.1, online index creation and rebuilding required a very short-term DML-blocking lock at the end of the rebuilding. The DML-blocking lock could cause a spike in the number of waiting DML operations, and therefore a short drop and spike of system usage. This system usage anomaly could trigger operating system alarm levels.
Embedded PL/SQL Gateway
The PL/SQL gateway enables a user-written PL/SQL subprogram to be invoked in response to a URL with parameters derived from an HTTP request.
mod_plsql is a form of the gateway that exists as a plug-in to the Oracle HTTP Server. Now the PL/SQL gateway is also embedded in the database itself. The embedded PL/SQL gateway uses the internal Oracle XML Database Listener and does not depend on the Oracle HTTP Server. You configure the embedded version of the gateway with the DBMS_EPG package.
For more information, see "Using Embedded PL/SQL Gateway".
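A sketch of configuring a database access descriptor (DAD) with DBMS_EPG; the DAD name, virtual path, and database user are hypothetical:

```sql
-- Map a URL path to a database user whose PL/SQL
-- subprograms will serve the HTTP requests.
BEGIN
  DBMS_EPG.CREATE_DAD(
    dad_name => 'my_dad',
    path     => '/my_app/*');
  DBMS_EPG.SET_DAD_ATTRIBUTE(
    dad_name   => 'my_dad',
    attr_name  => 'database-username',
    attr_value => 'HR');
END;
/
```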
Oracle Database Spawns Multithreaded extproc Agent Directly by Default
When an application calls an external C procedure, either Oracle Database or Oracle Listener starts the external procedure agent, extproc.
Before Release 11.1, Oracle Listener spawned the multithreaded
extproc agent, and you defined environment variables for
extproc in the file listener.ora.
As of Release 11.1, by default, Oracle Database spawns
extproc directly, eliminating the risk that Oracle Listener might spawn
extproc unexpectedly. This default configuration is recommended for maximum security. If you use it, you define environment variables for
extproc in the file extproc.ora.
For more information, including situations in which you cannot use the default configuration, see "Loading External Procedures".
Fine-Grained Invalidation
Before Release 11.1, a DDL statement that changed a referenced object invalidated all of its dependents.
As of Release 11.1, a DDL statement that changes a referenced object invalidates only the dependents for which either of these statements is true:
The dependent relies on the attribute of the referenced object that the DDL statement changed.
The compiled metadata of the dependent is no longer correct for the changed referenced object.
For example, if view
v selects columns
c1 and
c2 from table
t, a DDL statement that changes only column
c3 of
t does not invalidate
v.
For more information, see "Invalidation of Dependent Objects".
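The example above can be sketched as a quick experiment (the column names follow the example; the VARCHAR2 length is arbitrary):

```sql
CREATE TABLE t (c1 NUMBER, c2 NUMBER, c3 NUMBER);
CREATE VIEW v AS SELECT c1, c2 FROM t;

-- v does not reference c3, so as of Release 11.1
-- this change leaves v valid.
ALTER TABLE t MODIFY (c3 VARCHAR2(20));

SELECT status FROM user_objects WHERE object_name = 'V';
```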