Oracle® Database Development Guide
12c Release 1 (12.1)

E17620-16

2 Performance and Scalability

This chapter explains some techniques for designing performance and scalability into the database and database applications.

2.1 Performance Strategies

2.1.1 Designing Your Data Model to Perform

This topic briefly describes the important concepts of data modeling. Entire books about this subject are available; refer to them for details and guidance.

Design your schema for optimal performance of the most important and frequent queries, following this basic procedure:

  1. Analyze Requirements

  2. Create the Database Design

  3. Implement the Database Application

  4. Maintain the Database and Database Application

2.1.1.1 Analyze Requirements

Analyze requirements by interviewing people to learn about the business, the nature of the application, how information is used and by whom, and the expectations of end users. This includes collecting business documents, such as personnel forms, invoice forms, order forms, and so on to learn how information is used in the business. This step involves analyzing the data gathered and converting it into a logical data model.

Converting the gathered data into a logical model involves a number of steps: data collection, normalization of the collected data, entity-relationship modeling, transaction analysis, and finally data modeling.

Data analysis is considered a bottom-up process. It is followed by a top-down analysis known as functional analysis. The end result of functional analysis is a data flow diagram that identifies the main process blocks, how data flows into and out of each block, and where the data originates and ends up.

After completing the data analysis and functional analysis, it is time to create the database design, described in the next section.

2.1.1.2 Create the Database Design

Creating the database design consists of two components, creating the logical design and creating the physical design.

To create the logical design, translate the requirements into data items (columns) and organize these items to better understand the logical relationships among all the objects in the database. Related attributes (columns) are grouped together to form entities (tables), so you can map relationships among entities and attributes, determine primary and foreign key attributes for each table, normalize tables, analyze transactions by tracing transaction paths through the logical model, prototype transactions in SQL, and develop a volume table. The volume table indicates the size of your database.

By representing these entities graphically, you eliminate redundancies and produce the most useful and efficient layouts of the tables that represent the data to the user.

When you model transaction activity, you determine which tables are accessed by which users in which sequences, which queries read data, and which queries write data. This lets you determine whether the application mostly reads or mostly writes data, which affects the physical design. It is therefore important to understand what the typical transactions are, what they do, and which ones matter most to users.

Effective logical design considers the requirements of the different users who must own, access, and update data. For this reason, it is critical that all data at the logical design level is represented explicitly as columns in tables, and that these columns are organized so that users with different needs can easily use them. This process may involve denormalizing some tables to improve the efficiency of data access.

Creating the physical design is where you implement an effective way of storing and retrieving data for these database objects in files on a storage medium. Physical design consists of converting the information gathered during the logical design phase into a description of the physical database. This physical description optimizes the placement of the physical database structures that represent the logical design to attain the best performance. Because the physical database design is the result of integrating all the information about the tables and columns, the relationships between and among tables, and all known transaction activity, the physical description is created with knowledge of how the database stores data in tables and creates other structures, such as indexes, on disk. With this understanding of database features, the designer creates scripts using SQL data definition language (DDL) to create the schema definition, define the database objects in their required sequence, and define the storage requirements to specification. Creating the schema definition represents the beginning of the application implementation.

See Also:

Oracle Database Performance Tuning Guide for more information about designing and developing for performance

2.1.1.3 Implement the Database Application

The implementation stage means getting the application set up and running in a preproduction environment. This stage involves loading the data into the physical schema design, selecting the programming language in which to develop the application, developing the user interface, creating and testing the queries, and so on. Implement the application in a test environment that matches the production environment as closely as possible. Never deploy an untested application directly to production: you do not know whether the application works until you have thoroughly tested it in a test environment.

After the application development is completed in the test framework, you want to ensure the application is running to specification, all components are exercised, the application is fully operational end-to-end, and the database features it uses are configured properly. Develop a test bed in which to fully exercise and test the application's functionality.

After you confirm from testing that the application is fully implemented and operational, perform workload benchmarks on it. Running these benchmarks ensures that the application performs as you expect under various workloads with simulated real-time operations. Use a specific set of metrics that you want the system to meet, and then test the system end-to-end. For example, you want to ensure that response times for key queries are as fast as you planned, and that response times for all other queries are as expected and acceptable. Next, you may want to tune queries so that they perform optimally end-to-end, first under no workload, then with increasing workloads.

Testing must ensure that the application works well throughout a range of workloads where workloads match a typical day's operation, including peak activity. Under simulations as you add data to the database and as you add users, you want to ensure that your application scales well and response times for key queries do not degrade. You can use a number of tools to measure application and system performance, troubleshoot problems, and further test and tune the system as described in Section 2.2.1. Developing specific benchmarks (see Section 2.1.3) and testing the application under a wide range of simulated workloads ensures that the performance goals for the application are met.

If you designed the system to perform from the beginning, then it will. If your benchmark tests indicate otherwise (for example, certain queries are much slower than you expected), then you must either find time for redesign work before you go to production or plan on a version 2 of the application to get it right the second time. To avoid this pitfall, design the system to perform from the beginning, in both the logical and physical design phases. As mentioned previously, for queries to run fast, access to tables must be planned to be as fast as possible. This requires both a keen understanding of the database features that make fast access physically possible and a fundamental understanding of the logical design principles that form a solid foundation. Working with an experienced team is vitally important.

If the application, as implemented in the test framework or in its preproduction environment, meets or exceeds all metric goals for the benchmark tests, then the application is ready for the production environment. If you designed the application from the beginning to have efficient queries and to perform under typical loads you expect each day, then your testing only confirms this fact.

The next step is to put the application into production and maintain the application. This is described in the next section.

See Also:

Oracle Database Performance Tuning Guide for more information about deploying new applications

2.1.1.4 Maintain the Database and Database Application

Maintaining the database, the database application, and the operating system are ongoing tasks for the database administrator, the application developer, and the system administrator. This requires resources; how many depends on the importance of the database and the database application to the business, their potential for growth over time, the need to accommodate more users, and so on.

You must monitor the system periodically and schedule specific periods for maintenance, keeping your user community informed of upcoming maintenance periods. Maintenance may require periodic software upgrades, so downtime must be planned; schedule it during periods when there is little to no database activity.

Application maintenance also includes fixing software bugs, rolling out patches, and releasing upgrades for the application. Maintenance work must be tested beforehand in a nonproduction test environment. This is critically important: it ensures that maintenance tasks are sound in practice and that potential problems are caught and resolved before they reach your production systems. Nothing serves as a finer teaching point than an unplanned outage, a worst-case scenario in which a faulty patch installation or a simple upgrade issue that limited testing or inadequate planning failed to discover brings down the production system.

See Also:

Oracle Database Performance Tuning Guide for more information about designing and developing for performance

2.1.2 Setting Performance Goals - Metrics

Start your application development project with a set of metrics in mind that describe how your application is to perform and scale. These metrics include:

  • The number of users the application is expected to support

  • The number of transactions per second at peak load times

  • The expected query response times at peak load times

  • The volume or total number of records expected for each table per unit of time, such as one day, one month, or one year

When you have a clear set of metrics with which to test, you can set up simulations on your developed application to measure its performance to see how well your application performs against these benchmarks. If your application meets or exceeds your carefully chosen set of metrics, then you have met one set of goals. If not, then you have work to do.

To determine how your application is performing in the production environment over time, record performance benchmark results on a regular basis and store this information in a database table. Compare these records as they are gathered, looking for issues with each comparison. Compare the records for your key transactions and any other transactions in which you cannot afford performance degradation. Using this method, you isolate issues as they arise, rather than after users start complaining about performance (at which time you do not know when the problem started).
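
For example, you might record each run's results in a simple tracking table and compare runs with a query. The table and column names in this sketch are hypothetical; adapt them to your environment:

CREATE TABLE benchmark_results (
  run_date      DATE DEFAULT SYSDATE,  -- when the benchmark ran
  test_name     VARCHAR2(64),          -- key transaction being measured
  elapsed_secs  NUMBER                 -- response time observed
);

SELECT run_date, elapsed_secs
  FROM benchmark_results
 WHERE test_name = 'order_entry_insert'
 ORDER BY run_date;

Reviewing the ordered history makes gradual degradation of a key transaction visible long before users notice it.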

2.1.3 Benchmarking Your Application

Benchmarks are devised tests that measure aspects of how the application is performing. Benchmarks are usually performed on a single user system in isolation to keep interference from other factors to a minimum. Results from such benchmarks can provide a performance baseline for the application.

A benchmark must model the application in the real world, not represent an imagined real world that might provide results with little meaning. Therefore, you must test the application in the environment in which you expect it to run. Benchmarks must provide information that either helps you validate the application's performance or raises concerns that help you resolve issues before the application goes into production. When the application meets your metrics based on benchmark tests, you know that it is ready to go into production.

You can create a variety of small benchmarks that measure query performance for the most important queries, test and compare different solutions to a performance issue, or help you resolve design issues that may affect performance. Developing small benchmarks for these types of challenges helps you design and build a quality application.

You must develop very large, more complex benchmarks to measure the application's performance during peak user loads, during peak transaction loads, or both. You must budget and plan for such benchmarks, especially if the number of application users will increase over time, or the application will experience high transaction rates. You want to know before the application goes into production that you have designed the application to meet these requirements. Developing large benchmarks for this purpose is especially critical for the success of these types of applications.

Simulations using both small and large benchmarks in a test environment provide vital information that can either validate that the application is well designed and can handle heavy user loads, peak transaction loads, or both, or, if issues arise, send you back to the drawing board to figure out what went wrong. In either case, such benchmarks provide indispensable feedback about how the application is performing. Putting these benchmarks in place in a test environment during the development process gives you ample opportunity to test, retest, and make improvements as application development progresses.

See Also:

Oracle Database Performance Tuning Guide for more information about benchmarking applications

2.2 Tools for Performance

2.2.1 About Tools for Performance

The following sections describe several tools you can use to gain important information about your application's performance as it runs: the DBMS_APPLICATION_INFO package, the SQL Trace facility (SQL_TRACE), and the EXPLAIN PLAN statement.

See Also:

Oracle Database 2 Day + Performance Tuning Guide for more information about tools for tuning the database

2.2.1.1 Using DBMS_APPLICATION_INFO

Use the DBMS_APPLICATION_INFO package to track the performance of various modules, track resource use by module, and debug. Registering the application records its name and actions in the V$SESSION and V$SQLAREA views, which allows system administrators and performance-tuning specialists to track performance by module. The package provides subprograms to set the following columns in the V$SESSION view for the current session: MODULE (application or package name), ACTION (procedure in the package or transaction name), and CLIENT_INFO (additional information about the client application, such as initial bind variable values). Subprograms are also provided to read back the values of these same columns for the current session.
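
For example, a minimal registration sketch might look like the following, where the module and action names are hypothetical:

BEGIN
  DBMS_APPLICATION_INFO.SET_MODULE(
    module_name => 'order_entry',     -- hypothetical application module
    action_name => 'insert_order');   -- current action within the module
END;
/

-- A DBA can then track the module's sessions:
SELECT sid, module, action FROM V$SESSION WHERE module = 'order_entry';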

Use the DBMS_APPLICATION_INFO package to track the ongoing progress of long-running commands that take more than a few seconds to produce a result, such as creating an index or updating many rows. You monitor such commands by using the package to set values in the V$SESSION_LONGOPS view. This view includes information such as when the command started, how far it has progressed, and its estimated time to completion. The SET_SESSION_LONGOPS procedure lets you set values for a variety of parameters that describe the command's progress.
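
For example, a long-running batch job might report its progress as follows; the loop body and unit names are placeholders:

DECLARE
  rindex     BINARY_INTEGER;
  slno       BINARY_INTEGER;
  totalwork  NUMBER := 10;
BEGIN
  rindex := DBMS_APPLICATION_INFO.SET_SESSION_LONGOPS_NOHINT;
  FOR i IN 1 .. totalwork LOOP
    -- ... process batch i here ...
    DBMS_APPLICATION_INFO.SET_SESSION_LONGOPS(
      rindex    => rindex,
      slno      => slno,
      op_name   => 'batch load',
      sofar     => i,
      totalwork => totalwork,
      units     => 'batches');
  END LOOP;
END;
/

While the block runs, V$SESSION_LONGOPS shows SOFAR, TOTALWORK, and the estimated time remaining for the operation.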

See Also:

Oracle Database PL/SQL Packages and Types Reference for more information about the DBMS_APPLICATION_INFO package

2.2.1.2 Using SQL Trace Facility (SQL_TRACE) in Your Application

Use the SQL Trace facility, SQL_TRACE, to trace all SQL statements and PL/SQL blocks your application executes. By turning on the SQL Trace facility for a particular module or SQL statement and turning it off again following its use, you can selectively get trace information only for that module. Use the SQL Trace facility with TKPROF and with the EXPLAIN PLAN for best results. See Section 2.2.1.3 for more information about the EXPLAIN PLAN.

The SQL Trace facility provides performance information on individual SQL statements. TKPROF formats the contents of the trace file and places the output into a readable output file. The EXPLAIN PLAN lets you view execution plans chosen by the optimizer and shows the query plan for the specified SQL query if it were executed in the current session.

Plan to use the SQL Trace facility while you are designing your application, so that you know what you want to trace and how the application performs before it goes into production. If you wait until performance problems develop in the production environment, the way the application was developed may make the SQL Trace facility virtually impossible to use effectively.
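
For example, one way to trace a single module selectively is to bracket it with session-level settings; the trace file identifier here is an arbitrary tag to make the file easy to find:

ALTER SESSION SET TRACEFILE_IDENTIFIER = 'order_entry_trace';
ALTER SESSION SET SQL_TRACE = TRUE;
-- run the statements of interest here
ALTER SESSION SET SQL_TRACE = FALSE;

You can then format the resulting trace file with TKPROF from the operating system prompt, for example: tkprof mytrace.trc myreport.prf SYS=NO (the file names are placeholders).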

See Also:

Oracle Database SQL Tuning Guide for more information about the SQL Trace facility

2.2.1.3 Using the EXPLAIN PLAN

When you run a SQL statement, the query optimizer chooses a specific execution plan to execute the SQL based on the lowest cost plan of execution from among a set of potential plans it generates and compares. The plan cost is an estimated value proportional to the expected resource use needed to execute the statement for that plan. The cost is based on statistics such as the data distribution and storage characteristics of the tables, indexes, and partitions accessed by the SQL statement. The optimizer calculates the cost of access paths and join orders based on the estimated computer resources that include I/O, CPU, and memory.

When you use the EXPLAIN PLAN FOR clause before a SQL statement, such as for SELECT, UPDATE, INSERT, and DELETE statements, then run the query, the EXPLAIN PLAN creates an execution plan and stores the results in an output plan table. When you query the resulting plan table, you can examine the execution plan chosen by the optimizer. This information helps you to understand the optimizer decisions in executing the SQL statement by organizing its execution plan results into a row source tree. The row source tree or core of the execution plan is the combination of steps Oracle Database uses to execute the SQL statement. It shows the ordering of the tables referenced by the statement, the access plan for each table, the join method for tables affected by join operations, and the data operations, such as filter, sort, or aggregation. Also, the plan table contains optimization information, such as the cost and cardinality of each operation, partitioning information, such as the set of accessed partitions, and parallel execution information, such as the distribution method of join inputs. This information provides valuable insight into how and why the execution plan chosen is the optimum method of execution.
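
For example, you can explain a statement and then display the stored plan with the DBMS_XPLAN package; the table name and predicate here are hypothetical:

EXPLAIN PLAN FOR
  SELECT * FROM orders WHERE customer_id = :cust;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

The DISPLAY function formats the most recent plan in the plan table, including the row source tree, cost, and cardinality information described above.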

You can use optimizer hints in SQL statements to alter execution plans when you want to test the performance of a specific access path, because you know something about the data that the optimizer does not. If the results of the test indicate the hint is useful, then you can keep the hint in your code to instruct the optimizer to use the better execution plan. This benefit comes with a disadvantage: you must manage the hint and continue to test it to ensure that, over time, it does not become obsolete or cause negative consequences.
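
For example, if you believe a particular index serves a query better than the optimizer's default choice, you might test it with an index hint; the table alias and index name below are hypothetical:

SELECT /*+ INDEX(o orders_cust_ix) */ *
  FROM orders o
 WHERE o.customer_id = :cust;

Compare the hinted and unhinted plans and their actual resource consumption before deciding to keep the hint.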

Because EXPLAIN PLAN shows only how Oracle Database would run the SQL statement at the time the statement was explained, the plan can differ from the actual execution plan because of differences between the explain-plan environment and the execution environment. For this reason, consider using SQL plan management to build a SQL plan baseline, which is a set of accepted plans for a SQL statement.

EXPLAIN PLAN is most useful for determining an access plan and then testing the plan to see whether it is indeed optimal, while also considering the statement's actual resource consumption. If the plan looks optimal, consider using SQL plan management.

2.3 Monitoring Database Performance

2.3.1 About Monitoring Database Performance

A number of tools or advisors are available to monitor database performance. These tools include:

Automatic Database Diagnostic Monitor (ADDM)

Automatic Database Diagnostic Monitor (ADDM), which is part of the Oracle Diagnostic Pack, is an advisor that examines and analyzes data captured in Automatic Workload Repository (AWR) (also part of the Oracle Diagnostic Pack). ADDM determines where possible database performance problems exist, makes a list of recommendations for correcting them, and also determines where performance problems do not exist.

ADDM analysis is performed after each AWR snapshot, which by default is every hour. Results are stored in the database by default for 8 days. However, you can change the default values for both the snapshot interval and retention period. For example, to better monitor your particular application, you can set a retention period of one month, or set it to the length of your business cycle, or set it to be indefinite.
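
For example, you can change both settings with the DBMS_WORKLOAD_REPOSITORY package; both values are given in minutes:

BEGIN
  DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(
    retention => 43200,   -- keep snapshots for 30 days
    interval  => 30);     -- take a snapshot every 30 minutes
END;
/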

In Oracle Enterprise Manager, results are displayed as ADDM findings on the Database home page. The snapshot data taken by AWR is presented as snapshot statistics, much like the Statspack data that DBAs have long analyzed for performance analysis. Viewing such data confirms ADDM results, but without the analysis and list of recommendations that ADDM provides.

See Also:

Oracle Database 2 Day + Performance Tuning Guide for more information about configuring ADDM, reviewing ADDM analysis, interpreting ADDM findings, implementing ADDM recommendations, and viewing snapshot statistics using Enterprise Manager

Monitoring Real-Time Database Performance

Using Oracle Enterprise Manager, you can monitor real-time database performance from the Performance page to identify and respond to performance problems. From the Performance page, you can drill down to the appropriate pages to identify and resolve performance issues without having to wait for the next ADDM analysis.

See Also:

Oracle Database 2 Day + Performance Tuning Guide for more information about monitoring real-time database performance

Responding to Performance-Related Alerts

The Database home page in Oracle Enterprise Manager displays performance-related alerts generated by the database. You can improve database performance by resolving the problems these alerts indicate. By default, Oracle Database enables alerts for tablespace usage, snapshot too old, recovery area low on free space, and resumable session suspended. You can view metrics and thresholds, set metric thresholds, respond to alerts, clear alerts, and set up direct alert email notification. This built-in alert infrastructure lets you be notified of these performance-related conditions automatically.

Using Other Performance Advisors

Oracle Database provides a set of advisors, powerful tools that help you manage and tune your database. Besides ADDM, mentioned previously, there are SQL advisors and memory advisors that you can use to help improve database performance.

SQL advisors include SQL Tuning Advisor and SQL Access Advisor, both of which are part of the Oracle Database Tuning Pack.

SQL Tuning Advisor enables you to submit one or more SQL statements as input and receive output in the form of specific advice or recommendations for how to tune statements, along with a rationale for each recommendation and its expected benefit. A recommendation relates to collection of statistics on objects, creation of new indexes, restructuring of the SQL statements, or creation of SQL profiles.
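
For example, a tuning task for a single cached statement can be created, executed, and reported with the DBMS_SQLTUNE package; the SQL_ID and task name below are hypothetical:

DECLARE
  tname VARCHAR2(30);
BEGIN
  tname := DBMS_SQLTUNE.CREATE_TUNING_TASK(
             sql_id    => 'gwp663cqh5qbf',   -- hypothetical SQL_ID from V$SQL
             task_name => 'tune_order_qry');
  DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => tname);
END;
/

SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('tune_order_qry') FROM DUAL;

The report lists the advisor's recommendations, each with its rationale and expected benefit.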

SQL Access Advisor enables you to optimize data access paths of SQL queries by recommending the proper set of materialized views and view logs, indexes, and partitions for a given SQL workload.

Memory advisors include the Memory Advisor, SGA Advisor, Shared Pool Advisor, Buffer Cache Advisor, and PGA Advisor. The Memory Advisors provide graphical analyses of total memory target settings, SGA and PGA target settings, or SGA component size settings. You use these analyses to tune database performance and for what-if planning.

2.4 Using Instrumentation

2.4.1 About Instrumentation

Instrumentation refers to adding debug code throughout your application, embedding the ability to generate detailed trace information when you need it. When someone from support tells you to turn on trace events and you do so, the running application generates trace files containing diagnostic information. This diagnostic information lets you, or whoever supports the application, debug problems; the trace files give you a good idea of what the problem is and where it lies. This is especially helpful for n-tier applications, where you need diagnostic information to help you isolate and identify which tier is causing the performance issue.
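
As a sketch of the idea, the following hypothetical package gates trace output behind a flag that support can switch on when diagnostic information is needed:

CREATE OR REPLACE PACKAGE debug_pkg AS
  trace_enabled BOOLEAN := FALSE;  -- set to TRUE to turn tracing on
  PROCEDURE trace (msg VARCHAR2);
END debug_pkg;
/
CREATE OR REPLACE PACKAGE BODY debug_pkg AS
  PROCEDURE trace (msg VARCHAR2) IS
  BEGIN
    IF trace_enabled THEN
      DBMS_OUTPUT.PUT_LINE(TO_CHAR(SYSTIMESTAMP) || ' ' || msg);
    END IF;
  END trace;
END debug_pkg;
/

A production-quality version might write to a log table or file rather than DBMS_OUTPUT, but the principle is the same: the trace calls stay in the code permanently and cost almost nothing until the flag is enabled.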

2.5 Testing for Performance

2.5.1 About Testing for Performance

Testing an application for performance involves working from a set of guidelines to ensure you have covered the breadth of testing that this process requires. These guidelines include the following information:

  • Use the Automatic Database Diagnostic Monitor (ADDM) and SQL Tuning Advisor for design validation

    ADDM determines where possible database performance problems exist and provides a list of recommendations for correcting them. For example, ADDM automatically identifies high-load SQL statements, then you can use the SQL Tuning Advisor to analyze individual SQL statements to suggest indexes, SQL profiles, restructured SQL, and statistics that improve the performance of the SQL statements.

    See Automatic Database Diagnostic Monitor (ADDM) for more information about ADDM. See Using Other Performance Advisors for more information about the SQL Tuning Advisor.

    See Also:

    Oracle Database 2 Day + Performance Tuning Guide for more information about tuning SQL statements using the SQL Tuning Advisor
  • Test with realistic data volumes and distributions

    The test database must contain data representative of the production system. This means tables must be fully populated and represent the data volume and cardinality between tables found in the production system. Also, all the production indexes must be built and the schema statistics must be populated correctly.

  • Use the correct optimizer mode

    Perform all testing with the optimizer mode that you plan to use in production. Because all Oracle Database research and development effort is focused on the query optimizer, the use of the query optimizer is recommended.

  • Test a single user performance

    Start testing with a single user on an idle or lightly-used database for acceptable performance. If a single user cannot achieve acceptable performance under ideal conditions, then multiple users cannot achieve acceptable performance under real conditions.

  • Obtain and document plans for all SQL statements

    Obtain an execution plan for each SQL statement. Use this process to verify that the optimizer is obtaining an optimal execution plan, and that the relative cost of the SQL statement is understood in terms of CPU time and physical I/Os. This process assists in identifying the heavy use transactions that require the most tuning and performance work in the future.

  • Attempt multiuser testing

    This process is difficult to perform accurately because user workload and profiles might not be fully quantified. However, transactions performing DML statements must be tested to ensure that there are no locking conflicts or serialization problems.

  • Test with the correct hardware configuration

    Test with a configuration as close to the production system as possible. Using a realistic system is particularly important for network latencies, I/O subsystem bandwidth, and processor type and speed. Failing to use this approach may result in an incorrect analysis of potential performance problems.

  • Measure steady state performance

    When benchmarking, it is important to measure the performance under steady state conditions. Each benchmark run must have a ramp-up phase, where users are connected to the application and gradually start performing work on the application. This process lets frequently cached data be initialized into the cache and lets single execution operations—such as parsing—be completed before the steady state condition. Likewise, at the end of a benchmark run, there must be a ramp-down period, where resources are freed from the system and users cease work and disconnect.
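
The guideline above on realistic data volumes includes populating schema statistics correctly; a minimal sketch using the DBMS_STATS package follows, where the schema name is hypothetical:

BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'APP_OWNER');
END;
/

Gathering statistics on the fully populated test schema ensures that the optimizer in the test environment makes the same decisions it would make in production.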

2.6 Using Bind Variables

A bind variable, or placeholder, in a prepared SQL statement or PL/SQL block indicates where the program must supply data at runtime. Using literals instead of bind variables for each data value severely limits the usefulness of the program. For example, suppose you want to insert data into a table whose name is not known until runtime, and you set up a benchmark to compare the performance of two approaches. Because the table is not known until runtime, you must use dynamic SQL. One method concatenates character string literals into the SQL statement, while the other uses bind variables.

Create the table as follows:

CREATE TABLE test (x VARCHAR2(30), y VARCHAR2(30));

Consider the following fragment, which builds an INSERT statement for table test by concatenating the character string literals for columns x and y (doubling any quotation marks embedded in the values):

stmt := 'INSERT INTO test (x,y) VALUES (''' || REPLACE (x, '''', '''''') ||
        ''', ''' || REPLACE (y, '''', '''''') || ''')';

Next, consider the following INSERT statement to add a row of data into table test using bind variables, where :x and :y are the bind variables for columns x and y.

INSERT INTO test (x,y) VALUES (:x, :y);

At first glance, using bind variables is simpler and easier to code.

Now consider performing a dynamic bulk load operation in which you add 1,000 rows of data to table test using each approach. Concatenating the character string literals requires 1,000 unique INSERT statements, none of which can be reused. Each statement must be parsed from scratch (a hard parse), qualified, checked for security, optimized, and eventually compiled before it is executed. Also, because each statement is hard parsed, the number of latches used increases greatly. Latches are a type of mutual exclusion, or locking mechanism; that is, they are serialization devices. Because serialization devices inhibit concurrency, they are less scalable, support fewer users, and require more resources.

When you use bind variables to show where the program supplies input data, you use and reuse a single INSERT statement, with the data values supplied at runtime. The statement is parsed once (subsequent parses are soft parses), qualified, checked for security, optimized, compiled, and stored in the shared pool. The compiled statement is then reused from the shared pool by every SQL statement that matches it exactly, in this case all 1,000 inserts. This is a key benefit of using bind variables.
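The effect on statement reuse can be shown with plain string handling, outside the database. The following sketch (illustrative Python; no Oracle connection or parsing actually occurs) builds the 1,000-row bulk load both ways: the literal approach produces a distinct SQL text per row, each of which would be hard parsed, while the bind approach produces one reusable text:

```python
# Simulate building the 1,000-row bulk load both ways (string handling only;
# no database is involved).

def escape(s):
    return s.replace("'", "''")   # double embedded single quotation marks

def literal_insert(x, y):
    # Each row's values are concatenated into the SQL text itself.
    return f"INSERT INTO test (x,y) VALUES ('{escape(x)}', '{escape(y)}')"

BIND_INSERT = "INSERT INTO test (x,y) VALUES (:x, :y)"   # one fixed text

rows = [(f"x{i}", f"y{i}") for i in range(1000)]

literal_texts = {literal_insert(x, y) for x, y in rows}
bind_texts = {BIND_INSERT for _ in rows}

print(len(literal_texts))   # 1000 -- every statement is unique: 1000 hard parses
print(len(bind_texts))      # 1    -- one statement matched by all 1000 inserts
```

The shared pool can match only statements whose text is identical, which is why the single bind-variable text is reused while each literal text is parsed from scratch.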

Using bind variables simplifies the programmer's task, gives the program greater flexibility, avoids the unnecessary hard parses that literals cause, and greatly reduces the number of latches and locks needed, which improves concurrency and makes your program run faster. It also lets you take advantage of additional features offered by the database, yielding substantial savings in resource use. Imagine the savings if most DML statements in your application could match and reuse compiled query execution plans already stored in the shared pool because you used bind variables.

You can use placeholders for input variables in any DELETE, INSERT, SELECT, or UPDATE statement, or in a PL/SQL block, in any position in the statement where you can use an expression or a literal value. In PL/SQL, placeholders can also be used for output variables. Binding is used for both input and output variables in nonquery operations.

When you use bind variables you can also take advantage of other features in the database to improve the performance of your application, such as statement caching. Using database features to gain the optimum performance and scalability for your application in the early design phase will reap immediate and long lasting benefits. Also, when used wisely these features allow the application to scale well with increasing numbers of users. Intelligent use of database features quickly becomes a win-win proposition for the application, the programmer, the DBA, and the end user.

Another benefit of bind variables is that they greatly reduce the chance of SQL injection attacks. Building statements by concatenating literals makes the application less secure, because a malicious end user can potentially modify the SQL statement, turning the application into a rogue program that does something it was never intended to do. With bind variables, input values are treated strictly as data and can never change the statement text, so this form of SQL injection is not possible.
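The injection risk can also be demonstrated with plain string handling (illustrative Python; the query and the malicious input below are hypothetical). When a value is concatenated into the statement without care, attacker-supplied text becomes part of the SQL; with a placeholder, the statement text is fixed:

```python
# Naive concatenation (no escaping): attacker-controlled input becomes SQL text.
def naive_query(name):
    return "SELECT * FROM test WHERE x = '" + name + "'"

safe_text = naive_query("SCOTT")
attack = "x' OR 'a'='a"                 # hypothetical malicious input
attacked_text = naive_query(attack)

print(attacked_text)
# SELECT * FROM test WHERE x = 'x' OR 'a'='a'  -- the WHERE clause is now always true

# With a bind placeholder, the SQL text never changes; the value travels
# separately as data and cannot alter the statement structure.
BIND_TEXT = "SELECT * FROM test WHERE x = :name"
```

No matter what value is later bound to :name, the statement the server parses is always the fixed BIND_TEXT.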

See Also:

Oracle Call Interface Programmer's Guide for more information about using binding placeholders in OCI

2.7 Using Client Result Cache

Topics:

2.7.1 About Client Result Cache

Applications that use Oracle Database drivers and adapters built on top of OCI libraries (including C, C++, Java (JDBC-OCI), PHP, Python, Ruby, and Perl) can use client memory to take advantage of the client result cache and improve the response times of repetitive queries.

The client result cache enables client-side caching of SQL query result sets in client memory. The OCI result cache is completely transparent to applications, and its cache of result set data is kept consistent with session or database changes that affect its result set.

Applications employing this feature see improved performance for queries that have a cache hit. These applications use the cached result sets on the clients or middle-tiers for subsequent execution without going to the database. Because retrieving results locally from a client process is faster than making a database call and rerunning a query, frequently run queries experience a significant performance improvement when their results are cached.

The client result cache also reduces the server CPU time that would otherwise be consumed processing the query, thereby improving server scalability. OCI statements from multiple sessions can match the same cached result set in the OCI process memory if they have matching schema, SQL text, bind values, and session settings. Given any dissimilarity, the query execution is directed to the server.

You must enable OCI statement caching or cache statements at the application level when using the client result cache.

Client result cache works with OCI features such as OCI session pooling, OCI connection pooling, database resident connection pooling, and OCI transparent application failover (TAF).

2.7.2 Benefits of Client Result Cache

The benefits of client query result cache are:

  • Because the result cache is on the client side, a cache hit causes fetch (OCIStmtFetch2()) and execute (OCIStmtExecute()) calls to be processed locally, instead of making server round-trips. This can result in huge performance savings for server resources, for example, server CPU and server I/O.

  • The client-side query result set cache is a transparent and consistent cache.

  • The result cache on the client is per process, so multiple client sessions can simultaneously use matching cached result sets.

  • It minimizes the need for each OCI application to have its own custom result set cache.

  • It transparently manages the caching aspects of the cached result sets, that is: concurrent access by multiple threads, multiple statements, multiple sessions, invalidation, refreshing of result sets in the cache, and cache memory management.

  • It transparently invalidates the cached result sets on database changes that may affect the result sets, when an OCI process makes round-trips to the server.

  • This consistent cache is automatically available to applications and drivers that use the OCI library under the covers, including: JDBC OCI, ODP.NET, OCCI, Pro*C/C++, Pro*COBOL, ODBC, and so on.

  • The cache uses OCI client memory that may be less expensive than server memory.

  • A local cache on the client has better locality of reference for queries executed by that client.

See Also:

OCIStmtExecute() and OCIStmtFetch2() in Oracle Call Interface Programmer's Guide

2.7.3 Guidelines for Using Client Result Cache

You can enable client result caching for your application in several ways, and the methods have an order of precedence. See Section 2.7.3.6 for more usage information.

  • SQL Hints - Annotate a query with a SQL hint /*+ result_cache */ to indicate that results are to be stored in the query result cache. Using SQL hints is the highest order of precedence; it takes precedence over table annotations and session parameters. It is applicable to a single query. This method requires application-level changes.

  • Table Annotation - Annotate a table during deployment using result cache hints in the ALTER TABLE and CREATE TABLE statements. Using table annotation is the next highest order of precedence below SQL hints and above session parameters when using MODE FORCE. It is applicable to all queries for that table. This method requires no application-level changes.

  • Session Parameters - Works across all tables for all queries; use this method when possible. You can either set the RESULT_CACHE_MODE initialization parameter in the server parameter file (init.ora) or use the RESULT_CACHE_MODE clause in the ALTER SESSION and ALTER SYSTEM statements. Using session parameters is the lowest order of precedence; both SQL hints and table annotations take precedence over them. It is the most widely effective method, being applicable to all tables. This method requires no application-level changes.

Oracle recommends that applications annotate tables and queries with result cache hints for read-only or read-mostly database objects. If the result caching happens for queries with large results, these results can use a large amount of cache memory.

As each set of bind values specified by the application creates a different cached result set (for the same SQL text), these result sets together can use a large amount of client result cache memory.

When client result caching is enabled, the query result set can be cached on the client, on the server, or both. Client result caching can be enabled even if the server result cache (which is enabled by default) is disabled.

For OCI, the first OCIStmtExecute() call of every OCI statement handle always goes to the server, even if a valid cached result set might exist. An OCIStmtExecute() call must be made for each statement handle before the handle can match a cached result set. Oracle recommends that applications either implement their own statement caching for OCI statement handles or use OCI statement caching, so that OCIStmtPrepare2() can return an OCI statement handle that has already been executed once. Multiple OCI statement handles, from the same or different sessions, can simultaneously fetch data from the same cached result set.

For OCI, for a result set to be cached, the OCIStmtExecute() or OCIStmtFetch2() calls that transparently create the cached result set must fetch rows until ORA-01403 (No Data Found) is returned. Subsequent OCIStmtExecute() or OCIStmtFetch2() calls that match a locally cached result set need not fetch to completion.
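The fetch-to-completion rule can be modeled with a toy cache (illustrative Python only, not the OCI implementation): a result set becomes cacheable only after the first execution fetches it through to the no-data-found condition, while a partial fetch leaves nothing for later executions to match.

```python
# Toy model: cache a result set only if the first execution is fetched
# through to completion (the analogue of fetching until ORA-01403).

def server_execute(sql):
    return [("row", i) for i in range(3)]    # stand-in for a server query

class ToyResultCache:
    def __init__(self):
        self.cache = {}

    def execute(self, sql, fetch_all):
        if sql in self.cache:                 # subsequent match: local hit;
            return self.cache[sql], "hit"     # no need to fetch to completion
        rows = server_execute(sql)            # first execution goes to the server
        if fetch_all:                         # fetched until no-data-found:
            self.cache[sql] = rows            # the result set becomes cacheable
        return rows, "miss"

c = ToyResultCache()
_, first = c.execute("SELECT * FROM emp", fetch_all=True)
_, second = c.execute("SELECT * FROM emp", fetch_all=True)
print(first, second)   # miss hit

d = ToyResultCache()
_, p1 = d.execute("SELECT * FROM emp", fetch_all=False)  # partial fetch
_, p2 = d.execute("SELECT * FROM emp", fetch_all=False)
print(p1, p2)          # miss miss -- nothing was cached
```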

See Also:

OCIStmtExecute(), OCIStmtPrepare2(), and OCIStmtFetch2() in Oracle Call Interface Programmer's Guide

2.7.3.1 SQL Hints

Unless the RESULT_CACHE_MODE server initialization parameter is set to FORCE, you must explicitly specify the queries to be cached using SQL hints.

For example, for OCI, the SQL /*+ result_cache */ or /*+ no_result_cache */ hint must be included in the SQL text passed to OCIStmtPrepare() and OCIStmtPrepare2() calls.

For example, for JDBC OCI, the SQL /*+ result_cache */ or /*+ no_result_cache */ hint is annotated in the actual SQL SELECT query as part of the query string.

2.7.3.2 Table Annotation

The ALTER TABLE and CREATE TABLE statements enable you to annotate tables with result cache mode. There are also session parameters as mentioned in a later section. The matrix of table annotations and session parameters dictates the effective result cache mode for queries on that table. SQL hints override table annotations and session parameters. The syntax is:

CREATE|ALTER TABLE [<schema>.]<table> ... [RESULT_CACHE (MODE {FORCE|DEFAULT})]

Here is a CREATE TABLE example that defines the table columns and annotates the table:

CREATE TABLE foo (a NUMBER, b VARCHAR2(20)) RESULT_CACHE (MODE FORCE); 

Here is an example of ALTER TABLE:

ALTER TABLE foo RESULT_CACHE (MODE DEFAULT);

This ALTER TABLE statement annotates the table so that results of statements or query blocks (for the server result cache) that use the table are stored in the result cache. If a query has a /*+ result_cache */ or /*+ no_result_cache */ SQL hint, or if the RESULT_CACHE_MODE parameter is set to FORCE, then the hint or session parameter takes precedence over the table annotation.

You must annotate all the tables whose query results you want stored in the result cache. All queries against those tables, whether single-table selects or joins, are then considered for caching, assuming they are cache-worthy.

Table 2-1 summarizes the result cache annotation mode values.

Table 2-1 DDL Table Result Cache Annotation Modes

Mode Value Description

DEFAULT

The default value. Result caching is not determined at the table level. You can use this value to clear any table annotations.

FORCE

If all table names in the query have this setting, then the query is always considered for caching unless the NO_RESULT_CACHE hint is specified for the query. If one or more tables named in the query are set to DEFAULT, then the effective table annotation for that query is DEFAULT.


2.7.3.3 Checking Table Annotation Mode

The RESULT_CACHE column in the data dictionary views DBA_TABLES, USER_TABLES, and ALL_TABLES shows the result cache mode annotation for a table. If the table has not been annotated, the column shows DEFAULT.

Suppose that table emp is annotated as ALTER TABLE emp RESULT_CACHE (MODE FORCE).

Then execute the following query in the session:

SELECT table_name, result_cache FROM user_tables;

The output is as follows:

TABLE_NAME    RESULT_CACHE
----------    ------------
EMP           FORCE
FOO           DEFAULT

The output shows that table FOO either has not been annotated or has been annotated using the following statement:

ALTER TABLE foo RESULT_CACHE (MODE DEFAULT);

See Also:

Oracle Database Reference for more information about the RESULT_CACHE column on these DBA views

2.7.3.4 Session Parameters

The RESULT_CACHE_MODE parameter lets you decide the result cache mode across all tables in your queries. Set it with the ALTER SESSION or ALTER SYSTEM statement, or in the server parameter file (init.ora), to determine result caching.

2.7.3.5 Effective Result Cache Table Mode

The SQL query level result cache hints take precedence over the session parameter RESULT_CACHE_MODE and result cache table annotations. Also, table annotation FORCE takes precedence over the session parameter MANUAL as indicated in Table 2-2. Table 2-2 compares modes (MANUAL and FORCE) for the session parameter RESULT_CACHE_MODE versus the comparable table annotation modes and shows the effective result cache mode.

Table 2-2 Effective Result Cache Table Mode

                            RESULT_CACHE_MODE = MANUAL    RESULT_CACHE_MODE = FORCE

Table Annotation = FORCE    FORCE                         FORCE

Table Annotation = DEFAULT  MANUAL                        FORCE


When the effective mode is FORCE, then the actual caching depends on internal restrictions for client and server cache, query cache worthiness (for example, there is no SYSDATE in the query), and space available in the cache. This is similar to the SQL query hint /*+ result_cache */ because it is only a hint. It does not imply that the query is actually cached. Recall that table annotation DEFAULT indicates that result caching is not determined at the table level and session parameter mode MANUAL indicates that the query must be annotated with a SQL hint for the hint to take precedence, so in effect these are equivalent methods for this setting.

2.7.3.6 Cache Example Use Cases

The following use cases show when SQL hints take precedence over table annotations and session parameter.

  • If the emp table is annotated with ALTER TABLE emp RESULT_CACHE (MODE FORCE) and the session parameter is not set (it has its default value of MANUAL), then queries on table emp are considered for query caching.

  • If, however, the SQL query is SELECT /*+ no_result_cache */ empno FROM emp, the query is not cached, because SQL hints take precedence over table annotations and session parameters.

  • If the emp table is not annotated or is annotated with ALTER TABLE emp RESULT_CACHE (MODE DEFAULT) and the session parameter is not set (it has its default value of MANUAL), then queries are not cached.

  • If, however, the SQL query has the hint SELECT /*+ result_cache */ * FROM emp, then the query is considered for query caching.

  • If there is no table annotation and no SQL query hint, but the session or system parameter is set to FORCE, all queries on all tables are considered for query caching.
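The precedence rules in these use cases can be captured in a small function (an illustrative sketch, not an Oracle API): a SQL hint wins over everything, a FORCE table annotation wins over the session parameter, and otherwise the session parameter decides.

```python
def considered_for_caching(hint=None, table_mode="DEFAULT", session_mode="MANUAL"):
    """Return True if a query would be considered for result caching.

    hint: "result_cache", "no_result_cache", or None
    table_mode: effective table annotation ("FORCE" or "DEFAULT"); per
                Table 2-1, DEFAULT wins if any referenced table is DEFAULT
    session_mode: RESULT_CACHE_MODE ("MANUAL" or "FORCE")
    """
    if hint == "result_cache":      # SQL hints take top precedence
        return True
    if hint == "no_result_cache":
        return False
    if table_mode == "FORCE":       # then table annotations
        return True
    return session_mode == "FORCE"  # finally the session parameter

# The five use cases above:
print(considered_for_caching(table_mode="FORCE"))                          # True
print(considered_for_caching(hint="no_result_cache", table_mode="FORCE"))  # False
print(considered_for_caching())                                            # False
print(considered_for_caching(hint="result_cache"))                         # True
print(considered_for_caching(session_mode="FORCE"))                        # True
```

Even when this function returns True, actual caching still depends on cache-worthiness and available cache space, as described in Section 2.7.3.5.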

See Also:

Oracle Database SQL Language Reference for more about caching

2.7.3.7 Queries That Are Not Cached

There are queries that are not cached on the OCI client even if the result cache hint is specified. Such queries might be cached on the database if the server result cache feature is enabled (see the discussion of the SQL query result cache in Oracle Database Concepts for more information). If a SQL query includes any of the following, then the result set of that query is not cached in the OCI client result cache:

  • Remote objects

  • Complex types in the select list

  • Snapshot-based queries or flashback queries

  • Queries executed in a serializable, read-only transaction, or inside a flashback session

  • Queries that have PL/SQL functions in them

  • Queries that have virtual private database (VPD) policies enabled on the tables

2.7.4 Client Cache Consistency

The client cache transparently keeps the result set consistent with any session state or database changes that can affect its cached result sets.

When a transaction modifies the data or metadata of any of the database objects used to construct a cached result, an invalidation is sent to the client on its next round-trip to the server. If the application makes no database calls for a period of time, the client cache lag setting forces the next OCIStmtExecute() call to make a database call to check for such invalidations.

The cached result sets relevant to database invalidations are immediately invalidated, and no subsequent OCIStmtExecute() calls can match such result sets. The OCI statement handles currently fetching from these cached result sets, at the time such invalidations are received, can continue fetching from this (invalidated) cached result set.

The next OCIStmtExecute() call by the process might cache the new result set if there is space available in the cache. The OCI client result cache periodically reclaims unused memory.

If a session has a transaction open, OCI ensures that its queries that reference database objects changed in this transaction go to the server instead of the client cache.

This consistency mechanism ensures that the OCI cache is always close to committed database changes. If the OCI application has relatively frequent calls involving database round-trips due to queries that cannot be cached, (such as DMLs, OCILob calls, and so on) then these calls transparently keep the client cache up-to-date with database changes.

Sometimes when a table is modified, a trigger can cause another table to be modified. OCI client result cache is sensitive to all such changes.

When the session state is altered (for example, when NLS session parameters are modified), query results can change. The OCI result cache is sensitive to such changes and, on subsequent query executions, returns the correct result set. The current cached result sets are kept (not invalidated) so that other sessions in the process can match them; otherwise, these result sets are "Ruled" after a while. New result sets are cached corresponding to the new session state.

Keeping track of all such database and session changes in the application would be cumbersome and error-prone, so the OCI result cache transparently keeps the result sets consistent with any database or session changes.
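The lag mechanism described above can be sketched as follows (illustrative Python with a simulated millisecond clock, not the OCI implementation): invalidations piggyback on round-trips, and a query that would otherwise be served locally is forced to make a round-trip once the configured lag has elapsed since the last one.

```python
LAG_MS = 3000  # default CLIENT_RESULT_CACHE_LAG (milliseconds)

class ToyLagCache:
    def __init__(self):
        self.last_round_trip_ms = 0

    def execute(self, now_ms, cached):
        """Return where the call is served, given a simulated clock."""
        if not cached or now_ms - self.last_round_trip_ms > LAG_MS:
            self.last_round_trip_ms = now_ms   # round-trip to the server;
            return "server"                    # pending invalidations arrive
        return "client"                        # local hit within the lag window

c = ToyLagCache()
print(c.execute(0, cached=False))      # server (first execution)
print(c.execute(1000, cached=True))    # client (within the lag window)
print(c.execute(5000, cached=True))    # server (lag exceeded, must check)
```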

The OCI client result cache does not require thread support in the client.

See Also:

OCIStmtExecute() in Oracle Call Interface Programmer's Guide

2.7.5 Deployment Time Settings for Client Result Cache

The client result cache has server initialization parameters and client configuration parameters for its deployment time settings.

These are the server initialization parameters:

  • CLIENT_RESULT_CACHE_SIZE

    The default value is zero, meaning the client cache feature is disabled. To enable it, set the size to 32768 bytes (32 KB) or greater. This is the minimum size of the client result set cache for each process; all OCI client processes get at least this size. It can be overridden by the sqlnet.ora configuration parameter OCI_RESULT_CACHE_MAX_SIZE, but only if the feature is enabled on the server by the CLIENT_RESULT_CACHE_SIZE initialization parameter.

    Oracle recommends either enabling client caching for all Oracle RAC nodes or disabling it for all Oracle RAC nodes. Otherwise, within a client process, some sessions might have caching enabled while others do not (and therefore always get the latest results from the server). Such a combination might present an inconsistent view of the database to the application.

    You can view the current default maximum size by displaying the value of the CLIENT_RESULT_CACHE_SIZE parameter. To increase this maximum size, you can set CLIENT_RESULT_CACHE_SIZE. However, because CLIENT_RESULT_CACHE_SIZE is a static parameter, you must include the SCOPE = SPFILE clause if you use an ALTER SYSTEM statement, and you must restart the database before any changes to this parameter take effect.

    If the client result cache feature is disabled at the server, the client configuration parameter OCI_RESULT_CACHE_MAX_SIZE is ignored and the client result cache cannot be enabled at the client.

    The cache size can be set to the minimum of the following two values:

    (available client memory)

    (possible number of result sets to be cached) *
    (average size of a row in a result set) *
    (average number of rows in a result set)
    

    Note:

    The client result cache has a maximum value of 2 GB; setting it higher causes a truncation to 2 GB.

    Do not set the CLIENT_RESULT_CACHE_SIZE parameter during database creation, because that can cause errors.

  • CLIENT_RESULT_CACHE_LAG

    The CLIENT_RESULT_CACHE_LAG initialization parameter enables you to specify the maximum amount of time in milliseconds that the client result cache can lag behind any changes in the database that affect its result sets. The default is 3000 milliseconds.

    You can view the current lag by displaying the value of the CLIENT_RESULT_CACHE_LAG parameter. To change this value, you can set CLIENT_RESULT_CACHE_LAG. However, because CLIENT_RESULT_CACHE_LAG is a static parameter, you must include the SCOPE = SPFILE clause if you use an ALTER SYSTEM statement, and you must restart the database before any changes to this parameter take effect.

  • Table annotation (optional). You can annotate read-only or read-mostly tables for result caching during deployment. No application-level changes are required.

    Note:

    SQL result cache hints, if specified, override the table annotations. See Oracle Database SQL Language Reference for more information.
  • COMPATIBLE

    Specifies the release with which Oracle Database must maintain compatibility. The value of this parameter must be 11.0.0.0 or higher to enable the client result cache. If you want client caching on views, set COMPATIBLE to 11.2.0.0 or higher.
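The CLIENT_RESULT_CACHE_SIZE sizing guideline above (the minimum of available client memory and the estimated workload footprint, bounded below by 32 KB and above by 2 GB) reduces to simple arithmetic. All workload figures below are hypothetical:

```python
# Estimate a client result cache size from the sizing guideline.
# The workload numbers here are hypothetical assumptions for illustration.

available_client_memory = 64 * 1024 * 1024   # 64 MB free on the client (assumed)
cached_result_sets = 100                     # expected distinct result sets (assumed)
avg_row_size = 256                           # average bytes per row (assumed)
avg_rows_per_set = 50                        # average rows per result set (assumed)

workload_estimate = cached_result_sets * avg_row_size * avg_rows_per_set
cache_size = min(available_client_memory, workload_estimate)

# Honor the documented bounds: at least 32 KB to enable, at most 2 GB.
cache_size = max(cache_size, 32768)
cache_size = min(cache_size, 2 * 1024 * 1024 * 1024)

print(cache_size)  # 1280000 bytes (about 1.2 MB) for this workload
```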

2.7.5.1 Client Configuration File

A client configuration file is optional and overrides the cache parameters set in the server init.ora initialization file. These parameters are part of the oraaccess.xml file (see Oracle Call Interface Programmer's Guide) and are currently also supported in sqlnet.ora. Set them in oraaccess.xml; values set there take effect. If a parameter is not set in oraaccess.xml, the corresponding sqlnet.ora setting is checked. The following optional parameters are available in oraaccess.xml for client configuration:

  • <max_size> (optional): Maximum size in bytes for the query cache for each process. Specifying a size less than 32768 in the client sqlnet.ora file disables the client result cache feature for client processes reading this sqlnet.ora file. Comparable to OCI_RESULT_CACHE_MAX_SIZE in sqlnet.ora.

  • <max_rset_size> (optional): Maximum size of any result set in bytes in the query cache for each process. Comparable to OCI_RESULT_CACHE_MAX_RSET_SIZE in sqlnet.ora.

  • <max_rset_rows> (optional): Maximum size of any result set in rows in the query cache for each process. Comparable to OCI_RESULT_CACHE_MAX_RSET_ROWS in sqlnet.ora.

The cache lag cannot be set on the client.

2.7.6 Client Cache Statistics

On existing round-trips from the OCI client, OCI periodically sends statistics related to its client cache to the server, where they are stored in the CLIENT_RESULT_CACHE_STATS$ view. Information such as the number of result sets successfully cached, the number of cache hits, and the number of cached result sets invalidated is stored there. The number of cache misses for queries is at least equal to the number of Create Counts in the client result cache statistics. More precisely, the cache miss count equals the number of server executions seen in server Automatic Workload Repository (AWR) reports.

2.7.7 Validation of the Client Result Cache

The following sections provide more information about performing validations of the client result cache.

2.7.7.1 Timing Measurement

First, to determine the performance gain of adding result cache hints to the queries, measure the time taken to run the queries without the /*+ result_cache */ hints. Then add the /*+ result_cache */ hints to the query and measure the time again. The difference in time is your performance gain.
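A minimal timing harness for this comparison might look as follows (illustrative Python; run_query and the two stand-in query functions are hypothetical placeholders for executing the hinted and unhinted SQL through your driver):

```python
import time

def measure(run_query, repetitions=100):
    """Time repeated executions of a query and return the elapsed seconds."""
    start = time.perf_counter()
    for _ in range(repetitions):
        run_query()
    return time.perf_counter() - start

# Hypothetical stand-ins for the two variants of the query.
def query_without_hint():
    time.sleep(0.001)   # pretend each server round-trip costs about 1 ms

def query_with_result_cache_hint():
    pass                # pretend a local cache hit is nearly free

uncached = measure(query_without_hint)
cached = measure(query_with_result_cache_hint)
print(uncached > cached)   # the difference is the performance gain
```

In a real validation, replace the stand-in functions with calls that execute the query with and without the /*+ result_cache */ hint.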

2.7.7.2 Using v$mystat

Query the v$mystat view. (You must be granted the appropriate privileges to query this view.) Perform these two queries:

Query-1: Measures the "Oracle Net Services round-trips to and from the client" statistic from v$mystat for the query with the SQL result cache hint.

Query-2: Measures the same statistic for the query without the SQL result cache hint.

The difference between the Query-2 and Query-1 counts is the number of round-trips that the query normally requires when the client result cache is not enabled. (The v$mystat query itself accounts for approximately two round-trips in this calculation.)

If you add a result cache hint to the query, or add the FORCE table annotation to table emp, and run the queries again, the difference between Query-2 and Query-1 is much smaller.

2.7.7.3 Using v$sqlarea

Query the v$sqlarea view. (You must be granted the appropriate privileges to query this view.)

Run the following SQL statement:

SELECT COUNT(*) FROM emp;

Reexecute this preceding SQL statement a few times.

Then query the cursor statistics for this statement:

SELECT executions, fetches, parse_calls
FROM v$sqlarea
WHERE sql_text LIKE '% from emp';

Next, add the result cache table hint for emp to the query.

Reexecute the query a few times.

With client caching, the values of the executions, fetches, and parse_calls columns are lower.

The preceding validations can also be performed with result cache table annotations.

2.7.8 Client-Side Result Cache and Server Result Cache

The client-side result cache is a separate feature from the server result cache. The client-side result cache caches results of top-level SQL queries in OCI client memory, whereas the server result cache caches result sets in server SGA memory.

The server result cache can also cache query fragments. Client-side result caching can be enabled independently of the server result cache, though both share the result cache SQL hints, table annotations, and the session parameter RESULT_CACHE_MODE. See Oracle Database Concepts for more information about the SQL query result cache. Table 2-3 shows whether a given parameter, PL/SQL package, or database view is associated with the client-side result cache, the server result cache, or both.

Table 2-3 Setting Client-Side Result Cache and Server Result Cache

Parameters, PL/SQL Package, and Database Views                    Result Cache Association

client_result_cache_* parameters:                                 client result cache
  client_result_cache_size, client_result_cache_lag

SQL hints /*+ result_cache */, /*+ no_result_cache */             client result cache, server result cache

sqlnet.ora OCI_RESULT_CACHE* parameters:                          client result cache
  OCI_RESULT_CACHE_MAX_SIZE, OCI_RESULT_CACHE_MAX_RSET_SIZE,
  OCI_RESULT_CACHE_MAX_RSET_ROWS

Statistics view client_result_cache_stats$                        client result cache

result_cache_mode parameter                                       client result cache, server result cache

All other result_cache* parameters                                server result cache
  (for example, result_cache_max_size)

Package DBMS_RESULT_CACHE                                         server result cache

Statistics views v$result_cache_*, gv$result_cache_*              server result cache
  (for example, v$result_cache_statistics, gv$result_cache_memory)

CREATE TABLE annotation                                           client result cache, server result cache

ALTER TABLE annotation                                            client result cache, server result cache


2.7.9 Client Result Cache Demo Files

For OCI applications, see the files cdemoqc.sql, cdemoqc.c, and cdemoqc2.c (all are in the demo directory for your operating system) for demonstration files for this feature.

2.7.10 Compatibility with Previous Releases

To use client result cache, applications must be relinked with Oracle Database 11g Release 1 (11.1) or later client libraries and be connected to an Oracle Database 11g Release 1 (11.1) or later database server. This feature is available to all OCI applications including JDBC Type II driver, OCCI, Pro*C/C++, and ODP.NET. The OCI drivers automatically pass the result cache hint to OCIStmtPrepare() and OCIStmtPrepare2() calls, thereby getting the benefits of caching.

See Also:

OCIStmtPrepare(), OCIStmtPrepare2() in Oracle Call Interface Programmer's Guide

2.7.11 Where To Find More Information About Client Result Cache

More information about Client Result Cache is described in the following documentation:

2.8 Statement Caching

Statement caching establishes and manages a cache of statements for each session. On the server, it means that cursors can be used without reparsing the statement, eliminating repetitive statement parsing. You can use statement caching with connection pooling and with session pooling to improve performance and scalability. You can also use statement caching without session pooling in OCI, and without connection pooling in OCCI, the JDBC interface, and the ODP.NET interface. You can also use dynamic SQL statement caching in Oracle precompiler applications that rely on dynamic SQL statements, such as Pro*C/C++ and Pro*COBOL.

In the JDBC interface, you can use implicit or explicit statement caching. Each type of statement cache can be enabled or disabled independent of the other. You can have either, neither, or both in effect. Both types of statement caching share a single cache per connection. In the JDBC interface, you can also disable implicit caching for a particular statement.
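The idea behind statement caching can be sketched with a toy per-session cache keyed by SQL text (illustrative Python, not a driver API): the first use of a statement parses it, and every later use reuses the parsed handle instead of reparsing.

```python
class ToyStatementCache:
    """Toy per-session cache of 'parsed' statements keyed by SQL text."""

    def __init__(self):
        self.cache = {}
        self.parse_count = 0

    def prepare(self, sql):
        if sql not in self.cache:
            self.parse_count += 1         # first use: parse the statement
            self.cache[sql] = ("parsed", sql)
        return self.cache[sql]            # later uses: reuse, no reparse

session = ToyStatementCache()
for _ in range(1000):
    session.prepare("INSERT INTO test (x,y) VALUES (:x, :y)")

print(session.parse_count)  # 1 -- the statement was parsed only once
```

This also shows why statement caching pairs naturally with bind variables: only statements with identical text can share a cached entry.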

2.9 OCI Client Statement Cache Auto-Tuning

Auto-tuning optimizes OCI client session features of mid-tier applications to gain higher performance without the need to reprogram your OCI application.

It is possible for the OCI client statement cache size setting to be suboptimal; for example, a changing workload can produce a different working set of SQL statements.

If the size is too low, it causes excess network activity and more parses at the server. If the size is too high, excess memory is used.

It can be difficult for the client-side application to keep this cache size optimal at all times.

To resolve this potential performance issue, Oracle Database 12c Release 1 (12.1.0.1) introduces a client-side auto-tuning feature that automatically reconfigures the OCI statement cache size on a periodic basis. Auto-tuning is enabled through a deployment-time setting: a connect-string-based entry in a client oraaccess.xml file that overrides the programmatic statement caching configuration of the application.

Mid-tier application developers and DBAs can expect to see a reduction in time and effort in diagnosing and fixing performance problems with each part of their system using the auto-tuning OCI client statement caching parameter setting.

For more information about OCI auto-tuning concepts, client statement cache auto-tuning parameters, implementation, and usage, see Oracle Call Interface Programmer's Guide

2.10 Client-Side Deployment Parameters

Beginning with Oracle Database 12c Release 1 (12.1.0.1), OCI deployment parameters are available in a new configuration file oraaccess.xml as described in Oracle Call Interface Programmer's Guide.

2.11 Using Query Change Notification

Continuous Query Notification (CQN) enables client applications to register queries with the database and receive notifications in response to DML or DDL changes on the objects (object change notification (OCN)) or in response to result set changes associated with the queries (query result change notification (QRCN)). The notifications are published by the database when the DML or DDL transaction commits.

A CQN registration associates a list of one or more queries with a notification type (OCN or QRCN) and a notification handler. To create a CQN registration, you can use the:

  • PL/SQL interface

    When you use the PL/SQL interface, the notification handler is a server-side PL/SQL stored procedure. PL/SQL registration can be used by nonthreaded languages and systems, such as PHP. In that case, the PL/SQL listener invokes a PHP callback when it receives the database notification.

  • Oracle Call Interface (OCI)

    When you use OCI, the notification handler is a client-side C callback procedure.

  • Java Database Connectivity (JDBC) interface

    When you use the JDBC interface, the JDBC driver creates a registration on the server. The JDBC driver launches a new thread that listens for notifications from the server (through a dedicated channel) and converts these notification messages into Java events. The driver then notifies all listeners registered with this registration.
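For the PL/SQL interface listed first above, a registration might be created as in the following hedged sketch. Here chnf_callback is an assumed user-defined handler procedure and the employees query is only an example; see the DBMS_CQ_NOTIFICATION package documentation for the full set of CQ_NOTIFICATION$_REG_INFO options.

```sql
-- Hedged sketch of a PL/SQL CQN registration requesting query result
-- change notification (QRCN). chnf_callback is an assumed stored procedure.
DECLARE
  reginfo  CQ_NOTIFICATION$_REG_INFO;
  regid    NUMBER;
  v_id     employees.employee_id%TYPE;
BEGIN
  reginfo := CQ_NOTIFICATION$_REG_INFO(
               'chnf_callback',                 -- notification handler
               DBMS_CQ_NOTIFICATION.QOS_QUERY,  -- request QRCN
               0, 0, 0);                        -- timeout, filter, lag
  regid := DBMS_CQ_NOTIFICATION.NEW_REG_START(reginfo);
  -- Queries executed between NEW_REG_START and REG_END are registered.
  SELECT employee_id INTO v_id FROM employees WHERE ROWNUM < 2;
  DBMS_CQ_NOTIFICATION.REG_END;
END;
/
```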

2.12 Using Database Resident Connection Pooling (DRCP)

Topics:

2.12.1 About Database Resident Connection Pooling

Database resident connection pooling (DRCP) provides a connection pool in the database server for typical web application usage scenarios where the application acquires a database connection, works on it for a relatively short duration, and then releases it. DRCP pools server processes, each of which is the equivalent of a dedicated server process and a database session combined, which are referred to as pooled servers. Pooled servers can be shared across multiple applications running on the same or several hosts. A connection broker process manages the pooled servers at the database instance level. DRCP is a configurable feature, chosen at program runtime, which allows concurrent use of traditional and DRCP-based connection architectures.

DRCP is especially relevant for architectures with multiprocess, single-threaded application servers (such as PHP with Apache) that cannot do middle-tier connection pooling. DRCP is also very useful in large-scale web deployments where hundreds or thousands of web servers or mid-tiers need database access and client-side pools (even in multithreaded systems and languages such as Java) cannot provide sufficient sharing. Using DRCP, the database can scale to tens of thousands of simultaneous connections. If your database web application must scale to large numbers of connections, DRCP is your connection pooling solution.

DRCP complements middle-tier connection pools that share connections between threads in a middle-tier process. Also, DRCP enables sharing of database connections across middle-tier processes on the same middle-tier host, across multiple middle-tier hosts, and across multiple middle-tiers (web servers, containers) accommodating applications written in different languages. This results in significant reduction in key database resources needed to support a large number of client connections, thereby reducing the database tier memory footprint and boosting the scalability of both middle-tier and database tiers. Having a pool of readily available servers has the additional benefit of reducing the cost of creating and tearing down client connections.

Clients that get connections from the database resident connection pool connect to an Oracle Database background process known as the connection broker. The connection broker implements the pool functionality and multiplexes pooled servers among persistent inbound connections from clients.

When a client requires database access, the connection broker picks up a server process from the pool and hands it off to the client. The client is then directly connected to the server process until the request is served. After the server has finished, the server process is released back into the pool and the connection from the client is restored to the connection broker as a persistent inbound connection from the client process. In DRCP, releasing resources leaves the session intact, but no longer associated with a connection (server process). Because this session stores its user global area (UGA) in the program global area (PGA), not in the system global area (SGA), a client can reestablish a connection transparently upon detecting activity.

DRCP is typically preferred for applications with a large number of connections. Shared servers are useful for a medium number of connections and dedicated sessions are preferred for small numbers of connections. The threshold sizes are relative to the amount of memory available on the database host.

DRCP provides the following advantages:

  • It enables resource sharing among multiple client applications and middle-tier application servers.

  • It improves scalability of databases and applications by reducing resource usage on the database host.

Compared to shared servers, DRCP offers these additional benefits:

  • DRCP provides clients a direct connection to the database server (like client-side connection pooling, but unlike shared servers, there is no intermediary process)

  • DRCP can pool database server processes (like client-side connection pooling and shared servers)

  • DRCP can pool sessions (like client-side connection pooling, but unlike shared servers)

  • DRCP can share connections across mid-tier boundaries (unlike client-side connection pooling)

DRCP offers a unique connection pooling solution that addresses scalability requirements in environments requiring large numbers of connections with minimal database resource usage.

See Also:

Oracle Database Concepts for details about the DRCP architecture

2.12.2 Configuring Database Resident Connection Pooling

The DBA manages the pool using the DBMS_CONNECTION_POOL package. The pool is installed by default, but is shut down. The DBA must start it, specifying DRCP configuration options such as the minimum and maximum number of pooled servers allowed in the pool, the number of connection brokers to be created, and the maximum number of connections that each connection broker can handle. See the references for more information.
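A hedged sketch of this administration, run from SQL*Plus as SYSDBA, might look as follows. The parameter values are arbitrary examples; verify the exact DBMS_CONNECTION_POOL parameter names and defaults in Oracle Database PL/SQL Packages and Types Reference before use.

```sql
-- Start the default pool, then adjust a few of its parameters.
EXECUTE DBMS_CONNECTION_POOL.START_POOL();

EXECUTE DBMS_CONNECTION_POOL.CONFIGURE_POOL( -
  pool_name => 'SYS_DEFAULT_CONNECTION_POOL', -
  minsize   => 4, -
  maxsize   => 40, -
  incrsize  => 2);
```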

OCI session pool APIs have been extended to interoperate with the database resident connection pool. See the references for more information.

2.12.3 Using JDBC with DRCP

Beginning with Oracle Database 12c Release 1 (12.1.0.1), Oracle JDBC drivers support Database Resident Connection Pooling (DRCP). The DRCP implementation creates a pool on the server side, which is shared across multiple client pools. These client pools use Universal Connection Pool for JDBC. This significantly lowers memory consumption (because of the reduced number of server processes) and increases the scalability of the database server.

To track server-side connections' checkin and checkout operations, the Java applications must use a client-side pool such as Universal Connection Pool for JDBC or any third-party Java connection pool.

Section 2.12.6 describes how the database administrator (DBA) can log in as SYSDBA and start the default pool on the server side.

To enable DRCP on the client side, you must perform the following steps:

  • Pass a non-NULL and non-empty String value to the connection property oracle.jdbc.DRCPConnectionClass

  • Pass (SERVER=POOLED) in the long connection string

You can also specify (SERVER=POOLED) in the short URL form as follows:

jdbc:oracle:thin:@//<host>:<port>/<service_name>[:POOLED]

For example:

jdbc:oracle:thin:@//localhost:5221/orcl:POOLED

For an example of how to enable JDBC on the client side, see the section on enabling DRCP on the client side in Oracle Database JDBC Developer's Guide.

By setting the same DRCP Connection class name for all the pooled server processes on the server using the connection property oracle.jdbc.DRCPConnectionClass, you are able to share pooled server processes on the server across multiple connection pools.

In DRCP, you can also apply a tag to a given connection and retrieve that tagged connection later. Connection tagging enhances session pooling because you can retrieve specific sessions easily. See the section on enabling connection tagging in DRCP in Oracle Database JDBC Developer's Guide for more information.

2.12.4 Using OCI Session Pool APIs with DRCP

The sections that follow describe OCI session pool APIs that have been extended to interoperate with the database resident connection pool. An OCI application typically initializes the OCI session pool environment for DRCP using OCISessionPoolCreate(), specifying: the database connection string (connStr); the user name (userid) and password (password) to associate with each session; the minimum number of sessions to start (sessMin) and the increment (sessIncr) by which to grow the pool when the mode parameter is specified as OCI_SPC_HOMOGENEOUS, which authenticates all sessions in the pool with the user name and password passed in; the maximum number of sessions allowed in the session pool (sessMax); and so on.

Sessions are obtained from DRCP through the OCI session pool using OCISessionGet(), specifying the OCI_SESSGET_SPOOL attribute in the mode parameter, and are released to DRCP through the OCI session pool using OCISessionRelease(). The OCI session pool can also transparently keep connections to the connection broker cached to improve performance. An OCI application can request sessions of a similar state by setting the OCI_ATTR_CONNECTION_CLASS attribute to a connection class name on the OCIAuthInfo handle (authInfop) before calling OCISessionGet(). Using the OCISessionGet() mode parameter, OCI applications can also specify session purity, that is, whether to reuse a pooled session (set the OCI_SESSGET_PURITY_SELF attribute) or to use a new session (set the OCI_SESSGET_PURITY_NEW attribute).

Also, features offered by the traditional client-side OCI session pool, such as tagging, statement caching, and transparent application failover (TAF) are also supported with DRCP.

2.12.5 Session Purity and Connection Class

In Oracle Database 11g Release 1 (11.1), OCI introduced two settings that can be specified when obtaining a session using OCISessionGet():

2.12.5.1 Session Purity

Session purity specifies whether the application logic is set up to reuse a pooled session or to use a new session. OCISessionGet() has been enhanced to take in a purity setting of OCI_SESSGET_PURITY_NEW or OCI_SESSGET_PURITY_SELF. Alternatively, you can set OCI_ATTR_PURITY_NEW or OCI_ATTR_PURITY_SELF on the OCIAuthInfo handle before calling OCISessionGet(). Both methods are equivalent.

Note:

When reusing a session from the pool, the NLS attributes of the server take precedence over those of the client.

For example, if the client has NLS_LANG set to french_france.us7ascii and if it is assigned a German session from the pool, the client session would be German.

You can use connection classes to restrict sharing and to avoid this problem.

Example 2-1 shows how a connection pooling application sets up a NEW session.

Example 2-1 Setting Session Purity

/* OCIAttrSet method */

ub4 purity = OCI_ATTR_PURITY_NEW;
OCIAttrSet (authInfop, OCI_HTYPE_AUTHINFO,  &purity, sizeof (purity),
            OCI_ATTR_PURITY, errhp);
OCISessionGet (envhp, errhp, &svchp, authInfop, poolName, poolNameLen, NULL, 0,
               NULL, NULL, NULL, OCI_SESSGET_SPOOL);
/* poolName is the name returned by OCISessionPoolCreate() */

/*  OCISessionGet mode method */
OCISessionGet (envhp, errhp, &svchp, authInfop, poolName, poolNameLen, NULL, 0,
               NULL, NULL, NULL, OCI_SESSGET_SPOOL | OCI_SESSGET_PURITY_NEW);
/* poolName is the name returned by OCISessionPoolCreate() */

2.12.5.2 Connection Class

Connection class defines a logical name for the type of connection required by the application. Sessions from the OCI session pool cannot be shared by different users (A session first created for user HR is given out only to subsequent requests by user HR.) The connection class setting allows for further separation between the sessions of a given user. The connection class setting lets different applications (connecting as the same database user) identify their sessions using a logical name that corresponds to the application. OCI then ensures that such sessions belonging to a particular connection class are not shared outside of the connection class.

You can use the OCI_ATTR_CONNECTION_CLASS attribute on the OCIAuthInfo handle to set the connection class. The connection class is a string attribute; OCI supports a maximum connection class length of 1024 bytes. The asterisk (*) is a special character and is not allowed in a connection class name.

Example 2-2 shows that an HRMS application needs sessions identified with the connection class HRMS.

Example 2-2 Setting the Connection Class as HRMS

OCISessionPoolCreate (envhp, errhp, spoolhp, &poolName, &poolNameLen, "HRDB",
    strlen("HRDB"), 0, 10, 1, "HR", strlen("HR"), "HR", strlen("HR"),
    OCI_SPC_HOMOGENEOUS);
 
OCIAttrSet (authInfop, OCI_HTYPE_AUTHINFO, "HRMS", strlen ("HRMS"),
    OCI_ATTR_CONNECTION_CLASS, errhp);
OCISessionGet (envhp, errhp, &svchp, authInfop, poolName, poolNameLen, NULL, 0,
    NULL, NULL, NULL, OCI_SESSGET_SPOOL);

Example 2-3 shows that a recruitment application needs sessions identified with the connection class RECMS.

Example 2-3 Setting the Connection Class as RECMS

OCISessionPoolCreate (envhp, errhp, spoolhp, &poolName, &poolNameLen, "HRDB",
    strlen("HRDB"), 0, 10, 1, "HR", strlen("HR"), "HR", strlen("HR"),
    OCI_SPC_HOMOGENEOUS);
 
OCIAttrSet (authInfop, OCI_HTYPE_AUTHINFO,  "RECMS", strlen("RECMS"),
    OCI_ATTR_CONNECTION_CLASS, errhp);
OCISessionGet (envhp, errhp, &svchp, authInfop, poolName, poolNameLen, NULL, 0,
    NULL, NULL, NULL, OCI_SESSGET_SPOOL);

2.12.5.3 Defaults for Session Purity and Connection Class

Table 2-4 illustrates the defaults used in various client scenarios.

Table 2-4 Defaults Used in Various Client Scenarios

Attribute or Setting | Application Uses OCISessionGet() from OCI Session Pool | Connections Not Obtained from OCI Session Pool
OCI_ATTR_PURITY | OCI_ATTR_PURITY_SELF | OCI_ATTR_PURITY_NEW
OCI_ATTR_CONNECTION_CLASS | OCI-generated globally unique name for each client-side session pool, used as the default connection class for all connections in that pool | SHARED
Sharing of sessions | Sessions are shared by threads requesting sessions from the same OCI session pool | Sessions are shared among all connections to a particular database that use the default SHARED connection class


2.12.6 Starting the Database Resident Connection Pool

The database administrator (DBA) must log on as SYSDBA and start the default pool, SYS_DEFAULT_CONNECTION_POOL, using DBMS_CONNECTION_POOL.START_POOL with the default settings.

For detailed information about configuring the pool, see Oracle Database Administrator's Guide.

2.12.7 Enabling Database Resident Connection Pooling

Any application can benefit from the database resident connection pool by specifying :POOLED in the Easy Connect string (see Example 2-4) or by specifying (SERVER=POOLED) in the TNS connect string (see Example 2-5).

Example 2-4 Specifying :POOLED in the Easy Connect String for Enabling DRCP

oraclehost.company.com:1521/books.company.com:POOLED

Example 2-5 Specifying SERVER=POOLED in a TNS Connect String for Enabling DRCP

BOOKSDB = (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=oraclehost.company.com)
     (PORT=1521))(CONNECT_DATA = (SERVICE_NAME=books.company.com)(SERVER=POOLED)))

2.12.8 Benefiting from the Scalability of DRCP in an OCI Application

Consider the following three types of application scenarios and note how each benefits from DRCP:

  • Applications that neither use OCI session pooling nor specify any connection class or purity setting (or that specify a purity setting of NEW) get a new session from DRCP. Similarly, when such an application releases a connection back to the pool, the session is not shared with other instances of the same application by default. SQL*Plus is an example of a client that does not use OCI session pooling; it holds on to connections even when a connection is idle, so the pooled server remains assigned to the client until the client session ends or logs off. The application does, however, get the benefit of reusing an existing pooled server process.

  • Applications that use the OCISessionGet() call outside of an OCI session pool, but specify a connection class and set purity=SELF, can reuse both DRCP pooled server processes and sessions. However, following an OCISessionRelease() call, OCI terminates the connection to the connection broker. On the next OCISessionGet() call, the application reconnects to the broker, and DRCP then assigns a pooled server (and session) belonging to the specified connection class. Reconnecting, however, incurs the cost of connection establishment and reauthentication. Such applications achieve better sharing of DRCP resources (processes and sessions) but do not get the benefit of cached connections to the connection broker.

  • Applications that use OCI session pool APIs and specify the connection class and set purity=SELF make full use of the DRCP functionality through reuse of both the pooled server process and the associated session. They get the benefit of cached connections to the connection broker. Cached connections do not incur the cost of reauthentication on the OCISessionGet() call.

2.12.9 Benefiting from the Scalability of DRCP in a Java Application

Java applications have the same benefits as the OCI applications described in Section 2.12.8.

Any existing Universal Connection Pool (UCP) customer can upgrade to DRCP without code changes (only configuration changes are needed). This is also true for customers who use ConnectionPoolDataSource as the connection factory.

2.12.10 Best Practices for Using DRCP

The steps to design an application that can benefit from the full power of DRCP are very similar to the steps required for an application that uses the OCI session pool as described in the sections about using OCI session pooling and OCI calls for session pooling in Oracle Call Interface Programmer's Guide.

The only additional step is that for best performance, when deployed to run with DRCP, the application must specify an explicit connection class setting.

Multiple instances of the same application must specify the same connection class setting for best performance and enhanced sharing of DRCP resources. Ensure that the different instances of the application can share database sessions.

Example 2-6 shows a database resident connection pooling (DRCP) application.

Example 2-6 Database Resident Connection Pooling Application

/* Assume that all necessary handles are allocated. */
 
/*   This middletier uses a single database user. Create a homogeneous
     client-side session pool */
OCISessionPoolCreate (envhp, errhp, spoolhp, &poolName, &poolNameLen, "BOOKSDB",
    strlen("BOOKSDB"), 0, 10, 1, "SCOTT", strlen("SCOTT"), "password",
    strlen("password"), OCI_SPC_HOMOGENEOUS);
 
while (1)
{
   /* Process a client request */
   WaitForClientRequest();
   /* Application function */
 
   /* Set the Connection Class on the OCIAuthInfo handle that is passed as
      argument to OCISessionGet*/
   
   OCIAttrSet (authInfop, OCI_HTYPE_AUTHINFO,  "BOOKSTORE", strlen("BOOKSTORE"),
               OCI_ATTR_CONNECTION_CLASS, errhp);
 
   /* Purity need not be set, as default is OCI_ATTR_PURITY_SELF for OCISessionPool
       connections */
 
   /* You can get a SCOTT session released by Mid-tier 2 */
   OCISessionGet(envhp, errhp, &svchp, authInfop, poolName, poolNameLen, NULL, 0,
                 NULL, NULL, NULL, OCI_SESSGET_SPOOL); 
 
   /* Database calls using the svchp obtained above  */
   OCIStmtExecute(...)
 
   /* This releases the pooled server on the database for reuse */
   OCISessionRelease (svchp, errhp, NULL, 0, OCI_DEFAULT);
}
 
/* Mid-tier is done - exiting */
OCISessionPoolDestroy (spoolhp, errhp, OCI_DEFAULT);

Example 2-7 and Example 2-8 show two deployment examples, each based on code in Example 2-6, in which code is deployed in 10 middle-tier hosts servicing the BOOKSTORE application.

For the first deployment example, assume that the database is Oracle Database 12c (or earlier) running in dedicated server mode with DRCP not enabled, and that the client side has 12c libraries. Example 2-7 shows the connect string to use for this deployment. In this case, the application obtains dedicated server connections from the database.

Example 2-7 Connect String to Use for a Deployment in Dedicated Server Mode with DRCP Not Enabled

BOOKSDB = (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=oraclehost.company.com)
   (PORT=1521))(CONNECT_DATA = (SERVICE_NAME=books.company.com)))

For the second deployment example, assume that DRCP is enabled on the Oracle Database 12c database. Now all the middle-tier processes can benefit from the pooling capabilities offered by DRCP. The database resource requirement with DRCP is much less than what would be required with dedicated server mode. Example 2-8 shows how you change the connect string for this type of deployment.

Example 2-8 Connect String to Use for a Deployment with DRCP Enabled

BOOKSDB = (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=oraclehost.company.com)
  (PORT=1521))(CONNECT_DATA = (SERVICE_NAME=books.company.com)(SERVER=POOLED)))

2.12.11 Compatibility and Migration

An OCI application linked with Oracle Database 12c client libraries works unaltered against:

  • An Oracle Database 12c database with DRCP disabled

  • A database server from a release earlier than Oracle Database 12c

  • An Oracle Database 12c database server with DRCP enabled, when deployed with the DRCP connect string

Suitable clients benefit from enhanced scalability offered by DRCP if they are appropriately modified to use the OCI session pool APIs with the connection class and purity settings as previously described.

Beginning with Oracle Database 12c Release 1 (12.1.0.1), Oracle JDBC drivers support Database Resident Connection Pooling (DRCP). See the chapter on DRCP in Oracle Database JDBC Developer's Guide for more information.

2.12.12 Restrictions on Using Database Resident Connection Pooling

The following actions cannot be performed or used with pooled servers:

  • Shutting down the database

  • Stopping the database resident connection pool

  • Changing the password for the connected user

  • Using shared database links to connect to a database resident connection pool that is on a different instance

  • Using Advanced Security Options (ASO) with TCPS

  • Using Enterprise user security with DRCP

  • Using migratable sessions on the server side, directly by using the OCI_MIGRATE option or indirectly by using the OCISessionPoolCreate() call

  • Using initial client roles

  • Using Application context attributes such as OCI_ATTR_APPCTX_NAME and OCI_ATTR_APPCTX_VALUE

Perform DDL statements that pertain to database users in the pool carefully, because pre-DDL sessions in the pool can still be given to clients after the DDL executes. For example, before dropping a user, ensure that there are no sessions of that user in the pool and no connections to the broker that were authenticated as that user.

Sessions with explicitly enabled roles that are released to the pool can later be handed out to connections (of the same user) that need the default logon role. Avoid releasing sessions with explicit roles enabled; instead, terminate them.

Note:

You can use Oracle Advanced Security features such as encryption and strong authentication with DRCP.

Users can mix data encryption and data integrity combinations; however, each such combination must be segregated by connection class. For example, if an application must specify AES256 encryption for one set of connections and DES for another set, then it must specify a different connection class for each set.

2.12.13 Using DRCP with Custom Pools

DRCP is well integrated with OCI session pooling, as described in Section 2.12.4. Oracle highly recommends using the OCI session pool, which is already integrated with DRCP, FAN, and runtime connection load balancing.

However, if an application is built using its own custom connection pool (or if the application uses no pooling but has periods of time when the session is not used and the application does not depend on getting back the specific session for correctness), it can still integrate with DRCP. You can do this by enabling the OCI_ATTR_SESSION_STATE attribute as described in Oracle Call Interface Programmer's Guide in the section that describes marking sessions explicitly as stateful or stateless.

When an application marks a session as OCI_SESSION_STATELESS, OCI uses this annotation to transparently return the session to the DRCP pool (when DRCP is enabled). Similarly, when the application marks a session as OCI_SESSION_STATEFUL, OCI uses the changed annotation to transparently check out an appropriate session from the DRCP pool.

Applications must mark session state as promptly as possible to enable efficient utilization of the underlying database resources.

Note:

Other DRCP attributes such as connection class and purity still must be specified as previously described in detail.

2.12.14 Marking Sessions Explicitly as Stateful or Stateless

An application typically requires a specific database session for the duration of a unit of work. The session is said to be STATEFUL for this duration. At the end of this unit of work, if the application does not depend on retaining the specific session for subsequent units of work, the session is said to be STATELESS.

As the application detects when a session transitions from being STATEFUL to STATELESS and vice versa, the application can explicitly inform OCI regarding these transitions by using OCI_ATTR_SESSION_STATE.

This indication by the application or caller can allow OCI and Oracle Database to take advantage of this information for transparently performing certain scalability optimizations. For example, the session could be given to someone else who needs it when the application is not working on it. Or, the session could be replaced by a different session when the application needs it again.

See Also:

Section 2.12.13, "Using DRCP with Custom Pools," for more information

Here is an example of marking the state of sessions in a code fragment:

wait_for_transaction_request();
do {
 
ub1 state;
 
/* mark  database session as STATEFUL  */
state = OCI_SESSION_STATEFUL;
checkerr(errhp, OCIAttrSet(usrhp, OCI_HTYPE_SESSION,
        &state, 0, OCI_ATTR_SESSION_STATE, errhp));
/* do database work consisting of one or more related calls to the database */
 
...
 
/* done with database work, mark session as stateless */
state = OCI_SESSION_STATELESS;
checkerr(errhp, OCIAttrSet(usrhp, OCI_HTYPE_SESSION,
         &state, 0,OCI_ATTR_SESSION_STATE, errhp));
 
wait_for_transaction_request();
 
} while (!done);

If a session is obtained from outside an OCI session pool, the session starts as OCI_SESSION_STATEFUL and remains OCI_SESSION_STATEFUL throughout the life of the session unless the application explicitly changes it to OCI_SESSION_STATELESS.

If a session is obtained from an OCI session pool, the session is by default marked as OCI_SESSION_STATEFUL when the first call is initiated on that session after getting it from the pool. The session is also by default marked as being OCI_SESSION_STATELESS when it is released to the pool. Hence, there is no need to set this attribute explicitly with an OCI session pool. OCI session pooling does this transparently. Use this attribute only if you are not using OCI session pooling.

See Also:

Oracle Call Interface Programmer's Guide for more information about OCI_ATTR_SESSION_STATE

2.12.15 DRCP with Real Application Clusters

When the Database Resident Connection Pool is configured in a database in a Real Application Clusters environment, the pool configuration is applied to each of the database instances. Starting or stopping the pool on one instance starts or stops the pool on all the instances.

2.12.16 DRCP with Pluggable Databases

The Database Resident Connection Pool in a multitenant container database (CDB) is configured and managed in the root container. Configuring, starting, and stopping the pool must be done while connected to the root container. The pool maintains pooled servers for the different pluggable databases to which clients connect using different service names.

2.12.17 DRCP with Data Guard

The Database Resident Connection Pool has certain conditions to operate in a Data Guard environment.

  • Starting the pool: The pool can be started on a physical standby database only if the pool has already been started on the primary database. If the pool is down on the primary database, it cannot be started on the standby database.

  • Stopping the pool: The pool cannot be stopped on a physical standby database while it is up and running on the primary database. It can be stopped on the standby only if the pool is not running on the primary database.

  • Pool parameters cannot be configured, restored to defaults, or altered on a physical standby database.

  • When role reversal takes place, that is, when the primary database goes down and the standby database takes over the role of the primary database, the limitations mentioned previously for the physical standby database no longer hold. Because the standby database has become the primary, all pool operations are allowed.

  • On a logical standby database, all pool operations are allowed.

2.13 Using Runtime Connection Load Balancing

Topics:

2.13.1 About Runtime Connection Load Balancing

Oracle Real Application Clusters (Oracle RAC) is a database option in which a single database is hosted by multiple instances on multiple nodes. The Oracle RAC shared disk method of clustering databases increases scalability. The nodes can easily be added or freed to meet current needs and improve availability, because if one node fails, another can assume its workload. Oracle RAC adds high availability and failover capacity to the database, because all instances have access to the whole database.

Balancing of work requests occurs at two different times: at connect time and at runtime. These are referred to as connect time load balancing (provided by Oracle Net Services) and runtime connection load balancing. For Oracle RAC environments, session pools use service metrics received from the Oracle RAC load balancing advisory through Fast Application Notification (FAN) events to balance application session requests. The work requests coming into the session pool can be distributed across the instances of Oracle RAC offering a service, using the current service performance.

Connect time load balancing occurs when a session is first created by the application. It is necessary that the sessions that are part of the pool be well distributed across Oracle RAC instances, when they are first created. This ensures that sessions on each of the instances get a chance to execute work.

Runtime connection load balancing routes work requests to sessions in a session pool that best serve the work. It occurs when an application selects a session from an existing session pool and thus is a very frequent activity. For session pools that support services at one instance only, the first available session in the pool is adequate. When the pool supports services that span multiple instances, there is a need to distribute the work requests across instances so that the instances that are providing better service or have greater capacity get more requests.

OCI, OCCI, JDBC, and ODP.NET client applications all support runtime connection load balancing.

2.13.2 Enabling and Disabling Runtime Connection Load Balancing

Runtime connection load balancing is enabled by default for OCI, OCCI, and JDBC in an Oracle Database 11g Release 1 (11.1) or later client communicating with a server of Oracle Database 10g Release 2 (10.2) or later, provided that the following conditions are met:

  • For OCI clients, to ensure your application receives service metrics based on service time, you must link the application with the threads library, create the OCI environment in OCI_EVENTS and OCI_THREADED mode, and configure the load balancing advisory goal and the connection load balancing goal for a service that is used by the session pool.

  • For OCCI clients, you must link the application with the threads library, create the OCCI environment in EVENTS and THREADED_MUTEXED mode, and configure the load balancing advisory goal and the connection load balancing goal for a service that is used by the session pool.

  • In the JDBC environment, runtime connection load balancing relies on the Oracle Notification Service (ONS) infrastructure. It uses the same out-of-band ONS event mechanism that is used for Fast Connection Failover (FCF) processing. As a result, runtime connection load balancing is enabled by default when Fast Connection Failover is enabled. There is no additional setup or configuration of ONS required to benefit from runtime connection load balancing.

Runtime connection load balancing is disabled by default for ODP.NET client applications. To enable runtime connection load balancing, include "Load Balancing=true" in the connection string.
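For example, a complete ODP.NET connection string with runtime connection load balancing enabled might look like the following (the user, password placeholder, and data source name are illustrative):

```
"user id=hr;password=<password>;data source=orcl;pooling=true;Load Balancing=true"
```

Note that ODP.NET runtime connection load balancing operates on pooled connections, so connection pooling must remain enabled (it is enabled by default).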

FAN HA (FCF) for OCI, OCCI, and ODP.NET requires that the AQ_HA_NOTIFICATIONS attribute of the service be TRUE.
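For example, this attribute can be set with the DBMS_SERVICE package, as in the following sketch (the service name myService is illustrative):

```sql
EXEC DBMS_SERVICE.MODIFY_SERVICE('myService', -
     aq_ha_notifications => TRUE);
```

In an Oracle RAC environment, cluster-managed services are typically administered with the srvctl utility; DBMS_SERVICE is shown here for brevity.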

To disable runtime connection load balancing, perform the following actions in each of the supported interfaces:

  • In an OCI application, set the mode parameter to OCI_SPC_NO_RLB when calling OCISessionPoolCreate() to disable runtime connection load balancing.

  • In an OCCI application, use the NO_RLB option for the PoolType attribute of the StatelessConnectionPool class to disable runtime connection load balancing.

  • In a JDBC application, call setFastConnectionFailoverEnabled() with a value of false to disable runtime connection load balancing.

  • In an ODP.NET client application, omit "Load Balancing=true" from the connection string; runtime connection load balancing is disabled by default.

2.13.3 Receiving Load Balancing Advisory FAN Events

To receive load balancing advisory FAN events, the following requirements must be met:

  • An Oracle RAC environment with Oracle Clusterware must be set up and enabled.

  • The server must be configured to issue event notifications.

  • The application must be linked with the threads library.

  • The OCI environment must be created in OCI_EVENTS and OCI_THREADED mode.

  • The OCCI environment must be created in THREADED_MUTEXED and EVENTS modes.

  • You must configure or modify the Oracle Real Application Clusters database using the DBMS_SERVICE package. The service must be modified to set up its goal and the connection load balancing goal as follows:

    EXEC DBMS_SERVICE.MODIFY_SERVICE('myService', -
         goal => DBMS_SERVICE.GOAL_SERVICE_TIME, -
         clb_goal => DBMS_SERVICE.CLB_GOAL_SHORT);
    

    The constant GOAL_SERVICE_TIME indicates that the load balancing advisory is based on elapsed time for work done in the service plus available bandwidth to the service. The constant CLB_GOAL_SHORT indicates that connection load balancing uses the load balancing advisory, when the load balancing advisory is enabled. You can instead set the connection load balancing goal to CLB_GOAL_LONG; however, this is mostly useful for closed workloads, that is, when the rate of completing work is equal to the rate of starting new work.
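    To confirm the goals that are set for a service, you can query the DBA_SERVICES data dictionary view, as in the following sketch (the service name is illustrative; the GOAL and CLB_GOAL columns report the current settings):

```sql
SELECT name, goal, clb_goal
  FROM dba_services
 WHERE name = 'myService';
```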
