Oracle8i Designing and Tuning for Performance
Release 2 (8.1.6)

Part Number A76992-01






Understanding Oracle Performance Tuning

The Oracle server is a sophisticated and highly tunable software product. Its flexibility allows you to make small adjustments that affect database performance. By tuning your system, you can tailor its performance to best meet your needs.

Tuning begins in the system planning and design phases and continues throughout the life of your system. Carefully consider performance issues during the planning phase, and it will be easier to tune your system during production.

This chapter contains the following sections:

What Is Performance Tuning?
Who Tunes?
Setting Performance Targets
Setting User Expectations
Evaluating Performance

What Is Performance Tuning?

When considering performance, you should understand several fundamental concepts as described in this section:

Trade-offs Between Response Time and Throughput
Critical Resources
Effects of Excessive Demand
Adjustments to Relieve Problems

Trade-offs Between Response Time and Throughput

Goals for tuning vary, depending on the needs of the application. Online transaction processing (OLTP) applications define performance in terms of throughput. These applications must process thousands or even millions of very small transactions per day. By contrast, decision support system (DSS) applications define performance in terms of response time. Demands on the database made by users of DSS applications vary dramatically. One moment they may enter a query that fetches only a few records, and the next moment they may enter a massive parallel query that fetches and sorts hundreds of thousands of records from different tables. Throughput becomes more of an issue when an application must support a large number of users running DSS queries.

Response Time

Because response time equals service time plus wait time, you can increase performance two ways: by reducing service time or by reducing wait time.

Figure 1-1 illustrates ten independent tasks competing for a single resource.

Figure 1-1 Sequential Processing of Multiple Independent Tasks

In this example, only task 1 runs without having to wait. Task 2 must wait until task 1 has completed; task 3 must wait until tasks 1 and 2 have completed, and so on. (Although the figure shows the independent tasks as the same size, the sizes of the tasks vary.)


In parallel processing, if you have multiple resources, then more resources can be assigned to the tasks. Each independent task executes immediately using its own resource: no wait time is involved.  
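The contrast between the single-resource case of Figure 1-1 and the fully parallel case can be shown with a small calculation. The figure itself gives no durations, so the 1-second task time below is an illustrative assumption:

```python
# Completion times for ten independent tasks on one shared resource
# versus one resource per task. The 1-second task time is an
# illustrative assumption; the figure itself gives no durations.

TASK_TIME = 1.0   # seconds per task (assumed)
NUM_TASKS = 10

# Single resource: task n cannot start until tasks 1..n-1 finish.
sequential_finish = [n * TASK_TIME for n in range(1, NUM_TASKS + 1)]

# One resource per task: every task starts at once, with no wait time.
parallel_finish = [TASK_TIME] * NUM_TASKS

print(sequential_finish[-1])  # 10.0 - the last task waits for all the others
print(parallel_finish[-1])    # 1.0  - no task waits
```

With one resource, total completion time grows linearly with the number of queued tasks; with a resource per task, it stays at a single service time.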

System Throughput

System throughput equals the amount of work accomplished in a given amount of time. There are two techniques for increasing throughput:

Get more work done with the same resources, by reducing service time.
Get the work done more quickly, by reducing wait time.

Wait Time

The service time for a task may stay the same, but wait time increases as contention increases. If ten users are waiting for a service that takes 1 second, then the tenth user must wait 9 seconds before being served.
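The arithmetic behind the ten-user example, together with the earlier identity that response time equals service time plus wait time, can be sketched as follows (users are assumed to arrive simultaneously):

```python
# Wait time and response time for the n-th user queued behind a
# 1-second service, as in the example above.

SERVICE_TIME = 1.0  # seconds

def wait_time(position):
    """Time the user at this 1-based queue position spends waiting."""
    return (position - 1) * SERVICE_TIME

def response_time(position):
    """Response time = service time + wait time."""
    return SERVICE_TIME + wait_time(position)

print(wait_time(10))      # 9.0  - the tenth user waits 9 seconds
print(response_time(10))  # 10.0
```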

Figure 1-2 Wait Time Rising with Increased Contention for a Resource

Critical Resources

Resources such as CPUs, memory, I/O capacity, and network bandwidth are key to reducing service time. Added resources make higher throughput possible and facilitate swifter response time. Performance depends on the following:

How many resources are available
How many clients need the resource
How long they must wait for the resource
How long they hold the resource

Figure 1-3 shows that as the number of units requested rises, the time to service completion rises.

Figure 1-3 Time to Service Completion vs. Demand Rate
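The shape of the curve in Figure 1-3 is the classic queueing result: completion time rises slowly at first, then steeply as demand approaches capacity. The chapter does not name a model, but under the simplifying M/M/1 assumption the average time to completion is 1/(service rate - demand rate), which this sketch evaluates (the service rate of 10 requests per second is invented for illustration):

```python
# Average time to completion in an M/M/1 queue: W = 1 / (mu - lam),
# where mu is the service rate and lam the demand (arrival) rate.
# The M/M/1 model and the rate of 10/sec are illustrative assumptions.

def time_to_completion(demand_rate, service_rate=10.0):
    if demand_rate >= service_rate:
        raise ValueError("demand exceeds achievable throughput")
    return 1.0 / (service_rate - demand_rate)

for lam in (1, 5, 9, 9.9):
    print(lam, round(time_to_completion(lam), 2))
# At a demand rate of 9.9 the average completion time is 100x
# what it is at a demand rate of 0 load near zero capacity margin.
```

Note how a demand rate of 9.9 yields a completion time 100 times larger than the near-idle case, matching the steep right-hand side of the figure.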

To manage this situation, you have two options:

You can limit demand, keeping the demand rate below achievable throughput.
You can add resources, raising the achievable throughput.

Effects of Excessive Demand

Excessive demand gives rise to the following:

Greatly increased response time
Reduced throughput

If there is any possibility of demand rate exceeding achievable throughput, then a demand limiter is essential.
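One way to build such a demand limiter is admission control with a counting semaphore: requests beyond a fixed concurrency cap simply wait their turn instead of overloading the resource. The class and names below are an illustrative sketch, not an Oracle facility:

```python
# A minimal demand limiter: at most max_concurrent requests run at
# once; excess demand queues. Names and the cap of 2 are illustrative.
import threading
import time

class DemandLimiter:
    """Admits at most max_concurrent requests; excess demand waits."""

    def __init__(self, max_concurrent):
        self._slots = threading.BoundedSemaphore(max_concurrent)

    def __enter__(self):
        self._slots.acquire()  # block until a slot frees up
        return self

    def __exit__(self, *exc_info):
        self._slots.release()

limiter = DemandLimiter(max_concurrent=2)
active, peak = 0, 0
lock = threading.Lock()

def handle_request():
    global active, peak
    with limiter:                # demand beyond 2 queues here
        with lock:
            active += 1
            peak = max(peak, active)
        time.sleep(0.01)         # simulated service time
        with lock:
            active -= 1

threads = [threading.Thread(target=handle_request) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(peak)  # never exceeds 2
```

Eight requests arrive at once, but the limiter keeps concurrency at or below the cap, trading some wait time for predictable throughput.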

Figure 1-4 Increased Response Time/Reduced Throughput

Adjustments to Relieve Problems

You can relieve performance problems by making the following adjustments:

Adjusting unit consumption 

You can relieve some problems by using fewer resources per transaction or by reducing service time. Or you can take other approaches, such as reducing the number of I/Os per transaction. 

Adjusting functional demand 

Other problems can be solved by rescheduling or redistributing the work. 

Adjusting capacity 

You can also relieve problems by increasing or reallocating resources. 

For example, if your system's busiest times are from 9:00 AM to 10:30 AM and from 1:00 PM to 2:30 PM, then you can run batch jobs in the background after 2:30 PM when there is more capacity. Thus, you can spread the demand more evenly. Alternatively, you can allow for delays at peak times.

Figure 1-5 Adjusting Capacity and Functional Demand

Who Tunes?

Everyone involved with the system has a role in tuning. When people communicate and document the system's characteristics, tuning becomes significantly easier and faster.

Figure 1-6 Who Tunes the System?

Decisions made in application development and design have the greatest effect on performance. Once the application is deployed, the database administrator usually has the primary responsibility for tuning.

See Also:

Chapter 16, "Diagnosing System Performance Problems" for problem-solving methods that can help identify and solve performance problems. 

Setting Performance Targets

Whether you are designing or maintaining a system, you should set specific performance goals so that you know when to tune. You may waste time tuning your system if you alter initialization parameters or SQL statements without a specific goal.

When designing your system, set a goal such as "achieving an order entry response time of less than three seconds for 90% of transactions". If the application does not meet that goal, then identify the bottleneck that prevents this (for example, I/O contention), determine the cause, and take corrective action. During development, test the application to determine whether it meets the designed performance goals before deploying the application.
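A goal phrased as "less than three seconds for 90% of transactions" can be checked mechanically against measured response times. The sample measurements below are invented for illustration:

```python
# Check "response time under 3 seconds for 90% of transactions"
# against a sample of measured times (values here are made up).

def meets_goal(times, limit=3.0, fraction=0.90):
    """True if at least `fraction` of the times are under `limit`."""
    within = sum(1 for t in times if t < limit)
    return within / len(times) >= fraction

measured = [1.2, 0.8, 2.9, 3.5, 1.1, 2.2, 0.9, 1.7, 2.8, 4.1]
print(meets_goal(measured))  # False: only 8 of 10 are under 3 seconds
```

A concrete pass/fail test like this is what makes "identify the bottleneck, determine the cause, and take corrective action" possible: without it, there is no objective signal that tuning is done.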

Tuning is usually a series of trade-offs. Once you have identified bottlenecks, you may need to sacrifice other system resources to achieve the desired results. For example, if I/O is a problem, you may need to purchase more memory or more disks. If a purchase is not possible, then you may need to limit the concurrency of the system to achieve the desired performance. However, with clearly defined performance goals, the decision on what resource to relinquish in exchange for improved performance is simpler because you have identified the most important areas.


At no time should achieving performance goals override your ability to recover data. Performance is important, but ability to recover data is critical.  

Setting User Expectations

Application developers and database administrators must be careful to set appropriate performance expectations for users. When the system performs a particularly complicated operation, response time may be slower than when it is performing a simple operation. In this case, slower response time is not unreasonable.

If a DBA promises 1-second response time, then consider how this might be interpreted. The DBA might mean that the operation would take 1 second in the database--and might well be able to achieve this goal. However, users querying over a network might experience a delay of a couple of seconds due to network traffic: they may not receive the response they expect in 1 second.

Evaluating Performance

With clearly defined performance goals, you can readily determine when performance tuning has been successful. Success depends on the functional objectives you have established with the user community, your ability to measure objectively whether the criteria are being met, and your ability to take corrective action to overcome exceptions. The rest of this tuning manual describes the tuning methodology in detail with information about diagnostic tools and the types of corrective actions you can take.

DBAs responsible for solving performance problems must remember all factors that together affect response time. Sometimes what initially seems like the most obvious source of a problem is actually not the problem at all. Users in the preceding example might conclude that there is a problem with the database, whereas the actual problem is with the network. A DBA must monitor the network, disk, CPU, application design, and so on, to identify the actual source of the problem--rather than simply assume that all performance problems stem from the database.

Ongoing performance monitoring enables you to maintain a well-tuned system. You can make useful comparisons by keeping a history of the application's performance over time. Data showing resource consumption for a broad range of load levels helps you conduct objective scalability studies. From such detailed performance history you can begin to predict the resource requirements for future load levels.
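A performance history of the kind described above supports simple extrapolation. This sketch fits a least-squares line to invented (load, CPU-use) observations and projects resource consumption at a future load level; the data points and the assumption of linear scaling are illustrative only:

```python
# Fit a least-squares line through (load, resource-use) history
# points, then extrapolate to a future load. Data are invented.

def fit_line(xs, ys):
    """Return (slope, intercept) of the least-squares line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

loads = [100, 200, 300, 400]       # e.g. concurrent users
cpu   = [22.0, 41.0, 62.0, 79.0]   # e.g. % CPU consumed

slope, intercept = fit_line(loads, cpu)
print(round(slope * 500 + intercept, 1))  # projected % CPU at 500 users: 99.0
```

The projection of 99% CPU at 500 users is exactly the kind of early warning a detailed performance history can provide before the load actually arrives.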

See Also:

Chapter 11, "Overview of Diagnostic Tools".  

Copyright © 1996-2000, Oracle Corporation.

All Rights Reserved.