Oracle9i Real Application Clusters Deployment and Performance
Release 1 (9.0.1)

Part Number A89870-02

Appendix B: A Case Study in Real Application Clusters Database Design

This appendix describes a case study that presents a methodology for designing systems optimized for Oracle Real Application Clusters.

Case Study Overview

The case study presented in this appendix provides techniques for designing new applications for use with Real Application Clusters. You can also use these techniques to evaluate existing applications and determine how well suited they are for migration to Real Application Clusters.


Note:

Always remember that your goal is to minimize contention: doing so results in optimized performance.  


This case study assumes you have made an initial database design. To optimize your Real Application Clusters design, follow this methodology:

  1. Develop an initial database design.

  2. Analyze access to tables.

  3. Analyze transaction volume.

  4. Decide how to partition users and data.

  5. Decide how to partition indexes, if necessary.

  6. Implement and tune your design.

    See Also:

    Part II, "Scaling Applications and Designing Databases for Real Application Clusters", for detailed information on this methodology 

Case Study: From Initial Database Design to Real Application Clusters

This case study is a practical demonstration of analytical techniques. Although your specific applications will differ from the example in this appendix, this case study should help you understand the process. The topics in this section are Eddie Bean Catalog Sales, Eddie Bean Database Tables, Eddie Bean Users, and The Eddie Bean Application Profile.

Eddie Bean Catalog Sales

The case study is about the fictitious Eddie Bean catalog sales company. This company has many order entry clerks processing telephone orders for various products. Shipping clerks fill orders and accounts receivable clerks handle billing. Accounts payable clerks handle orders for supplies and services the company requires internally. Sales managers and financial analysts run reports on the data. The company's financial application comprises three business processes that operate on a single database.

Eddie Bean Database Tables

Tables from the Eddie Bean database include:

Table B-1 Eddie Bean Sample Tables

  Table             Contents
  ----------------  ------------------------------------------------------------
  order_header      Order number, customer name and address.
  order_item        Products ordered, quantity, and price.
  organizations     Names, addresses, phone numbers of customers and suppliers.
  accounts_payable  Tracks the company's internal purchase orders and payments
                    for supplies and services.
  budget            Balance sheet of the company's expenses and income.
  forecasts         Projects future sales and records current performance.
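
To make the schema concrete, the following is a minimal DDL sketch for the two order tables. The column names, datatypes, and sizes are illustrative assumptions; the case study does not specify exact table definitions.

  -- Hypothetical definitions: column names and datatypes are assumed
  -- for illustration and are not part of the case study.
  CREATE TABLE order_header (
    order_id       NUMBER        NOT NULL,
    customer_name  VARCHAR2(80),
    address        VARCHAR2(200),
    status         VARCHAR2(20),
    CONSTRAINT order_header_pk PRIMARY KEY (order_id)
  );

  CREATE TABLE order_item (
    order_id  NUMBER        NOT NULL,
    item_no   NUMBER        NOT NULL,
    product   VARCHAR2(40),
    quantity  NUMBER,
    price     NUMBER(10,2),
    CONSTRAINT order_item_pk PRIMARY KEY (order_id, item_no),
    CONSTRAINT order_item_fk FOREIGN KEY (order_id)
      REFERENCES order_header (order_id)
  );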

Eddie Bean Users

Various types of application users access the database to perform different functions: order entry clerks, accounts payable clerks, accounts receivable clerks, shipping clerks, sales managers, and financial analysts.

The Eddie Bean Application Profile

Operation of the Eddie Bean application is fairly consistent throughout the day: order entry, order processing, and shipping occur all day. These functions are not, for example, segregated into separate one-hour time slots.

About 500 orders are entered each day. Each order header is updated about four times during its lifetime, so you can expect about four times as many updates as inserts. There are many selects, because many employees query order headers: people doing sales work, financial work, shipping, tracing the status of orders, and so on.

There are, on average, four items for each order. Order items are never updated: instead, an item can be deleted and another item entered. The order_header table has four indexes; each of the other tables has a primary key index only.

Budget and forecast activity has a much lower volume than the order tables. They are read frequently, but modified infrequently. Forecasts are updated more often than budgets, and are deleted once they go into actuals.

The vast bulk of the deletes are performed as a nightly batch job. This maintenance activity does not, therefore, need to be included in the analysis of normal functioning of the application.

Analyzing Access to Tables

Begin by analyzing the existing (or expected) access patterns for tables in your database. Then decide how to partition the tables and group them according to access pattern.
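
If the application already exists, you can measure some of these volumes instead of estimating them. For example, in Oracle9i you can enable table monitoring and read approximate write counts from DBA_TAB_MODIFICATIONS. This is only a sketch; the schema name EB is an assumption.

  -- Begin collecting approximate insert/update/delete counts.
  ALTER TABLE order_header MONITORING;
  ALTER TABLE order_item MONITORING;

  -- Later, review the accumulated counts (requires DBA privileges;
  -- the view is refreshed periodically, not in real time).
  SELECT table_name, inserts, updates, deletes
  FROM   dba_tab_modifications
  WHERE  table_owner = 'EB';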

Table Access Analysis Worksheet

List all your high-activity database tables in a worksheet like the one shown in Table B-2:

Table B-2 Table Access Analysis Worksheet

  Daily access volume. (Select operations are read access; insert, update,
  and delete operations are write access. Ops = number of operations;
  I/Os = number of I/Os those operations generate.)

                    Select        Insert        Update        Delete
  Table Name        Ops    I/Os   Ops    I/Os   Ops    I/Os   Ops    I/Os
  ----------------  -----  -----  -----  -----  -----  -----  -----  -----
  ________________  _____  _____  _____  _____  _____  _____  _____  _____
  ________________  _____  _____  _____  _____  _____  _____  _____  _____
  ________________  _____  _____  _____  _____  _____  _____  _____  _____

To complete this worksheet, estimate the volume of each type of operation. Then calculate the number of reads and writes (I/Os) that the operations entail.

Estimating Volume of Operations

For each type of operation to be performed on a table, enter a value reflecting the normal volume you would expect in a day.


Note:

The emphasis throughout this analysis is on relative values: gross figures describing the normal use of an application. Even if an application does not yet exist, you can project the types of users and estimate relative levels of activity. Maintenance activity on the tables is generally not relevant to this analysis. 


Calculating I/Os for Each Operation

For each value in the Operations column, calculate the number of I/Os that the operation generates, using a worst-case scenario. SELECT operations involve read access; INSERT, UPDATE, and DELETE operations involve both read and write access. These operations access not only data blocks, but also any related index blocks.


Note:

The number of I/Os generated by each operation varies from table to table, depending on the table's access path, its size, and the number of indexes it has. A small index, for example, might have only a single index branch block. 


For example, Figure B-1 illustrates read and write access to data in a large table in which the two lower levels of the index are not in the buffer cache and only a high-level index block is cached in the System Global Area (SGA).

Figure B-1 Number of I/Os for Each SELECT or INSERT Operation



In this example, assuming that you are accessing data by way of a primary key, a SELECT requires three I/Os:

  1. One I/O to read the first lower level index block.

  2. One I/O to read the second lower level index block.

  3. One I/O to read the data block.


    Note:

    If all of the root and branch blocks are in the SGA, then a SELECT can entail only two I/Os: read leaf index block, read data block. 


An INSERT or DELETE statement requires at least five I/Os:

  1. One I/O to read the data block.

  2. One I/O to write the data block.

  3. Three I/Os for each index: two to read the index blocks and one to write the index block.

One UPDATE in this example entails seven I/Os:

  1. One I/O to read the first lower level index block.

  2. One I/O to read the second lower level index block.

  3. One I/O to read the data block.

  4. One I/O to write the data block.

  5. One I/O to read the first lower level index block again.

  6. One I/O to read the second lower level index block again.

  7. One I/O to write the index block.


    Note:

    An INSERT or DELETE affects all indexes, but an UPDATE sometimes affects only one index. Check the number of changed index keys.  


I/Os for Each Operation for Sample Tables

In the case study, the number of I/Os for each operation differs from table to table because the number of indexes differs from table to table.
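
In general, under the worst-case assumptions above, you can approximate the cost of a write operation as one read plus one write of the data block, plus three I/Os for each index that must be maintained. For the order_header table, with its four indexes, an INSERT or DELETE therefore costs about 2 + (3 x 4) = 14 I/Os; for the tables with a primary key index only, it costs 2 + (3 x 1) = 5 I/Os.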

Table B-3 shows how many I/Os are generated by each type of operation on the order_header table. It assumes that the order_header table has four indexes.

Table B-3 Number of I/Os for each Operation: Sample ORDER_HEADER Table

  Operation       SELECT  INSERT      UPDATE      DELETE
  --------------  ------  ----------  ----------  ----------
  Type of Access  read    read/write  read/write  read/write
  Number of I/Os  3       14          7           14


Note:

Adjust these figures depending upon the actual number of indexes and access path for each table in your database.  


Table B-4 shows how many I/Os are generated by each operation for each of the other tables in the case study, assuming that each of them has a primary key index only.

Table B-4 Number of I/Os for each Operation: Other Sample Tables

  Operation       SELECT  INSERT      UPDATE      DELETE
  --------------  ------  ----------  ----------  ----------
  Type of Access  read    read/write  read/write  read/write
  Number of I/Os  3       5           7           5

For this analysis, you can disregard the fact that changes made to data also generate rollback segment writes, entailing additional I/Os. These I/Os are instance-based, so they should not cause problems with your Real Application Clusters applications.

See Also:

Oracle9i Database Concepts for more information about indexes 

Case Study: Table Access Analysis

Table B-5 shows rough figures reflecting normal use of the application in the case study.

Table B-5 Case Study: Table Access Analysis Worksheet

  Daily access volume. (Select operations are read access; insert, update,
  and delete operations are write access. Ops = operations; a dash means
  no significant activity.)

                    Select           Insert         Update          Delete
  Table Name        Ops     I/Os     Ops    I/Os    Ops    I/Os     Ops    I/Os
  ----------------  ------  -------  -----  ------  -----  -------  -----  ------
  order_header      20,000  60,000   500    7,000   2,000  14,000   1,000  14,000
  order_item        60,000  180,000  2,000  10,000  -      -        4,030  20,150
  organizations     40,000  120,000  10     50      100    700      -      -
  budget            300     900      -      -       14     98       -      -
  forecasts         500     1,500    -      -       10     70       10     50
  accounts_payable  230     690      50     250     20     140      -      -

The data in Table B-5 shows that write activity is concentrated in the order_header, order_item, and accounts_payable tables; these tables are analyzed further in the following section.

Analyzing Transaction Volume by Users

Having identified the tables with high write access, analyze the volume of transactions that each type of user runs against those tables. Then decide how to group users according to the data they access.

Transaction Volume Analysis Worksheet

For each table with a high volume of write access, analyze the transaction volume per day for each type of user.


Note:

For read-only tables, you do not need to analyze transaction volume by user type. 


Use worksheets like the one in Table B-6:

Table B-6 Transaction Volume Analysis Worksheet

  Table Name: ____________________

  Daily transaction volume. (Select operations are read access; insert,
  update, and delete operations are write access. Ops = operations.)

                             Select        Insert        Update        Delete
  Type of User        Users  Ops    I/Os   Ops    I/Os   Ops    I/Os   Ops    I/Os
  ------------------  -----  -----  -----  -----  -----  -----  -----  -----  -----
  __________________  _____  _____  _____  _____  _____  _____  _____  _____  _____
  __________________  _____  _____  _____  _____  _____  _____  _____  _____  _____
  __________________  _____  _____  _____  _____  _____  _____  _____  _____  _____

Begin by estimating the volume of transactions by each type of user and then calculate the number of I/Os required.
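
For example, if the 25 order entry clerks divide the 500 daily orders evenly, each clerk inserts about 20 order headers per day. At 14 I/Os for each insert on order_header, that is 500 x 14 = 7,000 write I/Os per day for the group, which is the figure that appears in Table B-7.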

Case Study: Transaction Volume Analysis

The following tables show transaction volume analysis of the three tables in the case study that have high write access levels: order_header, order_item, and accounts_payable.

ORDER_HEADER Table

Table B-7 shows rough estimates for values in the order_header table in the case study.

Table B-7 Case Study: Transaction Volume Analysis: ORDER_HEADER Table

  Table Name: ORDER_HEADER. Daily transaction volume. (Ops = operations;
  a dash means no activity.)

                             Users   Select          Insert        Update        Delete
  Type of User                       Ops     I/Os    Ops    I/Os   Ops    I/Os   Ops   I/Os
  -------------------------  -----   ------  ------  -----  -----  -----  -----  ----  ----
  Order entry clerk          25      5,000   15,000  500    7,000  -      -      -     -
  Accounts payable clerk     -       -       -       -      -      -      -      -     -
  Accounts receivable clerk  -       6,000   18,000  -      -      1,000  7,000  -     -
  Shipping clerk             -       4,000   12,000  -      -      1,000  7,000  -     -
  Sales manager              -       3,000   9,000   -      -      -      -      -     -
  Financial analyst          -       2,000   6,000   -      -      -      -      -     -

You can draw the following conclusions from the data in Table B-7: deletes are performed as a maintenance operation, so you do not need to consider them in this analysis. Furthermore, the application developers know that sales managers normally access data for the current month, whereas financial analysts access mostly historical data.

ORDER_ITEM Table

Table B-8 shows rough estimates for values in the order_item table in the case study.

Table B-8 Case Study: Transaction Volume Analysis: ORDER_ITEM Table

  Table Name: ORDER_ITEM. Daily transaction volume. (Ops = operations;
  a dash means no activity.)

                             Users   Select          Insert         Update      Delete
  Type of User                       Ops     I/Os    Ops    I/Os    Ops   I/Os  Ops   I/Os
  -------------------------  -----   ------  ------  -----  ------  ----  ----  ----  ----
  Order entry clerk          25      15,000  45,000  2,000  10,000  -     -     20    100
  Accounts payable clerk     -       -       -       -      -       -     -     -     -
  Accounts receivable clerk  -       18,000  54,000  -      -       -     -     10    50
  Shipping clerk             -       12,000  36,000  -      -       -     -     -     -
  Sales manager              -       9,000   27,000  -      -       -     -     -     -
  Financial analyst          -       6,000   18,000  -      -       -     -     -     -

You can draw the following conclusions from the data in Table B-8: the order_header table receives more updates than order_item because an order's header tends to require changes of status, such as address changes, during its lifetime. The order_item table is seldom updated because new items are listed as journal entries.

ACCOUNTS_PAYABLE Table

Table B-9 shows rough figures for the accounts_payable table in the case study. Although this table does not have a particularly high level of write access, it is analyzed here because it supports the main operation that the accounts payable clerks perform.

Table B-9 Case Study: Transaction Volume Analysis: ACCOUNTS_PAYABLE Table

  Table Name: ACCOUNTS_PAYABLE. Daily transaction volume. (Ops = operations;
  a dash means no activity.)

                             Users   Select        Insert        Update        Delete
  Type of User                       Ops    I/Os   Ops    I/Os   Ops    I/Os   Ops   I/Os
  -------------------------  -----   -----  -----  -----  -----  -----  -----  ----  ----
  Order entry clerk          25      -      -      -      -      -      -      -     -
  Accounts payable clerk     -       200    600    50     250    20     140    -     -
  Accounts receivable clerk  -       -      -      -      -      -      -      -     -
  Shipping clerk             -       -      -      -      -      -      -      -     -
  Sales manager              -       -      -      -      -      -      -      -     -
  Financial analyst          -       30     90     -      -      -      -      -     -

You can draw the following conclusion from the data in Table B-9: deletes are performed as a maintenance operation, so you do not need to consider them in this analysis.

Case Study: Initial Partitioning Plan

In the case study, the large number of order entry clerks performing heavy insert activity on the order_header and order_item tables should not be separated across machines. Instead, concentrate these users on one node along with the two tables they use most. A good starting point is to set aside one node for the order entry clerks and one node for all other users, as illustrated in Figure B-2.

Figure B-2 Case Study: Partitioning Users and Data



The system in Figure B-2 is probably well balanced across nodes. The database-intensive reporting done by financial analysts takes a significant amount of system resources, whereas the transactions run by the order entry clerks are relatively simple.

Balancing load by adjusting the distribution of users across the system is typically useful, but not always critical. Reducing contention has a more significant effect on tuning than load balancing does.
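
One way to implement this kind of user-to-node assignment in Oracle9i is with Oracle Net connect descriptors that route each user group to a particular instance. The following tnsnames.ora entry is a sketch only; the alias, host, service, and instance names are assumptions, not part of the case study.

  # Hypothetical alias used only by the order entry clerks; it routes
  # their sessions to the instance running on node 1.
  oe_node1 =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = node1)(PORT = 1521))
      (CONNECT_DATA =
        (SERVICE_NAME = eb)
        (INSTANCE_NAME = eb1)
      )
    )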

Case Study: Further Partitioning Plans

In the case study, it is also clear that accounts payable data is written exclusively by accounts payable clerks. You can therefore effectively partition this data onto a separate instance, as shown in Figure B-3.

Figure B-3 Case Study: Partitioning Users and Data: Design Option 1


When all users needing write access to a certain part of the data are concentrated on one node, the global enqueues for that data all reside on that node, so resource ownership does not move between instances. Based on this analysis, you have the two design options described under the following headings.

Design Option 1

You can set up your system as shown in Figure B-3, with all order entry clerks on one instance, to minimize contention for exclusive GCS resources on the order tables. This arrangement allows sales managers and financial analysts to get up-to-the-minute information. Because the data they want is predominantly historical, there should not be much contention for current records.

Design Option 2

Alternatively, you could implement a separate temporary table for order_item and order_header, as shown in Figure B-4. This table records only new order information. Overnight, you could incorporate its contents into the main tables, against which all queries are performed, as sketched below. This solution is acceptable only if the financial analysts are primarily interested in historical data; it would not be appropriate if they needed up-to-the-minute data.
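
As a sketch of how the overnight incorporation might work, assume a staging table named new_order_item with the same columns as order_item; both the table name and the job structure are illustrative assumptions.

  -- Nightly batch job: move the day's new items into the main table,
  -- then empty the staging table for the next business day.
  INSERT INTO order_item
    SELECT * FROM new_order_item;
  COMMIT;

  TRUNCATE TABLE new_order_item;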

Figure B-4 Case Study: Partitioning Users and Data: Design Option 2



Partitioning Indexes

You need to consider index partitioning if multiple nodes in your system are inserting into the same index. In this situation, you must ensure that different instances insert into different points within the index.


Note:

This problem is avoided in the Eddie Bean case study because application and data usage are partitioned. 
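
If you cannot partition the inserting users, one common technique (not specific to this case study) is a reverse key index, which stores the key bytes in reverse order so that consecutive sequence-generated key values land in different leaf blocks; note that reverse key indexes do not support index range scans. Another approach is to prefix each key with the instance number so that each instance inserts into its own range of the index. A minimal sketch, assuming concurrent inserts into an index on order_item are the problem:

  -- Hypothetical index: REVERSE spreads consecutive order_id values
  -- across different leaf blocks, reducing contention for the
  -- right-most index block during concurrent inserts.
  CREATE INDEX order_item_order_ix ON order_item (order_id) REVERSE;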


Implement and Tune Your Design

Up to this point, you have conducted your analysis using estimated figures. To finalize your design, you must now either prototype the application or actually implement it. By observing the actual system, you can tune it further: monitor for contention, compare the actual operation counts and I/O volumes with your estimates, and adjust your partitioning of users and data accordingly. One way to observe inter-instance contention is sketched below.
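
For example, the global cache statistics in V$SYSSTAT give a rough picture of inter-instance block traffic under Real Application Clusters; values that grow quickly relative to the total workload suggest contention that better partitioning could reduce. A minimal sketch:

  -- Run on each instance: high 'global cache' counts relative to the
  -- total workload suggest blocks are moving between instances.
  SELECT name, value
  FROM   v$sysstat
  WHERE  name LIKE 'global cache%';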


Copyright © 1996-2001, Oracle Corporation. All Rights Reserved.