
Oracle9i Database Performance Tuning Guide and Reference
Release 2 (9.2)

Part Number A96533-01

13 Building a Database for Performance

One of the first stages in managing a database is the initial database creation. Although performance modifications can be made to both the database and to the Oracle instance at a later time, much can be gained by carefully designing the database for the intended needs. This chapter contains the following sections:

Initial Database Creation

Creating Subsequent Tablespaces

Creating Tables for Good Performance

Loading and Indexing Data

Initial Instance Configuration

Setting up Operating System, Database, and Network Monitoring

Initial Database Creation

Database Creation Using the Installer

The Oracle Universal Installer lets you create a database during software installation or at a later time using the Database Configuration Assistant. This is an efficient way of creating databases for small to medium-sized environments, and it provides a straightforward graphical user interface. However, this procedure limits the available configuration options, so it is not recommended for database creation in larger environments.

Manual Database Creation

A manual approach provides full freedom for different configuration options. This is especially important for larger environments. A manual approach typically involves designing multiple parameter files for use during the various stages, running SQL scripts for the initial CREATE DATABASE and subsequent CREATE TABLESPACE statements, running necessary data dictionary scripts, and so on.

Parameters Necessary for Initial Database Creation

The initialization parameter file is read whenever an Oracle instance is started, including the very first start before the database is created. There are very few parameters that cannot be modified at a later time. The most important parameters to set correctly at database creation time are the following:

Parameter Description

DB_BLOCK_SIZE

Sets the size of the Oracle database blocks stored in the database files and cached in the SGA. The range of values depends on the operating system, but it is typically powers of two in the range 2048 to 16384. Common values are 4096 or 8192 for transaction processing systems and higher values for data warehouse systems.

DB_NAME and DB_DOMAIN

Set the name of the database and the domain name of the database, respectively. Although they can be changed at a later time, it is highly advisable to set these correctly before the initial creation. The names chosen must be reflected in the SQL*Net configuration as well.

COMPATIBLE

Specifies the release with which the Oracle server must maintain compatibility. It lets you take advantage of the maintenance improvements of a new release immediately in your production systems without testing the new functionality in your environment. If your application was designed for a specific release of Oracle, and you are actually installing a later release, then you might want to set this parameter to the version of the previous release.


Note:

Compatibility must be set to Release 2 (9.2) or higher to have a locally managed SYSTEM tablespace. If EXTENT MANAGEMENT LOCAL is specified at CREATE DATABASE time, the database cannot be migrated back to a pre-9.2 release and the SYSTEM tablespace cannot be migrated to dictionary-managed. All tablespaces subsequently created must be locally managed.


The CREATE DATABASE Statement

The first SQL statement executed after startup of the initial instance is the CREATE DATABASE statement. It creates the initial SYSTEM tablespace, creates the initial redo log files, and sets certain database options. The following options cannot be changed at a later time, or can be changed only with difficulty:

Database Option Description

Character set

The character set specified by this option identifies the character set used for SQL text, for the internal data dictionary, and most importantly for text stored as datatypes CHAR, VARCHAR, or VARCHAR2. After data including any type of national characters has been loaded, the character set cannot be changed.

National character set

This character set is used for the datatypes NCHAR, NVARCHAR2, and NCLOB. In general, as with the regular character set, it cannot be changed.

SQL.BSQ file

This creates the internal data dictionary. For information on modifying this file, see Chapter 15, "I/O Configuration and Design".

Location of initial datafile

The initial datafile(s) that will make up the SYSTEM tablespace should be set with care. They can be modified later, but the procedure involves a complete shutdown of the instance.

EXTENT MANAGEMENT LOCAL

Use the EXTENT MANAGEMENT LOCAL clause to create a locally managed SYSTEM tablespace. AUTOALLOCATE is the default; extent sizes are chosen by the system. EXTENT SIZE UNIFORM is not available for the SYSTEM tablespace.

Default temporary tablespace

If you specify EXTENT MANAGEMENT LOCAL, then you must also specify the default temporary tablespace.

MAXDATAFILES

The maximum number of data files.

MAXLOGFILES

The maximum number of log files.




Example 13-1 Sample CREATE DATABASE Script

CONNECT SYS/ORACLE AS SYSDBA
STARTUP NOMOUNT pfile=/u01/admin/init_create.ora
CREATE DATABASE "dbname"
DATAFILE '/u01/oradata/system01.dbf' size 200M
LOGFILE  '/u02/oradata/redo01.dbf' size 100M,
         '/u02/oradata/redo02.dbf' size 100M
CHARACTER SET "WE8ISO8859P1"
NATIONAL CHARACTER SET "UTF8"
EXTENT MANAGEMENT LOCAL
DEFAULT TEMPORARY TABLESPACE mytemp TEMPFILE 'temp.f' SIZE 1000M
MAXDATAFILES = 50
MAXLOGFILES = 5;
See Also:

Oracle9i SQL Reference for detailed information about the CREATE DATABASE statement

Running Data Dictionary Scripts

After running the CREATE DATABASE statement, certain catalog scripts must be executed. They are found in the RDBMS/ADMIN directory on UNIX or the RDBMS\ADMIN directory on Windows, under the ORACLE_HOME directory. The following scripts must be executed.

Script Description

CATALOG.SQL

Needed for all normal data dictionary views

CATPROC.SQL

Needed to load the initial PL/SQL environment


Note:

When specific options or features are in use (for example, Java or replication), more scripts are necessary. These are documented with each individual option.


Example 13-2 Executing Required Data Dictionary Scripts

CONNECT SYS/ORACLE AS SYSDBA
@@CATALOG
@@CATPROC

The use of the double at-sign forces execution of these scripts from the proper directory.

Sizing Redo Log Files

The size of the redo log files can influence performance, because the behavior of the database writer and archiver processes depends on the redo log sizes. Generally, larger redo log files provide better performance; small log files can increase checkpoint activity and reduce performance. Because the recommendation on I/O distribution for high performance is to use separate disks for the redo log files, there is no reason not to make them large. A potential problem with large redo log files is that they are a single point of failure if redo log mirroring is not in effect.

It is not possible to provide a specific size recommendation for redo log files, but redo log files in the range of a hundred megabytes to a few gigabytes are considered reasonable. Size your online redo log files according to the amount of redo your system generates. A rough guide is to switch logs at most once every twenty minutes.
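As a rough check of the current switch rate, you can query V$LOG_HISTORY. The following query is only a sketch; it groups the recorded log switches by hour:

SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS switch_hour,
       COUNT(*)                               AS log_switches
  FROM v$log_history
 GROUP BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
 ORDER BY switch_hour;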

The complete set of required redo log files can be created during database creation. After they are created, the size of a redo log file cannot be changed. However, new, larger files can be added later, and the original (smaller) ones can subsequently be dropped.
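For example, larger redo log groups can be added and the original, smaller groups dropped once they are no longer active. The group numbers, file names, and sizes in this sketch are hypothetical:

ALTER DATABASE ADD LOGFILE GROUP 4 ('/u02/oradata/redo04.dbf') SIZE 500M;
ALTER DATABASE ADD LOGFILE GROUP 5 ('/u02/oradata/redo05.dbf') SIZE 500M;
-- Switch out of the old groups and drop them after they become
-- inactive and have been archived:
ALTER SYSTEM SWITCH LOGFILE;
ALTER DATABASE DROP LOGFILE GROUP 1;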

Not much can be done to speed up the creation of the initial database and the loading of the necessary data dictionary views from the catalog SQL files. These steps must be run serially, one after the other.


Note:

Although the size of the redo log files does not affect LGWR performance, it can affect DBWR and checkpoint behavior.


Creating Subsequent Tablespaces

After creating the initial database, several extra tablespaces must be created. All databases should have at least three tablespaces in addition to the SYSTEM tablespace: a temporary tablespace, which is used for operations such as sorting; a rollback tablespace, which either stores rollback segments or serves as the automatic undo management (undo) tablespace; and at least one tablespace for actual application use. In most cases, applications require several tablespaces. For extremely large tablespaces with many datafiles, multiple ALTER TABLESPACE x ADD DATAFILE y statements can also be run in parallel, as shown in the sketch that follows.
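For example, datafiles can be added to a large tablespace from several sessions at once. The tablespace name and file paths in this sketch are hypothetical:

-- Session 1:
ALTER TABLESPACE appdata ADD DATAFILE '/u04/oradata/appdata02.dbf' SIZE 1000M;
-- Session 2:
ALTER TABLESPACE appdata ADD DATAFILE '/u05/oradata/appdata03.dbf' SIZE 1000M;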

During tablespace creation, the datafiles that make up the tablespace are initialized with special "empty" block images. TEMPFILES are not initialized.

Oracle does this to ensure that all datafiles can be written in their entirety, but this can obviously be a lengthy process if done serially. Therefore, run multiple CREATE TABLESPACE statements concurrently to speed up the tablespace creation process. The most efficient way to do this is to run one SQL script for each set of available disks.

For permanent tables, the choice between locally managed and dictionary-managed extents at tablespace creation can have a large effect on performance. For any permanent tablespace that has moderate to large insert, modify, or delete operations compared to reads, choose local extent management.

Creating Permanent Tablespaces - Automatic Segment-Space Management

For permanent tablespaces, Oracle Corporation recommends using automatic segment-space management. Such tablespaces (often referred to as bitmapped tablespaces) are locally managed tablespaces with bitmap segment space management. They are available with Oracle9i Release 1 (9.0.1) and later.
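The following statement is a minimal sketch of creating such a tablespace; the tablespace name and datafile path are hypothetical:

CREATE TABLESPACE appdata_auto DATAFILE
  '/u02/oradata/appdata_auto01.dbf' SIZE 1000M
  EXTENT MANAGEMENT LOCAL
  SEGMENT SPACE MANAGEMENT AUTO;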

See Also:

Oracle9i Database Administrator's Guide for more information on creating and using automatic segment-space managed tablespaces

Creating Temporary Tablespaces

Temporary tablespaces can be dictionary-managed or locally managed. With Oracle9i Release 1 (9.0.1) and later, Oracle Corporation recommends use of locally managed temporary tablespaces. Example 13-3 shows how you can create a temporary tablespace with local extent management:

Example 13-3 How to Create Tablespaces Concurrently

CONNECT SYSTEM/MANAGER
CREATE TABLESPACE appdata DATAFILE
  '/u02/oradata/appdata01.dbf' size 1000M;
CREATE TEMPORARY TABLESPACE mytemp TEMPFILE 'temp.f' SIZE 1000M;

In another session:

CONNECT SYSTEM/MANAGER
CREATE TABLESPACE appindex DATAFILE
  '/u03/oradata/appindex01.dbf' size 1000M;

Creating Tables for Good Performance

When you create a segment, such as a table, Oracle allocates space in the database for the data. If subsequent database operations cause the data volume to increase and exceed the space allocated, then Oracle extends the segment.

When installing applications, an initial step is to create all necessary tables and indexes. This operation is by itself relatively fast, and not much is gained by doing it in parallel. However, some things require attention:

Data Segment Compression

Data segment compression reduces disk use and memory use (specifically, the buffer cache), often leading to a better scaleup for read-only operations. Data segment compression can also speed up query execution.
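For example, a table can be defined as compressed when it is created, or an existing table can be rebuilt in compressed format. The table names in this sketch are hypothetical:

CREATE TABLE sales_history COMPRESS AS SELECT * FROM sales;
-- Compress an existing table by rebuilding it:
ALTER TABLE sales_history MOVE COMPRESS;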

Tuning to Achieve a Better Compression Ratio

Oracle9i Release 2 (9.2) achieves a good compression ratio in many cases with no special tuning. However, if you need a better compression ratio, tuning can improve it slightly in some cases and very substantially in other cases.

Heap-organized block-level compression works by eliminating column value repetitions within individual blocks. This is done by moving such repeated column values into a shared block-level symbol table and replacing occurrences with references into the symbol table. Therefore, the compression ratio is higher in blocks that have more repeated values. As a database administrator or application developer, you can take advantage of this fact by reorganizing rows in the segments that you want to compress, to increase the likelihood of such repetitions.

For a single-column table, you can order table rows by the column value, using a CREATE TABLE AS SELECT with an ORDER BY clause.

You can also apply this method to a table in which one column has low cardinality and other columns have high cardinalities: Order the table rows by the low cardinality column.
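The following sketch applies this technique to a hypothetical table sales with a low-cardinality column region_code:

CREATE TABLE sales_compressed COMPRESS AS
  SELECT * FROM sales
  ORDER BY region_code;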

Data dictionary views that report column statistics, such as the NUM_DISTINCT column of DBA_TAB_COL_STATISTICS and ALL_TAB_COL_STATISTICS, contain information about column cardinalities within segments (the statistics must have been gathered first).

Example 13-4 Estimating Data Segment Compression/Decompression Ratio

You can estimate the compression/decompression ratio for a table TTT by using the following procedure, which allows automatic cleanup:

  1. Make sampling repeatable:
    ALTER SESSION SET EVENTS '10193 trace name context forever, level 1';
    
    
  2. Spawn a dbms_job to execute once after 1 minute with the following body (cleanup procedure):
    LOCK TABLE ttt$aaa1 IN EXCLUSIVE MODE;
    DROP TABLE ttt$aaa1; 
    DROP TABLE ttt$aaa2;
     
    
  3. Create an empty compressed table:
    CREATE TABLE ttt$aaa1 COMPRESS AS SELECT * FROM TTT WHERE ROWNUM < 1; 
    LOCK TABLE ttt$aaa1 IN SHARE MODE;
     
    
  4. Create an empty uncompressed table, then insert the same block sample into both tables:
    CREATE TABLE ttt$aaa2 NOCOMPRESS AS SELECT * FROM TTT WHERE ROWNUM < 1; 
    INSERT /*+ APPEND */ INTO ttt$aaa1 SELECT * FROM ttt SAMPLE BLOCK(x,y);
    INSERT /*+ APPEND */ INTO ttt$aaa2 SELECT * FROM ttt SAMPLE BLOCK(x,y);
     
    

The data segment compression ratio is the number of blocks in table ttt$aaa1 divided by the number of blocks in table ttt$aaa2.
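One way to obtain these block counts is sketched below. DBMS_ROWID.ROWID_BLOCK_NUMBER returns the block number of a row, so counting distinct block numbers gives the number of blocks actually used. Run the queries after the sampled inserts have been committed:

SELECT COUNT(DISTINCT DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID)) AS compressed_blocks
  FROM ttt$aaa1;
SELECT COUNT(DISTINCT DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID)) AS uncompressed_blocks
  FROM ttt$aaa2;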


Note:

The first time you add a compressed partition to a partitioned table that currently contains only uncompressed partitions, you must either drop all existing bitmap indexes and bitmap index partitions or mark them UNUSABLE before adding the compressed partition, even if the new partition contains no data. Operations for which this condition is relevant include adding, splitting, merging, and moving partitions.


See Also:

Oracle9i SQL Reference for block group sampling syntax SAMPLE BLOCK(x,y)

Loading and Indexing Data

Many applications need to load data as part of the initial application installation process. This can be fixed data, such as postal codes or other lookup data, or it can be actual data originating in older systems. Oracle's SQL*Loader tool is the most efficient way to load a substantial amount of data.

Using SQL*Loader for Good Performance

When running SQL*Loader, you can specify either the conventional path or the direct path. The conventional path uses ordinary SQL INSERT statements to load data into the tables, which means that the loading can be done concurrently with other database operations. However, the loading is then also limited by normal INSERT performance. For quick loading of large amounts of data, choose the direct path. With the direct path, the loading process bypasses SQL and writes directly into the database blocks. During this type of load, normal operations on the table (or partition, for partitioned tables) cannot be performed.
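For example, a direct path load is requested with the DIRECT=TRUE command-line option. The user ID and control file name in this sketch are hypothetical:

sqlldr USERID=scott/tiger CONTROL=load_sales.ctl DIRECT=TRUE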

The following tips could help speed up the data loading process using SQL*Loader:

Efficient Index Creation

The most efficient way to create indexes is to create them after data has been loaded. By doing this, space management becomes much simpler, and no index maintenance takes place for each row inserted. SQL*Loader automatically does this, but if you are using other methods to do initial data load, you might need to do this manually. Additionally, index creation can be done in parallel using the PARALLEL clause of the CREATE INDEX statement. However, SQL*Loader is not able to do this, so you must manually create indexes in parallel after loading data.
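The following statement is a sketch of creating an index in parallel after the load. The table, column, and index names are hypothetical; the optional COMPUTE STATISTICS clause gathers index statistics during the build (see the note later in this section):

CREATE INDEX sales_ix1 ON sales (order_date)
  PARALLEL 4
  COMPUTE STATISTICS;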

During index creation on tables that contain data, the data must be sorted. This sorting is done in the fastest possible way if all available memory is used for sorting. You can control this with the initialization parameter SORT_AREA_SIZE. Set the value of this parameter using the following rules:

  1. Find the amount of available memory by subtracting the size of the SGA and the size of the operating system from the total system memory.
  2. Divide this amount by the number of parallel slaves that you will use; this is typically the same as the number of CPUs.
  3. Subtract a process overhead, typically five to ten megabytes, to get the value for SORT_AREA_SIZE.


    Note:

    You can also save time on index creation operations (or fast rebuilds) with on-the-fly statistics generation (the COMPUTE STATISTICS clause of CREATE INDEX).


Example of Creating Indexes Efficiently

A system with 512 Mb of memory runs an Oracle instance with a 100 Mb SGA, and the operating system uses 50 Mb. Hence, the memory available for sorting is 362 Mb (512 minus 100 minus 50). If the system has four CPUs and you run with four parallel slaves, then each of these has about 90 Mb available. Setting aside 10 Mb for process overhead leaves 80 Mb, so SORT_AREA_SIZE should be set to 80 Mb. This can be done either in the initialization file or for each session, with a statement like the following:

ALTER SESSION SET SORT_AREA_SIZE = 80000000;

Initial Instance Configuration

A running Oracle instance is configured using startup parameters, which are set in the initialization parameter file. These parameters influence the behavior of the running instance, including its performance. In general, a very simple initialization file with few relevant settings covers most situations, and the initialization file should not be the first place you look when tuning performance, except for the few parameters listed in Table 13-2.

See Also:

Chapter 14, "Memory Configuration and Use"

Table 13-1 describes the parameters necessary in a minimal initialization file. Although these parameters are necessary, they have no performance impact:

Table 13-1 Necessary Initialization Parameters Without Performance Impact
Parameter Description

DB_NAME

Name of the database. This should match the ORACLE_SID environment variable.

DB_DOMAIN

Location of the database in Internet dot notation.

OPEN_CURSORS

Limit on the maximum number of cursors (active SQL statements) for each session. The setting is application-dependent, and the default, in many cases, is sufficient.

CONTROL_FILES

Set to contain at least two files on different disk drives to prevent failures from control file loss.

DB_FILES

Set to the maximum number of files that can be assigned to the database.

Table 13-2 includes the most important parameters to set with performance implications:

Table 13-2 Important Initialization Parameters With Performance Impact
Parameter Description

DB_BLOCK_SIZE

Sets the database block size.

DB_CACHE_SIZE

Size of the buffer cache in the SGA. There are no good and simple rules for setting a value, which is very application dependent, but typical values are in the range of twenty to fifty buffers (database blocks) for each user session. This value is more often set too high than too low. The older DB_BLOCK_BUFFERS parameter is deprecated in favor of DB_CACHE_SIZE.

SHARED_POOL_SIZE

Sets the size of the shared pool in the SGA. The setting is application-dependent, but it is typically in the range of a few to a few tens of megabytes for each user session.

PROCESSES

Sets the maximum number of processes that can be started by that instance. This is the most important primary parameter to set, because many other parameter values are deduced from this.

SESSIONS

This is set by default from the value of processes. However, if you are using the shared server, then the deduced value is likely to be insufficient.

JAVA_POOL_SIZE

If you are using Java stored procedures, then this parameter should be set depending on the actual requirements of memory for the Java environment.

LOG_ARCHIVE_XXX

Enables redo log archiving. See Oracle9i User-Managed Backup and Recovery Guide.

ROLLBACK_SEGMENTS

Allocates one or more rollback segments by name to this instance. If you set this parameter, the instance acquires all of the rollback segments named in this parameter, even if the number of rollback segments exceeds the minimum number required by the instance (calculated as TRANSACTIONS / TRANSACTIONS_PER_ROLLBACK_SEGMENT).

Example of a Minimal Initialization File

In many cases, only the parameters mentioned in the following example need to be set to appropriate values for the Oracle instance to be reasonably well tuned. Here is an example of such an initialization file:

DB_NAME = finance
DB_DOMAIN = hq.company.com
CONTROL_FILES = ('/u01/database/control1.dbf', '/u02/database/control2.dbf')
DB_BLOCK_SIZE = 8192
DB_CACHE_SIZE = 100000000 # Buffer cache of approximately 100 Mb
DB_FILES = 200 # Maximum 200 files in the database
SHARED_POOL_SIZE = 100000000 # 100 Mb
PROCESSES = 80 # Would be good for approximately 70
# directly connected users
# log_archive_XXX   
# Set various archiving parameters 

Configuring Undo Space

Oracle needs undo space to keep information for read consistency, for recovery, and for actual rollback statements. This is kept either in rollback segments or in one or more automatic undo management tablespaces.

The V$UNDOSTAT view contains statistics for monitoring and tuning undo space. Using this view, you can better estimate the amount of undo space required for the current workload. Oracle also uses this information to help tune undo usage in the system. The V$ROLLSTAT view contains information about the behavior of the undo segments in the undo tablespace.
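For example, the following query (a sketch) lists the undo blocks consumed, the number of transactions, and the longest query duration for each ten-minute interval recorded in V$UNDOSTAT:

SELECT begin_time, end_time, undoblks, txncount, maxquerylen
  FROM v$undostat
 ORDER BY begin_time;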

See Also:

Oracle9i Database Administrator's Guide for information about managing undo space

Setting up Operating System, Database, and Network Monitoring

To effectively diagnose performance problems, it is vital to have an established performance baseline for later comparison when the system is running poorly. Without a baseline data point, it can be very difficult to identify new problems. For example, perhaps the volume of transactions on the system has increased, or the transaction profile or application has changed, or the number of users has increased.

After the database is created, tables are created, data is loaded and indexed, and the instance is configured, it is time to set up monitoring procedures.

See Also:

Chapter 20, "Oracle Tools to Gather Database Statistics"

