This chapter describes how to initially configure your data warehouse environment. It contains the following topics:
The procedures in this section describe how to configure Oracle Database for use as a data warehouse. Subsequently, you configure Oracle Warehouse Builder (OWB), which leverages Oracle Database and provides graphical user interfaces to design data management strategies.
To set up a data warehouse system:
Size and configure your hardware as described in "Preparing the Environment".
Optimize the Database for use as a data warehouse as described in "Setting Up a Database for a Data Warehouse".
Access the Oracle Warehouse Builder software.
Follow the instructions in "Accessing Oracle Warehouse Builder". Subsequently, you can install a demonstration to help you learn how to complete common data warehousing tasks using Warehouse Builder.
The basic components for a data warehousing architecture are similar to an online transaction processing (OLTP) system. However, because of the size and volume of data, the hardware configuration and data throughput requirements for a data warehouse are unique. The starting point for sizing a data warehouse is the throughput that you require from the system. When sizing, use one or more of the following criteria:
The amount of data accessed by queries during peak time and the acceptable response time.
The amount of data that is loaded within a window of time.
In general, you must estimate the highest throughput required at any given point.
Hardware vendors can recommend balanced configurations for a data warehousing application and can help you with the sizing. Contact your preferred hardware vendor for more details.
Central processing units (CPUs) provide the calculation capabilities in a data warehouse. You must have sufficient CPU power to perform the data warehouse operations. Parallel operations are more CPU-intensive than the equivalent number of serial operations.
Use the estimated highest throughput as a guideline for the number of CPUs required. As a rough estimate, use the following formula:
<number of CPUs> = <maximum throughput in MB/s> / 200
When you use this formula, you assume that a CPU can sustain up to about 200 MB per second. For example, if you require a maximum throughput of 1200 MB per second, then the system needs:
<number of CPUs> = 1200 / 200 = 6 CPUs
A configuration with one server with 6 CPUs can service this system. A 2-node clustered system could be configured with 3 CPUs in each node.
Memory in a data warehouse is particularly important for processing memory-intensive operations such as large sorts. Access to the data cache is less important in a data warehouse because most of the queries access vast amounts of data. Data warehouses do not have the same memory requirements as mission-critical OLTP applications.
The number of CPUs is a good guideline for the amount of memory you need. Use the following simplified formula to derive the amount of memory you need from the CPUs that you select:
<amount of memory in GB> = 2 * <number of CPUs>
For example, a system with 6 CPUs needs
2 * 6 = 12 GB of memory. Most standard servers fulfill this requirement.
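The CPU and memory formulas above can be combined into a small sizing sketch. This is a planning aid only; `size_warehouse` is a hypothetical helper, and the 200 MB/s-per-CPU and 2 GB-per-CPU figures are the rough rules of thumb from this chapter, not measured hardware limits:

```python
import math

def size_warehouse(peak_throughput_mb_s, mb_s_per_cpu=200, gb_ram_per_cpu=2):
    """Rough CPU and memory sizing from peak throughput,
    using the planning values quoted in this chapter."""
    cpus = math.ceil(peak_throughput_mb_s / mb_s_per_cpu)
    memory_gb = gb_ram_per_cpu * cpus
    return cpus, memory_gb

print(size_warehouse(1200))  # → (6, 12): 6 CPUs and 12 GB of memory
```

Rounding up with `math.ceil` matters: a 700 MB/s requirement sizes to 4 CPUs, not 3.5.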
A common mistake in data warehouse environments is to size the storage based on the maximum capacity needed. Sizing that is based exclusively on storage requirements will likely create a throughput bottleneck.
Use the maximum throughput you require to find out how many disk arrays you need. Use the storage provider's specifications to find out how much throughput a disk array can sustain. Note that storage providers measure in Gbit per second, while your initial throughput estimate is based on MB per second. An average disk controller has a maximum throughput of 2 Gbit per second, which equals a sustainable throughput of about
(70% * 2 Gbit/s) / 8 ≈ 180 MB/s.
Use the following formula to determine the number of disk arrays you need:
<number of disk controllers> = <throughput in MB/s> / <individual controller throughput in MB/s>
For example, a system with 1200 MB per second throughput requires at least 1200 / 180 = 7 disk arrays.
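The conversion and the array-count formula above can be sketched as follows. The function names are hypothetical, and the 70% sustainability factor and 2 Gbit controller rating are the assumptions stated in this chapter:

```python
import math

def sustained_mb_s(rated_gbit_s, efficiency=0.70):
    """Convert a controller's rated Gbit/s into sustainable MB/s,
    using the 70% rule described above (divide by 8 for bits to bytes)."""
    return rated_gbit_s * 1000 * efficiency / 8

def disk_arrays_needed(target_mb_s, rated_gbit_s=2):
    """Number of disk arrays required to sustain the target throughput."""
    return math.ceil(target_mb_s / sustained_mb_s(rated_gbit_s))

print(round(sustained_mb_s(2)))  # → 175, roughly the 180 MB/s quoted above
print(disk_arrays_needed(1200))  # → 7 disk arrays for 1200 MB/s
```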
Ensure you have enough physical disks to sustain the throughput you require. Ask your disk vendor for the throughput numbers of the disks.
The end-to-end I/O system consists of more components than just the CPUs and disks. A well-balanced I/O system must provide approximately the same bandwidth across all components in the I/O system. These components include:
Host bus adapters (HBAs), the connectors between the server and the storage.
Switches, in between the servers and a storage area network (SAN) or network attached storage (NAS).
Ethernet adapters for network connectivity (GigE NIC or Infiniband). In an Oracle Real Application Clusters (Oracle RAC) environment, you need an additional private port for the interconnect between the nodes that you should not include when sizing the system for I/O throughput. The interconnect must be sized separately, taking into account factors such as internode parallel execution.
Wires that connect the individual components.
Each of the components must provide sufficient I/O bandwidth to ensure a well-balanced I/O system. The initial throughput you estimated and the hardware specifications from the vendors are the basis to determine the quantities of the individual components you need. Use the conversion in Table 2-1 to convert the vendors' maximum throughput numbers in bits into sustainable throughput in bytes.
| Component | Bits | Bytes Per Second |
| --- | --- | --- |
| 16 Port Switch | 8 * 2 Gbit | ~1400 MB (using the 70% rule above) |
In addition to having sufficient components to ensure enough I/O bandwidth, the layout of data on the disks is key to success or failure. Even if you configured the system for sufficient throughput across all disk arrays, a query whose data resides on a single disk cannot achieve the required throughput, because that one disk becomes the bottleneck. To avoid this situation, stripe data across as many disks as possible, ideally all of them. A stripe size of 256 KB to 1 MB provides a good balance between multiblock read operations and spreading data across multiple disks.
Before you install Oracle Database, verify your setup on the hardware and operating-system level. The key point to understand is that if the operating system cannot deliver the performance and throughput you need, Oracle Database will not perform according to your requirements. Two tools for verifying throughput are the
dd utility and Orion, an Oracle-supplied tool.
A very basic way to validate the operating system throughput on UNIX or Linux systems is to use the
dd utility. The
dd utility is a common Unix program whose primary purpose is the low-level copying and conversion of raw data. Because there is almost no overhead involved with the dd utility, the output provides a reliable calibration. Oracle Database can reach a maximum throughput of approximately 90 percent of what the
dd utility can achieve.
First, review the most important options for the dd utility:
bs=BYTES: read BYTES bytes at a time; use 1 MB
count=BLOCKS: copy only BLOCKS input blocks
if=FILE: read from FILE; set this to your device
of=FILE: write to FILE; set this to /dev/null to evaluate read performance (writing to disk would erase all existing data)
skip=BLOCKS: skip BLOCKS BYTES-sized blocks at the start of input
To estimate the maximum throughput Oracle Database will be able to achieve, you can mimic a workload of a typical data warehouse application, which consists of large, random sequential disk access.
The following dd commands perform random sequential disk access across two devices, reading a total of 2 GB. The throughput is 2 GB divided by the time it takes for the commands to finish:
dd bs=1048576 count=200 if=/raw/data_1 of=/dev/null &
dd bs=1048576 count=200 skip=200 if=/raw/data_1 of=/dev/null &
dd bs=1048576 count=200 skip=400 if=/raw/data_1 of=/dev/null &
dd bs=1048576 count=200 skip=600 if=/raw/data_1 of=/dev/null &
dd bs=1048576 count=200 skip=800 if=/raw/data_1 of=/dev/null &
dd bs=1048576 count=200 if=/raw/data_2 of=/dev/null &
dd bs=1048576 count=200 skip=200 if=/raw/data_2 of=/dev/null &
dd bs=1048576 count=200 skip=400 if=/raw/data_2 of=/dev/null &
dd bs=1048576 count=200 skip=600 if=/raw/data_2 of=/dev/null &
dd bs=1048576 count=200 skip=800 if=/raw/data_2 of=/dev/null &
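Turning the elapsed time of the dd run into a throughput figure is simple division; the ten commands read 10 × 200 MB = 2000 MB in total. A trivial sketch (`throughput_mb_s` is a hypothetical helper, and the 10-second timing is made up for illustration):

```python
def throughput_mb_s(total_mb, elapsed_s):
    """Throughput in MB/s: total data read divided by elapsed wall-clock time."""
    return total_mb / elapsed_s

# Ten dd commands, each reading 200 blocks of 1 MB; hypothetical 10 s run:
print(throughput_mb_s(10 * 200, 10.0))  # → 200.0 MB/s
```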
In your test, include all the storage devices that you plan to include for your database storage. When you configure a clustered environment, you run
dd commands from every node.
Orion's simulation is closer to the workload the database will produce.
Orion enables you to perform reliable write and read simulations within one simulation.
Oracle recommends you use Orion to verify the maximum achievable throughput, even if a database has already been installed.
The types of supported I/O workloads are as follows:
Small and random
Large and sequential
Large and random
For each type of workload, Orion can run tests at different levels of I/O load to measure performance metrics such as MB per second, I/O per second, and I/O latency. A data warehouse workload is typically characterized by sequential I/O throughput, issued by multiple processes. You can run different I/O simulations depending upon which type of system you plan to build. Examples are the following:
Daily workloads when users or applications query the system
The data load when users may or may not access the system
Index and materialized view builds
See Oracle Database Performance Tuning Guide for more information.
As a general guideline, avoid changing a database parameter unless you have a good reason to do so. You can use Oracle Enterprise Manager to set up your data warehouse. To view various parameter settings, go to the Database page, then click Server. Under Database Configuration, click Memory Parameters or All Initialization Parameters.
Shared memory: Also called the system global area (SGA), this is the memory used by the Oracle instance.
Session-based memory: Also called program global area (PGA), this is the memory that is occupied by sessions in the database. It is used to perform database operations, such as sorts and aggregations.
Oracle Database can automatically tune the distribution of the memory components in these two memory areas. You have a choice between two mutually exclusive options:
Set the MEMORY_TARGET initialization parameter and let the database manage both areas.
Set a size for the SGA (SGA_TARGET) and a size for the aggregate PGA (PGA_AGGREGATE_TARGET).
If you choose the first option, then you need not set other parameters. The database manages all memory for you. If you choose the second option, then you must specify a size for the SGA and a size for the PGA. The database does the rest.
The PGA_AGGREGATE_TARGET parameter is the target amount of memory that you want the total PGA across all sessions to use. As a starting point, you can use the following formula to define the PGA_AGGREGATE_TARGET:
PGA_AGGREGATE_TARGET = 3 * SGA_TARGET
If you do not have enough physical memory for the PGA_AGGREGATE_TARGET to fit in memory, then reduce PGA_AGGREGATE_TARGET.
The MEMORY_TARGET parameter enables you to set a target memory size and the related initialization parameter,
MEMORY_MAX_TARGET, sets a maximum target memory size. The database then tunes to the target memory size, redistributing memory as needed between the system global area (SGA) and aggregate program global area (PGA). Because the target memory initialization parameter is dynamic, you can change the target memory size at any time without restarting the database. The maximum memory size acts as an upper limit so that you cannot accidentally set the target memory size too high. Because certain SGA components either cannot easily shrink or must remain at a minimum size, the database also prevents you from setting the target memory size too low.
You can set an initialization parameter by issuing an ALTER SYSTEM statement, as follows:
ALTER SYSTEM SET SGA_TARGET = 1024M;
A good starting point for a data warehouse is the data warehouse template database that you can select when you run the Database Configuration Assistant (DBCA). However, any database will be acceptable as long as you ensure you take the following initialization parameters into account:
The COMPATIBLE parameter identifies the level of compatibility that the database has with earlier releases. To benefit from the latest features, set the COMPATIBLE parameter to your database release number.
The default value of the DB_BLOCK_SIZE parameter is 8 KB, which is appropriate for most data warehousing needs. If you intend to use table compression, then consider a larger block size.
The DB_FILE_MULTIBLOCK_READ_COUNT parameter enables reading several database blocks in a single operating-system read call. Because a typical workload on a data warehouse consists of many sequential I/Os, ensure you can take advantage of fewer large I/Os as opposed to many small I/Os. When setting this parameter, take into account the block size and the maximum I/O size of the operating system, and use the following formula:
DB_FILE_MULTIBLOCK_READ_COUNT * DB_BLOCK_SIZE = <maximum operating system I/O size>
Maximum operating-system I/O sizes vary between 64 KB and 1 MB.
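The formula above amounts to a single integer division. A quick sketch (`multiblock_read_count` is a hypothetical helper; the 8 KB block size is the default mentioned above, and the 1 MB maximum I/O size is one end of the range just given):

```python
def multiblock_read_count(max_os_io_bytes, db_block_size=8192):
    """DB_FILE_MULTIBLOCK_READ_COUNT per the formula above:
    maximum OS I/O size divided by the database block size."""
    return max_os_io_bytes // db_block_size

print(multiblock_read_count(1024 * 1024))  # → 128 for a 1 MB max I/O, 8 KB blocks
print(multiblock_read_count(64 * 1024))    # → 8 for a 64 KB max I/O
```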
The PARALLEL_MAX_SERVERS parameter sets a resource limit on the maximum number of processes available for parallel execution. Parallel operations need at most twice the number of query server processes as the maximum degree of parallelism (DOP) attributed to any table in the operation.
Oracle Database sets the
PARALLEL_MAX_SERVERS parameter to a default value that is sufficient for most systems. The default value for the
PARALLEL_MAX_SERVERS parameter is as follows:
(CPU_COUNT x PARALLEL_THREADS_PER_CPU x (2 if PGA_AGGREGATE_TARGET > 0; otherwise 1) x 5)
This value might not be enough for parallel queries on tables with higher DOP attributes. Oracle recommends that users who expect to run queries with higher DOP set
PARALLEL_MAX_SERVERS as follows:
2 x DOP x <number_of_concurrent_users>
For example, setting the
PARALLEL_MAX_SERVERS parameter to 64 will allow you to run four parallel queries simultaneously, assuming that each query is using two slave sets with a DOP of eight for each set.
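Both the default formula and the recommended setting quoted above are easy to sanity-check in code. A sketch under the stated assumptions (the function names are hypothetical; PARALLEL_THREADS_PER_CPU defaults to 2 on many platforms, which is an assumption here):

```python
def default_parallel_max_servers(cpu_count, threads_per_cpu=2, pga_target_set=True):
    """Default PARALLEL_MAX_SERVERS per the formula quoted above:
    CPU_COUNT x PARALLEL_THREADS_PER_CPU x (2 if PGA_AGGREGATE_TARGET > 0 else 1) x 5."""
    return cpu_count * threads_per_cpu * (2 if pga_target_set else 1) * 5

def recommended_parallel_max_servers(dop, concurrent_users):
    """Recommended setting for higher-DOP workloads: 2 x DOP x <number_of_concurrent_users>."""
    return 2 * dop * concurrent_users

print(recommended_parallel_max_servers(8, 4))  # → 64, matching the example above
print(default_parallel_max_servers(6))         # → 120 for the 6-CPU system sized earlier
```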
If the hardware system is neither CPU-bound nor I/O bound, then you can increase the number of concurrent parallel execution users on the system by adding more query server processes. When the system becomes CPU-bound or I/O-bound, however, adding more concurrent users becomes detrimental to the overall performance. Careful setting of the
PARALLEL_MAX_SERVERS parameter is an effective method of restricting the number of concurrent parallel operations.
The PARALLEL_ADAPTIVE_MULTI_USER parameter, which can be TRUE or FALSE, defines whether the server uses an algorithm to dynamically determine the degree of parallelism for a particular statement depending on the current workload. To use this feature, set PARALLEL_ADAPTIVE_MULTI_USER to TRUE.
The default for the
QUERY_REWRITE_INTEGRITY parameter is
ENFORCED. In this mode, the database rewrites queries against only up-to-date materialized views, and only if it can rely on enabled and validated primary, unique, and foreign key constraints.
In TRUSTED mode, the optimizer trusts that the data in the materialized views is current and that the hierarchical relationships declared in dimensions and
RELY constraints are correct.
To take advantage of highly optimized star transformations, set the
STAR_TRANSFORMATION_ENABLED parameter to TRUE.
Oracle Warehouse Builder (OWB) enables you to design and deploy various types of data management strategies, including traditional data warehouses.
To enable OWB:
Ensure that you have access to either Oracle Database Enterprise Edition or Standard Edition.
Oracle Database 11g comes with Warehouse Builder server components preinstalled. This includes a schema for the Warehouse Builder repository.
To use the default Warehouse Builder schema installed in Oracle Database, first unlock the schema as follows:
Connect to SQL*Plus as the
SYSDBA user and enter the following commands:
SQL> ALTER USER OWBSYS ACCOUNT UNLOCK;
SQL> ALTER USER OWBSYS IDENTIFIED BY owbsys_passwd;
Start the Warehouse Builder Design Center.
For Windows, select Start, Programs, Oracle, Warehouse Builder, and then Design Center.
For UNIX and Linux, locate owb home/owb/bin/unix and then run the Design Center script in that directory.
Define a workspace and assign a user to the workspace.
In the single Warehouse Builder repository, you can define multiple workspaces with each workspace corresponding to a set of users working on related projects. For instance, you could create a workspace for each of the following environments: development, test, and production.
For simplicity, create one workspace called MY_WORKSPACE and assign a user.
In the Design Center dialog box, click Show Details and then Workspace Management.
The Repository Assistant appears.
Follow the prompts and accept the default settings in the Repository Assistant to create a workspace and assign a user as the workspace owner.
Log in to the Design Center with the user name and password you created.
In subsequent topics, this guide uses exercises from the Oracle By Example (OBE) series for Oracle Warehouse Builder to show how to consolidate data from multiple flat file sources, transform the data, and load it into a new relational target.
The exercises and examples are available on Oracle Technology Network (OTN) at
http://www.oracle.com/webfolder/technetwork/tutorials/obe/db/10g/r2/owb/owb10gr2_obe_series/owb10g.htm. To help you learn the product, the demonstration provides you with flat file data and scripts that create various Warehouse Builder objects. The OBE pages provide additional information about OWB and the latest information about the exercises.
Download the demonstration.
Go to the location for OWB examples, which is available on OTN from the following location:
Click the link for the Oracle By Example (OBE) set for the latest release.
The demonstration is a set of files in a ZIP archive. The archive includes a SQL script, two files in comma-separated values format, and scripts written in Tcl.
(Optional) Download the
xsales.zip file from the same link, which includes XSALES table data.
Edit the script owbdemoinit.tcl, which defines and sets variables used by the other Tcl scripts. Edit the following variables to match the values in your computer environment:
set tempspace TEMP
set syspwd pwd
set host hostname
set port portnumber
set service servicename
set sourcedir drive:/
set indexspace USERS
set dataspace USERS
set snapspace USERS
set sid servicename
Execute the Tcl scripts from the Warehouse Builder scripting utility, OMB Plus.
For Windows, select Start, Programs, Oracle, Warehouse Builder, and then OMB*Plus.
For UNIX, locate owb home/owb/bin/unix and then execute the OMB Plus script in that directory.
At the OMB+> prompt, enter the following command to change to the directory that contains the scripts:
Run all of the Tcl scripts in the desired sequence by entering the following command:
Start the Design Center and log in to it as the workspace owner, using the credentials you specified in the script owbdemoinit.tcl.
Verify that you successfully set up the Warehouse Builder client to follow the demonstration.
In the Design Center, expand the Locations node in the Locations Navigator. Expand Databases and then Oracle. The Oracle node includes the following locations:
When you successfully install the Warehouse Builder demonstration, the Design Center displays an Oracle module created by the demonstration scripts.