|Oracle8 Parallel Server Concepts & Administration
There is an old network saying: Bandwidth problems can be cured with money. Latency problems are harder because the speed of light is fixed: you can't bribe God.
To attain the goals of speedup and scaleup, you must effectively implement parallel processing and parallel database technology. This means designing and building your system for parallel processing from the start. This chapter covers the following issues:
Successful implementation of parallel processing and parallel database requires optimal scalability on four levels:
Attention: An inappropriately designed application may not fully utilize the potential scalability of the system. Likewise, no matter how well your application scales, you will not get the desired performance if you try to run it on hardware that does not scale.
The interconnect is key to hardware scalability. Every system must have some means of connecting the CPUs, whether it be a high-speed bus or a low-speed Ethernet connection. The bandwidth and latency of the interconnect determine the scalability of the hardware.
Most interconnects have sufficient bandwidth. A high bandwidth may, in fact, disguise high latency.
Hardware scalability depends heavily on very low latency. Lock coordination traffic consists of a large number of very small messages exchanged among the LMD processes.
Consider the difference between conveying a hundred passengers on a single bus and conveying them in a hundred individual cars. In the latter case, efficiency depends largely upon how quickly the cars can enter and exit the highway.
Other operations between nodes, such as parallel query, rely on high bandwidth.
Local I/Os are faster than remote I/Os (those which occur between nodes). If a great deal of remote I/O is needed, the system loses scalability. In this case you can partition data so that the data is local. Figure 2-2 illustrates the difference.
Note: Various clustering implementations are available from different hardware vendors. On shared disk clusters with dual-ported controllers, latency is the same from all nodes.
The ultimate scalability of your system also depends upon the scalability of the operating system. This section explains how to analyze this factor.
Software scalability can be an important issue if one node is a shared memory system (that is, a symmetric multiprocessor system in which multiple CPUs share a single memory). Methods of synchronization in the operating system can determine the scalability of the system. In asymmetrical multiprocessing, for example, only a single CPU can handle I/O interrupts. Consider a system in which multiple user processes all need to request a resource from the operating system:
Here, the potential scalability of the hardware is lost because the operating system can only process one resource request at a time. Each time one request enters the operating system, a lock is held to exclude the others. In symmetrical multiprocessing, by contrast, there is no such bottleneck.
An important distinction in parallel server architectures is internal versus external parallelism; this has a strong effect on scalability. The key difference is whether the object-relational database management system (ORDBMS) parallelizes the query, or an external process parallelizes the query.
Disk affinity can improve performance by ensuring that nodes mainly access local, rather than remote, devices. An efficient synchronization mechanism enables better speedup and scaleup.
See Also: "Disk Affinity" on page 4-9.
"Parallel Execution" in Oracle8 Tuning.
Application design is key to taking advantage of the scalability of the other elements of the system.
Attention: Applications must be specifically designed to be scalable!
No matter how scalable the hardware, software, and database may be, a table with only one row which every node is updating will synchronize on one data block. Consider the process of generating a unique sequence number:
Every node which needs to update this sequence number will have to wait to access the same row of this table: the situation is inherently unscalable. A better approach would be to use sequences to improve scalability:
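The contrast can be sketched in SQL. The table, column, and sequence names below are illustrative, not taken from this manual:

```sql
-- Inherently unscalable: every session on every node serializes
-- on the single row (and its one data block).
CREATE TABLE order_counter (next_id NUMBER);
INSERT INTO order_counter VALUES (1);

-- Each caller must lock, read, and update the same row:
UPDATE order_counter SET next_id = next_id + 1;

-- More scalable: an Oracle sequence. With a cache, each instance
-- hands out numbers from its own cached range without repeatedly
-- synchronizing on a shared block.
CREATE SEQUENCE order_seq CACHE 100;

SELECT order_seq.NEXTVAL FROM dual;
```

Note that with a cached sequence each instance draws from its own range, so the generated values are not strictly ordered across nodes; this is usually an acceptable trade-off for scalability.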
Note: Clients must be connected to server machines in a scalable manner: this means that your network must also be scalable!
This section describes applications which commonly benefit from a parallel server.
Data warehousing applications which infrequently update, insert, or delete data are often appropriate for the parallel server. Query-intensive applications and other applications with low update activity can access the database through different instances with little additional overhead.
If the data blocks are not modified, multiple copies of the blocks can be read into the Oracle buffer caches on several nodes and queried without additional I/O or lock operations. As long as the instances are only reading data and not modifying it, a block can be read into multiple buffer caches and one instance never has to write the block to disk before another instance can read it.
Decision support applications are good candidates for a parallel server because they only occasionally modify data, as in a database of financial transactions which is mostly accessed by queries during the day and is updated during off-peak hours.
Applications which either update disjoint data blocks or update the same data blocks at different times are also well suited to the parallel server. Applications can run efficiently on a parallel server if the set of data blocks regularly updated by one instance does not overlap with the set of blocks simultaneously updated by other instances. An example is a time-sharing environment where each user primarily owns and uses one set of tables.
An instance which needs to update blocks held in its buffer cache must hold one or more instance locks in exclusive mode while modifying those buffers. You should tune a parallel server and the applications which run on it so as to reduce contention for instance locks.
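One way to gauge instance-lock activity is to examine lock conversion statistics on each instance. Assuming the V$LOCK_ACTIVITY dynamic view is available on your release, a query along these lines shows which conversions dominate:

```sql
-- Summarize PCM lock conversion activity on this instance.
-- High counts of exclusive conversions suggest contention for
-- the same blocks from multiple instances.
SELECT from_val, to_val, action_val, counter
  FROM v$lock_activity
 ORDER BY counter DESC;
```

Running this on each instance before and after a workload helps identify whether application partitioning is reducing cross-instance lock traffic.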
Online transaction processing applications which modify disjoint sets of data benefit the most from the parallel server architecture. One example is a branch banking system where each branch (node) accesses its own accounts and only occasionally accesses accounts from other branches.
Applications which access a database in a mostly random pattern also benefit from the parallel server architecture, if the database is significantly larger than any node's buffer cache. One example is a Department of Motor Vehicles system where individual records are unlikely to be accessed by different nodes at the same time. Another example would be archived tax records or research data. In cases like these, most of the accesses would result in I/O even if the instance had exclusive access to the database. Oracle features such as fine grained locking can further improve performance of such applications.
Applications which primarily modify different tables in the same database are also suitable for Oracle Parallel Server. An example is a system where one node is dedicated to inventory processing, another is dedicated to personnel processing, and a third is dedicated to sales processing. Note that there is only one database to administer, not three.
Applications which require high availability benefit from the Oracle Parallel Server's failover capability. If the connection to the database is broken, applications can automatically reconnect.
Figure 2-4 illustrates the relative scalability of different kinds of applications. Online transaction processing applications which have a very high volume of inserts or updates from multiple nodes on the same set of data may require partitioning if they are to scale well. OLTP applications with a very low insert and update load may not require partitioning at all to be successful.
The following guidelines describe situations in which parallel processing is not advantageous.
If many users on a large number of nodes are modifying a small set of data, then synchronization is likely to be very high. However, if they are just reading the data then no synchronization is required.
For example, it would not be effective to use a table with one row used primarily as a sequence numbering tool. Such a table would be a bottleneck because every process would have to select the row, update it, and release it sequentially.
This section provides general guidelines to make partitioning decisions which will decrease synchronization and add to your system's performance.
You can partition any of the three elements of processing, depending on function, location, and so on, such that they do not interfere with each other. These elements are:
You can partition data based on the groups of users who access it, and partition applications into groups which access the same data. You can also consider partitioning by location (geographic partitioning).
With vertical partitioning, a large number of tasks can run on a large number of resources without much synchronization. Figure 2-5 illustrates the concept of vertical partitioning.
Here, a company's accounts payable and accounts receivable functions have been partitioned by users, application, and data, and placed on two separate nodes. Most of the synchronization therefore takes place on the same node, which is very efficient: synchronization on the local node is much cheaper than synchronization between nodes.
Partition tasks on a subset of resources to reduce synchronization. When you partition, you have a smaller set of tasks working on a smaller resource.
To illustrate the concept of horizontal partitioning, Figure 2-6 represents the rows of a stock table. If the Oracle Parallel Server has four instances, one per node, then the data can be partitioned such that each instance accesses only a subset of the data.
In this example, very little synchronization is necessary because the instances access different sets of rows. Similarly, users partitioned by location can often run almost independently: very little synchronization is necessary if the users do not access the same data.
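A sketch of such a layout, using Oracle8 range partitioning (the table definition and ticker ranges below are illustrative only):

```sql
-- A range-partitioned stock table. If each instance confines its
-- work to one partition's rows, the instances rarely touch the
-- same data blocks, so little synchronization is needed.
CREATE TABLE stock (
  ticker     VARCHAR2(8),
  trade_date DATE,
  price      NUMBER
)
PARTITION BY RANGE (ticker)
( PARTITION stock_a_f VALUES LESS THAN ('G'),
  PARTITION stock_g_m VALUES LESS THAN ('N'),
  PARTITION stock_n_s VALUES LESS THAN ('T'),
  PARTITION stock_t_z VALUES LESS THAN (MAXVALUE) );
```

The partitioning column and boundaries should reflect how your users naturally divide the data; the goal is simply that each instance's working set of blocks does not overlap with the others'.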
Various mistaken notions can lead to unrealistic expectations about parallel processing. Consider the following:
In some applications a single synchronization may be so expensive as to constitute a problem; in other applications, many cheap synchronizations may be perfectly acceptable.
For example, on some MPP systems if one of the CPUs dies, the whole machine dies. On a cluster, by contrast, if one of the nodes dies the other nodes survive.