Oracle8i Parallel Server Concepts
Release 2 (8.1.6)

Part Number A76968-01


5
Parallel Cache Management

This chapter explains how the Distributed Lock Manager in Oracle Parallel Server controls access to data. The Distributed Lock Manager also performs other synchronization tasks as described in Chapter 4. The overall process of managing data access and inter-instance coordination is known as "Parallel Cache Management." Topics in this chapter include:

Parallel Cache Management and Lock Implementation

Lock Duration and Granularity

The Cost of Locks

Coordination of Locking Mechanisms by the Distributed Lock Manager

How Parallel Cache Management Locks Operate

Pinging: Signaling the Need to Update

Lock Mode and Buffer State

How the DLM Grants and Coordinates Resource Lock Requests

Specifying the Allocation and Duration of Locks

Group-Owned Locks

Parallel Cache Management and Lock Implementation

Oracle's Parallel Cache Management facility uses locks to coordinate shared data access by multiple instances. These locks are known as "Parallel Cache Management locks". Oracle Parallel Server also uses non-Parallel Cache Management locks to control access to data files and control files as explained in Chapter 4.

Parallel Cache Management locks are more numerous than non-Parallel Cache Management locks. They can also have a more volatile effect on performance because they control access to data blocks upon which the nodes in a cluster operate. For these reasons, it is critical that you accurately allocate Parallel Cache Management locks to ensure optimal performance.

Parallel Cache Management locks can cover one or more blocks of any class: data blocks, index blocks, undo blocks, segment headers, and so on. However, a given Parallel Cache Management lock can cover only one block class.

Parallel Cache Management ensures cache coherency by forcing requesting instances to acquire locks from instances that hold locks on data blocks before modifying or reading the blocks. Parallel Cache Management also allows only one instance at a time to modify a block. If a block is modified by an instance, the block must first be written to disk before another instance can acquire the Parallel Cache Management lock and modify it.

Parallel Cache Management locks use a minimum amount of communication to ensure cache coherency. The amount of cross-instance activity--and the corresponding performance of Oracle Parallel Server--is evaluated in terms of pings. A ping occurs when one instance requests a block that is held by another instance. To resolve this type of request, Oracle writes, or "pings," the block to disk so the requesting instance can read it in its most current state.

Heavily loaded applications can experience significant locking activity, but they do not necessarily have excessive pinging. If data is well partitioned, the locking is local to each node--therefore pinging does not occur.
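
To see which blocks are actually being pinged, you can query the V$PING dynamic performance view. The following query is a minimal sketch; it assumes V$PING exposes FILE#, BLOCK#, STATUS, and XNC columns, where XNC counts the lock conversions for a block. Verify the exact columns in your release before relying on it:

    -- List pinged blocks, most heavily pinged first (sketch)
    SELECT file#, block#, status, xnc
      FROM v$ping
     ORDER BY xnc DESC;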

The Role of Cache Fusion in Resolving Cache Coherency Conflicts

Inter-instance references to data blocks, and the resulting cache coherency issues, are the main performance problems of Oracle Parallel Server. Proper partitioning resolves most contention problems.

In reality, however, most applications are not effectively partitioned, or are partitioned only to a limited extent. There are three types of cross-instance concurrent access:

Reader/reader concurrency occurs when two instances need to read the same data block. Oracle Parallel Server easily resolves this type of contention because multiple instances can share the same blocks for read access without cache coherency conflicts. The other types of contention, however, are more complex from a cache coherency point-of-view.

Reader/writer contention is in many cases the predominant form of concurrency in OLTP and hybrid applications. The ability to combine Decision Support System (DSS) and Online Transaction Processing (OLTP) in a typical application depends on Oracle Parallel Server's efficiency in resolving such conflicts.

Writer/writer contention occurs when multiple instances need to modify the same data block. In Oracle8i, concurrent write access to cached data blocks by multiple instances is managed by a disk-based "ping" protocol: current changes to a cached data block must be written to disk before another instance can read the block and make its own changes. The disk write must occur before the global locking mechanism grants exclusive access to the block.

In Oracle8i, Cache Fusion optimizes read/write concurrency by using the interconnect to directly transfer data blocks among instances. This eliminates I/O and reduces delays for block reads to the speed of the interprocess communication (IPC) and the interconnecting network. This also relaxes the strict requirements of data partitioning so that you can more efficiently deploy applications with mostly OLTP and mostly reporting modules.

Lock Duration and Granularity

Lock duration refers to the length of time for which a lock is associated with a resource. Lock granularity refers to the number of data blocks covered by a single lock.

Two Types of Lock Duration

Oracle Parallel Cache Management implements locks with two durations: fixed and releasable.

Fixed Locks

Fixed Parallel Cache Management locks are initially acquired in null mode. All specified fixed locks are allocated at instance startup, and de-allocated at instance shutdown. Because of this, fixed locks incur more overhead and longer startup times than releasable locks. The advantage of fixed Parallel Cache Management locks, however, is that they do not need to be continually acquired and released.

Fixed locks are pre-allocated and statically hashed to blocks at startup time. The first instance that starts up creates a Distributed Lock Manager resource and a Distributed Lock Manager lock, in null mode, on the resource for each fixed Parallel Cache Management lock. The instance then converts the mode of these locks to other modes as required. Each subsequent instance acquires a null mode lock at startup and then performs lock conversions as needed.

By default, fixed Parallel Cache Management locks are never released; each lock remains in the mode in which it was last requested. If the lock is required by another instance, it is converted to null mode. These locks are de-allocated only at instance shutdown.

Releasable Locks

Releasable locks are acquired and released as needed. This allows the instance to start up much faster than with fixed locks. A Distributed Lock Manager resource is created and a Distributed Lock Manager lock is obtained only when a user actually requests a block. Once a releasable lock has been created, it can be converted to various modes as required.

An instance can relinquish references to releasable lock resources during normal operations. The lock is released when it is required for reuse for a different block. This means that sometimes no instance holds a lock on a given resource.

Comparing Fixed and Releasable Locking

With fixed duration locking, an instance never disowns a Parallel Cache Management lock unless another instance requests it. This minimizes the overhead of global lock operations in systems with relatively low contention for resources. With releasable locks, once the block is released, the lock on it is available for reuse by a different block. Non-Parallel Cache Management locks, by contrast, are disowned.

Releasable Parallel Cache Management locking is more dynamic than fixed locking. Releasable locks are allocated by the Distributed Lock Manager only as needed. Oracle allocates the underlying lock elements at startup, but the locks themselves are obtained directly in the requested mode, normally shared or exclusive mode, when a block is first requested.
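
As an illustration of how the two durations are requested in practice, the sketch below assumes the GC_FILES_TO_LOCKS initialization parameter syntax in which an R suffix marks an allocation as releasable; the file numbers and lock counts are invented for the example:

    # init.ora sketch; file numbers and lock counts are illustrative
    # 500 fixed locks on data file 1, 500 releasable locks on data file 2
    GC_FILES_TO_LOCKS = "1=500:2=500R"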

Two Forms of Lock Granularity

Oracle Parallel Cache Management implements two lock granularities: 1:1 and 1:n.

1:1 Locks

A 1:1 lock means one lock per block. This is the smallest lock granularity, and it is the default. 1:1 locks are useful when a database object is updated frequently by several instances. The chief advantage is that a ping affects only the single block covered by the lock, so unrelated blocks are never written to disk on another instance's behalf.

A disadvantage of 1:1 locking is that overhead is incurred for each block read, and performance is affected accordingly.

1:n Locks

A 1:n lock covers two or more data blocks, as defined by the value of n. With 1:n locks, a few locks can cover many blocks and thus reduce lock operations. For read-only data, 1:n locks can perform faster than 1:1 locks during certain operations such as parallel execution.

If you partition data according to the nodes that are most likely to modify it, you can implement disjoint lock sets; each set belonging to a specific node. This can significantly reduce lock operations. 1:n locks are also beneficial if you have a large amount of data that a relatively small number of instances modify. If a lock is already held by the instance that modifies the data, no lock activity is required for the operation.

See Also:

Oracle8i Parallel Server Administration, Deployment, and Performance for detailed information about configuring Parallel Cache Management locks.  

The Cost of Locks

To implement locks effectively, carefully evaluate their relative costs. As a rule of thumb, latches are the least expensive mechanism, local enqueues cost substantially more, and global locks are by far the most expensive.

In general, global locks and global enqueues have an equivalent effect on performance. When Oracle Parallel Server is disabled, all enqueues are local. When Parallel Server is enabled, most enqueues are global. The time required to process these requests can differ by several orders of magnitude.

Microseconds, milliseconds, and tenths of seconds may seem negligible. However, imagine the cost of locks using grossly exaggerated values such as those shown in the "Relative Time Required" column of Table 5-1.

Table 5-1 Comparing the Relative Cost of Locks

Class of Lock                    Actual Time Required   Relative Time Required

Latches                          1 microsecond          1 minute
Local Enqueues                   1 millisecond          1,000 minutes (about 16 hours)
Global Locks (Global Enqueues)   1/10 second            100,000 minutes (about 69 days)

Table 5-1 shows only relative examples to underscore the need to calibrate lock use carefully. In general, it is especially critical to avoid unregulated use of global locks.

See Also:

Oracle8i Parallel Server Administration, Deployment, and Performance for procedures to analyze the number of Parallel Cache Management locks applications use.  

Coordination of Locking Mechanisms by the Distributed Lock Manager

The Distributed Lock Manager is a resource manager internal to Oracle Parallel Server. This section explains how the Distributed Lock Manager coordinates locking mechanisms.

Lock Modes As Resource Access Rights

Oracle may initially create a lock on a resource without granting access rights. If the instance later receives a request, Oracle converts the lock mode to obtain access rights. Figure 5-1 illustrates the levels of access rights or "lock modes" available through the Distributed Lock Manager. Table 5-2 lists and describes these lock modes.

Figure 5-1 Distributed Lock Manager Lock Modes: Levels of Access


Table 5-2 Lock Mode Names

Lock Mode  Summary  Description

NULL

Null mode. No lock is on the resource.

Holding a lock at this level conveys no access rights. Typically, a lock is held at this level to indicate that a process is interested in a resource, or as a placeholder.

Once created, null locks ensure that the requestor always has a lock on the resource; there is no need for the Distributed Lock Manager to constantly create and destroy locks when ongoing access is needed.

SS

Sub-shared mode (concurrent read). Read; there may be writers and other readers.

When a lock is held at this level, the associated resource can be read in an unprotected fashion: other processes can read and write the associated resource.

SX

Shared exclusive mode (concurrent write). Write; there may be other readers and writers.

When a lock is held at this level, the associated resource can be read or written in an unprotected fashion: other processes can both read and write the resource.

S

Shared mode (protected read). Read; no writers are allowed.

When a lock is held at this level, a process cannot write the associated resource, but multiple processes can read it. This is the traditional shared lock.

In shared mode, any number of users can have simultaneous read access to the resource. Shared access is appropriate for read operations.

SSX

Sub-shared exclusive mode (protected write). One writer only; there may be readers.

Only one process can hold a lock at this level. This allows a process to modify a resource without allowing other processes to simultaneously modify it. Other processes can perform unprotected reads. This is the traditional update lock.

X

Exclusive mode. Write; no other access is allowed.

When a lock is held at this level, the holding process is granted exclusive access to the resource. Other processes cannot read or write the resource. This is the traditional exclusive lock.

Instances Map Database Resources to Distributed Lock Manager Resources

Each instance maps Oracle database resources to Distributed Lock Manager resources. For example, a 1:n lock on an Oracle database block with a given data block address, such as file 2 block 10, is translated as a "BL resource" with the class of the block and the lock element number, such as BL 9 1. The data block address is translated from the Oracle resource level to the Distributed Lock Manager resource level; the hashing function used is dependent on your GC_* parameter settings.


Note:

For 1:1 locking, the database address is used as the second identifier, rather than the lock element number. 


Figure 5-2 Resource Names and Distributed Lock Manager Resource Names


The Distributed Lock Manager Records Lock Information

The Distributed Lock Manager maintains an inventory of Oracle global locks and global enqueues held against system resources. It also acts as a negotiator when conflicting lock requests arise. In performing this function, the Distributed Lock Manager does not distinguish between Parallel Cache Management and non-Parallel Cache Management locks.

Sample Lock Manager Lock Mode and Resource Inventory

Figure 5-3 represents the Distributed Lock Manager as an inventory sheet that shows lock resources and the current status of the locks on those resources in an example Oracle Parallel Server environment.

Figure 5-3 Sample Distributed Lock Manager Resource and Lock Inventory


This inventory example includes all instances in this cluster. For example, resource BL 1, 101 is held by three instances with shared locks and three instances with null locks. Since Figure 5-3 shows up to six locks on one resource, at least six instances are running on this system.

How Distributed Lock Manager Locks and Global Locks Relate

Figure 5-4 illustrates how Distributed Lock Manager locks and Parallel Cache Management locks relate. To allow instance B to read the value of data at data block address x, instance B must first check for locks on that address. The instance translates the block's database resource name to the Distributed Lock Manager resource name, and asks the Distributed Lock Manager for a shared lock to read the data.

As illustrated in Figure 5-4, the Distributed Lock Manager checks outstanding locks on the granted queue and determines there are already two shared locks on resource 441,BL1. Since shared locks are compatible with read-only requests, the Distributed Lock Manager grants a shared lock to instance B. The instance then proceeds to query the database to read the data at data block address x. The database returns the data.

Figure 5-4 The Distributed Lock Manager Monitors the Status of Locks



Note:

The global lock space is cooperatively managed in a distributed fashion by the LMD processes of all instances.  


If the required block already had an exclusive lock on it from another instance, then instance B would have to wait for this to be released. The Distributed Lock Manager would place the shared lock request from instance B on the convert queue. The Distributed Lock Manager would then notify the instance when the exclusive lock was removed and grant its request for a shared lock.
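
This granted-queue and convert-queue traffic can be observed from SQL. The following is a minimal sketch, assuming the V$LOCK_ACTIVITY view, which reports a counter for each type of lock conversion:

    -- Summarize lock conversion activity for this instance (sketch)
    SELECT from_val, to_val, action_val, counter
      FROM v$lock_activity;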

One Lock Per Instance on a Resource

Oracle uses only one lock per instance on any one Parallel Cache Management resource. The LCK0 process manages the assignment of this lock to the resource. As illustrated in Figure 5-5, if you have a four-instance system and require a buffer lock on a single resource, you actually need four locks, one per instance.

Figure 5-5 Resources Have One Lock Per Instance


The number of locks on a non-Parallel Cache Management resource may depend on the type of resource, the application's behavior, and the configuration.

See Also:

Chapter 4 for more information on non-Parallel Cache Management locks.  

Lock Elements and Parallel Cache Management Locks

Figure 5-6 illustrates the correspondence of lock elements to blocks in fixed and releasable locking. A lock element (LE) is an Oracle-specific data structure representing a Parallel Cache Management lock. There is a one-to-one correspondence between a lock element and a Parallel Cache Management lock in the Distributed Lock Manager.

Figure 5-6 1:n Locking and 1:1 Locking


Lock Elements for Fixed Parallel Cache Management Locks

For both fixed and releasable Parallel Cache Management locks, you can specify more than one block per lock element. The difference is that, by default, fixed Parallel Cache Management locks are not releasable: the lock element name is fixed, rather than changing as blocks are re-associated with it.

When a lock element is pinged due to a remote request, all other modified blocks covered by that lock element are written to disk along with the requested block. For example, in Figure 5-6, if the lock element covering blocks DBA1 through DBA5 is pinged when block DBA2 (Data Block Address 2) is needed, then blocks DBA1, DBA3, DBA4, and DBA5 are also written to disk, provided they have been modified.

Lock Elements for Releasable Parallel Cache Management Locks

With 1:1 locking, the name of the lock element is the name of the resource inside the Distributed Lock Manager. Although a fixed number of lock elements cover potentially millions of blocks, the lock element names change over and over as they are re-associated with blocks that are requested. The lock element name, for example, LE7,1, contains the database block address 7 and class 1 of the block it covers. Before a lock element can be reused, the lock must be released. You can then rename and reuse the lock element, creating a new resource in the Distributed Lock Manager if necessary.

When using 1:1 locking, you can configure your system with many more potential lock names, since they do not need to be held concurrently. However, the number of blocks mapped to each lock is configurable in the same way as 1:n locking.

Lock Elements for 1:1 Parallel Cache Management Locks

In 1:1 locking you can set a one-to-one relationship between lock elements and blocks. Such an arrangement, illustrated in Figure 5-6, is called Data Block Address Locking. Thus if LE2,1 is pinged, only block DBA2 is written to disk.

How Parallel Cache Management Locks Operate

Figure 5-7 illustrates how Parallel Cache Management locks work. When instance A reads the black block for modification, it obtains the Parallel Cache Management lock for the block. The same scenario occurs with the shaded block and instance B. If instance B requires the black block, the block must first be written to disk because instance A has modified it. The Oracle process communicates with the LMD processes to obtain the global lock from the Distributed Lock Manager.

Figure 5-7 How Parallel Cache Management Locks Work


Parallel Cache Management Locks Are Owned by Instance LCK Processes

Each instance has at least one LCK background process. If multiple LCK processes exist within the same instance, the Parallel Cache Management locks are divided among the LCK processes. This means that each LCK process is only responsible for a subset of the Parallel Cache Management locks.

Multiple Instances Can Own the Same Locks

A Parallel Cache Management lock owned in shared mode is not disowned by an instance if another instance also requests the Parallel Cache Management lock in shared mode. Thus, two instances may have the same data block in their buffer caches because the copies are shared (no writes occur). Different data blocks covered by the same Parallel Cache Management lock can be contained in the buffer caches of separate instances. This can occur if all the different instances request the Parallel Cache Management lock in shared mode.

How 1:1 Locking Works

Figure 5-8 shows how 1:1 locking operates.

Figure 5-8 Lock Elements Coordinate Blocks (by 1:1 Locking)


As Figure 5-8 shows, the foreground process first checks the System Global Area to determine whether the instance already owns a lock on the block.

Number of Blocks Per Parallel Cache Management Lock

The number of data blocks covered by a single Parallel Cache Management lock is determined by the number of Parallel Cache Management locks assigned to a data file and the number of data blocks in that file:

    number of blocks per lock = number of blocks in file / number of locks assigned to the file

If the size of each file, in blocks, is a multiple of the number of Parallel Cache Management locks assigned to it, then each 1:n Parallel Cache Management lock covers exactly the number of data blocks given by this equation.

If the file size is not a multiple of the number of Parallel Cache Management locks, then the number of data blocks per 1:n Parallel Cache Management lock can vary by one for that data file. For example, if you assign 400 Parallel Cache Management locks to a data file which contains 2,500 data blocks, then 100 Parallel Cache Management locks cover 7 data blocks each and 300 Parallel Cache Management locks cover 6 blocks. Any data files not specified in the GC_FILES_TO_LOCKS initialization parameter use the remaining Parallel Cache Management locks.

If n files share the same 1:n Parallel Cache Management locks, then the number of blocks per lock can vary by as much as n. If you assign locks to individual files, either with separate clauses of GC_FILES_TO_LOCKS or by using the keyword EACH, then the number of blocks per lock does not vary by more than one.

If you assign 1:n Parallel Cache Management locks to a set of data files collectively, then each lock usually covers one or more blocks in each file. Exceptions can occur when you specify contiguous blocks (using the "!blocks" option) or when a file contains fewer blocks than the number of locks assigned to the set of files.
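
When planning such assignments, you need each file's size in blocks. The following is a minimal sketch using the standard DBA_DATA_FILES dictionary view; the 400-lock figure carries over from the earlier example and is purely illustrative:

    -- Upper bound on blocks per lock if each file were
    -- individually assigned 400 locks (sketch)
    SELECT file_id, blocks,
           CEIL(blocks / 400) AS max_blocks_per_lock
      FROM dba_data_files;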

Example of Locks Covering Multiple Blocks

The following example illustrates how 1:n Parallel Cache Management locks can cover multiple blocks in different files. Figure 5-9 assumes 44 Parallel Cache Management locks assigned to two files that have a total of 44 blocks, with GC_FILES_TO_LOCKS set to "A,B:44".

Block 1 of a file does not necessarily begin with lock 1; a hashing function determines which lock a file begins with. In file A, which has 24 blocks, block 1 hashes to lock 32. In file B, which has 20 blocks, block 1 hashes to lock 28.

Figure 5-9 Fixed Parallel Cache Management Locks Covering Blocks in Multiple Files


In Figure 5-9, locks 32 through 44 and 1 through 3 are used to cover 2 blocks each. Locks 4 through 11 and 28 through 31 cover 1 block each; and locks 12 through 27 cover no blocks at all!

In a worst case scenario, if two files hash to the same lock as a starting point, then all the common locks will cover two blocks each. If your files are large and have multiple blocks per lock (on the order of 100 blocks per lock), then this is not an important issue.

Periodicity of Fixed Parallel Cache Management Locks

You should also consider the periodicity of Parallel Cache Management locks. Figure 5-10 shows a file of 30 blocks which is covered by 6 Parallel Cache Management locks. This file has 1:n locks set to begin with lock number 5. As suggested by the shaded blocks covered by Parallel Cache Management lock number 4, use of each lock forms a pattern over the blocks of the file.

Figure 5-10 Periodicity of Fixed Parallel Cache Management Locks


Pinging: Signaling the Need to Update

In Parallel Server, a particular data block can only be modified by one instance at a time. If one instance modifies a data block that another instance needs, whether pinging is required depends on the type of request submitted for the block.

If the requesting instance wants the block for modification, then the holding instance's locks on the data block must be converted accordingly. The first instance must write the block to disk before the requesting instance can read it. This is known as pinging a block.

The BSP (Block Server Process) uses the Distributed Lock Manager facility to signal the need for the block between the two instances. If the requesting instance only needs the block in consistent read (CR) mode, the BSP of the holding instance transmits a CR version of the block to the requesting instance across the interconnect. Resolving the request this way is much faster than pinging the block through disk.

Data blocks are only pinged when a block held in exclusive current (XCUR) state in the buffer cache of one instance is needed by a different instance for modification. In some cases, therefore, the number of Parallel Cache Management locks covering data blocks may have little effect on whether a block gets pinged.

An instance can relinquish an exclusive lock on a block and still hold row locks on rows within it: whether a modified block is pinged is independent of whether the modifying transaction has committed.

Partitioning to Avoid Pinging

If you have partitioned data across instances and are doing updates, your application can have, for example, a million blocks on each instance. All one million blocks on an instance can be covered by a single Parallel Cache Management lock, yet there are no forced reads or forced writes.

As shown in Figure 5-11, assume a single Parallel Cache Management lock covers one million data blocks in a table and the blocks in that table are read from or written into the System Global Area of instance X. Assume another single Parallel Cache Management lock covers another million data blocks in the table that are read or written into the System Global Area of instance Y. Regardless of the number of updates, there will be no forced reads or writes on data blocks between instance X and instance Y.

Figure 5-11 Partitioning Data to Avoid Pinging


With read-only data, both instance X and instance Y can hold the Parallel Cache Management lock in shared mode without causing pinging. This scenario is illustrated in Figure 5-12.

Figure 5-12 No Pinging of Read-Only Data


See Also:

Oracle8i Parallel Server Administration, Deployment, and Performance for more information about partitioning applications to avoid pinging.  

Lock Mode and Buffer State

The state of a block in the buffer cache relates directly to the mode of the lock held upon it. For example, if a buffer is in exclusive current (XCUR) state, you know that an instance owns the Parallel Cache Management lock in exclusive mode. There can be only one XCUR version of a block in the database, but there can be multiple SCUR versions. To perform a modification, a process must get the block in XCUR mode.

Finding the State of a Buffer

To see a buffer's state, check the STATUS column of the V$BH dynamic performance table. This table provides information about each buffer header.
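
For example, the following query summarizes how many buffers the instance holds in each state. V$BH and its STATUS column are standard, but treat this as a sketch and verify the status values your release reports:

    -- Count buffers in each state (XCUR, SCUR, CR, and so on)
    SELECT status, COUNT(*)
      FROM v$bh
     GROUP BY status;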

Table 5-3 Parallel Cache Management Lock Modes and Buffer States

Lock Mode       Buffer State Name   Description

X (exclusive)   XCUR                Instance has an exclusive lock for this buffer.

S (shared)      SCUR                Instance has a shared lock for this buffer.

N (null)        CR                  Instance has a null lock for this buffer.

How Buffer States and Lock Modes Change

Figure 5-13 shows how buffer state and lock mode change as instances perform various operations on a given buffer. Lock mode is shown in parentheses.

Figure 5-13 How Buffer States and Lock Modes Change


In Figure 5-13, the three instances start out with the block in shared current state, holding shared locks. When Instance 1 performs an update on the block, its lock mode on the block changes to exclusive mode (X). The shared locks owned by Instance 2 and Instance 3 convert to null mode (N). Meanwhile, the block state in Instance 1 becomes XCUR, and in Instance 2 and Instance 3 it becomes CR. The exclusive and null lock modes are compatible.

Lock Modes May Be Compatible or Incompatible

When one process owns a lock in a given mode, another process requesting a lock in any particular mode succeeds or fails as shown in Table 5-4.

Table 5-4 Lock Mode Compatibility

Lock Owned   Lock Requested:
             NULL      SS        SX        S         SSX       X

NULL         SUCCEED   SUCCEED   SUCCEED   SUCCEED   SUCCEED   SUCCEED
SS           SUCCEED   SUCCEED   SUCCEED   SUCCEED   SUCCEED   FAIL
SX           SUCCEED   SUCCEED   SUCCEED   FAIL      FAIL      FAIL
S            SUCCEED   SUCCEED   FAIL      SUCCEED   FAIL      FAIL
SSX          SUCCEED   SUCCEED   FAIL      FAIL      FAIL      FAIL
X            SUCCEED   FAIL      FAIL      FAIL      FAIL      FAIL

How the DLM Grants and Coordinates Resource Lock Requests

This section explains how the Distributed Lock Manager grants and coordinates resource lock requests.

The Distributed Lock Manager tracks all lock requests, granting requests for resources whenever permissible. Requests for resources that are not currently available are also tracked, and access rights are granted when these resources later become available. The Distributed Lock Manager inventories lock requests and communicates their statuses to users and to the internal processes involved in Parallel Cache Management.

Lock Requests Are Queued

The Distributed Lock Manager maintains two queues for lock requests:

Granted queue 

The Distributed Lock Manager tracks lock requests that have been granted in the granted queue.  

Convert queue 

Lock requests that cannot be granted immediately wait in the convert queue until the locks currently held on the resource become compatible with the request.  

Asynchronous Traps (ASTs) Communicate Lock Request Status

To communicate the status of lock requests, the Distributed Lock Manager uses two types of asynchronous traps (ASTs) or interrupts:

Blocking AST 

When a process requests a certain mode of lock on a resource, the Distributed Lock Manager sends a blocking AST to notify processes currently owning locks on that resource in incompatible modes. (Shared and exclusive modes, for example, are incompatible.) Upon notification, owners of locks can relinquish them to permit access by the requestor. 

Acquisition AST 

When the requested lock is later granted, the Distributed Lock Manager sends an acquisition AST to notify the requesting process that it now owns the lock. 

Lock Requests Are Converted and Granted

The following figures show how the Distributed Lock Manager handles lock requests. In Figure 5-14, shared lock request 1 has been granted on the resource to process 1, and shared lock request 2 has been granted to process 2. As mentioned, the Distributed Lock Manager tracks the locks in the granted queue. When a request for an exclusive lock is made by process 2, it must wait in the convert queue.

Figure 5-14 The Distributed Lock Manager Granted and Convert Queues


In Figure 5-15, the Distributed Lock Manager sends a blocking AST to Process 1, the owner of the shared lock, notifying it that a request for an exclusive lock is waiting. When the shared lock is relinquished by Process 1, it is converted to a null mode lock or released.

Figure 5-15 Blocking AST


An acquisition AST is then sent to alert Process 2, the requestor of the exclusive lock. The Distributed Lock Manager grants the exclusive lock and moves it to the granted queue. This is illustrated in Figure 5-16.

Figure 5-16 Acquisition AST


Specifying the Allocation and Duration of Locks

You allocate Parallel Cache Management locks to data files by specifying values for initialization parameters in parameter files that Oracle reads when starting up a database. For example, use the initialization parameter GC_FILES_TO_LOCKS to specify the number of Parallel Cache Management locks that cover the data blocks in a data file or set of data files.
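
For example, the following parameter file excerpt is a sketch only; it assumes the documented GC_FILES_TO_LOCKS syntax in which clauses are separated by colons, EACH assigns the lock count to every file in the list, and !n places n contiguous blocks under each lock. All file numbers and lock counts are invented for illustration:

    # init.ora sketch (illustrative values only)
    # 400 locks on file 1; 200 locks on each of files 2 and 3;
    # 100 locks on file 4, each covering 8 contiguous blocks
    GC_FILES_TO_LOCKS = "1=400:2-3=200EACH:4=100!8"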

Number of Blocks Per Parallel Cache Management Lock

This section explains the ways in which 1:n and 1:1 locks can differ in lock granularity.

1:N Locks for Multiple Blocks

You can specify lock-to-block ratios that protect a range of contiguous blocks within a file. Table 5-5 summarizes the situations in which 1:n locks are useful:

Table 5-5 When to Use 1:N Parallel Cache Management Locks

Situation: The data is mostly read-only.
Reason: A few 1:n locks can cover many blocks without requiring frequent lock operations. These locks are released only when another instance needs to modify the data. 1:n locking can perform faster than 1:1 locking on read-only data with the Parallel Query Option. If the data is strictly read-only, consider designating the tablespace itself as read-only; the tablespace will then require no Parallel Cache Management locks (see the example following this table).

Situation: The data can be partitioned according to the instance that is likely to modify it.
Reason: 1:n locks defined to match this partitioning allow instances to hold disjoint Distributed Lock Manager lock sets, reducing the need for Distributed Lock Manager operations.

Situation: A large amount of data is modified by a relatively small set of instances.
Reason: 1:n locks permit access to a new database block to proceed without Distributed Lock Manager activity if the lock is already held by the requesting instance.
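
As noted in the first row of Table 5-5, strictly read-only data is best placed in a read-only tablespace, which then needs no Parallel Cache Management locks. The tablespace name below is hypothetical:

    -- Make a tablespace read-only so its blocks need no PCM locks
    ALTER TABLESPACE history_data READ ONLY;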

Using 1:n locks may cause extra cross-instance lock activity since conflicts may occur between instances that modify different database blocks. Resolution of false pinging may require writing several blocks from the cache of the instance that currently owns the lock. You can minimize or eliminate false pinging by correctly setting the GC_FILES_TO_LOCKS parameter.

1:1 Locking: Locks for One Block

If you create a one-to-one correspondence between Parallel Cache Management locks and data blocks, contention will occur only when instances need data from the same block. This level of 1:1 locking is also sometimes referred to as "DBA locking" where a "DBA" is the data block address of the data block. If you assign more than one block per lock, contention occurs as in 1:n locking.

On most systems, an instance could not possibly hold a lock for each block of a database since System Global Area memory or the Distributed Lock Manager capabilities would be exceeded. Therefore, instances acquire and release 1:1 locks as needed. Since 1:1 locks, lock elements, and resources are renamed in the Distributed Lock Manager and reused, a system can function properly with fewer of them. The value you set for the DB_BLOCK_BUFFERS parameter is the recommended minimum number of releasable locks you should allocate.
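
As a sketch of this recommendation, assuming the GC_RELEASABLE_LOCKS initialization parameter, which defaults to the value of DB_BLOCK_BUFFERS; the buffer count is invented for illustration:

    # init.ora sketch (illustrative values)
    DB_BLOCK_BUFFERS    = 8000
    GC_RELEASABLE_LOCKS = 8000   # recommended minimum equals DB_BLOCK_BUFFERS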

Selecting Lock Granularity

Use the information in Table 5-6 to best determine when to use either 1:n or 1:1 locks:

Table 5-6 Selecting Lock Granularity

When to use 1:n locks:

  • Data is mostly read-only

  • Data can be partitioned

  • A large set of data is modified by a relatively small set of instances

When to use 1:1 locks:

  • A small amount of data is updated by many instances

  • There is not enough memory for the configuration of 1:n locking

Simultaneously Using Fixed and Releasable Locking

You can use fixed and releasable locking at the same time. Table 5-7 compares the two durations.

Table 5-7 Comparing Fixed and Releasable Parallel Cache Management Locks

Fixed Parallel Cache Management Locks:

  • Allocated at instance startup, resulting in a slower startup

  • Released only at instance shutdown

  • Statically allocated to blocks at startup time, requiring more memory

Releasable Parallel Cache Management Locks:

  • Allocated when a user requests a block, resulting in faster instance startup

  • Dynamically reused by other blocks, requiring less memory

Group-Owned Locks

Group-based locking provides dynamic ownership: a single lock can be shared by two or more processes belonging to the same group. Processes in the same group can share or touch the lock without opening or converting a new, separate lock. This is particularly important for the Multi-Threaded Server and XA.

Distributed Lock Manager Support for Multi-Threaded Server and XA

Oracle Parallel Server uses two forms of lock ownership:

Per-process ownership 

Locks are commonly process-owned: that is, if one process owns a lock, then no other process can touch the lock.  

Group-based ownership 

With group-based locking, ownership becomes dynamic: a single lock can be used by two or more processes belonging to the same group. Processes in the same group can exchange and/or touch the lock without going to the Distributed Lock Manager grant and convert queues. 

Group-based locking is an important Distributed Lock Manager feature for Oracle Multi-Threaded Server (MTS) and XA library functionality.

MTS 

Group-based locking is used for Oracle MTS configurations. Without it, sessions could not migrate between shared server processes. In addition, load balancing may be affected, especially with long running transactions. 

XA libraries 

With Oracle XA libraries, multiple sessions or processes can work on the transaction; they therefore need to exchange the same locks, even in exclusive mode. With group-based lock ownership, processes can exchange access to an exclusive resource. 

Memory Requirements for the Distributed Lock Manager

The user-level Distributed Lock Manager can normally allocate as many resources as you request; your process size, however, will increase accordingly. This is because you are mapping the shared memory where locks and resources reside into your address space. Thus, the process address space can become very large.

Make sure that the Distributed Lock Manager is configured to support all resources your application requires.


Copyright © 1996-2000, Oracle Corporation. All Rights Reserved.