Oracle8 Parallel Server Concepts & Administration
Release 8.0

A58238-01

7
Overview of Locking Mechanisms

This chapter provides an overview of the locking mechanisms that are internal to the parallel server. The chapter is organized as follows:

Differentiating Oracle Locking Mechanisms
Oracle Lock Names
Coordination of Locking Mechanisms by the Integrated DLM

Differentiating Oracle Locking Mechanisms

This section covers the following topics:

Overview
Local Locks
Instance Locks
The LCKn Processes
The LMON and LMD0 Processes
Cost of Locks

Overview

You must understand locking mechanisms if you are to effectively harness parallel processing and parallel database capabilities. You can influence each kind of locking through the way you set initialization parameters, administer the system, and design applications. If you do not use locks effectively, your system may spend so much time synchronizing shared resources that no speedup and no scaleup is achieved; your parallel system could even suffer performance degradation compared to a single instance system.

Locks are used for two main purposes in Oracle Parallel Server:

Transaction locks are used to implement row level locking for transaction consistency. Row level locking is supported in both single instance Oracle and Oracle Parallel Server.

Instance locks (also commonly known as distributed locks) guarantee cache coherency. They ensure that data and other resources distributed among multiple instances belonging to the same database remain consistent. Instance locks include PCM and non-PCM locks.

See Also: Oracle8 Concepts for a detailed treatment of Oracle locks.
Chapter 8, "Integrated Distributed Lock Manager: Access to Resources" for more information.

Local Locks

Figure 7-1 shows latches and enqueues: locking mechanisms that are synchronized within a single instance. These mechanisms are used in Oracle with or without the Parallel Server Option, whether parallel server is enabled or disabled.

Figure 7-1 Locking Mechanisms: Oracle and OPS Disabled

* The mount lock is obtained if the Parallel Server Option has been linked into your Oracle executable.

Latches

Latches are simple, low level serialization mechanisms to protect in-memory data structures in the SGA. Latches do not protect datafiles. They are entirely automatic, are held for a very short time, and can only be held in exclusive mode. Being local to the node, internal locks and latches do not provide internode synchronization.

Enqueues

Enqueues are shared memory structures which serialize access to resources in the database. These locks can be local to one instance or global to a database. They are associated with a session or transaction, and can be in any mode: shared, exclusive, protected read, protected write, concurrent read, concurrent write, or null.

Enqueues are held longer than latches, have finer granularity and more modes than latches, and protect more resources in the database. For example, if you request a table lock (a DML lock), you receive an enqueue.
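To see this in practice, you can take a table lock and then look at V$LOCK. The following sketch is illustrative only; the table name EMP is an assumption:

    -- Take a DML (table) lock; EMP is a hypothetical table name.
    LOCK TABLE emp IN ROW EXCLUSIVE MODE;

    -- The session now holds a TM enqueue, visible in V$LOCK:
    SELECT sid, type, id1, id2, lmode, request
      FROM v$lock
     WHERE type = 'TM';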

When parallel server is disabled, certain enqueues are local to a single instance. When parallel server is enabled, however, those enqueues can no longer be managed at the instance level: they must be maintained system-wide by the Integrated Distributed Lock Manager (IDLM).

When parallel server is enabled, most of the local enqueues become global enqueues. This is reflected in Figure 7-1 and Figure 7-2. They all appear as enqueues in the fixed tables; no distinction is made there between local and global enqueues. Global enqueues are handled in a distributed fashion.

Note: Transaction locks are simply a subset of enqueues.

Instance Locks

Figure 7-2 illustrates the instance locks that come into play when Oracle Parallel Server is enabled. In OPS implementations, the status of all Oracle locking mechanisms is tracked and coordinated by the Integrated DLM component.

Figure 7-2 Locking Mechanisms: Parallel Server Enabled

Instance locks (other than the mount lock) only come into existence if you start an Oracle instance with parallel server enabled. They synchronize between instances, communicating the current status of a resource among the instances of an Oracle Parallel Server.

Instance locks are held by background processes of instances, rather than by transactions. An instance owns an instance lock that protects a resource (such as a data block or data dictionary entry) when the resource enters its SGA.

The Integrated DLM component of Oracle handles locking only for resources accessed by more than one instance of a Parallel Server, to ensure cache coherency. The IDLM communicates requests for instance locks and the status of the locks between the lock processes of each instance. The V$DLM_LOCKS view lists information on all locks currently known to the IDLM.
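For example, a minimal query such as the following simply dumps what the IDLM currently knows about; the ROWNUM filter limits output on a busy system, and no particular column layout is assumed:

    SELECT *
      FROM v$dlm_locks
     WHERE ROWNUM <= 20;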

Instance locks are of two types: parallel cache management (PCM) locks and non-PCM locks.

PCM Locks

Parallel cache management locks are instance locks that cover one or more data blocks (table or index blocks) in the buffer cache. PCM locks do not lock any rows on behalf of transactions. PCM locks are implemented in two ways:

hashed locking: the default implementation, in which PCM locks are statically assigned to blocks in the datafiles

fine grain locking: an implementation in which PCM locks are assigned to blocks dynamically

With hashed locking, an instance never disowns a PCM lock unless another instance asks for it. This minimizes the overhead of instance lock operations in systems that have relatively low contention for resources. With fine grain locking, once the block is released, the lock is released. (Note that non-PCM locks are disowned.)

Non-PCM Locks

Non-PCM locks of many different kinds control access to data and control files, control library and dictionary caches, and perform various types of communication between instances. These locks do not protect datafile blocks. Examples are DML enqueues (table locks), transaction enqueues, and DDL or dictionary locks. The System Change Number (SCN) lock and the mount lock are global locks, not enqueues.

Note: The context of Oracle Parallel Server causes most local enqueues to become global; they can still be seen in the fixed tables and views which show enqueues (such as V$LOCK). The V$LOCK table does not, however, show instance locks, such as SCN locks, mount locks, and PCM locks.

Many More PCM Locks Than Non-PCM Locks

Although PCM locks are typically far more numerous than non-PCM locks, non-PCM locks are still numerous enough that you must carefully plan adequate IDLM capacity for them. Typically 5% to 10% of locks are non-PCM. Non-PCM locks do not grow in volume the same way that PCM locks do.

You control PCM locks in detail by setting initialization parameters to allocate the desired number of locks. You have almost no control, however, over non-PCM locks. You can try to eliminate the need for table locks by setting DML_LOCKS = 0 or by using the ALTER TABLE ENABLE/DISABLE TABLE LOCK command, but other non-PCM locks still persist.
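The following sketch shows both approaches; SCOTT.EMP is a hypothetical table used only for illustration:

    # init.ora: eliminate DML (table lock) enqueues for the entire instance
    DML_LOCKS = 0

    -- Or disable and re-enable table locks for an individual table
    -- (SCOTT.EMP is a hypothetical example):
    ALTER TABLE scott.emp DISABLE TABLE LOCK;
    ALTER TABLE scott.emp ENABLE TABLE LOCK;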

See Also: Chapter 16, "Ensuring IDLM Capacity for All Resources & Locks"

The LCKn Processes

With the Oracle Parallel Server, up to ten Lock processes (LCK0 through LCK9) provide inter-instance locking.

LCK processes manage most of the locks used by an instance and coordinate requests for those locks from other instances. LCK processes maintain all of the PCM locks (hashed or fine grain) and some of the non-PCM locks (such as row cache or library cache locks). LCK0 handles non-PCM as well as PCM locks. Additional lock processes, LCK1 through LCK9, are available for systems that require exceptionally high throughput of instance lock requests; they handle only PCM locks. Multiple LCK processes can improve startup and recovery time.
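The number of lock processes is set at instance startup through an initialization parameter. The fragment below is only a sketch, assuming the GC_LCK_PROCS parameter; the value shown is an example:

    # init.ora: start two lock processes (LCK0 and LCK1) with the instance
    # (assumed parameter; value is an example only)
    GC_LCK_PROCS = 2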

Although instance locks are mainly handled by the LCK processes, some instance locks are acquired directly by other background processes or by shadow (foreground) processes. In general, if a background process such as LCK owns an instance lock, it is held on behalf of the whole instance. If a foreground process owns an instance lock, it is held just for that particular process. For example, the log writer (LGWR) acquires the SCN instance lock, and the database writer (DBWR) acquires the media recovery lock. The bulk of these locks, however, are handled by the LCK processes.

Attention: Foreground processes obtain transaction locks; LCK processes do not. Transaction locks are associated with a session or transaction, not with a process.

See Also: Oracle8 Concepts for more information about the LCKn processes.

The LMON and LMD0 Processes

The LMON and LMD0 processes implement the global lock management subsystem of Oracle Parallel Server. LMON performs lock cleanup and lock invalidation after the death of an Oracle shadow process or another Oracle instance. It also reconfigures and redistributes the global locks as Oracle Parallel Server instances are started and stopped.

The LMD0 process handles remote lock requests for global locks (that is, lock requests originating from another instance for a lock owned by the current instance). All messages pertaining to global locks that are directed to an Oracle Parallel Server instance are handled by the LMD0 process of that instance.

Cost of Locks

To effectively implement locks, you need to carefully evaluate their relative expense. As a rule of thumb, latches are cheap; local enqueues are more expensive; instance locks and global enqueues are quite expensive. In general, instance locks and global enqueues have equivalent performance impact. (When parallel server is disabled, all enqueues are local; when parallel server is enabled, most are global.)

Table 7-1 dramatizes the relative expense of latches, enqueues, and instance locks. The elapsed time required per lock varies by system; the values in the "Actual Time Required" column are only examples.

Table 7-1 Comparing the Relative Cost of Locks

Class of Lock                         Actual Time Required   Relative Time Required
Latches                               1 microsecond          1 minute
Local Enqueues                        1 millisecond          1000 minutes (16 hours)
Instance Locks (or Global Enqueues)   1/10 second            100,000 minutes (69 days)

Microseconds, milliseconds, and tenths of a second all sound like negligible units of time. However, if you imagine the cost of locks using grossly exaggerated values such as those listed in the "Relative Time Required" column, you can grasp the need to carefully calibrate the use of locks in your system and applications. In a large OLTP system, for example, unregulated use of instance locks would be unacceptable. Imagine waiting hours or days to complete a transaction in real life!

Stored procedures are available for analyzing the number of PCM locks an application will use when it performs particular functions. You can set values for your initialization parameters and then call the stored procedure to see the projected number of locks required.

See Also: Chapter 15, "Allocating PCM Instance Locks".
Chapter 16, "Ensuring IDLM Capacity for All Resources & Locks".

Oracle Lock Names

This section covers the following topics:

Lock Name Format
PCM Lock Names
Non-PCM Lock Names

Lock Name Format

All Oracle enqueues and instance locks are named using one of the following formats:

type ID1 ID2

or type, ID1, ID2

or type (ID1, ID2)

where

type: a two-character type name for the lock type, as described in the V$LOCK table and listed in Table 7-2 and Table 7-3

ID1: the first lock identifier, used by the IDLM; the convention for this identifier differs from one lock type to another

ID2: the second lock identifier, used by the IDLM; the convention for this identifier differs from one lock type to another

For example, a space management lock might be named ST 1 0. A PCM lock might be named BL 1 900.

The V$LOCK table contains a list of local and global Oracle enqueues currently held or requested by the local instance. The "lock name" is actually the name of the resource; locks are taken out against the resource.
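A query along the following lines (illustrative only) displays the lock type and the two identifiers that make up each resource name, together with the modes held and requested:

    -- List enqueues held or requested by the local instance
    SELECT sid, type, id1, id2, lmode, request, block
      FROM v$lock
     ORDER BY type, id1, id2;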

PCM Lock Names

All PCM locks are Buffer Cache Management locks.

Table 7-2 PCM Lock Type and Name

Type   Lock Name
BL     Buffer Cache Management

The syntax of PCM lock names is type ID1 ID2, where

type: always BL (because PCM locks are buffer locks)

ID1: the block class

ID2: for fixed locks, the lock element (LE) index number obtained by hashing the block address (see the V$LOCK_ELEMENT fixed view); for releasable locks, the database address of the block

Sample PCM lock names are:

BL (1, 100): a data block with lock element 100

BL (4, 1000): a segment header block with lock element 1000

BL (27, 1): the undo segment header of rollback segment #10; the block class is calculated as 7 + (10 * 2) = 27
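To relate blocks in the buffer cache to the PCM lock elements covering them, a hedged sketch such as the following can be tried; it simply dumps the first few rows of the V$LOCK_ELEMENT fixed view mentioned above and assumes no particular column layout:

    SELECT *
      FROM v$lock_element
     WHERE ROWNUM <= 10;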

 

Non-PCM Lock Names

Non-PCM locks have many different names.

Table 7-3 Non-PCM Lock Types and Names

Type     Lock Name
CF       Controlfile Transaction
CI       Cross-Instance Call Invocation
DF       Datafile
DL       Direct Loader Index Creation
DM       Database Mount
DX       Distributed Recovery
FS       File Set
KK       Redo Log "Kick"
IN       Instance Number
IR       Instance Recovery
IS       Instance State
MM       Mount Definition
MR       Media Recovery
IV       Library Cache Invalidation
L[A-P]   Library Cache Lock
N[A-Z]   Library Cache Pin
Q[A-Z]   Row Cache
PF       Password File
PR       Process Startup
PS       Parallel Slave Synchronization
RT       Redo Thread
SC       System Commit Number
SM       SMON
SN       Sequence Number
SQ       Sequence Number Enqueue
SV       Sequence Number Value
ST       Space Management Transaction
TA       Transaction Recovery
TM       DML Enqueue
TS       Temporary Segment (also Table-Space)
TT       Temporary Table
TX       Transaction
UL       User-Defined Locks
UN       User Name
WL       Begin Written Redo Log
XA       Instance Registration Attribute Lock
XI       Instance Registration Lock

See Also: Oracle8 Reference for descriptions of all these non-PCM locks.

Coordination of Locking Mechanisms by the Integrated DLM

The Integrated DLM component is a distributed resource manager that is internal to the Oracle Parallel Server. This section explains how the IDLM coordinates locking mechanisms that are internal to Oracle. Chapter 8, "Integrated Distributed Lock Manager: Access to Resources" presents a detailed description of IDLM features and functions.

This section covers the following topics:

The Integrated DLM Tracks Lock Modes
The Instance Maps Database Resources to Integrated DLM Resources
How IDLM Locks and Instance Locks Relate
The Integrated DLM Provides One Lock Per Instance on a Resource

The Integrated DLM Tracks Lock Modes

In Oracle Parallel Server implementations, the Integrated DLM facility keeps an inventory of all the Oracle instance locks and global enqueues held against the resources of your system. It acts as a referee when conflicting lock requests arise.

In Figure 7-3 the IDLM is represented as an inventory sheet listing resources and the current status of locks on each resource across the parallel server. Locks are represented as follows: S for shared mode, N for null mode, X for exclusive mode.

Figure 7-3 The Integrated DLM Inventory of Oracle Resources and Locks

This inventory includes all instances. For example, resource BL 1, 101 is held by three instances with shared locks and three instances with null locks. Since six locks are held on this one resource, at least six instances are evidently running on this system.

The Instance Maps Database Resources to Integrated DLM Resources

Oracle database resources are mapped to IDLM resources, with the necessary mapping performed by the instance. For example, a hashed lock on an Oracle database block with a given data block address (such as file 2 block 10) is translated into a BL resource whose identifiers are the class of the block and the lock element number (such as BL 9 1). The data block address (DBA) is thus translated from the Oracle resource level to the IDLM resource level; the hashing function used depends on the GC_* parameter settings. The IDLM resource name identifies the physical resource in views such as V$LOCK.

Note: For DBA fine grain locking, the database address is used as the second identifier, rather than the lock element number.

Figure 7-4 Database Resource Names Corresponding to IDLM Resource Names
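The hashed assignment of PCM locks to blocks is driven by GC_* initialization parameters. The fragment below is a sketch that assumes the GC_FILES_TO_LOCKS parameter used to allocate hashed PCM locks; the file numbers and lock counts are arbitrary examples:

    # init.ora sketch: hash 400 PCM locks over datafile 1 and
    # 1000 PCM locks over datafiles 2 through 5 (example values only)
    GC_FILES_TO_LOCKS = "1=400:2-5=1000"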

How IDLM Locks and Instance Locks Relate

Figure 7-5 illustrates the way in which IDLM locks and PCM locks relate. For Instance B to read the value of data at data block address x, it must first check for locks on that data. The instance translates the block's database resource name to the IDLM resource name, and asks the IDLM for a shared lock in order to read the data.

As illustrated in the following conceptual diagram, the IDLM checks all the outstanding locks on the granted queue and determines that there are already two shared locks on the resource BL 1,441. Since shared locks are compatible with read-only requests, the IDLM grants a shared lock to Instance B. The instance then proceeds to query the database to read the data at data block address x. The database returns the data.

Figure 7-5 The IDLM Checks Status of Locks

Note: The global lock space is managed in distributed fashion by the LMDs of all the instances cooperatively.

If another instance already held an exclusive lock on the required block, Instance B would have to wait for that lock to be released. The IDLM would place Instance B's shared lock request on the convert queue, notify the instance when the exclusive lock was removed, and then grant the request for a shared lock.

The term IDLM lock refers simply to the IDLM's notations for tracking and coordinating the outstanding locks on a resource.

The Integrated DLM Provides One Lock Per Instance on a Resource

The IDLM provides one lock per instance on a PCM resource. As illustrated in Figure 7-6, if you have a four-instance system and require a buffer lock on a single resource, you will actually end up with four locks, one per instance.

Figure 7-6 Resources Have One Lock Per Instance

For non-PCM resources, the number of locks per resource depends on the type of lock.

See Also: Chapter 10, "Non-PCM Instance Locks"




Copyright © 1997 Oracle Corporation. All Rights Reserved.