Oracle9i Real Application Clusters Deployment and Performance
Release 1 (9.0.1)

Part Number A89870-02

Configuring Multi-Block Lock Assignments (Optional)

This appendix explains how to configure locks to cover multiple blocks. Refer to this appendix only in the rare circumstances in which you need to override the default resource control scheme of Oracle Real Application Clusters, as performed by the Global Cache Service (GCS) and the Global Enqueue Service (GES).

Before You Override the Global Cache and Global Enqueue Service Resource Control Mechanisms

Oracle strongly recommends that you avoid overriding the resource control performed by the Global Cache and Global Enqueue Services. The default scheme provides exceptional performance for almost all system types in almost all Real Application Clusters environments. In addition, assigning locks requires additional administrative and tuning effort. Therefore, using the default scheme is preferable to performing the complex tasks required to override the default strategy as described in this appendix.


Use the information in this appendix only for exceptional cases, such as an application whose data access patterns are almost exclusively read-mostly.

Deciding Whether to Override Global Cache Service and Global Enqueue Service Processing

Cache Fusion provides exceptional scalability and performance using cache-to-cache transfers of data that is not cached locally. In other words, before an instance reads a data block from disk, Oracle attempts to obtain the requested data from another instance's cache. If the requested block exists in another cache, then the data block is transferred across the interconnect from the holding instance to the requesting instance.

Real Application Clusters' resource control scheme guarantees the integrity of changes to data made by multiple instances. By default, each data block in an instance's buffer cache is protected by the Global Cache Service. The GCS tracks the access modes, roles, privileges, and states of these resources.

In rare situations, you may want to override the GCS and the GES by configuring multi-block locks, where one lock covers multiple data blocks in a file. If blocks are frequently accessed from the same instance, or if blocks are accessed from multiple nodes in compatible modes, such as shared mode for concurrent reads, then such a lock configuration may improve performance.

To do this, set the GC_FILES_TO_LOCKS parameter to specify the number of locks that Oracle uses for particular files. The parameter syntax also enables you to specify lock allocations for groups of files, as well as the number of contiguous data blocks to be covered by each lock. If you assign values to GC_FILES_TO_LOCKS indiscriminately, then adverse performance, such as excessive forced disk writes, can result. Therefore, set GC_FILES_TO_LOCKS only for the situations described under "When to Use Locks".

When to Use Locks

Using multiple locks for each file can be useful for the types of data shown in Table A-1.

Table A-1 When to Use Locks

Situation: The data is mostly read-only.
Reason: A few locks can cover many blocks without requiring frequent lock operations. These locks are released only when another instance needs to modify the data. Assigning locks can result in better performance on read-only data with parallel execution processing. If the data is strictly read-only, then consider designating the tablespace as read-only.

Situation: The data can be partitioned according to the instance that is likely to modify it.
Reason: Lock assignments that match this partitioning scheme allow instances to hold disjoint sets of locks. This reduces the need for lock operations.

Situation: A large amount of data is modified by a relatively small set of instances.
Reason: Lock assignments permit access to an un-cached database block to proceed without Parallel Cache Management activity. However, this is only possible if the block is already in the requesting instance's cache.

Multi-block locks can cause additional cross-instance cache management activity, because false conflicts can occur between instances that modify different database blocks covered by the same lock. Resolving false forced disk writes, or excessive forced disk writes generally, can require writing several blocks from the cache of the instance that currently owns access to the blocks.


Set the GC_FILES_TO_LOCKS initialization parameter to specify the number of locks covering data blocks in a datafile or set of datafiles.


The syntax for the GC_FILES_TO_LOCKS parameter enables you to specify the relationship between locks and files. The syntax for this parameter is:

GC_FILES_TO_LOCKS="{file_list=#locks[!blocks] [EACH][:]} . . ." 



file_list

Specifies a single file, a range of files, or a list of files and ranges, as follows: fileidA[-fileidC][,fileidE[-fileidG]] ...

Query the data dictionary view DBA_DATA_FILES to find the correspondence between file names and file ID numbers.

#locks

Sets the number of locks to assign to file_list.

!blocks

Specifies the number of contiguous data blocks to be covered by each lock; also called the blocking factor.

EACH

Specifies #locks as the number of locks to be allocated to each file in file_list.


All instances must have identical values for GC_FILES_TO_LOCKS. Also, do not use spaces within the quotation marks of the GC_FILES_TO_LOCKS parameter value.

The default value for !blocks is 1. When you specify !blocks, each of the #locks locks covers that many contiguous data blocks. To specify a value for blocks, use the exclamation point (!) separator. You would primarily specify !blocks, without the EACH keyword, to allocate sets of locks that cover multiple datafiles.

Always set the !blocks value so that it does not interfere with the data partitioning gained by using free list groups. Normally you do not need to preallocate extents: when a row is inserted into a table and Oracle allocates a new extent, Oracle allocates the contiguous blocks specified with !blocks in GC_FILES_TO_LOCKS to the free list group associated with an instance.
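As a quick sanity check on this syntax, the following Python sketch expands a GC_FILES_TO_LOCKS value into per-file settings. It is illustrative only: parse_gc_files_to_locks is not an Oracle utility, and it assumes exactly the documented syntax with no spaces.

```python
import re

def parse_gc_files_to_locks(value):
    """Expand a GC_FILES_TO_LOCKS string into {file_id: (locks, blocking, each)}.

    Illustrative only; mirrors the documented syntax:
        {file_list=#locks[!blocks][EACH][:]} ...
    """
    result = {}
    for clause in value.strip('"').split(':'):
        m = re.fullmatch(r'([\d,\-]+)=(\d+)(?:!(\d+))?(EACH)?', clause)
        if not m:
            raise ValueError('bad clause: %r' % clause)
        file_list, locks, blocking, each = m.groups()
        files = []
        for part in file_list.split(','):
            lo, _, hi = part.partition('-')          # "2-4" -> files 2, 3, 4
            files.extend(range(int(lo), int(hi or lo) + 1))
        for f in files:
            result[f] = (int(locks), int(blocking or 1), each is not None)
    return result

# Example 2 from this appendix: files 2-4 and 10-12 get 400 locks EACH,
# while files 7-9 share a single set of 300 locks.
alloc = parse_gc_files_to_locks("1=500:2-4,10-12=400EACH:5=150:6=250:7-9=300")
```

A check like this can catch malformed clauses before an instance restart fails on the parameter.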

Lock Assignment Examples

For example, you can assign 300 locks to file 1 and 100 locks to file 2 by adding the following line to your initialization parameter file:

   GC_FILES_TO_LOCKS = "1=300:2=100"

The following entry specifies a total of 1500 locks: 500 each for files 1, 2, and 3:

   GC_FILES_TO_LOCKS = "1-3=500EACH"

By contrast, the following entry specifies a total of only 500 locks spread across the three files:

   GC_FILES_TO_LOCKS = "1-3=500"

The following entry indicates that Oracle should use 1000 distinct locks to protect file 1. The data in the file is protected in groups of 25 contiguous blocks.

   GC_FILES_TO_LOCKS = "1=1000!25"

If you define a datafile with the AUTOEXTEND clause or if you issue the ALTER DATABASE ... DATAFILE ... RESIZE statement, then you may also need to adjust the lock assignment.

When you add new datafiles, decide whether these new files should be subject to the default control of the GCS or whether you want to assign locks using the GC_FILES_TO_LOCKS initialization parameter.

The following examples show different methods of mapping blocks to locks and how the same locks are used on multiple datafiles.

Figure A-1 Mapping Locks to Data Blocks

Example 1

Figure A-1 shows an example of mapping blocks to locks for the parameter value GC_FILES_TO_LOCKS = "1=60:2-3=40:4=140:5=30".

In datafile 1 shown in Figure A-1, 60 locks map to 120 blocks, which is a multiple of 60. Each lock covers two data blocks.

In datafiles 2 and 3, 40 locks map to a total of 160 blocks. A lock can cover either one or two data blocks in datafile 2, and two or three data blocks in datafile 3. Thus, one lock can cover three, four, or five data blocks across both datafiles.

In datafile 4, each lock maps exactly to a single data block, since there is the same number of locks as data blocks.

In datafile 5, 30 locks map to 170 blocks, which is not a multiple of 30. Each lock therefore covers five or six data blocks.

Each lock illustrated in Figure A-1 can be held in either shared read mode or read-exclusive mode.
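The coverage figures in Example 1 follow from integer division: blocks are spread as evenly as possible across the assigned locks. This small Python sketch (illustrative only, not Oracle code) reproduces the arithmetic:

```python
def blocks_per_lock(num_blocks, num_locks):
    """Return the set of possible coverage counts when num_blocks are
    spread as evenly as possible over num_locks locks (default !blocks = 1)."""
    base, extra = divmod(num_blocks, num_locks)
    return {base} if extra == 0 else {base, base + 1}

print(blocks_per_lock(120, 60))   # datafile 1: every lock covers exactly 2 blocks
print(blocks_per_lock(170, 30))   # datafile 5: each lock covers 5 or 6 blocks
print(blocks_per_lock(140, 140))  # datafile 4: one lock per block
```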

Example 2

The following parameter setting allocates 500 locks to datafile 1; 400 locks each to files 2, 3, 4, 10, 11, and 12; 150 locks to file 5; 250 locks to file 6; and 300 locks collectively to files 7 through 9:

  GC_FILES_TO_LOCKS = "1=500:2-4,10-12=400EACH:5=150:6=250:7-9=300" 

This example assigns a total of (500 + (6*400) + 150 + 250 + 300) = 3600 locks. You can specify more than this number of locks if you add more datafiles.

Example 3

In Example 2, 300 locks are allocated to datafiles 7, 8, and 9 collectively with the clause "7-9=300"; the keyword EACH is omitted. If each of these datafiles contains 900 data blocks, for a total of 2700 data blocks, then each lock covers nine data blocks. Because each datafile's size is a multiple of 300, each lock covers three data blocks in each of the three datafiles.

Example 4

The following parameter value allocates 200 locks each to files 1 through 3; 50 locks to datafile 4; 100 locks collectively to datafiles 5, 6, 7, and 9; and 20 locks in contiguous 50-block groups to datafiles 8 and 10 combined:

  GC_FILES_TO_LOCKS = "1-3=200EACH 4=50:5-7,9=100:8,10=20!50" 

In this example, a lock assigned to the combined datafiles 5, 6, 7, and 9 covers one or more data blocks in each datafile, unless a datafile contains fewer than 100 data blocks. If datafiles 5 to 7 contain 500 data blocks each and datafile 9 contains 100 data blocks, then each lock covers 16 data blocks: one in datafile 9 and five each in the other datafiles. Alternatively, if datafile 9 contained 50 data blocks, half of the locks would cover 16 data blocks (one in datafile 9); the other half of the locks would only cover 15 data blocks (none in datafile 9).

The 20 locks assigned collectively to datafiles 8 and 10 cover contiguous groups of 50 data blocks. If the datafiles contain multiples of 50 data blocks and the total number of data blocks is not greater than 20 times 50, that is, 1000, then each lock covers data blocks in either datafile 8 or datafile 10, but not in both. This is because each of these locks covers 50 contiguous data blocks. If the size of datafile 8 is not a multiple of 50 data blocks, then one lock must cover data blocks in both files. If the sizes of datafiles 8 and 10 exceed 1000 data blocks, then some locks must cover more than one group of 50 data blocks, and the groups might be in different files.

Example 5


In this example, four locks are specified for files 1 and 2. Therefore, the number of blocks covered by each lock is eight ((16+16)/4). The blocks are not contiguous.

Figure A-2 GC_FILES_TO_LOCKS Example 5

Example 6


In this example, four locks are specified for files 1 and 2. However, the locks must cover eight contiguous blocks.

Figure A-3 GC_FILES_TO_LOCKS Example 6

Example 7


In this example, four locks are specified for file 1 and four for file 2. The locks must cover four contiguous blocks.

Figure A-4 GC_FILES_TO_LOCKS Example 7

Example 8


In this example, file 1 has multi-block lock control with 4 locks. On file 2, locks are allocated.

Figure A-5 GC_FILES_TO_LOCKS Example 8


Blocking Factor, Extent Allocation, and Free List Groups

Use the !blocks option of GC_FILES_TO_LOCKS to align the extents of contiguous blocks allocated to an object with lock coverage. When you use the !blocks notation, each lock covers a group of contiguous data blocks. For example:

GC_FILES_TO_LOCKS = "12=1000!25"

This allocates 1000 locks to file 12 with a periodicity of 25; in other words, one lock covers 25 contiguous blocks.

If an extent definition (INITIAL EXTENT, NEXT EXTENT) for a table corresponds to !blocks, then in this case the 25 contiguous blocks covered by a lock coincide with the extent boundaries. In other words, all the blocks covered by a multi-block lock are in the same extent.

This is important when using free list groups. When no more blocks exist with free space, Oracle allocates a new extent. The blocks in this extent are then allocated to a particular free list group. If not properly configured, then locks can also cover blocks from another extent which might be in the free list group used by another instance. This results in false forced disk writes.

If you do not use the blocking factor as described in this section, then the same lock can cover blocks from different extents allocated to distinct free list groups, thus incurring additional overhead. This situation is what free list groups are supposed to avoid.
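A configuration check along these lines can catch misalignment before it causes false forced writes. The following Python sketch (extent_aligned is a hypothetical helper, not an Oracle tool) tests whether an extent size is a whole multiple of the blocking factor, which is the condition for a multi-block lock to stay within one extent:

```python
def extent_aligned(extent_blocks, blocking_factor):
    """True when every multi-block lock falls entirely within one extent,
    i.e. the extent size is a whole multiple of the !blocks factor."""
    return extent_blocks % blocking_factor == 0

print(extent_aligned(100, 25))  # extents of 100 blocks align with !25
print(extent_aligned(90, 25))   # a lock would straddle two extents
```

With 90-block extents and !25, the fourth lock would cover the last 15 blocks of one extent and the first 10 of the next, which may belong to a different free list group.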

Dynamic Allocation of Blocks on Lock Boundaries

To accommodate growth, the strategy of dynamically allocating blocks to free list groups is more effective than the preallocation of extents.

See Also:

"Extent Preallocation Examples" before using the methods described in this section  

You can also use the !blocks option of GC_FILES_TO_LOCKS to dynamically allocate blocks to a free list from the high water mark within a lock boundary. This method does not eliminate all forced writes on the segment header. Instead, this method allocates blocks as needed so you do not have to preallocate extents.

Because locks are owned by instances, blocks are allocated on a per-instance basis, which is why they are allocated to free list groups. Within an instance, blocks can be allocated to different free lists.

Using this method, you can either explicitly allocate the !blocks value, or leave the balance of new blocks covered by the existing lock. If you choose the latter, there still may be contention for the existing locks by allocation to other instances.

Moving a Segment's High Water Mark

A segment's high water mark is the current limit to the number of blocks that have been allocated within the segment. If you are allocating extents dynamically, the high water mark is also the lock boundary. The lock boundary and the number of blocks that will be allocated at one time within an extent must coincide. This value must be the same for all instances.

Consider the example in Figure A-6, with four blocks for each lock (!4). Locks have been allocated before the block content has been entered. If Oracle fills data block D2, held by Lock 2, and then allocates another range of four blocks, only the blocks fitting within the lock boundary are allocated: in this case, blocks 7 and 8, both of which are protected by the current lock. With the high water mark at 8, when instance 2 allocates a range of blocks, all four blocks 9 to 12 are allocated, covered by Lock 3. The next time instance 1 allocates blocks, it gets blocks 13 to 16, covered by Lock 4.
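The allocation rule just described can be modeled in a few lines of Python. This is an illustrative model of the arithmetic, not Oracle internals: each allocation advances the high water mark only to the end of the current lock's block range.

```python
def allocate_to_lock_boundary(hwm, blocks_per_lock):
    """Advance the high water mark to the end of the current lock's block
    range, so newly allocated blocks never cross a lock boundary.
    Returns (first_new_block, new_hwm)."""
    boundary = ((hwm // blocks_per_lock) + 1) * blocks_per_lock
    return hwm + 1, boundary

# Figure A-6 scenario with !4: HWM at 6, so instance 1 gets only blocks 7-8.
first, hwm = allocate_to_lock_boundary(6, 4)    # -> (7, 8), still under Lock 2
# The next allocation (instance 2) gets the full range 9-12 under Lock 3.
first, hwm = allocate_to_lock_boundary(hwm, 4)  # -> (9, 12)
# Then instance 1 gets blocks 13-16 under Lock 4.
first, hwm = allocate_to_lock_boundary(hwm, 4)  # -> (13, 16)
```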

Figure A-6 A File with a High Water Mark That Moves as Oracle Allocates Blocks


This example assumes that GC_FILES_TO_LOCKS has the following setting for both instances:

GC_FILES_TO_LOCKS = "1000!5"

With the EACH option specified, each file in file_list is allocated #locks number of locks. Within each file, !blocks specifies the number of contiguous data blocks to be covered by each lock.

Figure A-7 shows the incremental process by which the segment grows:

In this way, if user A on Instance 1 is working on block 10, no one else from either instance can work on any block in the range of blocks covered by Lock 2. This includes blocks 6 through 10.

Figure A-7 Allocating Blocks within An Extent


Additional Considerations for Setting GC_FILES_TO_LOCKS

Setting GC_FILES_TO_LOCKS in Real Application Clusters has further implications. For example, setting it can increase monitoring overhead and you may have to frequently adjust the parameter when the database grows or when you add files. Moreover, you cannot dynamically change the setting for GC_FILES_TO_LOCKS. To change the setting, you must stop the instances, alter the setting, and restart all the instances. In addition, consider the following topics in this section:

Expanding or Adding Datafiles

Sites that run continuously cannot afford to shut down for parameter value adjustments. Therefore, when you use the GC_FILES_TO_LOCKS parameter, remember to provide room for growth or room for files to extend.

You must also carefully consider how you use locks on files that do not grow significantly, such as read-only or read-mostly files. It is possible that better performance would result from assigning fewer locks for multiple blocks. However, if the expected CPU and memory savings due to fewer locks do not outweigh the administrative overhead, use the resource control scheme of the Global Cache and Global Enqueue Services.

Files To Avoid Including in GC_FILES_TO_LOCKS Settings

Never include the following types of files in the GC_FILES_TO_LOCKS parameter list:

Database Design Considerations and Free List Groups

When data is frequently inserted into a table from multiple nodes and the table is not partitioned, you can use free list groups to avoid performance issues. Free list groups separate the data structures associated with the free space management of a table into disjoint sets that are available for individual instances. However, if you override the Global Cache and Global Enqueue Services, consider assigning locks with free lists as described under the following topic.

Associating Locks with Free Lists

If each extent in a table is in a separate datafile, you can use the GC_FILES_TO_LOCKS parameter to allocate specific ranges of locks to each extent, so that each set of locks is associated with only one group of free lists.

Figure A-8 shows multiple extents in separate files. The GC_FILES_TO_LOCKS parameter allocates 10 locks to files 8 and 10, and 10 locks to files 9 and 11. Extents A and C are in the same free list group, and extents B and D are in another free list group. One set of locks is associated with files 8 and 10, and a different set of locks is associated with files 9 and 11. You do not need separate locks for files that are in the same free list group, such as files 8 and 10, or files 9 and 11.

Figure A-8 Extents and Free List Groups


This example assumes total partitioning for reads as well as writes. If more than one instance updates blocks, then it is desirable to have more than one lock for each file to minimize forced reads and writes. This is because even with a shared lock, all blocks held by a lock are subject to forced reads when another instance tries to read even one of the blocks held in exclusive mode.

Tuning Parallel Execution on Real Application Clusters

To optimize parallel execution in Real Application Clusters environments when you are not using the default resource control scheme, you must set the GC_FILES_TO_LOCKS parameter accurately. By default, data block locking assigns one lock to each block; during a full table scan, for example, a lock must be acquired for each block read into the scan. Assigning multi-block locks with GC_FILES_TO_LOCKS reduces the number of lock acquisitions for each scan.

These lock assignments also affect memory usage, and thus indirectly affect performance.

Analyzing Real Application Clusters I/O Statistics

If you set GC_FILES_TO_LOCKS, then Cache Fusion is disabled. In this case, you can use three statistics in the V$SYSSTAT view to measure the I/O workload related to global cache synchronization:

DBWR cross-instance writes occur when Oracle resolves inter-instance data block contention by writing the requested block to disk before the requesting node can use it.

Cache Fusion eliminates the disk I/O for current and consistent-read versions of blocks. This can lead to a substantial reduction in physical writes and reads performed by each instance.

Analyzing Real Application Clusters I/O Statistics Using V$SYSSTAT

You can obtain the following statistics to quantify the write I/Os required for global cache synchronization.

  1. Use this syntax to query the V$SYSSTAT view:

       SELECT NAME, VALUE FROM V$SYSSTAT 
       WHERE NAME IN ('DBWR cross-instance writes', 
       'remote instance undo block writes',
       'remote instance undo header writes',
       'physical writes');

    Oracle responds with output similar to:

    NAME                                                           VALUE
    --------------------------------------------------------- ----------
    physical writes                                                41802
    DBWR cross-instance writes                                      5403
    remote instance undo block writes                                  0
    remote instance undo header writes                                 2
    4 rows selected.

    The statistic physical writes refers to all physical writes that occurred from a particular instance, as performed by DBWR. The value for DBWR cross-instance writes accounts for all writes caused by writing a dirty buffer that contains a data block requested for modification by another instance. Because the DBWR process also handles cross-instance writes, DBWR cross-instance writes are a subset of all physical writes.

  2. Calculate the ratio of Real Application Clusters-related I/O to overall physical I/O using this equation:

    Real Application Clusters I/O ratio = DBWR cross-instance writes / physical writes

  3. Use this equation to calculate how many writes to rollback segments occur when a remote instance needs to read from rollback segments that are in use by a local instance:

    rollback segment write ratio = (remote instance undo header writes + remote instance undo block writes) / physical writes

    The ratio shows how much disk I/O is related to writes to rollback segments.

  4. To estimate the number or percentage of reads due to global cache synchronization, use the number of lock requests for conversions from NULL(N) to Shared mode (S) counted in V$LOCK_ACTIVITY and the physical reads statistics from V$SYSSTAT.

    The following formula computes the percentage of reads that are only for local work where lock buffers for read represents the N-to-S block access mode conversions:

    percentage of reads for local work = (physical reads - lock buffers for read) / physical reads

    These so-called forced reads occur when a cached data block that was previously modified by the local instance had to be written to disk because of a request from another instance, and the block is then re-acquired by the local instance for a read.
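Using the sample V$SYSSTAT values from step 1, the ratios in steps 2 and 3 work out as follows. This is a Python sketch of the arithmetic only; the exact formula layout is reconstructed from the surrounding text.

```python
# Sample values from the V$SYSSTAT output in step 1.
physical_writes           = 41802
cross_instance_writes     = 5403
remote_undo_block_writes  = 0
remote_undo_header_writes = 2

# Step 2: share of physical writes due to global cache synchronization.
sync_write_ratio = cross_instance_writes / physical_writes

# Step 3: share of writes caused by remote reads of active rollback segments.
undo_write_ratio = (remote_undo_block_writes +
                    remote_undo_header_writes) / physical_writes

print(f"{sync_write_ratio:.1%}")   # about 12.9% of all physical writes
print(f"{undo_write_ratio:.4%}")
```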

Monitoring Multi-Block Lock Usage by Detecting False Forced Writes

False forced writes occur when Oracle down-converts a lock that protects two or more blocks if the blocks are concurrently updated from different nodes. Assume that each node is updating a different block covered by the same lock. In this case, each node must write both blocks to disk even though the node is updating only one of them. This is necessary because the same lock covers both blocks.

Statistics are not available to show false forced write activity. To assess false forced write activity you can only consider circumstantial evidence as described in this section.

The following SQL statement shows the number of lock operations causing a write, and the number of blocks actually written:

       SELECT (A.COUNTER + B.COUNTER + C.COUNTER) "LOCK OPERATIONS", 
              VALUE "BLOCKS WRITTEN" 
       FROM V$SYSSTAT, V$LOCK_ACTIVITY A, V$LOCK_ACTIVITY B, V$LOCK_ACTIVITY C 
       WHERE A.FROM_VAL = 'X' 
       AND A.TO_VAL = 'NULL' 
       AND B.FROM_VAL = 'X' 
       AND B.TO_VAL = 'S' 
       AND C.FROM_VAL = 'X' 
       AND C.TO_VAL = 'SSX' 
       AND NAME = 'DBWR cross-instance writes'; 

Table A-2 shows how to interpret the forced disk write rate, that is, the number of blocks written for each lock operation.

Table A-2 Interpreting the Forced Disk Write Rate

Less than 1: False forced writes may be occurring, but there are more lock operations than forced disk writes, and DBWR is writing blocks fast enough that no write is needed at the time of the lock operation. This is also known as a soft ping, meaning the forced disk write requires lock activity but no I/O.

Equal to 1: Each lock operation involving a potential write causes the write to occur. False forced writes may be occurring.

Greater than 1: False forced writes are definitely occurring.

Use this formula to calculate the percentage of false forced writes:

pingrate = (DBWR cross-instance writes - lock operations) / DBWR cross-instance writes

where lock operations is the number of X-to-NULL, X-to-S, and X-to-SSX lock down-converts counted in V$LOCK_ACTIVITY.

Then check the total number of writes and calculate the number due to false forced writes:

   SELECT Y.VALUE "ALL WRITES", 
       Z.VALUE "CROSS-INSTANCE WRITES", 
       Z.VALUE * pingrate "FALSE PINGS" 
   FROM V$SYSSTAT Z, 
       V$SYSSTAT Y 
   WHERE Z.NAME = 'DBWR cross-instance writes' 
   AND Y.NAME = 'physical writes'; 

Here, pingrate is given by the following SQL statement:

   SELECT (VALUE - (A.COUNTER + B.COUNTER + C.COUNTER)) / VALUE "PINGRATE" 
   FROM V$SYSSTAT, V$LOCK_ACTIVITY A, V$LOCK_ACTIVITY B, V$LOCK_ACTIVITY C 
   WHERE A.FROM_VAL = 'X' 
       AND A.TO_VAL   = 'NULL'
       AND B.FROM_VAL = 'X'
       AND B.TO_VAL   = 'S'
       AND C.FROM_VAL = 'X'
   AND C.TO_VAL   = 'SSX'
   AND NAME = 'DBWR cross-instance writes';

The goal is not only to reduce overall forced disk writes, but also to reduce false forced writes. To do this, look at the distribution of instance locks in GC_FILES_TO_LOCKS and check the data in the files.
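To make the interpretation in Table A-2 concrete, here is a small Python sketch of the arithmetic. The cross-instance write count comes from the earlier sample output; the down-convert count of 4000 is an invented illustration, not a value from this document.

```python
def false_ping_stats(cross_instance_writes, lock_down_converts):
    """Illustrative arithmetic for Table A-2:
    rate  = forced disk writes per lock down-convert
    false = writes beyond one per down-convert (0 when rate <= 1)."""
    rate = cross_instance_writes / lock_down_converts
    false_writes = max(0, cross_instance_writes - lock_down_converts)
    return rate, false_writes

# With 5403 cross-instance writes but only 4000 down-converts, the rate
# exceeds 1, so at least 1403 writes must be false forced writes.
rate, false_writes = false_ping_stats(5403, 4000)
print(rate, false_writes)
```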

Lock Names and Lock Formats

This section describes lock names and lock name formats.

Lock Names and Lock Name Formats

Internally, Oracle global lock names use the following format:

type ID1 ID2

where:

type

A two-character name for the lock type, for example, BL, TX, or TM.

ID1

The first lock identifier. The meaning and format of this identifier differs from one lock type to another.

ID2

The second lock identifier. The meaning and format of this identifier differs from one lock type to another.

For example, a space management lock might be named ST00. A lock might be named BL 1 900.

The clients of the lock manager define the lock type, for example BL for a lock, and two parameters, id1 and id2, and pass these parameters to the GCS API to open a lock. The lock manager does not distinguish between different types of locks. Each component of Oracle defines the type and the two parameters for its own needs, in other words, id1 and id2 have a meaning consistent with the requirements of each component.

Lock Names

All locks described here are Buffer Cache Management locks, which are of type BL. The syntax of these lock names is type ID1 ID2, where:

type

Is always BL, because these are buffer locks.

ID1

The database address of the block.

ID2

The block class.

Examples of lock names are:

BL (100, 1)

This is a data block with lock element 100.

BL (1000, 4)

This is a segment header block with lock element 1000.

BL (27, 1)

This is an undo segment header for rollback segment #10; the value 27 is derived from the formula 7 + (10 * 2).
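The formula in the last example can be expressed directly; Python is used here only to show the arithmetic from the text.

```python
def undo_segment_header_class(rollback_segment_number):
    # Per the formula in the example above: 7 + (rollback segment number * 2).
    return 7 + rollback_segment_number * 2

print(undo_segment_header_class(10))  # 27, matching the BL (27, 1) example
```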

Lock Types and Names

There are several different types and names of locks, as shown in Table A-3.

Table A-3 Lock Types and Names

Controlfile Transaction
Parallel Execution Process Synchronization
Cross-Instance Call Invocation
Redo Thread
System Commit Number
Direct Loader Index Creation
Database Mount
Sequence Number
Distributed Recovery
Sequence Number Enqueue
File Set
Sequence Number Value
Redo Log Kick
Space Management Transaction
Instance Number
Transaction Recovery
Instance Recovery
DML Enqueue
Instance State
Temporary Segment (also Table-Space)
Mount Definition
Temporary Table
Media Recovery
Library Cache Invalidation
User-Defined Locks
Library Cache Lock
User Name
Library Cache Pin
Begin Written Redo Log
Row Cache
Instance Registration Attribute Lock
Password File
Instance Registration Lock
Process Startup

Copyright © 1996-2001, Oracle Corporation. All Rights Reserved.