
Sun Java(TM) System Directory Server 5.2 2005Q1 Performance Tuning Guide 

Chapter 3
Tuning Cache Sizes

Directory Server caches directory information in memory and on disk in order to be able to respond more quickly to client requests. Properly tuned caching minimizes the need to access disk subsystems when handling client requests.


Note

Unless caches are tuned and working properly, other tuning may have only limited impact on performance.


This chapter covers the following topics:

  • Types of Cache
  • How Searches Use Cache
  • How Updates Use Cache
  • How Suffix Initialization Uses Cache
  • Optimizing For Searches
  • Optimizing for Updates
  • Cache Priming and Monitoring
  • Other Optimizations


Types of Cache

Directory Server handles three types of cache as described in Table 3-1.

Table 3-1 Caches

  Database: Each Directory Server instance has one database cache that holds both indexes and entries in database format. Refer to Database Cache for more information.

  Entry: Each suffix has an entry cache that holds entries retrieved from the database during previous operations and formatted for quick delivery to client applications. Refer to Entry Cache for more information.

  Import: Each Directory Server instance has an import cache that is structurally similar to the database cache and is used during bulk loading. Refer to Import Cache for more information.

Directory Server also benefits from file system cache, handled by the underlying operating system, and from I/O buffers in disk subsystems.

Figure 3-1 shows caches for an instance of Directory Server handling three suffixes, each with its own entry cache. The instance is configured to handle significant disk activity.

Figure 3-1 Entry and Database Caches in Context

Directory Server controls entry and database caches

Database Cache

Each Directory Server instance has one database cache. The database cache holds pages from the database containing indexes and entries. Each page is not an entry, but a slice of memory containing a portion of the database. You specify database cache size (nsslapd-dbcachesize) in bytes. The change to database cache size takes effect after you restart the server, with database cache space allocated at server startup.
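As a sketch, the following generates the LDIF needed to change nsslapd-dbcachesize; the 100 MB target is purely illustrative, and the configuration entry DN shown is the conventional location of the database-wide settings. Pipe the output to ldapmodify with appropriate bind credentials, then restart the server.

```shell
# Hypothetical target of 100 MB for the database cache; adjust to your deployment.
DBCACHE=$((100 * 1024 * 1024))   # nsslapd-dbcachesize is specified in bytes

# Generate the modify LDIF; pipe this to ldapmodify and restart the
# server for the new size to take effect.
cat <<EOF
dn: cn=config,cn=ldbm database,cn=plugins,cn=config
changetype: modify
replace: nsslapd-dbcachesize
nsslapd-dbcachesize: $DBCACHE
EOF
```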

Directory Server moves pages between the database files and the database cache to maintain maximum database cache size. The actual amount of memory used by Directory Server for database cache may be up to 25 percent larger than the size you specify, due to additional memory needed to manage the database cache itself.

When using a very large database cache, verify through empirical testing and by monitoring memory use with tools such as pmap(1) on Solaris systems that the memory used by Directory Server does not exceed the size of available physical memory. Exceeding available physical memory causes the system to start paging repeatedly, resulting in severe performance degradation.

The ps(1) utility can also be used with the -p pid and -o format options to view current memory used by a particular process such as Directory Server (ns-slapd). Refer to the operating system documentation for details.
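For example, the following sketch reports virtual and resident memory for a process in kilobytes. The shell's own PID ($$) is used only so the example is self-contained; in practice, substitute the ns-slapd process ID.

```shell
# Report memory use of a process in kilobytes.
# $$ (this shell) is a stand-in; use the ns-slapd PID in practice,
# for example: PID=$(pgrep -x ns-slapd)
PID=$$
VSZ=$(ps -p "$PID" -o vsz=)   # total virtual size, in KB
RSS=$(ps -p "$PID" -o rss=)   # resident set size, in KB
echo "process $PID: vsz=${VSZ} KB, rss=${RSS} KB"
```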

For 32-bit servers, database cache size must be limited such that the total Directory Server (ns-slapd) process size is less than the maximum process size allowed by the operating system. In practice, this limit is generally in the 2-3 GB range.

Refer to the Directory Server Administration Reference for further information about the valid range of nsslapd-dbcachesize values.

Entry Cache

The entry cache holds recently accessed entries, formatted for delivery to client applications. You specify entry cache size for a suffix (nsslapd-cachememsize) in bytes. Entry cache is allocated as needed.
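As a sketch, the entry cache size for one suffix can be set with a modify like the following. Here database_name is a placeholder for the database backing the suffix (the same placeholder used in the monitoring examples later in this chapter), and the 256 MB figure is hypothetical.

```shell
# Hypothetical target of 256 MB for one suffix's entry cache.
ENTRYCACHE=$((256 * 1024 * 1024))   # nsslapd-cachememsize is specified in bytes

# database_name is a placeholder for the database backing the suffix;
# pipe the generated LDIF to ldapmodify.
cat <<EOF
dn: cn=database_name,cn=ldbm database,cn=plugins,cn=config
changetype: modify
replace: nsslapd-cachememsize
nsslapd-cachememsize: $ENTRYCACHE
EOF
```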

Directory Server can return entries from an entry cache extremely efficiently, as entries stored in this cache are already formatted. Entries in the database must be formatted (and stored in the entry cache) before delivery to client applications.

When specifying entry cache size, know that nsslapd-cachememsize indicates how much memory Directory Server requests from the underlying memory allocation library. Depending on how the memory allocation library handles such requests, actual memory used may be much larger than the effective amount of memory ultimately available to Directory Server for the entry cache.

Actual memory used by the Directory Server process depends primarily on the memory allocation library used, and on the entries cached. Entries with many small attribute values usually require more overhead than entries with a few large attribute values.

For 32-bit servers, entry cache size must be limited such that the total Directory Server (ns-slapd) process size is less than the maximum process size allowed by the operating system. In practice, this limit is generally in the 2-3 GB range.

Refer to the Directory Server Administration Reference for further information about the valid range of nsslapd-cachememsize values.

Import Cache

The import cache is created and used during suffix initialization only, also known as bulk loading or importing. If the deployment involves offline suffix initialization only, import cache and database cache are not used together, so you need not add them together when aggregating cache size as described in Total Aggregate Cache Size. You specify import cache size (nsslapd-import-cachesize) in bytes. Changes to import cache size take effect the next time the suffix is reset and initialized, with import cache allocated for the initialization, then released after the initialization.

Directory Server handles import cache as it handles database cache. Ensure therefore that sufficient physical memory is available to prevent swapping. Furthermore, benefits of larger import cache tend to diminish for cache sizes larger than 1 GB, so do not allocate more than 1-2 GB for import cache.
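Reflecting the guideline above, the following sketch generates the LDIF for a 1 GB import cache; the configuration entry DN is the conventional location of the database-wide settings, and the change only applies the next time a suffix is initialized.

```shell
# 1 GB import cache, reflecting the guideline that returns diminish above 1 GB.
IMPORTCACHE=$((1024 * 1024 * 1024))   # nsslapd-import-cachesize is in bytes

# Pipe the generated LDIF to ldapmodify; the new size applies at the
# next suffix initialization.
cat <<EOF
dn: cn=config,cn=ldbm database,cn=plugins,cn=config
changetype: modify
replace: nsslapd-import-cachesize
nsslapd-import-cachesize: $IMPORTCACHE
EOF
```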

Refer to the Directory Server Administration Reference for further information about the valid range of nsslapd-import-cachesize values.

File System Cache

The operating system allocates available memory not used by Directory Server caches and other applications to the file system cache. This cache holds data recently read from the disk, making it possible for subsequent requests to obtain data copied from cache rather than to read it again from the disk. As memory access is many times faster than disk access, leaving some physical memory available to the file system cache can boost performance.

For 32-bit servers, consider using file system cache as a replacement for some of the database cache. Database cache is more efficient for Directory Server use than file system cache, but file system cache is not directly associated with the Directory Server (ns-slapd) process, so you can potentially make a larger total cache available to Directory Server than would be available using database cache alone.

64-bit servers do not have the same process size limit issue. Use database cache instead of file system cache with 64-bit servers.

Refer to the operating system documentation for details on file system cache.

Total Aggregate Cache Size

The sum of all caches used simultaneously must remain smaller than the total size of available physical memory, less the memory intended for the file system cache and for other processes, including the Directory Server (ns-slapd) process itself. For 32-bit servers, this means total aggregate cache size must be limited such that the total Directory Server (ns-slapd) process size is less than the maximum process size allowed by the operating system. In practice, this limit is generally in the 2-3 GB range. Total cache used may well be significantly larger than the size you specify. Refer to Database Cache for hints on how to check that the cache size, and thus the Directory Server process size, does not exceed available physical memory.
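The aggregation can be sketched as simple arithmetic. All figures below are hypothetical and should be replaced with the values configured for your deployment.

```shell
# Hypothetical cache sizes, in MB.
DBCACHE_MB=500
ENTRYCACHE_MB=800      # summed over all suffixes
IMPORTCACHE_MB=0       # nonzero only if suffixes are initialized online
DB_OVERHEAD_MB=125     # managing the database cache can add up to 25 percent

TOTAL_MB=$((DBCACHE_MB + ENTRYCACHE_MB + IMPORTCACHE_MB + DB_OVERHEAD_MB))
LIMIT_MB=2048          # conservative 32-bit process size limit (2 GB)

echo "aggregate cache: ${TOTAL_MB} MB"
if [ "$TOTAL_MB" -ge "$LIMIT_MB" ]; then
  echo "warning: aggregate cache approaches the 32-bit process size limit"
fi
```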

If suffixes are initialized (bulk loaded) while Directory Server is online, the sum of database, entry, and import cache sizes should remain smaller than the total size of available physical memory.

Table 3-2 Suffix Initialization (Import) Operations and Cache Use

  Cache Type   Offline Import   Online Import
  Database     no               yes
  Entry (1)    yes              yes
  Import       yes              yes

(1) As shown in Figure 3-1, there is one entry cache for each suffix.

If all suffix initialization takes place offline with Directory Server stopped, you may be able to work around this limitation. In this case import cache does not coexist with database cache, so you may allocate the same memory to import cache for offline suffix initialization and to database cache for online use. If you opt to implement this special case, however, ensure that no one performs online bulk loads on the production system. The sum of the caches used simultaneously must still remain smaller than the total size of available physical memory.


How Searches Use Cache

Figure 3-2 illustrates how Directory Server handles both searches specifying a base DN and searches using filters. Individual lines represent threads accessing different levels of memory, with broken lines representing steps that effective tuning aims to minimize.

Figure 3-2 Searches and Cache

Searches use entry and database caches.

Base Search Process

As shown, base searches (those specifying a base DN) are the simplest type of searches for Directory Server to handle. To process such searches, Directory Server:

  1. Attempts to retrieve the entry having the specified base DN from the entry cache.

    If the entry is found there, Directory Server checks whether the candidate entry matches the filter provided for the search. If the entry matches, Directory Server quickly returns the formatted, cached entry to the client application.

  2. Attempts to retrieve the entry from the database cache.

    If the entry is found there, Directory Server copies the entry to the entry cache for the suffix, and then proceeds as if the entry had been found in the entry cache.

  3. Attempts to retrieve the entry from the database itself.

    If the entry is found there, Directory Server copies the entry to the database cache, then proceeds as if the entry had been found in the database cache.

Subtree and One-Level Search Process

Also as shown in Figure 3-2, searches on a subtree or a level of a tree involve additional processing to handle sets of entries. To process such searches, Directory Server:

  1. Attempts to build a set of candidate entries matching the filter from indexes in the database cache.

    If no appropriate index is present, the set of candidate entries must be generated from the relevant entries in the database itself.

  2. Handles each candidate entry by:

    1. Performing a base search to retrieve the entry.
    2. Checking whether the entry matches the filter provided for the search.
    3. Returning the entry to the client application if the entry matches the filter.

In this way, Directory Server avoids holding the complete set of candidate entries in memory.

Ideally, you know what searches to expect before tuning Directory Server. In practice, verify assumptions through empirical testing.


How Updates Use Cache

Figure 3-3 illustrates how Directory Server handles updates. Individual lines represent threads accessing different levels of memory, with broken lines representing steps that effective tuning aims to minimize.

Figure 3-3 Updates and Cache

Updates use entry and database caches.

Notice that Figure 3-3 does not show the potential impact on the entry cache of an internal search performed to retrieve the entry for a modify or delete operation. Figure 3-2 shows how searches use cache.

Updates involve more processing than searches. To process updates, Directory Server:

  1. Performs a base DN search to retrieve the entry to update or, in the case of an add operation, to verify that the entry does not already exist.
  2. Changes the database cache, updating in particular any indexes affected by the update.

    If the data affected by the update has not yet been loaded into the database cache, this step can result in disk activity while the relevant data are loaded into the cache.

  3. Writes information about the changes to the transaction log, waiting for the information to be flushed to disk. Refer to Transaction Logging for details.

  4. Formats and copies the updated entry to the entry cache for the suffix.
  5. Returns an acknowledgement of successful update to the client application.


How Suffix Initialization Uses Cache

Figure 3-4 illustrates how Directory Server handles suffix initialization, also known as bulk load import. Individual lines represent threads accessing different levels of memory, with broken lines representing steps that effective tuning aims to minimize.

Figure 3-4 Suffix Initialization (Bulk Loading) and Cache

Bulk loads use entry and import caches.

To initialize a suffix, Directory Server:

  1. Starts a thread to feed an entry cache, used as a buffer, from LDIF.
  2. Starts a thread for each index affected, and a thread to create entries in the import cache. These threads consume entries fed into the entry cache.
  3. Reads from and writes to the database files when the import cache runs out of space.

Directory Server may also write log messages during suffix initialization, but it does not write to the transaction log.

Tools for suffix initialization, such as ldif2db (directoryserver -u 5.2 ldif2db), delivered with Directory Server provide feedback concerning cache hit rate and import throughput. If both the cache hit rate and the import throughput drop together, the import cache may be too small. Consider increasing import cache size.


Optimizing For Searches

For top performance, cache as much directory data as possible in memory. By preventing Directory Server from having to read information from disk, you limit the impact of the disk I/O bottleneck. There are a number of ways to do this, depending on the size of the directory tree, the amount of memory available, and the hardware used. Depending on the deployment, you may choose to allocate more or less memory to entry and database caches to optimize search performance. You may alternatively choose to distribute searches across Directory Server consumers on different servers.

This section covers the following scenarios:

  • All Entries and Indexes in Memory
  • Plenty of Memory, 32-Bit Directory Server
  • Not Enough Memory

All Entries and Indexes in Memory

Imagine the optimum case. Database and entry caches fit into the physical memory available. The entry caches are large enough to hold all entries in the directory. The database cache is large enough to hold all indexes and entries. In this case, searches find everything in cache. Directory Server never has to go to file system cache or to disk to retrieve entries.

In this case, ensure that database cache can contain all database indexes even after updates and growth. When space runs out in the database cache for indexes, Directory Server must read indexes from disk for every search request, severely impacting throughput. You can monitor activity with Directory Server Console, which displays useful information under the Status tab as shown in Figure 3-5.

Figure 3-5 Monitoring Cache Hit Rate Using Directory Server Console

Screen capture showing cache hit rate in the status tab.

Alternatively, paging and cache activity can be monitored by searching from the command line:

$ ldapsearch -D admin -w password \
-b "cn=monitor,cn=database_name,cn=ldbm database,cn=plugins,cn=config"

Finding appropriate cache sizes must be done through empirical testing with representative data. Start by allocating a large amount of memory for the caches, then exercise and monitor Directory Server to observe the result, repeating the process as necessary. Entry caches in particular may use much more memory than you allocate to them.

Plenty of Memory, 32-Bit Directory Server

Imagine a system with sufficient memory to hold all data in the entry and database caches, but no support for a 64-bit Directory Server process. If hardware constraints prevent you from deploying on a Solaris system with 64-bit support, the key is to size the caches appropriately with respect to the memory limitations for 32-bit processes, then leave the remaining memory to the file system cache. As a starting point when benchmarking performance, size the entry cache to hold as many entries as possible, and keep the database cache relatively small (100 MB, for example) without completely minimizing it, letting the file system cache hold the database pages.


Note

File system cache is shared with other processes on the system, especially those performing file-based operations. It is thus considerably more difficult to control than the other caches, particularly on systems not dedicated to Directory Server.

The system may reallocate file system cache to other processes.


Avoid online import in this situation, as import cache is associated with the Directory Server process.

Not Enough Memory

Imagine a system with insufficient available memory to hold all data in the entry and database caches. The key in this case is to keep the combined entry and database cache sizes from exceeding the available physical memory, which would result in heavy virtual memory paging that could bring the system to a virtual halt.

Start benchmarking by devoting the available memory to the entry and database caches, with sizes no less than 100 MB each. Try disabling the file system cache by mounting Solaris UFS file systems with the -o forcedirectio option, described in mount(1M). This can prevent the file system cache from using memory needed by Directory Server. Verify and correct assumptions through empirical testing.
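For illustration, the following sketch assembles the corresponding mount command. The device and mount point are hypothetical placeholders; the command is echoed rather than executed so you can review it against your vfstab first.

```shell
# Hypothetical device and mount point; check your vfstab before running.
DEVICE=/dev/dsk/c0t0d0s5
MOUNTPOINT=/ds-data
OPTS=remount,forcedirectio

# The command that would disable file system caching for this UFS file system:
echo "mount -F ufs -o $OPTS $DEVICE $MOUNTPOINT"
```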


Optimizing for Updates

For top update performance, first remove any transaction log bottlenecks observed. Refer to Transaction Logging for details.

Next, attempt to provide enough memory for the database cache to handle updates in memory and minimize disk activity. You can monitor the effectiveness of the database cache by reading the hit ratio in Directory Server Console. Directory Server Console displays hit ratios for suffixes under the Status tab as shown in Figure 3-5.

After Directory Server has run for some time, the caches should contain enough entries and indexes that disk reads are no longer necessary. Updates should affect the database cache in memory, with data from the large database cache in memory being flushed only infrequently.

Flushing data to disk during a checkpoint can itself be a bottleneck, so storing the database on a separate RAID system such as a Sun StorEdge™ disk array can help improve update performance. You may use utilities such as iostat(1M) on Solaris systems to isolate potential I/O bottlenecks.

Table 3-3 shows recommendations for systems with two, three, and four disks available.

Table 3-3 Isolating Databases and Logs on Different Disks

2 disks:

  • Place the Directory Server database on one disk.
  • Place the transaction log, the access, audit, and error logs, and any changelogs on the other disk.

3 disks:

  • Place the Directory Server database on one disk.
  • Place the transaction log on the second disk.
  • Place the access, audit, and error logs, and any changelogs on the third disk.

4 disks:

  • Place the Directory Server database on one disk.
  • Place the transaction log on the second disk.
  • Place the access, audit, and error logs on the third disk.
  • Place changelogs on the fourth disk.


Cache Priming and Monitoring

Priming caches means filling them with data such that subsequent Directory Server behavior reflects normal operational performance rather than ramp-up. Priming caches is typically useful for arriving at reproducible results when benchmarking, and for measuring and analyzing potential optimizations. In most cases, do not actively prime the caches, but instead let them be primed by normal or typical client interaction with Directory Server before you measure performance.

After caches are primed, you may run tests, and monitor whether cache tuning has produced the desired outcomes. Directory Server Console displays monitoring information for caches when you select the Suffixes node under the Status tab as shown in Figure 3-5. Alternatively, paging and cache activity can be monitored by searching from the command line:

$ ldapsearch -D admin -w password \
-b "cn=monitor,cn=database_name,cn=ldbm database,cn=plugins,cn=config"

If database cache size is large enough and the cache is primed, then the hit ratio (dbcachehitratio) should be high, and number of pages read in (dbcachepagein) and clean pages written out (dbcacheroevict) should be low. Here, "high" and "low" must be understood relative to the deployment constraints.

If entry cache for a suffix is large enough and the cache is primed, then the hit ratio (entrycachehitratio) should be high. As the entry cache fills, entry cache size (currententrycachesize) approaches the maximum entry cache size (maxentrycachesize). Ideally, the size in entries (currententrycachecount) should be either equal to or very close to the total number of entries in the suffix.
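The monitor attributes named above can be pulled out of the ldapsearch output with a short script. This sketch parses hypothetical sample values rather than live server output; in practice, pipe the output of the monitor search shown earlier in this chapter into the parser instead.

```shell
# Sample cn=monitor attribute lines; the values are hypothetical.
MONITOR='entrycachehitratio: 98
currententrycachesize: 52428800
maxentrycachesize: 104857600
currententrycachecount: 9500'

# Extract the value of one attribute from LDIF-style "name: value" lines.
get_attr() {
  printf '%s\n' "$MONITOR" | awk -F': ' -v a="$1" '$1 == a { print $2 }'
}

HIT=$(get_attr entrycachehitratio)
COUNT=$(get_attr currententrycachecount)
echo "entry cache hit ratio: ${HIT}%, cached entries: ${COUNT}"
```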


Other Optimizations

Tuning cache sizes represents only one approach to improving search, update, or bulk load rates. As you tune the caches, performance bottlenecks move from the caches to other parts of the system. Refer to the other chapters in this guide for more information.





Copyright 2005 Sun Microsystems, Inc. All rights reserved.