Sun Directory Server Enterprise Edition 7.0 Deployment Planning Guide

What to Configure and Why

Directory Server default configuration settings are defined for typical small deployments and to make it easy to install and evaluate the product. This section examines key configuration settings to adjust for medium to large deployments, where adapting the settings to your particular deployment can often improve performance significantly.

Directory Server Database Page Size

When Directory Server reads or writes data, it works with fixed blocks of data, called pages. By increasing the page size you increase the size of the block that is read or written in one disk operation.

The page size is related to the size of entries and is a critical element of performance. If you know that the average size of your entries is greater than (db-page-size / 4) - 24 bytes, where 24 bytes is the overhead of the per-page binary tree internal structure, you must increase the database page size. The database page size should also match the file system disk block size.
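
For example, assuming the common 8-kilobyte default page size (an illustrative figure), the average entry should be no larger than about (8192 / 4) - 24 = 2024 bytes; if your entries average more than that, increase db-page-size. The following is a minimal sketch, assuming db-page-size is exposed as a dsconf server property; the host, port, and new value are placeholders, and the affected databases typically must be re-initialized from an LDIF export for a new page size to take effect.

    $ dsconf get-server-prop -h host -p port db-page-size
    $ dsconf set-server-prop -h host -p port db-page-size:16384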

Directory Server Cache Sizes

Directory Server is designed to respond quickly to client application requests. In order to avoid waiting for directory data to be read from disk, Directory Server caches data in memory. You can configure how much memory is devoted to cache for database files, for directory entries, and for importing directory data from LDIF.

Ideally the hardware on which you run Directory Server allows you to devote enough space to cache all directory data in physical memory. The data should fit comfortably, leaving the system enough physical memory for normal operation and the file system enough memory for its own caching and operation. Once the data are cached, Directory Server has to read data from and write data to disk only when a directory entry changes.

Directory Server supports 64-bit memory addressing, and so can handle total cache sizes as large as a 64-bit processor can address. For small to medium deployments it is often possible to provide enough memory that all directory data can be held in cache. For large deployments, however, caching everything may not be practical or cost effective.
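
As a rough, illustrative sizing sketch (the figures are assumptions, not measurements), consider a directory of 10 million entries with an average entry size of 2 kilobytes:

    10,000,000 entries x 2 KB per entry   ~ 20 GB of entry data (entry cache)
    + space to hold all index files          (database cache)
    + memory for the Directory Server process, the operating system,
      and file system caching

If the total fits comfortably in physical memory, caching everything is an option. If it does not, plan cache sizes around the most frequently accessed data.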

For large deployments, caching everything in memory can cause side effects. Tools such as the pmap command, which traverse the process memory map to gather data, can freeze the server process for a noticeable time. Core files can become so large that writing them to disk during a crash can take several minutes. Startup times can be slow if the server is shut down abruptly and then restarted. Directory Server can also pause and stop responding temporarily when it reaches a checkpoint and has to flush dirty cached pages to disk. When the cache is very large, the pauses can become so long that monitoring software assumes Directory Server is down.

I/O buffers at the operating system level can provide better performance. Very large buffers can compensate for smaller database caches.

For a detailed discussion of cache and cache settings, read Tuning Cache Settings. For more information on tuning cache sizes, read The Basics of Directory Server Cache Sizing.

Directory Server Indexes

Directory Server indexes directory entry attribute values to speed searches for those values. You can configure attributes to be indexed in various ways. For example, indexes can help Directory Server determine quickly whether an attribute has a value, whether it has a value equal to a given value, and whether it has a value containing a given substring.

Indexes can improve search performance, but they can also slow write performance. When an attribute is indexed, Directory Server has to update the index as values of the attribute change.

Directory Server saves index data to files. The more indexes you configure, the more disk space is required. Directory Server index and data files are found, by default, under the instance-path/db/ directory.

For a detailed discussion of indexing and index settings, read Chapter 9, Directory Server Indexing, in Sun Directory Server Enterprise Edition 7.0 Reference.

Directory Server Administration Files

Some Directory Server administration files can become very large. These files include the LDIF files containing directory data, backups, core files, and log files.

Depending on your deployment, you may use LDIF both to import Directory Server data and to serve as an auxiliary backup. A standard text format, LDIF allows you to export binary data as well as strings. LDIF can occupy significant disk space in large deployments. For example, a directory containing 10 million entries with an average entry size of 2 kilobytes occupies roughly 20 gigabytes on disk in LDIF representation. You might maintain multiple LDIF files of that size if you use the format for auxiliary backup.

Binary backup files also occupy space on disk, at least until you move them somewhere else for safekeeping. Backup files produced with Directory Server utilities consist of binary copies of the directory database files. Alternatively, for large deployments, you can put Directory Server in frozen mode and take a snapshot of the file system. Either way, you must have disk space available for the backup.
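
The following is a minimal sketch of producing both kinds of copy with the dsconf command; the suffix name, host, port, and target paths are placeholders.

    $ dsconf export -h host -p port dc=example,dc=com /backups/ldif/example.ldif
    $ dsconf backup -h host -p port /backups/binary/example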

By default Directory Server writes log messages to instance-path/logs/access and instance-path/logs/errors. By default Directory Server requires one gigabyte of local disk space for access logging, and another 200 megabytes of local disk space for errors logging.

For a detailed discussion of Directory Server logging, read Chapter 10, Directory Server Logging, in Sun Directory Server Enterprise Edition 7.0 Reference.

Directory Server Replication

Directory Server lets you replicate directory data for availability and load balancing across the servers in your deployment, and allows you to deploy multiple read-write (master) replicas together.

Internally, the server makes this possible by keeping track of changes to directory data. When the same data are modified on more than one read-write replica, Directory Server can resolve the changes correctly on all replicas. The data required to track these changes must be retained until they are no longer needed for replication. Changes are retained for the period of time specified by the purge delay, whose default value is seven days. If your directory data undergoes frequent modification, especially of large multi-valued attributes, this tracking data can grow quite large.

Because the level of growth is dependent on several factors, there is no catch-all formula to calculate potential data growth. The best approach is to test typical modifications and measure the growth. The following factors have an effect on data growth as a result of entry modification:

Note that the replication metadata remains in the entry until the purge delay has passed and the entry is modified again.
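
The following is a minimal sketch of checking and adjusting the purge delay, assuming it is exposed as the repl-purge-delay suffix property and that duration values such as 3d are accepted; the suffix, host, and port are placeholders. Keep the purge delay longer than the longest period during which a replica might be unable to replicate; otherwise that replica must be reinitialized.

    $ dsconf get-suffix-prop -h host -p port dc=example,dc=com repl-purge-delay
    $ dsconf set-suffix-prop -h host -p port dc=example,dc=com repl-purge-delay:3d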

For a detailed discussion of Directory Server replication, read Chapter 7, Directory Server Replication, in Sun Directory Server Enterprise Edition 7.0 Reference.

Directory Server Threads and File Descriptors

Directory Server runs as a multithreaded process, and is designed to scale on multiprocessor systems. You can configure the number of threads Directory Server creates at startup to process operations. By default Directory Server creates 30 threads. The value is set using the dsconf(1M) command to adjust the server property thread-count.

The trick is to keep the threads as busy as possible without incurring undue overhead from managing many threads. As long as all directory data fits in cache, better performance is often seen when thread-count is set to twice the number of processors plus the expected number of simultaneous update operations. If only a fraction of a large directory data set fits in cache, Directory Server threads may often have to wait for data to be read from disk. In that case you may find that performance improves with a much higher thread count, up to 16 times the number of available processors.

Directory Server uses file descriptors to hold data related to open client application connections. By default Directory Server uses a maximum of 1024 file descriptors. The value is set using the dsconf command to adjust the server property file-descriptor-count. If you see a message in the errors log stating too many fds open, you may observe better performance by increasing file-descriptor-count, presuming your system allows Directory Server to open additional file descriptors.

The file-descriptor-count property does not apply on Windows.
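
For example, on an 8-processor system where roughly 16 simultaneous update operations are expected (illustrative figures), and where all directory data fits in cache, a starting point might be a thread count of (2 x 8) + 16 = 32. The following is a minimal sketch using the properties described above; the host, port, and values are placeholders, and some of these changes take effect only after the instance is restarted.

    $ dsconf set-server-prop -h host -p port thread-count:32
    $ dsconf set-server-prop -h host -p port file-descriptor-count:8192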

Directory Server Growth

Once in deployment, Directory Server use is likely to grow. Planning for growth is key to a successful deployment in which you continue to provide a consistently high level of service. Plan for larger, more powerful systems than you need today, basing your requirements in part on the growth you expect tomorrow.

Sometimes directory services must grow rapidly, even suddenly. This is the case, for example, when a directory service sized for one organization is merged with that of another organization. By preparing for growth in advance and by explicitly identifying your expectations, you are better equipped to deal with rapid or sudden growth, because you know in advance whether the expected increase outstrips the capacity you planned for.

Top Tuning Tips

The basic recommendations that follow apply in most situations. Although they are generally valid, avoid the temptation to apply them without understanding their impact on the deployment at hand. This section is intended as a checklist, not a cheat sheet.

  1. Adjust cache sizes.

    Ideally, the server has enough available physical memory to hold all caches used by Directory Server, plus an appropriate amount of extra physical memory to account for future growth. When plenty of physical memory is available, set the entry cache size large enough to hold all entries in the directory, using the entry-cache-size suffix property. Set the database cache size large enough to hold all indexes, using the db-cache-size property. Use the dn-cache-size or dn-cache-count property to control the size of the DN cache.
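
    The following is a minimal sketch of setting these properties with the dsconf command; the suffix name, host, port, and cache sizes are placeholders to adapt to your data volume.

      $ dsconf set-server-prop -h host -p port db-cache-size:1G
      $ dsconf set-suffix-prop -h host -p port dc=example,dc=com entry-cache-size:2G
      $ dsconf set-suffix-prop -h host -p port dc=example,dc=com dn-cache-size:200M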

  2. Optimize indexing.

    1. Remove unnecessary indexes. Add indexes to support expected requests.

      From time to time, you can add indexes to support requests from new applications. You can add, remove, or modify indexes while Directory Server is running, using, for example, the dsconf create-index and dsconf delete-index commands.

      Be careful not to remove system indexes. For a list of system indexes, see System Indexes and Default Indexes in Sun Directory Server Enterprise Edition 7.0 Reference.

      Directory Server gradually indexes data after you make changes to the indexes. You can also force Directory Server to rebuild indexes with the dsconf reindex command.
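
      For example, to index a hypothetical application attribute and then rebuild the indexes of the suffix for existing entries (the attribute name, suffix, host, and port are placeholders):

        $ dsconf create-index -h host -p port dc=example,dc=com departmentNumber
        $ dsconf reindex -h host -p port dc=example,dc=com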

    2. Allow only indexed searches.

      Unindexed searches can have a strong negative impact on server performance and can consume significant server resources.

      Consider forcing the server to reject unindexed searches by setting the require-index-enabled suffix property to on.
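
      The following is a minimal sketch, with placeholder suffix, host, and port:

        $ dsconf set-suffix-prop -h host -p port dc=example,dc=com require-index-enabled:on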

    3. Adjust the maximum number of values per index key with the all-ids-threshold property.
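
      The following is a minimal sketch, assuming all-ids-threshold is adjusted as a server property and that the affected suffix is reindexed afterward; the value, suffix, host, and port are placeholders.

        $ dsconf set-server-prop -h host -p port all-ids-threshold:8192
        $ dsconf reindex -h host -p port dc=example,dc=com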

  3. Tune the underlying operating system according to recommendations made by the idsktune command. For more information, see idsktune(1M).

  4. Adjust operational limits.

    Adjustable operational limits prevent Directory Server from devoting inordinate resources to any single operation. Consider assigning unique bind DNs to client applications requiring increased capabilities, then setting resource limits specifically for these unique bind DNs.
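
    The following is a hedged sketch using the ldapmodify command, assuming that resource limits can be set on individual entries through operational attributes such as nsSizeLimit and nsTimeLimit; the bind DN, attribute values, host, and port are illustrative.

      $ cat limits.ldif
      dn: uid=app-account,ou=people,dc=example,dc=com
      changetype: modify
      add: nsSizeLimit
      nsSizeLimit: 10000
      -
      add: nsTimeLimit
      nsTimeLimit: 120
      $ ldapmodify -h host -p port -D "cn=Directory Manager" -w - -f limits.ldif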

  5. Distribute disk activity.

    Especially for deployments that support large numbers of updates, Directory Server can be extremely disk I/O intensive. If possible, consider spreading the load across multiple disks with separate controllers.

  6. Disable unnecessary logging.

    Disk access is slower than memory access. Heavy logging can therefore have a negative impact on performance. Reduce disk load by leaving audit logging off when it is not required, such as on a read-only server instance. Leave error logging at a minimal level when you are not using the error log to troubleshoot problems. You can also reduce the impact of logging by putting log files on a dedicated disk, or on a less heavily used disk, such as the disk used for the replication changelog.

  7. When replicating large numbers of updates, consider adjusting the appropriate replication agreement properties.

    The properties are transport-compression, transport-group-size, and transport-window-size.
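
    The following is a minimal sketch, assuming a replication agreement from this server to a hypothetical consumer, consumer.example.com:2389; the values are illustrative, and the valid values, in particular for transport-compression, can be checked with dsconf help-properties.

      $ dsconf set-repl-agmt-prop -h host -p port dc=example,dc=com consumer.example.com:2389 \
          transport-group-size:10 transport-window-size:100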

  8. On Solaris systems, move the database home directory to a tmpfs file system.

    The database home directory, specified by the db-env-path property, indicates where Directory Server locates database cache backing files. Data files continue to reside by default under instance-path/db.

    With the database cache backing files on a tmpfs file system, the system does not repeatedly flush the database cache backing files to disk. You therefore avoid a performance bottleneck for updates. In some cases, you also avoid the performance bottleneck for searches. The database cache memory is mapped to the Directory Server process space. The system essentially shares cache memory and memory used to hold the backing files in the tmpfs file system. You therefore gain performance at essentially no cost in terms of memory space needed.

    The primary cost associated with this optimization is that the database cache must be rebuilt after a restart of the host machine. If you expect to restart the host only after a software or hardware failure, however, this is a cost you cannot avoid in any case: after such a failure, the database cache must be rebuilt anyway.
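
    The following is a minimal sketch on Solaris, assuming /tmp is mounted as tmpfs and using a hypothetical subdirectory; the instance typically must be restarted for the new database home directory to take effect.

      $ mkdir /tmp/ds-db-env
      $ dsconf set-server-prop -h host -p port db-env-path:/tmp/ds-db-env
      $ dsadm restart instance-path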

  9. Enable transaction batches if you can afford to lose updates during a software or hardware failure.

    You enable transaction batches by setting the server property db-batched-transaction-count.

    Each update to the transaction log is followed by a sync operation to ensure that update data is not lost. When you enable transaction batches, updates are grouped together before being written to the transaction log, and sync operations take place only when the whole batch is written to the transaction log. Transaction batches can therefore significantly increase update performance. The improvement comes with a trade-off: during a crash, you lose any update data not yet written to the transaction log.
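
    The following is a minimal sketch; the batch size, host, and port are placeholders.

      $ dsconf set-server-prop -h host -p port db-batched-transaction-count:10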


    Note –

    With transaction batches enabled, you lose up to db-batched-transaction-count - 1 updates during a software or hardware failure. The loss happens because Directory Server waits for the batch to fill, or for 1 second, whichever is sooner, before flushing content to the transaction log and thus to disk.

    Do not use this optimization if you cannot afford to lose updates.


  10. Configure the referential integrity plug-in to delay integrity checks.

    The referential integrity plug-in ensures that when entries are modified or deleted from the directory, all references to those entries are updated. By default, the processing is performed synchronously, before the response for the delete operation is returned to the client. You can configure the plug-in to perform the updates asynchronously, using the ref-integrity-check-delay server property.
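
    The following is a minimal sketch, assuming the property accepts a duration value such as 60s; check the current value first to confirm the expected format. The host and port are placeholders.

      $ dsconf get-server-prop -h host -p port ref-integrity-check-delay
      $ dsconf set-server-prop -h host -p port ref-integrity-check-delay:60s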