Sun Java System Directory Server Enterprise Edition 6.3 Deployment Planning Guide

Chapter 9 Designing a Basic Deployment

In the simplest Directory Server Enterprise Edition deployment, your directory service requirements can be fulfilled by a single Directory Server, installed on one machine, in a single data center. Such a scenario might occur in a small organization, or if you are running Directory Server for demonstration or evaluation purposes. Note that the technical requirements discussed in the previous chapters apply equally to all deployments.

This chapter describes a basic deployment, involving a single Directory Server. The chapter covers the following topics:

Basic Deployment Architecture

A basic Directory Server Enterprise Edition deployment includes the following elements:

  • A Directory Server instance

  • Directory Service Control Center (DSCC)

  • Command-line utilities, such as dsconf

These elements can all be installed on a single machine. The following figure illustrates the high-level architecture of a basic Directory Server Enterprise Edition deployment.

Figure 9–1 Basic Directory Server Enterprise Edition Architecture on a Single Machine

Figure shows a basic deployment with all elements installed
on a single server.

In this scenario, internal LDAP and DSML clients can be configured to access Directory Server directly. External HTML clients can be configured to access DSCC through a firewall.

Although all of the components described previously can be installed on a single machine, this is unlikely in a real deployment. A more typical scenario would be the installation of DSCC and the dsconf command-line utility on separate remote machines. All Directory Server hosts could then be configured remotely from these machines. The following figure illustrates this more typical scenario.

Figure 9–2 Basic Directory Server Enterprise Edition Architecture With Remote Directory Service Control Center

Figure shows a basic deployment with the Directory Service Control Center and
dsconf installed on a remote server.

The Directory Server instance stores server and application configuration settings, as well as user information. Typically, server and application configuration information is stored in one suffix of Directory Server while user and group entries are stored in another suffix. A suffix is the name of the entry at the top of the directory tree, below which data is stored.

Directory Service Control Center (DSCC) is a centralized, web-based user interface for all servers, and is the Directory component of Java Web Console. DSCC locates all servers and applications that are registered with it. DSCC displays the servers in a graphical user interface, where you can manage and configure the servers. The Directory Service Control Center might not be required in a small deployment because all functionality is also provided through a command-line interface.

In the chapters that follow, it is assumed that the Directory Service Control Center is installed on a separate machine. This aspect of the topology is not referred to again in the remaining chapters.

Basic Deployment Setup

Complete installation information is provided in the Sun Java System Directory Server Enterprise Edition 6.3 Installation Guide. The purpose of this section is to provide a clear picture of the elements that make up a basic deployment and how these elements work together.

This section lists the main tasks for setting up the basic deployment described in the previous section.

Improving Performance in a Basic Deployment

In even the most basic deployment, you might want to tune Directory Server to improve performance in specific areas. The following sections describe basic tuning strategies that can be applied to a simple single-server deployment. These strategies can be applied to each server in larger, more complex deployments, for improved performance across the topology.

Using Indexing to Speed Up Searches

Indexes speed up searches by effectively reducing the number of entries a search has to check to find a match. An index contains a list of values. Each value is associated with a list of entry identifiers. Directory Server can look up entries quickly by using the lists of entry identifiers in indexes. Without an index to manage a list of entries, Directory Server must check every entry in a suffix to find matches for a search.

Directory Server processes each search request as follows:

  1. Directory Server receives a search request from a client.

  2. Directory Server examines the request to confirm that the search can be processed.

    If Directory Server cannot perform the search, it returns an error to the client and might refer the search to another instance of Directory Server.

  3. Directory Server determines whether it manages one or more indexes that are appropriate to the search.

    • If Directory Server manages indexes that are appropriate to the search, the server looks in all of the appropriate indexes for candidate entries. A candidate entry is an entry that might be a match for the search request.

    • If Directory Server does not manage an index appropriate to the search, the server generates the set of candidate entries by checking all of the entries in the database.

      When Directory Server cannot use indexes, this process consumes more time and system resources.

  4. Directory Server examines each candidate entry to determine whether the entry matches the search criteria.

  5. Directory Server returns matching entries to the client application as it finds the entries.
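
Index maintenance itself is performed from the command line with the dsconf utility. The following sketch assumes a server on localhost:389 and a hypothetical suffix dc=example,dc=com; verify the create-index and reindex subcommand options against the dsconf(1M) man page for your version.

```
# Hypothetical example: create an index for the mail attribute on the
# dc=example,dc=com suffix, then regenerate the index files so that
# existing entries are indexed. Host, port, and suffix are placeholders.
dsconf create-index -h localhost -p 389 dc=example,dc=com mail
dsconf reindex -h localhost -p 389 -t mail dc=example,dc=com
```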

You can optimize search performance by doing the following:

For a comprehensive overview of how indexes work, see Chapter 6, Directory Server Indexing, in Sun Java System Directory Server Enterprise Edition 6.3 Reference. For information about defining indexes, see Chapter 13, Directory Server Indexing, in Sun Java System Directory Server Enterprise Edition 6.3 Administration Guide.

Optimizing Cache for Search Performance

For improved search performance, cache as much directory data as possible in memory. By preventing the directory from reading information from disk, you limit the disk I/O bottleneck. Different possibilities exist for doing this, depending on the size of your directory tree, the amount of memory available, and the hardware used. Depending on the deployment, you might choose to allocate more or less memory to entry and database caches to optimize search performance. You might alternatively choose to distribute searches across Directory Server consumers on different servers.

Consider the following scenarios:

All Entries and Indexes Fit Into Memory

In the optimum case, the database cache and the entry cache fit into the physical memory available. The entry caches are large enough to hold all entries in the directory. The database cache is large enough to hold all indexes and entries. In this case, searches find everything in cache. Directory Server never has to go to file system cache or to disk to retrieve entries.

Ensure that the database cache can contain all database indexes, even after updates and growth. When the database cache runs out of space for indexes, Directory Server must read indexes from disk for every search request, which severely impacts throughput. You can monitor paging and cache activity with DSCC or through the command line.
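
As a sketch of command-line monitoring, many server statistics are exposed as attributes of the cn=monitor entry and can be read with ldapsearch. The host, port, and bind DN below are placeholders for your deployment.

```
# Hypothetical example: read the monitoring attributes of the server.
# The -w - option prompts for the bind password interactively.
ldapsearch -h localhost -p 389 -D "cn=Directory Manager" -w - \
    -b "cn=monitor" -s base "(objectClass=*)"
```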

Appropriate cache sizes must be determined through empirical testing with representative data. In general, the database cache size can be calculated as (total size of database files) x 1.2. Start by allocating a large amount of memory for the caches. Then exercise and monitor Directory Server to observe the result, repeating the process as necessary. Entry caches in particular might use much more memory than you allocate to these caches.
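
The sizing rule above can be sketched as a small helper that applies the 1.2 multiplier to a measured database size. The function name is hypothetical; in practice, you would obtain the size in kilobytes from a command such as du -sk run against the instance database directory.

```shell
# Sketch of the sizing rule: suggested database cache size is
# 1.2 x the total size of the database files.
# Takes the database size in kilobytes; prints the suggested cache size.
suggest_db_cache_kb() {
    # Compute 1.2 x size as x12/10 to keep the arithmetic exact.
    awk -v kb="$1" 'BEGIN { printf "%d\n", kb * 12 / 10 }'
}

# A 512000-KB (500-Mbyte) set of database files suggests a
# 614400-KB (600-Mbyte) database cache.
suggest_db_cache_kb 512000
```

Treat the result only as a starting point; as the text notes, the final sizes must come from empirical testing with representative data.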

Sufficient Memory For 32-Bit Directory Server

Imagine a system with sufficient memory to hold all data in entry and database caches, but no support for a 64-bit Directory Server process. If hardware constraints prevent you from deploying Directory Server on a Solaris system with 64-bit support, size caches appropriately with respect to memory limitations for 32-bit processes. Then leave the remaining memory to the file system cache.

As a starting point when benchmarking performance, size the entry cache to hold as many entries as possible. Size the database cache relatively small, for example 100 Mbytes, without minimizing it completely, and let the file system cache hold the database pages.
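
As a minimal sketch, such a starting point could be applied with dsconf. The property names db-cache-size and entry-cache-size, and the suffix, are assumptions to verify against the dsconf(1M) man page for your version; the values are illustrative only.

```
# Hypothetical starting point for a 32-bit server: a small database
# cache, and an entry cache as large as the process limits allow.
dsconf set-server-prop -h localhost -p 389 db-cache-size:100M
dsconf set-suffix-prop -h localhost -p 389 dc=example,dc=com entry-cache-size:2G
```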

Note –

File system cache is shared with other processes on the system, especially processes that perform file-based operations. Controlling file system cache is therefore more difficult than controlling other caches, particularly on systems that are not dedicated to Directory Server.

The system might reallocate file system cache to other processes.

Avoid online import in this situation because import cache is associated with the Directory Server process.

Insufficient Memory

Imagine a system with insufficient memory to hold all data in entry and database caches. In this case, do not let the combined entry and database cache sizes exceed the available physical memory. Exceeding physical memory can cause heavy virtual memory paging that could bring the system to a virtual halt.

For small systems, start benchmarking by devoting available memory to the entry and database caches, with sizes no less than 100 Mbytes each. Try disabling the file system cache by mounting Solaris UFS file systems with the -o forcedirectio option of the mount_ufs command. For more information, see the mount_ufs(1M) man page. Disabling the file system cache prevents it from using memory that Directory Server needs.
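
A minimal sketch of such a mount, assuming the instance database lives on its own UFS slice (the device path and mount point are hypothetical):

```
# Mount the database file system with forcedirectio so that reads
# and writes bypass the file system cache.
mount -F ufs -o forcedirectio /dev/dsk/c0t1d0s6 /ds-db
```

To make the setting persistent across reboots, the forcedirectio option can also be added to the mount options field of the file system's /etc/vfstab entry.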

For large Directory Servers running on large machines, maximize the file system cache and reduce the database cache. Verify and correct assumptions through empirical testing.

Optimizing Cache for Write Performance

In addition to planning a deployment for write scalability from the outset, provide enough memory for the database cache to handle updates in memory. Also, minimize disk activity. You can monitor the effectiveness of the database cache by reading the hit ratio in the Directory Service Control Center.

After Directory Server has run for some time, the caches should contain enough entries and indexes that disk reads are no longer necessary. Updates should then affect the database cache in memory, with data being flushed to disk only infrequently.

Flushing data to disk during a checkpoint can be a bottleneck. The larger the database cache size, the larger the bottleneck. Storing the database on a separate RAID system, such as a Sun StorEdge™ disk array, can help improve update performance. You can use utilities such as iostat on Solaris systems to isolate potential I/O bottlenecks. For more information, see the iostat(1M) man page.
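
For example, the following iostat invocation prints extended device statistics every 5 seconds for 10 intervals. Devices that stay near 100 percent busy (the %b column) during update load are candidates for relocation to separate disks.

```
# Extended device statistics, with descriptive device names,
# every 5 seconds, 10 reports.
iostat -xn 5 10
```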

The following table shows database and log placement recommendations for systems with 2, 3, and 4 disks.

Table 9–1 Isolating Databases and Logs on Different Disks

2 Disks Available

  • Place the Directory Server database on one disk.

  • Place the transaction log, the access, audit, and error logs, and the retro change log on the other disk.

3 Disks Available

  • Place the Directory Server database on one disk.

  • Place the transaction log on the second disk.

  • Place the access, audit, and error logs and the retro change log on the third disk.

4 Disks Available

  • Place the Directory Server database on one disk.

  • Place the transaction log on the second disk.

  • Place the access, audit, and error logs on the third disk.

  • Place the retro change log on the fourth disk.
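
Placements like those in Table 9–1 can be applied with dsconf. The set-log-prop subcommand, the path property, and the db-log-path server property are assumptions to verify against the dsconf(1M) man page for your version; the mount points are hypothetical.

```
# Hypothetical example for a multi-disk layout: move the transaction
# log and the server logs off the disk that holds the database.
dsconf set-server-prop -h localhost -p 389 db-log-path:/disk2/txnlog
dsconf set-log-prop access -h localhost -p 389 path:/disk3/logs/access
dsconf set-log-prop error -h localhost -p 389 path:/disk3/logs/errors
dsconf set-log-prop audit -h localhost -p 389 path:/disk3/logs/audit
```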