Sun Java(TM) System Directory Server 5 2004Q2 Deployment Planning Guide 

Chapter 10
System Sizing

Appropriate hardware sizing is a critical component of directory service planning and deployment. When sizing hardware, the amount of memory available and the amount of local disk space available are of key importance.


Note

For best results, install and configure a test system with a subset of entries representing those used in production. You can then use the test system to approximate the behavior of the production server.

When optimizing for particular systems, ensure you understand how system buses, peripheral buses, I/O devices, and supported file systems work so you can take advantage of I/O subsystem features when tuning these to support Directory Server.


This chapter suggests ways of estimating disk and memory requirements for a Directory Server instance. It also touches on network and SSL accelerator hardware requirements.


Suggested Minimum Requirements

Table 10-1 proposes minimum memory and disk space requirements for installing and using the software in a production environment.

Minimum requirements for specified numbers of entries may in fact differ from those provided in Table 10-1. Sizes here reflect relatively small entries, with indexes set according to the default configuration, and with caches minimally tuned. If entries include large binary attribute values such as digital photos, or if indexing or caching is configured differently, then revise minimum disk space and memory estimates upward accordingly.

Table 10-1  Minimum Disk Space and Memory Requirements

  Required for...              Free Local Disk Space    Free RAM
  Unpacking product            At least 125 MB          -
  Product installation         At least 200 MB          At least 256 MB
  10,000-250,000 entries       Add at least 3 GB        Add at least 256 MB
  250,000-1,000,000 entries    Add at least 5 GB        Add at least 512 MB
  Over 1,000,000 entries       Add 8 GB or more         Add 1 GB or more

Minimum disk space requirements include 1 GB devoted to access logs. By default, Directory Server is configured to rotate through 10 access log files (nsslapd-accesslog-maxlogsperdir on cn=config), each holding up to 100 MB (nsslapd-accesslog-maxlogsize on cn=config) of messages. Disk space used by error and audit logs depends on how Directory Server is configured. Refer to “Monitoring Directory Server Using Log Files” in the Directory Server Administration Guide for details on configuring logging.
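As a quick check of that default budget, here is a minimal sketch of the arithmetic, with values mirroring the defaults cited above:

    # Access log disk budget implied by the default rotation policy.
    max_logs_per_dir = 10    # nsslapd-accesslog-maxlogsperdir
    max_log_size_mb = 100    # nsslapd-accesslog-maxlogsize, in MB

    budget_mb = max_logs_per_dir * max_log_size_mb
    print(f"Access log budget: {budget_mb} MB")    # 1000 MB, about 1 GB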

Minimum Available Memory

Minimum memory estimates reflect memory used by an instance of Directory Server in a typical deployment. The estimates do not account for memory used by the system and by other applications. For a more accurate picture, you must measure memory use empirically. Refer to Sizing Physical Memory for details.

As a rule, the more available memory, the better.

Minimum Local Disk Space

Minimum local disk space estimates reflect the space needed for an instance of Directory Server in a typical deployment. Experience suggests that if directory entries are large, the space needed is at minimum four times the size of the equivalent LDIF on disk. Refer to Sizing Disk Subsystems for details.

Do not install the server or any data it accesses on network disks. Directory Server software does not support the use of network attached storage via NFS, AFS, or SMB. All configuration, log, database, and index files must reside on local storage at all times, even after installation.

Minimum Processing Power

High volume systems typically employ multiple, high-speed processors to provide appropriate processing power for multiple simultaneous searches, extensive indexing, replication, and other features. Refer to Sizing for Multiprocessor Systems for details.

Minimum Network Capacity

Testing has demonstrated that 100 Mbit Ethernet may be sufficient for even service provider performance, depending on the maximum throughput expected. You may estimate theoretical maximum throughput as follows:

max. throughput = max. entries returned/second x average entry size

Imagine, for example, that a Directory Server must respond to a peak of 5000 searches per second, returning one entry per search with an average entry size of 2000 bytes. The theoretical maximum throughput would then be 10 MB per second, or 80 Mbit per second, which is likely more than a single 100 Mbit Ethernet adapter can provide. Actual observed performance may vary.
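The arithmetic of this example as a minimal sketch, with figures taken from the scenario above:

    searches_per_second = 5000    # peak search rate
    entries_per_search = 1
    avg_entry_size = 2000         # bytes

    throughput = searches_per_second * entries_per_search * avg_entry_size
    print(f"{throughput / 1_000_000:.0f} MB/s")          # 10 MB per second
    print(f"{throughput * 8 / 1_000_000:.0f} Mbit/s")    # 80 Mbit per second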

If you expect to perform multi-master replication over a wide area network, ensure the connection provides sufficient throughput with minimum latency and near-zero packet loss.

Refer to Sizing Network Capacity for more information.


Sizing Physical Memory

Directory Server stores information using database technology. As is the case for any application relying on database technology, adequate fast memory is key to optimum Directory Server performance. As a rule, the more memory available, the more directory information can be cached for quick access. In the ideal case, each server has enough memory to cache the entire contents of the directory at all times. Because Directory Server 5.2 supports 64-bit memory addressing, it can now handle total cache sizes as large as the 64-bit processor can address.


Note

When deploying Directory Server in a production environment, configure cache sizes well below theoretical process limits, leaving appropriate resources available for general system operation.


Estimating memory size required to run Directory Server involves estimating the memory needed both for a specific Directory Server configuration, and for the underlying system on which Directory Server runs.

Sizing Memory for Directory Server

Given estimated configuration values for a specific deployment, you can estimate physical memory needed for an instance of Directory Server. Table 10-2 summarizes the values used for the calculations in this section.

Table 10-2  Values for Sizing Memory for Directory Server

  nsslapd-cachememsize
      Entry cache size for a suffix. An entry cache contains formatted entries, ready to be sent in response to a client request. One instance may handle several entry caches.

  nsslapd-dbcachesize
      Database cache size. The database cache holds elements from databases and indexes used by the server.

  nsslapd-import-cachesize
      Database cache size for bulk import. Import cache is used only when importing entries. If you perform only offline imports, you may be able to avoid budgeting extra memory for import cache by reusing the memory budgeted for the entry or database cache.

  nsslapd-maxconnections
      Maximum number of connections managed.

  nsslapd-threadnumber
      Number of operation threads created at server startup.

  For complete descriptions, refer to the Directory Server Administration Reference.

To estimate approximate memory size, perform the following steps (a sketch implementing the full calculation follows the list).

  1. Estimate the base size of the server process, slapdBase.

     slapdBase     = 75 MB + (nsslapd-threadnumber x 0.5 MB) + (nsslapd-maxconnections x 0.5 KB)

  2. Determine the sum of entry cache sizes, entryCacheSum.

     entryCacheSum = sum of nsslapd-cachememsize over all entry caches

     Note that the entry cache includes an allocation overhead; in other words, the cache consumes more memory than you specify in the nsslapd-cachememsize attribute. This may appear to be a memory leak, but it is not. Depending on how the memory allocation library handles requests, actual memory used may be much larger than the memory specified. For more information, see “Entry Cache,” in Chapter 3 of the Directory Server Performance Tuning Guide.

  3. Determine the total size for all caches, cacheSum.

     cacheSum      = entryCacheSum + nsslapd-dbcachesize + nsslapd-import-cachesize

  4. Determine the total size for the Directory Server process, slapdSize.

     slapdSize     = slapdBase + cacheSum

     You may use utilities such as pmap(1) on Solaris systems or the Windows Task Manager to measure physical memory used by Directory Server.

  5. Estimate the memory needed to handle incoming client requests, slapdGrowth.

     slapdGrowth   = 20% x slapdSize

     As a first estimate, assume 20 percent overhead for handling client requests. The actual percentage depends on the characteristics of your particular deployment, so validate it empirically before putting Directory Server into production.

  6. Determine the total memory size for Directory Server, slapdTotal.

     slapdTotal    = slapdSize + slapdGrowth

     For large deployments involving 32-bit servers, slapdTotal may exceed the practical limit of about 3.4 GB, and perhaps even the theoretical process limit of about 3.7 GB. In this case, either tune caching to work within the limits of the system, or use a 64-bit version of the product. For more information, see Chapter 3, “Tuning Cache Sizes,” in the Directory Server Performance Tuning Guide.
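The following minimal sketch implements the six steps above; all input values are illustrative assumptions, expressed in megabytes.

    def slapd_memory_mb(thread_number, max_connections, entry_cache_sizes_mb,
                        db_cache_mb, import_cache_mb, growth_factor=0.20):
        """Estimate Directory Server process memory following the steps above."""
        # Step 1: base process size (0.5 MB per thread, 0.5 KB per connection).
        slapd_base = 75 + thread_number * 0.5 + max_connections * 0.5 / 1024
        # Step 2: sum of entry cache sizes over all entry caches.
        entry_cache_sum = sum(entry_cache_sizes_mb)
        # Step 3: total size for all caches.
        cache_sum = entry_cache_sum + db_cache_mb + import_cache_mb
        # Step 4: total size for the Directory Server process.
        slapd_size = slapd_base + cache_sum
        # Step 5: 20 percent overhead for client requests (validate empirically).
        slapd_growth = growth_factor * slapd_size
        # Step 6: total memory size for Directory Server.
        return slapd_size + slapd_growth

    # Example: 30 threads, 5000 connections, one 2 GB entry cache,
    # a 1 GB database cache, and no separate import cache.
    print(f"slapdTotal: {slapd_memory_mb(30, 5000, [2048], 1024, 0):.0f} MB")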

Sizing Memory for the Operating System

Estimating the memory needed to run the underlying operating system must be done empirically, as operating system memory requirements vary widely based on the specifics of the system configuration. For this reason, consider tuning a representative system for deployment before attempting to estimate how much memory the underlying operating system needs. For more information, see Chapter 2, “Tuning the Operating System,” in the Directory Server Performance Tuning Guide. After tuning the system, monitor memory use to arrive at an initial estimate, systemBase. You may use utilities such as sar(1M) on Solaris systems or the Task Manager on Windows to measure memory use.


Note

For top performance, dedicate the system running Directory Server to this service only.

If you must run other applications or services, monitor the memory they use as well when sizing total memory required.


Additionally, allocate memory for general system overhead and normal administrative use. A first estimate for this amount, systemOverhead, should be at least several hundred megabytes, or 10 percent of the total physical memory, whichever is greater. The goal is to allocate enough space for systemOverhead that the system avoids swapping pages in and out of memory while in production.

The total memory needed by the operating system, systemTotal, can then be estimated as follows.

systemTotal = systemBase + systemOverhead

Sizing Total Memory

Given slapdTotal and systemTotal estimates from the preceding sections, estimate the total memory needed, totalRAM.

totalRAM = slapdTotal + systemTotal
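Combining the estimates, a minimal sketch in which systemOverhead follows the greater-of rule described earlier (all figures are illustrative):

    def total_ram_mb(slapd_total_mb, system_base_mb, physical_ram_mb,
                     min_overhead_mb=512):
        """Estimate total memory needed, following the formulas above."""
        # systemOverhead: several hundred MB or 10 percent of physical
        # memory, whichever is greater.
        system_overhead = max(min_overhead_mb, 0.10 * physical_ram_mb)
        system_total = system_base_mb + system_overhead
        return slapd_total_mb + system_total

    # Example: 3800 MB slapdTotal, 400 MB measured systemBase, 8 GB of RAM.
    print(f"totalRAM: {total_ram_mb(3800, 400, 8192):.0f} MB")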

Notice that totalRAM estimates the total memory needed, assuming either that the system is dedicated to the Directory Server process or that memory use for all other applications and services expected to run on the system has been measured into systemBase.

Dealing With Insufficient Memory

In many cases, it is not cost effective to provide enough memory to cache all data used by Directory Server.

At minimum, equip the server with enough memory that running Directory Server does not cause constant page swapping, which has a strong negative impact on performance. You may use utilities such as vmstat(1M) on Solaris and other systems to view memory statistics before and after starting Directory Server and priming the entry cache. Unsupported utilities available separately, such as MemTool for Solaris systems, can be useful for monitoring how memory is used and allocated when applications run on a test system.

If the system cannot accommodate additional memory, yet you continue to observe constant page swapping, reduce the size of the database and entry caches. Running out of swap space can cause Directory Server to shut itself down.

Refer to Chapter 3, “Tuning Cache Sizes,” in the Directory Server Performance Tuning Guide, for a discussion of the alternatives available when providing adequate physical memory to cache all directory data is not an option.


Sizing Disk Subsystems

Disk use and I/O capabilities can strongly impact performance. Especially for a deployment supporting large numbers of modifications, the disk subsystem can become an I/O bottleneck. This section offers recommendations for estimating overall disk capacity for a Directory Server instance, and for alleviating disk I/O bottlenecks.

Refer to Chapter 5, “Tuning Logging,” in the Directory Server Performance Tuning Guide, for more information on alleviating disk I/O bottlenecks.

Sizing Directory Suffixes

Disk space requirements for a suffix depend not only on the size and number of entries in the directory, but also on the directory configuration, in particular how the suffix is indexed. To gauge the disk space needed for a large deployment, perform the following steps (a sketch of the extrapolation follows the list):

  1. Generate LDIF for three representative sets of entries like those expected for deployment: one of 10,000 entries, one of 100,000, and one of 1,000,000.

     Generated entries should reflect not only the mix of entry types (users, groups, roles, entries for extended schema) expected, but also the average size of individual attribute values, especially if large attribute values such as userCertificate and jpegPhoto are expected.

  2. Configure an instance of Directory Server as expected for deployment.

     In particular, index the database as you would for the production directory. If you expect to add indexes later, plan to add space for those indexes as well.

  3. Load each set of entries, recording the disk space used for each set.

  4. Graph the results to extrapolate estimated suffix size for deployment.

  5. Add extra disk space to compensate for error and variation.
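For the extrapolation step, a minimal sketch, assuming disk use grows roughly linearly with the number of entries (all measurements shown are illustrative):

    # Measured (entry count, disk space in MB) pairs from the test loads.
    measurements = [(10_000, 120), (100_000, 1_100), (1_000_000, 10_500)]

    # Least-squares fit of disk = a * entries + b.
    n = len(measurements)
    sum_x = sum(x for x, _ in measurements)
    sum_y = sum(y for _, y in measurements)
    sum_xy = sum(x * y for x, y in measurements)
    sum_xx = sum(x * x for x, _ in measurements)
    a = (n * sum_xy - sum_x * sum_y) / (n * sum_xx - sum_x ** 2)
    b = (sum_y - a * sum_x) / n

    target_entries = 5_000_000
    margin = 1.25    # extra space for error and variation
    estimate_gb = (a * target_entries + b) * margin / 1024
    print(f"Estimated suffix size: {estimate_gb:.1f} GB")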

If you are using replication, note that entry state information (a list of old values) is stored with the entry, and used during conflict resolution. State information can cause an entry to grow significantly in size and should be taken into account when sizing the suffix.

Disk space for suffixes is only part of the picture; you must also consider how Directory Server uses disks.

How Directory Server Uses Disks

Directory suffixes are part of what Directory Server stores on disk. A number of other factors affecting disk use can vary widely, even depending on how Directory Server is used after deployment, and so are covered here in general terms. Refer to the Directory Server Administration Guide for instructions on configuring the items discussed here.

Directory Server Binaries

You need approximately 200 MB disk space to install this version of Directory Server. This estimate is not meant to include space for data or logs, but only for the product binaries.

Event Logging

Disk use estimates for log files depend on the rate of Directory Server activity, the type and level of logging, and the strategy for log rotation.

Many logging requirements can be predicted and planned in advance. If Directory Server writes to logs, and in particular audit logs, disk use increases with load level. When high load deployments call for extensive logging, plan for extra disk space to accommodate the high load. You may decrease disk space requirements for deployments with high load logging by establishing an intelligent log rotation and archival system: rotate the logs often, and automate migration of old files to less expensive, higher capacity storage media such as tape or cheaper disk clusters.
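As one illustration of such automation, a minimal sketch, assuming rotated access logs live under a hypothetical /var/ds/logs directory with an archive volume mounted at /archive/ds-logs (both paths, and the timestamp-suffix naming, are assumptions to adapt to your deployment):

    import shutil
    import time
    from pathlib import Path

    LOG_DIR = Path("/var/ds/logs")          # hypothetical log location
    ARCHIVE_DIR = Path("/archive/ds-logs")  # hypothetical cheaper storage
    MAX_AGE_DAYS = 7

    def archive_old_logs():
        """Move rotated access logs older than MAX_AGE_DAYS to cheaper storage."""
        cutoff = time.time() - MAX_AGE_DAYS * 86_400
        ARCHIVE_DIR.mkdir(parents=True, exist_ok=True)
        # Assumes rotated logs carry a suffix (for example, access.20040607);
        # the live log, named simply "access", is never matched.
        for log in LOG_DIR.glob("access.*"):
            if log.stat().st_mtime < cutoff:
                shutil.move(str(log), str(ARCHIVE_DIR / log.name))

    archive_old_logs()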

Some logging requirements cannot easily be predicted. Debug logging can cause temporary but explosive growth in the size of the errors log, for example. For a large, high load deployment, consider setting aside several gigabytes of dedicated disk space for temporary, high-volume debug logging. Refer to Chapter 5, “Tuning Logging,” in the Directory Server Performance Tuning Guide, for further information.

Transaction Log

Transaction log volume depends on peak write loads. If writes occur in bursts, transaction logs use more space than if the write load is constant. Directory Server trims transaction logs periodically, so they should not continue to grow unchecked.

Transaction logs are not flushed during online backup, because database files cannot be modified while they are being copied (this would result in an inconsistent data image). Transaction logs are copied to the backup location as the last step of the backup.

Directory Server is generally run with durable transactions enabled. When durable transaction capabilities are enabled, Directory Server performs a synchronous write to the transaction log for each modification (add, delete, modify, modrdn) operation. In this case, an operation can be blocked if the disk is busy, resulting in a potential I/O bottleneck.

If update performance is critical, plan to use a disk subsystem having fast write cache for the transaction log. Refer to Chapter 5, “Tuning Logging,” in the Directory Server Performance Tuning Guide, for further information.

Replication Changelog Database

If the deployment involves replication, the Directory Server suppliers perform change logging. Changelog size depends on the volume of modifications and on the type of changelog trimming employed. Plan capacity based on how the changelog is trimmed. For a large, high load deployment, consider setting aside several gigabytes of disk space to handle changelog growth during periods of abnormally high modification rates. Refer to Chapter 5, “Tuning Logging,” in the Directory Server Performance Tuning Guide, for further information.

Suffix Initialization and LDIF Files

During suffix initialization, also called bulk loading or importing, Directory Server requires disk space not only for the suffix database files and the LDIF used to initialize the suffix, but also for intermediate files used during the initialization process. Plan extra (temporary) capacity in the same directory as the database files for the LDIF files and for the intermediate files used during suffix initialization. This may be as much as double the size of the largest index, depending on what indexes you have created.

Backups and LDIF Files

Backups often consume a great deal of disk space. The size of a backup equals the size of the database files involved plus the transaction logs. Allow for several backups by allocating space equal to several times the volume of the database files, ensuring that databases and their corresponding backups reside on separate disks. Employ intelligent strategies for migrating backups to cheaper storage media as they age.
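As a quick budget check, a minimal sketch (sizes and generation count are illustrative):

    db_files_gb = 12.0    # database files, including indexes
    txn_logs_gb = 1.5     # transaction logs copied as the last step of backup
    generations = 3       # backup generations retained

    backup_space_gb = generations * (db_files_gb + txn_logs_gb)
    print(f"Reserve about {backup_space_gb:.1f} GB on separate disks")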

If the deployment involves replication, plan additional space to hold initialization LDIF files, as these differ from backup LDIF files.

Memory Based Rather Than Disk Based File Systems

Some systems support memory based tmpfs file systems. On Solaris, for example, /tmp is often mounted as a memory based file system to increase performance.

Only database cache files should be placed on a memory based file system. For more information, see “nsslapd-db-home-directory” in the Directory Server Administration Reference. Never put database files, transaction logs, binaries, or configuration files on a memory based file system.

If cache files are placed on /tmp, a location shared with other applications on the system, ensure that the system never runs out of space under /tmp. Be aware that when memory runs low, files on memory based file systems may be paged out to the disk space dedicated to the swap partition.

Some systems support RAM disks and other alternative memory based file systems. Refer to the operating system documentation for instructions on creating and administering memory based file systems. Notice that everything in such file systems is volatile and must be reloaded into memory after system reboot. This reinitialization can take a long time to complete, depending on factors such as the processor speed, memory speed, and memory size.

Core Files

Leave room for at minimum one or two core files. Although Directory Server should not dump core, recovery and troubleshooting after a crash can be greatly simplified if the core file generated during the crash is available for inspection. When generated, core files are stored either in the same directory as the file specified by nsslapd-errorlog on cn=config, or under ServerRoot/bin/slapd/server/ if a crash occurs during startup.

Space for Administration

Leave room for expected system use, including system and Directory Server administration. Ensure that sufficient space is allocated for the base Directory Server installation, for the configuration suffix if it resides on the local instance, for configuration files, and so forth.

Distributing Files Across Disks

By placing commonly updated Directory Server database and log files on separate disk subsystems, you can spread I/O traffic across multiple disk spindles and controllers, avoiding I/O bottlenecks. Consider providing dedicated disk subsystems for each of the following items.

Transaction Logs

When durable transaction capabilities are enabled, Directory Server performs a synchronous write to the transaction log for each modification operation. An operation is thus blocked when the disk is busy. Placing transaction logs on a dedicated disk can improve write performance, and increase the modification rate Directory Server can handle.

Refer to “Transaction Logging,” in Chapter 5 of the Directory Server Performance Tuning Guide.

Databases

Multiple database support allows each database to reside on its own physical disk. You can thus distribute the Directory Server load across multiple databases each on its own disk subsystem. To prevent I/O contention for database operations, consider placing each set of database files on a separate disk subsystem.

For top performance, place database files on a dedicated fast disk subsystem with a large I/O buffer. Directory Server reads data from the disk when it cannot find candidate entries in cache. It regularly flushes writes. Having a fast, dedicated disk subsystem for these operations can alleviate a potential I/O bottleneck.

The nsslapd-directory attribute on cn=config,cn=ldbm database,cn=plugins,cn=config specifies the disk location where Directory Server stores database files, including index files. By default, such files are located under ServerRoot/slapd-ServerID/db/.

Changing the database location requires not only restarting Directory Server, but also rebuilding the database completely. Changing database location on a production server can therefore be a major undertaking, so identify your most important database and put it on a separate disk before putting the server into production.

Log Files

Directory Server provides access, error, and audit logs featuring buffered logging capabilities. Despite buffering, writes to these log files require disk access that may contend with other I/O operations. Consider placing log files on separate disks for improved performance, capacity, and management.

Refer to Chapter 5, “Tuning Logging,” in the Directory Server Performance Tuning Guide, for more information.

Cache Files on Memory Based File Systems

In a tmpfs file system, for example, files are swapped to disk only when physical memory is exhausted. Given sufficient memory to hold all cache files in physical memory, you may improve performance by allocating equivalent disk space for a tmpfs file system on Solaris platforms, or another memory based file system such as a RAM disk on other platforms, and setting nsslapd-db-home-directory so that Directory Server stores cache files on that file system. This prevents the system from unnecessarily flushing memory mapped cache files to disk.

Disk Subsystem Alternatives

“Fast, cheap, safe: pick any two.” (Sun Performance and Tuning, Cockcroft and Pettit.)

Fast and Safe

When implementing a deployment in which both performance and uptime are critical, consider hardware-based RAID controllers with non-volatile memory caches that provide high speed buffered I/O distributed across large arrays of disks. By spreading the load across many spindles and buffering access over very fast connections, such controllers optimize I/O and provide excellent stability through high performance RAID striping or parity blocks.

Large non-volatile I/O buffers and high performance disk subsystems such as those offered in Sun StorEdge™ products can greatly enhance Directory Server performance and uptime.

Fast write cache cards provide potential write performance improvements, especially when dedicated for database and/or transaction log use. Fast write cache cards provide non-volatile memory cache that is independent from the disk controller.

Fast and Cheap

For fast, low-cost performance, ensure you have adequate capacity distributed across a number of disks. Consider disks having high rotation speed and low seek times. For best results, dedicate one disk to each distributed component. Consider using multi-master replication to avoid single points of failure.

Cheap and Safe

For cheap, safe configurations, consider low-cost, software-based RAID controllers such as Solaris Volume Manager.

RAID Alternatives

RAID stands for Redundant Array of Inexpensive Disks. As the name suggests, the primary purpose of RAID is to provide resiliency: if one disk in the array fails, data on that disk is not lost but remains available on one or more other disks in the array. To implement resiliency, RAID provides an abstraction that allows multiple disk drives to be configured as a larger virtual disk, usually referred to as a volume. This is achieved by concatenating, mirroring, or striping physical disks. Concatenation is implemented by having blocks of one disk logically follow those of another disk; for example, disk 1 has blocks 0-99, disk 2 has blocks 100-199, and so forth. Mirroring is implemented by copying blocks of one disk to another and keeping the two in continuous synchronization. Striping uses algorithms to distribute virtual disk blocks over multiple physical disks.

The purpose of striping is performance. Random writes can be dealt with very quickly as data being written is likely to be destined for more than one of the disks in the striped volume, hence the disks are able to work in parallel. The same applies to random reads. For large sequential reads and writes the case may not be quite so clear. It has been observed, however, that sequential I/O performance can be improved. An application generating many I/O requests can swamp a single disk controller, for example. If the disks in the striped volume all have their own dedicated controller, however, swamping is far less likely to occur and so performance is improved.

RAID can be implemented using either a software or a hardware RAID manager device; each method has its advantages and disadvantages.

The following sections discuss RAID configurations, known as levels. The most common RAID levels, 0, 1, 1+0, and 5, are covered in some detail, whereas less common levels are merely compared and contrasted.

RAID 0, Striped Volume

Striping spreads data across multiple physical disks. The logical disk, or volume, is divided into chunks or stripes and then distributed in a round-robin fashion on physical disks. A stripe is always one or more disk blocks in size, with all stripes having the same size.

The name RAID 0 is a contradiction in that it provides no redundancy. Any disk failure in a RAID 0 stripe causes the entire logical volume to be lost. RAID 0 is, however, the least expensive of all RAID levels as all disks are dedicated to data.
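To make the round-robin layout concrete, here is a minimal sketch mapping a logical block to a physical disk and block. The stripe size and disk count are illustrative, and real volume managers vary in layout details.

    def locate_block(logical_block, num_disks, blocks_per_stripe):
        """Map a logical block to (disk, physical block) in a RAID 0 stripe."""
        stripe = logical_block // blocks_per_stripe        # stripe holding it
        disk = stripe % num_disks                          # round-robin choice
        offset = logical_block % blocks_per_stripe         # place within stripe
        physical = (stripe // num_disks) * blocks_per_stripe + offset
        return disk, physical

    # Example: 4 disks, 16 blocks per stripe.
    print(locate_block(100, 4, 16))    # (2, 20): stripe 6 lands on disk 2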

RAID 1, Mirrored Volume

The purpose of mirroring is to provide redundancy. If one of the disks in the mirror fails, the data remains available and processing may continue. The trade-off is that each physical disk is mirrored, so half the physical disk space is devoted to mirroring.

RAID 1+0

Also known as RAID 10, RAID 1+0 provides the highest levels of performance and resiliency. Consequently, it is the most expensive level of RAID to implement. Data remains available after multiple disk failures, as long as the disks that fail all belong to different mirrors. RAID 1+0 is implemented as a striped array whose segments are RAID 1 mirrors.

RAID 0+1

RAID 0+1 is slightly less resilient than RAID 1+0. A stripe is created and then mirrored. If one or more disks fail on the same side of the mirror, the data remains available. If a disk then fails on the other side of the mirror, however, the logical volume is lost. The subtle difference is that with RAID 1+0, disks on either side can fail simultaneously and the data remains available, provided no mirror loses both of its disks. RAID 0+1 is implemented as a mirrored array whose segments are RAID 0 stripes. The sketch below contrasts the two failure modes.
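Here is a minimal sketch checking whether a set of failed disks loses data under each scheme. A four-disk array with two-way mirroring is assumed: under 1+0, disks 0 and 2 form one mirror and disks 1 and 3 the other; under 0+1, disks 0-1 form one stripe and disks 2-3 its mirror.

    def raid10_survives(failed):
        """RAID 1+0: data survives unless some mirror loses both of its disks."""
        mirrors = [{0, 2}, {1, 3}]
        return all(not mirror.issubset(failed) for mirror in mirrors)

    def raid01_survives(failed):
        """RAID 0+1: data survives only while one whole stripe remains intact."""
        stripes = [{0, 1}, {2, 3}]
        return any(not (stripe & failed) for stripe in stripes)

    # Disks 0 and 3 fail: different mirrors, but one disk on each side of 0+1.
    print(raid10_survives({0, 3}))    # True: each mirror keeps one good disk
    print(raid01_survives({0, 3}))    # False: both stripes have lost a disk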

RAID 5

RAID 5 is not as resilient as mirroring, yet it nevertheless provides redundancy: data remains available after a single disk failure. RAID 5 implements redundancy using a parity stripe, created by performing a logical exclusive OR (XOR) on the bytes of the corresponding stripes on the other disks. When one disk fails, its data is recalculated using the data and parity in the corresponding stripes on the remaining disks. Performance suffers, however, while such corrective calculations must be performed.

During normal operation, RAID 5 usually offers lower performance than RAID 0, 1+0, and 0+1, because a RAID 5 volume must perform four physical I/O operations for every logical write: the old data and parity are read, two exclusive OR operations are performed, and the new data and parity are written. Read operations do not suffer the same penalty and thus provide only slightly lower performance than a standard stripe using an equivalent number of disks; the RAID 5 volume effectively has one less disk in its stripe because that space is devoted to parity. For the same reason, a RAID 5 volume is generally cheaper than RAID 1+0 or 0+1, because it devotes more of the available disk space to data. The sketch below illustrates the parity arithmetic and the write penalty.
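A minimal sketch of the parity arithmetic, showing recovery and the read-modify-write update (stripe contents are illustrative):

    def parity(stripes):
        """XOR corresponding bytes of the given stripes."""
        result = bytearray(len(stripes[0]))
        for stripe in stripes:
            for i, byte in enumerate(stripe):
                result[i] ^= byte
        return bytes(result)

    data = [b"\x0f\x0f", b"\xf0\xf0", b"\x33\x33"]
    p = parity(data)    # the parity stripe

    # Recovery after a single disk failure: XOR the survivors with the parity.
    assert parity([data[1], data[2], p]) == data[0]

    # Read-modify-write for one logical write (four physical I/Os): read old
    # data and old parity, XOR twice, write new data and new parity.
    new_data = b"\xaa\xaa"
    new_parity = parity([p, data[0], new_data])    # old parity XOR old XOR new
    data[0] = new_data
    assert parity(data) == new_parity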

Given the performance issues, RAID 5 is not generally recommended unless the data is read-only or unless there are very few writes to the volume. Disk arrays with write caches and fast exclusive or logic engines can mitigate these performance issues however, making RAID 5 a cheaper, viable alternative to mirroring for some deployments.

RAID Levels 2, 3, and 4

RAID levels 2 and 3 are good for large sequential transfers of data, such as video streaming. Both levels can process only one I/O operation at a time, making them inappropriate for applications demanding random access. RAID 2 is implemented using Hamming error correction coding (ECC). This means three physical disk drives are required to store ECC data, making it more expensive than RAID 5, but less expensive than RAID 1+0 as long as there are more than three disks in the stripe. RAID 3 uses a bitwise parity method to achieve redundancy. Parity is not distributed as in RAID 5, but is instead written to a single dedicated disk.

Unlike RAID levels 2 and 3, RAID 4 uses an independent access technique where multiple disk drives are accessed simultaneously. It uses parity in a manner similar to RAID 5, except parity is written to a single disk. The parity disk can therefore become a bottleneck as it is accessed for every write, effectively serializing multiple writes.

Software Volume Managers

Volume managers such as Solaris™ Volume Manager may also be used for Directory Server disk management. Solaris Volume Manager compares favorably with other software volume managers for deployment in production environments.

Monitoring I/O and Disk Use

Disks should not be saturated under normal operating circumstances. You may use utilities such as iostat(1M) on Solaris and other systems to isolate potential I/O bottlenecks. Refer to Windows help for details on handling I/O bottlenecks on Windows systems.


Sizing for Multiprocessor Systems

Directory Server software is optimized to scale across multiple processors. In general, adding processors may increase overall search, index maintenance, and replication performance.

In specific directory deployments, however, you may reach a point of diminishing returns where adding more processors does not impact performance significantly. When handling extremely demanding performance requirements for searching, indexing, and replication, consider load balancing and directory proxy technologies as part of the solution.


Sizing Network Capacity

Directory Server is a network intensive application. To improve network availability for a Directory Server instance, equip the system with two or more network interfaces. Directory Server can support such a hardware configuration, listening on multiple network interfaces within the same process.

If you intend to cluster directory servers on the same network for load balancing purposes, ensure the network infrastructure can support the additional load generated. If you intend to support high update rates for replication in a wide area network environment, ensure through empirical testing that the network quality and bandwidth meet your requirements for replication throughput.


Sizing for SSL

By default, support for the Secure Sockets Layer (SSL) protocol is implemented in software. Using the software-based SSL implementation may have a significant negative impact on Directory Server performance. Running the directory in SSL mode may therefore require deploying several directory replicas to meet overall performance requirements.

Although hardware accelerator cards cannot eliminate the impact of using SSL, they can improve performance significantly compared with software-based implementation. Directory Server 5.2 supports the use of SSL hardware accelerators such as supported Sun Crypto Accelerator hardware.

Using a Sun Crypto Accelerator board can be useful when SSL key calculation is a bottleneck. Such hardware may not improve performance when SSL key calculation is not a bottleneck, however, as it specifically accelerates key calculations during the SSL handshake to negotiate the connection, but not encryption and decryption of messages thereafter. Refer to Appendix A, “Using the Sun Crypto Accelerator Board,” in the Directory Server Administration Guide, for instructions on using such hardware with a Directory Server instance.




