Disk use and I/O capabilities can have a great impact on performance. The disk subsystem can become an I/O bottleneck, especially for a deployment that supports large numbers of modifications. This section recommends ways to estimate overall disk capacity for a Directory Server instance.
Do not install Directory Server or any data it accesses on network disks.
Directory Server software does not support the use of network-attached storage through NFS, AFS, or SMB. All configuration, database, and index files must reside on local storage at all times, even after installation. Log files can be stored on network disks.
The following factors significantly affect the amount of local disk space needed:
Number of directory entries
Average sizes of entries
Server database page size setting when directory data is imported
To adjust the database page size, set the nsslapd-db-page-size attribute, as shown in the sketch after this list. For more information, see Directory Server Database Page Size.
Number of indexes maintained on directory data
Size of stored LDIF, backups, logs, and core files
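For example, the page size can be changed by modifying the nsslapd-db-page-size attribute directly with ldapmodify. The following is a minimal sketch, not a definitive procedure: the configuration entry DN shown, the host, port, and bind DN, and the 8 KB value are assumptions to adapt to your version and data. Per the list above, a new page size takes effect when directory data is imported.

$ ldapmodify -h host -p port -D "cn=Directory Manager" -w -
dn: cn=config,cn=ldbm database,cn=plugins,cn=config
changetype: modify
replace: nsslapd-db-page-size
nsslapd-db-page-size: 8192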
When you have set up indexes, adjusted the database page size, and imported directory data, you can estimate the disk capacity required for the instance by measuring the size of the instance-path/ contents, and adding the size of expected LDIF files, backups, logs, and core files. Also estimate how much the sizes you measure are expected to grow, particularly during peak operation. Make sure you leave a couple of gigabytes of extra space for the errors log in case you need to increase the log level and size for debugging purposes.
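For example, you can read the current size of the instance contents with the du command, where instance-path is the instance directory placeholder used above:

$ du -sh instance-path/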
In some cases, you can estimate the disk space required for directory data by extrapolation. If it is not practical to load Directory Server with as much data as you expect in production, extrapolate from smaller sets of sample data, as suggested in Making Sample Directory Data. When the amount of directory data you use is smaller than in production, you must extrapolate for other measurements, too.
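For example, if a representative sample of 100,000 entries occupies 2 GB under instance-path/ after import and indexing, a production directory of 2 million similar entries can be expected to occupy on the order of 40 GB, before adding space for LDIF files, backups, logs, core files, and growth. The figures here are illustrative, not measurements.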
The following factors determine how fast the local disk must be:
Level of updates sustained, including the volume of replication traffic
Whether directory data are mainly in cache or on disk
Log levels used for access and error logging, and whether the audit log is enabled
Whether directory data, logs, and the transaction log (for updates) can be placed on separate disk subsystems
Whether backups are performed with Directory Server online or offline
Disks used should not be saturated under normal operating circumstances. You can use tools such as the Solaris iostat command to isolate potential I/O bottlenecks.
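For example, the following Solaris invocation reports extended device statistics at ten-second intervals. Sustained high values in the %b (percent of time busy) and asvc_t (active service time) columns for a disk that holds Directory Server files suggest an I/O bottleneck:

$ iostat -xn 10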
To increase disk throughput, distribute files across disk subsystems. Consider providing dedicated disk subsystems for transaction logs (dsconf set-server-prop db-log-path:/transaction/log/path), databases (dsconf create-suffix --db-path /suffix/database/path suffix-name), and log files (dsconf set-log-prop path:/log/file/path). In addition, consider putting database cache files on a memory-based file system such as a Solaris tmpfs file system, where files are swapped to disk only if available memory is exhausted (for example, dsconf set-server-prop db-env-path:/tmp). If you put database cache files on a memory-based file system, make sure the system has enough memory to hold that entire file system in memory at all times.
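Collected together, the commands above form the following sketch. The paths and suffix-name are placeholders, connection options such as -h and -p are omitted, and depending on the dsconf version, set-log-prop may also take the log type (access, error, or audit) as its first argument:

# transaction log on its own disk subsystem
$ dsconf set-server-prop db-log-path:/transaction/log/path
# suffix database on its own disk subsystem
$ dsconf create-suffix --db-path /suffix/database/path suffix-name
# server log files on their own disk subsystem
$ dsconf set-log-prop path:/log/file/path
# database cache files on a memory-based file system
$ dsconf set-server-prop db-env-path:/tmp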
To further increase throughput, use multiple disks in a RAID configuration. Large, nonvolatile I/O buffers and high-performance disk subsystems, such as those offered in Sun StorEdge™ products, can greatly enhance Directory Server performance and uptime. On Solaris 10 systems, using ZFS can also improve performance.