Oracle Fusion Middleware Deployment Planning Guide for Oracle Directory Server Enterprise Edition

Part II Technical Requirements

Technical requirements analysis begins with the business requirements documents that are created during the business analysis phase of the solution life cycle. Using the business analysis, you perform a usage analysis. This analysis helps you to determine expected load conditions and to create use cases that model typical user interaction with the system. The analysis also helps when creating a set of quality of service requirements. These requirements define how a deployed solution must perform in areas such as response time, availability, and security.

This part describes the technical requirements that must be defined for a Directory Server Enterprise Edition deployment. It is divided into the following chapters:

Chapter 3 Usage Analysis for Directory Server Enterprise Edition

Usage analysis involves identifying the users of your system and determining the usage patterns for those users. In doing so, a usage analysis enables you to determine expected load conditions on your directory service.

Usage Analysis Factors

Your reasons for offering Oracle Directory Server Enterprise Edition as an identity management solution have a direct effect on how you deploy the server.

During usage analysis, interview users whenever possible. Research existing data on usage patterns, and interview builders and administrators of previous systems. A usage analysis should provide you with the data that enables you to determine the service requirements that are described in Chapter 5, Defining Service Level Agreements.

The information that should come out of a usage analysis includes the following:

For more information about usage analysis, see Chapter 3, Usage Analysis for Directory Server Enterprise Edition.

Chapter 4 Defining Data Characteristics

The type of data in your directory determines how you structure the directory, who can access the data, and how access is granted. Data types can include, among others, user names, email addresses, telephone numbers, and information about groups to which users belong.

This chapter explains how to locate, categorize, structure, and organize data. It also explains how to map data to the Directory Server schema. This chapter covers the following topics:

Determining Data Sources and Ownership

The first step in categorizing existing data is to identify where that data comes from and who owns it.

Identifying Data Sources

To identify the data to be included in your directory, locate and analyze existing data sources.

Determining Data Ownership

Data ownership refers to the person or organization that is responsible for ensuring that data is up-to-date. During the data design phase, decide who can write data to the directory. Common strategies for determining data ownership include the following:

As you determine who can write to the data, you might find that multiple individuals require write access to the same information. For example, an information systems or directory management group should have write access to employee passwords. You might also want all employees to have write access to their own passwords. While you might need to give multiple people write access to the same information, try to keep this group small and easy to identify. Small groups help to ensure your data’s integrity.

For information about setting access control for your directory, see Chapter 6, Directory Server Access Control, in Oracle Fusion Middleware Administration Guide for Oracle Directory Server Enterprise Edition and How Directory Server Provides Access Control in Oracle Fusion Middleware Reference for Oracle Directory Server Enterprise Edition.

Distinguishing Between User and Configuration Data

To distinguish between data used to configure Directory Server and other Java Enterprise System servers and the actual user data stored in the directory, do the following:

Identifying Data From Disparate Data Sources

When determining data sources, ensure that you include data from other data sources, including legacy data sources. This data might not be stored in the directory. However, Directory Server might need to have some knowledge of, or control over, the data.

Directory Proxy Server provides a virtual directory feature that aggregates information, in real time, from multiple data repositories. These repositories include LDAP directories, JDBC-compliant databases, and LDIF flat files.

The virtual directory supports complex filters that handle attributes from different data sources. It also supports modifications that combine attributes from different data sources.

During the data analysis phase, you might find that the same data is required by several applications, but in a different format. Instead of duplicating this information, it is preferable to have the applications transform it for their requirements.

Designing the DIT

DIT design involves choosing a suffix to contain your data, determining the hierarchical relationship between data entries, and naming the entries in the DIT hierarchy. The DIT interacts closely with other design decisions, including how you distribute, replicate, or control access to directory data.

The following sections describe the DIT design process in more detail.

Choosing a Suffix

The suffix is the name of the entry at the root of the DIT. If you have two or more DITs that do not have a natural common root, you can use multiple suffixes. The default Directory Server installation contains multiple suffixes. One suffix is used to store user data. The other suffixes are for data that is needed by internal directory operations, such as configuration information and directory schema.

All directory entries must be located below a common base entry, the suffix. Each suffix name must be as follows:

It is generally considered best practice to map your enterprise domain name to a Distinguished Name (DN). For example, an enterprise with the domain name example.com would use a DN of dc=example,dc=com.
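
For example, after choosing the suffix DN, you can create the suffix on a server instance with the dsconf create-suffix command. The following is a minimal sketch in which the host name, port, and suffix DN are placeholder values.

$ dsconf create-suffix -h ds.example.com -p 1389 dc=example,dc=com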

Creating the DIT Structure and Naming Entries

The structure of a DIT can be flat or hierarchical. Although a flat tree is easier to manage, a degree of hierarchy might be required for data partitioning, replication management, and access control.

Branch Points and Naming Considerations

A branch point is a point at which you define a new subdivision within the DIT. When deciding on branch points, avoid potentially problematic name changes. The likelihood of a name changing is proportional to the number of components in the name that can potentially change. The more hierarchical the DIT, the more components in the names, and the more likely the names are to change.

Use the following guidelines when defining and naming branch points:

Table 4–1 Traditional DN Branch Point Attributes

c
  A country name.

o
  An organization name. This attribute is typically used to represent a large divisional branching. The branching might include a corporate division, academic discipline, subsidiary, or other major branching within the enterprise. You can also use this attribute to represent a domain name.

ou
  An organizational unit. This attribute is typically used to represent a smaller divisional branching of your enterprise than an organization. Organizational units are generally subordinate to the preceding organization.

st
  A state or province name.

l
  A locality, such as a city, county, office, or facility name.

dc
  A domain component.

Be consistent when choosing attributes for branch points. Some LDAP client applications might fail if the DN format is inconsistent across your DIT. If l (localityName) is subordinate to o (organizationName) in one part of your DIT, ensure that l is subordinate to o in all other parts of your directory.

Replication Considerations

When designing a DIT, consider which entries will be replicated to other servers. If you want to replicate a specific group of entries to the same set of servers, those entries should fall below a specific subtree. To describe the set of entries to be replicated, specify the DN at the top of the subtree. For more information about replicating entries, see Chapter 7, Directory Server Replication, in Oracle Fusion Middleware Reference for Oracle Directory Server Enterprise Edition.

Access Control Considerations

A DIT hierarchy can enable certain types of access control. As with replication, it is easier to group similar entries and to administer the entries from a single branch.

A hierarchical DIT also enables distributed administration. For example, you can use the DIT to give an administrator from the marketing department access to marketing entries, and an administrator from the sales department access to sales entries.

You can also set access controls based on directory content, rather than on the DIT. Use the ACI filtered target mechanism to define a single access control rule that grants a user or group access to all entries that contain a particular attribute value. For example, you can set an ACI filter that gives the sales administrator access to all entries that contain the attribute value ou=Sales.
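
As a sketch, such a rule might look like the following ACI, where the administrator DN, the granted rights, and the ou=Sales filter value are hypothetical and must be adapted to your own DIT.

aci: (targetattr="*")(targetfilter="(ou=Sales)")
 (version 3.0; acl "Sales admin access";
 allow (read,search,write)
 userdn="ldap:///uid=salesadmin,ou=People,dc=example,dc=com";)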

However, ACI filters can be difficult to manage. You must decide which method of access control is best suited to your directory: organizational branching in the DIT hierarchy, ACI filters, or a combination of the two.

Designing a Directory Schema

The directory schema describes the types of data that can be stored in a directory. During schema design, each data element is mapped to an LDAP attribute. Related elements are gathered into LDAP object classes. A well-designed schema helps maintain data integrity by imposing constraints on the size, range, and format of data values. You decide what types of entries your directory contains and the attributes that are available to each entry.

The predefined schema that is included with Directory Server contains the Internet Engineering Task Force (IETF) standard LDAP schema. The schema contains additional application-specific schema to support the features of the server. It also contains Directory Server-specific schema extensions. While this schema meets most directory requirements, you might need to extend the schema with new object classes and attributes that are specific to your directory.

Schema Design Process

Schema design involves doing the following:

Where possible, use the existing schema elements that are defined in the default Directory Server schema. Standard schema elements help to ensure compatibility with directory-enabled applications. Because the schema is based on the LDAP standard, it has been reviewed and agreed to by a large number of directory users.

Maintaining Data Consistency

Consistent data assists LDAP client applications in locating directory entries. For each type of information that is stored in the directory, select the required object classes and attributes to support that information. Always use the same object classes and attributes. If you use schema objects inconsistently, it is difficult to locate information.

You can maintain schema consistency in the following ways:

Other Directory Data Resources

For more information about the standard LDAP schema, and about designing a DIT, see the following sites:

For a complete list of the RFCs and standards supported by Directory Server Enterprise Edition, see Appendix A, Standards and RFCs Supported by Directory Server Enterprise Edition, in Oracle Fusion Middleware Evaluation Guide for Oracle Directory Server Enterprise Edition.

Chapter 5 Defining Service Level Agreements

Service level agreements are technical specifications that determine how the system must perform under certain conditions. This chapter describes the service requirements that are specific to Directory Server Enterprise Edition. The chapter includes questions that you need to ask during the planning phase to ensure that your deployment meets these requirements.

This chapter covers the following topics:

Identifying System Qualities

To identify system qualities, specify the minimum requirements that your directory service must provide. The following system qualities typically form a basis for quality of service requirements:

Defining Performance Requirements

Performance requirements should be based on typical models of directory usage. In all directory deployments, Directory Server supports one or more client applications, and the requirements of these applications must be assessed. Estimating how much information your directory contains, and how often that information is accessed, involves identifying these applications and determining how they use Directory Server.

Identifying Client Applications

The applications that access your directory and the data needs of these applications have a significant impact on performance requirements. When identifying client applications, consider the following:

Common applications that might use your directory include the following:

When you have identified the information used by each application, you might see that some types of data are used by more than one application. Performing this kind of exercise during the planning stage can help you to avoid data redundancy.

Determining the Number and Size of Directory Entries

The number and size of entries that are stored in the directory depend largely on your data requirements, as described in Chapter 4, Defining Data Characteristics.

Consider the following when calculating the number and size of entries:

Determining the Number of Reads

In estimating read traffic, consider the following:

If read performance is crucial to your enterprise, see Chapter 10, Designing a Scaled Deployment for suggestions on deploying a directory service that is scaled for reads.

Determining the Number of Writes

In estimating write traffic, consider the following:

If write performance is crucial to your enterprise, see Chapter 10, Designing a Scaled Deployment for suggestions on deploying a directory service that is scaled for writes.

Estimating the Acceptable Response Time

For each client application, determine the maximum response time that is acceptable. The acceptable response time might differ for various geographical locations, and for different kinds of operations.

Estimating the Acceptable Replication Latency

Estimate the level of synchronicity that is required between master replicas and consumer replicas. The Directory Server replication model is loosely consistent, that is, updates are accepted on a master without requiring communication with the other replicas in a topology. At any given time, the contents of each replica might be different. Over time, the replicas converge until each replica has an identical copy of the data. As part of performance planning, determine the maximum acceptable time that replicas have to converge.

Starting with Directory Server 6.x, a prioritized replication feature enables you to specify that changes to certain attributes must be replicated as soon as possible. Prioritized replication might affect your decisions about acceptable replication latency. For more information, see Prioritized Replication.

Defining Availability Requirements

Availability implies an agreed minimum up time and level of performance for your directory service. Failure, in this context, is defined as anything that prevents the directory service from providing this minimum level of service.

In assessing availability requirements, consider the following:

For suggestions on deploying a highly available directory service, see Chapter 12, Designing a Highly Available Deployment.

Defining Scalability Requirements

As your directory evolves, the service levels that must be supported might change. Raising the level of service after a system has been deployed can be difficult. Thus, the initial design must take future requirements into account.

When defining scalability requirements, consider the following:

Increase CPU estimates to make sure that your deployment does not have to be scaled prematurely. Look at the anticipated milestones for scaling and projected load increase over time to make sure that you allow enough latent capacity to reach the milestones.

Defining Security Requirements

Security requirements warrant separate discussion. These requirements are described in detail in Chapter 7, Identifying Security Requirements.

Defining Latent Capacity Requirements

In determining latent capacity requirements, estimate the peak load conditions for your directory service. Consider the following:

Defining Serviceability Requirements

Serviceability requirements are discussed in detail in Chapter 8, Identifying Administration and Monitoring Requirements.

Chapter 6 Tuning System Characteristics and Hardware Sizing

A Directory Server Enterprise Edition deployment requires that certain system characteristics be defined at the outset. This chapter describes the system information that you need to address in the planning phase of your deployment.

This chapter covers the following topics:

Host System Characteristics

When identifying the host systems that will be used in your deployment, consider the following:

When the host systems have been identified, select a host name for each host in the topology. Make sure that each host system has a static IP address.

Restrict physical access to the host system. Although Directory Server Enterprise Edition includes many security features, directory security is compromised if physical access to the host system is not controlled.

If the Directory Server instances do not provide a naming service for the network, or if the deployment involves remote administration, a naming service and the domain name for the host must be properly configured.

Port Numbers

At design time, select port numbers for each Directory Server and Directory Proxy Server instance. If possible, do not change port numbers after your directory service is deployed in a production environment.

Separate port numbers must be allocated for various services and components.

Directory Server and Directory Proxy Server LDAP and LDAPS Port Numbers

Specify the port number for accepting LDAP connections. The standard port for LDAP communication is 389, although other ports can be used. For example, if you must be able to start the server as a regular user, use an unprivileged port, 1389 by default. Port numbers less than 1024 require privileged access, so if you use a port number that is less than 1024, commands that start the server on that port must be run as root.

Specify the port number for accepting SSL-based connections. The standard port for SSL-based LDAP (LDAPS) communication is 636, although other ports can be used, such as the default 1636 when running as a regular user. For example, an unprivileged port might be required so that the server can be started as a regular user.
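
LDAP and LDAPS port numbers are typically fixed when the instance is created. The following sketch uses the dsadm create command with placeholder port numbers and an assumed instance path of /local/ds1.

$ dsadm create -p 1389 -P 1636 /local/ds1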

If you specify a non-privileged port and a server instance is installed on a system to which other users have access, you risk having the port hijacked by another application. In other words, another application can bind to the same address/port pair. The rogue application might then be able to process requests that are intended for the server. The application could also be used to capture passwords used in the authentication process, to alter client requests or server responses, or to mount a denial of service attack.

Both Directory Server and Directory Proxy Server allow you to restrict the list of IP addresses on which the server listens. Directory Server has configuration attributes nsslapd-listenhost and nsslapd-securelistenhost. Directory Proxy Server has listen-address properties on ldap-listener and ldaps-listener configuration objects. When you specify the list of interfaces on which to listen, other programs are prevented from using the same port numbers as your server.
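
For example, to restrict the interfaces on which Directory Proxy Server listens, you can set the listen-address property mentioned above. The address and port in this sketch are placeholders; for Directory Server, the equivalent change is made to the nsslapd-listenhost and nsslapd-securelistenhost attributes on the cn=config entry.

$ dpconf set-ldap-listener-prop -h dps.example.com -p 1390 listen-address:192.0.2.10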

Directory Server DSML Port Numbers

In addition to processing requests in LDAP, Directory Server also responds to requests sent in the Directory Service Markup Language v2 (DSML). DSML is another way for a client to encode directory operations. Directory Server processes DSML requests as it does any other request, with the same access control and security features.

If your topology includes DSML access, identify the following:

For information about configuring DSML, see To Enable the DSML-over-HTTP Service in Oracle Fusion Middleware Administration Guide for Oracle Directory Server Enterprise Edition.

Directory Service Control Center and Common Agent Container Port Numbers

Directory Service Control Center, DSCC, is a web application that enables you to administer Directory Server and Directory Proxy Server instances through a web browser. For a server to be recognized by DSCC, the server must be registered with DSCC. Unregistered servers can still be managed using command-line utilities.

DSCC communicates with DSCC agents located on the systems where servers are installed. The DSCC agents run inside a common agent container, which routes network traffic to them and provides them a framework in which to run.

If you plan to use DSCC to administer servers in your topology, identify the following port numbers.

Even if all components are installed on the same system, DSCC still communicates with its agents through these network ports.

Identity Synchronization for Windows Port Numbers

If your deployment includes identity synchronization with Microsoft Active Directory, an available port is required for the Message Queue instance. This port must be available on each Directory Server instance that participates in the synchronization. The default non-secure port for Message Queue is 80, and the default secure port is 443.

You must also make additional installation decisions and configuration decisions when planning your deployment. For details on installing and configuring Identity Synchronization for Windows, see Sun Java System Identity Synchronization for Windows 6.0 Installation and Configuration Guide.

Hardware Sizing For Directory Service Control Center

DSCC runs as a web application and uses its own local instance of Directory Server to store configuration data.

The minimum requirement to run DSCC is 256 megabytes of memory and 100 megabytes of free disk space. However, for optimum performance, run DSCC on a system with at least one gigabyte of memory devoted to DSCC and a couple of gigabytes of free disk space.

Hardware Sizing For Directory Proxy Server

Directory Proxy Server runs as a multithreaded Java program, and is built to scale across multiple processors. In general, the more processing power available, the better. In practice, however, you might find that adding memory, faster disks, or faster network connections enhances performance more than adding processors does.

Configuring Virtual Memory

Directory Proxy Server uses memory mainly to hold information that is being processed. Complex aggregations for processing some virtual directory requests against multiple data sources may temporarily use extra memory. If one of your data sources is an LDIF file, Directory Proxy Server constructs a representation of that data source in memory. However, unless you use large LDIF data sources (not a recommended deployment practice), a couple of gigabytes of memory devoted to Directory Proxy Server should suffice. You might want to increase the Java virtual machine heap size when starting Directory Proxy Server if enough memory is available. For example, to set the Java virtual machine heap size to 1000 megabytes, use the following command.


$ dpadm set-flags instance-path jvm-args="-Xmx1000M -Xms1000M -XX:NewRatio=1"

This command uses the -XX:NewRatio option, which is specific to the Sun Java virtual machine. The default heap size is 250 megabytes.

Configuring Worker Threads and Backend Connections

Directory Proxy Server allows you to configure how many threads the server maintains to process requests. You configure this using the server property number-of-worker-threads, described in number-of-worker-threads(5dpconf). As a rule of thumb, try setting this number to 50 threads plus 20 threads for each data source used. To gauge whether the number is sufficient, monitor the status of the Directory Proxy Server work queue on cn=Work Queue,cn=System Resource,cn=instance-path,cn=Application System,cn=DPS6.0,cn=Installed Product,cn=monitor. If you find that the operationalStatus for the work queue is STRESSED, this can mean thread-starved connection handlers are unable to handle new client requests. Increasing number-of-worker-threads may help if more system resources are available for Directory Proxy Server.
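
As a sketch, the following commands adjust the worker thread count and then check the work queue status. The host, port, and password are placeholder values, the cn=instance-path component is a placeholder for your instance, and the example assumes the default cn=Proxy Manager bind DN.

$ dpconf set-server-prop -h dps.example.com -p 1390 number-of-worker-threads:110
$ ldapsearch -h dps.example.com -p 1390 -D "cn=Proxy Manager" -w password \
 -b "cn=Work Queue,cn=System Resource,cn=instance-path,cn=Application System,cn=DPS6.0,cn=Installed Product,cn=monitor" \
 -s base "(objectclass=*)" operationalStatus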

The number of worker threads should also be appropriate for the number of backend connections. If there are too many worker threads for the number of backend connections, incoming connections are accepted but cannot be transmitted to the backend connections. Such a situation is generally problematic for client applications.

To determine whether this situation has arisen, check the log files for error messages of the following type: "Unable to get backend connections". Alternatively, look at the cn=monitor entry for load balancing. If the totalBindConnectionsRefused attribute in that entry is not null, the proxy was unable to process certain operations because there were not enough backend connections. To solve this issue, increase the maximum number of backend connections. You can configure the number of backend connections for each data source by using the num-bind-limit, num-read-limit, and num-write-limit properties of the data source. If you have already reached the limit for backend connections, reduce the number of worker threads.
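
For example, because the connection limits are properties of each LDAP data source, you might raise them as follows. The data source name ds1, the host, the port, and the limit values are illustrative only.

$ dpconf set-ldap-data-source-prop -h dps.example.com -p 1390 ds1 \
 num-bind-limit:40 num-read-limit:40 num-write-limit:40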

If there are not enough worker threads for the number of backend connections, so much work can pile up in the server's queue that no new connections can be handled. Client connections can then be refused at the TCP/IP level, with no LDAP error returned. To determine if this situation has arisen, look at the statistics in the cn=monitor entry for the work queue. In particular, readConnectionsRefused and writeConnectionsRefused should remain low. Also, the value of the maxNormalPriorityPeak attribute should remain low.

Disk Space for Directory Proxy Server

By default, Directory Proxy Server requires up to one gigabyte of local disk space for access logging, and another gigabyte of local disk space for errors logging. Given the quantity of access log messages Directory Proxy Server writes when handling client application requests, logging can be a performance bottleneck. Typically, however, you must leave logging on in a production environment. For best performance, therefore, put Directory Proxy Server logs on a fast, dedicated disk subsystem. See Configuring Directory Proxy Server Logs in Oracle Fusion Middleware Administration Guide for Oracle Directory Server Enterprise Edition for instructions on adjusting log settings.

Network Connections for Directory Proxy Server

Directory Proxy Server is a network-intensive application. For each client application request, Directory Proxy Server may send multiple operations to different data sources. Make sure the network connections between Directory Proxy Server and your data sources are fast, with plenty of bandwidth and low latency. Also make sure the connections between Directory Proxy Server and client applications can handle the amount of traffic you expect.

Hardware Sizing For Directory Server

Getting the right hardware for a medium to large Directory Server deployment involves some testing with data similar to the data you expect to serve in production, and access patterns similar to those you expect from client applications. When optimizing for particular systems, make sure you understand how system buses, peripheral buses, I/O devices, and supported file systems work. This knowledge helps you take advantage of I/O subsystem features when tuning these features to support Directory Server.

This section looks at how to approach hardware sizing for Directory Server. It covers what to consider when deciding how many processors, how much memory, how much disk space, and what type of network connections to dedicate to Directory Server in your deployment.

This section covers the following topics:


Note –

Unless indicated otherwise, the server properties described in the following sections can be set with the dsconf command. For more information about using dsconf, see dsconf(1M).


The Tuning Process

Tuning performance implies modifying the default configuration to reflect specific deployment requirements. The following list of process phases covers the key things to think about when tuning Directory Server.

Define goals

Define specific, measurable objectives for tuning, based on deployment requirements.

Consider the following questions.

  • Which applications use Directory Server?

  • Can you dedicate the entire system to Directory Server?

    Does the system run other applications?

    If so, which other applications run on the system?

  • How many entries are handled by the deployment?

    How large are the entries?

  • How many searches per second must Directory Server support?

    What types of searches are expected?

  • How many updates per second must Directory Server support?

    What types of updates are expected?

  • What sort of peak update and search rates are expected?

    What average rates are expected?

  • Does the deployment call for repeated bulk import initialization on this system?

    If so, how often do you expect to import data? How many entries are imported?

    What types of entries?

    Must initialization be performed online with the server running?

The list here is not exhaustive. Ensure that your list of goals is exhaustive.

Select methods

Determine how you plan to implement optimizations. Also, determine how you plan to measure and analyze optimizations.

Consider the following questions.

  • Can you change the hardware configuration of the system?

  • Are you limited to using hardware that you already have, tuning only the underlying operating system and Directory Server?

  • How can you simulate other applications?

  • How should you generate representative data samples for testing?

  • How should you measure results?

  • How should you analyze results?

Perform tests

Carry out the tests that you planned. For large, complex deployments, this phase can take considerable time.

Verify results

Check whether the potential optimizations tested reach the goals defined at the outset of the process.

If the optimizations reach the goals, document the results.

If the optimizations do not reach the goals, profile and monitor Directory Server.

Profile and monitor

Profile and monitor the behavior of Directory Server after applying the potential modifications.

Collect measurements of all relevant behavior.

Plot and analyze

Plot and analyze the behavior that you observed while profiling and monitoring. Attempt to find evidence and to discover patterns that suggest further tests.

You might need to go back to the profiling and monitoring phase to collect more data.

Tweak and tune

Apply further potential optimizations suggested by your analysis of measurements.

Return to the phase of performing tests.

Document results

When the optimizations applied reach the goals defined at the outset of the process, document the optimizations well so that they can be easily reproduced.

Making Sample Directory Data

How much disk and memory space you devote to Directory Server depends on your directory data. If you already have representative data in LDIF, use that data when sizing hardware for your deployment. Representative data here means sample data that corresponds to the data you expect to use in deployment, but not the actual data you use in deployment. Real data comes with real privacy concerns, can be multiple orders of magnitude larger than the specifications needed to generate representative data, and may not help you exercise all the cases you want to test. Representative data includes entries whose average size is close to the size you expect to see in deployment, whose attributes have values similar to those you expect to see in deployment, and that are present in proportions similar to those you expect to see in deployment.

Take anticipated growth into account when you are deciding on representative data. It is advisable to include an overhead on current data for capacity planning.

If you do not have representative data readily available, you can use the makeldif(1) command to generate sample LDIF, which you can then import into Directory Server. Chapter 4, Defining Data Characteristics can help you figure out what representative data would be for your deployment. The makeldif command is one of the Directory Server Resource Kit tools.
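
For example, the following sketch generates a sample LDIF file from a template. The template and output file names are placeholders, and the exact options depend on the version of the Directory Server Resource Kit you use; see the makeldif(1) man page.

$ makeldif -t example.template -o sample-100k.ldif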

For deployments expected to serve millions of entries in production, ideally you would load millions of entries for testing. Yet loading millions of entries may not be practical for a first estimate. Start by creating a few sets of representative data, for example 10,000 entries, 100,000 entries, and 1,000,000 entries, import those, and extrapolate from the results you observe to estimate the hardware required for further testing. When you are estimating hardware requirements, make provision for data that will be replicated to multiple servers.

Notice that when you import directory data from LDIF into Directory Server, the resulting database files (including indexes) are larger than the LDIF representation. The database files, by default, are located under the instance-path/db/ directory.
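
As a rough sketch, you can import a sample LDIF file and then measure the resulting database size. The file name, suffix, port, and instance path below are placeholder values.

$ dsconf import -p 1389 /var/tmp/sample-100k.ldif dc=example,dc=com
$ du -sh /local/ds1/db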

What to Configure and Why

Directory Server default configuration settings are defined for typical small deployments and to make it easy to install and evaluate the product. This section examines some key configuration settings to adjust for medium to large deployments. In medium to large deployments you can often improve performance significantly by adapting configuration settings to your particular deployment.

Directory Server Database Page Size

When Directory Server reads or writes data, it works with fixed blocks of data, called pages. By increasing the page size you increase the size of the block that is read or written in one disk operation.

The page size is related to the size of entries and is a critical element of performance. If you know that the average size of your entries is greater than (db-page-size/4) - 24, where 24 bytes is the per-page overhead of the internal binary tree structure, you must increase the database page size. The database page size should also match the file system disk block size.
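
For illustration, assuming an 8-kilobyte page size, the threshold works out to 8192/4 - 24 = 2024 bytes. If your average entry size exceeds roughly 2 kilobytes in that case, consider a larger page size, such as 16 or 32 kilobytes.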

Directory Server Cache Sizes

Directory Server is designed to respond quickly to client application requests. In order to avoid waiting for directory data to be read from disk, Directory Server caches data in memory. You can configure how much memory is devoted to cache for database files, for directory entries, and for importing directory data from LDIF.

Ideally the hardware on which you run Directory Server allows you to devote enough space to cache all directory data in physical memory. The data should fit comfortably, such that the system has enough physical memory for operation, and the file system has plenty of physical memory for its caching and operation. Once the data are cached, Directory Server has to read data from and write data to disk only when a directory entry changes.

Directory Server supports 64-bit memory addressing, and so can handle total cache sizes as large as a 64-bit processor can address. For small to medium deployments it is often possible to provide enough memory that all directory data can be held in cache. For large deployments, however, caching everything may not be practical or cost effective.

For large deployments, caching everything in memory can cause side effects. Tools such as the pmap command, which traverse the process memory map to gather data, can freeze the server process for a noticeable time. Core files can become so large that writing them to disk during a crash can take several minutes. Startup times can be slow if the server is shut down abruptly and then restarted. Directory Server can also pause and stop responding temporarily when it reaches a checkpoint and has to flush dirty cached pages to disk. When the cache is very large, the pauses can become so long that monitoring software assumes Directory Server is down.

I/O buffers at the operating system level can provide better performance. Very large buffers can compensate for smaller database caches.

For a detailed discussion of cache and cache settings, read Tuning Cache Settings. For more information on tuning cache sizes, read The Basics of Directory Server Cache Sizing.
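
As a sketch, cache sizes can be adjusted with the dsconf command. The property names are those discussed above; the port, suffix, and size values are placeholders to be validated by testing.

$ dsconf set-server-prop -p 1389 db-cache-size:2G
$ dsconf set-suffix-prop -p 1389 dc=example,dc=com entry-cache-size:4G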

Directory Server Indexes

Directory Server indexes directory entry attribute values to speed searches for those values. You can configure attributes to be indexed in various ways. For example, indexes can help Directory Server determine quickly whether an attribute has a value, whether it has a value equal to a given value, and whether it has a value containing a given substring.

Indexes can add to search performance, but they can also impact write performance. When an attribute is indexed, Directory Server has to update the index as values of the attribute change.

Directory Server saves index data to files. The more indexes you configure, the more disk space required. Directory Server indexes and data files are found, by default, under the instance-path/db/ directory.

For a detailed discussion of indexing and index settings, read Chapter 9, Directory Server Indexing, in Oracle Fusion Middleware Reference for Oracle Directory Server Enterprise Edition.
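
For example, the following sketch adds an index for the mail attribute and then rebuilds it so that existing entries are indexed. The attribute name, suffix, and port are placeholder values.

$ dsconf create-index -p 1389 dc=example,dc=com mail
$ dsconf reindex -p 1389 -t mail dc=example,dc=com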

Directory Server Administration Files

Some Directory Server administration files can potentially become very large. These files include the LDIF files containing directory data, backups, core files, and log files.

Depending on your deployment, you may use LDIF both to import Directory Server data and to serve as an auxiliary backup. A standard text format, LDIF allows you to export binary data as well as strings. LDIF can occupy significant disk space in large deployments. For example, a directory containing 10 million entries with an average size of 2 kilobytes would occupy 20 gigabytes on disk in LDIF representation. You might maintain multiple LDIF files of that size if you use the format for auxiliary backup.

Binary backup files also occupy space on disk, at least until you move them somewhere else for safekeeping. Backup files produced with Directory Server utilities consist of binary copies of the directory database files. Alternatively for large deployments you can put Directory Server in frozen mode and take a snapshot of the file system. Either way, you must have disk space available for the backup.

By default Directory Server writes log messages to instance-path/logs/access and instance-path/logs/errors. By default Directory Server requires one gigabyte of local disk space for access logging, and another 200 megabytes of local disk space for errors logging.

For a detailed discussion of Directory Server logging, read Chapter 10, Directory Server Logging, in Oracle Fusion Middleware Reference for Oracle Directory Server Enterprise Edition.

Directory Server Replication

Directory Server lets you replicate directory data for availability and load balancing between the servers in your deployment. Directory Server allows you to have multiple read-write (master) replicas deployed together.

Internally, the server makes this possible by keeping track of changes to directory data. When the same data are modified on more than one read-write replica, Directory Server can resolve the changes correctly on all replicas. The data used to track these changes must be retained until they are no longer needed for replication. Changes are retained for a period of time specified by the purge delay, whose default value is seven days. If your directory data undergoes much modification, especially of large multi-valued attributes, this change-tracking data can grow quite large.

Because the level of growth is dependent on several factors, there is no catch-all formula to calculate potential data growth. The best approach is to test typical modifications and measure the growth. The following factors have an effect on data growth as a result of entry modification:

Note that the replication metadata remains in the entry until the purge delay has passed and the entry is modified again.

For a detailed discussion of Directory Server replication, read Chapter 7, Directory Server Replication, in Oracle Fusion Middleware Reference for Oracle Directory Server Enterprise Edition.

Directory Server Threads and File Descriptors

Directory Server runs as a multithreaded process, and is designed to scale on multiprocessor systems. You can configure the number of threads Directory Server creates at startup to process operations. By default Directory Server creates 30 threads. The value is set using the dsconf(1M) command to adjust the server property thread-count.

The trick is to keep the threads as busy as possible without incurring undue overhead from having to handle many threads. As long as all directory data fits in cache, better performance is often seen when thread-count is set to twice the number of processors plus the expected number of simultaneous update operations. If only a fraction of a large directory data set fits in cache, Directory Server threads may often have to wait for data being read from disk. In that case you may find performance improves with a much higher thread count, up to 16 times the number of available processors.

Directory Server uses file descriptors to hold data related to open client application connections. By default Directory Server uses a maximum of 1024 file descriptors. The value is set using the dsconf command to adjust the server property file-descriptor-count. If you see a message in the errors log stating too many fds open, you may observe better performance by increasing file-descriptor-count, presuming your system allows Directory Server to open additional file descriptors.

The file-descriptor-count property does not apply on Windows.
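
For example, on a UNIX system you might raise both properties as follows. The values shown are illustrative and should be validated against operating system limits and your own test results.

$ dsconf set-server-prop -p 1389 thread-count:64
$ dsconf set-server-prop -p 1389 file-descriptor-count:8192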

Directory Server Growth

Once in deployment, Directory Server use is likely to grow. Planning for growth is key to a successful deployment in which you continue to provide a consistently high level of service. Plan for larger, more powerful systems than you need today, basing your requirements in part on the growth you expect tomorrow.

Sometimes directory services must grow rapidly, even suddenly. This is the case for example when a directory service sized for one organization is merged with that of another organization. By preparing for growth in advance and by explicitly identifying your expectations, you are better equipped to deal with rapid and sudden growth, because you know in advance whether the expected increase outstrips the capacity you planned.

Top Tuning Tips

Basic recommendations follow. These recommendations apply in most situations. Although the recommendations presented here are in general valid, avoid the temptation to apply the recommendations without understanding the impact on the deployment at hand. This section is intended as a checklist, not a cheat sheet.

  1. Adjust cache sizes.

    Ideally, the server has enough available physical memory to hold all caches used by Directory Server. Furthermore, an appropriate amount of extra physical memory is available to account for future growth. When plenty of physical memory is available, set the entry cache size large enough to hold all entries in the directory. Use the entry-cache-size suffix property. Set the database cache size large enough to hold all indexes with the db-cache-size property. Use the dn-cache-size or dn-cache-count properties to control the size of the DN cache.

  2. Optimize indexing.

    1. Remove unnecessary indexes. Add additional indexes to support expected requests.

      From time to time, you can add additional indexes that support requests from new applications. You can add, remove, or modify indexes while Directory Server is running. Use for example the dsconf create-index and dsconf delete-index commands.

      Be careful not to remove system indexes. For a list of system indexes, see System Indexes and Default Indexes in Oracle Fusion Middleware Reference for Oracle Directory Server Enterprise Edition.

      Directory Server gradually indexes data after you make changes to the indexes. You can also force Directory Server to rebuild indexes with the dsconf reindex command.

    2. Allow only indexed searches.

      Unindexed searches can have a strong negative impact on server performance. Unindexed searches can also consume significant server resources.

      Consider forcing the server to reject unindexed searches by setting the require-index-enabled suffix property to on.

    3. Adjust the maximum number of values per index key with the all-ids-threshold property.

  3. Tune the underlying operating system according to recommendations made by the idsktune command. For more information, see idsktune(1M).

  4. Adjust operational limits.

    Adjustable operational limits prevent Directory Server from devoting inordinate resources to any single operation. Consider assigning unique bind DNs to client applications requiring increased capabilities, then setting resource limits specifically for these unique bind DNs.

  5. Distribute disk activity.

    Especially for deployments that support large numbers of updates, Directory Server can be extremely disk I/O intensive. If possible, consider spreading the load across multiple disks with separate controllers.

  6. Disable unnecessary logging.

    Disk access is slower than memory access. Heavy logging can therefore have a negative impact on performance. Reduce disk load by leaving audit logging off when not required, such as on a read-only server instance. Leave error logging at a minimal level when not using the error log to troubleshoot problems. You can also reduce the impact of logging by putting log files on a dedicated disk, or on a lesser used disk, such as the disk used for the replication changelog.

  7. When replicating large numbers of updates, consider adjusting the appropriate replication agreement properties.

    The properties are transport-compression, transport-group-size, and transport-window-size.

  8. On Solaris systems, move the database home directory to a tmpfs file system.

    The database home directory, specified by the db-env-path property, indicates where Directory Server locates database cache backing files. Data files continue to reside by default under instance-path/db.

    With the database cache backing files on a tmpfs file system, the system does not repeatedly flush the database cache backing files to disk. You therefore avoid a performance bottleneck for updates. In some cases, you also avoid the performance bottleneck for searches. The database cache memory is mapped to the Directory Server process space. The system essentially shares cache memory and memory used to hold the backing files in the tmpfs file system. You therefore gain performance at essentially no cost in terms of memory space needed.

    The primary cost associated with this optimization is that the database cache must be rebuilt after a restart of the host machine. This cost is probably unavoidable in any case if you expect a restart to happen only after a software or hardware failure. After such a failure, the database cache must be rebuilt anyway.

  9. Enable transaction batches if you can afford to lose updates during a software or hardware failure.

    You enable transaction batches by setting the server property db-batched-transaction-count.

    Each update to the transaction log is followed by a sync operation to ensure that update data is not lost. By enabling transaction batches, updates are grouped together before being written to the transaction log. Sync operations take place only when the whole batch is written to the transaction log. Transaction batches can therefore significantly increase update performance. The improvement comes with a trade-off: during a crash, you lose update data that has not yet been written to the transaction log.


    Note –

    With transaction batches enabled, you lose up to db-batched-transaction-count - 1 updates during a software or hardware failure. The loss happens because Directory Server waits for the batch to fill, or for 1 second, whichever is sooner, before flushing content to the transaction log and thus to disk.

    Do not use this optimization if you cannot afford to lose updates.


  10. Configure the referential integrity plug-in to delay integrity checks.

    The referential integrity plug-in ensures that when entries are modified, or deleted from the directory, all references to those entries are updated. By default, the processing is performed synchronously, before the response for the delete operation is returned to the client. You can configure the plug-in to have the updates performed asynchronously. Use the ref-integrity-check-delay server property.
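
As a sketch of how some of these tips translate into configuration commands, the following example applies tips 2 and 9 above, using a placeholder port and suffix and illustrative values.

$ dsconf set-suffix-prop -p 1389 dc=example,dc=com require-index-enabled:on
$ dsconf set-server-prop -p 1389 db-batched-transaction-count:10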

Simulating Client Application Load

To measure Directory Server performance, you prepare the server, then subject it to the kind of client application traffic you expect in production. The better you reproduce the access patterns of the client applications you expect in production, the better job you can do of sizing the hardware and configuring Directory Server appropriately.

Directory Server Resource Kit provides the authrate(1), modrate(1), and searchrate(1) commands you can use for basic tests. These commands let you measure the rate of binds, modifications, and searches your directory service can support.

You can also simulate, measure, and graph complex, realistic client access using SLAMD. The SLAMD Distributed Load Generation Engine (SLAMD) is a Java application that is designed to stress test and analyze the performance of network-based applications. It was originally developed by Sun Microsystems, Inc. to benchmark and analyze the performance of LDAP Directory Servers. SLAMD is available as an open source application under the Sun Public License, an OSI-approved open source license. To obtain information about SLAMD, go to http://www.slamd.com/. SLAMD is also available as a java.net project. See https://slamd.dev.java.net/.

Directory Server and Processors

As a multithreaded process built to work on systems with multiple processors, Directory Server performance scales linearly in most cases as you devote more processors to it. When running Directory Server on a system with many processors, consider using the dsconf command to adjust the server property thread-count, which is the number of threads Directory Server starts to process server operations.

In specific directory deployments, however, adding more processors might not significantly impact performance. When handling demanding performance requirements for searching, indexing, and replication, consider load balancing and directory proxy technologies as part of the solution.

Directory Server and Memory

The following factors significantly affect the amount of memory needed:

To estimate the memory size required to run Directory Server, measure the memory used by a specific Directory Server configuration on a system loaded as in production, including application load generated, for example, by using the Directory Server Resource Kit commands or SLAMD.

Before you measure Directory Server process size, give the server some time after startup to fill entry caches as during normal or peak operation. If you have space to put everything in cache memory, you can speed this warm up period for Directory Server by reading every entry in the directory to fill entry caches. If you do not have space to put everything in cache memory, simulate client access for some time until the cache fills as it would with a pattern of normal or peak operation.

With the server in an equilibrium state, you can use utilities such as pmap on Solaris or Linux, or the Windows Task Manager, to measure memory used by the Directory Server process (ns-slapd on UNIX systems, slapd.exe on Windows systems). For more information, see the pmap(1) man page. Measure process size both during normal operation and during peak operation before deciding how much memory to use.
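
For example, on Solaris or Linux you might capture the process memory map as follows, assuming a single ns-slapd process is running; the last line of the output reports the total size.

$ pmap -x $(pgrep ns-slapd)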

Make sure to add to your estimates the amount of memory needed for system administration, and for the system itself. Operating system memory requirements can vary widely depending on the system configuration. Therefore, estimating the memory needed to run the underlying operating system must be done empirically. After tuning the system, monitor memory use and compare it to your estimate. You can use utilities such as the Solaris vmstat and sar commands, or the Task Manager on Windows, to measure memory use.

At a minimum, provide enough memory so that running Directory Server does not cause constant page swapping, which negatively affects performance. Utilities such as MemTool, unsupported and available separately for Solaris systems, can be useful in monitoring how memory is used by and allocated to running applications.

If the system cannot accommodate additional memory, yet you continue to observe constant page swapping, reduce the size of the database and entry caches. Although you can throttle memory use with the heap-high-threshold-size and heap-low-threshold-size server settings, consider the heap threshold mechanism as a last resort. Performance suffers when Directory Server must delay other operations to free heap memory.

On Red Hat Linux systems, you can adjust the /proc/sys/vm/swappiness parameter to tune how aggressively the kernel swaps out memory. High swappiness means that the kernel swaps out memory aggressively; low swappiness means that the kernel tries not to use swap space at all. Decreasing the swappiness setting may therefore result in improved Directory Server performance, as the kernel holds more of the server process in memory longer before swapping it out. If the system is dedicated to a single Directory Server instance, set the swappiness to zero. If the system runs several heavy processes or multiple concurrent instances of Directory Server, consider testing Directory Server performance with various swappiness settings.
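
For example, on a system dedicated to one Directory Server instance, you might lower swappiness as follows. Making the change persistent across reboots requires a corresponding vm.swappiness entry in /etc/sysctl.conf.

# sysctl -w vm.swappiness=0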

Directory Server and Local Disk Space

Disk use and I/O capabilities can have great impact on performance. The disk subsystem can become an I/O bottleneck, especially for a deployment that supports large numbers of modifications. This section recommends ways to estimate overall disk capacity for a Directory Server instance.


Note –

Do not install Directory Server or any data it accesses on network disks.

Directory Server software does not support the use of network-attached storage through NFS, AFS, or SMB. All configuration, database, and index files must reside on local storage at all times, even after installation. Log files can be stored on network disks.


The following factors significantly affect the amount of local disk space needed:

When you have set up indexes, adjusted the database page size, and imported directory data, you can estimate the disk capacity required for the instance by reading the size of the instance-path/ contents, and adding the size of expected LDIF, backups, logs, and core files. Also estimate how much the sizes you measure are expected to grow, particularly during peak operation. Make sure you leave a couple of gigabytes of extra space for the errors log in case you need to increase the log level and size for debugging purposes.

In some cases, you can estimate the disk space required for directory data by extrapolation. If it is not practical to load Directory Server with as much data as you expect in production, extrapolate from smaller sets of sample data as suggested in Making Sample Directory Data. When the amount of directory data you use is smaller than in production, you must extrapolate for other measurements, too.

The following factors determine how fast the local disk must be:

Disks used should not be saturated under normal operating circumstances. You can use tools such as the Solaris iostat command to isolate potential I/O bottlenecks.

To increase disk throughput, distribute files across disk subsystems. Consider providing dedicated disk subsystems for transaction logs (dsconf set-server-prop db-log-path:/transaction/log/path), databases (dsconf create-suffix --db-path /suffix/database/path suffix-name), and log files (dsconf set-log-prop path:/log/file/path). In addition, consider putting database cache files on a memory-based file system such as a Solaris tmpfs file system, where files are swapped to disk only if available memory is exhausted (for example, dsconf set-server-prop db-env-path:/tmp). If you put database cache files on a memory-based file system, make sure the system does not run out of space to keep that entire file system in memory.

To further increase throughput, use multiple disks in a RAID configuration. Large, non-volatile I/O buffers and high-performance disk subsystems such as those offered in Sun StorEdge products can greatly enhance Directory Server performance and uptime. On Solaris 10 systems, using ZFS can also improve performance.

Directory Server and Network Connectivity

Directory Server is a network-intensive application. You can estimate theoretical maximum throughput using the following formula. Notice that this formula does not account for replication traffic.

max. throughput = max. entries returned/second x average entry size

Imagine that a Directory Server must respond to a peak of 5000 searches per second and that the server returns one entry per search. The entries have an average size of 2000 bytes. The theoretical maximum throughput would be 10 megabytes per second, or 80 megabits per second, not counting replication. 80 megabits per second is likely to be more than a single 100-megabit Ethernet adapter can provide. To improve network availability for a Directory Server instance, equip the system with a faster connection, or with multiple network interfaces. Directory Server can listen on multiple network interfaces within the same process.


Note –

The preceding example assumes that the client application requests all attributes when reading or searching the directory. Generally, you should design client applications so that they request only the required attributes.


If you plan multi-master replication over a wide area network, test your configuration to make sure the connection provides sufficient throughput with minimum latency and near-zero packet loss. High latency and packet loss both slow replication. In addition, avoid a topology where replication traffic goes through a load balancer.

Limiting Directory Server Resources Available to Clients

The default configuration of Directory Server can allow client applications to use more Directory Server resources than are required.

The following uses of resources can hurt directory performance:

In some deployment situations, you might not be able to modify the default configuration. For deployments where you cannot tune Directory Server, use Directory Proxy Server to limit resources and to protect against denial of service attacks.

In some deployment situations, one instance of Directory Server must support client applications, such as messaging servers, and directory clients such as user mail applications. In such situations, consider using bind DN based resource limits to raise individual limits for directory intensive applications. The limits for an individual account can be adjusted by setting the attributes nsSizeLimit, nsTimeLimit, nsLookThroughLimit, and nsIdleTimeout on the individual entry. For information about how to control resource limits for individual accounts, see Setting Resource Limits For Each Client Account in Oracle Fusion Middleware Administration Guide for Oracle Directory Server Enterprise Edition.
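For example, the following sketch raises the limits for a hypothetical directory-intensive application account, uid=mail-svc,ou=Special Users,dc=example,dc=com. The account DN, port, and values are illustrative only.

$ ldapmodify -p 1389 -D cn=Directory\ Manager -w -
dn: uid=mail-svc,ou=Special Users,dc=example,dc=com
changetype: modify
add: nsSizeLimit
nsSizeLimit: 100000
-
add: nsLookThroughLimit
nsLookThroughLimit: 500000
-
add: nsTimeLimit
nsTimeLimit: 7200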

Table 6–1 describes the parameters that set the global values for resource limits. The limits in Table 6–1 do not apply to the Directory Manager user. Therefore, ensure that client applications do not connect as the Directory Manager user. An example of adjusting these settings follows the table.

Table 6–1 Tuning Recommendations For Resources Devoted to Client Applications

Tuning Parameter 

Description 

Server property  

idle-timeout

Sets the time in seconds after which Directory Server closes an idle client connection. Here idle means that the connection remains open, yet no operations are requested. By default, no time limit is set.

You set this server property with the dsconf set-server-prop command.

Some applications, such as messaging servers, may open a pool of connections that remain idle when traffic is low, but that should not be closed. Ideally, you might dedicate a replica to support the application in this case. If that is not possible, consider bind DN based individual limits. 

In any case, set this value high enough not to close connections that other applications expect to remain open, but set it low enough that connections cannot be left idle abusively. Consider setting it to 7200 seconds, which is 2 hours, for example. 

Attribute  

nsslapd-ioblocktimeout on dn: cn=config

Sets the time in milliseconds after which Directory Server closes a stalled client connection. Here stalled means that the server is blocked either sending output to the client or reading input from the client.

You set this attribute with the ldapmodify command.

For Directory Server instances particularly exposed to denial of service attacks, consider lowering this value from the default of 1,800,000 milliseconds, which is 30 minutes. 

Server property  

look-through-limit

Sets the maximum number of candidate entries checked for matches during a search. 

You set this server property with the dsconf set-server-prop command.

Some applications, such as messaging servers, may need to search the entire directory. Ideally, you might dedicate a replica to support the application in this case. If that is not possible, consider bind DN based, individual limits. 

In any case, consider lowering this value from the default of 5000 entries, but not below the threshold value of search-size-limit.

Attribute  

nsslapd-maxbersize on dn: cn=config

Sets the maximum size in bytes for an incoming ASN.1 message encoded according to Basic Encoding Rules, BER. Directory Server rejects requests to add entries larger than this limit. 

You set this attribute with the ldapmodify command.

If you are confident you can accurately anticipate maximum entry size for your directory data, consider changing this value from the default of 2097152, which is 2 MB, to the size of the largest expected directory entry. 

The next largest size limit for an update is the size of the transaction log file, nsslapd-db-logfile-size, which by default is 10 MB.

Server property 

max-threads-per-connection-count

Sets the maximum number of threads per client connection. 

You set this server property with the dsconf set-server-prop command.

Some applications, such as messaging servers, may open a pool of connections and may issue many requests on each connection. Ideally, you might dedicate a replica to support the application in this case. If that is not possible, consider bind DN based, individual limits. 

If you anticipate that some applications may perform many requests per connection, consider increasing this value from the default of 5, but do not specify more than 10 threads per connection. 

Server property  

search-size-limit

Sets the maximum number of entries Directory Server returns in response to a search request. 

You set this server property with the dsconf set-server-prop command.

Some applications, such as messaging servers, may need to search the entire directory. Ideally, you might dedicate a replica to support the application in this case. If that is not possible, consider bind DN based, individual limits. 

In any case, consider lowering this value from the default of 2000 entries. 

Server property  

search-time-limit

Sets the maximum number of seconds Directory Server allows for handling a search request. 

You set this server property with the dsconf set-server-prop command.

Some applications, such as messaging servers, may need to perform very large searches. Ideally, you might dedicate a replica to support the application in this case. If that is not possible, consider bind DN based, individual limits. 

In any case, set this value as low as you can and still meet deployment requirements. The default value of 3600 seconds, which is 1 hour, is larger than necessary for many deployments. Consider using 600 seconds, which is 10 minutes, as a starting point for optimization tests. 
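As an illustration of the two mechanisms used throughout Table 6–1, the following sketch tightens two global limits with the dsconf set-server-prop command and lowers the stalled-connection timeout with the ldapmodify command. The values and port are examples only; choose values that match your own deployment requirements.

$ dsconf set-server-prop idle-timeout:7200
$ dsconf set-server-prop search-time-limit:600
$ ldapmodify -p 1389 -D cn=Directory\ Manager -w -
dn: cn=config
changetype: modify
replace: nsslapd-ioblocktimeout
nsslapd-ioblocktimeout: 600000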

Limiting System Resources Used By Directory Server

Table 6–2 describes the parameters that can be used to tune how a Directory Server instance uses system and network resources.

Table 6–2 Tuning Recommendations For System Resources

Tuning Parameter 

Description 

Attribute 

nsslapd-listenhost on dn: cn=config

Sets the hostname for the IP interface on which Directory Server listens. This attribute is multivalued. 

You set this attribute with the ldapmodify command.

Default behavior is to listen on all interfaces. The default behavior is suited to high-volume deployments that use redundant network interfaces for availability and throughput. 

Consider setting this value when deploying on a multihomed system, or when listening only for IPv4 or IPv6 traffic on a system supporting each protocol through a separate interface. Consider setting nsslapd-securelistenhost when using SSL.

Server property 

file-descriptor-count

Sets the maximum number of file descriptors Directory Server attempts to use. 

You set this server property with the dsconf set-server-prop command.

The default value is the maximum number of file descriptors allowed for a process on the system at the time when the Directory Server instance is created. The maximum value corresponds to the maximum number of file descriptors allowed for a process on the system. Refer to your operating system documentation for details. 

Directory Server uses file descriptors to handle client connections, and to maintain files internally. If the error log indicates Directory Server sometimes stops listening for new connections because not enough file descriptors are available, increasing the value of this attribute may increase the number of client connections Directory Server can handle simultaneously. 

If you have increased the number of file descriptors available on the system, set the value of this attribute accordingly. The value of this property should be less than or equal to the maximum number of file descriptors available on the system. 

Attribute 

nsslapd-nagle on dn: cn=config

Sets whether to delay sending of TCP packets at the socket level. 

You set this attribute with the ldapmodify command.

Consider setting this to on if you need to reduce network traffic.

Attribute 

nsslapd-reservedescriptors on dn: cn=config

Sets the number of file descriptors Directory Server maintains to manage indexing, replication and other internal processing. Such file descriptors become unavailable to handle client connections.

You set this attribute with the ldapmodify command.

Consider increasing the value of this attribute from the default of 64 if all of the following are true.

  • Directory Server replicates to more than 10 consumers or Directory Server maintains more than 30 index files.

  • Directory Server handles a large number of client connections.

  • Messages in the error log suggest Directory Server is running out of file descriptors for operations not related to client connections.

Notice that as the number of reserved file descriptors increases, the number of file descriptors available to handle client connections decreases. If you increase the value of this attribute, consider increasing the number of file descriptors available on the system, and increasing the value of file-descriptor-count.

If you decide to change this attribute, for a first estimate of the number of file descriptors to reserve, try setting the value of nsslapd-reservedescriptors according to the following formula.

20 + 
4 * (number of databases) +
 (total number of indexes) + 
(value of nsoperationconnectionslimit) * 
(number of chaining backends) + 
ReplDescriptors + 
PTADescriptors + 
SSLDescriptors

Here, ReplDescriptors is the number of supplier replicas plus 8 if replication is used. PTADescriptors is 3 if the Pass Through Authentication, PTA, plug-in is enabled, and 0 otherwise. SSLDescriptors is 5 if SSL is used, and 0 otherwise. A worked example of this formula follows the table.

The number of databases is the same as the number of suffixes for the instance, unless the instance is configured to use more than one database per suffix. Verify estimates through empirical testing. 

Attribute 

nsslapd-securelistenhost on dn: cn=config

Sets the hostname for the IP interface on which Directory Server listens for SSL connections. This attribute is multivalued. 

You set this attribute with the ldapmodify command.

Default behavior is to listen on all interfaces. Consider this attribute in the same way as nsslapd-listenhost.

Server property 

max-thread-count

Sets the number of threads Directory Server uses. 

You set this server property with the dsconf set-server-prop command.

Consider adjusting the value of this property if any of the following are true.

  • Client applications perform many simultaneous, time-consuming operations such as updates or complex searches.

  • Directory Server supports many simultaneous client connections.

Multiprocessor systems can sustain larger thread pools than single processor systems. As a first estimate when optimizing the value of this attribute, use two times the number of processors or 20 plus the number of simultaneous updates. 

Consider also adjusting the maximum number of threads per client connection, max-threads-per-connection-count. The maximum number of these threads handling client connections cannot exceed the maximum number of file descriptors available on the system. In some cases, it may prove useful to reduce, rather than increase, the value of this attribute.

Verify estimates through empirical testing. Results depend not only on the particular deployment situation but also on the underlying system. 
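As a hypothetical illustration of the nsslapd-reservedescriptors formula shown earlier in this table, consider an instance with 4 databases, 40 indexes in total, an nsoperationconnectionslimit of 10, 2 chaining backends, 4 supplier replicas, the PTA plug-in enabled, and SSL in use. All of these figures are invented for the illustration.

20 + (4 x 4) + 40 + (10 x 2) + (4 + 8) + 3 + 5 = 116

A first estimate would therefore be to reserve about 116 file descriptors, and then to verify that estimate through empirical testing.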

Operating System Tuning For Directory Server

Default system settings do not necessarily result in top directory service performance. This section addresses how to tune the operating system for top performance.

Operating System Version and Patch Support

See Oracle Fusion Middleware Release Notes for Oracle Directory Server Enterprise Edition for an updated list of supported operating systems.

To maintain overall system security and ensure proper Directory Server operation, install the latest recommended system patches, service packs, or fixes. See Oracle Fusion Middleware Release Notes for Oracle Directory Server Enterprise Edition for an updated list of the latest patches to apply for your platform.

Basic Security Checks

The recommendations in this section do not eliminate all risk. Instead, the recommendations are intended as a short checklist to help you limit typical security risks.

Accurate System Clock Time

Ensure the system clock is reasonably in sync with other systems. Good clock synchronization facilitates replication. Good synchronization also facilitates correlation of date and time stamps in log files between systems. Consider using a Network Time Protocol, NTP, client to set the correct system time.
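For example, on a Solaris 10 system you can enable the bundled NTP client after creating an appropriate /etc/inet/ntp.conf file. The service name shown is the standard Solaris 10 FMRI; other platforms use different mechanisms.

# svcadm enable svc:/network/ntp:default
# svcs ntp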

Restart When System Reboots

You can enable a server instance to restart at system boot time by using the dsadm command. For example, use the dsadm enable-service command on Solaris 10 and Windows systems. On other systems, use the dsadm autostart command. If you did not install from native packages, refer to your operating system documentation for help ensuring the server starts at system boot time.
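For example, with a Directory Server instance at the hypothetical path /local/dsInst, the commands might look like the following sketch. The first command applies to Solaris 10 and Windows and typically requires elevated privileges; the second applies to other platforms. Check the dsadm(1M) man page for the exact options on your platform.

# dsadm enable-service /local/dsInst
$ dsadm autostart /local/dsInst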

When possible, stop Directory Server with the dsadm command, or from DSCC. If the Directory Server is stopped abruptly during system shutdown, there is no way to guarantee that all data has been written to disk correctly. When Directory Server restarts, it must therefore verify the database integrity. This process can take some time.

Furthermore, consider using a logging option with your file system. File system logging generally improves write performance and decreases the time required to perform a file system check. The system must check the file system when it is not cleanly unmounted, as happens during a crash. Also, consider using RAID for storage.

System-Specific Tuning With The idsktune Command

The idsktune(1M) utility that is provided with the product can help you diagnose basic system configuration shortcomings. The utility offers recommendations for tuning the system to support high performance directory services. The utility does not actually implement any of the recommendations. The recommendations should be implemented by a qualified system administrator.

When you run the utility as root, idsktune gathers information about the system. The utility displays notices, warnings, and errors with recommended corrective actions. The idsktune command checks the following.


Tip –

Fix at minimum all ERROR conditions before installing Directory Server software on a system intended for production use.


Individual deployment requirements can exceed these minimums. You can provide more resources than those identified as minimum system requirements by the idsktune utility.

Consider local network conditions and other applications before implementing specific recommendations. Refer to the operating system documentation for additional tips on tuning network settings.

File Descriptor Settings

Directory Server uses file descriptors when handling concurrent client connections. A low maximum number of file descriptors that are available for the server process can thus limit the number of concurrent connections. Recommendations that concern the number of file descriptors therefore relate to the number of concurrent connections Directory Server can handle.

On Solaris systems, the number of file descriptors available is configured through the rlim_fd_max parameter. Refer to the operating system documentation for further instructions on modifying the number of available file descriptors.
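For example, on a Solaris system you might raise the limit by adding lines such as the following to /etc/system and then rebooting. The value 65536 is illustrative; on Solaris 10 you can also use resource controls (projects) instead of /etc/system.

set rlim_fd_max=65536
set rlim_fd_cur=65536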

Transmission Control Protocol (TCP) Settings

Specific network settings depend on the platform. On some systems, you can enhance Directory Server performance by modifying TCP settings.


Note –

First deploy your directory service, then consider tuning these parameters, if necessary.


This section discusses the reasoning behind idsktune recommendations that concern TCP settings, and provides a method for tuning these settings on Solaris 10 systems.

Inactive Connections

Some systems allow you to configure the interval between transmissions of keepalive packets. This setting determines how long a TCP connection is maintained while inactive and potentially disconnected. When set too high, the keepalive interval can cause the system to waste resources keeping connections open for clients that have already disconnected. When set too low, however, it can cause the system to drop connections during transient network outages.

For most deployments, set this parameter to a value of 600 seconds, that is, 600,000 milliseconds, or 10 minutes. This value allows more concurrent connections to Directory Server.

On Solaris systems, this time interval is configured through the tcp_keepalive_interval parameter.
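For example, you can check and change the value with the ndd command, as in the following sketch. The change does not persist across reboots unless it is reapplied at boot time.

# ndd -get /dev/tcp tcp_keepalive_interval
# ndd -set /dev/tcp tcp_keepalive_interval 600000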

Outgoing Connections

Some systems allow you to configure how long the system waits for an outgoing connection to be established. When this value is set too high, outgoing connections to destination servers that are not responding quickly, such as replicas, can cause long delays. For intranet deployments on fast, reliable networks, you can set this parameter to 10 seconds to improve performance. Do not, however, use such a low value on slow or unreliable networks, or on WAN connections.

On Solaris systems, this time interval is configured through the tcp_ip_abort_cinterval parameter.

Retransmission Timeout

Some systems allow you to configure the initial time interval between retransmissions of packets. This setting affects how long the system waits before retransmitting an unacknowledged packet. When set too high, clients can be kept waiting on lost packets. For intranet deployments on fast, reliable networks, you can set this parameter to 500 milliseconds to improve performance. Do not use such a low value on networks with round-trip times of more than 250 milliseconds.

On Solaris systems, this time interval is configured through the tcp_rexmit_interval_initial parameter.

Sequence Numbers

Some systems allow you to configure how the system handles initial sequence number generation. For extranet and Internet deployments, set this parameter so initial sequence number generation is based on RFC 1948 to prevent sequence number attacks. In such environments, other TCP tuning settings mentioned here are not useful.

On Solaris systems, this behavior is configured through the tcp_strong_iss parameter.

Tuning TCP Settings on Solaris 10 Systems

On Solaris 10 systems, the simplest way to tune TCP settings is to create a simple SMF service as follows:
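The full SMF example is not reproduced here. As a minimal sketch of the approach, place the ndd commands in a method script that SMF runs once at boot. The script path, service name, and parameter values below are illustrative only; include only the settings that apply to your environment.

#!/sbin/sh
# Hypothetical method script: /lib/svc/method/tcp-tuning
ndd -set /dev/tcp tcp_keepalive_interval 600000
ndd -set /dev/tcp tcp_ip_abort_cinterval 10000
ndd -set /dev/tcp tcp_rexmit_interval_initial 500
ndd -set /dev/tcp tcp_strong_iss 2
exit 0

You would then import a corresponding SMF manifest with the svccfg import command and enable the service with the svcadm enable command so that the script runs at each boot.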

Physical Capabilities of Directory Server

The following physical capabilities of Directory Server determine its scalability:

Other Tips to Improve Overall Performance

The following tips help to improve the overall performance of the system:

Tuning Cache Settings

This section provides recommendations for setting database and entry cache sizes. It does not cover import cache sizes. The recommendations here pertain to maximizing either search rate or modify rate, not both at once.

This section covers the following topics:

Basic Tuning Recommendations

This section provides basic recommendations for maximizing either the search rate or the modification rate achieved by Directory Server. Set cache sizes according to the following recommendations:

For Maximum Search Rate (Searches Only)

If the directory data do not fit into available physical memory, or only just fit with no extra room to spare, set cache sizes to their default values, 32M for db-cache-size, 10M for entry-cache-size, and allow the server to use as much of the operating system's file system cache as possible. Run tests to correctly dimension the sizes based on your throughput.

If the directory data fit into available physical memory with physical memory to spare, allocate memory to the entry cache until either the entry cache is full or, on a 32–bit system, the entry cache reaches maximum size. Then allocate memory to the database cache until it is full or reaches maximum size.

See Configuring Memory in Oracle Fusion Middleware Administration Guide for Oracle Directory Server Enterprise Edition for instructions on setting cache sizes.

For Maximum Modification Rate (Modifications Only)

If the directory data do not fit into available physical memory, or only just fit with no extra room to spare, set the entry cache size to its default value of 10M, and allow the server to use as much of the operating system's file system cache as possible. Keeping some database cache available ensures that modifications remain cached between database checkpoints.

If the directory data fit into available physical memory with physical memory to spare, allocate memory to the entry cache until either the entry cache is full or, on a 32–bit system, the entry cache reaches maximum size. Then allocate memory to the database cache until it is full or reaches maximum size.

See Configuring Memory in Oracle Fusion Middleware Administration Guide for Oracle Directory Server Enterprise Edition for instructions on setting cache sizes.

Small, Medium, and Large Data Sets

A working set refers to the data actually pulled into memory so that the server can respond to client applications. The corresponding data set is the set of directory entries that are being used as a result of client traffic. The data set may include every entry in the directory, or it may be composed of a smaller number of entries, such as the entries corresponding to people in a time zone where users are currently active.

We define three data set sizes, based on how much of the directory data set fits into available physical memory:

Small

The data set fits entirely into physical memory with fully-loaded database and entry caches.

Medium

The data set fits in physical memory, and extra physical memory can be dedicated to entry cache.

Large

The data set is too large to fit completely in available physical memory.

The ideal case is of course the small data set. If your data set is small, set database cache size and entry cache size such that all entries fit in both the database cache and the entry cache.

The following sections provide recommendations for medium and large data sets where the server performs either all searches or all modify operations.

Optimum Search Performance (Searches Only)

Figure 6–1 shows search performance on a hypothetical system. As expected, Directory Server offers top search performance for a given system configuration when the whole data set fits into memory.

Figure 6–1 Search Performance

Performance improves as more of the data set fits into memory.

For large data sets better performance has been observed when database cache and entry cache are set to their minimum sizes and available memory is left to the operating system for use in allocating file system cache. As shown, performance improves when more of the data set fits into the file system cache.

For medium data sets better performance has been observed when the file system cache holds the whole data set, and extra physical memory available is devoted to entry cache. As shown, performance improves when more of the medium data set fits in entry cache.

Optimum Modify Performance (Modifications Only)

Figure 6–2 shows modify performance on a hypothetical system. As expected, Directory Server offers top modify performance for a given system configuration when the whole data set fits into memory.

Figure 6–2 Modify Performance

Performance improves as more of the data set fits into memory.

For large data sets better performance has been observed when database cache and entry cache are set to their minimum sizes and available memory is left to the operating system for use in allocating file system cache. As shown, performance improves when more of the data set fits into the file system cache.

For medium data sets, modify performance reaches its maximum when all entries fit into the file system cache. As suggested in Basic Tuning Recommendations, keeping some database cache available ensures that modifications remain cached between database checkpoints.

Tuning Indexes for Performance

The use of indexes can enhance performance by reducing the time taken to perform a search. However, indexes also have an associated cost. When an entry is updated, the indexes that refer to that entry must also be updated. The more an entry is indexed, the more resources are required to update the indexes. In addition, indexes take up disk space and memory.

When you design indexes, ensure that you offset the benefit of faster searches against the associated costs of the index. Maintaining useful indexes is good practice; maintaining unused indexes for attributes on which clients rarely search is wasteful.

You can optimize performance of searches in the following ways:

To prevent Directory Server from performing searches on non-indexed entries, set the require-index-enabled suffix property for the suffix.

To limit the number of values per index key for a given search you can set an index list threshold. If the number of entries in the list for a search key exceeds the index list threshold, an unindexed search is performed. The threshold can be set for an entire server instance, for an entire suffix, and for an individual attribute type. You can also set individual thresholds for equality, presence, and substring indexes.

For a detailed procedure on how to change the index list threshold, see Changing the Index List Threshold in Oracle Fusion Middleware Administration Guide for Oracle Directory Server Enterprise Edition. This procedure modifies the all-ids-threshold configuration property.
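For example, the following sketch prevents unindexed searches on the suffix and sets the index list threshold both server-wide and for a single suffix. The value 20,000 is illustrative, and the sketch assumes that all-ids-threshold is exposed at both levels, as the referenced procedure describes. After changing the threshold, regenerate the affected indexes, for example with the dsconf reindex command, so that the new value takes effect.

$ dsconf set-suffix-prop dc=example,dc=com require-index-enabled:on
$ dsconf set-server-prop all-ids-threshold:20000
$ dsconf set-suffix-prop dc=example,dc=com all-ids-threshold:20000
$ dsconf reindex dc=example,dc=com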

The global value of all-ids-threshold for the server instance should be about 5% of the total number of entries in the directory. For example, the default value of 4000 is generally right for instances of Directory Server that handle 80,000 entries or fewer. You should avoid setting the threshold above 50,000, even for very large deployments. However, you might set all-ids-threshold to a value other than the 5% guideline in the following situations:

You should limit the number of unindexed searches that are performed. Use the logconv utility provided with the Directory Server Resource Kit to examine the access logs for evidence of frequent unindexed searches. For more information, see the logconv(1) man page.

Basic Directory Server Sizing Example: Disk and Memory Requirements

This section provides an example that shows initial steps in sizing Directory Server disk and memory requirements for deployment. The system used for this example was selected because it had sufficient processing power and memory to complete the sizing tasks quickly. It does not necessarily represent a recommended system for production use. You can use it, however, to gain insight into how much memory and disk space might be required for production systems.

System Characteristics

The following system information was observed using the Solaris Management Console (smc).

For this example, the system was dedicated to Directory Server. No other user was logged in, and only the default system processes were running.

Preparing a Directory Server Instance

Unpack the zip distribution to install Directory Server Enterprise Edition software on local disk space.

For detailed information, see Installing Directory Server Enterprise Edition Using Zip Distribution in Oracle Fusion Middleware Installation Guide for Oracle Directory Server Enterprise Edition.

For convenience, set environment variables as shown.


$ export PATH=/local/dsee7/bin:${PATH}
$ export DIRSERV_PORT=1389
$ export LDAP_ADMIN_PWF=~/.pwd

After installing the software and setting environment variables, create a Directory Server instance using default ports for LDAP and LDAPS, respectively.


$ dsadm create -p 1389 -P 1636 /local/dsInst
Choose the Directory Manager password:
Confirm the Directory Manager password:
$ du -hs /local/dsInst
610K   /local/dsInst

Until you create a suffix, the Directory Server instance uses less than one megabyte of disk space.


$ dsadm start /local/dsInst
Server started: pid=8046
$ dsconf create-suffix dc=example,dc=com
Certificate "CN=hostname, CN=1636, CN=Directory Server,
 O=Sun Microsystems" presented by the server is not trusted.
Type "Y" to accept, "y" to accept just once, "n" to refuse, "d" for more
 details: Y
$ du -hs /local/dsInst
53M   /local/dsInst

For this example, make no additional changes to the default Directory Server configuration except those shown explicitly.

Populating the Suffix With 10,000 Sample Directory Entries

Using the makeldif command with the example files, you can create sample LDIF files from one kilobyte to one megabyte in size. See To Load Sample Data in Directory Server Instance in Oracle Fusion Middleware Administration Guide for Oracle Directory Server Enterprise Edition for an example showing how to use the makeldif command.

The entries in these files are smaller than you would expect in a real deployment.


$ du -h /var/tmp/*
 57M   /var/tmp/100k.ldif
 5.7M   /var/tmp/10k.ldif
 573M   /var/tmp/1M.ldif

An example entry from these files is shown in the following LDIF.

dn: uid=Aartjan.Aalders,ou=People,dc=example,dc=com
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: inetOrgPerson
givenName: Aartjan
sn: Aalders
cn: Aartjan Aalders
initials: AA
uid: Aartjan.Aalders
mail: Aartjan.Aalders@example.com
userPassword: trj49xeq
telephoneNumber: 935-748-6699
homePhone: 347-586-0252
pager: 906-399-8417
mobile: 452-898-9034
employeeNumber: 1000004
street: 64197 Broadway Street
l: Lawton
st: IN
postalCode: 57924
postalAddress: Aartjan Aalders$64197 Broadway Street$Lawton, IN  57924
description: This is the description for Aartjan Aalders.

Begin sizing by importing the content of 10k.ldif, which occupies 5.7 megabytes on disk.


$ dsadm stop /local/dsInst
Server stopped
$ dsadm import -i /local/dsInst /var/tmp/10k.ldif dc=example,dc=com

With default indexing, the content of 10k.ldif increases the size of the instance files from 53 megabytes to 72 megabytes, an increase of 19 megabytes.


$ du -hs /local/dsInst
 72M   /local/dsInst

If you index five more attributes, size increases by about seven megabytes.


$ dsconf create-index dc=example,dc=com employeeNumber street st \
 postalCode description
$ dsconf reindex dc=example,dc=com
…
## example: Finished indexing.

Task completed (slapd exit code: 0).
$ du -hs /local/dsInst
 79M   /local/dsInst

With the default cache settings, and with nothing yet loaded from the suffix into the entry cache, the server process occupies approximately 170 megabytes of memory, with a heap size of about 56 megabytes.


$ dsadm start /local/dsInst
Server started: pid=8482
$ pmap -x 8482
…
         Address     Kbytes        RSS       Anon     Locked Mode   Mapped File
0000000000437000      61348      55632      55380          - rw---    [ heap ]
…
---------------- ---------- ---------- ---------- ----------
        total Kb     178444     172604      76532          -

As you then prime the cache and examine output from the pmap command again, the heap grows by about 10 megabytes, and so does the total size of the process.


$ ldapsearch -D cn=Directory\ Manager -w - -p 1389 -b dc=example,dc=com \
 objectclass=\* > /dev/null
Enter bind password:
$ pmap -x 8482
…
         Address     Kbytes        RSS       Anon     Locked Mode   Mapped File
…
0000000000437000      70564      65268      65024          - rw---    [ heap ]
…
---------------- ---------- ---------- ---------- ----------
        total Kb     187692     182272      86224          -

The numbers are comparable for default indexing.


$ dsconf delete-index dc=example,dc=com employeeNumber street st \
 postalCode description
$ dsconf reindex dc=example,dc=com
…
## example: Finished indexing.

Task completed (slapd exit code: 0).
$ dsadm stop /local/dsInst
 Server stopped
$ dsadm start /local/dsInst
 Server started: pid=8541
$ ldapsearch -D cn=Directory\ Manager -w - -p 1389 -b dc=example,dc=com \
 objectclass=\* > /dev/null
Enter bind password:
$ pmap -x 8541
…
         Address     Kbytes        RSS       Anon     Locked Mode   Mapped File
…
0000000000437000      70564      65248      65004          - rw---    [ heap ]
…
---------------- ---------- ---------- ---------- ----------
        total Kb     187680     182240      86192          -

For only 10,000 entries, do not change the default cache sizes.


$ dsconf get-server-prop | grep cache
db-cache-size                      :  32M
import-cache-size                  :  64M
$ dsconf get-suffix-prop dc=example,dc=com | grep entry-cache-size
entry-cache-size                   :  10M

The small default entry cache was no doubt filled completely after priming, even with only 10,000 entries. To see the size for a full entry cache, set a large entry cache size, import the data again, and prime the cache.


$ dsconf set-suffix-prop dc=example,dc=com entry-cache-size:2G
$ dsadm stop /local/dsInst
Server stopped
$ dsadm import -i /local/dsInst /var/tmp/10k.ldif dc=example,dc=com
…
$ dsadm start /local/dsInst
Server started: pid=8806
$ ldapsearch -D cn=Directory\ Manager -w - -p 1389 -b dc=example,dc=com \
 objectclass=\* > /dev/null
Enter bind password:
$ pmap -x 8806
…
         Address     Kbytes        RSS       Anon     Locked Mode   Mapped File
…
0000000000437000     116644     109996     109780          - rw---    [ heap ]

Here 10,000 entries occupy approximately 55 megabytes of entry cache memory (110 - 55).

Populating the Suffix With 100,000 Sample Directory Entries

As you move to 100,000 entries, you have more directory data to fit into database and entry caches. Initially, import 100,000 entries and examine the size required on disk for this volume of directory data.


$ dsadm import -i /local/dsInst /var/tmp/100k.ldif dc=example,dc=com
…
$ du -hs /local/dsInst
 196M   /local/dsInst

Directory data contained in the database for our example suffix, dc=example,dc=com, now occupy about 142 megabytes.


$ du -hs /local/dsInst/db/example/
 142M   /local/dsInst/db/example

You can increase the size of the database cache to hold this content. If you expect the volume of directory data to grow over time, you can set the database cache larger than currently necessary. You can also set the entry cache size larger than necessary. Entry cache grows as the server responds to client requests, unlike the database cache, which is allocated at startup.


$ dsconf set-server-prop db-cache-size:200M
$ dsconf set-suffix-prop dc=example,dc=com entry-cache-size:2G

$ dsadm stop /local/dsInst
 Server stopped
$ dsadm start /local/dsInst
 Server started: pid=8640
$ pmap -x 8640
…
         Address     Kbytes        RSS       Anon     Locked Mode   Mapped File
…
0000000000437000      61348      55404      55148          - rw---    [ heap ]
…
---------------- ---------- ---------- ---------- ----------
        total Kb     491984     485736     174620          -

This shows that the server instance has a relatively small heap at startup, but that the database cache memory has already been allocated. Process size is nearing half a gigabyte.


$ ldapsearch -D cn=Directory\ Manager -w - -p 1389 -b dc=example,dc=com \
 objectclass=\* > /dev/null
Enter bind password:
$ pmap -x 8640
…
         Address     Kbytes        RSS       Anon     Locked Mode   Mapped File
…
0000000000437000     610212     604064     603840          - rw---    [ heap ]
…
---------------- ---------- ---------- ---------- ----------
        total Kb    1040880    1034428     723360          -

Heap size now reflects the entry cache being filled. It has increased by roughly 550 megabytes for 100,000 small directory entries, whose LDIF occupied 57 megabytes on disk.

With five extra indexes, the process size is about the same. The database cache size has not changed.


$ dsconf create-index dc=example,dc=com employeeNumber street st \
 postalCode description
$ dsadm stop /local/dsInst
 Server stopped
$ dsadm import -i /local/dsInst /var/tmp/100k.ldif dc=example,dc=com
…
$ dsadm start /local/dsInst
 Server started: pid=8762
$ ldapsearch -D cn=Directory\ Manager -w - -p 1389 -b dc=example,dc=com \
 objectclass=\* > /dev/null
Enter bind password:
$ pmap -x 8762
…
         Address     Kbytes        RSS       Anon     Locked Mode   Mapped File
…
0000000000437000     610212     603832     603612          - rw---    [ heap ]
…
---------------- ---------- ---------- ---------- ----------
        total Kb    1040876    1034192     723128          -

The database is somewhat larger, however. The additional indexes increased the size of the database from 142 megabytes to 163 megabytes.


$ du -hs /local/dsInst/db/example/
 163M   /local/dsInst/db/example

Populating the Suffix With 1,000,000 Sample Directory Entries

As you move from 100,000 entries to 1,000,000 entries, you no longer have enough space on a system with 4 gigabytes of physical memory to include all entries in the entry cache. You can begin by importing the data and examining the size it occupies on disk.


$ dsadm import -i /local/dsInst /var/tmp/1M.ldif dc=example,dc=com
…
$ du -hs /local/dsInst/db/example/
 1.3G   /local/dsInst/db/example

Assuming you expect approximately 25% growth in directory data size during the lifetime of the instance, set the database cache size to 1700 megabytes.


$ dsadm start /local/dsInst
Server started: pid=9060
$ dsconf set-server-prop db-cache-size:1700M
$ dsadm stop /local/dsInst
Server stopped
$ dsadm start /local/dsInst
Server started: pid=9118
$ pmap -x 9118
…
         Address     Kbytes        RSS       Anon     Locked Mode   Mapped File
…
0000000000437000      65508      55700      55452          - rw---    [ heap ]
…
---------------- ---------- ---------- ---------- ----------
        total Kb    1882448    1034180      76616          -

Given a database cache this large and only 4 gigabytes of physical memory, you cannot fit more than a fraction of entries into the entry cache for the suffix. Here, set entry cache size to one gigabyte, and then prime the cache to see the change in the process heap size.


$ dsconf set-suffix-prop dc=example,dc=com entry-cache-size:1G
$ ldapsearch -D cn=Directory\ Manager -w - -p 1389 -b dc=example,dc=com \
 objectclass=\* > /dev/null
Enter bind password:
$ pmap -x 9118
…
         Address     Kbytes        RSS       Anon     Locked Mode   Mapped File
…
0000000000437000    1016868    1009852    1009612          - rw---    [ heap ]
…
---------------- ---------- ---------- ---------- ----------
        total Kb    2883268    2477064    1080076          -

Total process size is over 2.8 gigabytes.


$ prstat -p 9118
   PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
  9118 myuser   2816M 2374M sleep   59    0   0:03:26 0.5% ns-slapd/42

Extrapolating from the earlier entry cache sizes, you could expect entry cache alone to use 5.5 to 6 gigabytes if you had enough physical memory.

Examining the directory database size with five additional indexes, you find adding indexes has increased the size of the database by about 200 megabytes.


$ dsconf create-index dc=example,dc=com employeeNumber street st \
 postalCode description
$ dsadm stop /local/dsInst
Server stopped
$ dsadm import -i /local/dsInst /var/tmp/1M.ldif dc=example,dc=com
…
$ du -hs /local/dsInst/db/example
 1.5G   /local/dsInst/db/example

Summary of Observations

Table 6–3 records what was observed in this example. It includes neither server process size, nor default database cache file size.


Note –

Your observations made through empirical testing for your deployment are likely to differ significantly from those shown here.


Table 6–3 Sizing Summary

Number of Entries   LDIF File Size   Disk with Default Indexes   Disk with Five Additional Indexes   Database Cache   Entry Cache

0 [1]               n/a              n/a                         n/a                                 n/a              n/a

10,000              8 MB             19 MB                       26 MB                               32 MB            55 MB

100,000             83 MB            142 MB                      163 MB                              200 MB           550 MB

1,000,000           800 MB           1300 MB                     1500 MB                             1700 MB [2]      n/a

[1] The suffix has been created, but is empty.
[2] With default indexing.

In an actual deployment, you may have significantly larger entries and more indexes. Do your own empirical testing and tuning before ordering hardware.

Chapter 7 Identifying Security Requirements

How you secure data in Directory Server Enterprise Edition has an impact on all other areas of design. This chapter describes how to analyze your security needs and explains how to design your directory service to meet those needs.

This chapter covers the following topics:

Security Threats

The most typical threats to directory security include the following:

Overview of Security Methods

A security policy must be strong enough to prevent sensitive information from being modified or retrieved by unauthorized users, yet simple enough to administer easily.

Directory Server Enterprise Edition provides the following security methods:

These security tools can be used in combination in your security design. You can also use other features of the directory, such as replication and data distribution, to support your security design.

Determining Authentication Methods

Directory Server Enterprise Edition supports the following authentication mechanisms:

The same authentication mechanism applies to all users, whether the users are people or LDAP-aware applications.

Apart from the authentication mechanisms described above, this section also includes the following information about authentication:

Anonymous Access

Anonymous access is the simplest form of directory access. Anonymous access makes data available to any directory user, regardless of whether the user has authenticated.

Anonymous access does not allow you to track who is performing searches or what kinds of searches are being performed, only that someone is performing searches. When you allow anonymous access, anyone who connects to your directory can access the data. Therefore, if you allow anonymous access to data and attempt to block a specific user or group from that data, that user can still access the data simply by binding to the directory anonymously.

You can restrict the privileges of anonymous access. Usually, directory administrators allow anonymous access only for read, search, and compare privileges. You can also limit access to a subset of attributes that contain general information such as names, telephone numbers, and email addresses. Do not allow anonymous access to sensitive data, such as government identification numbers, home telephone numbers and addresses, and salary information.

Anonymous access to the root DSE (base DN "") is required. Access to the root DSE enables applications to discover the capabilities of the server, the supported security mechanisms, and the supported suffixes.

Simple Password Authentication

If anonymous access is not set up, a client must authenticate to Directory Server to access the directory contents. With simple password authentication, a client authenticates to the server by providing a simple, reusable password.

The client authenticates to Directory Server through a bind operation in which the client provides a distinguished name and credentials. The server locates the entry that corresponds to the client DN, then checks whether the client's password matches the value stored with the entry. If the password matches, the server authenticates the client. If the password does not match, the authentication operation fails and the client receives an error message.


Note –

The drawback of simple password authentication is that the password is transmitted in clear text, which can compromise security. If a rogue user is listening, that user can impersonate an authorized user.


Simple password authentication offers an easy way of authenticating users. However, you need to restrict the use of simple password authentication to your organization’s intranet. This kind of authentication does not offer the level of security that is required for transmissions between business partners over an extranet or for transmissions with customers on the Internet.

Simple Password Authentication Over a Secure Connection

A secure connection uses encryption to make data unreadable to third parties while the data is sent over the network between Directory Server and its clients. Clients can establish secure connections in either of the following ways:

In either case, the server must have a security certificate, and the client must be configured to trust this certificate. The server sends its certificate to the client to perform server authentication, using public-key cryptography. This results in the client knowing that it is connected to the intended server and that the server is not being tampered with.

Then, for privacy, the client and server encrypt all data transmitted through the connection. The client sends the bind DN and password on the encrypted connection to authenticate the user. All further operations are performed with the identity of the user. The operations might also be performed with a proxy identity if the bind DN has proxy rights to other user identities. In all cases, the results of operations are encrypted when these results are returned to the client.

Certificate-Based Client Authentication

When establishing encrypted connections over SSL or TLS, you can also configure the server to require client authentication. The client must send its credentials to the server to confirm the identity of the user. The user's certificate, not the DN, is used to determine the bind DN. Client authentication protects against user impersonation and is the most secure type of connection.

Certificate-based client authentication operates at the SSL or TLS layer only. To use a certificate-based authentication ID with LDAP, you must use SASL EXTERNAL authentication after establishing the SSL connection.

You can configure certificate-based client authentication by using the dsconf set-server-prop command. See dsconf(1M) for more information.

SASL-Based Client Authentication

Client authentication during an SSL or TLS connection can also use the Simple Authentication and Security Layer (SASL), a generic security interface, to establish the identity of the client. Directory Server Enterprise Edition supports the following mechanisms through SASL:

For more information, see Using SASL DIGEST-MD5 in Clients in Oracle Fusion Middleware Administration Guide for Oracle Directory Server Enterprise Edition and Using Kerberos SASL GSSAPI in Clients in Oracle Fusion Middleware Administration Guide for Oracle Directory Server Enterprise Edition.

Preventing Authentication by Account Inactivation

You can temporarily prevent authentication by inactivating a user account or a set of accounts. When the account is inactivated, the user cannot bind to Directory Server, and authentication operations fail. For more information, see Manually Locking Accounts in Oracle Fusion Middleware Administration Guide for Oracle Directory Server Enterprise Edition.

Preventing Authentication by Using Global Account Lockout

In this version of Directory Server, authentication failures with a password are monitored and replicated. This enables rapid, global account lockout after a specified number of authentication attempts with an invalid password. Global account lockout is supported in any of the following cases:

Imagine a situation where all bind attempts are not directed to the same server, and the client application performs bind attempts on multiple servers faster than lockout data can be replicated. In the worst-case scenario, the client would be allowed the specified number of attempts on each server where the client attempted to bind. This situation would be unlikely if the client application were driven by input from a human user. However, an automated client built to attack a topology could exploit this deployment choice.

Prioritized replication can be used to minimize the impact of asynchronous replication latency on intrusion detection. However, you might require account lockout immediately after the specified number of failed bind attempts. In this situation, you must use Directory Proxy Server to route all bind attempts on a particular entry to the same master replica. For information about how to configure Directory Proxy Server to do this, see Operational Affinity Algorithm for Global Account Lockout in Oracle Fusion Middleware Reference for Oracle Directory Server Enterprise Edition.

To retain a strictly local lockout policy in a replicated topology, you must maintain compatibility with the 5.2 password policy. In this situation, the policy in effect must not be the default password policy. Local lockout can also be achieved by restricting binds to read-only consumers.

For detailed information about how global account lockout works, see Global Account Lockout in Oracle Fusion Middleware Reference for Oracle Directory Server Enterprise Edition.

External Authentication Mappings and Services

Directory Server provides user account host mapping, which associates a network user account with a Directory Server user account. This feature simplifies the management of both user accounts. Host mapping is required for users who are externally authenticated.

Proxy Authorization

Proxy authorization is a special form of access control. With proxy authorization, also called proxy authentication, an application is forced to use a specific user name and password combination to gain access to the server.

With proxy authorization, an administrator can request access to Directory Server by assuming the identity of a regular user. The administrator binds to the directory with his own credentials and is granted the rights of the regular user. This assumed identity is called the proxy user. The DN of that user is called the proxy DN. The proxy user is evaluated as a regular user. Access is denied if the proxy user entry is locked or inactivated or if the password has expired.

An advantage of the proxy mechanism is that you can enable an LDAP application to use a single bind to service multiple users who are accessing Directory Server. Instead of each user having to bind and authenticate, the client application binds to Directory Server and uses proxy rights.

For more information, see Chapter 6, Directory Server Access Control, in Oracle Fusion Middleware Administration Guide for Oracle Directory Server Enterprise Edition.

Designing Password Policies

A password policy is a set of rules that govern how passwords are administered in a system. Directory Server supports multiple password policies, as well as a default password policy.

Several elements of the password policy are configurable, enabling you to design a policy that suits the security requirements of your organization. Configuration of the password policy is described in Chapter 7, Directory Server Password Policy, in Oracle Fusion Middleware Administration Guide for Oracle Directory Server Enterprise Edition. The individual attributes available for configuring password policies are described in the pwpolicy(5dssd) man page.

This section is divided into the following topics:

Password Policy Options

The following password policy options are provided:

Password Policies in a Replicated Environment

Configuration information for the default password policy is not replicated. Instead, it is part of the server instance configuration. If you modify the default password policy, the same modifications must be made on each server in the topology. If you need a password policy that is replicated, you must define a specialized password policy under a part of the directory tree that is replicated.

All password information that is stored in the user entry is replicated. This information includes the current password, password history, password expiration dates and so forth.

Consider the following impact of password policies in a replicated environment:

Password Policy Migration

The Directory Server Enterprise Edition password policy configuration settings differ from the password policy configuration settings provided with the 5.2 version of Directory Server. If your topology includes servers that run different versions of Directory Server, see Password Policy in Oracle Fusion Middleware Upgrade and Migration Guide for Oracle Directory Server Enterprise Edition for information about how to migrate password policy settings.

Password Synchronization With Windows

Identity Synchronization for Windows synchronizes user account information, including passwords, between Directory Server and Windows. Both Windows Active Directory and Windows NT are supported. Identity Synchronization for Windows helps build a scalable and security-enriched password synchronization solution for small, medium, and large enterprises.

This solution provides the following:

For more information about using Identity Synchronization for Windows in your deployment, see the Sun Java System Identity Synchronization for Windows 6.0 Deployment Planning Guide.

Determining Encryption Methods

Encryption helps to protect data in transit, as well as stored data. This section describes the following methods of data encryption:

Securing Connections With SSL

Security design involves more than an authentication scheme for identifying users and an access control scheme for protecting information. You must also protect the integrity of information between servers and client applications while it is being sent over the network.

To provide secure communications over the network, you can use both the LDAP and DSML protocols over the Secure Sockets Layer (SSL). When SSL is configured and activated, clients connect to a dedicated secure port where all communications are encrypted after the SSL connection is established. Directory Server and Directory Proxy Server also support the Start Transport Layer Security (Start TLS) control. Start TLS allows the client to initiate an encrypted connection over the standard LDAP port.

For an overview of SSL and TLS in Directory Server, see Chapter 5, Directory Server Security, in Oracle Fusion Middleware Reference for Oracle Directory Server Enterprise Edition.

Encrypting Stored Attributes

Attribute encryption concerns the protection of stored data. This section describes the attribute encryption functionality, and covers the following topics:

What Is Attribute Encryption?

Directory Server Enterprise Edition provides various features to protect data at the access level, including password authentication, certificate-based authentication, SSL, and proxy authorization. However, the data stored in database files, backup files, and LDIF files must also be protected. The attribute encryption feature prevents users from accessing sensitive data while the data is stored.

Attribute encryption enables certain attributes to be stored in encrypted form. Attribute encryption is configured at the database level. Thus, after an attribute is encrypted, the attribute is encrypted in every entry in the database. Because attribute encryption occurs at the attribute level (not the entry level), the only way to encrypt an entire entry is to encrypt all of its attributes.

Attribute encryption also enables you to export data to another database in an encrypted format. The purpose of attribute encryption is to protect sensitive data only when the data is being stored or exported. Therefore, the encryption is always reversible. Encrypted attributes are decrypted when returned through search requests.
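As a hypothetical sketch only, encrypting the salary attribute for a suffix might look like the following dsconf invocation. The subcommand name, cipher name, host, port, and suffix shown here are assumptions; see the attribute encryption procedures in the Administration Guide and the dsconf(1M) man page for the exact syntax.

    # Hypothetical example: enable encryption of the salary attribute (verify syntax before use).
    dsconf create-encrypted-attr -h ds.example.com -p 389 dc=example,dc=com salary AES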

The following figure shows a user entry being added to the database, where attribute encryption has been configured to encrypt the salary attribute.

Figure 7–1 Attribute Encryption Logic

Figure shows attributes encrypted in the database.

Attribute Encryption Implementation

The attribute encryption feature supports a wide range of encryption algorithms. Portability across different platforms is ensured. As an additional security measure, attribute encryption uses the private key of the server’s SSL certificate to generate its own key. This key is then used to perform the encryption and decryption operations. To be able to encrypt attributes, a server must be running over SSL. The SSL certificate and its private key are stored securely in the database and protected by a password. This key database password is required to authenticate to the server. The server assumes that whoever has access to this key database password is authorized to export decrypted data.

Note that attribute encryption only protects stored attributes. If you are replicating these attributes, replication must be configured over SSL to protect the attributes during transport.

If you use attribute encryption, you cannot use the binary copy feature to initialize one server from another server.

Attribute Encryption and Performance

While attribute encryption offers increased data security, this feature does impact performance. Use attribute encryption only to encrypt particularly sensitive attributes.

Sensitive data can be accessed directly through index files. Thus, you must encrypt the index keys corresponding to the encrypted attributes, to ensure that the attributes are fully protected. Indexing already has a performance impact, without the added cost of encrypting index keys. Therefore, configure attribute encryption before data is imported or added to the database for the first time. This procedure ensures that encrypted attributes are indexed as such from the outset.

Designing Access Control With ACIs

Access control enables you to specify that certain clients have access to particular information, while other clients do not. You implement access control using one or more access control lists (ACLs). ACLs consist of a series of access control instructions (ACIs) that either allow or deny permissions to specified entries and their attributes. Permissions include the ability to read, write, search, proxy, add, delete, compare, import, and export.

By using an ACL, you can set permissions for the following:

In addition, you can set permissions for a specific user, for all users that belong to a group, or for all users of the directory. You can also define access for a network location, such as an IP address or a DNS name.
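For example, the following ACI values, shown as an LDIF modification with a placeholder suffix and group, illustrate the instruction format: the first allows users to modify their own password, and the second grants an assumed administrators group read, search, and compare access to the subtree.

    dn: dc=example,dc=com
    changetype: modify
    add: aci
    aci: (targetattr="userPassword")(version 3.0; acl "Self password write"; allow (write) userdn="ldap:///self";)
    aci: (targetattr="*")(version 3.0; acl "Admin group read"; allow (read,search,compare) groupdn="ldap:///cn=Administrators,ou=Groups,dc=example,dc=com";)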

This section provides an overview of the Directory Server access control mechanism. For detailed information about configuring access control and ACIs, see Chapter 6, Directory Server Access Control, in Oracle Fusion Middleware Administration Guide for Oracle Directory Server Enterprise Edition. For information about the architecture of the access control mechanism, see How Directory Server Provides Access Control in Oracle Fusion Middleware Reference for Oracle Directory Server Enterprise Edition.

Default ACIs

The default behavior of Directory Server is to deny access unless there is a specific ACI that grants access. Therefore, if no ACIs are defined, all access to the server is denied.

When you install Directory Server or when you add a new suffix, several default ACIs are defined automatically in the root DSE. These ACIs can be modified to suit your security requirements.

For details on the default ACIs and how to modify them, see How Directory Server Provides Access Control in Oracle Fusion Middleware Reference for Oracle Directory Server Enterprise Edition.

ACI Scope

Starting with version 6.x, Directory Server includes two major changes to ACI scope.

The change in ACI scope has implications for migration. If you are migrating to Directory Server 7.0 from a 5.2 version of Directory Server, see Changes to ACIs in Oracle Fusion Middleware Upgrade and Migration Guide for Oracle Directory Server Enterprise Edition.

Obtaining Effective Rights Information

The access control model provided by Directory Server can grant access to users through many different mechanisms. However, this flexibility can make your security policy fairly complex. Several parameters can define the security context of a user, including IP address, machine name, time of day, and authentication method.

To simplify the security policy, Directory Server enables you to request and list the effective access rights that a given user has to specified directory entries and attributes. For more information, see Viewing Effective Rights in Oracle Fusion Middleware Administration Guide for Oracle Directory Server Enterprise Edition.

Tips on Using ACIs

The following tips can simplify your directory security model and improve directory performance:

Designing Access Control With Connection Rules

Connection rules enable you to prevent selected clients from establishing connections to Directory Server. The purpose of connection rules is to prevent a denial-of-service attack caused by malicious or poorly designed clients that connect to Directory Server and flood the server with requests.

Connection rules are established at the TCP level by defining TCP wrappers. For more information about TCP wrappers, see Client-Host Access Control Through TCP Wrapping in Oracle Fusion Middleware Administration Guide for Oracle Directory Server Enterprise Edition.
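As an illustrative sketch only, TCP wrappers are typically driven by hosts.allow and hosts.deny files such as the following. The service name (assumed here to be ldap) and the client network are placeholders; the actual daemon name and file locations depend on how TCP wrapping is enabled for your instance, as described in the Administration Guide.

    # /etc/hosts.allow -- accept connections from the internal application network only.
    ldap: 192.168.10.0/255.255.255.0

    # /etc/hosts.deny -- reject all other clients.
    ldap: ALL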

Designing Access Control With Directory Proxy Server

Directory Proxy Server connection handlers provide a method of access control that enables you to classify incoming client connections. In this way, you can restrict the operations that can be performed based on how the connection has been classified.

You can use this functionality, for example, to restrict access to clients that connect from a specified IP address only. The following figure shows how you can use Directory Proxy Server connection handlers to deny write operations from specific IP addresses.

Figure 7–2 Directory Proxy Server Connection Handler Logic

Figure shows connection handlers used to grant write access to clients, based on IP address.

How Connection Handlers Work

A connection handler consists of a list of criteria and a list of policies. Directory Proxy Server determines a connection's class membership by matching the origination attributes of the connection with the criteria of the class. When the connection has been matched to a class, Directory Proxy Server applies the policies that are contained in that class to the connection.

Connection handler criteria can include the following:

The following policies can be associated with a connection handler:

For more information about Directory Proxy Server connection handlers and how to set them up, see Chapter 20, Connections Between Clients and Directory Proxy Server, in Oracle Fusion Middleware Reference for Oracle Directory Server Enterprise Edition.

Grouping Entries Securely

Roles and CoS require special consideration with regard to security.

Using Roles Securely

Not every role is suitable for use within a security context. When creating a role, consider how easily it can be assigned to and removed from an entry. Sometimes, users should be able to add themselves to or remove themselves from a role. However, in some security contexts such open roles are inappropriate. For more information, see Directory Server Roles in Oracle Fusion Middleware Reference for Oracle Directory Server Enterprise Edition.

Using CoS Securely

Access control for reading applies to both the real attributes and the virtual attributes of an entry. A virtual attribute generated by the Class of Service (CoS) mechanism is read like a normal attribute. Virtual attributes should therefore be given read protection in the same way. However, to make the CoS value secure, you must protect all of the sources of information the CoS value uses: the definition entries, the template entries, and the target entries. The same is true for update operations. Write access to each source of information must be controlled to protect the value that is generated from these sources. For more information, see Chapter 12, Directory Server Class of Service, in Oracle Fusion Middleware Reference for Oracle Directory Server Enterprise Edition.

Using Firewalls

Firewall technology is typically used to filter or block network traffic to and from an internal network. If LDAP requests are coming from outside a perimeter firewall, you need to specify what ports and protocols are allowed to pass through the firewall.

The ports and protocols that you specify depend on your directory architecture. As a general rule, the firewall must be configured to allow TCP and UDP connections on ports 389 and 636.

Host-based firewalls can be installed on the same server that is running Directory Server. The rules for host-based firewalls are similar to the rules for perimeter defense firewalls.
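The exact firewall configuration depends on your platform and security policy. As a minimal sketch on a Linux host that uses iptables (a tool that is not part of Directory Server Enterprise Edition), rules like the following would admit LDAP and LDAPS traffic while dropping other new inbound connections. The port numbers assume the defaults of 389 and 636.

    # Allow inbound LDAP (389) and LDAPS (636); adjust the ports to match your instances.
    iptables -A INPUT -p tcp --dport 389 -j ACCEPT
    iptables -A INPUT -p tcp --dport 636 -j ACCEPT

    # Drop any other new inbound connections.
    iptables -A INPUT -m state --state NEW -j DROP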

Running as Non-Root

You can create and run server instances as a non-root user. By running server instances as a non-root user, you limit any potential damage that an exploit could cause. Furthermore, servers running as non-root users are subject to access control mechanisms on the operating system.
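For example, an instance can be created and started directly by a non-root account, provided that the instance listens on non-privileged ports (above 1024). The ports and instance path below are placeholders, and the dsadm options are quoted from memory, so verify them against the dsadm(1M) man page.

    # Run as the non-root account that will own the instance.
    dsadm create -p 1389 -P 1636 /local/ds1
    dsadm start /local/ds1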

Other Security Resources

For more information about designing a secure directory, see the following resources:

Chapter 8 Identifying Administration and Monitoring Requirements

Directory Server Enterprise Edition administration has changed significantly since the 5.2 version of Directory Server. These changes are described in detail in the Oracle Fusion Middleware Administration Guide for Oracle Directory Server Enterprise Edition.

This chapter provides an overview of these changes and describes the administrative decisions that you must make in the planning phase of your deployment:

Directory Server Enterprise Edition Administration Model

Directory Server Enterprise Edition gives the administrator more control over instance creation and administration. This control is achieved by using two new commands, dsadm and dsconf. These commands provide all the functionality previously supplied by the directoryserver command plus additional functionality.

The dsadm command enables the administrator to create, start, and stop a Directory Server instance. This command combines all operations that require file system access to the Directory Server instance. The command must be run on the machine that hosts the instance. It does not perform any operation that requires LDAP access to the instance or access to an agent.

In this administration model, a Directory Server instance is no longer tied to a ServerRoot. Each Directory Server instance is a standalone directory that can be manipulated in the same manner as an ordinary standalone directory.

The dsconf command combines the administration operations that require write access to cn=config. The dsconf command is an LDAP client. It can only be executed on an active Directory Server instance. The command can be run remotely, enabling administrators to configure multiple instances from a single remote machine.

Directory Proxy Server provides two comparable commands, dpadm and dpconf. The dpadm command enables the administrator to create, start, and stop a Directory Proxy Server instance. The dpconf command enables the administrator to configure Directory Proxy Server by using LDAP and to access the Directory Server configuration through Directory Proxy Server.
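The following sketch contrasts the two kinds of commands for Directory Server; the dpadm and dpconf commands follow the same pattern for Directory Proxy Server. The instance path, host name, and port are placeholders, and the exact options are documented in the dsadm(1M) and dsconf(1M) man pages.

    # File-system level administration: must run on the host of the instance.
    dsadm create /local/ds1
    dsadm start /local/ds1

    # LDAP-based configuration of a running instance: can run from a remote machine.
    dsconf get-server-prop -h ds1.example.com -p 1389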

In addition to these command-line utilities, Directory Server Enterprise Edition provides a web-based interface, the Directory Service Control Center (DSCC), to manage Directory Server and Directory Proxy Server instances. DSCC provides the same functionality as the command-line utilities, as well as wizards that enable you to configure several servers simultaneously. In addition, DSCC provides a replication topology drawing tool that enables you to monitor replication topologies graphically. This tool simplifies replication monitoring by providing a real-time view of individual masters, hubs, and consumers, and the replication agreements between them.

Remote Administration

The Directory Server Enterprise Edition administration model, described in the previous section, also enables remote administration of any Directory Server or Directory Proxy Server in the topology. Servers can be administered remotely using both the command-line utilities and DSCC.

The dsadm and dpadm utilities cannot be run remotely. These utilities must be installed and run on the same machine as the server instance that is being administered. For details of the functionality provided with dsadm and dpadm, see the dsadm(1M) and dpadm(1M) man pages.

The dsconf and dpconf utilities can be run remotely. For details of the functionality provided with dsconf and dpconf, see the dsconf(1M) and dpconf(1M) man pages.

The following figure illustrates how the new administration model facilitates remote administration. This illustration shows that the console and configuration commands can be installed and run remotely from the Directory Server and Directory Proxy Server instances. The administration commands must be run locally to the instances.

Figure 8–1 Directory Server Enterprise Edition Administration Model

Figure shows the new administration model, with administration and configuration commands, and the Directory Service Control Center.

Designing Backup and Restore Policies

In any failure situation that involves data corruption or data loss, it is imperative that you have a recent backup of your data. Avoid reinitializing servers from other servers where possible. For information about how to back up data, see Chapter 8, Directory Server Backup and Restore, in Oracle Fusion Middleware Administration Guide for Oracle Directory Server Enterprise Edition.

This section provides an overview of what to consider when planning a backup and recovery strategy.

High-Level Backup and Recovery Principles

Apply the following high-level principles when designing a backup strategy:

Choosing a Backup Method

Directory Server Enterprise Edition provides two methods of backing up data: binary backup and backup to an LDIF file. Both of these methods have advantages and limitations, and knowing how to use each method will assist you in planning an effective backup strategy.

Binary Backup

Binary backup produces a copy of the database files, and is performed at the filesystem level. The output of a binary backup is a set of binary files containing all entries, indexes, the change log, and the transaction log. A binary backup does not contain configuration data.

Binary backup is performed using one of the following commands:

Binary backup has the following advantages:


Note –

Binary backup has one limitation. Restoration from a binary backup can be performed only on a server with an identical configuration. For more information, see Restrictions for Using Binary Copy With Replication in Oracle Fusion Middleware Administration Guide for Oracle Directory Server Enterprise Edition.


At a minimum, you need to perform a regular binary backup on each set of coherent machines. Coherent machines are machines that have an identical configuration.


Note –

Because restoration from a local backup is easier, perform a binary backup on each server.


These abbreviations are used in the remaining diagrams in this chapter:

M = master replica 

RA = replication agreement 

The following figure assumes that M1 and M2 have an identical configuration and that M3 and M4 have an identical configuration. In this scenario, a binary backup would be performed on M1 and on M3. In the case of failure, M1 or M2 could be restored from the binary backup of M1 (db1). M3 or M4 could be restored from the binary backup of M3 (db2). M1 and M2 could not be restored from the binary backup of M3. M3 and M4 could not be restored from the binary backup of M1.

Figure 8–2 Offline Binary Backup

Offline binary backup of two servers to two separate databases

For details on how to use the binary backup commands, see Binary Backup in Oracle Fusion Middleware Administration Guide for Oracle Directory Server Enterprise Edition.
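For instance, in the scenario above, the binary backup of M1 might be taken with commands like the following. The instance path, host name, port, and archive directory are placeholders, and the argument order is quoted from memory, so check the dsadm(1M) and dsconf(1M) man pages.

    # Offline (file-system level) binary backup, run on the host of the instance.
    dsadm backup /local/ds1 /local/backups/db1

    # Online binary backup over LDAP, run against a running instance.
    dsconf backup -h m1.example.com -p 1389 /local/backups/db1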

Backup to LDIF

Backup to LDIF is performed at the suffix level. The output of a backup to LDIF is a formatted LDIF file, which is a copy of the data contained in the suffix. As such, this process takes longer than a binary backup.

Backup to LDIF is performed using one of the following commands:


Note –

Replication information is backed up unless you use the -Q option when running these commands.

The dse.ldif configuration file is not backed up in a backup to LDIF. To enable you to restore a previous configuration, back this file up manually.


Backup to LDIF has the following advantages:

Backup to LDIF has one limitation. In situations where rapid backup and restoration are required, backup to LDIF might take too long to be viable.

You need to perform a regular backup by using backup to LDIF for each replicated suffix, on a single master in your topology.

In the following figure, dsadm export is performed for each replicated suffix, on one master only (M1).

Figure 8–3 Offline Backup to LDIF

Backup using dsadm export

For information about how to use the backup to LDIF commands, see Backing Up to LDIF in Oracle Fusion Middleware Administration Guide for Oracle Directory Server Enterprise Edition.
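A sketch of an LDIF backup of one replicated suffix on M1 follows. The suffix, instance path, file names, host, and port are placeholders, and the argument order is quoted from memory; see the dsadm(1M) and dsconf(1M) man pages for the exact syntax.

    # Offline export of a suffix to LDIF, run on the host of the instance.
    dsadm export /local/ds1 dc=example,dc=com /local/backups/example.ldif

    # Online export over LDAP from a running instance.
    dsconf export -h m1.example.com -p 1389 dc=example,dc=com /local/backups/example.ldif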

Choosing a Restoration Method

Directory Server Enterprise Edition provides two methods of restoring data: binary restore and restoration from an LDIF file. As with the backup methods, both of these methods have advantages and limitations.

Binary Restore

Binary restore copies data at the database level. Binary restore is performed using one of the following commands:

Binary restore has the following advantages:

Binary restore has the following limitations:

Binary restore is the preferred restoration method if the machines have an identical configuration and time is a major consideration.

The following figure assumes that M1 and M2 have an identical configuration and that M3 and M4 have an identical configuration. In this scenario, M1 or M2 can be restored from the binary backup of M1 (db1). M3 or M4 can be restored from the binary backup of M3 (db2).

Figure 8–4 Offline Binary Restore

Binary restore from two separate databases to two separate servers
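Continuing this scenario, M2 could be restored from the binary backup of M1 (db1) with commands like the following sketch. Paths, host name, and port are placeholders, and the argument order is quoted from memory; verify it against the dsadm(1M) and dsconf(1M) man pages.

    # Offline restore of the instance from a binary backup archive.
    dsadm restore /local/ds2 /local/backups/db1

    # Online restore over LDAP against a running instance.
    dsconf restore -h m2.example.com -p 1389 /local/backups/db1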

Restoration From LDIF

Restoration from an LDIF file is performed at the suffix level. As such, this process takes longer than a binary restore. Restoration from LDIF can be performed using one of the following commands:

Restoration from an LDIF file has the following advantages:

Restoration from an LDIF file has one limitation. In situations where rapid restoration is required, this method might take too long to be viable. For more information about restoring data from an LDIF file, see Importing Data From an LDIF File in Oracle Fusion Middleware Administration Guide for Oracle Directory Server Enterprise Edition.

In the following figure, dsadm import is performed for each replicated suffix, on one master only (M1).

Figure 8–5 Offline Restoration From LDIF

Offline restoration from an LDIF file, using dsadm import
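A corresponding sketch of restoring one suffix from an LDIF file on M1 follows, with placeholder paths, host name, and port; the argument order is quoted from memory, so verify it against the dsadm(1M) and dsconf(1M) man pages.

    # Offline import of a suffix from an LDIF file, run on the host of the instance.
    dsadm import /local/ds1 /local/backups/example.ldif dc=example,dc=com

    # Online import over LDAP against a running instance.
    dsconf import -h m1.example.com -p 1389 /local/backups/example.ldif dc=example,dc=com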

Designing a Logging Strategy

Logging is managed and configured at the individual server level. While logging is enabled by default, it can be reconfigured or disabled according to the requirements of your deployment. Designing a logging strategy assists with planning hardware requirements. For more information, see Hardware Sizing For Directory Server.

This section describes the logging facility of Directory Server Enterprise Edition.

Defining Logging Policies

Each Directory Server in a topology stores logging information in three files:

Each Directory Proxy Server in a topology stores logging information in two files:

You can manage the log files for both Directory Server and Directory Proxy Server in these ways:

Defining Log File Creation Policies

A log file creation policy enables you to periodically archive the current log and start a new log file. Log file creation policies can be defined for Directory Server and Directory Proxy Server from the Directory Service Control Center or using the command-line utilities.

When defining a log file creation policy, consider the following:

Log file rotation can also be based on a combination of criteria. For example, you can specify that logs be rotated at 23h30 only if the file size is greater than 10 Megabytes.
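As a hypothetical sketch only, such a combined rotation policy might be expressed through dsconf log properties along the following lines. The property names and value formats here are assumptions, so consult Configuring Logs for Directory Server and the dsconf(1M) man page for the exact names before use.

    # Hypothetical property names -- verify before use.
    dsconf set-log-prop -h ds1.example.com -p 1389 access rotation-time:2330
    dsconf set-log-prop -h ds1.example.com -p 1389 access rotation-min-file-size:10M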

For details on how to set up a log file creation policy, see Configuring Logs for Directory Server in Oracle Fusion Middleware Administration Guide for Oracle Directory Server Enterprise Edition.

Defining Log File Deletion Policies

A log file deletion policy enables you to automatically delete old archived logs. Log file deletion policies can be defined for Directory Server and Directory Proxy Server from the Directory Service Control Center or using the command-line utilities. A log file deletion policy is not applied unless you have defined a log file creation policy. Log file deletion will not work if you have just one log file. The server evaluates and applies the log file deletion policy at the time of log rotation.

When defining a log file deletion policy, consider the following:

For details on how to set up a log file deletion policy, see Configuring Logs for Directory Server in Oracle Fusion Middleware Administration Guide for Oracle Directory Server Enterprise Edition.

Manually Creating and Deleting Log Files

Manual file rotation and forced log rotation do not apply to Directory Proxy Server.

If you do not want to define automatic creation and deletion policies for Directory Server, you can create and delete log files manually. In addition, Directory Server provides a task that enables you to rotate any log immediately, regardless of the defined creation policy. This functionality might be useful if, for example, an event occurs that needs to be examined in more detail. The immediate rotation function causes the server to create a new log file. The previous file can therefore be examined without the server appending logs to this file.

For information about how to rotate logs manually and how to force log rotation, see Rotating Directory Server Logs Manually in Oracle Fusion Middleware Administration Guide for Oracle Directory Server Enterprise Edition.

Defining Permissions on Log Files

In Directory Server 5.2, log files could only be read by the directory manager. Directory Server Enterprise Edition enables server administrators to define the permissions with which log files are created. For information about how to define log file permissions, see Configuring Logs for Directory Server in Oracle Fusion Middleware Administration Guide for Oracle Directory Server Enterprise Edition.

Designing a Monitoring Strategy

An effective monitoring and event management strategy is crucial to a successful deployment. Such a strategy defines which events should be monitored, which tools to use, and what action to take when an event occurs. Planning for commonplace events helps to prevent outages and reduced levels of service. This strategy improves the availability and quality of service of your directory.

To design a monitoring strategy, do the following:

Monitoring Tools Provided With Directory Server Enterprise Edition

This section provides a summary of the monitoring tools that are available in Directory Server Enterprise Edition as well as additional tools that can be used to monitor server activity.

The monitoring areas described in Identifying Monitoring Areas can be monitored using one or more of these tools.

Identifying Monitoring Areas

What you monitor, and to what extent, depends on your specific deployment. In general, however, include the following elements in your monitoring strategy: