Oracle® Fusion Middleware Deployment Planning Guide for Oracle Directory Server Enterprise Edition
11g Release 1 (11.1.1.7.0)
Part Number E28974-01

11 Designing a Global Deployment

In a global deployment, access to directory services is required in more than one geographical location, or data center. This chapter provides strategies for effectively deploying Directory Server Enterprise Edition across multiple data centers. The strategies ensure that the quality of service requirements identified in Chapter 5, "Defining Service Level Agreements" are not compromised.

This chapter covers the following topics:

  • Using Replication Across Multiple Data Centers

  • Using Directory Proxy Server in a Global Deployment

11.1 Using Replication Across Multiple Data Centers

One of the goals of replication is to enable geographic distribution of the LDAP service. Replication enables you to have identical copies of information on multiple servers and across more than one data center. Replication concepts are outlined in Chapter 10, "Designing a Scaled Deployment" in this guide, and in Chapter 7, Directory Server Replication, in Oracle Directory Server Enterprise Edition Reference.

Directory Server supports replication between instances running on different platforms.

This section covers the following topics:

  • Multi-Master Replication

  • Cascading Replication

  • Prioritized Replication

  • Fractional Replication

  • Sample Replication Strategy for an International Enterprise

11.1.1 Multi-Master Replication

In multi-master replication, replicas of the same data exist on more than one server. For information about multi-master replication, see the following sections:

  • Concepts of Multi-Master Replication

  • Multi-Master Replication Over WAN

  • Fully Meshed Multi-Master Topology

11.1.1.1 Concepts of Multi-Master Replication

In a multi-master configuration, data is updated on multiple masters. Each master maintains a change log, and the changes made on each master are replicated to the other servers. Each master plays the role of supplier and consumer.

Multi-master configurations have the following advantages:

  • Automatic write failover occurs when one master is inaccessible.

  • Updates can be made on a local master in a geographically distributed environment.

Multi-master replication uses a loose consistency replication model. This means that the same entries may be modified simultaneously on different servers. When updates are sent between the two servers, any conflicting changes must be resolved. Various attributes of a WAN, such as latency, can increase the chance of replication conflicts. Conflict resolution generally occurs automatically. A number of conflict rules determine which change takes precedence. In some cases conflicts must be resolved manually. For more information, see Solving Common Replication Conflicts in Administrator's Guide for Oracle Directory Server Enterprise Edition.
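
Conflict resolution can be pictured as a deterministic precedence rule applied identically on every replica. The following Python sketch is a simplified model of last-writer-wins resolution based on a change sequence number. It is not Directory Server's actual conflict-resolution algorithm; the CSN format, entry values, and replica structure are invented for illustration.

    # Simplified model of loose consistency with last-writer-wins resolution.
    # It only illustrates how a deterministic precedence rule (here, a change
    # sequence number combining a timestamp and a replica ID) lets every
    # replica converge on the same value regardless of arrival order.

    from dataclasses import dataclass, field

    @dataclass(order=True)
    class CSN:
        """Toy change sequence number: orders changes by time, then replica ID."""
        timestamp: int
        replica_id: int

    @dataclass
    class Replica:
        replica_id: int
        # attribute name -> (CSN of last applied change, value)
        entry: dict = field(default_factory=dict)

        def apply(self, attr: str, value: str, csn: CSN) -> None:
            current = self.entry.get(attr)
            # The change with the highest CSN wins, whenever it arrives.
            if current is None or csn > current[0]:
                self.entry[attr] = (csn, value)

    # Two masters modify the same attribute at almost the same time.
    a, b = Replica(1), Replica(2)
    change_on_a = ("mail", "user@example.com", CSN(100, 1))
    change_on_b = ("mail", "user@example.org", CSN(101, 2))

    # Each master applies its local change, then receives the other's.
    a.apply(*change_on_a); a.apply(*change_on_b)
    b.apply(*change_on_b); b.apply(*change_on_a)

    assert a.entry == b.entry      # both replicas converge on the same value
    print(a.entry["mail"][1])      # user@example.org (the later change wins)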

The number of masters that are supported in a multi-master topology is theoretically unlimited. The number of consumers and hubs is also theoretically unlimited. However, the number of consumers to which a single supplier can replicate depends on the capacity of the supplier server. You can use the SLAMD Distributed Load Generation Engine (SLAMD) to assess the capacity of the supplier server. For information about SLAMD, and to download the SLAMD software, see http://www.slamd.com.

Each supplier in a multi-master environment must have a replication agreement. The following figure shows two master servers and their replication agreements.

Figure 11-1 Multi-Master Replication Configuration (Two Masters)

In the preceding figure, Master A and Master B have a master replica of the same data. Each master has a replication agreement that specifies the replication flow. Master A acts as a master in the scope of Replication Agreement 1, and as a consumer in the scope of Replication Agreement 2.

Multi-master replication can be used for the following tasks:

  • To replicate updates by using the replica ID.

    Replicating updates by replica ID makes it possible for a consumer to be updated by multiple suppliers at the same time, provided that the updates originate from different replica IDs.

  • To enable or disable a replication agreement.

    Replication agreements can be configured but left disabled, then enabled rapidly when required. This flexibility in replication configuration applies whether or not you use multiple masters.

11.1.1.2 Multi-Master Replication Over WAN

Directory Server supports multi-master replication over a WAN. This feature enables multi-master replication configurations across geographical boundaries in international, multiple data center deployments.

Generally, if the Number of hosts calculated in Assessing Initial Replication Requirements is less than 16, or not significantly larger, your topology should include only master servers in a fully connected topology, that is, every master replicates to every other master in the topology. In a multi-master replication over WAN configuration, all Directory Server instances separated by a WAN must be running Directory Server 5.2 or later. A multi-master topology with more than four masters requires Directory Server 6.x.

The replication protocol provides full asynchronous support, as well as window, grouping, and compression mechanisms. These features make multi-master replication over a WAN viable. Replication data transfer rates will always be less than what the available physical medium allows in terms of bandwidth. If the update volume between replicas cannot physically be made to fit into the available bandwidth, tuning will not prevent replicas from diverging under heavy update load. Replication delay and update performance are dependent on many factors, including but not limited to modification rate, entry size, server hardware, average latency and average bandwidth.
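
Whether the update volume fits the available bandwidth can be estimated with a back-of-envelope calculation before any tuning. The following Python sketch uses invented example figures; substitute your measured modification rate, average change size, and link characteristics.

    # Back-of-envelope check: can the WAN link sustain the replication volume?
    # All figures below are invented examples, not recommendations.

    mods_per_second = 50              # sustained modification rate on the masters
    avg_change_bytes = 4_000          # average size of one replicated change
    protocol_overhead = 1.3           # rough multiplier for framing and retries
    link_bandwidth_bps = 10_000_000   # 10 Mbit/s WAN link
    replication_share = 0.5           # fraction of the link replication may use

    required_bps = mods_per_second * avg_change_bytes * 8 * protocol_overhead
    available_bps = link_bandwidth_bps * replication_share

    print(f"required : {required_bps / 1e6:.2f} Mbit/s")
    print(f"available: {available_bps / 1e6:.2f} Mbit/s")
    if required_bps > available_bps:
        print("Sustained load exceeds the link: replicas will diverge, "
              "and no amount of tuning can compensate.")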

Internal parameters of the replication mechanism are optimized by default for WANs. However, if you experience slow replication due to the factors mentioned above, you may wish to empirically adjust the window size and group size parameters. You may also be able to schedule your replication to avoid peak network times, thus improving your overall network usage. Finally, Directory Server supports the compression of replication data to optimize bandwidth usage.

When you replicate data over a WAN link, some form of security to ensure data integrity and confidentiality is advised. For more information on security methods available in Directory Server, see Chapter 5, Directory Server Security, in Oracle Directory Server Enterprise Edition Reference.

11.1.1.2.1 Group and Window Mechanisms

Directory Server provides group and window mechanisms to optimize replication flow. The group mechanism enables you to specify that changes are sent in groups, rather than individually. The group size represents the maximum number of data modifications that can be bundled into a single update message. If the network connection appears to be the bottleneck for replication, increase the group size and check replication performance again. For information on configuring the group size, see Configuring Group Size in Administrator's Guide for Oracle Directory Server Enterprise Edition.

The window mechanism specifies that a certain number of update requests are sent to the consumer, without the supplier having to wait for an acknowledgement from the consumer before continuing. The window size represents the maximum number of update messages that can be sent without immediate acknowledgement from the consumer. It is more efficient to send many messages in quick succession instead of waiting for an acknowledgement after each one. Using the appropriate window size, you can eliminate the time replicas spend waiting for replication updates or acknowledgements to arrive. If your consumer replica is lagging behind the supplier, increase the window size to a higher value than the default, such as 100, and check replication performance again before making further adjustments. When the replication update rate is high and the time between updates is therefore small, even replicas connected by a LAN can benefit from a higher window size. For information on configuring the window size, see Configuring Window Size in Administrator's Guide for Oracle Directory Server Enterprise Edition.

Both the group and window mechanisms are based on change size. Therefore, optimizing replication performance with these mechanisms might be impractical if the size of your changes varies considerably. If the size of your changes is relatively constant, you can use the group and window mechanisms to optimize incremental and total updates.
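
To illustrate why the two mechanisms matter over a high-latency link, the following Python sketch models a supplier that bundles changes into groups and pays one round-trip wait per window of messages rather than per message. The model and its numbers are illustrative and ignore the real protocol's framing, transmission time, and server processing.

    def replication_wait_ms(num_changes: int, group_size: int,
                            window_size: int, rtt_ms: float) -> float:
        """Rough time spent waiting on acknowledgements."""
        # Grouping: up to group_size changes travel in one update message.
        num_messages = -(-num_changes // group_size)      # ceiling division
        # Windowing: up to window_size messages may be in flight before the
        # supplier must stop and wait for an acknowledgement.
        round_trips = -(-num_messages // window_size)
        return round_trips * rtt_ms

    # 10,000 changes over a 50 ms round-trip WAN link:
    print(replication_wait_ms(10_000, group_size=1,  window_size=1,  rtt_ms=50))
    # 500000.0 ms -- one acknowledged message per change
    print(replication_wait_ms(10_000, group_size=10, window_size=100, rtt_ms=50))
    # 500.0 ms -- grouped and windowed, three orders of magnitude less waiting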

11.1.1.2.2 Replication Compression

In addition to the grouping and window mechanisms, you can configure replication compression on Solaris and Linux platforms. Replication compression streamlines replication flow, which substantially reduces the incidence of bottlenecks in replication over a WAN. Compression of replicated data can increase replication performance in specific cases, such as networks with sufficient CPU but low bandwidth, or when there are bulk changes to be replicated. You can also benefit from replication compression when initializing a remote replica with large entries. Do not enable compression in a LAN (local area network) where there is wide network bandwidth, because the compression and decompression computations will slow down replication.

The replication mechanism uses the Zlib compression library. Empirically test and select the compression level that gives you best results in your WAN environment for your expected replication usage.
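
Because the mechanism uses Zlib, you can preview the size-versus-CPU trade-off of different compression levels with standard tooling. The following Python sketch compresses a synthetic, highly repetitive payload, so the ratios it prints are better than you should expect from real traffic; run it against a representative sample of your own update data instead.

    import time
    import zlib

    # Synthetic LDIF-like payload; repetitive text compresses unrealistically well.
    record = (b"dn: uid=user%d,ou=people,dc=example,dc=com\n"
              b"mail: user%d@example.com\n")
    data = b"".join(record % (i, i) for i in range(5_000))

    for level in (1, 6, 9):   # best speed, zlib default, best compression
        start = time.perf_counter()
        compressed = zlib.compress(data, level)
        elapsed = time.perf_counter() - start
        print(f"level {level}: {len(data):,} -> {len(compressed):,} bytes "
              f"in {elapsed * 1000:.2f} ms")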

For more information on configuring replication compression, see Configuring Replication Compression in Administrator's Guide for Oracle Directory Server Enterprise Edition.

11.1.1.3 Fully Meshed Multi-Master Topology

In a fully meshed multi-master topology, each master is connected to each of the other masters. A fully meshed topology provides high availability and guaranteed data integrity. The following figure shows a fully meshed, four-way, multi-master replication topology with some consumers.

Figure 11-2 Fully Meshed, Four-Way, Multi-Master Replication Configuration

In Figure 11-2, the suffix is held on four masters to ensure that it is always available for modification requests. Each master maintains its own change log. When one of the masters processes a modification request from a client, it records the operation in its change log. The master then sends the replication update to the other masters, and in turn to the consumers. Each master also stores a Replication Manager entry used to authenticate the other masters when they bind to send replication updates.

Each consumer stores one or more entries that correspond to the Replication Manager entries. The consumers use these entries to authenticate the masters when the masters bind to send replication updates. Alternatively, each consumer can store just one Replication Manager entry, with all masters using that same entry for authentication. By default, the consumers have referrals set up for all masters in the topology. When consumers receive modification requests from clients, they return the referrals to the client. For more information about referrals, see Referrals and Replication in Oracle Directory Server Enterprise Edition Reference.

Figure 11-3 presents a detailed view of the replication agreements, change logs, and Replication Manager entries that must be set up on Master A. Figure 11-4 provides the same detailed view for Consumer E.

Figure 11-3 Replication Configuration for Master A (Fully Meshed Topology)

Figure 11-4 Replication Configuration for Consumer Server E (Fully Meshed Topology)

Master A requires the following:

  • A master replica

  • A change log

  • Replication Manager entries for Masters B, C, and D, unless you use the same Replication Manager entry on each replica

  • Replication agreements for Masters B, C, and D, and for Consumers E and F

Consumer E requires the following:

  • A consumer replica

  • Replication Manager entries to authenticate Masters A and B when they bind to send replication updates
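
The configuration burden grows quickly with the number of masters: a fully meshed topology of M masters requires M × (M - 1) unidirectional agreements between the masters alone. The following Python sketch enumerates the agreements for the topology above; which masters feed Consumers E and F is assumed from the requirements just listed.

    # Enumerate the unidirectional replication agreements in the fully meshed
    # four-way topology of Figure 11-2. Server names are illustrative.
    from itertools import permutations

    masters = ["A", "B", "C", "D"]
    # Assumed from the requirements above: Consumers E and F are each fed
    # by Masters A and B.
    consumer_feeds = {"E": ["A", "B"], "F": ["A", "B"]}

    # Fully meshed: every master replicates to every other master.
    agreements = list(permutations(masters, 2))                    # 4 * 3 = 12
    agreements += [(src, c) for c, srcs in consumer_feeds.items() for src in srcs]

    for src in masters:
        targets = [dst for s, dst in agreements if s == src]
        print(f"Master {src} -> {', '.join(targets)}")
    print(f"{len(agreements)} unidirectional agreements in total")  # 16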

11.1.2 Cascading Replication

In a cascading replication configuration, a server acting as a hub receives updates from a server acting as a supplier. The hub replays those updates to consumers. The following figure illustrates a cascading replication configuration.

Figure 11-5 Cascading Replication Configuration

Cascading replication is useful in the following scenarios:

  • When there are a lot of consumers.

    Because the masters in a replication topology handle all update traffic, supporting replication traffic to many consumers as well could put them under a heavy load. You can off-load replication traffic to several hubs that can each service replication updates to a subset of the consumers.

  • To reduce connection costs by using a local hub in geographically distributed environments.

The following figure shows cascading replication to a large number of consumers.

Figure 11-6 Cascading Replication to a Large Number of Consumers

In Figure 11-6, hubs 1 and 2 relay replication updates to consumers 1 through 10, leaving the master replicas with more resources to process directory updates.

The masters and the hubs maintain a change log. However, only the masters can process directory modification requests from clients. Each hub contains a Replication Manager entry for each master that sends updates to it. Consumers 1 through 10 contain Replication Manager entries for hubs 1 and 2.

The consumers and hubs can process search requests received from clients, but cannot process modification requests. The consumers and hubs refer modification requests to the masters.
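
This division of labor can be summarized in a few lines of code. The following Python sketch is a toy model of request handling in a cascading topology, with invented server names; it shows only the routing logic, not the replication protocol itself.

    MASTERS = ["ldap://master-1", "ldap://master-2"]

    class Server:
        def __init__(self, name: str, role: str):
            self.name, self.role = name, role   # "master", "hub", or "consumer"

        def handle(self, op: str) -> str:
            if op == "search":
                # Consumers and hubs serve reads locally, close to the clients.
                return f"{self.name}: search served locally"
            if self.role == "master":
                return f"{self.name}: update applied and recorded in change log"
            # Hubs and consumers cannot process modifications: refer the client.
            return f"{self.name}: referral -> {MASTERS}"

    for server in (Server("hub-1", "hub"), Server("consumer-3", "consumer")):
        print(server.handle("search"))
        print(server.handle("modify"))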

11.1.3 Prioritized Replication

Prioritized replication can be used when there is a strong business requirement to have tighter consistency for replicated data on specific attributes. In previous versions of Directory Server, updates were replicated in the order in which they were received. With prioritized replication, you can specify that updates to certain attributes take precedence when they are replicated to other servers in the topology.

Priority is a boolean feature: it is either on or off. There are no levels of priority. In a queue of updates waiting to be replicated, updates with priority are replicated before updates without priority.

Priority rules are configured with the following replication priority rule properties:

  • The identity of the client, bind-dn.

  • The type of update, op-type.

  • The entry or subtree that was updated, base-dn.

  • The attributes changed by the update, att.

For information about these properties, see repl-priority.

When the master replicates an update to one or more hubs or consumer replicas, the priority of the update is the same across all of the hubs and consumer replicas. If one parameter is configured in a priority rule for prioritized replication, all updates that match that parameter are prioritized for replication. If two or more parameters are configured in a priority rule for prioritized replication, all updates that match all parameters are prioritized for replication.

In the following scenario, it is possible that a master replica attempts to replicate an update to an entry before it has replicated the addition of the entry:

  • The entry is added on the master replica and then updated on the master replica

  • The update operation has replication priority but the add operation does not have replication priority

In this scenario, the update operation cannot be replicated until the add operation is replicated. The update waits for its chronological turn, after the add operation, to be replicated.
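
The following Python sketch models a replication queue in which updates matching a priority rule are sent first, except that a prioritized update never overtakes the add of its own entry. The rule fields mirror the repl-priority properties listed above (only op-type and att are checked, for brevity), and all queue contents are invented.

    from dataclasses import dataclass

    @dataclass
    class Update:
        seq: int          # order in which the change was made on the master
        op_type: str      # "add" or "modify"
        dn: str
        attrs: tuple

    def matches(rule: dict, u: Update) -> bool:
        # Every configured rule parameter must match (AND semantics).
        # bind-dn and base-dn checks are omitted for brevity.
        if "op-type" in rule and rule["op-type"] != u.op_type:
            return False
        if "att" in rule and not set(rule["att"]) & set(u.attrs):
            return False
        return True

    rule = {"op-type": "modify", "att": ["pwdAccountLockedTime"]}

    queue = [
        Update(1, "modify", "uid=jdoe", ("description",)),
        Update(2, "add",    "uid=new",  ("objectclass",)),
        Update(3, "modify", "uid=new",  ("pwdAccountLockedTime",)),
        Update(4, "modify", "uid=old",  ("pwdAccountLockedTime",)),
    ]

    def replication_order(pending):
        pending = list(pending)
        while pending:
            adds_waiting = {p.dn for p in pending if p.op_type == "add"}
            # An update is eligible only if its entry's add (if any) has
            # already been replicated.
            ready = [u for u in pending
                     if u.op_type == "add" or u.dn not in adds_waiting]
            prioritized = [u for u in ready if matches(rule, u)]
            nxt = min(prioritized or ready, key=lambda u: u.seq)
            pending.remove(nxt)
            yield nxt

    print([u.seq for u in replication_order(queue)])
    # [4, 1, 2, 3]: update 3 has priority but waits until add 2 replicates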

Prioritized replication provides the following benefits:

  • Improved security. Prioritized replication is used by default for account lockout. Imagine, for example, that an employee leaves your organization, and you lock the employee's account. To ensure that the employee cannot log in to a remote server to which the account lockout has not been replicated, account lockout changes are replicated before other changes.

  • Improved consistency. Directory Server replication is loosely consistent. With prioritized replication, you can ensure stronger consistency for certain attributes that are considered important in your organization.

11.1.4 Fractional Replication

A global topology (with data centers in different countries) might require restricting replication for security or compliance reasons. For example, legal restrictions might state that specific employee information cannot be copied outside of the U.S.A. Or, a site in Australia might require Australian employee details only.

The fractional replication feature enables only a subset of the attributes that are present in an entry to be replicated. Attribute lists are used to determine which attributes can and cannot be replicated. Fractional replication can only be applied to read-only consumers.

Fractional replication can be used to replicate a subset of the attributes of all entries in a suffix or sub-suffix. Fractional replication can be configured, per agreement, to include attributes in replication or to exclude attributes from replication. Usually, fractional replication is configured to exclude attributes, because the interdependency between features and attributes makes a list of included attributes difficult to manage.

Fractional replication can be used for the following purposes:

  • To filter content for synchronization between intranet and extranet servers

  • To reduce replication costs when a deployment requires only certain attributes to be available everywhere

Fractional replication is configured with the replication agreement properties repl-fractional-include-attr and repl-fractional-exclude-attr. For information about these properties, see repl-agmt. For information about how to configure fractional replication, see Fractional Replication in Administrator's Guide for Oracle Directory Server Enterprise Edition.
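
Conceptually, an exclude list acts as a per-agreement filter applied to every change before it is sent to the read-only consumer. The following Python sketch models that filtering; the attribute names and the exclude list are invented examples and do not reflect the server's internal implementation.

    def filter_change(change: dict, exclude: set) -> dict:
        """Strip excluded attributes from a change; the dn is always kept."""
        return {attr: value for attr, value in change.items()
                if attr == "dn" or attr.lower() not in exclude}

    # e.g. personal data that must not leave the local data center
    agreement_exclude = {"usssn", "salary"}

    change = {
        "dn": "uid=jdoe,ou=people,dc=example,dc=com",
        "cn": "John Doe",
        "usSSN": "123-45-6789",
        "salary": "100000",
    }

    print(filter_change(change, agreement_exclude))
    # {'dn': 'uid=jdoe,...', 'cn': 'John Doe'} -- the sensitive attributes
    # are never sent to the read-only consumer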

11.1.5 Sample Replication Strategy for an International Enterprise

In this scenario, an enterprise has two major data centers, one in London and the other in New York, separated by a WAN. The scenario assumes that the network is very busy during normal business hours.

In this scenario, the Number of hosts has been calculated to be eight. A fully connected, four-way multi-master topology is deployed in each of the two data centers. These two topologies are also fully connected to each other. For ease of comprehension, not all replication agreements between the two data centers are shown in the following diagram.

The replication strategy for this scenario includes the following:

  • Master copies of directory data are held on servers in both data centers.

  • A multi-master replication topology is deployed between the data centers to provide high availability and write-failover across the deployment.

  • Replication across the WAN link is scheduled so that it occurs only during off-peak hours to optimize bandwidth. (A sketch for choosing the window follows this list.)

  • To increase performance, client applications are directed to local servers. Clients in the U.S. read from and write to masters in the New York data center. Clients in the UK read from and write to masters in the London data center.
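
Choosing the off-peak window requires reconciling business hours in both time zones. The following Python sketch computes the UTC hours that fall outside business hours in both London and New York; the business-hour assumption (09:00 to 18:00 local) is invented for the example.

    from datetime import datetime, time, timezone
    from zoneinfo import ZoneInfo   # Python 3.9+

    BUSY = (time(9, 0), time(18, 0))   # assumed local business hours

    def off_peak_utc_hours(zones, probe_date=(2024, 1, 15)):
        """Return UTC hours outside business hours in every listed zone.
        Uses a fixed reference date; DST shifts the result twice a year."""
        hours = []
        for h in range(24):
            probe = datetime(*probe_date, hour=h, tzinfo=timezone.utc)
            if all(not (BUSY[0] <= probe.astimezone(ZoneInfo(z)).time() < BUSY[1])
                   for z in zones):
                hours.append(h)
        return hours

    print(off_peak_utc_hours(["Europe/London", "America/New_York"]))
    # [0, 1, 2, 3, 4, 5, 6, 7, 8, 23] -- candidate hours for the scheduled
    # WAN replication in this example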

Figure 11-7 Using Multi-Master Replication for Load Balancing in Two Data Centers

11.2 Using Directory Proxy Server in a Global Deployment

In a global enterprise, a centralized data model can cause scalability and performance issues. Directory Proxy Server can be used in such a situation to distribute data efficiently and to route search and update requests appropriately.

11.2.1 Sample Distribution Strategy for a Global Enterprise

In the architecture shown here, a large financial institution has its headquarters in London. The organization has data centers in London, New York, and Hong Kong. Currently, the vast majority of the data that is available to employees resides centrally in legacy RDBMS repositories in London. All access to this data from the financial institution's client community is over the WAN.

The organization is experiencing scalability and performance problems with this centralized model and decides to move to a distributed data model. The organization also decides to deploy an LDAP directory infrastructure at the same time. Because the data in question is considered "mission critical," it must be deployed in a highly available, fault-tolerant infrastructure.

An analysis of client application profiles has revealed that the data is customer-based, and that 95 percent of the data accessed by a geographical client community is specific to that community. Clients in Asia, for example, only occasionally access data for customers in North America. The client community must also update customer information from time to time.

The following figure shows the logical architecture of the distributed solution.

Figure 11-8 Distributed Directory Infrastructure

Given the profile of 95 percent local data access, the organization decides to distribute the directory infrastructure geographically. Multiple directory consumers are deployed in each geographical location: Hong Kong, New York, and London. London consumers are not shown in the diagram for ease of understanding. Each of these consumers is configured to hold the customer data specific to the location. Data for European and Middle East customers is held in the London consumers. Data for North and South American customers is held in the New York consumers. Data for Asian and Pacific Rim customers is held in the Hong Kong consumers.

With this deployment, the overwhelming majority of the data required by a local client community is located within that community. This strategy provides significant performance improvements over the centralized model. Client requests are processed locally, reducing network overhead. The local directory servers effectively partition the directory infrastructure, which provides increased directory server performance and scalability. Each set of consumer directory servers is configured to return referrals if a client submits an update request. Referrals are also returned if a client submits a search request for data that is located elsewhere.

Client LDAP requests are sent to Directory Proxy Server through a hardware load balancer. The hardware load balancer ensures that clients always have access to at least one Directory Proxy Server. The locally deployed Directory Proxy Server initially routes all requests to the array of local directory servers that hold the local customer data. The instances of Directory Proxy Server are configured to load balance across the array of directory servers. This load balancing provides automatic failover and failback.

Client search requests for local customer information are satisfied by a local directory server, and the responses are returned to the client through Directory Proxy Server. Client search requests for geographically "foreign" customer information are initially answered by the local directory server with a referral back to Directory Proxy Server.

This referral contains an LDAP URL that points to the appropriate geographically distributed Directory Proxy Server instance. The local Directory Proxy Server processes the referral on behalf of the local client. The local Directory Proxy Server then sends the search request to the appropriate distributed instance of Directory Proxy Server. The distributed Directory Proxy Server forwards the search request on to the distributed Directory Server and receives the appropriate response. This response is then returned to the local client through the distributed and the local instances of Directory Proxy Server.

Update requests received by the local Directory Proxy Server are also satisfied initially by a referral returned by the local Directory Server. Directory Proxy Server follows the referral on behalf of the local client. However, this time the proxy forwards the update request to the supplier directory server located in London. The supplier Directory Server applies the update to the supplier database and sends a response back to the local client through the local Directory Proxy Server. Subsequently, the supplier Directory Server propagates the update down to the appropriate consumer Directory Server.
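
The end-to-end flow can be summarized as referral-driven routing. The following Python sketch is a toy model of that flow, with invented region names and LDAP URLs; real deployments implement this through Directory Proxy Server's data distribution settings and the referrals described above.

    # Toy model of referral-driven routing in the distributed infrastructure.
    DATA_HOME = {"emea": "ldap://london", "amer": "ldap://newyork",
                 "apac": "ldap://hongkong"}
    SUPPLIER = "ldap://london"     # all writes converge on the London supplier

    def local_directory(site: str, op: str, region: str):
        """A consumer serves local reads; everything else becomes a referral."""
        if op == "search" and region == site:
            return ("result", f"entry served by {site} consumers")
        if op == "search":
            return ("referral", DATA_HOME[region])   # data lives elsewhere
        return ("referral", SUPPLIER)                # updates go to the supplier

    def proxy(site: str, op: str, region: str):
        """The local proxy follows referrals on the client's behalf."""
        kind, payload = local_directory(site, op, region)
        if kind == "referral":
            # Forward to the instance in front of the referred-to servers.
            return f"{site} proxy -> {payload}: {op} on {region} data"
        return payload

    print(proxy("apac", "search", "apac"))   # served locally in Hong Kong
    print(proxy("apac", "search", "amer"))   # referral followed to New York
    print(proxy("apac", "modify", "apac"))   # write referred to the supplier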