Oracle Internet Directory Administrator's Guide Release 2.0.6 A77230-01
Replication is the mechanism that maintains exact duplicates of specified naming contexts on multiple nodes.
This chapter addresses:
This section describes how to install and initialize Oracle Internet Directory replication server software on a node.
Each node in a group of DSAs holds an updatable copy, also called an updatable replica, of the same set of directory naming contexts. These naming contexts are synchronized with each other by replication processing. This group of nodes is called a Directory Replication Group (DRG).
Note: The instructions in this section apply to setting up replication in a group of empty nodes. For instructions on adding a node to an existing DRG, see "Adding a Replication Node".
To install and configure a replication group, follow the general steps listed below.
Refer to Oracle Internet Directory Installation Guide. Note that the typical installation of the Oracle 8i Enterprise Edition, which is required for the Oracle Internet Directory, includes Oracle Advanced Symmetric Replication (ASR). By contrast, a typical installation of Oracle 8i Standard Edition does not include ASR.
If you are going to run replication, you must change the values for three parameters by following these steps:
A Master Definition Site (MDS) is any of the Oracle Internet Directory databases from which the administrator is going to run the configuration scripts. You should be able to connect to the MDS database and all other nodes that constitute the DRG using Net8.
The following sections lead you through installing and configuring ASR through Oracle Internet Directory installation scripts. Advanced ASR users may prefer to configure ASR through the Oracle8i Replication Manager Tool.
Setting up the Oracle Advanced Symmetric Replication (ASR) environment to establish a Directory Replication Group (DRG) requires you to perform the two tasks described in the following sections:
Execute the following steps on all nodes in the Directory Replication Group.
sqlnet.ora

The sqlnet.ora file in $ORACLE_HOME/network/admin should contain the following parameters at minimum:
names.directory_path = (TNSNAMES)
names.default_domain = domain
tnsnames.ora

The tnsnames.ora file in $ORACLE_HOME/network/admin must contain connect descriptor information in the following format for all Oracle Internet Directory databases:
net_service_name =
  (DESCRIPTION =
    (ADDRESS =
      (PROTOCOL = TCP)
      (HOST = host_name_or_IP_address)
      (PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = service_name)))
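For example, a filled-in connect descriptor might look like the following. The net service name, host name, and service name here are illustrative only; substitute the values for your own environment.

```
oid_node1 =
  (DESCRIPTION =
    (ADDRESS =
      (PROTOCOL = TCP)
      (HOST = dirhost1.example.com)
      (PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = oid1.example.com)))
```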
See Also: Oracle8i Administrator's Guide for instructions on increasing the size of the tablespaces and segments
Execute SQL*Plus by typing the following command:
sqlplus system/system_password@net_service_name
At the SQL*Plus prompt, type:
CREATE TABLESPACE table_space_name
  DATAFILE 'file_name_with_full_path' SIZE 50M REUSE
  AUTOEXTEND ON NEXT 10M MAXSIZE max_bulk_update_transaction_size;

For MAXSIZE, specify the maximum bulk update transaction size, for example, 500M.
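As a worked example, assuming a hypothetical tablespace name and datafile path, and a maximum bulk update transaction size of 500M:

```
CREATE TABLESPACE oid_data
  DATAFILE '/u01/oradata/oid/oid_data01.dbf' SIZE 50M REUSE
  AUTOEXTEND ON NEXT 10M MAXSIZE 500M;
```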
Execute SQL*Plus by typing the following command:
sqlplus system/system_password@net_service_name
At the SQL*Plus prompt, type the following lines for each rollback segment:
CREATE ROLLBACK SEGMENT rollback_segment_name
  TABLESPACE table_space_name
  STORAGE (INITIAL 1M NEXT 1M OPTIMAL 2M MAXEXTENTS UNLIMITED);
Repeat the commands above for each rollback segment entered in initsid.ora.

initsid.ora

Type the following lines in the initsid.ora file:
rollback_segments = (rollback_segment_name_1, rollback_segment_name_2, ...)
JOB_QUEUE_PROCESSES = number    # number of LDAP nodes - 1, at a minimum
SHARED_POOL_SIZE = 20000000
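As a worked example, the initsid.ora entries for a hypothetical four-node DRG with two rollback segments might read as follows (the segment names and node count are illustrative only; with four LDAP nodes, JOB_QUEUE_PROCESSES is at least 4 - 1 = 3):

```
rollback_segments = (rbs01, rbs02)
JOB_QUEUE_PROCESSES = 3
SHARED_POOL_SIZE = 20000000
```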
Ensure that the total System Global Area (SGA) does not exceed 50% of system physical memory.
Note: Every time a database is started, a System Global Area (SGA) is allocated and Oracle background processes are started. The SGA is an area of memory used for database information shared by the database users. The combination of the background processes and memory buffers is called an Oracle instance. For more information on SGA, see Oracle8i Concepts and Oracle8i Administrator's Guide.
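The 50% rule above can be sketched as simple arithmetic; the physical memory figure below is illustrative, and on a real system it would come from the operating system (for example, from /proc/meminfo on Linux):

```shell
# Illustrative check of the 50% rule: with 512 MB (524288 KB) of
# physical memory, the total SGA must stay under half of that.
phys_kb=524288
max_sga_kb=$((phys_kb / 2))
echo "$max_sga_kb"    # prints 262144, the largest SGA (in KB) this rule allows
```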
To stop the listener for the Oracle Internet Directory database, use the listener control utility (lsnrctl). Type the following command at a command prompt:
lsnrctl> set password password
lsnrctl> stop [listener_name]
SET PASSWORD is required only if the password is set in the listener.ora file. The password defaults to ORACLE. The default listener name is LISTENER.
To restart the listener for the Oracle Internet Directory database, type the following command at the LSNRCTL command prompt:
lsnrctl> start [listener_name]
To stop and restart the Oracle Internet Directory database, you can use SQL*Plus.
To configure ASR at the MDS and all nodes in the replication group, complete the following steps from the MDS:
$ORACLE_HOME/ldap/bin
Before proceeding to the next step, connect from the MDS to all nodes, including the MDS itself, as the system user, to ensure the following:
ldaprepl.sh -asrsetup
This script executes a number of operations.
As the script runs, it asks for the information in Table 10-1, first for the Master Definition Site then for the master sites.
Information | Definition
---|---
Host Name of MDS | Name of the computer
Global name | Net service name of the MDS database, as listed in the file
system password | system password
After you have provided the necessary information for the first master site, the script asks if there is another master site. Enter Y or N. When you enter N, to indicate that you have identified all sites, the script shows you a table of the information you have provided and asks for confirmation. If the information is not correct, enter N. The script then starts again at the beginning, asking about the Master Definition Site.
After you have provided all the information, the script asks you to verify the correctness of the information. If the information is correct and you press Y, the script begins configuring the sites.
This process may take a long time, depending on your system resources and the size of your DRG. The script keeps you informed of its progress.
Troubleshooting Tip: If the configuration process fails, do the following:
Run the above command for each node in the DRG. Issuing this command should result in no rows being selected. If rows containing the status [failed] and error messages are selected, then ASR setup failed. In this case, you may:
Note: If you have large initial data requirements, use the bulkload tool to load initial data on all the nodes in the DRG. You must stop the server before using bulkload, and bring it up again afterwards. For bulkload syntax and usage notes, see "bulkload".
Run the following command:
oidctl connect=net_service_name_of_new_node server=oidldapd instance=instance_number_of_ldap_server flags="-p port" start
You need to configure two different kinds of entries for replication, and these are described in the following sections:
Replication agreements are entries that list the member nodes within a replication group that share their changes. Replication agreements are referenced by Oracle Directory Replication Server configuration parameters that load when the replication server runs.
Because the Oracle Directory Replication Server configuration parameters are stored as special attributes in directory entries, you can configure replication parameters and replication agreements the same way you configure the Oracle Internet Directory--that is, you can alter the contents of the configuration entries and agreement entries through the command line tools, such as ldapadd and ldapmodify, or you can view and modify the agreements by using Oracle Directory Manager. This section explains both approaches.
Important: When you install and configure replication the first time, you must inform the Oracle Directory Replication Server about the existence of the member nodes in the replication agreement. To do this, modify the orclDirReplGroupDSAs attribute in the replication agreement. This is explained in "Replication Agreement Parameters".
The Oracle Directory Replication Server configuration parameters are stored in the replication server configuration set entry, which has the following DN:
cn=configset0, cn=osdrepld, cn=subconfigsubentry
This entry contains replication attributes which control replication processing. You can modify some of these attributes. Note that the last parameter in the list specifies a replication agreement. In this release, only one replication agreement is possible.
Table 10-2 lists and describes the replication server configuration parameters.
Configuration parameters appear in the General and Debug Flags tab pages. You can use these tab pages to view replication configuration parameters, and modify many of them. The following tables describe the fields in each tab page.
To modify replication configuration parameters by using command line tools, use the commands explained in "ldapmodify".
To modify the interval for garbage collection in replication, run ldapmodify, referencing an LDIF-formatted file. Before running this command, prepare an input file using LDIF format.
The LDAP command to apply the input file is as follows:
ldapmodify -h host -p port -f filename
Example: A typical input file (using LDIF format) to modify the garbage collection interval parameter consists of the following lines:
dn: cn=configset0, cn=osdrepld, cn=subconfigsubentry
changetype: modify
replace: orclPurgeSchedule
orclPurgeSchedule: 30
This procedure changes the garbage collection interval from the default of 10 minutes to 30 minutes.
Example: A typical input file (using LDIF format) to modify the retry counts parameter consists of the following lines:
dn: cn=configset0, cn=osdrepld, cn=subconfigsubentry
changetype: modify
replace: orclChangeRetryCount
orclChangeRetryCount: 5
This procedure changes the number of retry attempts from the default of ten times to five times. Specifically, after attempting to apply an update five times, the update is dropped and logged in the replication log file.
Important Note: To configure replication, you must modify the attribute orclDirReplGroupDSAs to contain the values of the nodes participating in symmetrical replication. For instructions on how to modify any of these parameters, see "Modifying Replication Agreement Parameters Using Command Line Tools: A Sample".
To modify the number of worker threads used in change log processing:
Edit mod.ldif as follows:

dn: cn=configset0, cn=osdrepld, cn=subconfigsubentry
changetype: modify
replace: orclthreadspersupplier
orclthreadspersupplier: new_number_of_worker_threads
ldapmodify -h host -p port -f mod.ldif
See Also: "Restarting Directory Server Instances" for instructions on restarting the replication server
In the parameter DirectoryReplicationGroupDSAs, type all of the host names of the DSAs in the DRG. Make sure this information is identical in all the nodes.
Replication agreement parameters are stored in the replication agreement entries which have the following DN:
orclAgreementID=id number, cn=orclreplagreements
This entry contains attributes that pertain only to the nodes participating in this agreement. You can create multiple replication agreements to manage replication between reciprocating nodes, but you can reference only one of them in your start-server message using Oracle Directory Manager. For Oracle Internet Directory Release 2.0.6, only one replication agreement can be used.
Table 10-5 lists and describes the replication agreement parameters.
To view and modify replication agreement parameters by using Oracle Directory Manager:
The fields in this tab page are described in Table 10-5. You can view the parameters and modify some of them by double-clicking the attributes.
To add more nodes to the values in a replication agreement entry, run ldapmodify at the command line, referencing an LDIF-formatted file. Before running this command, prepare an input file using LDIF format.
The LDAP command to apply the input file is as follows:
ldapmodify -h host -p port -f filename
A typical input file (using LDIF format) to add two more nodes to a replication agreement consists of the following lines:
dn: orclagreementid=000001, cn=orclreplagreements
changetype: modify
add: orcldirreplgroupdsas
orcldirreplgroupdsas: hollis
orcldirreplgroupdsas: eastsun-11
This procedure modifies the entry containing the replication agreement whose DN is orclagreementid=000001,cn=orclreplagreements. The input file adds the two nodes, hollis and eastsun-11, into the replication group governed by orclagreementid 000001.
Because this release of the Oracle Directory Replication Server supports only one configuration set, you do not need to specify a configuration set.
Type the following command:
oidctl connect=db_connection_string server=oidrepld instance=1
flags="-h host -p port" start
You can turn off change-logging, which occurs in the Oracle Internet Directory server, by toggling the default value of the -l flag in the line-mode run command for Oracle Directory Server from true to false. This is useful if you suspect that the change-log file might not be emptying. However, turning change-logging off on a given node means that updates on that node cannot be replicated to other nodes in the DRG.
You can turn off the multi-master flag, which occurs in the replication server, by toggling the default value of the -m flag in the line-mode run command for Oracle Directory Server from true to false. This is useful for reducing performance overhead if you are deploying a single master with read-only replica consumers. The multi-master option controls conflict resolution, which serves no purpose if you are deploying a single master.
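The two flag settings above can be sketched with oidctl as follows. The placement of -l and -m inside the flags string mirrors the oidctl examples elsewhere in this chapter, but treat the exact flag syntax as an assumption to verify against your release; -l applies to the directory server (oidldapd) and -m to the replication server (oidrepld):

```
oidctl connect=net_service_name server=oidldapd instance=1 flags="-p port -l false" start

oidctl connect=net_service_name server=oidrepld instance=1 flags="-h host -p port -m false" start
```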
There are two ways to add a new node to a replication network. The easier of the two procedures is described in this section. Use this procedure unless your directory is very large. If your directory has more than a million entries, use the method described in Appendix B.
Note: Use the procedure in this section to add a new node to an existing replication group. To create a new replication group, follow the instructions in "Installing and Configuring Replication".
Note: Before you add a replication node, prepare the Net8 environment. For instructions, see "Prepare the Net8 Environment for Replication".
To add a replication node to a directory with fewer than a million entries, follow these steps, each of which is more fully described in the next few pages.
Run the following command against each node in the LDAP replication group:
oidctl connect=db_connect_string server=oidrepld instance=1 stop
Create an LDIF file, such as add_node.ldif
, as in the following example:
dn: orclagreementid=000001, cn=orclreplagreements
changetype: modify
replace: orcldirreplgroupdsas
orcldirreplgroupdsas: host_name_of_the_new_node
orcldirreplgroupdsas: host_name_of_old_node1
orcldirreplgroupdsas: host_name_of_old_node2
.
.
.
orcldirreplgroupdsas: host_name_of_old_noden
Run the following command against each node in the LDAP replication group:
ldapmodify -h host_name_of_the_node -p port -f add_node.ldif
A sponsor node is one that will supply the data to the new node.
Edit change_mode.ldif to the following:

dn:
changetype: modify
replace: orclservermode
orclservermode: r
Run the following commands against the identified sponsor node:
ldapmodify -D "cn=orcladmin" -w welcome -h host_name_of_sponsor_node -p port -f change_mode.ldif

oidctl connect=db_connection_string server=oidldapd restart
This restarts all running LDAP servers on the sponsor in read-only mode. It takes approximately fifteen seconds for a directory server to restart.
Because this may take a long time, you may begin "Step 5: Perform ASR Add Node Setup" while the backup is in progress.
You can back up the sponsor node in one of two ways:
This method supports filter-based backup, and the process can be fully automated. The generated file can be used for partial replication. However, backup may take up to seven hours for a directory with one million entries. Enter the following command:
ldifwrite -c db_connect_string -b "" -f output_ldif_file
This method, described in Appendix B, cannot be fully automated, and cannot be reused for partial replication. However, cold backup takes much less time for a directory with one million entries.
You can perform this step at the same time as you are performing "Step 4: Back Up the Sponsor Node by Using ldifwrite".
From the MDS, run the following script:
ldaprepl.sh -addnode
This script executes a number of operations.
As the script runs, it asks for the information in Table 10-6, first for the Master Definition Site then for the existing master sites.
When you have identified all the existing master sites, enter N. The script then asks for information regarding the new node. Once you have provided that information, the script shows you a table of the information you have provided, and asks for confirmation.
If the information is not correct, press N. The script then starts again at the beginning, asking the same information. If the information is correct and you enter Y, the script begins configuring the sites.
Table 10-6 lists and describes the information for which the script prompts you.
Information | Description
---|---
Host Name of MDS or master site | Name of the computer
Global name | Net service name of the MDS or master site database, as listed in
system password | system password
This process may take a long time, depending on your system resources and the size of your DRG. The script keeps you informed of its progress.
Troubleshooting Tip: If the process fails, do the following:
Run the above command for each node in the DRG. Issuing this command should result in no rows being selected. If rows containing the status [failed] and error messages are selected, then ASR setup failed. In this case, you may:
Edit change_mode.ldif to the following:

dn:
changetype: modify
replace: orclservermode
orclservermode: rw
Run the following commands on the sponsor node:
ldapmodify -D "cn=orcladmin" -w welcome -h host_name_of_sponsor_node -p port -f change_mode.ldif

oidctl connect=db_connection_string server=oidldapd restart
Do this by entering the following command:
oidctl connect=db_connection_string server=oidrepld instance=1
flags="-h host -p port" start
Verify that no processes are running on the new node.
Do this by entering the following command:
bulkload.sh -connect db_connect_string_of_new_node -generate -load
-restore absolute_path_to_the_ldif_file_generated_by_ldifwrite
Do this by entering the following command:
oidctl connect=db_connect_string_of_new_node server=oidldapd
instance=1 flags="-p port" start
Create an LDIF file, such as add_node.ldif
, as in the following example:
dn: orclagreementid=000001, cn=orclreplagreements
changetype: modify
add: orcldirreplgroupdsas
orcldirreplgroupdsas: host_name_of_the_new_node
orcldirreplgroupdsas: host_name_of_old_node1
orcldirreplgroupdsas: host_name_of_old_node2
.
.
.
orcldirreplgroupdsas: host_name_of_old_noden
Run the following command against the new node:
ldapmodify -h host_name_of_the_new_node -p port -f add_node.ldif
Do this by running the following command:
oidctl connect=db_connect_string_of_new_node server=oidrepld instance=1
flags="-h host_name_of_new_node -p port" start
Multi-master replication enables updates to multiple directory servers. Conflicts are detected whenever the replication server attempts to apply remote changes from a supplier to a consumer that holds conflicting data.
Four kinds of LDAP operations can lead to conflicts:
These kinds of operations can be grouped into two categories described in the following sections:
Entry-level conflicts are caused when the replication server attempts to apply a change to the consumer directory. Such a change could be one of the following:
These conflicts can be difficult to resolve because the reasons for their existence may be complex. For instance, if an entry does not exist, and this causes a replication conflict to be logged, that may mean:
If an entry exists and it should not, that may mean:
Attribute-level conflicts are caused when two directories are updating the same attribute with different values at different times. If the attribute is single-valued, the replication process resolves the conflict by examining the timestamps of the changes involved in the conflict.
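The timestamp comparison for a single-valued attribute can be sketched as follows. The timestamp format and variable names here are illustrative only; the actual resolution logic is internal to the replication server:

```shell
# Last-writer-wins sketch for a single-valued attribute conflict.
# Timestamps are in a sortable YYYYMMDDHHMMSS form, so a plain
# string comparison orders them chronologically.
change_ts="19990803105905"   # timestamp of the incoming (supplier) change
target_ts="19990803105900"   # timestamp of the consumer's current value
if [ "$change_ts" \> "$target_ts" ]; then
  echo "apply supplier change"   # the newer modification wins
else
  echo "keep consumer value"
fi
```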
Conflicts usually stem from the timing of changes arising from the occasional slowness or transmission failure over wide-area networks. Also, an earlier inconsistency might continue to cause conflicts if it is not resolved in a timely manner.
The Oracle Directory Server replication application attempts to resolve all conflicts that it encounters, utilizing the following logic:
orclupdateschedule parameter in the replication agreement multiplied by 100. Before it moves the change, the replication server writes the conflict into a log file for the System Administrator.
If a conflict has been written into the log file, the system was unable to resolve it by following its resolution procedure. To avoid further replication change conflicts arising from earlier unapplied changes, it is important to monitor the log files regularly.
To monitor replication change conflicts, examine the contents of the replication log file. New conflict records are written to the file. You can distinguish between messages by the timestamp.
Conflict resolution messages, samples of which are shown below, are logged in file oidrepld00.log. The result of each attempt to resolve the replication conflict is displayed at the end of each conflict resolution message.
1999/08/03::10:59:05: ************ Conflict Resolution Message ************
1999/08/03::10:59:05: Conflict reason: Attempted to modify a non-existent entry.
1999/08/03::10:59:05: Change number:1306.
1999/08/03::10:59:05: Supplier:eastlab-sun.
1999/08/03::10:59:05: Change type:Modify.
1999/08/03::10:59:05: Target DN:cn=ccc,ou=Recruiting,ou=HR,ou=Americas,o=IMC,c=US.
1999/08/03::10:59:05: Result: Change moved to low priority queue after failing on 10th retry.
1999/08/03::10:59:05: ************ Conflict Resolution Message ************
1999/08/03::10:59:05: Conflict reason: Attempted to add an existing entry.
1999/08/03::10:59:05: Change number:1209.
1999/08/03::10:59:05: Supplier:eastlab-sun.
1999/08/03::10:59:05: Change type:Add.
1999/08/03::10:59:05: Target DN:cn=Lou Smith, ou=Recruiting, ou=HR, ou=Americas, o=IMC, c=US.
1999/08/03::10:59:05: Result: Deleted duplicated target entry which was created later than the change entry. Apply the change entry again.
1999/08/03::10:59:06: ************ Conflict Resolution Message ************
1999/08/03::10:59:06: Conflict reason: Attempted to delete a non-existent entry.
1999/08/03::10:59:06: Change number:1365.
1999/08/03::10:59:06: Supplier:eastlab-sun.
1999/08/03::10:59:06: Change type:Delete.
1999/08/03::10:59:06: Target DN:cn=Lou Smith,ou=recruiting,ou=hr,ou=americas,o=imc,c=us.
1999/08/03::10:59:06: Result: Change moved to low priority queue after failing on 10th retry.
This section describes how the automated replication process adds, deletes, and modifies entries, and how it modifies DNs and RDNs. It covers topics in the following subsections:
When it successfully adds a new entry to a consumer, the replication server follows this change application process:
If the change is not successfully applied on the first try:
The replication server places the new change in the Retry Queue, sets the number of retries to the configured maximum, and repeats the change application process.
If the change is not successfully applied on all but the last retry:
The replication server keeps the change in the Retry Queue, decrements the number of retries, and repeats the change application process.
If the change is not successfully applied on the last retry:
The replication server applies the following conflict resolution rules:
If the change entry wins, then the target entry is removed, the change is applied, and the change entry is placed in the Purge Queue.
If the target entry wins, then the change entry is placed in the Purge Queue.
The replication server places the change in the Human Intervention Queue, and repeats the change application process at specified intervals.
If the change is not successfully applied after it has been placed in the Human Intervention Queue:
The replication server keeps the change in this queue, and repeats the change application process at specified intervals while awaiting action by the administrator.
When it deletes an entry from a consumer, the replication server follows this change application process:
If the change is not successfully applied on the first try:
The replication server places the change in the Retry Queue, sets the number of retries to the configured maximum, and repeats the change application process.
If the change is not successfully applied on all but the last retry:
The replication server keeps the change in the Retry Queue, decrements the number of retries, and repeats the change application process.
If the change is not successfully applied on the last retry:
The replication server places the change in the Human Intervention Queue and repeats the change application process at specified intervals.
If the change is not successfully applied after it has been placed in the Human Intervention Queue:
The replication server keeps the change in this queue, and repeats the change application process at specified intervals while awaiting action by the administrator.
When it modifies an entry in a consumer, the replication server follows this change application process:
If the change is not successfully applied on the first try:
The replication server places the change in the Retry Queue, sets the number of retries to the configured maximum, and repeats the change application process.
If the change is not successfully applied on all but the last retry:
The replication server keeps the change in the Retry Queue, decrements the number of retries, and repeats the change application process.
If the change is not successfully applied by the last retry:
The replication server places the change in the Human Intervention Queue and repeats the change application process at specified intervals.
If the change is not successfully applied after it has been placed in the Human Intervention Queue:
The replication server keeps the change in this queue, and repeats the change application process at specified intervals while awaiting action by the administrator.
When it modifies the RDN of an entry in a consumer, the replication server follows this change application process:
If the change is not successfully applied on the first try:
The replication server places the change in the Retry Queue, sets the number of retries to the configured maximum, and repeats the change application process.
If the change is not successfully applied on all but the last retry:
The replication server keeps the change in the Retry Queue, decrements the number of retries, and repeats the change application process.
If the change is not successfully applied on the last retry:
The replication server places the change in the Human Intervention Queue and checks to see if it is a duplicate of the target entry.
The replication server applies the following conflict resolution rules:
If the change entry wins, then the target entry is removed, the change is applied, and the change entry is placed in the Purge Queue.
If the target entry wins, then the change entry is placed in the Purge Queue.
The replication server places the change in the Human Intervention Queue, and repeats the change application process at specified intervals.
If the change is not successfully applied after it has been placed in the Human Intervention Queue:
The replication server keeps the change in this queue, and repeats the change application process at specified intervals while awaiting action by the administrator.
When it modifies the DN of an entry in a consumer, the replication server follows this change application process:
The replication server also looks in the consumer for the parent DN with a GUID that matches the GUID of the new parent specified in the change entry.
If the change is not successfully applied on the first try:
The replication server places the change in the Retry Queue, sets the number of retries to the configured maximum, and repeats the change application process.
If the change is not successfully applied on all but the last retry:
The replication server keeps the change in the Retry Queue, decrements the number of retries, and repeats the change application process.
If the change is not successfully applied by the last retry:
The replication server places the change in the Human Intervention Queue and checks to see if it is a duplicate of the target entry.
The replication server applies the following conflict resolution rules:
If the change entry wins, then the target entry is removed, the change is applied, and the change entry is placed in the Purge Queue.
If the target entry wins, then the change entry is placed in the Purge Queue.
The replication server places the change in the Human Intervention Queue, and repeats the change application process at specified intervals.
If the change is not successfully applied after it has been placed in the Human Intervention Queue:
The replication server keeps the change in this queue, and repeats the change application process at specified intervals while awaiting action by the administrator.
Copyright © 1999 Oracle Corporation. All Rights Reserved.