Sun Directory Services 3.1 Administration Guide

Chapter 5 Loading and Maintaining Directory Information

This chapter describes how to use the command-line utilities supplied with Sun Directory Services to populate and maintain your database of information. To initialize the database, you first create the root entry, then populate the directory with entries, typically by performing a bulk load with the dsimport utility.

Creating the Root Entry

You cannot add entries to your data store before you have created the root entry for the data store. The root entry is the top entry of the tree held by the data store. It identifies the data store. In Sun Directory Services, you can actually have up to four root entries that identify the data store and that correspond to the four possible data store suffixes that you can declare in the Admin Console.

To create the root entry, create a simple LDIF file containing the entry information, and add it to the database using the ldapadd command. An example of this procedure is given in "To Create the Root Entry for XYZ Corporation".

You can also create the root entry manually using Deja. The procedure for adding entries using Deja is explained in Sun Directory Services 3.1 User's Guide.


Note -

The root entry is created automatically if it does not already exist when you first load entries in the directory using the dsimport command.


To Create the Root Entry for XYZ Corporation

  1. Create an LDIF file called root-file that contains:

    dn: o=XYZ, c=US
    objectClass: organization
    o: XYZ

    The LDIF file format is described in detail in the ldif(4) man page.

  2. Add this file using ldapadd(1):

    prompt% ldapadd -c -D "cn=admin-cn, o=XYZ, c=US" -w admin-pw -f root-file

    where:

    • -c specifies to continue processing even if errors occur.

    • -D introduces the distinguished name of the data store administrator. The DN must be given in quotes because it is likely to contain blank spaces.

    • -w introduces the administrator password.

    • -f introduces the file holding the information to add to the database.

    If you want to avoid your password showing up in a command listing, you can omit the -w option. The ldapadd command will prompt you for your password.

    The root entry now exists.
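The two steps above can be scripted. The following is a minimal sketch that writes the LDIF file and shows the ldapadd call, which is commented out here because it needs a running server; the o attribute is included because it is the mandatory naming attribute of the organization object class:

```shell
# Sketch of the root-entry procedure above. Writes root-file in the
# current directory; the admin DN and password are the placeholders
# used in this chapter.
cat > root-file <<'EOF'
dn: o=XYZ, c=US
objectClass: organization
o: XYZ
EOF

# With a running server, you would then load the file:
# ldapadd -c -D "cn=admin-cn, o=XYZ, c=US" -w admin-pw -f root-file

cat root-file
```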

Populating the Directory

If you do not want to create directory entries manually, you can populate the directory using the dsimport bulk load utility. This utility creates directory entries from any text file in which one line corresponds to one directory entry. You must create a mapping file that specifies the semantics for the information provided in each line of the input file. You might also need to create an LDAP object class and attributes that are specific to the type of information you want to store in the directory.

Refer to "Mapping Syntax and Semantics" for information on the structure and content of a mapping file. A complete example of creating a mapping file and using dsimport is given in "Example: Using dsimport".

For details on all the options of dsimport(1m), refer to the man page.

The dsimport utility is also used during the initialization of the NIS service to import all the information stored in NIS files into the LDAP directory. When you run the dsypinstall script to configure Sun Directory Services as an NIS server, the NIS information available on your server is automatically added to your directory database through a call to dsimport. The mapping of NIS files into LDAP object classes and attributes is described in the nis.mapping file in the directory /etc/opt/SUNWconn/ldap/current/mapping. For full details on importing NIS information into the directory, see "Initializing the Sun Directory Services NIS Service".


Note -

The information mapping described in the radius.mapping file in the directory /etc/opt/SUNWconn/ldap/current/mapping is used to perform RADIUS searches in the LDAP directory, not to import RADIUS information.


Mapping Syntax and Semantics

The mapping syntax and semantics are designed to provide maximum flexibility in describing how your source information maps onto LDAP object classes and attributes.

If this involves modifying or creating an object class with the attributes that you need, refer to "Modifying the Schema".

A mapping file is made up of a number of sections that conform to the following pattern:

Front-end name
	Common
	Table
		Common
		Dynamic
		Export
			Extract
			Condense
			Build
		Import
			Extract
			Condense
			Build
	Table
		Common
		Dynamic
		Export
			Extract
			Condense
			Build
		Import
			Extract
			Condense
			Build
...

The content and meaning of each section is described in "Mapping Semantics". The syntactic rules for each section are described in "Mapping Syntax".

Mapping Semantics

Front-end name indicates the name of the service. All the information that follows that name describes the mapping of service-specific information to LDAP object classes and attributes.

The first Common section immediately following the front-end name gives configuration information that applies to the front-end or service. It contains mandatory configuration variables that are required in the translation process, and optional configuration variables that are stored in the same file for convenience. In the nis.mapping and radius.mapping files, this section can be modified through the Admin Console.

The Table section defines mapping information for a particular type of information. The mapping information determines the object class of all entries created using that table definition. Each table definition is composed of the following sections: Common, Dynamic, Export, and Import.

The Dynamic section is the only one that is mandatory. Without it, neither import nor export operations work. The other sections can be omitted if you do not need them. For instance, if you never intend to export information from the directory, you do not need to create an Export section.

Each section contains keywords and definitions used in the import or export process. Table 5-1 provides a list of mapping keywords, the sections in which they can occur, and their purpose.

In any section, you can create variables or tokens, that is, private definitions, by using the following format:

tokenT=token definition

Your private definitions can use the syntax and functions described in "Condense".

Table 5-1 Summary of Mapping File Keywords

Common section:

  BASE_DN (mandatory, but can be specified in the Dynamic section)
    Specifies a naming context. See "BASE_DN".

  MAP_NAME (mandatory for an NIS table definition)
    Indicates the name of the NIS table corresponding to the table definition. See "MAP_NAME".

  PRIVATE_OBJECTCLASSES (mandatory when the object class is not unique)
    Used for updates on entries created from several table definitions. See "PRIVATE_OBJECTCLASSES".

Dynamic section:

  ALL_FILTER (mandatory)
    Defines a filter for identifying all entries created using the table definition. See "ALL_FILTER".

  DC_NAMING (optional)
    Defines the mechanism for converting a domain name to an LDAP dc name structure. See "DC_NAMING".

  LINE (mandatory)
    Defines the decomposition of input information. See "LINE".

  MATCH_FILTER (mandatory)
    Defines a filter for identifying a particular entry created using the table definition. See "MATCH_FILTER".

Export/Build section:

  LINE (mandatory if the Export section exists)
    In the export file, defines the format of a line composed of LDAP attributes. See "Export Section".

  NIS_KEY (mandatory for NIS)
    Identifies the NIS key in the export file.

  NIS_VALUE (mandatory for NIS)
    Identifies the NIS value in the export file.

Import/Extract section:

  LINE (mandatory if the Import section exists)
    Defines the decomposition of input information. See "Import Section".

Common Section

The Common section contains definitions of variables that apply to all the entries created using that table definition but not to the entire service or front-end. For example, the Common section typically contains the naming context under which the entries are created. The naming context is specified using the BASE_DN keyword.

BASE_DN

The BASE_DN keyword specifies the naming context under which the entries are to be created. The dsimport utility looks for this parameter in several places, in the following order:

  1. Command line of dsimport, option -V

  2. Dynamic section

  3. Common section for the Table

  4. Common section for the Front-End (at the beginning of the mapping file)
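The lookup order amounts to "first non-empty source wins". The following hypothetical shell sketch illustrates the precedence; the variable names are illustrative, not dsimport internals:

```shell
# Return the first non-empty BASE_DN source, in the order dsimport
# checks them: command line, Dynamic section, table-level Common
# section, front-end Common section. Names are illustrative only.
resolve_base_dn() {
    for candidate in "$CMDLINE_BASE_DN" "$DYNAMIC_BASE_DN" \
                     "$TABLE_BASE_DN" "$FRONTEND_BASE_DN"; do
        if [ -n "$candidate" ]; then
            printf '%s\n' "$candidate"
            return 0
        fi
    done
    return 1
}

# Only the table-level Common section defines BASE_DN here:
TABLE_BASE_DN="ou=People, o=XYZ, c=US"
resolve_base_dn
```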

MAP_NAME

The MAP_NAME keyword specifies the name of the NIS map corresponding to the table definition. This keyword is used to create administrative entries for the NIS service. The directory server maintains these entries automatically.

This keyword is used also to create the naming context for the NIS entries that are created by using the generic mapping definition.

The MAP_NAME keyword is specific to the NIS service.

PRIVATE_OBJECTCLASSES

The PRIVATE_OBJECTCLASSES keyword specifies an object class when the object class and attributes derived from a table definition do not make up a complete entry. This keyword is necessary for maintaining directory entries that are created from several table definitions. This can be the case when several table definitions each create an auxiliary object class and its associated attributes.

For example, in the NIS environment, network hosts can have entries in at least three files: /etc/bootparams, /etc/ethers, /etc/hosts. However, each host has just one entry in the LDAP directory, with the three auxiliary object classes bootableDevice, ieee802Device, and ipHost. If the entry for the host is deleted in one of these files, the corresponding entry in the LDAP directory must not be deleted but simply updated by removing the appropriate auxiliary object class, and any attributes specific to that object class.
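As an illustration, such a host entry combines the three auxiliary object classes on a single structural entry. The following LDIF sketch uses a hypothetical host name, addresses, and container; the attribute names are the standard NIS schema attributes associated with these object classes:

```ldif
dn: cn=camembert, ou=Hosts, o=XYZ, c=US
objectClass: top
objectClass: device
objectClass: bootableDevice
objectClass: ieee802Device
objectClass: ipHost
cn: camembert
bootParameter: root=server:/export/root/camembert
macAddress: 8:0:20:aa:bb:cc
ipHostNumber: 129.156.10.10
```

If the host is later removed from /etc/ethers only, just the ieee802Device class and the macAddress attribute would be removed from this entry.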

Dynamic Section

The Dynamic section contains equations that make it possible to dynamically build the filters required to locate relevant information.

LINE

The LINE keyword is necessary to define how the input information must be dynamically decomposed to provide the elements required in the MATCH_FILTER and ALL_FILTER definitions.

The syntax of the LINE keyword is given in "Extract".

MATCH_FILTER

The MATCH_FILTER keyword specifies a filter that is used by the dsimport utility to check whether an entry already exists in the database before creating it. If it exists, the dsimport utility will check whether it needs to be modified.

The MATCH_FILTER keyword is also used by the directory server to respond to commands such as ypmatch.

ALL_FILTER

The ALL_FILTER keyword specifies a filter that is used by the dsexport command to regenerate the file from which the directory entries were originally created. This filter is necessary even if you do not intend to export information from the directory to regenerate the source file for that information.

The ALL_FILTER keyword is used by the directory server to retrieve from the directory all entries that belong to a given NIS table. This is because the directory server maintains a permanently up-to-date copy of the NIS tables.

The ALL_FILTER keyword is also used by the directory server to respond to commands such as ypcat.

DC_NAMING

The DC_NAMING keyword defines the mechanism applied to convert a domain name of the form xyz.com to an LDAP data store suffix or naming context of the form dc=xyz, dc=com. This is useful if the naming structure that you use in your directory is a domain component (dc) structure.

Export Section

The Export section provides the method for regenerating a source file from LDAP directory entries. This section is optional. When it exists, it must contain the keyword LINE. The LINE keyword in the Export section must reflect the format of a line in the original source file.

The Export section contains the following subsections: Extract, Condense, and Build.

In the nis.mapping file, the Build subsection defines the rules for constructing an NIS key/NIS value pair; it also defines the rules for generating the line in the NIS file corresponding to the LDAP directory entry.

Import Section

The Import section provides the method for translating a line in an input file into an LDAP directory entry. This section must contain a LINE keyword that defines how a line in the input file can be decomposed into elements that can be described by LDAP attributes. It must also contain the list of LDAP attributes that are created from a line in the input file.

The Import section contains the following subsections: Extract, Condense, and Build.

In the nis.mapping file, the LINE definition in the Extract subsection specifies the rules for analyzing a line in an NIS source file into smaller units of information called NIS tokens.

Mapping Syntax

This section describes the syntax of the variables or tokens that you can create in each section of a table definition.

The mapping syntax is described using examples from the nis.mapping file.

Common

The variables defined in the Common section other than the keywords listed in Table 5-1 must follow this syntax:

variable-name=value

Variables defined in the Common section contain static configuration information.

Dynamic

The variables defined in the Dynamic section other than the keywords listed in Table 5-1 follow the same syntax as the variables defined in the Common section. However, their values are supplied in the input to the utility (such as dsimport or dsexport) that uses the mapping file during its execution.

Extract

The variables defined in the Extract section define the rules for decomposing input information into smaller units of information, called tokens, that can be directly mapped onto LDAP attributes, or that require simple processing in order to be mapped onto LDAP attributes.

The syntax of a variable that defines a decomposition into tokens is:

VARIABLE => $element1 separator $element2 [separator $elementn...|| ...]

The separator between tokens is the separator expected in the input information. It could be white space, a comma, a colon or any other character. However, one space in the line definition will match any number of spaces or tabs in the actual input information. You can specify several alternatives for the decomposition, by using two pipe symbols (||) to introduce each alternative rule.

The conversion process examines the rules in the order in which they are specified, and applies the first one that matches the information it was given in input.

For example, in nis.mapping, the following definition extracts tokens from a line in the bootparams file:

LINE =>$ipHostNameT $parametersT

The hosts file provides a slightly more complex example:

LINE =>$dummy $ipHostNumberT $ipHostNameT $allIpHostAliasesT*#$descriptionT*||\
	        $dummy $ipHostNumberT $ipHostNameT $allIpHostAliasesT||\
	        $dummy $ipHostNumberT $ipHostNameT

In these examples, the tokens parametersT and allIpHostAliasesT require further processing before they can be mapped onto LDAP attributes. The processing required is defined in the Condense section.
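The token decomposition in the bootparams rule behaves much like positional word splitting in a shell. The following is a rough sketch (not the actual dsimport parser), using a hypothetical bootparams line:

```shell
# Split one bootparams-style line into $ipHostNameT (the first word)
# and $parametersT (everything after it), as the rule
# LINE => $ipHostNameT $parametersT does. The sample line is
# hypothetical.
line="camembert root=server:/export/root/camembert swap=server:/export/swap"
# 'read' assigns one word per variable; the last variable takes the rest.
read -r ipHostNameT parametersT <<EOF
$line
EOF
echo "host:   $ipHostNameT"
echo "params: $parametersT"
```

Note how any run of spaces or tabs between the fields is accepted, matching the rule that one space in the line definition matches any amount of white space in the input.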

Condense

The Condense section contains variables that define operations on tokens resulting from the Extract section, or any previously defined variable in the table definition.

It simplifies the attribute value definitions given in the Build section.

Variables defined in the Condense section can contain references to tokens from the Extract section, references to previously defined variables, literal text, and calls to the functions described in the following sections.

Variables in the Condense section can be made up of several alternative rules. The conversion process applies the first rule that matches the input information. The rules must be separated by two pipe symbols, and must all be part of the same expression. For example, the following expression is permitted:

fifi=$parameter1 - $parameter2 || $parameter1 || juju

whereas, the following expression is not:

fifi=$parameter1 - $parameter2
fifi=$parameter1
fifi=juju

You can define any number of variables in the Condense section. The order in which they are listed is important if you create dependencies between them. For instance, you can have:

fifi=$parameter1 - $parameter2 || $parameter1 || juju
riri=$fifi - $parameterA
loulou=$fifi - $parameterB

split Function

The syntax of the split function is as follows:

variableA=split(what, "separator", "add_prefix", "add_suffix", order)

where:

variableA identifies the variable

what identifies the unit of information, variable or parameter, to which the operation applies

separator indicates where to split the information. This value must be specified between quotes because it could contain a space.

add_prefix specifies a prefix to add to each item resulting from the split. This value must be specified between quotes because it could contain a space.

add_suffix specifies a suffix to add to each item resulting from the split. This value must be specified between quotes because it could contain a space.

order specifies the order in which the items resulting from the split are to be presented. The possible values for this parameter are left2right or right2left.

For example, in the nis.mapping file, the following variable definition is used to split an NIS domain name into a sequence of LDAP domain component attributes:

DC_NAMING=split($DOMAIN_NAME, ".", "dc=", ",", left2right)

If the domain name specified is eng.europe.xyz.com, the resulting expression is:

dc=eng, dc=europe, dc=xyz, dc=com
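The effect of this particular split call can be reproduced with standard text tools. The following sed one-liner is a sketch of the transformation, not how dsimport implements split:

```shell
# Replace each dot with ", dc=" and prefix the first component,
# turning a domain name into a dc-style naming context.
echo "eng.europe.xyz.com" | sed 's/\./, dc=/g; s/^/dc=/'
# → dc=eng, dc=europe, dc=xyz, dc=com
```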

string2instances Function

The string2instances function breaks down a specified string into instances. The syntax for this operation is:

variableA=string2instances("string", "separator")

where:

variableA identifies the variable

string identifies the unit of information, variable or parameter, to which the operation applies. This value must be specified between quotes because it could contain a space.

separator indicates where to split the information into instances. This value must be specified between quotes because it could contain a space.

For example, in nis.mapping, the following definition in the Condense section of the bootparams file breaks down a string of parameters into separate instances:

bootParameterT=string2instances($parametersT," ")

The string2instances function is also used to specify the inheritance tree for an object class. For example, if the object class of an entry created using a particular mapping definition is organizationalPerson, the Condense section of the mapping definition must contain the line:

objectClassT=string2instances("top person organizationalPerson", " ")
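A hypothetical shell parallel for breaking a quoted string into separate instances (illustrative only, not the directory's implementation):

```shell
# Break "top person organizationalPerson" at each space separator,
# producing one instance per line, as string2instances does with
# " " as the separator.
echo "top person organizationalPerson" | tr ' ' '\n'
```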

instances2string Function

The instances2string function combines several instances into a single string. The syntax for this operation is:

variableA=instances2string(what, "separator")

where:

variableA identifies the variable

what is a variable that has a number of instances

separator marks the separation between the elements of the string. This value must be specified between quotes because it could be a space.

For example, you could use the following variable to find the list of names and alias names for a given machine:

NameList=instances2string($cn, " ")

If the cn attribute has the values camembert, Cam, Bertie, the resulting string would be:

camembert Cam Bertie
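A hypothetical shell equivalent of this combination, using the same three values (illustrative only, not how the directory implements instances2string):

```shell
# Join three instance values into one space-separated string,
# as instances2string does with " " as the separator.
printf '%s\n' camembert Cam Bertie | paste -s -d ' ' -
# → camembert Cam Bertie
```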

trim Function

The trim function removes any unnecessary white space surrounding a parameter. The syntax for the trim operation is:

variableA=trim(parameter)

where:

variableA identifies the variable

parameter is the item from which white space must be removed

For example, if you decompose an alias list into its constituent members, you could define the following variables:

aliasMember=string2instances($aliasList, ",")
trimAliasMember=trim($aliasMember)

Each aliasMember parameter resulting from the string2instances operation is processed to remove any white spaces.
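The trim step can be pictured with sed (an illustrative sketch, not the dsimport implementation):

```shell
# Remove leading and trailing white space from one value,
# as the trim function does.
echo "   cam   " | sed 's/^[[:space:]]*//; s/[[:space:]]*$//'
# → cam
```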

getrdn Function

The getrdn function returns the naming attribute of an entry, that is, the attribute used in the entry's RDN. The syntax for the getrdn operation is:

variableA=getrdn()

Note -

The getrdn function can only be used in variables in the Condense section.


For example, the cn attribute of a machine has the values camembert, Cam, Bertie, but the actual system name of the machine, used in the RDN, is camembert. You could create the following variable:

HostName=getrdn()

The getrdn function returns the name camembert.


Note -

The getrdn function is case-sensitive.


exclude Function

The exclude function removes a value from a list or a string. The syntax for this operation is:

variableA=exclude(string, exclude-value, "separator")

where:

variableA identifies the variable

string identifies the list or string

exclude-value is the value to exclude

separator marks the separation between the elements of the list or string. This value must be specified between quotes because it could be a space.

For example, to obtain the list of aliases for a machine, you need to exclude the canonical name from the list of names. You could create the following variables:

NameList=instances2string($cn, " ")
HostName=getrdn()
HostAliases=exclude($NameList, $HostName, " ")

In nis.mapping, the Condense section of the hosts mapping definition contains:

ipHostAliasesLineT=exclude($allIpHostAliasesT,$ipHostNameT, " ")

This definition excludes the ipHostName from the list of alias names for the host.
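The same exclusion can be sketched with standard shell tools, using the camembert example above (illustrative only):

```shell
# Remove the canonical name from a space-separated name list,
# leaving only the aliases, as the exclude call does.
NameList="camembert cam bertie"
HostName="camembert"
echo "$NameList" | tr ' ' '\n' | grep -v -x "$HostName" | paste -s -d ' ' -
# → cam bertie
```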

Build

The Build subsection contains a list of LDAP attributes and the definitions of their values. It must contain at least all mandatory attributes for an object class, and the DN. If the DN definition is missing from the Build section, the entries cannot be created in the directory.


Note -

You do not need to specify a DN definition in the Build sections of the radius.mapping file because this file is not used to import entries into the directory.


Attribute value definitions can be made up of variables defined in the Extract and Condense sections, combined with literal text.

The syntax of an LDAP attribute and its associated value definition in the Build section is as follows:

LDAPattribute=attributeValueDefinition

For example, if you wanted to create an entry for a mail alias, and use the LDAP attribute rfc822mailMember to store the names of alias members, your mapping would contain the following definitions:

Condense:
	aliasMember=string2instances($aliasList, ",")
	trimAliasMember=trim($aliasMember)
...
Build:
	rfc822mailMember=$trimAliasMember
...

Example: Using dsimport

This example describes how to create a mapping file, and use dsimport to perform a bulk load of information stored in a text file.

Input File

The file containing the information to import into the LDAP directory could be an extract from a corporate online directory service that provides basic information about employees.

This file is shown in Example 5-1.


Example 5-1 Example of Input File

Rob Green, rgreen@london.XYZ.com, phone x 44 1234, marketing communications manager
Jean White, jwhite@london.XYZ.com, phone x44 1123, documentation manager
Susan Brown, sbrown@london, phone (44) 123 45 67 00, technical writer
Karen Gray, kgray@london, tel (44) 123 45 67 01, engineering project manager
Steve Black, sblack@eng, x44 1122, software development engineer
Felipe Diaz Gonzalez, fdgonzalez@eng, x41 2233, software development engineer
Anne Marie de la Victoire, amvictoire@paris.xyz.com, x33 3344, software support engineer
DURAND Pierre, pdurand@paris, tel 33 1133, software support engineer

In this file, there is one employee definition per line. On each line, the information is ordered as follows: full name, e-mail address, telephone number, and job description.

The level of information is not always consistent for the various employees: the e-mail address is not always fully qualified, the telephone number is not always a complete telephone number but an extension, and in one case the last name is given before the first name.

If you want a consistent level of information for the entries that will be created in the LDAP directory, you must either make the necessary corrections in the source file, or make them after the import operation using the Deja tool.

Mapping File

The intention of the directory administrator is to create all employee entries under the naming context ou=People, o=XYZ, c=US. The token that specifies this in the mapping file is the BASE_DN token in the Common section.

The information will be imported just once. Therefore, it is not necessary to define an Export section in the mapping file. The Dynamic section is mandatory. The object class definition in the Condense section is also mandatory.

The mapping file created by the directory administrator is shown in Table 5-2.

Table 5-2 Example of Mapping File
# This example file for dsimport contains employee entries
# to be created as inetOrgPerson objects in the LDAP directory.

Front-End: EXAMPLE

Table: People

	Common:
		BASE_DN=ou=People, o=XYZ, c=US

	Dynamic:
		LINE => $cn, $rest
		MATCH_FILTER=(&(objectClass=inetOrgPerson)(cn=$cn))
		ALL_FILTER=(&(objectClass=inetOrgPerson)(cn=*))

	Import:
		Extract:
			LINE => $fn $ln, $mail, $telephoneNumber, $description
		Condense:
			cn=$fn $ln
			snT=$ln
			objectClassT=string2instances("top person organizationalPerson inetOrgPerson", " ")
		Build:
			dn=cn=$cn, $BASE_DN
			sn=$snT
			mail=$mail
			telephoneNumber=$telephoneNumber
			description=$description
			objectClass=$objectClassT

The Condense section contains the inheritance tree for the inetOrgPerson object class.

The Build section contains all the mandatory attributes pertaining to or inherited by the inetOrgPerson object class. It also contains the optional attributes pertaining to this object class that the directory administrator required.

Running dsimport

To import the file described in "Input File" using the mapping file described in "Mapping File", you can use dsimport with the following arguments:

# dsimport -h hostname -D cn=admin,o=xyz,c=us -w secret -m mapping.file \
  -f EXAMPLE -t People input.file

where:

• -h specifies the host on which the directory server is running

• -D introduces the distinguished name of the directory administrator

• -w introduces the administrator password

• -m introduces the mapping file

• -f introduces the name of the front-end defined in the mapping file, in this example EXAMPLE

• -t introduces the name of the table definition to use, in this example People

• input.file is the file containing the information to import

It is not strictly necessary to specify the DN and password of the administrator on the command line. If you omit these parameters, dsimport will read them from the dsserv.conf file. The advantage is that the DN and password of the administrator will not be displayed by the ps command.

After running this command, the following message is displayed:

Lines read: 9, processed: 8  Entries: added 10, modified 0, deleted 0, errors 0

The line count includes blank lines. The number of entries created is greater than the number of lines processed because the dsimport command automatically creates any missing parent entries, in this example the root entry o=XYZ, c=US and the naming context entry ou=People, o=XYZ, c=US.

Data Management

Once you have populated your database with the information you need to run the directory service, you need to maintain that directory information by adding, modifying, or deleting entries. This section summarizes the command line utilities that you can use to maintain directory information.

For information on performing data management tasks from a graphical user interface, refer to Sun Directory Services 3.1 User's Guide.

Adding Entries

You can add an entry to the directory using ldapadd(1). You can specify a single entry on the command line, or you can specify one or more entries in a file. See the ldapmodify(1) man page (ldapadd is a particular configuration of ldapmodify) for details of how to use ldapadd.

You can use dsimport with the -n option to create an LDAP Data Interchange Format (LDIF) file suitable for use with ldapadd. You can also create your own LDIF file manually, and use the ldifcheck(1m) command to validate it. The format of LDIF files is described in the ldif(4) man page.

Modifying Entries

You can modify an entry in the directory using the ldapmodify or ldapmodrdn command.

Use ldapmodify(1) to add, replace, or delete the attribute values of an entry.

See the ldapmodify(1) man page for details of how to use ldapmodify. You can use dsimport with the -n option to create an LDIF file suitable for use with ldapmodify.

Use ldapmodrdn(1) to modify the naming attribute of an entry. Changing the naming attribute changes the distinguished name of the entry. See the ldapmodrdn(1) man page for details of how to use ldapmodrdn.

Deleting Entries

You can delete an entry in the directory using ldapdelete(1). For details see the ldapdelete(1) man page.

Directory Maintenance

This section describes the tasks that you can perform on a regular basis to save space and to maintain Sun Directory Services performance.

Regenerating Indexes

You can regenerate the index database for a specific data store or for all data stores on the server using the dsidxgen command. Although the index files are automatically updated, regenerating the index database is a useful operation because it frees up disk space. Regenerating indexes helps improve performance on search operations.

For details, see the dsidxgen(1m) man page.

Regenerating the Database

When changes have been made to the directory database, the use of disk space is not optimal. To improve the use of disk space, you can regenerate the database by performing a backup followed by a restore.

You can back up the directory database in text format using the ldbmcat command. This command converts an LDBM database to the LDIF format described in the ldif(4) man page. For details, see the ldbmcat(1m) man page.

You can restore the directory database from the LDIF file created during a previous backup using the ldif2ldbm command. For details, see the ldif2ldbm(1m) man page.

For example, stop the directory server, then use the following sequence of commands to regenerate the directory database:

# ldbmcat id2entry.dbb > /usr/tmp/filename
# rm /var/SUNWconn/ldap/dbm/*
# ldif2ldbm -j 10 -i /usr/tmp/filename

Note -

You must stop the directory server before you regenerate the directory database.


If your directory server is also an NIS server, you must rebuild the NIS maps using the dsypinstall(1m) script. You can then restart the directory server.

Checking Log Files

The log directory, by default /var/opt/SUNWconn/ldap/log, contains eight log files: dsserv.log, dsradius.log, dsweb.log, dsnmpserv.log, dsnmprad.log, dsserv_admin.log, dspush.log, and dspull.log. When a log file reaches its maximum size, by default 500 Kbytes, another one is created with a .1 suffix. When this one in turn reaches the maximum size, another one is created with a .2 suffix, and so on up to .9. This means that you can have up to 80 log files of 500 Kbytes each.

Because the log file mechanism can use a lot of disk space, it is good practice to delete log files that are no longer of any use to you.
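A hypothetical cleanup sketch follows. It lists the rotated copies (suffix .1 to .9) under the log directory so that you can review and remove the ones you no longer need; the directory path is the default quoted above, and the review-then-delete workflow is an illustrative suggestion, not a product feature:

```shell
# List rotated log files (suffix .1 to .9) under the log directory.
# Add, for example, '-mtime +30' to restrict the listing to files
# older than 30 days, and '-exec rm {} +' once the listing looks right.
LOGDIR=${LOGDIR:-/var/opt/SUNWconn/ldap/log}
find "$LOGDIR" -type f -name '*.[1-9]' -print
```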

Using dejasync

Whenever you modify the configuration of the NIS service or of the RADIUS service, or the mapping files for these services (nis.mapping and radius.mapping under /etc/opt/SUNWconn/ldap/current), you must run the dejasync command so that these modifications are taken into account by the Deja tool. The dejasync command modifies the Deja.properties file.

You must also run dejasync when you initialize the NIS service so that you can use Deja to manage NIS entries.