This chapter describes how to use the command-line utilities supplied with Sun Directory Services to populate and maintain your database of information. To initialize the database, you can:
Manually create the root entry and all other entries using ldapadd (see "Creating the Root Entry") or Deja (see Sun Directory Services 3.1 User's Guide).
Populate the directory automatically by using the dsimport utility (see "Populating the Directory"). If you choose this option, the root entry is created automatically.
You cannot add entries to your data store before you have created the root entry for the data store. The root entry is the top entry of the tree held by the data store. It identifies the data store. In Sun Directory Services, you can actually have up to four root entries that identify the data store and that correspond to the four possible data store suffixes that you can declare in the Admin Console.
To create the root entry, create a simple LDIF file containing the entry information, and add it to the database using the ldapadd command. An example of this procedure is given in "To Create the Root Entry for XYZ Corporation".
You can also create the root entry manually using Deja. The procedure for adding entries using Deja is explained in Sun Directory Services 3.1 User's Guide.
The root entry is created automatically if it does not already exist when you first load entries in the directory using the dsimport command.
Create an LDIF file called root-file that contains:
dn: o=XYZ, c=US
objectClass: organization
The LDIF file format is described in detail in the ldif(4) man page.
Add this file using ldapadd(1):
prompt% ldapadd -c -D "cn=admin-cn, o=XYZ, c=US" -w admin-pw -f root-file
where:
-c specifies to continue processing even if errors occur
-D introduces the distinguished name of the data store administrator. The DN must be given in quotes because it is likely to contain blank spaces.
-w introduces the administrator password
-f introduces the file holding the information to add to the database.
If you want to avoid your password showing up in a command listing, you can omit the -w option. The ldapadd command will prompt you for your password.
The root entry now exists.
If you do not want to create directory entries manually, you can populate the directory using the dsimport bulk load utility. This utility creates directory entries from any text file in which one line corresponds to one directory entry. You must create a mapping file that specifies the semantics for the information provided in each line of the input file. You might also need to create an LDAP object class and attributes that are specific to the type of information you want to store in the directory.
Refer to "Mapping Syntax and Semantics" for information on the structure and content of a mapping file. A complete example of creating a mapping file and using dsimport is given in "Example: Using dsimport".
For details on all the options of dsimport(1m), refer to the man page.
The dsimport utility is also used during the initialization of the NIS service to import all the information stored in NIS files into the LDAP directory. When you run the dsypinstall script to configure Sun Directory Services as an NIS server, the NIS information available on your server is automatically added to your directory database through a call to dsimport. The mapping of NIS files into LDAP object classes and attributes is described in the nis.mapping file in the directory /etc/opt/SUNWconn/ldap/current/mapping. For full details on importing NIS information into the directory, see "Initializing the Sun Directory Services NIS Service".
The information mapping described in the radius.mapping file in the directory /etc/opt/SUNWconn/ldap/current/mapping is used to perform RADIUS searches in the LDAP directory, not to import RADIUS information.
The mapping syntax and semantics are designed to provide maximum flexibility so that you can easily:
Import information from any text file into the directory
Adapt or create the mapping for a proprietary table in your NIS environment
If this involves modifying or creating an object class with the attributes that you need, refer to "Modifying the Schema"
A mapping file is made up of a number of sections that conform to the following pattern:
Front-end name
  Common
  Table
    Common
    Dynamic
    Export
      Extract
      Condense
      Build
    Import
      Extract
      Condense
      Build
  Table
    Common
    Dynamic
    Export
      Extract
      Condense
      Build
    Import
      Extract
      Condense
      Build
  ...
The content and meaning of each section is described in "Mapping Semantics". The syntactic rules for each section are described in "Mapping Syntax".
Front-end name indicates the name of the service. All the information that follows that name describes the mapping of service-specific information to LDAP object classes and attributes.
The first Common section immediately following the front-end name gives configuration information that applies to the front-end or service. It contains mandatory configuration variables that are required in the translation process, and optional configuration variables that are stored in the same file for convenience. In the nis.mapping and radius.mapping files, this section can be modified through the Admin Console.
The Table section defines mapping information for a particular type of information. The mapping information determines the object class of all entries created using that table definition. Each table definition is composed of the following sections:
Common
Dynamic (mandatory)
Export
Import
The Dynamic section is the only one that is mandatory. Without it, neither import nor export operations work. The other sections can be omitted if you do not need them. For instance, if you never intend to export information from the directory, you do not need to create an Export section.
Each section contains keywords and definitions used in the import or export process. Table 5-1 provides a list of mapping keywords, the sections in which they can occur, and their purpose.
In any section, you can create variables or tokens, that is, private definitions, by using the following format:
tokenT=token definition
Your private definitions can use the syntax and functions described in "Condense".
Table 5-1 Summary of Mapping File Keywords
Section | Keyword | Mandatory/Optional | Purpose
---|---|---|---
Common | BASE_DN | Mandatory, but can be specified in the Dynamic section | Specifies a naming context. See "BASE_DN".
Common | MAP_NAME | Mandatory for an NIS table definition | Indicates the name of the NIS table corresponding to the table definition. See "MAP_NAME".
Common | PRIVATE_OBJECTCLASSES | Mandatory when object class is not unique | Used for updates on entries created from several table definitions. See "PRIVATE_OBJECTCLASSES".
Dynamic | ALL_FILTER | Mandatory | Defines a filter for identifying all entries created using the table definition. See "ALL_FILTER".
Dynamic | DC_NAMING | Optional | Defines the mechanism for converting a domain name to an LDAP dc name structure. See "DC_NAMING".
Dynamic | LINE | Mandatory | Defines decomposition of input information. See "LINE".
Dynamic | MATCH_FILTER | Mandatory | Defines a filter for identifying a particular entry created using the table definition. See "MATCH_FILTER".
Export/Build | LINE | Mandatory if the Export section exists | In export file, defines format of line composed of LDAP attributes. See "Export Section".
Export/Build | NIS_KEY | Mandatory for NIS | Identifies NIS key in export file.
Export/Build | NIS_VALUE | Mandatory for NIS | Identifies NIS value in export file.
Import/Extract | LINE | Mandatory if the Import section exists | Defines decomposition of input information. See "Import Section".
The Common section contains definitions of variables that apply to all the entries created using that table definition but not to the entire service or front-end. For example, the Common section typically contains the naming context under which the entries are created. The naming context is specified using the BASE_DN keyword.
The BASE_DN keyword specifies the naming context under which the entries are to be created. The dsimport utility looks for this parameter in several places, in the following order:
Command line of dsimport, option -V
Dynamic section
Common section for the Table
Common section for the Front-End (at the beginning of the mapping file)
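This lookup order can be sketched in Python. The helper below is hypothetical (the function name and the dictionary-based representation of the mapping file sections are assumptions for illustration); it simply returns the first BASE_DN found in dsimport's documented search order.

```python
def resolve_base_dn(cmdline_v=None, dynamic=None, table_common=None,
                    frontend_common=None):
    """Return BASE_DN using dsimport's documented lookup order:
    the -V command-line option, then the Dynamic section, then the
    table's Common section, then the front-end Common section."""
    if cmdline_v:
        return cmdline_v
    for section in (dynamic, table_common, frontend_common):
        if section and section.get("BASE_DN"):
            return section["BASE_DN"]
    return None
```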
The MAP_NAME keyword specifies the name of the NIS map corresponding to the table definition. This keyword is used to create administrative entries for the NIS service. The directory server maintains these entries automatically.
This keyword is used also to create the naming context for the NIS entries that are created by using the generic mapping definition.
The MAP_NAME keyword is specific to the NIS service.
The PRIVATE_OBJECTCLASSES keyword specifies an object class when the object class and attributes derived from a table definition do not make up a complete entry. This keyword is necessary for maintaining directory entries that are created from several table definitions. This can be the case when several table definitions each create an auxiliary object class and its associated attributes.
For example, in the NIS environment, network hosts can have entries in at least three files: /etc/bootparams, /etc/ethers, /etc/hosts. However, each host has just one entry in the LDAP directory, with the three auxiliary object classes bootableDevice, ieee802Device, and ipHost. If the entry for the host is deleted in one of these files, the corresponding entry in the LDAP directory must not be deleted but simply updated by removing the appropriate auxiliary object class, and any attributes specific to that object class.
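A minimal Python sketch of this update rule follows. The attribute lists are illustrative assumptions, not the product's actual mapping; the point is that deleting a host from one source file removes only the corresponding auxiliary object class and its attributes, never the whole entry.

```python
# Illustrative attribute ownership for each auxiliary object class
# (assumed for this sketch).
PRIVATE_ATTRS = {
    "bootableDevice": ["bootFile", "bootParameter"],
    "ieee802Device": ["macAddress"],
    "ipHost": ["ipHostNumber"],
}

def remove_auxiliary_class(entry, objectclass):
    """Remove one auxiliary object class and its private attributes,
    leaving the rest of the host entry intact."""
    entry["objectClass"] = [
        oc for oc in entry["objectClass"] if oc != objectclass
    ]
    for attr in PRIVATE_ATTRS.get(objectclass, []):
        entry.pop(attr, None)
    return entry
```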
The Dynamic section contains equations that make it possible to dynamically build the filters required to locate relevant information.
The LINE keyword is necessary to define how the input information must be dynamically decomposed to provide the elements required in the MATCH_FILTER and ALL_FILTER definitions.
The syntax of the LINE keyword is given in "Extract".
The MATCH_FILTER keyword specifies a filter that is used by the dsimport utility to check whether an entry already exists in the database before creating it. If it exists, the dsimport utility will check whether it needs to be modified.
The MATCH_FILTER keyword is also used by the directory server to respond to commands such as ypmatch.
The ALL_FILTER keyword specifies a filter that is used by the dsexport command to regenerate the file from which the directory entries were originally created. This filter is necessary even if you do not intend to export information from the directory to regenerate the source file for that information.
The ALL_FILTER keyword is used by the directory server to retrieve from the directory all entries that belong to a given NIS table. This is because the directory server maintains a permanently up-to-date copy of the NIS tables.
The ALL_FILTER keyword is also used by the directory server to respond to commands such as ypcat.
The DC_NAMING keyword defines the mechanism applied to convert a domain name of the form xyz.com to an LDAP data store suffix or naming context of the form dc=xyz, dc=com. This is useful if the naming structure that you use in your directory is a domain component (dc) structure.
The Export section provides the method for regenerating a source file from LDAP directory entries. This section is optional. When it exists, it must contain the keyword LINE. The LINE keyword in the Export section must reflect the format of a line in the original source file.
The Export section contains the following subsections:
Condense: this optional subsection contains variable definitions that can be used in the Build subsection to build the parameters required to generate LINE.
Build: Contains at least the output LINE definition.
In the nis.mapping file, the Build subsection defines the rules for constructing an NIS key/NIS value pair; it also defines the rules for generating the line in the NIS file corresponding to the LDAP directory entry.
The Import section provides the method for translating a line in an input file into an LDAP directory entry. This section must contain a LINE keyword that defines how a line in the input file can be decomposed into elements that can be described by LDAP attributes. It must also contain the list of LDAP attributes that are created from a line in the input file.
The Import section contains the following subsections:
Extract: Contains the LINE definition with the notation described in "Extract".
Condense: Contains variable definitions that can be used in the Build subsection to generate LDAP attributes and attribute values.
Build: Provides a list of LDAP attributes, including the object class, and defines the rules for constructing the value of each LDAP attribute from the variables in the Condense section or from the parameters in the Extract section
In the nis.mapping file, the LINE definition in the Extract subsection specifies the rules for analyzing a line in an NIS source file into smaller units of information called NIS tokens.
This section describes the syntax of the variables or tokens that you can create in each section of a table definition.
The mapping syntax is described using examples from the nis.mapping file.
The variables defined in the Common section other than the keywords listed in Table 5-1 must follow this syntax:
variable-name=value
Variables defined in the Common section contain static configuration information.
The variables defined in the Dynamic section other than the keywords listed in Table 5-1 follow the same syntax as the variables defined in the Common section. However, their values are supplied in the input to the utility (such as dsimport or dsexport) that uses the mapping file during its execution.
The variables defined in the Extract section define the rules for decomposing input information into smaller units of information, called tokens, that can be directly mapped onto LDAP attributes, or that require simple processing in order to be mapped onto LDAP attributes.
The syntax of a variable that defines a decomposition into tokens is:
VARIABLE => $element1 separator $element2 [separator $elementn...|| ...]
The separator between tokens is the separator expected in the input information. It could be white space, a comma, a colon or any other character. However, one space in the line definition will match any number of spaces or tabs in the actual input information. You can specify several alternatives for the decomposition, by using two pipe symbols (||) to introduce each alternative rule.
The conversion process examines the rules in the order in which they are specified, and applies the first one that matches the information it was given in input.
For example, in nis.mapping, the following definition extracts tokens from a line in the bootparams file:
LINE =>$ipHostNameT $parametersT
The hosts file provides a slightly more complex example:
LINE =>$dummy $ipHostNumberT $ipHostNameT $allIpHostAliasesT*#$descriptionT*||\
$dummy $ipHostNumberT $ipHostNameT $allIpHostAliasesT||\
$dummy $ipHostNumberT $ipHostNameT
In these examples, the tokens parametersT and allIpHostAliasesT require further processing before they can be mapped onto LDAP attributes. The processing required is defined in the Condense section.
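The first-match behavior of alternative LINE rules can be sketched in Python with regular expressions. The patterns below only approximate the hosts rules (the leading $dummy field is omitted, and the token names are reused as named groups); they are an illustration of the mechanism, not the product's parser.

```python
import re

# Alternative rules tried in order; the first one that matches wins,
# as with the || alternatives in a LINE definition.
HOSTS_RULES = [
    r"(?P<ipHostNumberT>\S+)\s+(?P<ipHostNameT>\S+)\s+"
    r"(?P<allIpHostAliasesT>[^#]*?)\s*#\s*(?P<descriptionT>.*)",
    r"(?P<ipHostNumberT>\S+)\s+(?P<ipHostNameT>\S+)\s+"
    r"(?P<allIpHostAliasesT>.+)",
    r"(?P<ipHostNumberT>\S+)\s+(?P<ipHostNameT>\S+)",
]

def decompose(line, rules):
    """Apply the first rule that matches the input line and return
    the extracted tokens, or None if no rule matches."""
    for pattern in rules:
        match = re.fullmatch(pattern, line.strip())
        if match:
            return match.groupdict()
    return None
```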
The Condense section contains variables that define operations on tokens resulting from the Extract section, or any previously defined variable in the table definition.
It simplifies the attribute value definitions given in the Build section.
Variables defined in the Condense section can contain:
A token specified in the same table section
A configuration variable specified in the Common section for the same NIS table, or in the Common section that applies to all NIS tables
A constant expression
A function: exclude, getrdn, split, instances2string, string2instances, or trim. A variable can contain just one function. If you want to use several functions on the same information, you must create intermediate variables.
Variables in the Condense section can be made up of several alternative rules. The conversion process applies the first rule that matches the input information. The rules must be separated by two pipe symbols, and must all be part of the same expression. For example, the following expression is permitted:
fifi=$parameter1 - $parameter2 || $parameter1 || juju
whereas, the following expression is not:
fifi=$parameter1 - $parameter2
fifi=$parameter1
fifi=juju
You can define any number of variables in the Condense section. The order in which they are listed is important if you create dependencies between them. For instance, you can have:
fifi=$parameter1 - $parameter2 || $parameter1 || juju
riri=$fifi - $parameterA
loulou=$fifi - $parameterB
The syntax of the split function is as follows:
variableA=split(what, "separator", "add_prefix", "add_suffix", order)
where:
variableA identifies the variable
what identifies the unit of information, variable or parameter, to which the operation applies
separator indicates where to split the information. This value must be specified between quotes because it could contain a space.
add_prefix specifies a prefix to add to each item resulting from the split. This value must be specified between quotes because it could contain a space.
add_suffix specifies a suffix to add to each item resulting from the split. This value must be specified between quotes because it could contain a space.
order specifies the order in which the items resulting from the split are to be presented. The possible values for this parameter are left2right or right2left.
For example, in the nis.mapping file, the following variable definition is used to split an NIS domain name into a sequence of LDAP domain component attributes:
DC_NAMING=split($DOMAIN_NAME, ".", "dc=", ",", left2right)
If the domain name specified is eng.europe.xyz.com, the resulting expression is:
dc=eng, dc=europe, dc=xyz, dc=com.
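A Python sketch of this behavior follows. It assumes, as the documented result suggests, that the suffix is not appended after the last item, and it ignores the cosmetic spaces shown in the output above.

```python
def split(what, separator, add_prefix, add_suffix, order="left2right"):
    """Split `what` on `separator`, prefix each item with
    `add_prefix`, and join the items with `add_suffix`,
    optionally reversing their order."""
    items = [add_prefix + item for item in what.split(separator)]
    if order == "right2left":
        items.reverse()
    return add_suffix.join(items)
```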
The string2instances function breaks down a specified string into instances. The syntax for this operation is:
variableA=string2instances("string", "separator")
where:
variableA identifies the variable
string identifies the unit of information, variable or parameter, to which the operation applies. This value must be specified between quotes because it could contain a space.
separator indicates where to split the information into instances. This value must be specified between quotes because it could contain a space.
For example, in nis.mapping, the following definition in the Condense section of the bootparams file breaks down a string of parameters into separate instances:
bootParameterT=string2instances($parametersT," ")
The string2instances function is also used to specify the inheritance tree for an object class. For example, if the object class of an entry created using a particular mapping definition is organizationalPerson, the Condense section of the mapping definition must contain the line:
objectClassT=string2instances("top person organizationalPerson", " ")
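In Python terms, string2instances behaves roughly like the sketch below, producing a list of separate attribute value instances from one string (the list representation of instances is an assumption for illustration).

```python
def string2instances(string, separator):
    """Break a string into separate instances at each separator."""
    return [item for item in string.split(separator) if item]
```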
The instances2string function combines several instances into a single string. The syntax for this operation is:
variableA=instances2string(what, "separator")
where:
variableA identifies the variable
what is a variable that has a number of instances
separator marks the separation between the elements of the string. This value must be specified between quotes because it could be a space.
For example, you could use the following variable to find the list of names and alias names for a given machine:
NameList=instances2string($cn, " ")
If the cn attribute has the values camembert, Cam, Bertie, the resulting string would be:
camembert Cam Bertie
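As a sketch, instances2string is the inverse of string2instances: it joins a multi-valued item back into a single string (again representing the instances as a Python list for illustration).

```python
def instances2string(what, separator):
    """Combine the instances of a multi-valued item into one string."""
    return separator.join(what)
```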
The trim function removes any unnecessary white space surrounding a parameter. The syntax for the trim operation is:
variableA=trim(parameter)
where:
variableA identifies the variable
parameter is the item from which white space must be removed
For example, if you decompose an alias list into its constituent members, you could define the following variables:
aliasMember=string2instances($aliasList, ",")
trimAliasMember=trim($aliasMember)
Each aliasMember parameter resulting from the string2instances operation is processed to remove any white spaces.
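The chaining of string2instances and trim can be sketched in Python as follows. The sample alias list is invented for illustration; the key point is that trim is applied to each instance individually.

```python
def trim(parameter):
    """Remove surrounding white space from one item."""
    return parameter.strip()

def string2instances(string, separator):
    """Break a string into instances at each separator."""
    return string.split(separator)

# Each instance produced by string2instances is trimmed individually.
aliasMember = string2instances("rgreen, jwhite , sbrown", ",")
trimAliasMember = [trim(member) for member in aliasMember]
```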
The getrdn function returns the naming attribute of an entry, that is the attribute used in the entry's RDN. The syntax for the getrdn operation is:
variableA=getrdn()
The getrdn function can only be used in variables in the Condense section.
For example, the cn attribute of a machine might have the values camembert, Cam, and Bertie, while the actual system name of the machine, used in the RDN, is camembert. You could create the following variable:
HostName=getrdn()
The getrdn function returns the name camembert.
The getrdn function is case-sensitive.
The exclude function removes a value from a list or a string. The syntax for this operation is:
variableA=exclude(string, exclude-value, "separator")
where:
variableA identifies the variable
string identifies the list or string
exclude-value is the value to exclude
separator marks the separation between the elements of the list or string. This value must be specified between quotes because it could be a space.
For example, to obtain the list of aliases for a machine, you need to exclude the canonical name from the list of names. You could create the following variables:
NameList=instances2string($cn, " ")
HostName=getrdn()
HostAliases=exclude(NameList, HostName, " ")
In nis.mapping, the Condense section of the hosts mapping definition contains:
ipHostAliasesLineT=exclude($allIpHostAliasesT,$ipHostNameT, " ")
This definition excludes the ipHostName from the list of alias names for the host.
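A Python sketch of the exclude function follows, reproducing the host-aliases example: removing the canonical name from the full list of names leaves only the aliases.

```python
def exclude(string, exclude_value, separator):
    """Remove exclude_value from a separator-delimited string."""
    items = [item for item in string.split(separator)
             if item and item != exclude_value]
    return separator.join(items)
```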
The Build subsection contains a list of LDAP attributes and the definitions of their values. It must contain at least all mandatory attributes for an object class, and the DN. If the DN definition is missing from the Build section, the entries cannot be created in the directory.
You do not need to specify a DN definition in the Build sections of the radius.mapping file because this file is not used to import entries into the directory.
Attribute value definitions can be made up of:
A variable or keyword specified in any of the sections of the table definition
A configuration variable specified in the Common section that applies to all the tables in the Front-End section
A constant expression
A concatenation of any of the above
The syntax of an LDAP attribute and its associated value definition in the Build section is as follows:
LDAPattribute=attributeValueDefinition
For example, if you wanted to create an entry for a mail alias, and use the LDAP attribute rfc822mailMember to store the names of alias members, your mapping would contain the following definitions:
Condense:
aliasMember=string2instances($aliasList, ",")
trimAliasMember=trim($aliasMember)
...
Build:
rfc822mailMember=$trimAliasMember
...
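The Extract, Condense, and Build steps of this mail-alias example can be sketched end to end in Python. The input line format ("alias: member1, member2") and the nisMailAlias object class are assumptions for illustration only.

```python
def build_mail_alias_entry(line):
    """Translate a hypothetical 'alias: member1, member2' line into
    the attributes of a directory entry."""
    # Extract: decompose the input line into tokens.
    alias_name, alias_list = line.split(":", 1)
    # Condense: break the member list into instances and trim each one.
    trim_alias_member = [m.strip() for m in alias_list.split(",")]
    # Build: map the condensed tokens onto LDAP attributes.
    return {
        "objectClass": ["top", "nisMailAlias"],
        "cn": [alias_name.strip()],
        "rfc822mailMember": trim_alias_member,
    }
```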
This example describes how to create a mapping file, and use dsimport to perform a bulk load of information stored in a text file.
The file containing the information to import into the LDAP directory could be an extract from a corporate online directory service that provides basic information about employees.
This file is shown in Example 5-1.
Rob Green, rgreen@london.XYZ.com, phone x 44 1234, marketing communications manager
Jean White, jwhite@london.XYZ.com, phone x44 1123, documentation manager
Susan Brown, sbrown@london, phone (44) 123 45 67 00, technical writer
Karen Gray, kgray@london, tel (44) 123 45 67 01, engineering project manager
Steve Black, sblack@eng, x44 1122, software development engineer
Felipe Diaz Gonzalez, fdgonzalez@eng, x41 2233, software development engineer
Anne Marie de la Victoire, amvictoire@paris.xyz.com, x33 3344, software support engineer
DURAND Pierre, pdurand@paris, tel 33 1133, software support engineer
In this file, there is one employee definition per line. On each line the information is ordered as follows:
Name
E-mail address
Telephone number
Job description
The level of information is not always consistent for the various employees: the e-mail address is not always fully qualified, the telephone number is not always a complete telephone number but an extension, and in one case the last name is given before the first name.
If you want a consistent level of information for the entries that will be created in the LDAP directory, you must either make the necessary corrections in the source file, or make them after the import operation using the Deja tool.
The intention of the directory administrator is to create all employee entries under the naming context ou=People, o=XYZ, c=US. The token that specifies this in the mapping file is the BASE_DN token in the Common section.
The information will be imported just once. Therefore, it is not necessary to define an Export section in the mapping file. The Dynamic section is mandatory. The object class definition in the Condense section is also mandatory.
The mapping file created by the directory administrator is shown in Table 5-2.
Table 5-2 Example of Mapping File
The Condense section contains the inheritance tree for the inetOrgPerson object class.
The Build section contains all the mandatory attributes pertaining to or inherited by the inetOrgPerson object class. It also contains the optional attributes pertaining to this object class that the directory administrator required.
To import the file described in "Input File" using the mapping file described in "Mapping File", you can use dsimport with the following arguments:
# dsimport -h hostname -D cn=admin,o=xyz,c=us -w secret -m mapping.file -f EXAMPLE -t People input.file
where:
hostname is the name of the host that holds the directory data store
cn=admin,o=xyz,c=us is the default distinguished name of the directory administrator
secret represents the password of the directory administrator
mapping.file contains the mapping for the input information
EXAMPLE is the front-end specified in the mapping file
People is the table specified in the mapping file
input.file contains the input information
It is not strictly necessary to specify the DN and password of the administrator on the command line. If you omit these parameters, dsimport will read them from the dsserv.conf file. The advantage is that the DN and password of the administrator will not be displayed by the ps command.
After running this command, the following message is displayed:
Lines read: 9, processed: 8
Entries: added 10, modified 0, deleted 0, errors 0
The line count includes blank lines. The number of entries created is greater than the number of lines in the file because the dsimport command automatically creates the root entry, in this example o=xyz, c=US.
Once you have populated your database with the information you need to run the directory service, you need to maintain that directory information by adding, modifying, or deleting entries. This section summarizes the command line utilities that you can use to maintain directory information.
For information on performing data management tasks from a graphical user interface, refer to Sun Directory Services 3.1 User's Guide.
You can add an entry to the directory using ldapadd(1). You can specify a single entry on the command line, or you can specify one or more entries in a file. See the ldapmodify(1) man page (ldapadd is a particular configuration of ldapmodify) for details of how to use ldapadd.
You can use dsimport with the -n option to create an LDAP Data Interchange Format (LDIF) file suitable for use with ldapadd. You can also create your own LDIF file manually, and use the ldifcheck(1m) command to validate it. The format of LDIF files is described in the ldif(4) man page.
You can modify an entry in the directory using the ldapmodify or ldapmodrdn command. Use ldapmodify(1):
To modify the attributes in a single entry, by specifying the modification on the command line
To modify multiple entries, by specifying a file containing entry modification information
See the ldapmodify(1) man page for details of how to use ldapmodify. You can use dsimport with the -n option to create an LDIF file suitable for use with ldapmodify.
Use ldapmodrdn(1) to modify the naming attribute of an entry. Changing the naming attribute changes the distinguished name of the entry. See the ldapmodrdn(1) man page for details of how to use ldapmodrdn.
You can delete an entry in the directory using ldapdelete(1). For details see the ldapdelete(1) man page.
This section describes the tasks that you can perform on a regular basis to save space and to maintain Sun Directory Services performance.
You can regenerate the index database for a specific data store or for all data stores on the server using the dsidxgen command. Although the index files are automatically updated, regenerating the index database is a useful operation because it frees up disk space. Regenerating indexes helps improve performance on search operations.
For details, see the dsidxgen(1m) man page.
When changes have been made to the directory database, the use of disk space is not optimal. To improve the use of disk space, you can regenerate the database by performing a backup followed by a restore.
You can back up the directory database in text format using the ldbmcat command. This command converts an LDBM database to the LDIF format described in the ldif(4) man page. For details, see the ldbmcat(1m) man page.
You can restore the directory database from the LDIF file created during a previous backup using the ldif2ldbm command. For details, see the ldif2ldbm(1m) man page.
For example, stop the directory server, then use the following sequence of commands to regenerate the directory database:
# ldbmcat id2entry.dbb > /usr/tmp/filename
# rm /var/SUNWconn/ldap/dbm/*
# ldif2ldbm -j 10 -i /usr/tmp/filename
You must stop the directory server before you regenerate the directory database.
If your directory server is also an NIS server, you must rebuild the NIS maps using the dsypinstall(1m) script. You can then restart the directory server.
The log directory, by default /var/opt/SUNWconn/ldap/log, contains eight log files: dsserv.log, dsradius.log, dsweb.log, dsnmpserv.log, dsnmprad.log, dsserv_admin.log, dspush.log, and dspull.log. When a log file reaches its maximum size, by default 500 Kbytes, another one is created, with a .1 suffix. When this one in turn reaches the maximum size, another one is created with a .2 suffix, and so on up to .9. This means that each log can have up to ten files, for a maximum of 80 log files of 500 Kbytes each.
Because the log file mechanism can use a lot of disk space, it is good practice to delete log files that are no longer of any use to you.
Whenever you modify the configuration of the NIS service or of the RADIUS service, or the mapping files for these services, respectively nis.mapping and radius.mapping under /etc/opt/SUNWconn/ldap/current, you must run the dejasync command so that these modifications are taken into account by the Deja tool. The dejasync command modifies the Deja.properties file.
You must also run dejasync when you initialize the NIS service so that you can use Deja to manage NIS entries.