Indexer adapters save data that is ready to be indexed by the Dgidx program.

An indexer adapter is generally the final component in a baseline update pipeline. It takes all record, dimension hierarchy, and index configuration information from the pipeline and combines it into a format that is ready for the indexer (Dgidx). Attributes of the indexer adapter control where and how it writes its output, as well as where and how it reads its index configuration input.
To add an indexer adapter:

The Indexer Adapter editor appears.

1. In the Name text box, type a unique name for the indexer adapter.
2. In the General tab, do the following:
   - In the URL text box, type the location to which the indexed records are written, relative to the Pipeline.epx file.
   - In the Output Prefix text box, type the prefix that will be attached to the output files.
   - (Optional) Check Filter Unknown Properties if you want the indexer adapter to remove source properties from your records. Note: After mapping, source properties still exist as part of the Endeca record. Enabling this option removes those source properties so records consist exclusively of Endeca properties and dimension values.
   - (Optional) Check Custom Compression and slide the bar to the appropriate level.
3. (Optional) In the Sources tab, choose a record source and one or more dimension sources.
4. (Optional) If you are using an Agraph in your implementation, in Number of Agraph Partitions, specify the number of child Dgraphs that the Agraph coordinates. Note: If you want to change the partition property, open the Properties view and modify which properties are enabled for rollup and record spec. For more detailed information, see the Endeca Advanced Development Guide.
5. (Optional) In the Comment tab, add a comment for the component.
Note: Typically, there is only one indexer adapter per pipeline.
The Name text box contains a unique name for this indexer adapter.

The Indexer Adapter editor contains the following tabs: General, Sources, Agraph, and Comment.
The General tab contains the following options:
| Option | Description |
|---|---|
| URL | Required. The location to which the indexed records are written. |
| Output prefix | The prefix attached to the output files. For example, if the prefix is wine, the dimensions file is named wine.Dimension.xml. |
| Output formats | Read-only information about the output formats for the record file and the dimension file. |
| Filter unknown properties | Optional. Check Filter unknown properties if you want the indexer adapter to remove source properties from your records. Note: After mapping, source properties still exist as part of the Endeca record. Enabling this option removes those source properties so records consist exclusively of Endeca properties and dimension values. |
| Custom compression level | Optional. Sets the level of compression to be performed on the data. Values range from 0 to 10, with higher numbers indicating higher compression (smaller size, slower processing). |
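The size-versus-speed trade-off behind the compression level setting can be seen with a quick experiment. This sketch uses Python's standard zlib module as an analogy only: zlib levels run 0 to 9, and Forge's 0 to 10 scale is a separate implementation, so the numbers here illustrate the general trade-off rather than Forge's actual behavior.

```python
import time
import zlib

# Repetitive record-like text compresses well, similar to pipeline output.
data = b"vintage=1998|region=Napa|varietal=Merlot|" * 5000

for level in (0, 1, 6, 9):
    start = time.perf_counter()
    compressed = zlib.compress(data, level)
    elapsed = (time.perf_counter() - start) * 1000
    print(f"level {level}: {len(compressed):>8} bytes, {elapsed:.2f} ms")
```

Higher levels produce smaller files but take longer to write; the same trade-off applies when choosing a custom compression level for the indexer adapter.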
The Sources tab contains fields for choosing a record source and one or more dimension sources.
The Agraph tab contains the following options:

| Option | Description |
|---|---|
| Enable Agraph support | When checked, enables the Agraph program for use in a baseline update pipeline. |
| Number of Agraph partitions | Specifies the number of child Dgraphs that the Agraph controls. In an Agraph implementation, this must be a value of 2 or more. |
| Partition property | A read-only field that identifies the property by which records are assigned to each partition. If you are using rollup capabilities, the rollup property displays as the partition property. If you do not have a rollup property, but do have a record spec property enabled in your project, the record spec property functions as the partition property. If neither a rollup nor a record spec property exists, the partition property is empty, and Forge assigns records to each partition according to a round-robin strategy. |
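The precedence described for the partition property (rollup first, then record spec, then round-robin) can be sketched as follows. This is a simplified illustration under stated assumptions, not Forge's actual implementation; the function names, the use of a hash for value-based assignment, and the record representation are all assumptions for the example.

```python
def choose_partition_property(rollup_property, record_spec_property):
    """Mirror the documented precedence: rollup property first,
    then record spec property, otherwise none (round-robin)."""
    if rollup_property is not None:
        return rollup_property
    if record_spec_property is not None:
        return record_spec_property
    return None

def assign_partition(record, partition_property, num_partitions, counter):
    """Assign a record (a dict here, for illustration) to one of
    num_partitions child Dgraphs."""
    if partition_property is not None:
        # Records sharing a partition property value land in the same
        # partition, which is what rollup requires.
        return hash(record[partition_property]) % num_partitions
    # No partition property: fall back to a round-robin strategy,
    # with counter tracking how many records have been assigned so far.
    return counter % num_partitions
```

For example, with a rollup property configured, `choose_partition_property("Wine_Rollup", "P_RecSpec")` returns the rollup property; with neither configured, it returns `None` and records cycle through the partitions in turn.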
See the Endeca Advanced Development Guide for detailed information on how to configure, provision, and run an Aggregated MDEX Engine implementation.