3 Configuring EDR Input Processing

This chapter describes how to set up input processing for the Oracle Communications Billing and Revenue Management (BRM) Pipeline Manager.

About the Input Process

To process incoming data, Pipeline Manager input modules convert data into an internal EDR format understood by the pipeline function modules. The input data is contained in files, such as CDRs for telco rating.

The input process works as follows:

  1. Your mediation system automatically places CDR files into a directory.

    If you start a pipeline and the directory already includes input files, they are processed according to the last-modified timestamp.

  2. The input module uses the stream format description file to create separate records from the data. For example, the data is separated into HEADER records, DETAIL records, and TRAILER records. DETAIL records can include data for one service only (for example, GSM or GPRS).

  3. The input module uses the input grammar file to process each record. The module verifies that the syntactical order for each record is correct. For example, if a particular field is supposed to include 10 characters, the input module uses the grammar file to check that. It also uses the grammar file to normalize data, such as the A number.

    If an error is found, processing stops and an error message is logged.

  4. The input module creates an EDR container for the data. See "About EDRs".

  5. The input module uses an input mapping file to copy data from the external file into the appropriate EDR container fields.

    Note:

    A separate input mapping file is required to process each external data format. See "Setting Up an Input Mapping File".

  6. The input module puts the EDR into the input buffer for processing by the function modules.

Note:

A separate input grammar file is required to process each external data format. See "Setting Up an Input Grammar File".

The following examples show how the A number in a CDR is mapped to a rating EDR container.

Note:

These examples show only selected portions of the stream format description, grammar, and mapping files.

The following example shows part of a stream format description file. This section of the file describes how a DETAIL record is formatted and lists the fields in the record (fields are separated by a semicolon). The first field (SERVICE) stores the service code, the second (A_NUMBER) stores the A number, and so forth. Data in each of the fields must be of type AscString().

DETAIL(SEPARATED)
{
Info
{
Pattern = ".*\n"; 
FieldSeparator = ';';
RecordSeparator = '\n';
}
SERVICE     AscString();
A_NUMBER    AscString();
B_NUMBER    AscString();
...

The next example shows part of an input grammar file. The first two lines create a new block of data in an EDR container (edrNew) and specify the location in an external file from which the data should be copied (edrInputMap). Inside the new block, the next two lines normalize the A number and then write the normalized value to the DETAIL.A_NUMBER field in the EDR container.

edrNew( DETAIL, CONTAINER_DETAIL );
edrInputMap( "SAMPLE.DETAIL.STD_MAPPING" );
...
number = normalizeNumber( edrString( DETAIL.A_NUMBER ), "00", 0 );
edrString( DETAIL.A_NUMBER ) = number;

The next example shows part of an input mapping file. In this example, the A_NUMBER defined in the stream format description example is mapped to the DETAIL.A_NUMBER field defined in the input grammar example. Note how the nested blocks of data correspond to the edrInputMap entry (SAMPLE.DETAIL.STD_MAPPING) in the input grammar example.

SAMPLE
{
  DETAIL
  {
    HDR_MAPPING
    {
      "010"       -> HEADER.RECORD_TYPE;
      ...
    }
    TRL_MAPPING
    {
      "090"       -> TRAILER.RECORD_TYPE;
      ...
    }

    STD_MAPPING
    {
      "020"       -> DETAIL.RECORD_TYPE;
      0           -> DETAIL.DISCARDING;
      "00"        -> DETAIL.A_MODIFICATION_INDICATOR;
      0           -> DETAIL.A_TYPE_OF_NUMBER;
      "0"         -> DETAIL.A_NUMBERING_PLAN;
      A_NUMBER    -> DETAIL.A_NUMBER;
...

You can map one field in an external file to multiple fields in an EDR container. The following example shows part of the same input mapping file. In this part of the file, however, the data block for the GSM service maps the A_NUMBER to a different field:

GSMW_MAPPING
{
  "520"       -> DETAIL.ASS_GSMW_EXT.RECORD_TYPE;
  A_NUMBER    -> DETAIL.ASS_GSMW_EXT.A_NUMBER_USER;
  B_NUMBER    -> DETAIL.ASS_GSMW_EXT.DIALED_DIGITS;
  ...
}

About Setting Up Input Processing

To set up input processing, do the following:

  1. (CDR processing only) Define a CDR file input directory in your pipelines. Configure your mediation system to put CDR files in this directory automatically.

    Note:

    You can configure your system to route CDR files from a single input directory to multiple identical pipelines.

  2. Set up a stream format description file. You can start with the sample files that are provided. See "Creating a Stream Format Description File".

  3. Set up an input mapping file. You can start with the sample files that are provided. See "Setting Up an Input Mapping File".

  4. If necessary, set up an input grammar file. In most cases, you do not need to modify the default grammar file. If you modify an EDR container (for example, if you add fields), you might need to modify the input grammar to ensure that data in your EDR containers is correctly formatted. See "Setting Up an Input Grammar File".

    Note:

    If you customize an EDR container description, you must ensure that your customizations do not affect existing module functionality. Many modules require data from default EDR fields.

  5. Configure the input sections in the registry. See "Configuring the Input DataDescription Registry Section" and "Configuring the Input Section in the Registry".

The sample Pipeline Manager registry files include stream format description, input mapping, and input grammar files that convert data using the rating EDR and TAP formats.

About Input Processing File Types

Input processing uses the following types of files:

  • Input file: The file from the external system.

  • Temporary file: The same file as the input file, but renamed as a temporary file during processing. If the file is rejected, this file is not used.

  • Done File: The same file as the input file, but renamed as a done file after processing has been successfully completed.

  • Error File: The same file as the input file, but renamed as an error file if the file is rejected.

These files are managed by the EXT_InFileManager module. See "About Getting Pipeline Input from Files".

Creating a Stream Format Description File

To create a stream format description file, first identify the data in your external files that you need to use for rating. You can then either start with one of the sample files or create your own.

In the following example, the external file is a CDR. It includes three record types: HEADER, DETAIL, and TRAILER. Each record starts with a character that identifies its EDR container content type (H for HEADER, T for TRAILER, and D for DETAIL), and each record has a fixed structure.

  • The example HEADER record in Table 3-1 has two fields:

    Table 3-1 Header Record in Stream Format Description

    Field           Length   Description
    IDENTIFIER      1        The character H.
    CREATION_TIME   14       The creation time of the CDR stream in the format YYYYMMDDHHMMSS.

  • The example DETAIL record in Table 3-2 has five fields:

    Table 3-2 Detail Record in Stream Format Description

    Field             Length   Description
    IDENTIFIER        1        The character D.
    CALLING_PARTY     15       The A number.
    CALLED_PARTY      15       The B number.
    START_TIMESTAMP   14       The start time of the call in format YYYYMMDDHHMMSS.
    DURATION          9        The duration of the call in seconds.

  • The example TRAILER record in Table 3-3 has two fields:

    Table 3-3 Trailer Record in Stream Format Description

    Field               Length   Description
    IDENTIFIER          1        The character T.
    NUMBER_OF_DETAILS   9        The total number of DETAIL records in the stream.

A sample CDR input stream might be the following:

H20010613123410D4943311217     4957641506     20010613100112000000045D494106136432   49401531224    20010613100215000000056T000000002

The INP_GenericStream input module uses the stream format description file to break the CDR input stream into the following records:

H20010613123410

D4943311217     4957641506     20010613100112000000045

D494106136432   49401531224    20010613100215000000056

T000000002

The input module uses regular expressions to find each record. Table 3-4 lists the three record types and their regular expressions:

Table 3-4 Regular Expressions for the Records

Record    Regular expression   Description
HEADER    "H.{14}"             H followed by 14 arbitrary characters.
DETAIL    "D.{53}"             D followed by 53 arbitrary characters.
TRAILER   "T.{9}"              T followed by 9 arbitrary characters.

The description of the record must contain the following:

  • The regular expression used by the input module to recognize the physical record

  • The position and types of the fields inside the physical record

A sample stream format description for this example looks like this:

SampleFormat 
{ 
  Header(FIX) 
  { 
    Info 
    { 
      Pattern = "H.{14}"; 
    } 
    IDENTIFIER        AscString(1); 
    CREATION_TIME     AscDate("YYYYmmddHHMMSS"); 
  } 

  Detail(FIX) 
  { 
    Info 
    { 
      Pattern = "D.{53}"; 
    } 

    IDENTIFIER        AscString(1); 
    CALLING_PARTY     AscString(15); 
    CALLED_PARTY      AscString(15); 
    START_TIMESTAMP   AscDate("YYYYmmddHHMMSS"); 
    DURATION          AscInteger(9); 
  } 

  Trailer(FIX) 
  { 
    Info 
    { 
      Pattern = "T.{9}"; 
    } 

    IDENTIFIER        AscString(1); 
    NUMBER_OF_DETAILS AscInteger(9); 
  } 
}

Each field in the record is defined by the field type and value. For example, the fields in the DETAIL record are defined as follows:

    IDENTIFIER        AscString(1); 
    CALLING_PARTY     AscString(15); 
    CALLED_PARTY      AscString(15); 
    START_TIMESTAMP   AscDate("YYYYmmddHHMMSS"); 
    DURATION          AscInteger(9); 

Record Types

The INP_GenericStream input module uses regular expressions to recognize the data records in the input stream. Different types of data records define fields in different ways:

  • Fields can be defined by fixed widths.

  • Fields can be separated by special characters.

  • The length of the field can be included in the input data (as in ASN.1 input).

Therefore, each data record has a record type that tells the input module how to split up the record into fields. Each data record has an Info block in its definition that contains some setup parameters for the record, for example, the field separator for a separated record. The record type determines which parameters can be used.

For example, this DETAIL record uses the record type SEPARATED:

DETAIL(SEPARATED)
{
Info
{
Pattern = ".*\n"; 
FieldSeparator = ';';
RecordSeparator = '\n';
}
SERVICE     AscString();
A_NUMBER    AscString();
B_NUMBER    AscString();
...

Record Type SEPARATED

The record type SEPARATED is used for records in which fields are separated by a special field delimiter character. The record itself can be terminated by another character (for example, the end-of-line symbol \n). Because there is no length information for the record, the regular expression specified as a pattern must match the full record, including the record separator.

There are no restrictions for the data types that can be used inside the SEPARATED record, although it makes no sense to use binary data types inside this record. The length information calculated from the position of the field delimiters overwrites length information specified in the data types. See Table 3-5.

Table 3-5 Parameters in SEPARATED

Parameter         Value       Description                                                 Mandatory
Pattern           String      Regular expression that defines the entire record,         Yes
                              including all fields and the record separator character.
FieldSeparator    Character   Character that delimits single fields.                     No
                              Default = Comma ( , )
RecordSeparator   Character   Character that delimits records.                           No
                              Default = No record separator

Record Type FIX

This record type is used for records with predefined width for each field. The record must contain all the fields. The record length is calculated as a sum of the widths of individual fields. Only data types with width information can be used. See Table 3-6.

Table 3-6 Parameters in FIX

Parameter   Value    Description                                      Mandatory
Pattern     String   Regular expression that identifies the record.   Yes

Record Type ASN

This record type is used for file formats defined in ASN.1, for example, TAP. You can use only the TAP and ASN data types in this record type. See Table 3-7.

Table 3-7 Parameters in ASN

Parameter     Value     Mandatory
Application   Integer   No
Context       Integer   No
Private       Integer   No
Universal     Integer   No
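
For illustration only, the following sketch shows what an ASN record definition might look like, following the record syntax in "Syntax of the Stream Format Description File" and the data types described in "ASN.1 Data Types". The record name, field names, and tag values are hypothetical and are not taken from a shipped TAP description file; all of the Info parameters in Table 3-7 are optional, so the Info block is left empty here:

CallEventDetail(ASN)
{
  Info
  {
  }
  CALL_REFERENCE   ASN_Integer(14, "Context");
  CALLING_NUMBER   ASN_NumberString(15, "Context");
}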

Syntax of the Stream Format Description File

The stream format description file is a simple ASCII file. The following grammar defines the syntax of the stream format description file:

<format-description-file> ::=  (<use_directive> | <stream-format>)* 
<boolean> ::=                  "true" | "false" 
<character> ::=                "'" "single character" "'" 
<decimal> ::=                  "0..9"* "." "0..9"+ 
<extension-field-type> ::=     "any field type defined in a user extension" 
<extension-record-type> ::=    "any record type defined in a user extension" 
<field-name> ::=               <identifier> 
<field-type> ::=               "AscString" | "AscDecimal" | "AscLong" | "AscDate" | ... |
                               <extension-field-type> 
<format-name> ::=              <identifier> 
<identifier> ::=               "a..z,A..Z,_" "a..z,A..Z,0..9,_"* 
<info-block> ::=               "Info" "{" <info-parameter>* "}" 
<info-parameter> ::=           <identifier> "=" <value> ";" 
<integer> ::=                  "0..9"+ 
<record-definition> ::=        <record-name> "(" <record-type> ")" "{" <info-block>
                               <record-field>* "}" 
<record-field> ::=             <field-name> <field-type> "(" [<field-parameter> 
                               [, <field-parameter>]*] ")" ";" 
<record-name> ::=              <identifier> 
<record-type> ::=              "FIX" | "SEPARATED" | "ASN" | <extension-record-type> 
<stream-format> ::=            <format-name> "{" <record-definition>* "}" 
<string> ::=                   "\"" "any character"* "\"" 
<use_directive> ::=            "use" <identifier> ";" 
<value> ::=                    <decimal> | <integer> | <identifier> | <string> | 
                               <character> | <boolean> 

Supported Data Types for the Stream Format Description File

Each entry in the stream format description file assigns a data type to a field. For example, this line assigns the AscString data type to the A number:

CALLING_PARTY     AscString(15); 

Pipeline rating supports the following categories of data types: ASCII, ASN.1, and TAP.

ASCII Data Types

Pipeline Manager supports the following ASCII data types:

  • AscDate

  • AscDecimal

  • AscInteger

  • AscString

  • AscRawString

AscDate

Use the AscDate data type to read and write date/time information as an ASCII string. The AscDate data type can be used without any parameters or with a string that specifies the date format used.

AscDate( [String format] ) 

The format string uses the following patterns:

  • %Y: The year including the century (1901 ... 2037)

  • %y: The year without the century (00 ... 99)

  • %m: Month number (01 ... 12)

  • %d: Day of the month (01 ... 31)

  • %H: Hour (00 ... 23)

  • %M: Minute (00 ... 59)

  • %S: Seconds (00 ... 59)

When no format string is defined, the following default format is used:

%Y%m%d%H%M%S
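
For example, the following field definitions (the field names are illustrative) use the documented format patterns and the default format, respectively:

CREATION_TIME     AscDate("%Y%m%d%H%M%S");
START_TIMESTAMP   AscDate();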

AscDecimal

Use AscDecimal to read and write decimal values to and from ASCII streams.

AscDecimal( [Integer len [, Bool withPoint [, Integer precision [, Char pointChar [, Identifier rounding[, Char padChar]]]]]] ) 
  • len: The total length of the decimal value (default is 0 => unspecified).

  • withPoint: Boolean flag to specify if there is a decimal point in the string (default is true).

  • precision: Number of digits after the decimal point (default is 6).

  • pointChar: Character used as decimal point (default is the point '.').

  • rounding: Rounding method to use (PLAIN, UP, DOWN, BANK) (default is DOWN).

  • padChar: Padding character to use (default is '0').
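
For example, the following hypothetical field reads a 12-character decimal value with an explicit decimal point, 2 digits of precision, '.' as the decimal point character, DOWN rounding, and '0' padding:

CHARGE_AMOUNT   AscDecimal(12, true, 2, '.', DOWN, '0');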

AscInteger

Use AscInteger to read and write integer values to and from ASCII streams.

Integer values are supported in the range from -2147483648 to 2147483647.

Note:

AscInteger cannot be NULL (empty).

AscInteger( [Integer len [, Char padChar]] ) 
  • len: The total length of the integer value (default is 0 => unspecified)

  • padChar: The character used to pad integer values to a fixed length (default is the '0')

AscString

Use the AscString data type to read and write strings to and from an ASCII stream.

AscString( [Integer len [, Char padChar [, Bool isLeftJustified]]] ) 
  • len: The total length of the string (default is 0 => unspecified).

  • padChar: The character used to pad string values to a fixed length (default is a space character).

  • isLeftJustified: Flag indicating that the string is left justified (default is true).

AscRawString

Equivalent to AscString, but preserves leading and trailing spaces while AscString strips all spaces from strings.
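
For example (the field names are illustrative, and AscRawString is assumed to accept the same length parameter as AscString), the first definition reads a 15-character, space-padded, left-justified string, and the second preserves leading and trailing spaces:

A_NUMBER    AscString(15, ' ', true);
FREE_TEXT   AscRawString(20);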

ASN.1 Data Types

Pipeline Manager supports the following ASN.1 data types:

  • ASN_Integer

  • ASN_LegacyOctetString

  • ASN_OctetString

  • ASN_RawOctetString

  • ASN_BcdString

  • ASN_NumberString

  • ASN_HexString

  • ASN_Tag

  • ASN_Blob

ASN_Integer

Use ASN_Integer to read and write integer values from and to ASN.1 streams. Integer values are supported in the range from -2147483648 to 2147483647.

Note:

ASN_Integer cannot be null (empty).

ASN_Integer( Integer TagValue [, String Asn1Class] ) 
  • TagValue: The value to use as ASN.1 Tag.

  • Asn1Class: The Class of the ASN.1 object. Values are:

    • Application

    • Context

    • Private

    • Universal

    The default is Application.

ASN_LegacyOctetString

Use ASN_LegacyOctetString to read and write an octet string (a byte string without any specific encoding for the data, such as ASCII or hex) from and to ASN.1 streams. ASN_LegacyOctetString removes the leading and trailing spaces after decoding an octet string.

This data type is similar to the ASN_OctetString data type except for the following difference:

  • ASN_LegacyOctetString encodes an empty octet string with length = 0 and no value.

  • ASN_OctetString builds an empty octet string with length = 1 and a space for the value. See "ASN_OctetString".

ASN_LegacyOctetString( Integer TagValue [, String Asn1Class] ) 
  • TagValue: The value to use as ASN.1 Tag.

  • Asn1Class: The Class of the ASN.1 object. Values are:

    • Application (Default)

    • Context

    • Private

    • Universal

ASN_OctetString

Use ASN_OctetString to read and write strings from and to ASN.1 streams. An octet string is a byte string without any specific encoding for the data, for example, ASCII or hex.

ASN_OctetString( Integer TagValue [, String Asn1Class] ) 
  • TagValue: The value to use as ASN.1 Tag.

  • Asn1Class: The Class of the ASN.1 object. Values are:

    • Application (Default)

    • Context

    • Private

    • Universal

ASN_RawOctetString

Use ASN_RawOctetString to read and write an octet string (a byte string without any specific encoding for the data, such as ASCII or hex) from and to ASN.1 streams. Unlike ASN_LegacyOctetString, ASN_RawOctetString does not remove the leading and trailing spaces after decoding an octet string.

See also "ASN_LegacyOctetString".

ASN_RawOctetString( Integer TagValue [, String Asn1Class] )  
  • TagValue: The value to use as ASN.1 Tag.

  • Asn1Class: The Class of the ASN.1 object. Values are:

    • Application (Default)

    • Context

    • Private

    • Universal

ASN_BcdString

ASN_BcdString is an extension of ASN_OctetString used to read and write strings containing data coded in binary-coded decimal (BCD) form to and from ASN.1 streams. This type automatically decodes and encodes BCD, so the user accesses the data seamlessly.

ASN_BcdString( Integer TagValue [, String Asn1Class] ) 
  • TagValue: The value to use as ASN.1 Tag

  • Asn1Class: The Class of the ASN.1 object. Values are:

    • Application (Default)

    • Context

    • Private

    • Universal

ASN_NumberString

ASN_NumberString is an extension of ASN_OctetString used to read and write strings that contain only numbers (and spaces) packed in an ASCII string from ASN.1 streams. An ASN_NumberString can be read and written as a string, date, or long.

ASN_NumberString( Integer TagValue [, String Asn1Class] ) 
  • TagValue: The value to use as ASN.1 Tag.

  • Asn1Class: The Class of the ASN.1 object. Values are:

    • Application (Default)

    • Context

    • Private

    • Universal

ASN_HexString

ASN_HexString is an extension of ASN_OctetString used to read and write strings containing data coded in hexadecimal form, but stored as ASCII strings, from and to ASN.1 streams. This type is used because iScript cannot directly manipulate hexadecimal byte strings, so the strings are stored as an ASCII representation of hexadecimal strings. For example, 0x28F3 is stored as 28F3.

ASN_HexString supports cases in which a special conversion method of read or write access is necessary.

ASN_HexString( Integer TagValue [, String Asn1Class] ) 
  • TagValue: The value to use as ASN.1 Tag.

  • Asn1Class: The Class of the ASN.1 object. Values are:

    • Application (Default)

    • Context

    • Private

    • Universal

ASN_Tag

Use ASN_Tag to read and write constructed ASN.1 objects to and from ASN.1 streams. Only the parser should create objects of this type, based on the record definitions in the block description file.

The ASN_Tag object can read both definite and indefinite length ASN.1 objects.

ASN_Blob

ASN_Blob is a special type used to store a complete structured (constructed) ASN.1 object in the form of a byte string. This is useful when you need to transmit a block of data from the input to the output without processing it, and therefore do not need to map the data into EDR container fields.

The only limitation for this type is that the ASN.1 object must have a definite length.

ASN_Blob( Integer TagValue [, String Asn1Class [, String Asn1Form]] )
  • TagValue: The value to use as ASN.1 Tag

  • Asn1Class: The Class of the ASN.1 object. Values are:

    • Application (Default)

    • Context

    • Private

    • Universal

  • Asn1Form: The Form of the ASN.1 object. Values are:

    • Constructed (Default)

    • Primitive

TAP Data Types

Pipeline Manager supports the following TAP data types. These are defined only to match the type name used in the TAP format description file.

  • TAP_AsciiString. Same type as a standard ASN_OctetString.

  • TAP_Description. Same type as a standard ASN_OctetString.

  • TAP_Currency. Same type as a standard ASN_OctetString.

  • TAP_PercentageRate. Same type as a standard ASN_Integer.

Setting Up an Input Mapping File

To create an input mapping file, first identify the data in your external files that you need to map from the external files to the EDR container. You can then either start with one of the sample input mapping files or create your own.

Each mapping entry contains a list of mappings either from data record fields to EDR container fields or from constant values to EDR container fields. You can map a data record field to more than one EDR container field by adding more than one mapping. For example:

A_NUMBER    -> DETAIL.A_NUMBER;
.
.
.
A_NUMBER    -> DETAIL.ASS_GSMW_EXT.A_NUMBER_USER;

The following grammar defines the syntax of the input mapping file:

<input-mapping-file> ::=    <file-format-mapping>* 
<constant> ::=              <integer> | <decimal> | <string> 
<constant-mapping> ::=      <constant> "->" <edr-field> 
<decimal> ::=               "0..9"* "." "0..9"+ 
<edr-field> ::=             <identifier> ("." <identifier>)+ 
<field-mapping> ::=         <field-name> "->" <edr-field> 
<field-name> ::=            <identifier> 
<file-format-mapping> ::=   <file-format> "{" <record-mapping>* "}" 
<identifier> ::=            "a..z,A..Z,_" "a..z,A..Z,0..9,_"* 
<integer> ::=               "0..9"+ 
<mapping-entry> ::=         <field-mapping> | <constant-mapping> 
<mappings> ::=              <mapping-name> "{" <mapping-entry>* "}" 
<record-mapping> ::=        <record-name> "{" <mappings>* "}" 
<string> ::=                "\"" "any character"* "\"" 
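
For example, a minimal mapping file that follows this grammar and reuses entries shown earlier in this chapter might look like this (the format and mapping names are illustrative):

SAMPLE
{
  DETAIL
  {
    STD_MAPPING
    {
      "020"       -> DETAIL.RECORD_TYPE;
      A_NUMBER    -> DETAIL.A_NUMBER;
      B_NUMBER    -> DETAIL.B_NUMBER;
    }
  }
}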

Setting Up an Input Grammar File

The input grammar file contains iScript statements that create an EDR container.

The syntax of the input grammar file is similar to the syntax used in Yacc grammars. The file defines the grammar of the input data that is parsed and the iScript statements that are run when a certain symbol is found in the input data stream.
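
As a purely illustrative, Yacc-style sketch (the rule name and exact punctuation are assumptions, so start from the default grammar files rather than writing a grammar from scratch), a rule for a DETAIL record might attach iScript statements like the ones shown earlier in this chapter:

detail: DETAIL
  {
    edrNew( DETAIL, CONTAINER_DETAIL );
    edrInputMap( "SAMPLE.DETAIL.STD_MAPPING" );
    number = normalizeNumber( edrString( DETAIL.A_NUMBER ), "00", 0 );
    edrString( DETAIL.A_NUMBER ) = number;
  };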

Configuring the Input DataDescription Registry Section

You configure a DataDescription section in the registry for each pipeline. The DataDescription section includes the following entries:

  • StreamFormats: Specifies the input stream format file.

    See "About the Order of Listing Stream Format Description Files".

  • InputMapping: Specifies the input mapping description file.

  • OutputMapping: Specifies the output mapping description file.

    Note:

    You specify the input grammar description file in the Input section.

This sample shows the DataDescription section:

DataDescription
{
  Standard
  {
    ModuleName = Standard
    Module
    {
      StreamFormats
      {
        Format1 = ./formatDesc/Formats/Flist/Flist_v01.dsc
      }
      InputMapping
      {
        Mapping1 = ./formatDesc/Formats/Flist/Flist_v01_InMap.dsc
      }
      OutputMapping
      {
        Mapping1 = ./formatDesc/Formats/Flist/Flist_v01_OutMap.dsc
      }
    }
  }
}

Note:

The DataDescription section also includes an entry for the output mapping file. See "Configuring EDR Output Processing".

About the Order of Listing Stream Format Description Files

Pipeline module instances consider formats for parsing in the order in which the stream format description files are listed in the StreamFormats section of the registry.

Parsing errors can occur in a pipeline if the stream format description files are listed in the wrong order. For example, list the stream format description file that applies to your custom input stream in the StreamFormats section before the output stream format description file.
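
For example, the following sketch lists the description file that applies to the custom input stream (Format1) before the description file used for the output stream (Format2); both file names are for illustration only:

StreamFormats
{
  Format1 = ./formatDesc/Formats/Custom/MyCDR_v01.dsc
  Format2 = ./formatDesc/Formats/Flist/Flist_v01.dsc
}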

Configuring the Input Section in the Registry

Note:

When you configure the Input section, you configure the Pipeline Input Controller. See "Input Controller".

To configure the Input section in the registry, configure the following entries (a minimal sketch follows this list):

  • The UnitsPerTransaction entry: Use this entry to tune performance by controlling how many input units (for example, files) are grouped into a single transaction.

  • The INP_GenericStream module: Specify the input grammar file in this section. See "INP_GenericStream".

    When you configure the INP_GenericStream module, configure one of the following modules as its submodule:

    • EXT_InFileManager: Configure this module if the pipeline receives input from files. This module manages the input, temporary, and done files. See "EXT_InFileManager".

    • EXT_InEasyDB: Configure this module if the pipeline receives input from a database. See "EXT_InEasyDB".

    • EXT_InSocketMgrFlist: Configure this module if the pipeline receives input from a prepaid, flist-based network.
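
The following is a minimal sketch of an Input section for file-based input. The structure follows the sample registry files, but the grammar path is illustrative and your registry typically contains additional entries (see "INP_GenericStream"); the EXT_InFileManager entries are sketched in "About Getting Pipeline Input from Files":

Input
{
  UnitsPerTransaction = 1
  InputModule
  {
    ModuleName = INP_GenericStream
    Module
    {
      Grammar = ./formatDesc/Formats/Flist/Flist_v01_InGrammar.dsc
      InputStream
      {
        ModuleName = EXT_InFileManager
        ...
      }
    }
  }
}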

About Getting Pipeline Input from Files

To configure a pipeline to read data from files, use the EXT_InFileManager module.

When you configure the module, you specify the directories, suffixes, and prefixes for the following files:

  • Input files: CDR files from the mediation system. The prefix and suffix are used by the input module to identify which files to process.

    The input module periodically checks this directory for files with the specified prefix and/or suffix.

  • Done files: Created when a transaction is successfully completed.

  • Error files: Created after a transaction rollback.

To manage file names, you can specify the following:

  • The prefix for temporary files. Temporary files are used as input until the transaction is complete.

  • Whether to replace or append prefixes and suffixes.

  • The time period (in seconds) for which the input directory must be empty before the EVT_INPUT_DIR_EMPTY event is sent.

See "EXT_InFileManager".

About Getting Pipeline Input from a Database

To get pipeline input from a database, use the EXT_InEasyDB module.

To set up a pipeline for database input:

  1. Create a job file that consists of SQL statements. It can also include iRule variables, which are defined in a parameter file. The EXT_InEasyDB module uses the commands to create EDRs.

  2. Place the job file in a specific directory. When Pipeline Manager starts, the EXT_InEasyDB module finds the directory from the registry and starts the input process according to the commands in the job file.

You can stop and restart the pipeline after a system crash by configuring a restart file.

You can start the pipeline by using the ReadDatabase semaphore. If the module receives a start command while it is already processing, the new SQL command is written to a job file.

The values returned from the database input stream contain all fields of each selected row, separated by a configurable delimiter.

To configure the pipeline to read data from a database, see "EXT_InEasyDB".
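
As a purely hypothetical sketch, a job file might contain plain SQL such as the following (the table and column names are invented, and the exact job file syntax, including how iRule variables from the parameter file are referenced, is described in "EXT_InEasyDB"):

SELECT record_type, a_number, b_number, start_time, duration
FROM   cdr_staging
ORDER  BY start_time;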

Specifying the Maximum Errors Allowed in an Input File

You can configure the Output Controller to reject an entire input stream that exceeds a maximum percentage of errors. For example, you can specify to reject an input stream if over 20% of the EDRs have a particular error.

You specify the error threshold by using the MaxErrorRates entry in the Output section of the registry file.

Note:

You can also configure a pipeline to reject individual EDRs by using the FCT_Reject module. For information, see "About Standard Recycling" and "Recycling EDRs in Pipeline-Only Systems".

When an input stream exceeds the error threshold, the Output Controller:

  • Deletes all output streams associated with the input stream. For example, if a pipeline splits an input stream into five output streams, the Output Controller deletes all five output streams.

  • Moves the input stream to the error directory. You define the location of the error directory by using the ErrorPath registry entry. For information, see "EXT_InFileManager".

To set an error threshold:

  1. Stop Pipeline Manager, if necessary.

  2. Open your registry file in a text editor.

  3. Edit the Pipeline Output Controller's MaxErrorRates registry entries, making sure you:

    • List all error codes that the Pipeline Output Controller should monitor.

    • Specify an error threshold for each error code. The threshold specifies the maximum percentage of EDRs that are allowed to have the particular error.

    For example, the following entries configure the Output Controller to reject a stream if any of the following are true:

    • Over 10% of the EDRs have an INF_EDR_REJECTED error.

    • Over 8% of the EDRs have an ERR_CUST_NOT_FOUND error.

    • Over 20% of the EDRs have an ERR_CHARGED_ZONE_NOT_FOUND error.

    Output 
    { 
        ...
        MaxErrorRates 
        { 
            INF_EDR_REJECTED = 10 
            ERR_CUST_NOT_FOUND = 8 
            ERR_CHARGED_ZONE_NOT_FOUND = 20 
        } 
        ...
    }
  4. Save and close the registry file.

  5. Restart Pipeline Manager.

Reading TAP Files

Pipeline Manager can read the following TAP versions:

  • TAP-0301 from the TD57v3.04.00 specification

  • TAP-0303 from the TD57v3.07.01 specification

  • TAP-0304 from the TD57v3.08.02 specification

  • TAP-0309 from the TD57v3.90 specification

  • TAP-0310 from the TD57v3.10.01 specification

  • TAP-0311 from the TD57v28 specification

  • TAP-0312 from the TD57v30.1 specification

  • TAP-0312 from the TD57v32.1 specification

You can specify TAP input grammar files when you set up your input modules. The files are located in the pipeline_home/formatDesc/Formats/TAP3 directory, where pipeline_home is the directory in which you installed Pipeline Manager.
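
For example, you point the INP_GenericStream Grammar entry at the TAP grammar file for the appropriate version (the file name below is illustrative; use the actual file shipped in pipeline_home/formatDesc/Formats/TAP3):

InputModule
{
  ModuleName = INP_GenericStream
  Module
  {
    Grammar = ./formatDesc/Formats/TAP3/TAP_0311_InGrammar.dsc
  }
}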

Note:

Because the TAP 3.10 standard introduced fundamental changes, files produced according to earlier versions of the TAP standard are not compliant with the TAP 3.10 standard. Therefore, the TAP 3.10 grammar files cannot be used to process previous TAP versions.

Note the following implementation details:

  • The following records are generated when TAP is processed by Pipeline Manager:

    • 1 Header record (010): 1 EDR

    • 1 Trailer record (090): 1 EDR

    • 1 GPRS record (040 for SGSN, 042 for GGSN or mixed ticket): 1 EDR

    • 1 mobile supplementary service (MSS) record (029): 1 EDR

    • 1 service center usage (SCU) record (050): 1 EDR

    • 1 value added service (VAS) record (060): 1 EDR

    • 1 content transaction (CONT) record (999): 1 EDR

    • 1 location service (LOCN) record (998): 1 EDR

    • 1 mobile originating call (MOC) record (021): n EDRs: one EDR for every basic service used.

    • 1 mobile terminating call (MTC) record (031): n EDRs: one EDR for every basic service used.

  • For each supplementary service, an SS_PACKET is created and attached to the corresponding EDR. For every charge detail of the VAS array, a charge packet is added to the latest generated EDR. The VAS short description is stored in the DETAIL.ASS_CBD.CP.PRODUCTCODE_USED field.

  • The Value added service used block is used to build an associated charge breakdown record containing data for rating.

  • The CAMEL service information is stored in the ASSOCIATED_CAMEL_EXTENSION ("700") block of the EDR. The associated charge packets are stored on the same ASS_CBD as others but with the PRODUCTCODE_USED field set to CAMEL to identify them.

  • Mobile originating and terminating records (MOC and MTC records) are split into multiple records downstream by the ISC_TapSplitting iScript. This iScript generates one EDR for every basic service in the basic service used array. For more information, see "ISC_TapSplitting".

Note the following restrictions:

  • The size of the ASN.1 string is not checked during input.

  • Pipeline Manager does not add the MIN to the EDR.

  • Pipeline Manager does not add the electronic serial number (ESN) to the EDR.

The TAP output grammar recognizes all record types generated by Pipeline Manager.

About Customizing Mapping of Flist Fields to Rating EDR Container Fields

To process events received from the Connection Manager (CM), a real-time pipeline converts the event data from flist format to rating EDR format for processing by Pipeline Manager.

BRM provides default flist-to-rating-EDR mappings for GSM, GPRS, and SMS events in the pipeline_home/formatDesc/Formats/Realtime/rate_event.xml file. When the real-time pipeline starts, the INP_Realtime module uses the descriptions in this XML file to construct an in-memory representation of the flist-to-rating-EDR mapping. The pipeline uses the in-memory mapping to convert incoming flists to rating EDR format.

Table 3-8 describes the XML elements used in the flist-to-rating-EDR mapping:

Table 3-8 XML Elements in flist-to-rating-EDR Mappings

XML element   Description
OpcodeMap     The root node of the XML document. The XML document can have only one
              OpcodeMap element. This element requires the opcode attribute, which is a
              unique string used to differentiate between OpcodeMap elements in other
              XML files.
InMap         The flist-to-rating-EDR mappings for the input flist. The XML document can
              have only one InMap element. The containerType attribute specifies the
              root-level name of the rating EDR container.
FlistField    A field in the input flist. If the target attribute is present, the flist
              field is mapped to the target field in the rating EDR container.
EdrBlock      Indicates when a rating EDR block should be created.

Sample input flist:

0 PIN_FLD_POID           POID [0] 0.0.0.1 /account 12035 15
0 PIN_FLD_EVENT        SUBSTRUCT [0] allocated 52, used 52
1     PIN_FLD_POID           POID [0] 0.0.0.1 /event/delayed/session/telco/gsm 1373193266068995136 0
1     PIN_FLD_ACCOUNT_OBJ    POID [0] 0.0.0.1 /account 12035 0
1     PIN_FLD_START_T      TSTAMP [0] (1081882800) Tue Apr 13 12:00:00 2004
1     PIN_FLD_END_T        TSTAMP [0] (1081883200) Tue Apr 13 12:00:00 2004   
1     PIN_FLD_QUANTITY    DECIMAL [0] 400
1     PIN_FLD_TELCO_INFO   SUBSTRUCT [0] allocated 20, used 12
2         PIN_FLD_CALLING_FROM    STR [0] "0049100050"
2         PIN_FLD_CALLED_TO       STR [0] "0049100051"
2         PIN_FLD_USAGE_CLASS     STR [0] "NORM"
2         PIN_FLD_TERMINATE_CAUSE   ENUM [0] 0
1     PIN_FLD_GSM_INFO     SUBSTRUCT [0] allocated 20, used 14
2         PIN_FLD_CALLED_NUM_MODIF_MARK   ENUM [0] 0
2         PIN_FLD_DIRECTION          ENUM [0] 0
2         PIN_FLD_CELL_ID            STR [0] "123456"
2         PIN_FLD_SUB_TRANS_ID       STR [0] "S"
2         PIN_FLD_DESTINATION_SID    STR [0] ""
2         PIN_FLD_NUMBER_OF_UNITS    DEC[0] 1.0

The default flist-to-rating-EDR mapping in XML for the above flist:

<?xml version="1.0" encoding="UTF-8" standalone="no" ?>
<OpcodeMap xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:noNamespaceSchemaLocation="./OpcodeMapping.xsd" 
opcode="PCM_OP_RATE_PIPELINE_EVENT">
   <InMap containerType="DETAIL">
      <!--CONSTANT EDR DETAIL ITEMS-->
      <EdrField name="DETAIL.RECORD_TYPE" value="020" />
      <EdrField name="DETAIL.DISCARDING" value="0" />
      <!--EVENT FLIST ITEMS-->
      <FlistField name="PIN_FLD_EVENT">
         <!--EVENT POID-->
         <FlistField name="PIN_FLD_POID" format="type" target="DETAIL.EVENT_TYPE" alias="EventType"/>
         <!--ACCOUNT_OBJ-->
         <FlistField name="PIN_FLD_ACCOUNT_OBJ" format="short" target="DETAIL.CUST_A.ACCOUNT_PARENT_ID" />
         <!--CHARGING_START-->
         <FlistField name="PIN_FLD_START_T">
            <EdrField name="DETAIL.CHARGING_START_TIMESTAMP" />
            <EdrField name="DETAIL.NE_CHARGING_START_TIMESTAMP" />
         </FlistField>
         <!--CHARGING_END-->
         <FlistField name="PIN_FLD_END_T">
            <EdrField name="DETAIL.CHARGING_END_TIMESTAMP" />
            <EdrField name="DETAIL.NE_CHARGING_END_TIMESTAMP" />
         </FlistField>
         <!--QUANTITY FIELD-->
         <FlistField name="PIN_FLD_QUANTITY" target="DETAIL.DURATION" />
         <!--TELCO FLIST ITEMS-->
         <FlistField name="PIN_FLD_TELCO_INFO">
            <!--A_NUMBER-->
            <FlistField name="PIN_FLD_CALLING_FROM">
               <EDRField name="DETAIL.A_NUMBER"/>
               <EDRField name="DETAIL.ASS_GSMW_EXT.A_NUMBER_USER" onAliasName="EventType" onAliasValue="/event/delayed/session/telco/gsm" />
            </FlistField>
            <!--B NUMBER-->     
            <FlistField name="PIN_FLD_CALLED_TO">
               <EDRField name="DETAIL.B_NUMBER"/>
               <EDRField name="DETAIL.ASS_GSMW_EXT.DIALED_DIGITS" onAliasName="EventType" onAliasValue="/event/delayed/session/telco/gsm" />
            </FlistField>
            <!--USAGE_CLASS-->  
            <FlistField name="PIN_FLD_USAGE_CLASS" target="DETAIL.USAGE_CLASS" />
            <!--CALL_COMPLETION_INDICATOR-->
            <FlistField name="PIN_FLD_TERMINATE_CAUSE" target="DETAIL.CALL_COMPLETION_INDICATOR" />
         </FlistField> <!--END TELCO FLIST ITEMS-->
         <!--GSM FLIST ITEMS-->
         <FlistField name="PIN_FLD_GSM_INFO" onAliasName="EventType" onAliasValue="/event/delayed/session/telco/gsm" optional="true">
            <EdrBlock name="DETAIL.ASS_GSMW_EXT"/>
            <!--CONSTANT DETAIL.ASS_GSMW_EXT EDR ITEMS-->
            <EdrField name="DETAIL.ASS_GSMW_EXT.RECORD_TYPE" value="520" />
            <FlistField name="PIN_FLD_CALLED_NUM_MODIF_MARK" target="DETAIL.B_M ODIFICATION_INDICATOR" />
            <FlistField name="PIN_FLD_DIRECTION" target="DETAIL.USAGE_DIRECTION" />
            <FlistField name="PIN_FLD_CELL_ID" target="DETAIL.ASS_GSMW_EXT.CELL_ID"/>
            <FlistField name="PIN_FLD_SUB_TRANS_ID" target="DETAIL.LONG_DURATION_INDICATOR" />
            <FlistField name="PIN_FLD_NUMBER_OF_UNITS" target="DETAIL.NUMBER_OF_UNITS" />
         </FlistField> <!--END GSM FLIST ITEMS-->
         <!--END EVENT FLIST ITEMS-->
      </FlistField>
   </InMap>
</OpcodeMap>

You can customize the default GSM, GPRS, and SMS mappings and create custom mappings for other types of events. To customize flist-to-rating-EDR mappings, you must be familiar with the following topics:

  • BRM flists

  • XML

  • XML Schema

To create and use a custom mapping:

  1. Edit the pipeline event XML file (pipeline_home/formatDesc/Formats/Realtime/rate_event.xml) to add your custom mappings.

  2. Validate the XML file by using the pipeline_home/formatDesc/Formats/Realtime/opcode_ifw_mapping.xml file.

    Note:

    Using an invalid XML file prevents the INP_Realtime module from successfully starting. Make sure you validate the XML file against the XML schema.

About the POID Format in the Rating EDR Container

If an flist field is a POID, the format attribute is required to indicate the format of the POID in the rating EDR container. The following format types are supported: long, short, id, and type.

For example, the format of POID 0.0.0.1 /account 12065 is mapped as follows:

  • 1_12065 /account when the format attribute is long

  • 1_12065 when the format attribute is short

  • 12065 when the format attribute is id

  • /account when the format attribute is type
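
For example, the default mapping shown earlier in this chapter uses the short format for the account POID:

<FlistField name="PIN_FLD_ACCOUNT_OBJ" format="short" target="DETAIL.CUST_A.ACCOUNT_PARENT_ID" />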

Mapping an Flist Field to Multiple Rating EDR Container Fields

If the FlistField element contains child EdrField elements, the flist field is mapped to multiple rating EDR fields.

In the following example, PIN_FLD_END_T is mapped to multiple fields in the DETAIL EDR block:

<FlistField name="PIN_FLD_END_T">
<EdrField name="DETAIL.CHARGING_END_TIMESTAMP" />
<EdrField name="DETAIL.NE_CHARGING_END_TIMESTAMP" />
</FlistField>

Using Conditions to Map an Flist Field to a Rating EDR Container Field

You can use the FlistField attributes alias, onAliasName, and onAliasValue for conditional mappings.

In this example, the input flist is for a /event/delayed/session/telco/gsm event. The PIN_FLD_GSM_INFO substruct of the event is mapped to the DETAIL.ASS_GSMW_EXT EDR block.

<FlistField name="PIN_FLD_POID" format="type" target="DETAIL.EVENT_TYPE" alias="EventType"/>
<!--GSM FLIST ITEMS-->
<FlistField name="PIN_FLD_GSM_INFO" onAliasName="EventType" onAliasValue="/event/delayed/session/telco/gsm" optional="true"> <EdrBlock name="DETAIL.ASS_GSMW_EXT"/>
<!--CONSTANT DETAIL.ASS_GSMW_EXT EDR ITEMS-->
<EdrField name="DETAIL.ASS_GSMW_EXT.RECORD_TYPE" value="520" />
<EdrField name="DETAIL.ASS_GSMW_EXT.TIME_BEFORE_ANSWER" value="0" />
<EdrField name="DETAIL.ASS_GSMW_EXT.NUMBER_OF_SS_PACKETS" value="0" />
<FlistField name="PIN_FLD_CALLED_NUM_MODIF_MARK" target="DETAIL.B_MODIFICATION_INDICATOR" />
<FlistField name="PIN_FLD_DIRECTION" target="DETAIL.USAGE_DIRECTION" />
<FlistField name="PIN_FLD_CELL_ID" target="DETAIL.ASS_GSMW_EXT.CELL_ID" />
<FlistField name="PIN_FLD_SUB_TRANS_ID" target="DETAIL.LONG_DURATION_INDICATOR" />
<FlistField name="PIN_FLD_NUMBER_OF_UNITS" target="DETAIL.NUMBER_OF_UNITS" />
</FlistField> <!--END GSM FLIST ITEMS-->

You can also map an alias to an EdrField defined elsewhere. For example:

<FlistField name="PIN_FLD_START_T" target="DETAIL.CHARGING_START" alias="startTime"/>
<!--GSM FLIST ITEMS-->
<FlistField name="PIN_FLD_GSM_INFO" >
<EdrBlock name="DETAIL.ASS_GSMW_EXT"/>
<!--CONSTANT DETAIL.ASS_GSMW_EXT EDR ITEMS-->
<EdrField name="DETAIL.ASS_GSMW_EXT.CHARGING_START" useAlias="startTime" />
</FlistField> <!--END GSM FLIST ITEMS-->