2 Node Attributes

This chapter describes attributes of the Oracle Communications Offline Mediation Controller nodes.

Collection Cartridges (CCs)

CC nodes should contain the following:

  • EITransport

  • NPLFieldProcessor

  • Knowledge of (or Factory for) the appropriate DCFieldContainer object to generate for the incoming data

The EITransport class reads the data in from outside the system and creates the appropriate DCFieldContainer objects. The NPLFieldProcessor then maps the data from the DCFieldContainer into a NAR, based on the commands in the NPL file. The DataProvider and DataReceiver relationships are shown in Table 2-1.

Table 2-1 DataProviders and DataReceivers for Collection Cartridges

DataProvider         DataReceiver

EITransport          NPLFieldProcessor
NPLFieldProcessor    NARFileManager

Processor Cartridges

There is a base ProcessorNode class. However, most developers should not have to derive new Processor Node classes. Instead, use one of the provided Processor Nodes (see the list below) or a generic NPLProcessorNode (which contains an NPLFieldProcessor), and implement the desired functionality using NPL commands.

In a ProcessorNode, the node's DCStreamHandler (i.e., the NARFileManager) is both the DataProvider and the DataReceiver for the FieldProcessor:

Table 2-2 DataProviders and DataReceivers for Processor Cartridges

DataProvider         DataReceiver

NARFileManager       NPLFieldProcessor
NPLFieldProcessor    NARFileManager

Several generic Processor Nodes are provided as part of the Offline Mediation Controller system and are described below.

NPLProcessorNode

This is a generic Processor that can be used for adding or removing fields from a NAR or filtering records based on a particular condition or calculation.

FileEnhancerNode

This node uses a lookup file that can be used to add additional data to a NAR. For example, a file containing a listing of port numbers and their associated applications can be used to add an application description field to a NAR, based on the port number contained in one of the other fields in the NAR.

Lookup file

To use this node you must provide a lookup file that conforms to the following format:

key_value_separator = 'separator1'
pair_separator = 'separator2'
keyPart1=valuePart1
keyPart2=valuePart2
...
keyPartN=valuePartN

where the first and second lines are optional. When either of these lines is omitted, the following defaults are used:

key_value_separator = "="
pair_separator = "\n"

The key_value_separator and pair_separator values must be wrapped in single quotes. The only escape sequences recognized are '\n', '\t' and '\r'; all other characters between the single quotes are interpreted as independent, literal characters.

pair_separator='\n###NEW PAIR###\n'

This is interpreted as a newline, followed by ###NEW PAIR###, followed by a newline.

key_value_separator='\t\f'

This is interpreted as a tab followed by the two literal characters \ and f (NOT a form feed).
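These parsing rules can be sketched as follows. This is an illustrative sketch, not the FileEnhancerNode's actual implementation; LookupFileParser is a hypothetical class name, and the real node's handling of edge cases may differ.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Pattern;

// Illustrative sketch of the lookup-file format described above.
public class LookupFileParser {

    // Only '\n', '\t' and '\r' are recognized as escape sequences; all
    // other characters are treated as independent, literal characters.
    public static String unescape(String s) {
        StringBuilder out = new StringBuilder();
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            if (c == '\\' && i + 1 < s.length()) {
                char next = s.charAt(i + 1);
                if (next == 'n') { out.append('\n'); i++; continue; }
                if (next == 't') { out.append('\t'); i++; continue; }
                if (next == 'r') { out.append('\r'); i++; continue; }
            }
            out.append(c);
        }
        return out.toString();
    }

    // Extracts the text between the single quotes of a header line.
    private static String quoted(String line) {
        return line.substring(line.indexOf('\'') + 1, line.lastIndexOf('\''));
    }

    public static Map<String, String> parse(String fileText) {
        String keyValueSep = "=";   // default when the header line is omitted
        String pairSep = "\n";      // default when the header line is omitted

        // The first two lines may optionally override the separators.
        for (int i = 0; i < 2; i++) {
            int nl = fileText.indexOf('\n');
            String line = (nl >= 0 ? fileText.substring(0, nl) : fileText).trim();
            if (line.startsWith("key_value_separator")) {
                keyValueSep = unescape(quoted(line));
            } else if (line.startsWith("pair_separator")) {
                pairSep = unescape(quoted(line));
            } else {
                break;
            }
            fileText = (nl >= 0) ? fileText.substring(nl + 1) : "";
        }

        Map<String, String> table = new LinkedHashMap<>();
        for (String pair : fileText.split(Pattern.quote(pairSep))) {
            int idx = pair.indexOf(keyValueSep);
            if (idx > 0) {
                table.put(pair.substring(0, idx),
                          pair.substring(idx + keyValueSep.length()));
            }
        }
        return table;
    }
}
```

For instance, a file body of 80=http and 443=https with the default separators yields a two-entry table, and unescape("\t\f") produces a tab followed by the literal characters \ and f, as described above.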

NPL File

Following is an example of NPL that you could use with the FileEnhancerNode to do "application" enhancement.

  • Input field in.1 is a port number

  • Variable tablename names a cross-reference table from port number to application name (example: 80=http). This table is generated from the contents of your lookup file.

  • Output field out.2 is the attribute which will receive the application name.

  • The import statement imports the interface corresponding with the Method Handler implementation that will be used by the NPL for the Java hook (the lookup call).

Example 2-1 FileEnhancer NPL

//  NPL file to do simple app enhancement on a 1-attribute NAR.

import com.nt.udc.processor.FileEnhancer.FileEnhMethodHandlerIfc;

String tablename = "apps";

InputRec {
  String  1;         // port number
} in;

OutputRec {
  String  1;         // port number
  String  2;         // application name
} out;

out.1 = in.1;
out.2 = Java.lookup(tablename,in.1);

write(out);

NodeConfigGUI Fields

Currently there are only two fields in the node-specific portion of the GUI: Lookup File and Lookup Table Name. Lookup File is the absolute path to the file used to generate the lookup table. Lookup Table Name is the tag used from NPL to access this table; this attribute is included because support for lookups from multiple tables in one FileEnhancer is planned. Support for configuring loaders (objects that populate a lookup file for you from some source, such as an LDAP directory) will also be added to the Node Config GUI in the near future.

FtpFileEnhancerNode

Similar to the FileEnhancerNode, except that it obtains the lookup file from a remote location via FTP.

LDAPEnhancerNode

Directory information is gathered from an LDAP directory and stored in a file, which can then be used to add or remove fields in the NAR.

Multithreaded Programmable Aggregation Processor Node

You can customize the Aggregation Processor node by its NPL file, which contains functions that perform the common aggregation tasks, including record storage and retrieval and attribute aggregation.

Note:

All aggregator NPL rule files must start with the following statement:

import com.metasolv.nm.processor.MXAggregator.MXJavahookHandler;
Configuring the Aggregator NPL Rule File

The following configuration variables are available in the Aggregator NPL rule file:

ModulusAttribute

This multithreaded aggregator distributes input records to its processor threads by using a modulus routing algorithm. The algorithm is almost identical to the one used for routing NARs between cartridges.

The only difference is that the multithreaded aggregator allows two additional types of NAR fields to be used as selector values for modulus routing. Normal modulus routing allows only IntFields and LongFields to be used as selector values. The multithreaded aggregator allows BytesFields and StringFields in addition to IntFields and LongFields.
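One way such a router could work is to reduce the selector value to a non-negative integer and take it modulo the thread count. This is a sketch only; the node's actual reduction of String and Bytes selectors to a modulus value is not documented here, and ModulusRouter is a hypothetical name. The essential property is that the same selector always maps to the same thread, so all partial records for one session are aggregated by a single thread.

```java
import java.util.Arrays;

// Sketch of selector-based modulus routing over the four selector types.
public class ModulusRouter {

    // IntField / LongField selectors: use the numeric value directly.
    public static int route(long selector, int threads) {
        return (int) Math.floorMod(selector, (long) threads);
    }

    // StringField selectors: reduce to a stable hash first (illustrative).
    public static int route(String selector, int threads) {
        return Math.floorMod(selector.hashCode(), threads);
    }

    // BytesField selectors: reduce to a stable hash first (illustrative).
    public static int route(byte[] selector, int threads) {
        return Math.floorMod(Arrays.hashCode(selector), threads);
    }
}
```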

FlushOnStartup

This variable indicates whether any valid records in the stored file hash tables should be flushed to the output on startup. If records were stored in a table when the node was stopped, they are restored on the next startup; this option provides a simple means of ensuring any old items in the tables are immediately flushed. The valid values for this item are "true" and "false". The default is "false".

Hash Table Definitions

The Aggregation Processor supports multiple tables. Each table is mapped to a file in the Aggregation Processor's scratch directory. The configuration items and Java hooks that operate on a specific hash table use an index. The indexes start at 0.

NumberOfHashTables

This variable tells the node how many file hash tables to create.

The following variables are repeated, one for each table. The GUI specifies the timer value for the tables. This value must be the same for all tables.

HashTableKeysX

Specifies the attribute IDs that are used as keys for a specific file hash table. The X is the index of the hash table. For example, HashTableKeys3. The node constructs the key using the attribute IDs specified in the HashTableKeysX variable. The value of this variable is a string of IDs separated by spaces. For example, "20001 20232 10036".
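Conceptually, the key is built by taking the values of the configured attribute IDs, in order. The sketch below is hypothetical (KeyBuilder and the '|' delimiter are illustrative choices, and the real node's key encoding is internal), but it shows the idea: the same attribute values always produce the same key.

```java
import java.util.Map;

// Hypothetical sketch of key construction from a HashTableKeysX value.
public class KeyBuilder {
    public static String buildKey(String hashTableKeys, Map<Integer, Object> nar) {
        StringBuilder key = new StringBuilder();
        for (String id : hashTableKeys.trim().split("\\s+")) {
            // Each configured attribute contributes its value, in order.
            key.append(nar.get(Integer.parseInt(id))).append('|');
        }
        return key.toString();
    }
}
```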

HashTableFlushX

Specifies the behavior of the flush timer for a table, where X is the table index. The three possible values for this field and their associated behaviors are:

  1. write - the record is output to the next node in the chain

  2. delete - the record is dropped from the node chain

  3. off - the flush timer is not active for the table

Traffic Volume Configurations

The traffic volume container Java hooks require you to set the following configuration variables:

TVMChangeTime

Specifies the string name for the time change component of a traffic volume container. For example, “ChangeTime".

TVMUplinkVolume

Specifies the name for the item containing the uplink volume in the traffic volume container. For example, “DataVolumeGPRSUplink".

TVMDownlinkVolume

Specifies the name for the item containing the downlink volume in the traffic volume container. For example, “DataVolumeGPRSDownlink".

Convenience Attribute Sets

There are Java hook functions that accept named sets of attributes. These sets are specified in the configuration file and can use any unused name; the Java hooks perform the appropriate lookup for the name they receive. This is a more optimized approach than passing the full string attribute list each time a Java hook is called: in the string-list approach, the string must be parsed and tokenized on every call, whereas with named sets each attribute set is loaded the first time it is needed and an internal representation is stored for subsequent uses.

For example, the following line can appear in the configuration:

Config {
…
MyAttributeSet "1 5 20001 32075";
}

And then this attribute set could be called from some Java hooks:

Java.replaceAttributeSet(in, out, "MyAttributeSet");
…
Java.removeAttributeSet(out, "MyAttributeSet");
Java Hooks

The following are all the Java hooks provided by the Aggregation Processor. When a parameter type of NAR is used, the node expects to receive an InputRec or OutputRec type.

void appendLists(NAR source, NAR dest, String attrList)

Searches the attributes in the attrList and for each, appends it from the source NAR to the corresponding attribute in the destination NAR.

void appendListsWithoutRepeat(NAR source, NAR dest, String attrList)

Performs the same function as appendLists with one exception: when appending, a value will not be repeated in succession. For example, appending the following:

Source: “1", “5", “7", “7", “2"

Dest: “9", “5", “5", “6", “1"

Produces the following: “9", “5", “5", “6", “1", “5", “7", “2"
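The no-successive-repeat rule can be modeled over plain lists as follows (a sketch only; the real hook operates on NAR list attributes). Note that existing repeats already inside the destination are preserved; only a value that would immediately repeat the last element is skipped while appending.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the append-without-successive-repeat behavior described above.
public class AppendWithoutRepeat {
    public static List<String> append(List<String> source, List<String> dest) {
        List<String> result = new ArrayList<>(dest);
        for (String v : source) {
            // Skip a value that would repeat the previous element in succession.
            if (!result.isEmpty() && result.get(result.size() - 1).equals(v)) {
                continue;
            }
            result.add(v);
        }
        return result;
    }
}
```

Running this on the example above reproduces the documented result: the leading "1" of the source is dropped (it repeats the destination's trailing "1"), as is the second "7".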

Integer compareBytes(NAR A, NAR B, Integer attrId)

Performs a byte comparison of the attributes specified in the two NARs.

Returns: “1" if the two are the same or “0" otherwise.

void concatenateStrings(NAR source, NAR dest, String attrList)

Searches the attributes specified in the string and concatenates the corresponding source attribute onto the destination.

void concatenateStrings(NAR source, NAR dest, String separator, String attrList)

Searches the attributes specified in the string and concatenates the corresponding source attribute onto the destination similar to the above function. This variation also inserts the separator between each destination and source when concatenating.

Integer distributeTrafficVolumeSetPerDay(NAR in, String trafficVolumesID, NAR dest, String uplinkSetName, String downlinkSetName)

Parses the traffic volume containers, indicated by the trafficVolumesID, and distributes the uplink and downlink volumes into other attributes according to the time. The destination attributes used for the volume distribution are specified by the uplinkSetName and downlinkSetName. Each of these must always contain 24 attributes, each corresponding to one hour of the day.

For example, if the in NAR contains two traffic volumes, one for 8am and another for 2pm, the corresponding uplink and downlink values will be put into the 9th and 15th attribute Ids from the corresponding sets.

This function only distributes the traffic volumes for one day. For example, if there is a traffic volume container that is for the next day, that traffic volume and any following it are not processed. If there are any unprocessed traffic volume containers, the processed ones will be removed from the incoming NAR so only the unprocessed ones will remain. The incoming NAR is not modified if all the volume containers are processed.

The traffic volume configuration information must be set for this function to operate. The function returns “1" if all the volume containers were processed and “0" if some were not processed and have been indicated in the incoming NAR.
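The hour-to-slot mapping can be sketched as follows. This is a simplified illustration (HourlyDistributor is a hypothetical name): the real hook parses traffic-volume containers via the TVM configuration and handles day boundaries, which this sketch omits.

```java
// Sketch of the per-day distribution rule: a traffic volume whose change
// time falls in hour h (0-23) is added to the (h+1)-th attribute of the
// 24-entry destination set, i.e. zero-based slot h.
public class HourlyDistributor {
    public static long[] distribute(int[] hours, long[] volumes) {
        long[] slots = new long[24];  // one slot per hour of the day
        for (int i = 0; i < hours.length; i++) {
            slots[hours[i]] += volumes[i];  // e.g. 8am -> slot 8 (9th attribute)
        }
        return slots;
    }
}
```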

Integer distributeTrafficVolumesPerDay(NAR in, String trafficVolumesID, NAR dest, String uplinkVolumeAttrs, String downlinkVolumeAttrs)

This method is the same as the one above, with the exception that the uplink and downlink attribute IDs are listed as a string parameter rather than using a named set.

Bytes generateOpeningTimeFromTrafficVolume(List trafficVolumes)

Returns a byte time representation for the beginning of the day for the first traffic volume container in the list. More specifically, it is the bytes representation of the same timestamp, but with the hour, minute and seconds all set to “0".

The traffic volume configuration information must be set for this function to operate.

Integer getBytesValueFromListMapIp(List inData, Integer index, NAR dest, String attrId)

This specialized function sets the bytes in the attrId of the destination to the IP of the first valid IP type in a map, where the map comes from the specified item, as per index, in the list. For example, in the Wireless market segment, the SGSN IP Address List field is a list of maps, with an IP specified in these maps. Using this function on this field sets the bytes in the destination to be the bytes representation of the IP address of one of the SGSNs in the list, using index to determine which SGSN.

The index is specified starting at 0. This will return 1 if an IP was found and subsequently set in the destination, otherwise 0 will be returned.

Integer getDayOfYear(Bytes date)

Determines the day of the year for the given date. For example, if the date is March 20, 2003, the method returns "79".

Integer getNAR(NAR out)

Searches for a NAR in the default hash table, 0. Upon finding the NAR, the method returns it in the out parameter and removes it from the table. The method also removes any timers. To search, the method uses the key created by the last call to setKey for this hash table.

The method returns “1" if it finds a NAR and “0" otherwise.

Integer getNAR(NAR out, Integer index)

Performs the same function as getNAR above, but also supplies an index to indicate the hash table to use.

Integer getPreviousDayOfYear(Bytes date)

Returns the day of the year for the day previous to the current date. This is the same as “getDayOfYear(date)-1" except in the case where the current date is the first day of the year. In this case, the method returns the last day of the previous year.
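The behavior of both day-of-year hooks can be sketched with java.time. This is an illustration only: the real hooks take a Bytes timestamp, which a LocalDate stands in for here. Subtracting one day before asking for the day of year handles the year boundary automatically (March 20, 2003 is day 79; the day before January 1, 2003 is day 365 of 2002).

```java
import java.time.LocalDate;

// Sketch of the day-of-year helpers described above.
public class DayOfYear {
    public static int getDayOfYear(LocalDate date) {
        return date.getDayOfYear();
    }

    public static int getPreviousDayOfYear(LocalDate date) {
        // Jan 1 rolls back to Dec 31 of the previous year automatically.
        return date.minusDays(1).getDayOfYear();
    }
}
```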

void keepMaxAttributes(NAR source, NAR dest, String attrList)

Searches the attributes specified in attrList, and puts the greater value of the source or destination in the dest field. This method supports the following types:

  • byte

  • short

  • int

  • long

void keepMinAttributes(NAR source, NAR dest, String attrList)

Searches the attributes specified in attrList, and puts the lesser value of the source or destination in the dest field. This function supports the following types:

  • byte

  • short

  • int

  • long

void removeAttributes(NAR in, String attrList)

Removes all attributes specified in attrList from the incoming NAR.

void removeAttributeSet(NAR in, String setName)

This method performs the same function as above, except the attribute list is contained in the set specified by setName.

Integer removeNAR()

Deletes the record matching the key set for hash table 0 from the table.

The method returns “1" if it finds and removes a matching record or “0" if it does not find a matching record.

Integer removeNAR(Integer index)

Deletes the record matching the key set for the hash table specified by index.

The method returns “1" if it finds and removes a matching record or “0" if it does not find a matching record.

void replaceAttributes(NAR source, NAR dest, String attrList)

Takes each attribute specified in attrList from NAR source and puts it in NAR destination. The method overrides any values previously set in dest.

void replaceAttributeSet(NAR source, NAR dest, String setName)

This method is the same as above, but the attribute list is in the set specified by setName.

void setKey(NAR in)

Creates a key for the default hash table, 0, from the in NAR. The attributes used to create the key are specified in the configuration.

void setKey(NAR in, Integer index)

This method is the same as the function above, except the key is created for the hash table specified by index, where index starts from 0.

void storeNAR(NAR in)

Stores the incoming NAR in the default hash table, 0, with the last key specified for this hash table. The method does not start a timer for this record even if the configuration indicates timers are enabled for this table.

void storeNAR(NAR in, Integer index)

This method has the same function as above, except the hash table is specified by index.

void storeNARWithTimer(NAR in)

This method is the same as the storeNAR(in) function, except it starts a timer for this record if the option is enabled in the configuration for the default table, 0. If the record already exists in the table and the timer is already set, the method overrides both items.

void storeNARWithTimer(NAR in, Integer index)

This method is the same as the above function, except it is for the hash table specified by index.

void sumAttributes(NAR source, NAR dest, String attrList)

This method sums the source and dest attributes specified by attrList. The results are stored in dest.

Integer sumAttributesNoOverflow(NAR source, NAR dest, String attrList)

This method is the same as the above function, except it does not sum the source and dest attributes if it will result in an overflow condition.

The method returns “1" if it performs the summation and “0" if it detects an overflow condition and does not perform the summation.
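The overflow guard can be sketched as an all-or-nothing addition: every sum is checked first, and the destination is left untouched if any addition would overflow. SafeSum and the long-array representation are illustrative stand-ins for the NAR attributes.

```java
// Sketch of the overflow-guarded summation, mirroring the "1"/"0"
// return convention described above.
public class SafeSum {
    public static int sumNoOverflow(long[] source, long[] dest) {
        long[] sums = new long[dest.length];
        for (int i = 0; i < dest.length; i++) {
            try {
                sums[i] = Math.addExact(source[i], dest[i]);
            } catch (ArithmeticException overflow) {
                return 0;  // no attribute is modified on overflow
            }
        }
        System.arraycopy(sums, 0, dest, 0, dest.length);
        return 1;
    }
}
```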

void sumValue(Long sourceValue, NAR dest, String attrList)

Adds the source value to each of the attributes in dest specified by attrList.

Integer sumValueNoOverflow(Long source, NAR dest, String attrList)

This method is the same as the above function, except it does not perform a summation if it detects the attributes would overflow.

The method returns “1" if it performs the summation and “0" if it detects an overflow condition and does not perform the summation.

Usage

This section will show some examples of how you can use the Java hooks.

Simple Session Aggregation Example

The following examples demonstrate a simple way to aggregate sessions for various record types. The examples show you how to aggregate partial records from the same device but do not demonstrate how to combine record types. The examples use Wireless attributes but do not include the record declarations for readability.

The following are the major attributes:

20001 - The session identifier. Unique for a specific GGSN and shared by the SGSNs.

20200 - The record type.

20232 - The GGSN IP address.

20233 - The SGSN IP address, for S-CDRs only.

Due to the different keys used to uniquely identify sessions on the GGSN and SGSN, separate tables are used in the example below.

Here is the sample NPL with comments:

// Declare the AP java hooks
import com.metasolv.nm.processor.MXAggregator.MXJavahookHandler;

// Configure the file hash tables we're going to need

Config {
    // Distribute records to threads based on session ID.
    ModulusAttribute "20001";

    // Don't arbitrarily flush all existing records on startup.
    FlushOnStartup "false";

    // Indicate that there will be configuration information
    // for two different tables.
    HashTables "2";

    // Set the attributes used for the key for table 0.
    HashTableKeys0 "20001 20200 20232";
    // Indicate that we want flushed records to be written
    // to the output for this table.
    HashTableFlush0 "write";

    // Add the SGSN IP to the key for the second table.
    HashTableKeys1 "20001 20200 20232 20233";
    // We also want flushed records for this table to be
    // written to the output.
    HashTableFlush1 "write";

    // Declare a set of attribute Ids to use later on.
    ReplaceSet "20001 20002 20005 20121 20200 20201 20202 20203 20204 20205 20206 20207 20210 20211 20212 20214 20216 20217 20218 20219 20220 20221 20222 20223 20224 20225 20226 20227 20228 20229 20232 20233 20234 20235 20300 20301 20302 20303 20304 20305 20306 20310";
}

Integer returnCode = 0;
Integer tableIndex = 0;

// The input and output record declarations would be here.
// The SMS CDR types are not aggregated, so just write them out.
if ((in.20200 == 21) || (in.20200 == 22)) {
    out = in;
    write(out);
}
else {
    // Get the index to the appropriate table. Use table 0 for G-CDRs,
    // and table 1 for M and S-CDRs, as they require a key with the
    // SGSN IP address also.
    if (in.20200 == 19) {
        tableIndex = 0;
    }
    else {
        tableIndex = 1;
    }

    // Now set the key for the table we will be using. This will be
    // in place for all further operations on this table.
    Java.setKey(in, tableIndex);

    // Try to find an already existing record in the table
    // which matches our key. The returnCode variable will
    // indicate if the lookup was successful.
    returnCode = Java.getNAR(out, tableIndex);

    if (returnCode == 1) {
        // Found one. We now need to do the aggregation between the
        // found record (out) and the input record.

        // First, try to sum the duration if the variable will not
        // overflow.
        returnCode = Java.sumAttributesNoOverflow(in, out, "20004");
        if (returnCode == 0) {
        // Overflow condition. Just flush out the old record and
            // start over with the new one.
            write(clone(out));
            out = in;
        }
        else {
            // The sum worked, so continue with all the rest.
            // These two items are lists which are aggregated by
            // appending the incoming onto the end of the stored
            // attribute.
            Java.appendLists(in, out, "20209 20213");

            // This is a convenient way to perform bulk assignment
            // of attributes from in to out. This could be done as
            // separate assignments (e.g. out.20001 = in.20001;), but
            // it would be less efficient than this call and would
            // also be less readable.
            // This function also makes use of the named set
            // "ReplaceSet", which identifies a set of attributes
            // declared in the configuration block at the top.
            Java.replaceAttributeSet(in, out, "ReplaceSet");
        }
    }
    else {
        // There is no matching record already in the table, so this
        // is the first. Rather than doing separate assignments for
        // all the needed attributes, this is just doing a bulk copy
        // of all of them.
        out = in;
    }

    // Now check to see if the record should be written to the output
    // or stored back in the table. We'll only ever get here for the
    // G-CDR, S-CDR and M-CDRs, so just check the cause for record
    // closing value to see if the session is closed.
    if ((out.20202 == 0) || (out.20202 == 4)) {
        write(out);
    }
    else {
        Java.storeNARWithTimer(out, tableIndex);
    }
}

Distribution Cartridges (DCs)

DC nodes should contain the following:

  • NPLFieldProcessor

  • OITransport

  • Knowledge of (or Factory for) the appropriate DCFieldContainer object to generate.

The NPLFieldProcessor maps the data from the NAR into the appropriate DCFieldContainer object based on the commands in the NPL file. The OITransport class then takes the data and transmits it via the appropriate medium outside the system. The DataProvider and DataReceiver relationships are shown in Table 2-3.

Table 2-3 DataProviders and DataReceivers for Distribution Cartridges

DataProvider         DataReceiver

NARFileManager       NPLFieldProcessor
NPLFieldProcessor    OITransport

Several DC nodes are provided as part of the Offline Mediation Controller system and are outlined below.

ASCII DC Node

A flat-file-based DC node that produces output in ASCII format. It can also transfer the completed files to a remote machine via FTP.

IPDR DC Node

Produces file-based output in IPDR format.

XML DC Node

Produces file-based output in XML.

JDBC DC Node

The JDBC DC node is a generic node that inserts data into a relational database. This DC node can be found in the Offline Mediation Controller Cartridge Kit market segment and is part of the Database Storage and Reporting solution. The DC node outputs DMS-MSC data (GCDR and GHOT records) into an Oracle 9i database, but can also be configured to work with other types of data and databases, provided the proper NPL rules files are written. The JDBC DC node can connect to any type of relational database without modification of the existing Java code: because it uses the Java JDBC interface to insert data, any database that supports JDBC works with this DC node.

The basic node chain that uses the JDBC DC node to insert data into a database contains a DMS-MSC CC node and a JDBC DC node.

This DC node obtains the necessary database information from the NPL file it is configured to use. The NPL file must have a configuration clause containing a configuration key of DBTables and an associated configuration value representing a list of one or more database table names. This comma-separated list of names corresponds to the database tables where the incoming NAR attributes are inserted. For each table in the list, there must also be a corresponding Expose clause, which associates attributes of the output records with the appropriate columns within the specified database table. The data type of each output attribute must be compatible with the data type of its database column; the DC node issues a processing exception if the data types are incompatible. Refer to the sample NPL code at the end of this section for more details.

The NPL file must also have a configuration clause containing two configuration keys, "JDBCDriver" and "JDBCUrl", each with an associated configuration value.

The configuration key "JDBCDriver" is the class name of the JDBC driver provided by the specific database. In the case of the Oracle database, the class name is "oracle.jdbc.driver.OracleDriver".

The configuration key "JDBCUrl" identifies the data source so the appropriate driver recognizes and establishes a connection with it. Different JDBC drivers require different JDBC URLs, so you must provide the appropriate JDBC URL information as the configuration value for the "JDBCUrl" clause in the NPL rules file. The values of the database host, the port on which the database server is listening, and the database SID appear in the node configuration window in the Offline Mediation Controller GUI. If the JDBC URL requires any of these three fields, represent them as "%DBHOST%", "%DBPORT%", and "%DBSID%" in the "JDBCUrl" configuration value.

For the Oracle database, the "JDBCUrl" configuration value is "jdbc:oracle:thin:@%DBHOST%:%DBPORT%:%DBSID%".

For example, if the database host is "MyHost", the port number is "1521", and the SID is "ORCL", the JDBC DC node uses "jdbc:oracle:thin:@MyHost:1521:ORCL" as the JDBC URL to connect to the database.
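The substitution itself is plain string replacement of the three placeholders. The sketch below shows the idea; JdbcUrlBuilder is an illustrative helper name, not part of the node's API.

```java
// Sketch of substituting the GUI-supplied values into the JDBCUrl template.
public class JdbcUrlBuilder {
    public static String buildJdbcUrl(String template, String host,
                                      String port, String sid) {
        return template.replace("%DBHOST%", host)
                       .replace("%DBPORT%", port)
                       .replace("%DBSID%", sid);
    }
}
```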

The configuration values "DBCatalog" and "DBSchemaPattern" are additional criteria used to validate the database table. You only need to specify these two values in the NPL rules file when two tables share the same table name (in different schema patterns). In Offline Mediation Controller, the "DBSchemaPattern" is "NMUSER1" and the "DBCatalog" is "SYS.ALL_TABLES".

The following is a sample NPL rules file for the JDBC DC node working with an Oracle database. There are two tables, "tableName1" and "tableName2". Each table has five columns: "column1", "column2", "column3", "column4", and "column5". Columns 1 to 4 are INTEGER types; column 5 is a VARCHAR type.

Config {
    DBTables        "tableName1,tableName2";
    JDBCDriver      "oracle.jdbc.driver.OracleDriver";
    JDBCUrl         "jdbc:oracle:thin:@%DBHOST%:%DBPORT%:%DBSID%";
    DBCatalog       "SYS.ALL_TABLES";
    DBSchemaPattern "NMUSER1";
}

InputRec {
    // input record fields
    Integer 1000;
    Integer 1001;
    Integer 1002;
    Integer 1003;
    String  1004;
} in;

OutputRec {
    // output record fields
    Integer attribute1;
    Integer attribute2;
    Integer attribute3;
    Integer attribute4;
    String  attribute5;
} out;

Expose for tableName1 {
    out.attribute1      "column1";
    out.attribute2      "column2";
    out.attribute3      "column3";
    out.attribute4      "column4";
    out.attribute5      "column5";
}

Expose for tableName2 {
    out.attribute1      "column1";
    out.attribute2      "column2";
    out.attribute3      "column3";
    out.attribute4      "column4";
    out.attribute5      "column5";
}

out.**** = in.****;
…

write(out);

Table 2-4 displays the configurable parameters for the JDBC DC node:

Table 2-4 Configurable Parameters for JDBC DC Node

NPL                Description

DBTables           Comma-separated list of table names
JDBCDriver         Class name of the JDBC driver
JDBCUrl            JDBC URL string
DBCatalog          Database catalog (optional)
DBSchemaPattern    Database schema pattern (optional)

Table 2-5 Node Config GUI Fields

Node Config GUI    Description

UserId             Database user ID
Passwd             Database user password
Batch Size         Number of rows in one batch insertion
DB Host            Host name or IP address of the database host
DB Port            Port on which the database server listens
DB SID             Database SID

The JDBC DC node uses batch insertion to insert a number of records into the database at one time. The batch size is configurable through the node configuration window in the Offline Mediation Controller GUI. However, a JDBC driver is not required to implement batch execution; for drivers that do not support it, the JDBC DC node inserts the records into the database one at a time. The DC node can call the DatabaseMetaData.supportsBatchUpdates method to find out whether the driver supports batch updates.
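The batch-versus-single decision can be sketched with the standard JDBC API. The DatabaseMetaData.supportsBatchUpdates, addBatch, executeBatch, and executeUpdate calls are standard JDBC; the surrounding structure (BatchInserter, the Object[] row representation) is illustrative only, not the node's implementation.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.Collections;
import java.util.List;

// Sketch of batch insertion with a fallback to single-row inserts.
public class BatchInserter {

    // Builds "INSERT INTO table (c1,c2,...) VALUES (?,?,...)".
    public static String buildInsertSql(String table, List<String> columns) {
        String cols = String.join(",", columns);
        String marks = String.join(",", Collections.nCopies(columns.size(), "?"));
        return "INSERT INTO " + table + " (" + cols + ") VALUES (" + marks + ")";
    }

    public static void insert(Connection conn, String table, List<String> columns,
                              List<Object[]> rows, int batchSize) throws SQLException {
        // Ask the driver whether it supports batch execution.
        boolean batching = conn.getMetaData().supportsBatchUpdates();
        try (PreparedStatement ps = conn.prepareStatement(buildInsertSql(table, columns))) {
            int pending = 0;
            for (Object[] row : rows) {
                for (int i = 0; i < row.length; i++) {
                    ps.setObject(i + 1, row[i]);
                }
                if (batching) {
                    ps.addBatch();
                    if (++pending == batchSize) { ps.executeBatch(); pending = 0; }
                } else {
                    ps.executeUpdate();  // driver lacks batch support: insert singly
                }
            }
            if (batching && pending > 0) { ps.executeBatch(); }
        }
    }
}
```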