68 Pipeline Manager System Architecture

This chapter describes the Oracle Billing and Revenue Management (BRM) Pipeline Manager system architecture.

About the Pipeline Manager System Architecture

Pipeline Manager rates and discounts events in batch and in real time.

The Pipeline Manager system architecture consists of:

  • The pipeline framework that controls the Pipeline Manager system functions.

  • The pipelines that the framework runs, which perform rating and discounting.

  • The data pool that provides data in memory, used for rating and discounting.

  • The Pipeline Manager database that stores data used for rating and discounting.

Figure 68-1 shows how a billable event is rated in batch by Pipeline Manager and recorded in the BRM database. In this case:

  1. Pipeline Manager rates event data from call detail record (CDR) files.

  2. Rated Event (RE) Loader loads rated events into the BRM database.

  3. Account balances are updated.

Figure 68-1 Billable Event Rating by Pipeline Manager and Storage in BRM Database

Figure 68-2 shows how real-time discounting works.

Figure 68-2 Real-Time Discounting

In this case:

  1. BRM sends an event to the NET_EM module for real-time discounting.

  2. The NET_EM module sends the event to the pipeline.

  3. Pipeline Manager returns the discounted amount.

  4. Account balances are updated in the BRM database.

About the Pipeline System Components

When you configure an instance of Pipeline Manager, you configure a set of system components and one or more pipelines. The system components, each described in the sections that follow, are:

  • The Controller

  • The EDR Factory

  • The Transaction ID Controller

  • The Sequencer

  • The Event Handler

About the Controller

The Controller manages and monitors the entire Pipeline Manager instance. The Controller performs these functions:

  • Starts and stops a Pipeline Manager instance.

  • Initiates and coordinates different threads.

  • Checks for new semaphore file entries.

  • Generates a log message table that is used by the LOG module to create the process log file, the pipeline log files, and the stream log file.

You configure the Controller by using the registry file.
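
For orientation, a registry file is a tree of nested sections with name = value entries. The following sketch shows the general shape of the top-level ifw section; the entries are representative, not a complete or verified configuration.

    ifw
    {
        ProcessLog          # LOG module that writes the process log
        {
            ModuleName = LOG
            ...
        }
        DataPool            # shared data modules (DAT modules)
        {
            ...
        }
        Pipelines           # one subsection per pipeline
        {
            ...
        }
    }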

About the EDR Factory

The EDR Factory is a mandatory pipeline component that generates and allocates memory to EDR containers in a single pipeline.

When a transaction starts, the EDR Factory:

  1. Allocates memory for each container.

  2. Generates an EDR container for each piece of the input stream, including one for the header, one for each EDR, and one for the trailer, by using the container description file.

  3. Empties each container and releases its cache after the pipeline writes the information to the output file. The EDR Factory can then reuse the memory for new containers.

You configure the EDR Factory by using the EDRFactory section of the registry file.
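
As a minimal sketch, the EDRFactory section mainly points to the container description file; the path below is a placeholder.

    EDRFactory
    {
        Description = ./formatDesc/Portal/containerDesc.dsc
    }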

About the Transaction ID Controller

The Transaction ID Controller generates unique IDs for all open transactions in your pipelines. An instance of Pipeline Manager contains only one Transaction ID Controller.

The Transaction ID Controller performs these functions:

  • Stores blocks of transaction IDs in cache. The Transaction ID Controller issues IDs to pipeline Transaction Managers (TAMs) directly from cache.

  • Uses the transaction state file or table to track ID numbers.

  • Assigns ID numbers to transactions.

You configure the Transaction ID Controller by using the TransactionIDController section of the registry file.
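
A minimal sketch of the TransactionIDController section, assuming the transaction state is kept in a database table; the entry names and values are illustrative.

    TransactionIDController
    {
        Source = Database                      # or File, to use a state file
        DataConnection = ifw.DataPool.Login    # connection used for the state table
        Increment = 10000                      # size of each cached block of IDs
    }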

About the Sequencer

The BRM Sequencer is an optional Pipeline Manager component that performs one of these functions:

  • Sequence checking, which ensures that a CDR file is not processed more than once by keeping track of each CDR file's unique sequence number. A sequence check also logs gaps in sequence numbers.

  • Sequence generation, which generates sequence numbers for output files. This functionality is used when CDR input files do not have sequence numbers and when pipelines split CDR input files into multiple output files.

    Note:

    Sequence generation is not required when there is a one-to-one correspondence between input and output files. In this case, sequence numbers can be passed through to the output file.

Each pipeline can be configured to use one or more Sequencers. You configure your Sequencers by using the SequencerPool registry entries, and you assign Sequencers to pipelines by using the Output registry entries.
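
As an illustrative sketch, a Sequencer defined in the SequencerPool section is referenced by name from a pipeline's Output section; the names and entries below are representative, not verified.

    SequencerPool
    {
        SEQ_CHECK_CDR
        {
            Source = Database
            Controller
            {
                SequencerType = Check    # or Generation
                ...
            }
        }
    }

    # Referenced from the pipeline's Output section:
    Output
    {
        Sequencer = SEQ_CHECK_CDR
        ...
    }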

About the Event Handler

The Event Handler is an optional pipeline framework component that starts external programs when triggered by internal events. For example, you can configure the Event Handler to launch a script that moves event data record (EDR) output files to a specific directory whenever the output module finishes processing them.

An instance of Pipeline Manager uses only one Event Handler, which monitors the events for all pipelines in your system. Each registered module in your system automatically sends events to the Event Handler. You define which of these events trigger external programs by using the ifw.EventHandler section of the registry file.
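
For example, the following sketch of the ifw.EventHandler section maps an output-module event to a shell script. The module path, event name, and script path are placeholders, not verified names.

    EventHandler
    {
        Events
        {
            ifw.Pipelines.ALL_RATE.Output.OutputCollection.CDROutput
            {
                EVT_OUTPUT_FILE_READY = ./scripts/move_output_files.sh
            }
        }
    }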

When the Event Handler receives an event from a registered module, it:

  1. Checks to see if the event is mapped to an action.

  2. If the event is mapped to an action, starts the associated program or script; otherwise, ignores the event.

  3. Queues any events it receives while the external program is running.

  4. Waits for the external program to terminate.

About the Data Pool

The data pool is a set of modules that store data used by all the pipelines in a single Pipeline Manager instance. Data modules are named with the prefix "DAT", for example, DAT_AccountBatch.

Data modules get their data from the Pipeline Manager database and from the BRM database at startup. As data changes in the BRM system, the data is updated in the data pool.
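
The following sketch shows one data-module entry in the ifw.DataPool section; the section name CustomerData is arbitrary, and the connection entries are representative of how a module is pointed at the Pipeline Manager and BRM databases.

    DataPool
    {
        CustomerData
        {
            ModuleName = DAT_AccountBatch
            Module
            {
                IntegrateConnection = ifw.DataPool.Login            # Pipeline Manager database
                InfranetConnection  = ifw.DataPool.LoginInfranet    # BRM database
                ...
            }
        }
    }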

About Pipelines

A single Pipeline Manager instance runs one or more pipelines. Each pipeline includes the following components (a registry sketch follows this list):

  • The Pipeline Controller, which you use to manage the pipeline.

  • The input module, which reads data from the input stream, converts CDR files into the internal EDR input format, and performs error checking on the input stream.

  • Function modules, which perform all rating and EDR management tasks for a pipeline. Function modules process the data in the EDRs; each module performs a specific task, for example, checking for duplicate EDRs or calculating zones.

    Function modules do not store any data; instead they get data from data modules. For example, to rate an event, the FCT_MainRating module gets pricing data from the DAT_PriceModel module.

    Function modules have two kinds of dependencies:

    • Some modules require previous processing by other modules.

    • Some modules get data from data modules.

  • The output modules, which convert internal EDRs to the output format and write the data to the output streams.

  • The log module, which you use to generate and manage your process, pipeline, and stream log files.
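
The registry sketch below shows roughly how these components appear in a pipeline section; the pipeline, section, and module names are illustrative.

    Pipelines
    {
        ALL_RATE
        {
            Active = True
            Input
            {
                InputModule
                {
                    ModuleName = INP_GenericStream
                    ...
                }
            }
            Functions
            {
                Processing    # arbitrary function-pool name
                {
                    FunctionPool
                    {
                        MainRating
                        {
                            ModuleName = FCT_MainRating
                            ...
                        }
                    }
                }
            }
            Output
            {
                ...           # output modules and streams
            }
        }
    }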

About Using Multiple Pipelines

You create multiple pipelines to do the following:

  • Maximize performance and balance system loads. For example, you can create multiple pipelines to handle multiple input streams.

  • Manage different types of processing. For example, you can create separate pipelines for zoning, rating, and preprocessing. In this case, you can use the output of one pipeline as the input for another pipeline, or pipelines can run in parallel. To improve performance, aggregation is typically performed in a separate pipeline.

When you create multiple pipelines, they run in parallel in a single Pipeline Manager instance. You configure all pipelines in the same registry file. Each pipeline has its own input and output configuration, EDR Factory, Transaction Manager, and set of function modules. However, all pipelines share the same set of data modules.

You can also use a pipeline to route EDRs to different Pipeline Manager instances. For example, when you use multiple database schemas, you use the FCT_AccountRouter module to send EDRs to separate instances of Pipeline Manager.

About the Pipeline Controller

The Pipeline Controller manages all processes for one pipeline.

The Pipeline Controller performs the following functions:

  • Starts and stops the pipeline.

  • Initiates and coordinates the pipeline's threads.

  • Defines the valid country codes and international phone prefixes for the pipeline. The pipeline's function modules retrieve this information during processing.

  • Manages pipeline input and output.

You configure the Pipeline Controller by using the Pipelines section of the registry file.

About Thread Handling

You can configure each pipeline to run with multithreaded processing or single-threaded processing. By default, each pipeline is configured for multithreaded processing.

You select single-threaded or multithreaded mode to optimize performance.
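
Threading is configured per pipeline in the registry. A minimal sketch, assuming the MultiThreaded entry and an illustrative pipeline name:

    Pipelines
    {
        ALL_RATE
        {
            MultiThreaded = True    # set to False for single-threaded processing
            ...
        }
    }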

About the Pipeline Manager Database

The Pipeline Manager database stores business configuration data, such as pricing and charges. Pipeline Manager accesses this information when you first start Pipeline Manager or when you force a database reconnection. Pipeline Manager then stores a copy of your pricing and rating data in your data modules.

Pipeline Manager modules connect to the Pipeline Manager database through the Database Connect (DBC) module.
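
A sketch of a DBC entry in the data pool follows; the values are placeholders, and the access library entry varies with your database client.

    DataPool
    {
        Login
        {
            ModuleName = DBC
            Module
            {
                DatabaseName = IFWDB
                UserName     = INTEGRATE
                PassWord     = password
                AccessLib    = oci10g72    # database access library (version-specific)
            }
        }
    }

Other modules then reference this connection by its registry path, as in the earlier data pool sketch.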

About Configuring Pipeline Manager

To configure Pipeline Manager, you use the following files:

  • Registry files, which you use to configure a Pipeline Manager instance at system startup.

  • Semaphore files, which you use to configure and control pipelines during run time.

You can also use the pin_ctl utility to start and stop Pipeline Manager.
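
For example, a semaphore file entry gives the full dotted registry path of the entry it changes. The following entries, which deactivate a pipeline and reload a data module, are illustrative sketches:

    ifw.Pipelines.ALL_RATE.Active = False
    ifw.DataPool.CustomerData.Module.Reload {}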