3 BRM System Architecture

This chapter describes the Oracle Communications Billing and Revenue Management (BRM) system components.

Before reading this chapter, read "Introducing BRM".

About the Four-Tier Architecture

The BRM system uses a four-tier architecture consisting of the following tiers:

  • Application tier

  • Business process tier

  • Data management tier

  • Data tier

Figure 3-1 shows the BRM system architecture:

Figure 3-1 BRM System Architecture

Figure 3-2 shows how the BRM components process a customer login.

Figure 3-2 BRM Customer Login Process

Application Tier

The application tier consists of the following types of client applications:

  • Client applications, such as Customer Center and Pricing Center, that customer service representatives (CSRs) and other users use to register and manage customers and their accounts

  • Applications that capture data about customer service usage. For example, RADIUS Manager captures information from your terminal server about a customer's Internet session.

  • Service integration applications such as Global System for Mobile Communications (GSM) Manager and General Packet Radio Service (GPRS) Manager

  • Custom applications that you write to manage customers, support custom services, or integrate your own applications with BRM

The following examples show how applications connect to the BRM system.

Example: Customer Center

When a CSR logs in to the Customer Center application, a connection is made to a Connection Manager (CM). All customer management activities that the CSR performs with Customer Center are processed by the Facilities Modules (FMs) that are linked to the CM. Figure 3-3 illustrates the process flow of a CSR login to BRM.

Figure 3-3 CSR Connection Process

Example: RADIUS Manager

When a customer connects to the Internet, a RADIUS Manager front-end process accepts the dialup connection. A RADIUS Manager back-end process serves as a BRM client application and sends the login information to the CM. Figure 3-4 illustrates the RADIUS Manager example.

Figure 3-4 RADIUS Manager Connection Process

Note:

If client applications run on the same server as the CM or DM, they must still use Transmission Control Protocol/Internet Protocol (TCP/IP) to connect to the CM.
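
For example, a client application finds its CM through a cm_ptr entry in its pin.conf configuration file. The following sketch uses placeholder host and port values and assumes the cm_ptr entry syntax documented for BRM; even when the CM runs on the same machine, the entry addresses it over TCP/IP:

  # Client application pin.conf entry (illustrative values).
  # The CM is addressed by host name and port over TCP/IP, even if it
  # runs on the same machine as the client application.
  - nap cm_ptr ip localhost 11960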

Business Process Tier

The business process tier consists of the following components:

  • Connection Managers (CMs): CMs provide an interface between clients and the rest of the system. All client applications connect to the BRM system through a CM.

  • Facilities Modules (FMs): CMs include FMs that process the data captured by the client. For example, when a user logs in, FMs process the login name and password. See "About Facilities Modules (FMs)".

  • External Modules (EMs): EMs serve similar functions to FMs but must be started separately as a service or process. See "About External Modules (EMs)".

CMs run as daemons. When a client application requests a connection, the parent CM process spawns a child process to handle the connection. At that point, the application no longer communicates with the parent CM; all communication flows from the application to the child CM as depicted in Figure 3-5.

Figure 3-5 Client Connections to Child CM

About Facilities Modules (FMs)

When an application connects to a CM, the data sent by the application is processed first by the FMs that are included in the CM.

There are separate FMs to handle different types of tasks. For example, if a CSR changes a customer's password, the Customer FM validates that the new password has the correct number of characters.

FMs manage BRM activity and ensure that data is processed correctly. When you configure a CM, you specify the FMs that are linked to the CM. Typically, you use the default set of required FMs for each CM. Figure 3-6 depicts a CM configured with several FMs.

Figure 3-6 Facilities Modules for CM

Each FM is dynamically linked to a CM when the CM starts. You can link optional or custom FMs to any CM. For example, if you configure the terminal server to connect to a CM, you can link the RADIUS Manager FMs to that CM.

Because FMs are linked to CMs, the FMs can get configuration information from the CM configuration files. Therefore, one of the ways you can customize BRM policies is by editing the CM configuration file. For more information, see "Ways to Use and Customize BRM".
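
As a rough sketch, the CM pin.conf file lists one fm_module entry for each FM shared library that is linked to the CM at startup. The library names, function names, and field layout below are placeholders for illustration only; check the entries that your BRM installation provides:

  # CM pin.conf entries (illustrative). Each fm_module entry names an FM
  # shared library to link into the CM; the remaining fields (configuration
  # function, initialization function, tag) are placeholders in this sketch.
  - cm fm_module lib/fm_cust.so fm_cust_config - pin
  - cm fm_module lib/fm_bill.so fm_bill_config - pin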

You can customize policy FMs or add custom FMs to a CM. See BRM Developer's Guide.

About External Modules (EMs)

An External Module (EM) is similar to an FM; it is a set of opcodes that perform BRM functions. However, it is not linked to a CM in the same way that an FM is. Instead, it runs as a separate process that you must start and stop.

The Payload Generator is an EM that processes events that are exported to external systems by using the optional Enterprise Application Integration (EAI) framework.

Figure 3-7 shows where an EM fits in the system architecture:

Figure 3-7 External Module in BRM System Architecture

About Connection Manager Master Processes (CMMPs)

Instead of configuring an application to connect directly to a CM, you can configure it to connect to a Connection Manager Master Process (CMMP).

When an application connects to a CMMP, the CMMP selects a CM from a list you provide and gives the application the machine name and port of that CM. The application then uses that address to connect to the CM.
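
A minimal sketch of this setup, assuming placeholder host names and ports and the cm_ptr entry syntax used in pin.conf files: the client points at the CMMP, and the CMMP configuration lists the CMs it can hand out.

  # Client application pin.conf entry (illustrative): connect to the CMMP
  # instead of a specific CM.
  - nap cm_ptr ip cmmp_host 11959

  # CMMP pin.conf entries (illustrative): the CMs the CMMP chooses from.
  - cmmp cm_ptr ip cm_host1 11960
  - cmmp cm_ptr ip cm_host2 11960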

Figure 3-8 shows how an application connects to a CM using a CMMP:

Figure 3-8 Using a CMMP to Connect to a CM

For more information, see the information about improving performance in BRM System Administrator's Guide.

Data Management Tier

The data management tier consists of the following components:

  • Data Managers (DMs), which translate requests from CMs into a language that the database can understand. For the BRM database, the language is SQL. There are also DMs for credit card processors, Vertex, and so forth. See "About Data Managers (DMs)".

  • Oracle In-Memory Database (IMDB) Cache Data Manager (DM), which translates requests from CMs into a language that Oracle IMDB Cache and the BRM database can understand. See "About IMDB Cache DM".

About Data Managers (DMs)

Data Managers (DMs) translate BRM operations into a form that can be understood by the data access system.

BRM includes the following DMs:

  • The Oracle DM (dm_oracle) provides an interface to an Oracle BRM database.

  • The Paymentech DM (dm_fusa) provides an interface to the Paymentech credit-card payment processor.

  • The Vertex DM (dm_vertex) provides an interface to the Vertex Sales Tax Q Series and Communications Tax Q Series database.

Child processes are not spawned when a DM receives a connection request. Instead, child processes are spawned when a DM starts. You define the number of child processes in the DM configuration file.

When a DM receives a connection from a CM, the parent process or thread assigns a child process or thread to the connection. At that point, the CM no longer communicates with the parent DM; all communication flows from the CM to the child DM.

DMs run as a set of processes. Each process handles a single database transaction.

About Data Manager (DM) Back Ends

Each DM has a set of front-end processes that receive connection requests and send them to a shared-memory queue for processing by back-end processes. The back-end processes translate the transaction into the database language.

DM back ends are single-threaded. Therefore, a back end is dedicated to a single transaction for the duration of the transaction.

From the BRM database perspective, the back ends communicate with database back-end (shadow) processes. One database back-end process is started for each DM back end. See your database documentation for information on shadow processes.
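
The sizes of the front-end and back-end pools are set in the DM configuration file. The following sketch uses the dm_n_fe and dm_n_be entries described in BRM System Administrator's Guide, with placeholder values that you would tune for your load:

  # dm_oracle pin.conf entries (illustrative values).
  # dm_n_fe: number of front-end processes that accept CM connections.
  # dm_n_be: number of back-end processes that execute database transactions.
  - dm dm_n_fe 4
  - dm dm_n_be 16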

You can write a new DM back end to integrate BRM with any relational, object-oriented, or legacy database management system. You can also layer BRM on top of basic file managers. For more information, see the documentation about writing a custom Data Manager in BRM Developer's Guide.

About IMDB Cache DM

IMDB Cache DM provides an interface between the CM and Oracle IMDB Cache and between the CM and the BRM database. When IMDB Cache DM receives a request for data from the CM, IMDB Cache DM determines which database to route the request to based on where the data resides:

  • In IMDB Cache: IMDB Cache DM uses TimesTen Cache libraries to retrieve data from IMDB Cache.

    Figure 3-9 shows the request and response flow for data that resides in IMDB Cache. When IMDB Cache DM receives authorization and reauthorization requests, it searches Oracle IMDB Cache for the required information and sends the response to the CM.

    Figure 3-9 CM Interaction with Data in the Oracle IMDB Cache

  • In the BRM database: IMDB Cache DM passes the request directly to the BRM database.

    Figure 3-10 shows the request and response flow for objects that reside in the BRM database. IMDB Cache DM forwards database object requests to the BRM database and then forwards the response from the BRM database to the CM.

    Figure 3-10 CM Interaction with Data in the BRM Database Not in Cache

Note:

IMDB Cache DM is supported on 64-bit operating systems only.

Data Tier

The data tier consists of the BRM database and other data access systems, such as the Paymentech credit-card processing service.

About the BRM Database

The BRM database stores all of your business information and data, such as information about customer accounts, your price list, and the services you provide. All information stored in the BRM database is contained in objects.

An object is equivalent to a database record or a set of database records. Each type of object contains a set of related information. For example, an account object includes the customer's name and address.

Related database objects are linked to each other. For example, an account object is linked to the payment information object that stores a customer's credit card number.

To see examples of objects, use Event Browser in Customer Center or use Object Browser.

Note:

The term object does not refer to an object request broker (ORB) or to the object programming environment. Instead, object refers to the BRM object data model. For more information about the storable object model, see BRM Developer's Guide.

About the Residency Type

RESIDENCY_TYPE is an attribute of storable objects and defines where the object resides in the BRM system. The residency type values are predefined in the data dictionary in the BRM database.

Table 3-1 describes where an object resides based on its residency type value.

Table 3-1 Residency Type Descriptions

  • 0 (Database object): Objects reside in the BRM database.

  • 1 (In-memory object): Objects reside in Oracle IMDB Cache.

  • 2 (Shared object): Objects reside in Oracle IMDB Cache and the BRM database.

  • 3 (Transient object): Objects reside in the Oracle IMDB Cache data file.

  • 4 (Deferred database object): Objects reside in the BRM database and are passed through the IMDB Cache DM.

  • 5 (Static reference object): Objects reside in the BRM database but are cached in Oracle IMDB Cache when they flow back from the database.

  • 6 (Volatile object): Objects reside only in the Oracle IMDB Cache data file.

  • 7 (Dynamic reference object): Reference objects that reside in the BRM database and are updated more frequently in Oracle IMDB Cache.

  • 101 (Routing database object): /uniqueness objects.

  • 102 (General routing database object): All global objects, except /uniqueness objects, stored in the BRM database.

  • 303 (IMDB Cache resident expanded object): Event objects stored in Oracle IMDB Cache in expanded format.

  • 304 (BRM resident expanded object): Event objects stored in the BRM database in expanded format.


If you create a custom object, you must set its residency type to the correct value. For more information, see "Assigning Custom Objects a Residency Value" in BRM System Administrator's Guide.

About the Multischema Architecture

In multischema systems, the database layer of your BRM system consists of one primary schema and one or more secondary schemas in a single database. A primary DM is connected to the primary schema and secondary DMs are connected to the secondary schemas. Data is separated between the schemas as follows:

  • The primary database schema stores two types of global objects: objects that the secondary DMs can only read, such as configuration and pricing objects, and objects that the secondary DMs can both read and update, such as uniqueness objects. The primary schema also stores subscriber data.

  • The secondary database schemas store subscriber data, such as event and account objects.

The primary DM updates global read-only objects in the primary schema, and the secondary DMs read that data through views in their secondary schemas that reference tables in the primary schema.

The secondary DMs use schema qualifications to read and modify updatable global objects stored in the primary schema.

For information about how to set up a multischema system, see "Managing a Multischema System" in BRM System Administrator's Guide.

About Multidatabase Manager

Multidatabase Manager is an optional feature that enables you to have multiple BRM database schemas in a single installation. This option can support very large installations (more than a few million subscribers). It enables you to split the main database into multiple schemas.

About Oracle RAC for a High-Availability BRM System

For a high-availability system, Oracle recommends Oracle Real Application Clusters (Oracle RAC), which requires a reliable and highly available storage system. For more information on a high-availability system, see BRM System Administrator's Guide.

About Oracle IMDB Cache

Oracle In-Memory Database (IMDB) Cache is an in-memory database that caches performance-critical subsets of the BRM database for improved response time. IMDB Cache includes the following functionality:

  • Caches data from BRM database tables

  • Replicates cached data for high-availability systems

  • Stores transient data

BRM stores customer account data and information related to the services that you provide in the BRM database. It uses Oracle IMDB Cache to cache a subset of the BRM database tables that are frequently accessed or that are performance-critical and require fast transaction response, for example, the /account, /service, and /balance_group tables.

IMDB Cache provides fast access to the cached data, resulting in improved transaction response times and increased system throughput:

  • Creating, modifying, deleting, and searching for objects in memory eliminates the network connections for communicating with the BRM database server, thus improving transaction response times.

  • Caching database tables in IMDB Cache reduces the workload on the BRM database, thus improving the overall system throughput.

About Objects in an IMDB Cache-Enabled System

BRM storable objects in an IMDB Cache-enabled system are assigned one of the following categories:

Transient Objects

Transient objects contain ongoing session data necessary for processing authorization and reauthorization requests that require low latency and high throughput. These objects are transactional and are stored only in the IMDB Cache data file. They are created and updated during a session when the system is running and deleted at the end of the session after the data is committed to the BRM database. These objects can be accessed only through simple queries.

Reference Objects

Reference objects contain data such as subscriber information and resource balances that are read often but not updated often. These objects require low latency for read-only access and are updated only at the end of the session. Reference objects are created, updated, and stored in Oracle IMDB Cache and replicated to the BRM database.

Note:

Reference objects initially stored in the BRM database are preloaded into Oracle IMDB Cache.

Examples of reference objects include:

  • Account objects (/account)

  • Service objects (/service)

  • Bill objects (/bill)

  • Item objects (/item)

  • Balance group objects (/balance_group)

Only BRM reference objects with residency type values 5 and 7 are valid for caching. See "About the Object Residency Type".

Database Objects

Database objects are of two types:

  • Database-only objects are created, updated, and stored only in the BRM database and are not cached in Oracle IMDB Cache. Examples include configuration objects, device objects, and pricing objects, such as products, plans, and deals.

  • Hybrid database objects are created and updated in Oracle IMDB Cache and propagated to the BRM database. These objects are deleted from Oracle IMDB Cache using the least-recently-used (LRU) aging policy. Search and read operations on these objects, however, are performed in the BRM database. Event objects are an example of hybrid database objects.

About Active Session Objects

BRM stores information about subscriber calls, from the start of the call to its successful completion, in session objects in the BRM database. In an IMDB Cache-enabled system, information about currently active calls is stored in the active session object (/active_session) in the IMDB Cache data file. Active session objects contain information about the call from the time a subscriber attempts the call until the call ends, whether successfully or not.

The Active Session Manager (ASM) FM processes the active session objects and manages active call state information for the duration of a call. After a call is completed, a session object is created in the BRM database from the active session object. The active session object is then optionally deleted from IMDB Cache.

About the Object Residency Type

Storable objects in an IMDB Cache-enabled system reside in different databases. For instance, reference objects reside in Oracle IMDB Cache, transient objects reside in the IMDB Cache data file, and database objects reside in the BRM database. The storable object attribute, RESIDENCY_TYPE, defines where the object resides in the BRM system. The residency type values are predefined in the data dictionary in the BRM database.

Table 3-2 describes where an object resides based on its residency type value.

Table 3-2 Residency Type Descriptions

  • 0 (Database objects): Objects reside in the BRM database.

  • 1 (In-memory objects): Objects reside in Oracle IMDB Cache.

  • 2 (Shared objects): Objects reside in Oracle IMDB Cache and the BRM database.

  • 3 (Transient objects): Objects reside in the IMDB Cache data file.

  • 4 (Deferred database objects): Objects reside in the BRM database and are passed through the IMDB Cache DM.

  • 5 (Static reference objects): Objects reside in the BRM database but are cached in Oracle IMDB Cache when they flow back from the database.

  • 6 (Volatile objects): Objects reside only in the IMDB Cache data file.

  • 7 (Dynamic reference objects): Reference objects that reside in the BRM database and are updated more frequently in Oracle IMDB Cache.

  • 101 (Routing database object): /uniqueness objects.

  • 102 (General routing database object): All global objects, except /uniqueness objects, stored in the BRM database.

  • 303 (IMDB Cache resident expanded object): Event objects stored in Oracle IMDB Cache in expanded form.

  • 304 (BRM resident expanded object): Event objects stored in the BRM database in expanded form.


IMDB Cache DM uses the residency type values to determine which database to send request operations to. IMDB Cache DM maintains dual connections: if the data resides in IMDB Cache, it uses the IMDB Cache connection to retrieve the data; if the data resides in the BRM database, it uses the Oracle database connection.

The object residency type value is also used by the pin_tt_schema_gen utility to determine whether the object must be cached. Only reference objects with residency type values of 1, 5, or 7 are valid for caching in memory.

If you create any custom objects, you must set the residency type of the custom object to the correct value. For more information, see "Assigning Custom Objects a Residency Value" in BRM System Administrator's Guide.

About the Pipeline Manager System Architecture

Pipeline Manager is used for rating and discounting events in batch and in real time.

The Pipeline Manager system architecture consists of:

  • The pipeline framework that controls the Pipeline Manager system functions.

  • The pipelines that the framework runs, which perform rating and discounting.

  • The data pool that provides data in memory, used for rating and discounting.

  • The Pipeline Manager database that stores data used for rating and discounting.

Figure 3-11 shows how a billable event is rated in batch by Pipeline Manager and recorded in the BRM database. In this case:

  1. Pipeline Manager rates event data from CDR files.

  2. Rated Event (RE) Loader loads rated events into the BRM database.

  3. Account balances are updated.

Figure 3-11 Billable Event Rating by Pipeline Manager and Storage in BRM Database

Figure 3-12 shows how real-time discounting works.

Figure 3-12 Real-Time Discounting

In this case:

  1. BRM sends an event to the NET_EM module for real-time discounting.

  2. The NET_EM module sends the event to the pipeline.

  3. Pipeline Manager returns the discounted amount.

  4. Account balances are updated in the BRM database.

About the Pipeline System Components

When you configure an instance of the Pipeline Manager, you configure a set of system components and one or more pipelines. The system components are:

  • The Controller

  • The EDR Factory

  • The Transaction ID Controller

  • The Sequencer

  • The Event Handler

About the Controller

The Controller manages and monitors the entire Pipeline Manager instance. The Controller performs these functions:

  • Starts and stops a Pipeline Manager instance.

  • Initiates and coordinates different threads.

  • Checks for new semaphore file entries.

  • Generates a log message table that is used by the LOG module to create the process log file, the pipeline log files, and the stream log file.

You configure the Controller by using the registry file. For information, see the Pipeline Manager documentation in BRM System Administrator's Guide.

About the EDR Factory

The EDR Factory is a mandatory pipeline component that generates and allocates memory to EDR containers in a single pipeline.

When a transaction starts, the EDR Factory:

  1. Allocates memory for each container.

  2. Generates an EDR container for each piece of the input stream, including one for the header, one for each EDR, and one for the trailer, by using the container description file.

  3. Empties the container and releases the memory after the pipeline writes the information to the output file. The EDR Factory can then reuse the memory for new containers.

You configure the EDR Factory by using the EDRFactory section of the registry file.

About the Transaction ID Controller

The Transaction ID Controller generates unique IDs for all open transactions in your pipelines. An instance of Pipeline Manager contains only one Transaction ID Controller.

The Transaction ID Controller performs these functions:

  • Stores blocks of transaction IDs in cache and issues IDs to Transaction Managers (TAMs) directly from cache.

  • Uses the transaction state file or table to track ID numbers.

  • Assigns ID numbers to transactions.

You configure the Transaction ID Controller by using the TransactionIDController section of the registry file. For information, see the Pipeline Manager documentation in BRM System Administrator's Guide.

About the Sequencer

The BRM Sequencer is an optional Pipeline Manager component that performs one of these functions:

  • Sequence checking, which ensures that a CDR file is not processed more than once by keeping track of each CDR file's unique sequence number. A sequence check also logs gaps in sequence numbers.

  • Sequence generation, which generates sequence numbers for output files. This functionality is used when CDR input files do not have sequence numbers and when pipelines split CDR input files into multiple output files.

    Note:

    Sequence generation is not required when there is a one-to-one correspondence between input and output files. In this case, sequence numbers can be passed through to the output file.

Each pipeline can be configured to use one or more Sequencers. You configure your Sequencers by using the SequencerPool registry entries, and you assign Sequencers to pipelines by using the Output registry entries.

For more information about the Sequencer, see the Pipeline Manager documentation in BRM System Administrator's Guide.

About the Event Handler

The Event Handler is an optional pipeline framework component that starts external programs when triggered by internal events. For example, you can configure the Event Handler to launch a script that moves event data record (EDR) output files to a specific directory whenever the output module finishes processing them.

An instance of the Pipeline Manager uses only one Event Handler, which monitors the events for all pipelines in your system. Each registered module in your system automatically sends events to the Event Handler. You define which of these events trigger external programs by using the ifw.EventHandler section of the registry file.

When the Event Handler receives an event from a registered module, it:

  1. Checks to see if the event is mapped to an action.

  2. Performs one of the following:

    • Starts the associated program or script.

    • If no action is mapped, ignores the event.

  3. Queues any events it receives while the external program is running.

  4. Waits for the external program to terminate.

About the Data Pool

The data pool is a set of modules that store data used by all the pipelines in a single Pipeline Manager instance. Data modules are named with the prefix "DAT", for example, DAT_AccountBatch.

Data modules get their data from the Pipeline Manager database and from the BRM database at startup. As data changes in the BRM system, the data is updated in the data pool.

For more information, see the Pipeline Manager documentation in BRM System Administrator's Guide.

About Pipelines

A single Pipeline Manager instance runs one or more pipelines. Each pipeline includes the following components:

  • The Pipeline Controller, which you use to manage the pipeline. See "About the Pipeline Controller".

  • The input module, which reads data from the input stream, converts CDR files into the internal EDR input format, and performs error checking on the input stream.

  • Function modules, which perform all rating and EDR management tasks for a pipeline by processing the data in the EDRs. Each function module performs a specific task, for example, checking for duplicate EDRs or calculating zones.

    Function modules do not store any data; instead they get data from data modules. For example, to rate an event, the FCT_MainRating module gets pricing data from the DAT_PriceModel module.

    Function modules have two dependencies:

    • Some modules require previous processing by other modules.

    • Some modules get data from data modules.

  • The output modules, which convert internal EDRs to the output format and write the data to the output streams.

  • The log module, which you use to generate and manage your process, pipeline, and stream log files.

About Using Multiple Pipelines

You create multiple pipelines to do the following:

  • Maximize performance and balance system loads. For example, you can create multiple pipelines to handle multiple input streams.

  • Manage different types of processing. For example, you can create separate pipelines for zoning, rating, and preprocessing. In this case, you can use the output of one pipeline as the input for another pipeline, or pipelines can run in parallel. To improve performance, aggregation is typically performed in a separate pipeline.

When you create multiple pipelines, they run in parallel in a single Pipeline Manager instance. You configure all pipelines in the same registry file. Each pipeline has its own input and output configuration, EDR Factory, Transaction Manager, and set of function modules. However, all pipelines share the same set of data modules.
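
The following registry skeleton sketches that layout with placeholder pipeline and module names; the exact sections and entries depend on your Pipeline Manager release and the modules you use:

  ifw
  {
      # Data modules shared by all pipelines in this instance.
      DataPool
      {
          CustomerData
          {
              ModuleName = DAT_AccountBatch
              Module
              {
                  # Module-specific entries omitted in this sketch.
              }
          }
      }
      # Each pipeline has its own input, function, and output configuration.
      Pipelines
      {
          PreprocessingPipeline
          {
              Active = True
              Input { }
              Functions { }
              Output { }
          }
          RatingPipeline
          {
              Active = True
              Input { }
              Functions { }
              Output { }
          }
      }
  }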

You can also use a pipeline to route EDRs to different Pipeline Manager instances. For example, when you use multiple database schemas, you use the FCT_AccountRouter module to send EDRs to separate instances of Pipeline Manager.

About the Pipeline Controller

The Pipeline Controller manages all processes for one pipeline.

The Pipeline Controller performs the following functions:

  • Starts and stops the pipeline.

  • Initiates and coordinates the pipeline's threads. See "About Thread Handling".

  • Defines the valid country codes and international phone prefixes for the pipeline. The pipeline's function modules retrieve this information during processing.

  • Manages pipeline input and output.

You configure the Pipeline Controller by using the Pipelines section of the registry file.

About Thread Handling

You can configure each pipeline to run with multithreaded processing or single-threaded processing. By default, each pipeline is configured for multithreaded processing.

You select single-threaded or multithreaded mode to optimize performance. For more information, see the documentation about tuning Pipeline Manager performance in BRM System Administrator's Guide.

About the Pipeline Manager Database

The Pipeline Manager database stores business configuration data, such as price models and rate plans. Pipeline Manager accesses this information when you first start Pipeline Manager or when you force a database reconnection. Pipeline Manager then stores a copy of your pricing and rating data in your data modules.

Pipeline Manager modules connect to the Pipeline Manager database through the Database Connect module (DBC).

About Configuring Pipeline Manager

To configure Pipeline Manager, you use the following files:

  • Registry files, which you use to configure a Pipeline Manager instance at system startup.

  • Semaphore files, which you use to configure and control pipelines during run time.
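
    For example, a semaphore file entry addresses a registry entry by its dotted path and assigns it a new value at run time. The pipeline name below is a placeholder:

      # Semaphore file entry (illustrative): deactivate one pipeline at run time.
      ifw.Pipelines.RatingPipeline.Active = False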

For more information, see the documentation about configuring Pipeline Manager in BRM System Administrator's Guide.

You can also use the pin_ctl utility to start and stop Pipeline Manager.

Configuring the Four-Tier Architecture

All BRM processes can run on the same computer, or they can be distributed among different computers. Distributed processing provides a great deal of flexibility in configuring your system, especially if you have multiple machines running CMs and DMs. For example, you can do the following:

  • Run redundant system processes (for example, Customer Center, billing applications, CMs, and DMs).

  • Connect clients to specific CMs, or use a CMMP to route to any available CM.

For more information, see BRM Installation Guide.

Communication Between System Components

Communication between applications and CMs, and between CMs and DMs, uses TCP/IP. You assign each component an IP port and host name. TCP/IP facilitates the use of firewalls, proxies, and filters.
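
For example, the CM's pin.conf file points to the DM for BRM database 0.0.0.1 by host name and port; the values below are placeholders:

  # CM pin.conf entry (illustrative): the DM that serves BRM database 0.0.0.1.
  # Host name and port are placeholders for your own configuration.
  - cm dm_pointer 0.0.0.1 ip dm_host 12950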

Any kind of network connection that supports TCP/IP supports BRM (for example, local area network, virtual private network, and PPP).

Communication between DMs and data access systems uses the protocol required by the data access system. For example:

  • DM-to-Oracle communication is carried out using Oracle SQL*Net, a product that lets distributed computers access a central Oracle database.

  • DM-to-Paymentech communication uses the Paymentech protocol.

For information about the configuration file entries that enable communication between BRM components, see the documentation about configuration files in BRM System Administrator's Guide.

Four-Tier Architecture and Failure Recovery

The four-tier architecture provides high reliability in the event of system component failures:

  • If the BRM database is offline, any interrupted transactions are rolled back when the database is restarted. The DM automatically attempts to connect to the database until a connection is made.

  • If a DM is offline, any interrupted transactions are timed out and rolled back by the database. The CM automatically attempts to connect to the DM until a connection is made. You can provide each CM with a list of DMs. If a DM is unavailable, the CM can connect to a different one.

    If a DM fails while there are transactions in its queue, the transactions must be resubmitted. The contents of the queue are lost when the DM fails.

  • If a CM is offline, the database rolls back any interrupted transactions. You can use a CMMP to provide multiple CMs for clients to connect to. See "About Connection Manager Master Processes (CMMPs)".

  • If a client fails, any interrupted transaction is timed out and rolled back by the database.

In all cases, broken transactions are rolled back by the database, so no partial transactions are recorded. Errors are reported to the client application, which, depending on its capabilities, can display the error, log it, or retry the transaction.