20 Using Oracle IMDB Cache Manager

This chapter provides information about using Oracle Communications Billing and Revenue Management (BRM) In-Memory Database (IMDB) Cache Manager.

About Oracle IMDB Cache Manager

IMDB Cache Manager is an optional BRM component that enables you to use both:

  • Oracle IMDB Cache: An in-memory database for caching only performance-critical data from the BRM database.

  • BRM database: An external relational database management system (RDBMS) for storing all BRM data.

The IMDB Cache Manager package includes only the IMDB Cache Data Manager (DM). The DM provides the interface between the CM and Oracle IMDB Cache and between the CM and the BRM database. The DM determines whether to route CM requests to Oracle IMDB Cache or to the BRM database based on where the data resides (called the residency type). For more information, see "About the Residency Type and Oracle IMDB Data Manager".

About Caching BRM Objects in Memory

You can cache a subset of your BRM database tables in an in-memory database by using Oracle IMDB Cache. Oracle IMDB Cache includes the following functionality:

  • Caches data from BRM database tables

  • Replicates cached data for high-availability systems

  • Stores transient data

For more information about Oracle IMDB Cache, see "About Oracle IMDB Cache" in BRM Concepts.

About BRM Cache Groups

Oracle IMDB Cache stores BRM database tables in cache groups. Each cache group can store a single BRM table or a group of related tables (root table and one or more child tables). Oracle IMDB Cache synchronizes the data between the cache groups and the BRM database.

BRM cache groups store BRM data that requires low latency and high throughput for fast access. For example:

  • Subscriber data to process authentication, authorization, and accounting (AAA) requests

  • Subscriber data accessed by customer service representatives (CSRs)

  • Account, bill unit, and item data needed to perform billing

By default, the IMDB Cache Manager installer creates the following cache groups in your Oracle IMDB Cache data stores:

  • A subscriber cache group. This cache group is defined as a global asynchronous writethrough (AWT) cache group, which makes the data available to all members in a cache grid. If the requested data is not in the subscriber cache group, Oracle IMDB Cache returns an error.

  • Event cache groups. These cache groups are defined as Dynamic AWT cache groups, in which updates in the cache group are asynchronously propagated to the BRM database. By default, data in a Dynamic AWT cache group is deleted using the least-recently-used (LRU) aging policy. If the requested data is not in an event cache group, Oracle IMDB Cache forwards the request to the BRM database, gets the response, caches the data, and sends the response to the client.

You can view the default BRM cache group definitions in the BRM_home/bin/pin_tt_schema_gen.values configuration file.

See Oracle In-Memory Database Cache User's Guide for more information about cache group types and aging policy.

Guidelines for Setting Cache Group Size

When you create an Oracle IMDB Cache data store, you specify the maximum size of the data store's shared-memory segment. Data accumulates in the shared-memory segment until it reaches that maximum; after that, Oracle IMDB Cache reports an out-of-memory error whenever you attempt to load data into, or create accounts in, the shared-memory segment.

By default, the maximum size of the data store's shared-memory segment is 32 MB. You can change the maximum size at the following times by using the PermSize cache attribute:

  • When you create the data store.

  • When you connect to the data store the first time. In this case, you can only increase the maximum size, not decrease it.

For more information about the PermSize cache attribute, see the discussion on specifying the size of a data store in Oracle TimesTen In-Memory Database Operations Guide.

About Loading BRM Objects into Oracle IMDB Cache

You load BRM objects into Oracle IMDB Cache by using the following:

  • The pin_tt_schema_gen utility, which uses the existing BRM database schema to generate your schema and load SQL scripts.

  • The schema SQL script, which creates the BRM cache group schema in Oracle IMDB Cache.

  • The load SQL script, which loads BRM data into the cache group schema.

For more information, see "Generating the BRM Cache Group Schema".

About Loading Migrated Accounts into the Cache Groups

If you migrate accounts into the BRM database after your BRM system is up and running, you can load the migrated data into the BRM cache groups by using the pin_tt_schema_gen utility. To perform the load without downtime and without blocking operations on the objects already in the cache, load only the objects that are not already cached in the Oracle IMDB Cache data store.

To reduce the impact on your system, Oracle recommends that you perform the load in batches.

About Managing Data in Oracle IMDB Cache

Database tables cached in Oracle IMDB Cache are stored in shared memory, which is a fixed size. You must manage the data in the cache to avoid running out of shared-memory space.

Managing Fast-Growing Tables

To manage fast-growing tables, such as /event tables, in BRM cache groups, use the Oracle IMDB Cache Aging feature.

IMDB Cache DM uses usage-based aging to remove least-recently-used (LRU) data, based on a range of threshold values that specifies when LRU aging is activated and deactivated.

The aging policy for BRM cache groups is defined in the cache group schema definitions in the pin_tt_schema_gen.values file. Aging is enabled by default for Dynamic AWT cache groups. See "Generating the BRM Cache Group Schema".
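As a rough illustration of the usage-based aging behavior, the following Python sketch models an LRU cache with activation and deactivation thresholds. The watermark values and class names here are illustrative only; the actual thresholds are defined in the pin_tt_schema_gen.values file.

```python
from collections import OrderedDict

class LRUAger:
    """Toy model of usage-based LRU aging with activation/deactivation thresholds."""

    def __init__(self, high_watermark, low_watermark):
        # Aging activates when usage exceeds high_watermark and evicts
        # least-recently-used entries until usage drops to low_watermark.
        self.high = high_watermark
        self.low = low_watermark
        self.store = OrderedDict()

    def access(self, key, value):
        # Touch or insert a row; the most recently used entry moves to the end.
        if key in self.store:
            self.store.move_to_end(key)
        self.store[key] = value
        self._age()

    def _age(self):
        if len(self.store) <= self.high:
            return  # below the activation threshold: aging stays inactive
        while len(self.store) > self.low:
            self.store.popitem(last=False)  # evict the least recently used entry
```

In this model, inserting a fourth row into an ager built with `LRUAger(3, 2)` triggers aging, which evicts the two oldest rows.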

About Purging Expired Reservation and Active-Session Objects

When account balances in the BRM database are updated at the end of a session, reservation objects and active-session objects are removed from Oracle IMDB Cache. The reservations are released and session objects in the database are rated. However, when a session is terminated abnormally (for example, because of a timeout error or not receiving a stop-accounting request), the session remains open and the reservation objects and the related objects in that session remain in the database cache, using up shared-memory space. You remove expired reservation and active session objects and release shared-memory space by using the pin_clean_rsvns and pin_clean_asos utilities.

For information on how to run these utilities, see "pin_clean_rsvns" and "pin_clean_asos".

About Purging Closed Bills, Items, Journals, and Expired Subbalances

You remove closed bills, items, journals, and expired account subbalances by using the pin_purge utility.

Note:

Do not use the pin_sub_bal_cleanup utility to purge expired account subbalances. Using the pin_sub_bal_cleanup utility results in data in Oracle IMDB Cache and the BRM database being unsynchronized and might result in propagation errors. In an Oracle IMDB Cache-enabled system, you must use the pin_purge utility to purge expired account subbalances.

See "pin_purge" for more information on how to run this utility.

How Objects Are Stored in Oracle IMDB Cache-Enabled Systems

BRM storable objects in an Oracle IMDB Cache-enabled system are assigned one of the following categories:

  • Transient objects contain ongoing session data necessary for processing authorization and reauthorization requests that require low latency and high throughput. These objects are transactional and are stored only in the Oracle IMDB Cache data store. They are created and updated during a session while the system is running and are deleted at the end of the session after the data is committed to the BRM database. These objects can be accessed only through simple queries.

  • Reference objects contain data such as subscriber information and resource balances that are read often but not updated often. These objects require low latency for read-only access and are updated only at the end of the session. Reference objects are created, updated, and stored in Oracle IMDB Cache and replicated to the BRM database.

  • Database objects are of the following types:

    • Database-only objects are created, updated, and stored only in the BRM database. Examples of these database objects include configuration objects, device objects, and pricing objects, such as products, plans, and deals.

    • Hybrid database objects are created and updated in Oracle IMDB Cache and propagated to the BRM database. These objects are deleted from Oracle IMDB Cache using the LRU aging policy.

About the Residency Type and Oracle IMDB Data Manager

The RESIDENCY_TYPE storable object attribute defines where the object resides in the BRM system. For information on RESIDENCY_TYPE, see the discussion about the residency type in BRM Concepts.

IMDB Cache DM uses the residency type values to determine which data source to send request operations to. IMDB Cache DM maintains dual connections: if the data resides in Oracle IMDB Cache, it uses the Oracle IMDB Cache connection to retrieve the data; if the data resides in the BRM database, it uses the Oracle database connection.
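The routing decision can be pictured with a short sketch. The residency-type names and connection labels below are illustrative, not actual BRM constants.

```python
# Sketch of how IMDB Cache DM might route a request by residency type.
# The set contents and connection labels are illustrative assumptions.

CACHE_RESIDENT = {"transient", "reference", "hybrid"}  # served from Oracle IMDB Cache
DATABASE_RESIDENT = {"database_only"}                  # served from the BRM database

def route_request(residency_type):
    """Return which of the two connections serves an object of this residency type."""
    if residency_type in CACHE_RESIDENT:
        return "imdb_cache_connection"
    if residency_type in DATABASE_RESIDENT:
        return "oracle_db_connection"
    raise ValueError(f"unknown residency type: {residency_type}")
```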

About Storing Usage Events

By default, IMDB Cache Manager stores usage event data in expanded format. Only event objects in the EVENT_T base table and the EVENT_BAL_IMPACTS_T subtable are stored in expanded format. There are no limitations on the types of searches or stored procedures that you can use with these tables.

Using Oracle Functions or Procedures in an SQL Statement

Oracle IMDB Cache Manager does not support using Oracle functions or stored procedures in an SQL statement. To work around this limitation, make two separate calls in your policy code: one to run the stored procedure and one to run the SQL statement.

About Searching for Usage Events in Oracle IMDB Cache-Enabled Systems

In Oracle IMDB Cache-enabled systems, usage events are created in Oracle IMDB Cache and are later propagated to the BRM database by the Oracle IMDB Cache synchronization process. To conserve memory, Oracle IMDB Cache periodically deletes usage events from memory by using the LRU policy. Hence, at any given time, some usage events are in Oracle IMDB Cache, some are in the BRM database, and some are in both.

To ensure it finds events stored in both Oracle IMDB Cache and the BRM database, BRM performs a union search. Union searches consist of the following steps, which are different for each rating method:

  • Real-time rating: BRM finds usage events by searching both Oracle IMDB Cache and the BRM database, merging the results together, and then deleting any duplicate events from the search results.

    BRM uses this method to search for events only when:

    • The PCM_OP_TCF_AAA_SEARCH_SESSION helper opcode searches for duplicate session objects.

    • The PCM_OP_TCF_AAA_REFUND opcode searches for session objects to refund.

    • The PCM_OP_INV_MAKE_INVOICE opcode searches for events when creating detailed invoices.

    • The PCM_OP_BILL_MAKE_BILL_NOW opcode searches for events to bill.

  • Batch rating: BRM finds usage events by searching the BRM database only. Before searching the database, however, BRM ensures that all usage events have been propagated from Oracle IMDB Cache to the BRM database by doing the following:

    • Retrieving a list of the most recently created and modified usage events from Oracle IMDB Cache.

    • Polling the BRM database until all the usage events in the list are propagated to the BRM database.

    BRM uses this method to search for events only during the rerating process and during deferred taxation.

For information on configuring BRM to perform union searches during both batch rating and real-time rating, see "Searching for Events in Both Oracle IMDB Cache and the BRM Database".
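The merge-and-deduplicate step of a real-time union search can be sketched as follows. The "poid" field name stands in for the event's object ID; this is an illustrative model, not BRM code.

```python
def union_search(cache_results, db_results):
    """Merge events found in the cache and in the database, dropping duplicates.

    Events are represented as dicts keyed by a 'poid' identifier. An event
    that has been propagated to the database but not yet aged out of the
    cache appears in both result sets and must be counted only once.
    """
    merged = {}
    for event in cache_results + db_results:
        merged.setdefault(event["poid"], event)  # first occurrence wins
    return list(merged.values())
```

For example, if event 2 has already been propagated but not yet aged out, it appears in both result sets and is returned once.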

About Committing Transactions in Both Oracle IMDB Cache and the BRM Database

Certain operations in the BRM system require updates to data in Oracle IMDB Cache and to objects in the BRM database. The transactional consistency feature ensures that if an error occurs while data is written to either Oracle IMDB Cache or the BRM database, the entire BRM transaction is rolled back on both data sources.

The transaction consistency feature is implemented with the following Oracle database features:

  • Distributed transactions.

  • Two-phase commit protocols.

  • Ability to disassociate a service context from one global transaction and reassociate it with another global transaction.

Note:

Any external application that works directly on both data sources without going through IMDB Cache DM must implement its own consistency logic.

To configure your system to use the transaction consistency feature, see "Configuring IMDB Cache Manager for Transaction Consistency".
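The distributed-transaction behavior described above follows the standard two-phase commit pattern, which can be sketched as follows. The Resource class stands in for Oracle IMDB Cache and the BRM database; this is a model of the protocol, not the IMDB Cache DM implementation.

```python
class Resource:
    """Minimal participant in a two-phase commit (stand-in for a data source)."""

    def __init__(self, name, fail_on_prepare=False):
        self.name = name
        self.fail_on_prepare = fail_on_prepare
        self.state = "active"

    def prepare(self):
        if self.fail_on_prepare:
            raise RuntimeError(f"{self.name}: prepare failed")
        self.state = "prepared"

    def commit(self):
        self.state = "committed"

    def rollback(self):
        self.state = "rolled_back"

def two_phase_commit(resources):
    """Commit on all resources, or roll back on all if any prepare fails."""
    try:
        for r in resources:
            r.prepare()       # phase 1: every participant votes to commit
    except RuntimeError:
        for r in resources:
            r.rollback()      # any failure rolls back the entire transaction
        return False
    for r in resources:
        r.commit()            # phase 2: all participants commit
    return True
```

If either participant fails to prepare, the entire BRM transaction is rolled back on both data sources, which is the guarantee the transactional consistency feature provides.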

About Recovering from Oracle Global Transaction Errors

When an application, such as Siebel CRM, attempts to access data in Oracle IMDB Cache, IMDB Cache DM opens an Oracle Global Transaction. If the external application fails while the transaction is open, one of the following occurs:

  • If the transaction has not been precommitted, Oracle rolls back the entire transaction.

  • If the transaction has been precommitted, Oracle keeps the transaction in an Active status for a specified length of time. This gives the application time to reestablish a connection to IMDB Cache DM and complete the transaction. If the transaction is postcommitted before the wait time expires, Oracle commits the transaction. If it is not, Oracle changes the transaction to a Pending status, and the Oracle Database Administrator must intervene.
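The recovery outcomes above can be summarized as a small decision function (illustrative only):

```python
def resolve_transaction(precommitted, postcommitted_in_time):
    """Model the outcome of a global transaction after an application failure.

    Returns the final disposition: rolled back, committed, or left Pending
    for manual intervention by the database administrator.
    """
    if not precommitted:
        return "rolled_back"    # nothing was prepared: safe to roll back
    if postcommitted_in_time:
        return "committed"      # application reconnected and finished in time
    return "pending"            # wait time expired: the DBA must intervene
```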

About Manually Fixing Oracle Global Transaction Failures

Note:

For information about manually committing or rolling back XA transactions, see "Manually Committing or Rolling Back XA Transactions" in BRM JCA Resource Adapter.

IMDB Cache DM automatically fixes most errors that occur during the BRM transaction process. However, in rare cases, a BRM transaction error may occur that IMDB Cache DM cannot fix automatically. When this occurs, the Oracle Global Transaction is changed from an Active status to a Pending status and the error must be fixed manually.

Your Oracle Database Administrator must periodically check for and fix Oracle Global Transaction failures. Table 20-1 shows the BRM tables and views that your Oracle Database Administrator should monitor.

Table 20-1 BRM Tables and Views to Monitor

  • GLOBAL_TRANS_T: Lists information about each Active global transaction. This IMDB Cache Manager table is stored in the data store of each logical partition.

  • GLOBAL_PENDING_TRANS_T: Lists the global transactions that are Pending and require manual intervention by the Oracle Database Administrator. This table is stored in your BRM database. The IMDB Cache Manager installer creates one instance of this table for each schema in your BRM database.

  • DBA_2PC_PENDING: Stores detailed information about each global transaction that is Pending, such as the global and local transaction ID and the state of the commit transaction. To access this Oracle static data dictionary view, you must have a SYS or SYSTEM password for the BRM database.



Using the Oracle IMDB Cache Grid to Partition Data

An Oracle IMDB Cache Grid is a collection of Oracle IMDB Caches that work together to cache data from the Oracle database. Cached data is distributed between the cache grid members. The cache grid manages information about the data location and provides applications access to the cached data in all the grid members with location transparency. Logical partitioning is a mechanism that provides scalability by using the Oracle IMDB Cache Grid architecture. You can configure cache grids to increase system performance and manage system growth.

Typically in a BRM system, data is distributed among multiple database schemas. You can improve scalability by distributing BRM data among multiple logical partitions in each database schema. In this configuration, data from a single database schema is distributed among multiple logical partitions (Oracle IMDB Caches). Logical partitioning essentially provides a second level of data partitioning. As your subscriber base grows, you can add additional database schemas and logical partitions. For example, for a single-schema database with 1,000,000 subscribers, you can create two logical partitions and distribute 500,000 subscriber accounts in each partition. When the subscriber base grows to 2,000,000, you can add a database schema with two more logical partitions to distribute the additional 1,000,000 new subscriber accounts.

About Logical Partitioning in High Availability Systems

In a high-availability system, a cache grid consists of one or more high-availability nodes (a pair of an active and a standby Oracle IMDB Cache). Data in the schema is partitioned among the Oracle IMDB Cache nodes based on the ACCOUNT_OBJ_DB column in the UNIQUENESS_T table.

Accounts and all associated objects reside in the same node, and each node owns its specific data partition. If one node is down, clients can continue to access accounts in the other nodes. Because database operations are performed at the node where the data resides, the workload on the BRM database is reduced, which improves overall system efficiency.

In a BRM system with high availability and logical partitions, each IMDB Cache DM handles operations for a specific logical partition and database schema. All logical partitions are assigned a database number. The following naming scheme is used to denote the logical partition for a specific database schema:

0.M.0.N

where:

  • M specifies the logical partition.

  • N specifies the database schema.

Note:

The first logical partition is 0, not 1. For example, for database schema 0.0.0.1 (schema 1), the first partition is 0.0.0.1 and the second partition is 0.1.0.1. For database schema 0.0.0.2 (schema 2), the first partition is 0.0.0.2 and the second partition is 0.1.0.2.
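The 0.M.0.N naming scheme can be expressed as a one-line helper (illustrative):

```python
def partition_database_number(partition, schema):
    """Build the 0.M.0.N database number for logical partition M of schema N.

    Partitions are numbered from 0; schemas from 1 (matching the 0.0.0.N
    convention for schema database numbers).
    """
    return f"0.{partition}.0.{schema}"
```

For example, the second partition of schema 2 is `partition_database_number(1, 2)`, that is, 0.1.0.2.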

Figure 20-1 shows an Oracle IMDB Cache Grid architecture for a single-schema database with two logical partitions and how you can configure additional database schemas and logical partitions as your subscriber base grows.

Figure 20-1 A High-Availability Configuration with Logical Partitioning


How Accounts Are Assigned to the Logical Partition

Each database schema and logical partition is assigned a status and a priority. The status determines whether the database schema and the logical partition are available for account creation. The priorities are used during account creation to determine in which schema and partition to create the account.

BRM finds an open database schema with the highest priority and assigns accounts to the open logical partition with the highest priority in that database schema. If all database schemas and logical partitions have the same priority, BRM chooses an open database schema and logical partition at random in which to create the account.

For hierarchical, brand, or group accounts, all members are created in the same database schema and generally in the same logical partition as the parent. However, it is possible that members could be located in different logical partitions based on the logical partition priority.

The database schema and logical partition status can be set to open, closed, or unavailable:

  • Open: Database schemas and logical partitions with open status are available for account creation.

  • Closed: Database schemas and logical partitions with closed status are not used for account creation under most circumstances. Accounts are created in a closed database schema or logical partition only if an account's parent, branded, or sponsoring account belongs to it, or if all database schemas and logical partitions are closed. In the latter case, BRM chooses a closed database schema and logical partition at random and continues to create accounts there until a database schema becomes open. To limit the number of accounts created in a database schema or logical partition, you can manually change its status to closed, or you can have BRM change the status automatically when the database schema or logical partition reaches a predefined limit.

  • Unavailable: Database schemas and logical partitions with unavailable status are not used for account creation unless the database schema or logical partition contains an account's parent, sponsoring, or branded account.
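The selection rules above can be modeled with a short sketch. The dictionary keys and status strings are illustrative assumptions, not the config_dist.conf format.

```python
import random

def choose_target(partitions):
    """Pick the schema/partition in which to create a new account.

    `partitions` is a list of dicts with 'name', 'status', and 'priority'.
    Open entries are preferred, highest priority first; ties are broken at
    random. If nothing is open, a closed entry is chosen at random.
    (Illustrative model; not the actual BRM selection code.)
    """
    open_parts = [p for p in partitions if p["status"] == "open"]
    if open_parts:
        best = max(p["priority"] for p in open_parts)
        candidates = [p for p in open_parts if p["priority"] == best]
    else:
        candidates = [p for p in partitions if p["status"] == "closed"]
    if not candidates:
        raise RuntimeError("no schema or logical partition available for account creation")
    return random.choice(candidates)["name"]
```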

To change the status or priority for a database schema or logical partition, edit the STATUS and PRIORITY entries in the BRM_home/apps/multi_db/config_dist.conf file, and then use the load_config_dist utility to load the distribution information into the primary database.

About Finding Data in Logical Partitions

In Oracle IMDB Cache-enabled systems with logical partitions, data is spread across multiple logical partitions in a Cache Grid (or schema). BRM uses the TimesTen Global Processing feature to perform the searches and data retrievals described in the following sections.

About Performing Searches in Logical Partitions

BRM applications and external applications can initiate the following types of searches:

  • Search a single schema and return all the results simultaneously by using the PCM_OP_SEARCH opcode.

  • Search a single schema and return the results in discrete chunks by using the PCM_OP_STEP_SEARCH opcode.

  • Search multiple schemas and return all the results simultaneously by using the PCM_OP_GLOBAL_SEARCH opcode.

  • Search multiple schemas and return the results in discrete chunks by using the PCM_OP_GLOBAL_STEP_SEARCH opcode.

When your schemas contain logical partitions, BRM can perform either a search on a specified logical partition or a global search across all logical partitions in the schema. BRM determines whether to search one or all logical partitions in the schema by reading flags passed in the call to the search opcode. Table 20-2 shows the type of search the opcode performs based on the opcode call type and flag.

Table 20-2 Search Performed Based on Opcode Call Types and Flags

  • Opcode called inside a transaction:

    • No flag (default): The opcode searches the logical partition specified in the input flist.

    • PCM_OPFLG_SEARCH_PARTITIONS: The opcode performs a global search across all logical partitions.

  • Opcode called outside a transaction:

    • No flag (default): The opcode performs a global search across all logical partitions.

    • PCM_OPFLG_SEARCH_ONE_PARTITION: The opcode searches the logical partition specified in the input flist.


BRM finds data in logical partitions as follows:

  1. The CM routes search operations to an IMDB Cache DM instance based on the search opcode's PIN_FLD_POID input flist field. For example, if PIN_FLD_POID is set to 0.1.0.1, the CM routes the search operation to the IMDB Cache DM instance connected to logical partition 0.1.0.1.

  2. IMDB Cache DM calls the specified search opcode.

  3. The search opcode executes the search against one or more logical partitions:

    • If the opcode call occurs inside a transaction, the opcode, by default, searches only the current logical partition. If the PCM_OPFLG_SEARCH_PARTITIONS flag is passed in the opcode call, the opcode performs a search across all logical partitions in the Cache Grid.

    • If the opcode call occurs outside a transaction, the opcode, by default, performs a search across all logical partitions in the Cache Grid. If the PCM_OPFLG_SEARCH_ONE_PARTITION flag is passed in the opcode call, the opcode searches only the specified logical partition.
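The decision logic for search scope can be captured in a few lines. This is an illustrative model; the flag names mirror the PCM_OPFLG_* flags described above.

```python
def search_scope(inside_transaction, flags):
    """Decide whether a search opcode targets one logical partition or all of them.

    `flags` is a set of flag names; the two relevant ones mirror the
    PCM_OPFLG_SEARCH_PARTITIONS and PCM_OPFLG_SEARCH_ONE_PARTITION flags.
    Returns "one" for a single-partition search, "all" for a grid-wide search.
    """
    if inside_transaction:
        # Default inside a transaction: search only the current partition.
        return "all" if "PCM_OPFLG_SEARCH_PARTITIONS" in flags else "one"
    # Default outside a transaction: search every partition in the grid.
    return "one" if "PCM_OPFLG_SEARCH_ONE_PARTITION" in flags else "all"
```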

Global processing searches can decrease search performance. Therefore, all BRM client applications, such as Customer Center and Developer Center, use global processing searches only when the required data is spread across multiple logical partitions.

If your BRM system includes an external application, you must customize the external application to use global processing searches only when absolutely necessary. For more information, see "Searching for Data Across Logical Partitions".

About Retrieving Data from Logical Partitions

BRM applications and external applications can retrieve the following types of data:

  • An entire object by using the PCM_OP_READ_OBJ opcode.

  • Specified object fields by using the PCM_OP_READ_FLDS opcode.

By default, the read opcodes retrieve data from only one logical partition. However, if the optional PCM_OPFLG_SEARCH_PARTITIONS flag is passed in the call to the read opcode, the opcode uses the global processing feature to retrieve data from all logical partitions in the schema.

If your BRM system includes an external application, you must customize the external application to retrieve data from multiple logical partitions. For more information, see "Searching for Data Across Logical Partitions".

How IMDB Cache DM Connects to Oracle IMDB Cache

IMDB Cache DM uses a direct driver connection to connect to Oracle IMDB Cache. For a direct driver connection, you must configure IMDB Cache DM and Oracle IMDB Cache on the same server machine.

See "Connecting IMDB Cache DM to Your Data Stores" for more information.

For more information about the direct driver connection mode, see Oracle In-Memory Database Introduction.

About the AAA Flow in a BRM System with IMDB Cache Manager

In an Oracle IMDB Cache-enabled BRM system, the authentication, authorization, and accounting (AAA) flow uses the /uniqueness table for account and service lookups to authorize accounts for prepaid services. The /uniqueness object contains the data required for BRM to find the account and service objects corresponding to the given login.

The PCM_OP_ACT_FIND opcode uses the /uniqueness object to obtain /account and /service data for a given account login. The Services Framework AAA opcodes use the data in the /uniqueness table to authorize and reauthorize prepaid sessions and to end prepaid accounting sessions. The Customer FM standard opcodes update the /uniqueness table when a new customer account is created or when a customer purchases a new service.