21 Understanding Conversion Manager

This chapter provides an overview of Oracle Communications Billing and Revenue Management (BRM) Conversion Manager. It describes the process of converting data from a legacy database to the BRM database.

Important:

Conversion Manager is an optional component, not part of base BRM, and requires a separate license.

About Conversion Manager

You use Conversion Manager to load legacy data into BRM. Conversion Manager supports the following types of data:

  • Account data, such as the customer's name, address, profile, and discount information.

  • Service data, such as the services subscribed to by the accounts.

  • Product data, such as products purchased by the accounts.

  • Billing data.

  • Account hierarchy data.

  • Balance data, including data for all resources. When data is migrated, rollover events can use the balance data to manage rollovers.

You can load account and balance data together, or you can load account data first and balance data afterward. In either case, all account data for an account must be loaded before that account's balance data.

Before migrating data, do the following:

  • Create a price list that applies to the migrated data. The migrated data must reference price list objects that already exist in the BRM database.

  • Make sure that cycle arrears fees, billing-time discounts, and rollovers have been processed in the legacy system.

When the accounts are migrated, cycle forward fees and discount grants (for example, free minutes) are applied. However, all delayed events are handled in the current bill.

Note:

The following features are not supported by Conversion Manager:
  • Brand management

  • Auditing

Overview of the Data Conversion Process

Converting legacy data includes these tasks:

  • Understanding the data in the legacy system and deciding how to convert it to the BRM database.

  • Mapping the data in the legacy database to the BRM database. To do so, you create XML files, which are validated against the Conversion Manager XSD schema files.

  • Migrating the data to the BRM database by using the pin_cmt utility. To migrate data:

    • Import data into the BRM database. The data is hidden from BRM processes.

    • Deploy the staged data to the production area.

    See "Loading Legacy Data into the BRM Database" for more information.

    Important:

    BRM must be running when you import and deploy data.

About Testing Your Data Mapping

You test your data mapping by using a test database. To test the data mapping:

  1. Extract a subset of the data into an XML file. See "Mapping Legacy Data to the BRM Database".

  2. Load the data into the test database. See "Loading Legacy Data into the BRM Database".

  3. Test the data. See "Testing the Imported Data".

After you determine that the data mapping works, you import the data into the production database.

About Mapping Data

You convert legacy data to XML format by using an extraction utility.

Important:

Conversion Manager does not include a utility for extracting legacy data into XML files. You must develop your own extraction utility or obtain one from a third party.

Conversion Manager can convert data from any type of legacy database (for example, Oracle).

To map data from the legacy database to the BRM data schema, you need a thorough understanding of the legacy data and how BRM stores data.
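
For example, a mapped account record in an XML input file might look like the following fragment. The element names shown here are illustrative only; the actual elements and structure are defined by the Conversion Manager XSD schema files, and every input file must validate against those schemas.

    <Account>
      <Name>Jane Smith</Name>
      <Address>123 Main Street, Anytown</Address>
      <BillingDOM>5</BillingDOM>
    </Account>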

If your legacy data includes data not currently supported in BRM, you can create new storable classes, or extend storable classes, to support the custom data. You can then extend the Conversion Manager XML schema to migrate the data. See "Migrating Data by Using New and Extended Storable Classes".

You can configure data enrichment to bulk-insert system-wide values and to provide batch controls for operational integrity.

About Loading Data

You use the pin_cmt utility to load the converted data into the BRM database. Loading data includes two processes: importing accounts and deploying accounts.

  • First, you import the data into the BRM database. The imported data is hidden from BRM processes, so it is not part of your production system. (Data is hidden by changing the database number in the Portal object ID (POID) of each imported object to the number defined in the infranet.cmt.dbnumber entry in the pin_cmt utility's Infranet.properties file; for example, 0.0.0.12. See the configuration example after this list.)

    When you import data, you specify a stage ID. This enables you to load data in stages; for example, you can load a specific type of account first. A single database schema can include multiple stages.

  • After the data is imported, you deploy the data. When you deploy data, the imported data is exposed as production data.

    You control which accounts are deployed by specifying the stage ID that you used when importing the data and the billing day of month (DOM). Only accounts with the specified stage ID and DOM are deployed.

    After the data is deployed, the database number in the POID is changed to the actual database number, and the data becomes available for use by BRM processes.

    Note:

    • Events that apply to a migrated account are not processed until the account is deployed.

    • When data is deployed, the database number used is defined in the infranet.cmt.targetdbnumber entry in the pin_cmt utility Infranet.properties file. By default, the number is 0.0.0.1.

    • File processing data, such as the file name and batch ID, is stored in the /batch/cmt object.

    When accounts are deployed:

    • The bill cycles are started.

    • Cycle fees are applied.

    • If you use a multischema system, the uniqueness table in the primary database schema is updated.

      Important:

      After deploying your imported accounts, stop and restart Pipeline Manager so that the new account data is sent to it.
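
For reference, the two database-number entries described in this section might appear as follows in the pin_cmt utility's Infranet.properties file. (The values shown are the examples used in this chapter; connection entries and other required entries are omitted.)

    infranet.cmt.dbnumber = 0.0.0.12
    infranet.cmt.targetdbnumber = 0.0.0.1

With these settings, a hypothetical account imported with the POID 0.0.0.12 /account 12345 0 is hidden from BRM processes. When its stage is deployed, its POID becomes 0.0.0.1 /account 12345 0, and the account becomes available as production data.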

About Verifying Data before It Is Deployed

You can view data that has been imported but not yet deployed by logging in to Customer Center through a scoped Oracle Data Manager (DM). See "Viewing Data before Deploying".

About Migrating Data to Multischema Systems

To load data into a multischema system, run a separate instance of the pin_cmt utility for each database schema. The pin_cmt utility connects to the primary schema and creates a /uniqueness object for every account and service migrated to BRM.

Important:

If you have a multischema system, make sure the stage IDs are larger than the database IDs for the schemas. For example, if you have a schema with the number 0.0.0.5, use stage IDs larger than 5.

About Loading Data by Using Multiple Files

You can use multiple XML files to load data. For example, you can load account information separately from balance data (see the example after the following note). You can use the same staging area for multiple files.

Important:

  • Load account data before loading balance data.

  • Load parent accounts before loading child accounts.
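
For example, a migration might be split across three files (the file names are illustrative), all imported with the same stage ID and loaded in this order:

  1. parent_accounts.xml (parent account data)

  2. child_accounts.xml (child account data)

  3. balances.xml (balance data for the migrated accounts)

This order satisfies both rules: parent accounts are loaded before child accounts, and account data is loaded before balance data.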

About Reloading Data

If your BRM database runs out of space for data rows, the loading process stops. You can load the remaining data after more space is made available in the database. See "Reloading Data".