2 Using the BRM Adapter

This chapter provides information about using the Oracle Communications Billing and Revenue Management Adapter for Oracle Communications Data Model (the BRM Adapter) to populate the foundation layer of an Oracle Communications Data Model warehouse with data from a BRM source system.

To use the BRM Adapter, you should have experience with:

  • Oracle Data Integrator

  • Oracle GoldenGate

  • Oracle SQL Developer

  • Extensible Markup Language (XML) programming

For information about installing the BRM Adapter, see Oracle Communications Data Model Adapters and Analytics Installation Guide.

For detailed information about the objects and procedures associated with the BRM Adapter, see "BRM Adapter Reference".

About Populating a Warehouse Using the BRM Adapter

You use the BRM Adapter to populate or refresh the foundation layer (that is, the base, reference, and lookup tables defined in the OCDM_SYS schema) of the Oracle Communications Data Model warehouse.

The BRM Adapter includes data load packages that you run in Oracle Data Integrator to populate the target Oracle Communications Data Model warehouse with data from a source BRM system.

Typically, you perform an initial load of the Oracle Communications Data Model warehouse with the data from your BRM source system, and then you set up incremental loads to regularly refresh the data in the Oracle Communications Data Model warehouse to keep the warehouse synchronized with the BRM system.

After you use the BRM Adapter to populate the foundation layer, you populate or refresh the analytical layer (that is, the derived tables, aggregate tables, Oracle OLAP cubes, and data mining models defined in the OCDM_SYS schema) of the Oracle Communications Data Model warehouse. For more information, see Oracle Communications Data Model Implementation and Operations Guide.

About the BRM Adapter Data Load Packages

The BRM Adapter includes data load packages that you run in Oracle Data Integrator to load data from the BRM source system into the foundation layer of the Oracle Communications Data Model warehouse.

The four main packages load different types of data:

  • The usage package loads usage-related data

  • The billing package loads billing-related data

  • The payment package loads payment-related data

  • The collection package loads collection-related data

The packages give you the flexibility to load data as needed. For example, you can run the billing package to update billing-related data in the Oracle Communications Data Model warehouse during off-peak hours, but run the usage package to update usage-related data more frequently.

Additionally, the packages load data from the BRM source system in dependency order to maintain data integrity in the warehouse.

For instance, when you run the usage package, any new subscriber information (from the last update) is loaded into the Oracle Communications Data Model warehouse before the subscriber usage data is loaded.

Similarly:

  • When you run the billing package, any new subscriber information and subscriber usage data (from the last update) are loaded into the warehouse before the subscriber billing data is loaded.

  • When you run the payment package, any new subscriber information, subscriber usage data, and billing data (from the last update) are loaded into the warehouse before the subscriber payment data is loaded.

  • When you run the collection package, any new subscriber information, subscriber usage data, billing data, and payment data (from the last update) are loaded into the warehouse before the subscriber collection data is loaded.

About the BRM Adapter Data Load Parameters

The data load parameters specify which data to extract from the BRM source system and transform and load into the foundation layer in the Oracle Communications Data Model warehouse.

For example, you could extract all the usage events from the BRM source system that occurred between 12:50 PM on May 10 and 1:00 PM on May 11.

You configure the data load parameters before you perform the initial load of the data warehouse. After the initial load, the BRM Adapter updates and maintains the date parameter values for refreshing the data in the warehouse.
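For example, a window like this corresponds to a simple date-range predicate against the source data. The following query is an illustrative sketch only; the table and column names (event_t, event_timestamp) are hypothetical stand-ins, not the adapter's actual extraction logic:

-- Illustrative only: the kind of date window the load parameters define.
-- event_t and event_timestamp are hypothetical names.
SELECT *
FROM event_t
WHERE event_timestamp > TO_DATE('2015/05/10 12:50:00', 'YYYY/MM/DD HH24:MI:SS')
AND event_timestamp < TO_DATE('2015/05/11 13:00:00', 'YYYY/MM/DD HH24:MI:SS');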

About BRM Adapter Configuration

The following sections describe the BRM Adapter configurations to manage the data loads.

Using Email Alert Notifications with the BRM Adapter

You can configure the BRM Adapter to send email alert notifications to end users and operations personnel to keep them informed about the progress of a data load process.

The BRM Adapter triggers the alert notifications when:

  • A data load process finishes.

  • An error is encountered at the mapping level during a data load process.

See "Configuring the BRM Adapter to Send Email Alert Notifications" for more information.

Using Intermediate Commits for Huge Batch Updates

A huge batch update (one that might run for several hours) would require a full rollback of the whole transaction if a database failure or a transaction failure occurred. For such updates, you can use intermediate commits to prevent loss of the completed work. When you use intermediate commits, the updates that are performed up to each intermediate commit are made permanent in the database. When a failure occurs and you restart the batch update, the update proceeds from the last committed transaction.

For example, when the intermediate commit interval is six hours for a huge batch update, the BRM Adapter extracts the BRM source data and loads it to the foundation layer of the Oracle Communications Data Model using six hour time intervals as follows:

  • Data with a timestamp from midnight to 6AM is extracted and loaded.

  • Data with a timestamp from 6AM to 12PM is extracted and loaded.

  • Data with a timestamp from 12PM to 6PM is extracted and loaded.

  • Data with a timestamp from 6PM to midnight is extracted and loaded.

This method allows the BRM Adapter to continue the updates from the last completed data load without losing the previous updates.
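You can watch the progress of such an update by querying the BRM_ETL_TIME_SPLIT_PARAMETER table in the BRM_STG schema, which is described in "Monitoring a Huge Batch Update". A minimal sketch:

-- Each row is one sub-batch; SUCC_IND = 'Y' means that sub-batch has committed.
SELECT SUB_PROCESS_ID, SUCC_IND
FROM BRM_ETL_TIME_SPLIT_PARAMETER
ORDER BY SUB_PROCESS_ID;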

For an Oracle database, the best commit frequency is usually no explicit commit at all: let the entire update complete as a single transaction. Reserve intermediate commits for huge batch updates where the time required for a full rollback would not fit within the batch window that your operations team's daily scheduled jobs allow.

For information about configuring the intermediate commit interval, see "Setting the Intermediate Commit Interval".

Using Parallel Loading to Speed Up the Data Loads

Using parallel loading can speed up load times for large volumes of data. When you use parallel loading, instead of one process performing the data load, multiple processes each perform part of the load at the same time. For example, suppose a single process loads a batch of 100 records at one record per minute. With parallel loading across four processes, each process handles one fourth of the data, or 25 records, so the entire batch loads in 25 minutes instead of 100 minutes.

You might want to use parallel loading to improve load time, depending on the volume of your data. It is important that you consult with your database administrator to ensure that your database system can support multiple parallel processes. Using parallel loading on a system with insufficient resources may degrade performance because system resources become overutilized.
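For example, you and your database administrator can start by reviewing the database's parallel execution settings. The following query is a sketch; which parameters need tuning depends on your system:

-- Lists the initialization parameters that govern parallel execution.
SELECT name, value
FROM V$PARAMETER
WHERE name LIKE 'parallel%';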

See "Configuring Parallel Loading" for information about configuring parallel loading.

Performing an Initial Load of the Foundation Layer of the Oracle Communications Data Model Warehouse

Oracle recommends that you first perform an initial load of the foundation layer of the Oracle Communications Data Model warehouse in a pre-production environment, where you can verify the BRM source data and correct any errors, and then perform the initial load of the foundation layer on your production system. This approach minimizes errors in your production system.

You can perform the initial load of the foundation layer in two ways:

  • You can run one data load package to load the data from end to end, from the BRM source system to the staging layer and then to the foundation layer.

  • You can run separate data load packages to load the data in stages, from the BRM source system to the staging layer and then from the staging layer to the foundation layer, if needed.

    For instance, you may want to load data from different source systems to the staging layer, and then load the data from the staging layer to the foundation layer.

To perform an initial load of the foundation layer:

  1. Copy the sample XML template below into an XML editor or a text editor.

    The sample XML template contains a <DATA_RECORD> element for each data load package (usage, billing, payment, and collection). The data load parameters for each load package are defined as the child elements in the <DATA_RECORD> element.

    Tip:

    When you perform the initial load on your production system, you can copy the XML configuration file that you created on your pre-production environment to your production system, and then set the data load parameters.
    <?xml version="1.0" ?>
    <!DOCTYPE main [
         <!ELEMENT main (DATA_RECORD*)>
         <!ELEMENT DATA_RECORD
         (PROCESS_ID,PROCESS_NAME,FROM_DATE_ETL,TO_DATE_ETL,LOAD_DATE?,SUCC_IND?,SPLIT_INTERVAL?,PROCESS_TYPE?)+>
         <!ELEMENT PROCESS_ID (#PCDATA)>
         <!ELEMENT PROCESS_NAME (#PCDATA)>
         <!ELEMENT FROM_DATE_ETL (#PCDATA)>
         <!ELEMENT TO_DATE_ETL (#PCDATA)>
         <!ELEMENT LOAD_DATE (#PCDATA)>
         <!ELEMENT SUCC_IND (#PCDATA)>
         <!ELEMENT SPLIT_INTERVAL (#PCDATA)>
         <!ELEMENT PROCESS_TYPE (#PCDATA)>
    ]>
    <main>
         <DATA_RECORD>
             <PROCESS_ID>1</PROCESS_ID>
             <PROCESS_NAME>BRM-ADAPTER</PROCESS_NAME>
             <FROM_DATE_ETL></FROM_DATE_ETL>
             <TO_DATE_ETL></TO_DATE_ETL>
             <LOAD_DATE></LOAD_DATE>
             <SUCC_IND>N</SUCC_IND>
             <SPLIT_INTERVAL></SPLIT_INTERVAL>
             <PROCESS_TYPE>USAGE-EVENTS</PROCESS_TYPE>
         </DATA_RECORD>
         <DATA_RECORD>
             <PROCESS_ID>2</PROCESS_ID>
             <PROCESS_NAME>BRM-ADAPTER</PROCESS_NAME>
             <FROM_DATE_ETL></FROM_DATE_ETL>
             <TO_DATE_ETL></TO_DATE_ETL>
             <LOAD_DATE></LOAD_DATE>
             <SUCC_IND>N</SUCC_IND>
             <SPLIT_INTERVAL></SPLIT_INTERVAL>
             <PROCESS_TYPE>BILLING-INVOICE</PROCESS_TYPE>
         </DATA_RECORD>
         <DATA_RECORD>
             <PROCESS_ID>3</PROCESS_ID>
             <PROCESS_NAME>BRM-ADAPTER</PROCESS_NAME>
             <FROM_DATE_ETL></FROM_DATE_ETL>
             <TO_DATE_ETL></TO_DATE_ETL>
             <LOAD_DATE></LOAD_DATE>
             <SUCC_IND>N</SUCC_IND>
             <SPLIT_INTERVAL></SPLIT_INTERVAL>
             <PROCESS_TYPE>PAYMENT-DATA</PROCESS_TYPE>
         </DATA_RECORD>
         <DATA_RECORD>
             <PROCESS_ID>4</PROCESS_ID>
             <PROCESS_NAME>BRM-ADAPTER</PROCESS_NAME>
             <FROM_DATE_ETL></FROM_DATE_ETL>
             <TO_DATE_ETL></TO_DATE_ETL>
             <LOAD_DATE></LOAD_DATE>
             <SUCC_IND>N</SUCC_IND>
             <SPLIT_INTERVAL></SPLIT_INTERVAL>
             <PROCESS_TYPE>COLLECTION-DATA</PROCESS_TYPE>
         </DATA_RECORD>
    </main>
    
  2. For each <DATA_RECORD> element, enter the values for the following child elements: <FROM_DATE_ETL>, <TO_DATE_ETL>, <LOAD_DATE>, and <SPLIT_INTERVAL>.

    Table 2-1 describes the <DATA_RECORD> child elements.

    Table 2-1 Data Load Parameters

    PROCESS_ID

    A unique ID that identifies the load process type. This value is set as follows:

    • 1: USAGE-EVENTS load process type.

    • 2: BILLING-INVOICE load process type.

    • 3: PAYMENT-DATA load process type.

    • 4: COLLECTION-DATA load process type.

    Do not change these values.

    PROCESS_NAME

    Specifies the name for the BRM Adapter process.

    This value is set to BRM-ADAPTER. Do not change this value.

    FROM_DATE_ETL

    Specifies the starting date (in the format YYYY/MM/DD HH:MM:SS) from which data in the BRM source system is extracted, transformed, and then loaded into the Oracle Communications Data Model warehouse.

    For example, if FROM_DATE_ETL is 2015/05/10 12:50:00, then all data with a timestamp after May 10 12:50 PM is extracted, transformed, and then loaded into the warehouse.

    TO_DATE_ETL

    Specifies the ending date (in the format YYYY/MM/DD HH:MM:SS) until which data in the BRM source system is extracted, transformed, and then loaded into the Oracle Communications Data Model warehouse.

    For example, if TO_DATE_ETL is 2015/05/11 13:00:00, then all data with a timestamp before May 11 1:00 PM is extracted, transformed, and then loaded into the warehouse.

    Note: If TO_DATE_ETL is null, the BRM Adapter sets the date to the source system date, which is useful in an incremental load.

    LOAD_DATE

    Specifies the date (in the format YYYY/MM/DD HH:MM:SS) on which the load process is run.

    Usually, this value is the system date.

    SUCC_IND

    Indicates whether the load process was successful.

    Y = Load process was successful.

    N = Load process was not successful.

    This value is set to N for the initial load. Do not change this value.

    SPLIT_INTERVAL

    Specifies the intermediate commit interval for a huge batch update. This value is specified in minutes.

    See "Using Intermediate Commits for Huge Batch Updates".

    PROCESS_TYPE

    Specifies the load process type:

    • USAGE-EVENTS: Loads usage data.

    • BILLING-INVOICE: Loads usage and billing data.

    • PAYMENT-DATA: Loads usage, billing, and payment data.

    • COLLECTION-DATA: Loads usage, billing, payment, and collection data.

    Do not change these values.


  3. Save the file as dwc_log_retention.xml.

  4. Using SQL Developer, update the location of the XML file in the database:

    1. Connect to the BRM_STG schema.

    2. Run the following query, which stores the location of the dwc_log_retention.xml file in the XML_FILEPATH table.

      UPDATE XML_FILEPATH SET XML_FILEPATH='file_path/dwc_log_retention.xml' WHERE XML_TYPE='LOG_RETENTION';
      COMMIT;
      

      where file_path is the directory in which the dwc_log_retention.xml file is located.
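      Optionally, confirm that the path was stored correctly (a quick sanity check):

      -- Returns the row updated above; the path should match your file location.
      SELECT XML_TYPE, XML_FILEPATH FROM XML_FILEPATH WHERE XML_TYPE='LOG_RETENTION';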

  5. Using Oracle Data Integrator Studio, load the data load parameter values:

    1. Navigate to the Designer navigator.

    2. Expand the BRM-OCDM project folder, then the CONFIG folder, and then the Packages folder.

    3. Select and run the XML_LOAD_CONFIG package, which loads the data load parameter values from the dwc_log_retention.xml file into the BRM_ETL_PARAMETER table in the BRM_STG schema.
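    Optionally, verify that the parameter values were loaded. The following sketch assumes that the BRM_ETL_PARAMETER columns mirror the XML elements described in Table 2-1:

      -- One row per load process type; check the dates and split intervals.
      SELECT PROCESS_TYPE, FROM_DATE_ETL, TO_DATE_ETL, SPLIT_INTERVAL
      FROM BRM_ETL_PARAMETER
      WHERE PROCESS_NAME='BRM-ADAPTER';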

  6. (Optional) Configure email notification. See "Configuring the BRM Adapter to Send Email Alert Notifications".

  7. Using Oracle Data Integrator Studio, run the packages to populate the foundation layer. You can choose to load the data to the foundation layer or to load the data to the staging layer and then to the foundation layer.

    Do one of the following:

    • Load the BRM source data to the foundation layer (source to staging layer to foundation layer):

      1. Navigate to the Designer navigator.

      2. Expand the BRM-OCDM project folder, then the STG_OCDM folder, and then the Packages folder.

      3. Select and run the COLLECTION_DATA_BULKLOAD package, which populates the foundation layer of Oracle Communications Data Model with the BRM source data.

        If you encounter an error at the mapping level during the load process, you will need to correct the data errors and then run the COLLECTION_DATA_BULKLOAD package again. See "Restarting a Data Load Process After a Package Execution Fails".

        After the initial load has finished, the BRM Adapter automatically updates the date values in the BRM_ETL_PARAMETER table for incremental loading as follows:

        UPDATE BRM_ETL_PARAMETER SET FROM_DATE_ETL = TO_DATE_ETL, TO_DATE_ETL = NULL WHERE PROCESS_NAME = 'BRM-ADAPTER' AND PROCESS_TYPE = 'process_type';
        

        where process_type is USAGE-EVENTS, BILLING-INVOICE, PAYMENT-DATA, or COLLECTION-DATA.

    • Load the BRM source data to the staging layer and then to the foundation layer:

      1. Expand the BRM-OCDM project folder, then the SRC_STG_NONGG folder, and then the Procedures folder.

      2. Select and run the ETL_DATE_PARAMETER_SPLITTING_BILLING_INVOICE Version 001 scenario for the ETL_DATE_PARAMETER_SPLITTING_BILLING_INVOICE procedure.

      3. Select and run the ETL_DATE_PARAMETER_SPLITTING_COLLECTION_DATA Version 001 scenario for the ETL_DATE_PARAMETER_SPLITTING_COLLECTION_DATA procedure.

      4. Select and run the ETL_DATE_PARAMETER_SPLITTING_PAYMENT_DATA Version 001 scenario for the ETL_DATE_PARAMETER_SPLITTING_PAYMENT_DATA procedure.

      5. Select and run the ETL_DATE_PARAMETER_SPLITTING_USAGE_EVENTS Version 001 scenario for the ETL_DATE_PARAMETER_SPLITTING_USAGE_EVENTS procedure.

      6. Expand the SRC_STG_NONGG folder and then the Packages folder.

      7. Select and run the SRC_STG_LOAD package, which populates the staging layer of Oracle Communications Data Model with the BRM source data.

        If you encounter an error at the mapping level during the load process, you will need to correct the data errors and then run the SRC_STG_LOAD package again. See "Restarting a Data Load Process After a Package Execution Fails".

      8. Expand the STG_OCDM folder and then the Packages folder.

      9. Select and run the STG_OCDM_LOAD package, which populates the foundation layer of the Oracle Communications Data Model warehouse with the data from the staging layer.

        If you encounter an error at the mapping level during the load process, you will need to correct the data errors and then run the STG_OCDM_LOAD package again. See "Restarting a Data Load Process After a Package Execution Fails".

        After the initial load is finished, the BRM Adapter updates the date values in the BRM_ETL_PARAMETER table for incremental loading as follows:

        UPDATE BRM_ETL_PARAMETER SET FROM_DATE_ETL = TO_DATE_ETL, TO_DATE_ETL = NULL WHERE PROCESS_NAME = 'BRM-ADAPTER' AND PROCESS_TYPE = 'process_type';
        

        where process_type is USAGE-EVENTS, BILLING-INVOICE, PAYMENT-DATA, or COLLECTION-DATA.
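        For example, after the usage load, this automatic update resolves to the following statement:

        UPDATE BRM_ETL_PARAMETER SET FROM_DATE_ETL = TO_DATE_ETL, TO_DATE_ETL = NULL WHERE PROCESS_NAME = 'BRM-ADAPTER' AND PROCESS_TYPE = 'USAGE-EVENTS';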

      For more information about the packages, see "BRM Adapter Reference".

      Note:

      You must examine the E$_ error tables (one E$_ table for each Oracle Communications Data Model table) for any records rejected during staging. These tables are truncated after every load.

Refreshing the Foundation Layer of the Oracle Communications Data Model Warehouse

After you perform an initial load of the foundation layer in the Oracle Communications Data Model warehouse, you need to refresh or update the data in the warehouse with the new data from your BRM source system. In a common business scenario, for example, usage occurs daily and billing occurs monthly; in this case, you refresh the usage data daily and refresh the billing and invoicing data monthly.

See "Execution Flow Using the BRM Adapter with Only Oracle Data Integrator" for an explanation of the execution flow.

To refresh the data in the warehouse, use one of the following methods:

Refreshing the Foundation Layer on a Scheduled Basis

You can schedule the data loads to automate the data refresh.

To schedule the data load, use one of the following methods:

Scheduling Data Loads with the Oracle Data Integrator Built-In Scheduler from a Command Line

To schedule the data loads with the Oracle Data Integrator built-in Scheduler from a command line:

  1. Verify the Oracle Data Integrator Standalone Agent is running.

    See Oracle Fusion Middleware Installation Guide for Oracle Data Integrator for information about configuring and starting the Standalone Agent.

  2. Run one of the following commands to schedule the data load (see Table 2-2 for a description of the parameters):

    • On UNIX, run the following:

      ./startscen.sh scenario_name scenario_version context_code [log_level] [-AGENT_URL=remote_agent_url] [-NAME=local_agent_name] [-SESSION_NAME=session_name] [-KEYWORDS=keywords] [variable=value]*
      
    • On Windows, run the following:

      startscen.bat scenario_name scenario_version context_code [log_level] [-AGENT_URL=remote_agent_url] ["-NAME=local_agent_name"] ["-SESSION_NAME=session_name"] ["-KEYWORDS=keywords"] ["variable=value"]*
      

    Table 2-2 Parameters for Scheduling a Scenario from a Command Line

    scenario_name

    Name of the scenario you want to schedule.

    For example, END_TO_END_LOAD_PLAN_USAGE_EVENTS.

    scenario_version

    Version of the scenario.

    context_code

    Context into which the scenario is started.

    log_level

    Level of logging information to retain.

    remote_agent_url

    URL for the remote agent running the scenario.

    local_agent_name

    Name of the local agent running the scenario.

    session_name

    Name of the session to appear in the session log.

    keywords

    Comma-separated list of keywords attached to the session.

    variable

    Variable used in the session.

    value

    Value assigned to the variable.
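    For example, the following command is a sketch that starts the usage scenario on UNIX, assuming scenario version 001 and the GLOBAL context used elsewhere in this chapter:

      ./startscen.sh END_TO_END_LOAD_PLAN_USAGE_EVENTS 001 GLOBAL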


    For more information about scheduling scenarios from a command line, see the discussion about running integration processes in Oracle Fusion Middleware Developer's Guide for Oracle Data Integrator.

    After the scenario is scheduled, Oracle Data Integrator automatically runs the scenario at the scheduled times.

Scheduling Data Loads with the Oracle Data Integrator Built-In Scheduler from a Web Service

To schedule the data loads with the Oracle Data Integrator built-in Scheduler from a Web Service:

  1. From the Oracle Data Integrator Studio Designer navigator, expand the BRM-OCDM project folder, then the STG_OCDM folder, and then the Packages folder.

  2. Open the package you want to schedule.

    For example, END_TO_END_LOAD_PLAN_USAGE_EVENTS.

  3. From the Package Toolbox, in the Internet folder, double-click OdiInvokeWebService.

  4. Click HTTP Analyzer.

    The Credentials dialog box appears.

  5. In the WSDL URL field, enter http://hostname:20910/oraclediagent/OdiInvoke?wsdl, where hostname is the host name or IP address of the server hosting the Web service.

  6. From the Operations list, select InvokeStartScen(,).

  7. In the Credentials section, do the following:

    1. In the OdiUser: string field, enter SUPERVISOR.

    2. In the OdiPassword: string field, enter sunopsis.

    3. In the workRepository: string field, enter WORKREP1.

  8. In the Request section, do the following:

    1. In the ScenarioName: string field, enter the name of the scenario for the package.

    2. In the ScenarioVersion: string field, enter the version number of the scenario.

    3. In the Context: string field, enter Global.

  9. Click Send Request.

  10. Click OK.

    After the scenario is scheduled, Oracle Data Integrator automatically runs the scenario at the scheduled times.

For more information about scheduling scenarios from a Web Service, see the discussion about running integration processes in Oracle Fusion Middleware Developer's Guide for Oracle Data Integrator.

Refreshing the Foundation Layer with Real-Time Data Using Oracle GoldenGate

After performing an initial load of the Oracle Communications Data Model warehouse, you can refresh the data in the foundation layer of an Oracle Communications Data Model warehouse on a real-time basis using Oracle Data Integrator and Oracle GoldenGate.

See "Execution Flow Using the BRM Adapter with Oracle Data Integrator and Oracle GoldenGate" for an explanation of the execution flow.

To refresh the data in the foundation layer of an Oracle Communications Data Model warehouse with real-time data:

  1. Configure the BRM Adapter for use by both Oracle GoldenGate and Oracle Data Integrator as described in Oracle Communications Data Model Adapters and Analytics Installation Guide.

  2. Verify that the installation and configuration created the schema objects described in "Schema Definitions Added by the BRM Adapter".

  3. From the GGSCI prompt, run the info all command to verify that the Oracle GoldenGate processes needed by the BRM Adapter on the source and staging systems are in the RUNNING status.

    Table 2-3 lists the Oracle GoldenGate processes on the source and staging systems.

    Table 2-3 Oracle GoldenGate Processes

    Source system processes:

    • Manager process

    • Extract process (EXTBRM)

    • Extract Pump process (EXTPBRM)

    Staging system processes:

    • Manager process

    • Replicat process (REPBRM)


    The following example shows the command on the source system and successful results:

    GGSCI>  (mypc1)  5> info all
    
    Program    Status   Group    Lag        Time Since Chkpt
    
    MANAGER    RUNNING
    EXTRACT    RUNNING  EXTBRM   47:29:00   00:00:20
    EXTRACT    RUNNING  EXTPBRM  00:00:00   47:29:06
    

    The following example shows the command on the staging system and successful results:

    GGSCI>  (ocdm01)  2> info all
    
    Program    Status   Group    Lag        Time Since Chkpt
    
    MANAGER    RUNNING
    REPLICAT   RUNNING  REPBRM   00:00:00   00:03:09
    

    Tip:

    If you have two source systems, check the process status on both. For commands to manage Oracle GoldenGate processes, see Oracle Communications Data Model Adapters and Analytics Installation Guide.
  4. Reconfigure the BRM_SRC physical data server to point to the Oracle GoldenGate target tables.

    The Oracle GoldenGate target tables are maintained in the BRM_STG schema, so you reconfigure the BRM_SRC data server to point to that schema. See Oracle Communications Data Model Adapters and Analytics Installation Guide for more information.

  5. Using Oracle Data Integrator Studio, do the following:

    1. Navigate to the Designer navigator.

    2. Expand the BRM-OCDM project folder and then the STG_OCDM folder.

    3. Select and run the package for the data that you want to update:

      • To load usage data, select and run the END_TO_END_LOAD_PLAN_USAGE_EVENTS package.

      • To load usage and billing data, select and run the END_TO_END_LOAD_PLAN_BILLING_INVOICE package.

      • To load usage, billing, and payment data, select and run the END_TO_END_LOAD_PLAN_PAYMENT_DATA package.

      • To load usage, billing, payment, and collection data, select and run the END_TO_END_COLLECTION_DATA package.

    For more information about these packages, see "BRM Adapter Reference".

    Note:

    You can schedule each of these packages to run periodically using Oracle Data Integrator. See "Refreshing the Foundation Layer on a Scheduled Basis".

Manually Refreshing the Foundation Layer

To manually refresh the data in the foundation layer, do the following:

  1. Update the data load parameter values, if required. See "Updating the Data Load Parameters".

  2. Update the users for email notification, if required. See "Configuring the BRM Adapter to Send Email Alert Notifications".

  3. Using Oracle Data Integrator Studio, do the following:

    1. Navigate to the Designer navigator.

    2. Expand the BRM-OCDM project folder, and then the STG_OCDM folder.

    3. Select and run the package for the data that you want to update:

      • To load usage data, select and run the END_TO_END_LOAD_PLAN_USAGE_EVENTS package.

      • To load usage and billing data, select and run the END_TO_END_LOAD_PLAN_BILLING_INVOICE package.

      • To load usage, billing, and payment data, select and run the END_TO_END_LOAD_PLAN_PAYMENT_DATA package.

      • To load usage, billing, payment, and collection data, select and run the END_TO_END_COLLECTION_DATA package.

      For more information about these packages, see "BRM Adapter Reference".

Administering BRM Adapter Data Load Processes

To administer the BRM Adapter data load processes, you perform the following system administration tasks:

Configuring the BRM Adapter for Populating Oracle Communications Data Model Warehouse

The following sections describe how to configure the BRM Adapter for populating the foundation layer of the Oracle Communications Data Model warehouse.

Updating the Data Load Parameters

You can update the data load parameter values manually, but only if required. For example, to rerun a package that failed, you may need to reset the dates to their previous values.

Note:

Because the data load date values are maintained by the BRM Adapter for incremental loads, manually updating the dates might lead to loss of data.

You can update the data load parameters in two ways:

Updating the Data Load Parameters in the XML Configuration File

To update only the split interval data load parameter for intermediate commits, see "Setting the Intermediate Commit Interval".

To update the data load date parameters in the XML configuration file:

  1. Open the dwc_log_retention.xml file using an XML editor or text editor.

  2. Edit the date values (see Table 2-1).

  3. Save and close the file.

  4. Using Oracle Data Integrator Studio, do the following:

    1. Navigate to the Designer navigator.

    2. Expand the BRM-OCDM project folder, then the CONFIG folder, and then the Packages folder.

    3. Select and run the XML_LOAD_CONFIG package, which loads the data load parameter values from the dwc_log_retention.xml file into the BRM_ETL_PARAMETER table in the BRM_STG schema.

Updating the Data Load Parameters in the BRM_ETL_PARAMETER Table

To update only the split interval data load parameter for intermediate commits, see "Setting the Intermediate Commit Interval".

To update the data load date parameters in the BRM_ETL_PARAMETER table:

  1. Using SQL Developer, connect to the BRM_STG schema.

  2. Run the update query.

    For example, the following query updates the FROM_DATE_ETL for the USAGE-EVENTS load process.

    UPDATE BRM_ETL_PARAMETER SET FROM_DATE_ETL='01-Dec-2015' WHERE PROCESS_TYPE='USAGE-EVENTS';
    COMMIT;
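
    Date literals such as '01-Dec-2015' depend on the session's NLS date settings. If FROM_DATE_ETL is a DATE column (an assumption; verify against your schema), the following sketch avoids that dependency:

    UPDATE BRM_ETL_PARAMETER SET FROM_DATE_ETL=TO_DATE('2015/12/01 00:00:00','YYYY/MM/DD HH24:MI:SS') WHERE PROCESS_TYPE='USAGE-EVENTS';
    COMMIT;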
    

Setting the Intermediate Commit Interval

To set the intermediate commit interval:

  1. Using SQL Developer, connect to the BRM_STG schema.

  2. Run the following query.

    UPDATE BRM_ETL_PARAMETER SET SPLIT_INTERVAL=split_interval WHERE PROCESS_TYPE='process_type';
    COMMIT;
    

    where:

    • split_interval determines the time interval used to extract and load the BRM source data to the Oracle Communications Data Model warehouse. This value is specified in minutes.

      For example, if this value is 360 (6 hours), the BRM Adapter extracts and loads the BRM source data in the following order:

      • Data with a timestamp from midnight to 6 AM

      • Data with a timestamp from 6 AM to 12 PM

      • Data with a timestamp from 12 PM to 6 PM

      • Data with a timestamp from 6 PM to midnight

      To disable intermediate commits, set this value to 1440, which indicates no intermediate commits within a 24-hour interval.

    • process_type is one of the following:

      • USAGE-EVENTS: Loads usage data.

      • BILLING-INVOICE: Loads usage and billing data.

      • PAYMENT-DATA: Loads usage, billing, and payment data.

      • COLLECTION-DATA: Loads usage, billing, payment, and collection data.
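    For example, the following statement sets a six-hour (360-minute) commit interval for the usage load:

    UPDATE BRM_ETL_PARAMETER SET SPLIT_INTERVAL=360 WHERE PROCESS_TYPE='USAGE-EVENTS';
    COMMIT;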

    For more information about using intermediate commits, see "Using Intermediate Commits for Huge Batch Updates".

Configuring the BRM Adapter to Send Email Alert Notifications

To configure the BRM Adapter to send email alert notifications:

  1. Copy the sample XML template below into an XML editor or text editor.

    <?xml version="1.0" ?>
    <!DOCTYPE main [
         <!ELEMENT main (DATA_RECORD*)>
         <!ELEMENT DATA_RECORD (MAIL_SERVER?,PORT?,FROM1?,TO1?,CC?)+>
         <!ELEMENT MAIL_SERVER (#PCDATA)>
         <!ELEMENT PORT (#PCDATA)>
         <!ELEMENT FROM1 (#PCDATA)>
         <!ELEMENT TO1 (#PCDATA)>
         <!ELEMENT CC (#PCDATA)>
    ]>
    <main>
         <DATA_RECORD>
             <MAIL_SERVER>ip_address</MAIL_SERVER>
             <PORT>port_number</PORT>
             <FROM1>from_user</FROM1>
             <TO1>to_user1,to_user2</TO1>
             <CC>cc_user</CC>
         </DATA_RECORD>
    </main>
    
  2. Edit the values of the child elements in the <DATA_RECORD> element.

    Table 2-4 describes the child elements.

    Table 2-4 Email Configuration Parameters

    ip_address

    The IP address of the SMTP mail server.

    port_number

    The port number for the mail server.

    from_user

    The email address from which to send the notification.

    to_user

    The email addresses of the users to send the notification to.

    cc_user

    The email addresses of the users to copy on the notification.


  3. Save the file as mail_config.xml.

  4. Using SQL Developer, connect to the BRM_STG schema.

  5. Run the following query, which adds the location of the mail_config.xml file to the XML_FILEPATH table.

    insert into BRM_STG.XML_FILEPATH (XML_TYPE, XML_FILEPATH) values ('MAIL_CONFIG', '/file_path/mail_config.xml');
    commit;
    

    where file_path is the directory in which the mail_config.xml file is located.

    When you run a package, the BRM Adapter locates the configuration file using the information in the XML_FILEPATH table and loads the email details from the file into a database table.
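    If the file later moves, update the stored path rather than inserting a second row. A sketch, where /new_path is a placeholder for the new directory:

    UPDATE BRM_STG.XML_FILEPATH SET XML_FILEPATH='/new_path/mail_config.xml' WHERE XML_TYPE='MAIL_CONFIG';
    COMMIT;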

Updating the Email Alert Notification Configurations

To update the email alert notification configurations:

  1. Open the mail_config.xml file using an XML editor or text editor.

  2. Edit the configuration values (see Table 2-4).

  3. Save and close the file.

Configuring Parallel Loading

To configure parallel loading:

  1. Using Oracle Data Integrator Studio, navigate to the Designer navigator.

  2. Expand the BRM-OCDM project folder.

  3. Do one of the following:

    • To configure parallel loading for the staging layer tables, expand the SRC_STG_NONGG folder.

    • To configure parallel loading for the Oracle Communications Data Model foundation layer tables, expand the STG_OCDM folder.

  4. In the Mappings folder, open the mapping for which you want to enable parallel loading.

  5. In the physical design, select the target table.

  6. From the Window menu, select Properties.

    The Property Inspector displays.

  7. In the Property Inspector, do one of the following:

    • For source-to-staging-layer loading, select the Loading Knowledge Module tab.

    • For staging-layer-to-target-table loading, select the Integration Knowledge Module tab.

  8. In the Options tab, enter a value for SELECT_HINT using the following syntax:

    /*+PARALLEL(target_table,number_of_ports)*/
    

    where:

    • target_table is the name of the target table.

    • number_of_ports is the number of parallel ports on the database server. Consult your database administrator about the number of parallel ports that is supported on the database server.

    For example:

    /*+PARALLEL(ACCOUNT_NAMEINFO_T_D,4)*/
    

Managing and Monitoring BRM Adapter Data Load Processes

The following sections describe how you can manage and monitor the BRM Adapter data load processes.

Using Oracle Data Integrator Web Console to Manage and Monitor Data Load Process

Use the Oracle Data Integrator Web Console to manage and to monitor the BRM data load processes.

From the Runtime tab (see Figure 2-1), you can select and run the BRM scenarios.

Figure 2-1 RunTime Tab in Oracle Data Integrator Web Console


From the Sessions tab (see Figure 2-2), you can view the results of any data load session.

Figure 2-2 Sessions Tab in Oracle Data Integrator Web Console


For more information about using the Oracle Data Integrator Web Console, see Oracle Fusion Middleware Developer's Guide for Oracle Data Integrator.

Displaying Your Brand Logo on the User Interface

To display your brand logo on the Oracle Data Integrator Web Console user interface:

  1. Go to the ODI_Home directory, where ODI_Home is the directory in which Oracle Data Integrator Studio is installed.

  2. Change to the /odi/studio/bin directory.

    The directory contains the file logo.jpg.

  3. Rename the existing logo.jpg file to another name. For example, logo_odi.jpg.

  4. Copy your brand logo file to the /odi/studio/bin directory.

  5. Rename your brand logo file to logo.jpg.

Monitoring a Huge Batch Update

To monitor a huge batch update:

  1. From SQL Developer, connect to the BRM_STG schema.

  2. In the Connections navigator, expand BRM_STG.

  3. Select the BRM_ETL_TIME_SPLIT_PARAMETER table to display the table definition.

  4. Click the Data tab, which displays the data.

    The SUB_PROCESS_ID column indicates the number of batches. A Y value in the SUCC_IND column indicates the batch update has been completed.


Using Oracle Database In-Memory Column Store to Improve Database Operations Performance

The Oracle Database in-memory column store enables database tables to be stored in memory in a columnar format rather than the traditional row format. Data populated in the tables is optimized for analytical processing. For more information about the in-memory column store, see Oracle Database 12c Release 1 (12.1.0.2) Database Administrator's Guide.

Note:

  • The in-memory column store is available starting with Oracle Database 12c Release 1 (12.1.0.2).

  • The in-memory column store is a separately licensed option of Oracle Database Enterprise Edition.

You can use the in-memory column store to enable tables with performance-critical data to be stored in memory and improve performance of the database operations performed on the tables.

For example, you could enable the DD_OBJECT_MAP and DWC_ETL_MATRIX_OCDMLKUP_MATCH tables to be stored in the in-memory column store to improve performance for those queries that select a small number of columns from the table.

To enable a table to be stored in the in-memory column store:

  1. Ensure that the in-memory column store is enabled for the database.

    For information about enabling the in-memory column store for a database, see Oracle Database 12c Release 1 (12.1.0.2) Database Administrator's Guide.

  2. Ensure that the size of the in-memory area is large enough to accommodate all of the database objects you want to store in the in-memory column store.

    Check the amount of memory currently allocated for the in-memory area by entering the following command in SQL*Plus:

    SHOW PARAMETER INMEMORY_SIZE;
    

    Consult with your database administrator about how much memory can be allocated to the in-memory area.

  3. Using SQL*Plus, connect to the BRM_STG schema as a user with the SYSDBA privilege.

  4. Run the following statement:

    ALTER TABLE table_name INMEMORY;
    

    where table_name is the table to be stored in the in-memory column store.

    For example:

    ALTER TABLE DWC_ETL_MATRIX_OCDMLKUP_MATCH INMEMORY;
    

    You can also specify a priority level that determines the priority of the table in the population queue.

    The following example enables the DWC_ETL_MATRIX_OCDMLKUP_MATCH table for the in-memory column store and specifies PRIORITY CRITICAL for populating the table data in memory:

    ALTER TABLE DWC_ETL_MATRIX_OCDMLKUP_MATCH INMEMORY PRIORITY CRITICAL;
    

    This example populates the DWC_ETL_MATRIX_OCDMLKUP_MATCH table in the in-memory column store before database objects with priority levels NONE, LOW, MEDIUM, or HIGH.
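    To confirm that the table has been populated into the in-memory column store, you can query the V$IM_SEGMENTS view (this requires appropriate privileges):

    -- POPULATE_STATUS shows COMPLETED once the table is fully populated in memory.
    SELECT SEGMENT_NAME, POPULATE_STATUS FROM V$IM_SEGMENTS WHERE SEGMENT_NAME='DWC_ETL_MATRIX_OCDMLKUP_MATCH';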

    For more information about using the in-memory column store, see Oracle Database 12c Release 1 (12.1.0.2) Database Administrator's Guide.

Restarting a Data Load Process After a Package Execution Fails

If you encounter an error at the mapping level during a package execution, fix the data error and then restart the data load process.

To restart a data load process after a package execution fails, do the following:

  1. From Oracle Data Integrator Studio Operator navigator, identify the package and the mapping where the error occurred.

    For more information about using the Operator navigator to view package execution results, see Oracle Fusion Middleware Developer's Guide for Oracle Data Integrator.

  2. Analyze the error and resolve the problem with the data. See "BRM Adapter Exception Handling".

  3. From SQL Developer, do one of the following:

    • If the package execution error occurred during the source-to-staging-layer data load process, run the following query:

      UPDATE BRM_EXEC_STAT SET EXEC_STAT='CLOSED' WHERE PROC_TYP='SRC_STG';
      
    • If the package execution error occurred during the staging-to-foundation-layer data load process, run the following query:

      UPDATE BRM_EXEC_STAT SET EXEC_STAT='CLOSED' WHERE PROC_TYP='STG_OCDM';
      

    The BRM Adapter does not allow you to run a package if a load process is in the RUNNING or COMPLETED state.

    The load process status is set to RUNNING when the packages are run. The status is set to COMPLETED after the package executions have completed successfully.
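    Before running either update, you can inspect the current status (a sketch using the columns shown in the queries above):

    SELECT PROC_TYP, EXEC_STAT FROM BRM_EXEC_STAT;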

  4. From the Oracle Data Integrator Studio Designer navigator, run the package that failed previously.

    For example, if the END_TO_END_LOAD_PLAN_USAGE_EVENTS package failed previously, run that package again. Oracle Data Integrator restarts the data load from the point where it previously failed.

BRM Adapter Exception Handling

Exception handling is provided within the SRC_STG_LOAD and STG_OCDM_LOAD packages. All interfaces and procedures are provided with an exception-handling procedure. If any exception occurs, the exception-handling procedure is executed. The exception-handling procedure checks the exception-handling configuration and proceeds accordingly.

Figure 2-3 shows the exception handler operator log.

Figure 2-3 BRM Adapter Exception Handler Operator Log


Exception Handler Configuration

Exception handling is configured for each step within a package, not for each interface or procedure. In most cases, an interface or procedure has the same name as its step. To find a step name, open the package and click the step, as shown in Figure 2-4.

Figure 2-4 Finding Step Names for Exception Handling


Alternatively, you can find the step name in the exception handler log:

ODI-1228: Task PROCESS_ERROR_HANDLING (Procedure) fails on the target ORACLE connection BRM_STG
Caused By: java.sql.SQLException: ORA-20001:
Error occurred while processing DWR_ACCT_MAP, fix the error in interface or make entry into brm_stg (brm_odi_exception_handle) table, if you wish to skip this error

Skipping an exception means that execution does not stop but proceeds with the next step. To configure skipping, run the following insert query in the BRM_STG schema:

insert into brm_odi_exception_handle ( interface_name, execution_context, skip_exception ) values ( 'name', 'context', 'skip' )

where

  • name is the step name.

  • context is the execution context; default is GLOBAL.

  • skip is Y to skip the exception and continue with the next step, or N to not skip the exception and raise an error.

For example:

insert into brm_odi_exception_handle ( interface_name, execution_context, skip_exception ) values ( 'DWR_ACCT_MAP', 'GLOBAL', 'Y' )
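To review the exception-handling entries that are already configured, you can query the same table (a sketch):

SELECT interface_name, execution_context, skip_exception FROM brm_odi_exception_handle;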

Table 2-5 Package and Step Name Mapping

Package | Folder | Step Name
100 - LOOKUP | STG_OCDM | DWB_CRNCY_EXCHNG_RATE_MAP
100 - LOOKUP | STG_OCDM | DWL_ACCT_BAL_TYP_MAP
100 - LOOKUP | STG_OCDM | DWL_CALL_SRVC_TYP_MAP
100 - LOOKUP | STG_OCDM | DWL_COLLCTN_TYP_MAP
100 - LOOKUP | STG_OCDM | DWL_CRNCY_MAP
100 - LOOKUP | STG_OCDM | DWL_CUST_TYP_MAP
100 - LOOKUP | STG_OCDM | DWL_PK_OFPK_TIME_MAP
100 - LOOKUP | STG_OCDM | DWL_PROD_SPEC_TYP_MAP
200 - CUSTOMER_PKG | STG_OCDM | exec_geo_map
200 - CUSTOMER_PKG | STG_OCDM | DWR_PRTY_MAP
200 - CUSTOMER_PKG | STG_OCDM | DWR_CUST_MAP
200 - CUSTOMER_PKG | STG_OCDM | DWR_CUST_ADDR_MAP
200 - CUSTOMER_PKG | STG_OCDM | DWR_PRTY_CNCT_INFO_MAP
200 - CUSTOMER_PKG | STG_OCDM | DWR_PRTY_MAP (Payinfo_CC)
200 - CUSTOMER_PKG | STG_OCDM | DWR_PRTY_MAP (Payinfo_DD)
200 - CUSTOMER_PKG | STG_OCDM | DWR_PRTY_MAP (Payinfo_INV)
200 - CUSTOMER_PKG | STG_OCDM | UPDATE_DWR_PRTY.CUST_KEY
300 - ACCOUNT_PKG | STG_OCDM | DWR_ACCT_SGMNT
300 - ACCOUNT_PKG | STG_OCDM | DWR_ACCT_MAP
300 - ACCOUNT_PKG | STG_OCDM | DWR_ACCT_BAL_GRP_MAP
300 - ACCOUNT_PKG | STG_OCDM | DWR_ACCT_PREF_INVC_DLVRY_MAP
300 - ACCOUNT_PKG | STG_OCDM | DWR_ACCT_PYMT_MTHD_MAP
400 - SERVICE_PKG | STG_OCDM | DWR_SRVC_MAP
400 - SERVICE_PKG | STG_OCDM | DWR_SRVC_SPEC_MAP
400 - SERVICE_PKG | STG_OCDM | DWR_CUST_FCNG_SRVC_MAP
500 - PRODUCT_PKG | STG_OCDM | DWR_PROD_SPEC_MAP (PRODUCT_T)
500 - PRODUCT_PKG | STG_OCDM | DWR_PROD_OFR_MAP
500 - PRODUCT_PKG | STG_OCDM | DWR_PROD_SBRP_MAP
500 - PRODUCT_PKG | STG_OCDM | DWR_PROD_OFR_PROD_SPEC_ASGN_MAP
500 - PRODUCT_PKG | STG_OCDM | DWR_PROD_CTLG_MAP
500 - PRODUCT_PKG | STG_OCDM | DWR_PROD_CTLG_PROD_OFR_ASGN_MAP
500 - PRODUCT_PKG | STG_OCDM | DWR_SRVC_SPEC_PROD_SPEC_RLTN_MAP
500 - PRODUCT_PKG | STG_OCDM | UPDATE_PROD_OFR_PLN_TYP
500 - PRODUCT_PKG | STG_OCDM | DWR_AGRMNT_MAP
500 - PRODUCT_PKG | STG_OCDM | DWR_VAL_ADD_SRVC_MAP
500 - PRODUCT_PKG | STG_OCDM | DWR_AGRMNT_ITEM_MAP
500 - PRODUCT_PKG | STG_OCDM | DWR_PROD_OFR_PRICE_MAP
600 - ACCOUNT_BALANCE_PKG | STG_OCDM | DWB_ACCT_BAL_MAP
600 - ACCOUNT_BALANCE_PKG | STG_OCDM | DWB_ACCT_BAL_IMPC_MAP
700 - INVOICE_PKG | STG_OCDM | DWB_INVC_MAP
700 - INVOICE_PKG | STG_OCDM | DWB_INVC_ITEM_MAP
700 - INVOICE_PKG | STG_OCDM | DWB_INVC_ADJ_MAP
700 - INVOICE_PKG | STG_OCDM | DWB_INVC_DISC_MAP
700 - INVOICE_PKG | STG_OCDM | exec_update_invc
800 - COLLECTIONS | STG_OCDM | DWB_ACCT_DEBT_MAP
800 - PAYMENT_PKG | STG_OCDM | DWB_ACCT_PYMT_MAP
800 - PAYMENT_PKG | STG_OCDM | payment assign view
800 - PAYMENT_PKG | STG_OCDM | DWB_INVC_PYMT_ASGN_MAP
800 - PAYMENT_PKG | STG_OCDM | update dwb_invc.FULL_PAY_RCVD_IND
10000 - BROADBAND_USAGE_EVENT_PKG | STG_OCDM | DWB_BRDBND_USG_EVT_MAP
10000 - DATA_SERVICE_EVENT_PKG | STG_OCDM | DWB_DATA_SRVC_EVT_MAP (DIALUP)
10000 - DATA_SERVICE_EVENT_PKG | STG_OCDM | DWB_DATA_SRVC_EVT_MAP (GSM POSTPAID)
10000 - DATA_SERVICE_EVENT_PKG | STG_OCDM | DWB_DATA_SRVC_EVT_MAP (GSM PREPAID)
10000 - GPRS_USAGE_EVENT_PKG | STG_OCDM | DWB_GPRS_USG_EVT_MAP (PostPaid)
10000 - GPRS_USAGE_EVENT_PKG | STG_OCDM | DWB_GPRS_USG_EVT_MAP (PrePaid)
10000 - SMS_EVENT_PKG | STG_OCDM | DWB_SMS_EVT_MAP (PostPaid)
10000 - SMS_EVENT_PKG | STG_OCDM | DWB_SMS_EVT_MAP (PrePaid)
10000 - VOICE_CALL_EVENT_PKG | STG_OCDM | DWB_WRLS_CALL_EVT_MAP (PostPaid)
10000 - VOICE_CALL_EVENT_PKG | STG_OCDM | DWB_WRLS_CALL_EVT_MAP (PrePaid)
ACCOUNT_PKG | SRC_STG_NONGG | ACCOUNT_T_NONGG_IU
ACCOUNT_PKG | SRC_STG_NONGG | ACCOUNT_NAMEINFO_T_NONGG_IU
ACCOUNT_PKG | SRC_STG_NONGG | BILLINFO_T_NONGG_IU
BAL_GRP_PKG | SRC_STG_NONGG | BAL_GRP_T_NONGG_IU
BAL_GRP_PKG | SRC_STG_NONGG | BAL_GRP_BALS_T_NONOGG_IU
BAL_GRP_PKG | SRC_STG_NONGG | BAL_GRP_SUB_BALS_T_NONOGG_IU
COLLECTION_PKG | SRC_STG_NONGG | COLLECTIONS_SCENARIO_T_NONOGG_IU
COLLECTION_PKG | SRC_STG_NONGG | COLLECTIONS_ACTION_T_NONOGG_IU
CONFIG_PKG | SRC_STG_NONGG | CONFIG_BEID_BALANCES_T_NONGG_IU
CONFIG_PKG | SRC_STG_NONGG | CONFIG_BUSINESS_TYPE_T_NONGG_IU
CONFIG_PKG | SRC_STG_NONGG | CONFIG_CUR_CONV_RATES_T_NONGG_IU
CONFIG_PKG | SRC_STG_NONGG | CONFIG_PAYMENT_PAY_TYPES_T_NONGG_IU
CONFIG_PKG | SRC_STG_NONGG | CONFIG_T_NONGG_IU
CONFIG_PKG | SRC_STG_NONGG | CONFIG_COLLECTIONS_SCENARIO_T_NONOGG_IU
CONFIG_PKG | SRC_STG_NONGG | CONFIG_BILLING_SEGMENT_T_NOOGG_IU
DD_OBJECTS_PKG | SRC_STG_NONGG | DD_OBJECTS_T_NONGG_IU
EVENT_ACTIVITY_TLCS_PKG | SRC_STG_NONGG | EVENT_ACTIVITY_TLCS_T_NONGG_IU
EVENT_ACTIVITY_TLCS_PKG | SRC_STG_NONGG | EVENT_ACTV_TLCS_SVC_CODES_T_NONGG_IU
EVENT_BAL_IMPACT_PKG | SRC_STG_NONGG | EVENT_BAL_IMPACTS_T_NONGG_IU
EVENT_BILLING_PKG | SRC_STG_NONGG | EVENT_BILLING_MISC_T_NONGG_IU
EVENT_BILLING_PKG | SRC_STG_NONGG | EVENT_BILLING_PAYMENT_CASH_T_NONGG_IU
EVENT_BILLING_PKG | SRC_STG_NONGG | EVENT_BILLING_PAYMENT_CC_T_NONGG_IU
EVENT_BILLING_PKG | SRC_STG_NONGG | EVENT_BILLING_PAYMENT_CHECK_T_NONGG_IU
EVENT_BILLING_PKG | SRC_STG_NONGG | EVENT_BILLING_PAYMENT_DD_T_NONGG_IU
EVENT_BILLING_PKG | SRC_STG_NONGG | EVENT_BILLING_PAYMENT_FAILED_T_NONGG_IU
EVENT_BILLING_PKG | SRC_STG_NONGG | EVENT_BILLING_PAYMENT_PAYORD_T_NONGG_IU
EVENT_BILLING_PKG | SRC_STG_NONGG | EVENT_BILLING_PAYMENT_POST_T_NONGG_IU
EVENT_BILLING_PKG | SRC_STG_NONGG | EVENT_BILLING_PAYMENT_T_NONGG_IU
EVENT_BILLING_PKG | SRC_STG_NONGG | EVENT_BILLING_PAYMENT_WTRAN_T_NONGG_IU
EVENT_BROADBAND_USAGE_PKG | SRC_STG_NONGG | EVENT_BROADBAND_USAGE_T_NONGG_IU
EVENT_DLAY_ACTV_SESSION_TLCO_PKG | SRC_STG_NONGG | EVENT_DLAY_ACTV_TLCS_SVC_CDS_T_NONGG_IU
EVENT_DLAY_ACTV_SESSION_TLCO_PKG | SRC_STG_NONGG | EVENT_DLAY_ACTV_TLCS_T_NONGG_IU
EVENT_DLAY_ACTV_SESSION_TLCO_PKG | SRC_STG_NONGG | EVENT_DLAY_SESS_TLCS_SVC_CDS_T_NONGG_IU
EVENT_DLAY_ACTV_SESSION_TLCO_PKG | SRC_STG_NONGG | EVENT_DLAY_SESS_TLCS_T_NONGG_IU
EVENT_DLAY_ACTV_SESSION_TLCO_PKG | SRC_STG_NONGG | EVENT_DLYD_SESSION_TLCO_GPRS_T_NONGG_IU
EVENT_DLAY_ACTV_SESSION_TLCO_PKG | SRC_STG_NONGG | EVENT_DLYD_SESSION_TLCO_GSM_T_NONGG_IU
EVENT_PKG | SRC_STG_NONGG | EVENT_T_NONGG_IU
EVENT_RUM_MAP_PKG | SRC_STG_NONGG | EVENT_RUM_MAP_T_NONGG_IU
EVENT_SESSION_PKG | SRC_STG_NONGG | EVENT_SESSION_DIALUP_T_NONGG_IU
EVENT_SESSION_PKG | SRC_STG_NONGG | EVENT_SESSION_TELCO_GPRS_T_NONGG_IU
EVENT_SESSION_PKG | SRC_STG_NONGG | EVENT_SESSION_TLCO_GSM_T_NONGG_IU
EVENT_SESSION_PKG | SRC_STG_NONGG | EVENT_SESSION_TLCS_T_NONGG_IU
EVENT_SESSION_PKG | SRC_STG_NONGG | EVENT_SESS_TLCS_SVC_CODES_T_NONGG_IU
EVENT_TAX_JURISDICTIONS_PKG | SRC_STG_NONGG | EVENT_TAX_JURISDICTIONS_T_NONGG_IU
IFW_CURR_TIMEZOZE_USAGETYPE_PKG | SRC_STG_NONGG | IFW_CURRENCY_NONGG_IU
IFW_CURR_TIMEZOZE_USAGETYPE_PKG | SRC_STG_NONGG | IFW_TIMEZONE_NONGG_IU
IFW_CURR_TIMEZOZE_USAGETYPE_PKG | SRC_STG_NONGG | IFW_USAGETYPE_NONGG_IU
INVOICE_ITEM_PKG | SRC_STG_NONGG | EVENT_ITEM_TRANSFER_T_NONGG_IU
INVOICE_ITEM_PKG | SRC_STG_NONGG | ITEM_T_NONGG_IU
INVOICE_ITEM_PKG | SRC_STG_NONGG | BILL_T_NONGG_IU
INVOICE_ITEM_PKG | SRC_STG_NONGG | INVOICE_T_NONGG_IU
INVOICE_ITEM_PKG | SRC_STG_NONGG | INVOICE_STATUSES_T_NONGG_IU
PAYINFO_PKG | SRC_STG_NONGG | PAYINFO_T_NONGG_IU
PAYINFO_PKG | SRC_STG_NONGG | PAYINFO_INV_T_NONGG_IU
PAYINFO_PKG | SRC_STG_NONGG | PAYINFO_DD_T_NONGG_IU
PAYINFO_PKG | SRC_STG_NONGG | PAYINFO_CC_T_NONGG_IU
PRODUCT_DEAL_PLAN_PKG | SRC_STG_NONGG | PRODUCT_T_NONGG_IU
PRODUCT_DEAL_PLAN_PKG | SRC_STG_NONGG | DEAL_T_NONGG_IU
PRODUCT_DEAL_PLAN_PKG | SRC_STG_NONGG | DEAL_PRODUCTS_T_NONGG_IU
PRODUCT_DEAL_PLAN_PKG | SRC_STG_NONGG | PLAN_T_NONGG_IU
PRODUCT_DEAL_PLAN_PKG | SRC_STG_NONGG | PLAN_SERVICES_T_NONGG_IU
PRODUCT_DEAL_PLAN_PKG | SRC_STG_NONGG | RATE_PLAN_T_NONGG_IU
PRODUCT_DEAL_PLAN_PKG | SRC_STG_NONGG | DISCOUNT_T_NONGG_IU
PRODUCT_DEAL_PLAN_PKG | SRC_STG_NONGG | PURCHASED_PRODUCT_T_NONGG_IU
PRODUCT_DEAL_PLAN_PKG | SRC_STG_NONGG | PURCHASED_DISCOUNT_T_NONGG_IU
PRODUCT_DEAL_PLAN_PKG | SRC_STG_NONGG | DEAL_DISCOUNTS_T_D_NONGG_IU
SERVICE_PKG | SRC_STG_NONGG | SERVICE_T_NONGG_IU
SERVICE_PKG | SRC_STG_NONGG | SERVICE_EMAIL_T_NONGG_IU
SERVICE_PKG | SRC_STG_NONGG | SERVICE_TELCO_FEATURES_T_NONGG_IU
SERVICE_PKG | SRC_STG_NONGG | SERVICE_TELCO_GPRS_T_NONGG_IU
SERVICE_PKG | SRC_STG_NONGG | SERVICE_TELCO_GSM_T_NONGG_IU


Drop Error Tables (E$_)

You should drop the E$_ tables from the OCDM_SYS schema: if an OCDM_SYS table's structure changes while its E$_ table still exists, the load will fail. Review any existing data in these tables before you drop them. You can list the tables by running the following query:

SELECT * FROM USER_OBJECTS WHERE OBJECT_NAME LIKE 'E$_%' AND OBJECT_TYPE = 'TABLE';
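After reviewing the data, you can generate the DROP statements from the same dictionary query (a sketch; inspect the generated statements before running them):

SELECT 'DROP TABLE ' || OBJECT_NAME || ';' FROM USER_OBJECTS WHERE OBJECT_NAME LIKE 'E$_%' AND OBJECT_TYPE = 'TABLE';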

Moving Data into History Tables

By default, the BRM Adapter moves data into history tables. The history table mapping is stored in the BRM_MAPPING_TAB staging table. You can disable moving data into history tables by running the following query:

UPDATE BRM_MAPPING_TAB SET DELTA_HIST_TAB_NAME = NULL WHERE NORMAL_TAB_NAME = 'ACCOUNT_T';

This query disables history only for the ACCOUNT_T source table. You can disable the moving of history data into all tables by removing the WHERE clause from the query, as shown in the example that follows.
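For example, the same statement without its WHERE clause disables the moving of history data for every mapped table:

UPDATE BRM_MAPPING_TAB SET DELTA_HIST_TAB_NAME = NULL;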

If the BRM_MAPPING_TAB staging table lacks an entry for a history table, you can add one.
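The following sketch shows adding such an entry. The history table name ACCOUNT_T_H is hypothetical, and BRM_MAPPING_TAB may have additional required columns that are not documented here:

-- ACCOUNT_T_H is a hypothetical history table name.
INSERT INTO BRM_MAPPING_TAB (NORMAL_TAB_NAME, DELTA_HIST_TAB_NAME) VALUES ('ACCOUNT_T', 'ACCOUNT_T_H');
COMMIT;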

Note:

The history table structure must be the same as the structure of the delta table.