23 Generating the BRM Cache Group Schema

This chapter describes the Oracle Communications Billing and Revenue Management (BRM) pin_tt_schema_gen utility, how to configure it, and how to use it.

About Generating the BRM Cache Group Schema

You generate the BRM cache group schema by using the pin_tt_schema_gen utility. This utility performs the following tasks:

  • Generates the tt_schema.sql file used to create the BRM cache group schema

  • Generates the tt_load.sql file used to load BRM objects into the cache groups

  • Generates the tt_drop.sql file used to drop the BRM cache group schema

  • Creates missing indexes and not null constraints on the BRM database

You first run the tt_schema.sql script on Oracle IMDB Cache to create the BRM cache groups schema. After running tt_schema.sql, you run the tt_load.sql script to preload the corresponding BRM objects from the BRM database into the cache groups.

Note:

  • In a high-availability system with Oracle Clusterware, Oracle Clusterware automatically duplicates the complete data store from the active to the standby node.

  • In a high-availability system without Oracle Clusterware, you must run tt_schema.sql on both the active and the standby IMDB Cache nodes to create the schema for the BRM cache groups. You run tt_load.sql only on the active node, and the Oracle IMDB Cache replication agent replicates the data to the standby node.

To drop the BRM cache groups schema and the tables created in the cache groups, run the tt_drop.sql script. This deletes all BRM objects from the cache groups. Data that was propagated from the cache to the BRM database before you run tt_drop.sql persists in the BRM database.

Creating and Initializing Your Cache Group Schema

To create and initialize your schema:

  1. Configuring the pin_tt_schema_gen.values File.

  2. Generating Your Schema and Load SQL Scripts.

Configuring the pin_tt_schema_gen.values File

The pin_tt_schema_gen utility uses the existing BRM database schema to generate the tt_schema.sql, tt_load.sql, and tt_drop.sql scripts based on the specifications in the pin_tt_schema_gen.values file.

When you configure the pin_tt_schema_gen.values file, you specify values such as the following:

  • The database name, user name, and password for the BRM database

  • The database number to specify where to load the data

  • The cache group definitions

  • The cache group aging policy

  • The transient tables

Table 23-1 describes the entries in the BRM_home/sys/pin_tt_schema_gen.values file.

Table 23-1 pin_tt_schema_gen.values File Entries

Entry Description

$MAIN_DB{'alias'}

Specifies the database alias name for the BRM database defined in the tnsnames.ora file.

$MAIN_DB{'user'}

Specifies the user name for the BRM database.

For multischema systems, enter the user name for the database schema.

$MAIN_DB{'password'}

Specifies the password for the BRM database user.

For multischema systems, enter the password for the database schema user.

$fileName

Specifies the name of the generated SQL file that can be used to create the schema definitions for the BRM cache groups.

The default is tt_schema.sql.

For multischema systems, append the database number for the schema to the file name. For example: tt_schema_0.0.0.2.sql

$dropingDatastoreSqlFile

Specifies the name of the generated SQL file that can be used to drop the schema definitions for the BRM cache groups.

The default is tt_drop.sql.

$loadingCachegroupSqlFile

Specifies the name of the generated SQL file that can be used to load the data into the BRM cache groups.

The default is tt_load.sql.

$logfile

Specifies the name of the log file that contains the error logs for the pin_tt_schema_gen utility.

The default is pin_tt_schema_gen.log.

$primary_db_user_name

Specifies the name of the primary database user or the primary database schema user. This is required in a multischema system.

$logical_partitioning_enabled

Specifies whether logical partitioning is enabled in Oracle IMDB Cache.

The default is No.

%cache_group_info

Specifies cache group information.

For details about this entry, see "Defining Cache Group Information".

%cache_group_info_specific

Specifies information for special cases in which you must create the cache group definition for each table or class.

For details about this entry, see "Defining Cache Group Information for Special Cases".

%on_delete_cascade_def

Specifies that when rows containing referenced key values are deleted from a parent table, rows in child tables with dependent foreign keys are also deleted.

For details about this entry, see "Setting On Delete Cascade".

@lru_config_def

Specifies the least-recently-used (LRU) aging policy for the cache group.

The default is ('.30','.50','1').

For details about this entry, see "Defining the Cache Group Aging Policy".

@local_tables_class_def

Specifies transient tables.

For details about this entry, see "Defining Transient Local Tables".

$use_timesten_datatypes

Specifies whether to convert fields with the NUMBER(38) data type to the TT_INTEGER and TT_BIGINT data types.

The default is Yes.

For details about this entry, see "Supporting Oracle IMDB Cache Data Types".

%tt_load_cache_group_def

Specifies the cache groups in which to load data.

For details about this entry, see "Generating the Load SQL Script".

$db_no_for_load_sql

Specifies the logical partition database number.

This entry can be used if your Oracle IMDB Cache has multiple logical partitions. This enables you to generate separate SQL files that can be used to load the data for each logical partition or database number.

Note: You must set $logical_partitioning_enabled to Yes to use this entry.

For details about this entry, see "Defining the Logical Partition Database Number".

@specific_sql_query

Specifies additional SQL statements to be added in the generated schema definition SQL file.


The pin_tt_schema_gen.values file includes syntax information and specifications for the default BRM cache groups. Use the information in the following sections to supplement the information in pin_tt_schema_gen.values.
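For orientation, the following sketch shows a minimal fragment of a pin_tt_schema_gen.values file. The database alias, user name, and password are hypothetical; the remaining values are the documented defaults:

$MAIN_DB{'alias'}    = "pindb";      # Database alias from tnsnames.ora (hypothetical)
$MAIN_DB{'user'}     = "pin";        # BRM database user (hypothetical)
$MAIN_DB{'password'} = "password";   # BRM database user's password (hypothetical)
$fileName                 = "tt_schema.sql";
$dropingDatastoreSqlFile  = "tt_drop.sql";
$loadingCachegroupSqlFile = "tt_load.sql";
$logfile                  = "pin_tt_schema_gen.log";
$logical_partitioning_enabled = "No";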

Defining Cache Group Information

You define each cache group by using the %cache_group_info entry. This entry uses the following syntax:

%cache_group_info = (
    cache_group_name => {
        'CLASSES'       => [class1, class2, ...... classN],
        'CG_PROPERTIES' => [cache_group_type, baseTableName, optionalAging, primary_key, 'USE_GLOBAL_CACHE_GROUP'],
        'FK_DEF'        => ['DEFAULT => foreignKey, referredBaseTable, fieldOfReferredBaseTable',
                            'baseTableName_N => foreignKey, referredBaseTable, fieldOfReferredBaseTable']
    },
    ......
);

Table 23-2 defines the parameters in the above syntax.

Table 23-2 Parameters for the %cache_group_info Entry

Parameter Description

cache_group_name

Specifies the name of the cache group.

'CLASSES'

Specifies the BRM storable classes in the cache group, separated by commas.

For example:

'CLASSES' => ['/account','/balance_group','/bill','/billinfo'],

'CG_PROPERTIES'

Specifies the properties of the cache group where:

  • cache_group_type specifies the cache group type.

  • baseTableName specifies the root table in the cache group.

  • (Optional) optionalAging is used to specify whether aging is disabled for the cache group. Aging is enabled by default.

  • primary_key specifies the primary key of the root table.

  • USE_GLOBAL_CACHE_GROUP is specified when logical partitioning is enabled.

For example:

'CG_PROPERTIES' => [
'ASYNCHRONOUS WRITETHROUGH', # Cache group type
'ACCOUNT_T',                 # Root table
'POID_ID0',                  # Primary key
'USE_GLOBAL_CACHE_GROUP'],

'FK_DEF'

(Optional) Defines the foreign key for the base table of each subsequent class.

For example:

'FK_DEF' => [
'DEFAULT => ACCOUNT_OBJ_ID0,ACCOUNT_T,POID_ID0',   # Default for the root tables of other subsequent classes
'JOURNAL_T => ITEM_OBJ_ID0,ITEM_T,POID_ID0']
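Combining the fragments above, a complete cache group definition might look like the following sketch. The cache group name ACCOUNT_CG and the exact class list are illustrative placeholders, not required values:

%cache_group_info = (
    'ACCOUNT_CG' => {
        'CLASSES'       => ['/account','/balance_group','/bill','/billinfo'],
        'CG_PROPERTIES' => [
            'ASYNCHRONOUS WRITETHROUGH',   # Cache group type
            'ACCOUNT_T',                   # Root table
            'POID_ID0',                    # Primary key
            'USE_GLOBAL_CACHE_GROUP'],     # Used when logical partitioning is enabled
        'FK_DEF'        => [
            'DEFAULT => ACCOUNT_OBJ_ID0,ACCOUNT_T,POID_ID0',
            'JOURNAL_T => ITEM_OBJ_ID0,ITEM_T,POID_ID0']
    }
);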

Defining Cache Group Information for Special Cases

You define cache group information for each BRM database table by using the %cache_group_info_specific entry. This entry uses the following syntax:

%cache_group_info_specific = ( table_name => [ cache_group_name, primary_key, cache_group_type, 'AGING_OFF', 'USE_PRIMARY_DB', 'USE_GLOBAL_CACHE_GROUP' ], ...... );

Table 23-3 describes the parameters in the above syntax.

Table 23-3 Parameters for the %cache_group_info_specific Entry

Parameter Description

table_name

Specifies the name of the cache table.

cache_group_name

Specifies the name of the cache group.

primary_key

Specifies the primary key for the specified table.

cache_group_type

Specifies the cache group type.
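For example, a hypothetical per-table definition might look like the following sketch; the table name, cache group name, and the decision to disable aging are placeholders, not required values:

%cache_group_info_specific = (
    'UNIQUENESS_T' => [ 'UNIQUENESS_CG',              # Cache group name (hypothetical)
                        'POID_ID0',                   # Primary key
                        'ASYNCHRONOUS WRITETHROUGH',  # Cache group type
                        'AGING_OFF' ]                 # Disable aging for this cache group
);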


Setting On Delete Cascade

You specify the table on which to apply ON DELETE CASCADE by using the %on_delete_cascade_def entry. This entry uses the following syntax:

%on_delete_cascade_def = ( 'cache_group_name' => ['table1','table2',....,'tableN'], ...... );

where:

  • cache_group_name specifies the name of the cache group.

  • tableN is the table name.

For example:

%on_delete_cascade_def = ( 'ACCOUNT_CG' => ['JOURNAL_T']);

Defining the Cache Group Aging Policy

By default, IMDB Cache Manager checks the memory space usage of the Oracle IMDB Cache shared-segment at one-minute intervals. When the memory usage exceeds 50%, IMDB Cache Manager purges the least-recently-used (LRU) objects from the cache groups until the memory space usage is below 30%.

Note:

Aging is enabled by default for Dynamic AWT cache groups. To disable aging for any cache group, you must specify AGING_OFF in the cache group definition. See "Defining Cache Group Information" for more information.

You define the Oracle IMDB Cache aging policy by using the @lru_config_def entry. This entry uses the following syntax:

@lru_config_def = ('LowUsageThreshold', 'HighUsageThreshold', 'AgingCycle' );

where:

  • LowUsageThreshold is the memory space usage in Oracle IMDB Cache below which the aging policy is deactivated. The default is .30.

  • HighUsageThreshold is the memory space usage in Oracle IMDB Cache above which the aging policy is activated. The default is .50.

  • AgingCycle is the frequency (in minutes) that the memory space usage is checked. The default is 1.
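For example, the following hypothetical setting purges LRU objects when memory usage exceeds 70%, stops purging when usage falls below 40%, and checks usage every 5 minutes:

@lru_config_def = ('.40', '.70', '5');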

For detailed information about implementing aging on a cache group, see Oracle In-Memory Database Cache User's Guide.

Defining Transient Local Tables

Transient tables are created as local tables in Oracle IMDB Cache. You define which storable classes to create as local tables by using the @local_tables_class_def entry. This entry uses the following syntax:

@local_tables_class_def = ('class1', 'class2', ...... , 'classN');

where classN is a BRM storable class.

The default entry is set to the following:

@local_tables_class_def = ('/reservation','/reservation/active','/reservation_list','/active_session','/active_session/telco','/active_session/telco/gprs','/active_session/telco/gprs/master','/active_session/telco/gprs/subsession','/active_session/telco/gsm');

Supporting Oracle IMDB Cache Data Types

You can reduce the amount of data stored in Oracle IMDB Cache by converting all BRM fields with NUMBER(38) data type to a TT_INTEGER or TT_BIGINT data type. The Oracle IMDB Cache integer data types TT_INTEGER and TT_BIGINT require less memory than the BRM integer data type NUMBER(38).

  • NUMBER(38) requires 22 bytes of storage.

  • TT_INTEGER requires 4 bytes of storage.

  • TT_BIGINT requires 8 bytes of storage.

To convert all fields with a NUMBER(38) data type to a TT_INTEGER or TT_BIGINT data type, set the $use_timesten_datatypes entry to YES:

$use_timesten_datatypes  = "YES";

When this entry is set to NO or is missing, fields with a NUMBER(38) data type are left unchanged.

Generating the Load SQL Script

You define the load criteria for generating the tt_load.sql script by using the %tt_load_cache_group_def entry. This entry uses the following syntax:

%tt_load_cache_group_def = ('Cache_group_name' => 'no_of_rows_to_commit, condition, no_of_threads');
%tt_load_cache_group_def = ('Cache_group_name, LOCAL' => 'no_of_rows_to_commit, condition, no_of_threads');

where:

  • Cache_group_name is the name of the cache group.

  • no_of_rows_to_commit is the number of rows to insert into the cache group before committing the work. It must be a non-negative integer. If no_of_rows_to_commit is 0, the entire statement is executed as one transaction.

  • (Optional) condition is the search condition used to retrieve the target rows from the BRM database.

  • (Optional) no_of_threads is the number of loading threads that run concurrently for parallel loading. One thread performs the bulk fetch from the BRM database and the remaining threads perform the inserts into Oracle IMDB Cache. Each thread uses its own connection or transaction.

    The minimum value for no_of_threads is 2. The maximum value is 10. If you specify a value greater than 10, Oracle IMDB Cache assumes a value of 10.
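For example, a hypothetical load definition for a cache group named ACCOUNT_CG might commit after every 1000 inserted rows, restrict the load with a search condition, and use four parallel threads. The cache group name and the WHERE clause shown here are placeholders:

%tt_load_cache_group_def = ('ACCOUNT_CG' => '1000, WHERE POID_ID0 > 0, 4');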

For a high-availability system, you must configure pin_tt_schema_gen.values to generate tt_load.sql for the specific Oracle IMDB Cache database node and then run tt_load.sql against the respective node.

Defining the Logical Partition Database Number

You specify the logical partition database number on which to run the tt_load.sql script by using the $db_no_for_load_sql entry. This entry uses the following syntax:

$db_no_for_load_sql = "db_number";

where db_number is the logical partition database number.

For example, suppose there are two database schemas with two logical partitions for each schema as follows:

  • Schema 1 has logical partitions 0.0.0.1 and 0.1.0.1

  • Schema 2 has logical partitions 0.0.0.2 and 0.1.0.2

On the machine with logical partition 0.0.0.1, set $db_no_for_load_sql to "0.0.0.1". Then generate tt_load.sql using pin_tt_schema_gen -l.

On the machine with logical partition 0.1.0.1, set $db_no_for_load_sql to "0.1.0.1".

On the machine with logical partition 0.0.0.2, set $db_no_for_load_sql to "0.0.0.2".

On the machine with logical partition 0.1.0.2, set $db_no_for_load_sql to "0.1.0.2".
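For instance, on the machine that hosts logical partition 0.0.0.1, the relevant pin_tt_schema_gen.values entries would be:

$logical_partitioning_enabled = "Yes";
$db_no_for_load_sql           = "0.0.0.1";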

Note:

For logical partitioning support, set $logical_partitioning_enabled to Yes.

In a high-availability system with Oracle Clusterware, Oracle Clusterware automatically duplicates the complete data store from the active node to the standby node.

In a high-availability system without Oracle Clusterware, you must:

  • Run tt_schema.sql on both the active and the standby IMDB Cache nodes to create the schema for the BRM cache groups.

  • Run tt_load.sql only on the active node. The Oracle IMDB Cache replication agent replicates the data to the standby node.

See "Using the Oracle IMDB Cache Grid to Partition Data" for more information about logical partitioning.

Generating Your Schema and Load SQL Scripts

You can generate schema and load SQL scripts for a basic BRM system or a BRM system with logical partitioning.

Important:

If you have existing BRM data created on a BRM system before IMDB Cache Manager was installed, you must run the load_pin_uniqueness utility before loading BRM objects into the Oracle IMDB Cache.

Generating Scripts for a Basic BRM System

To generate the schema and load SQL scripts:

  1. If you have not already done so, modify the pin_tt_schema_gen.values file. See "Configuring the pin_tt_schema_gen.values File".

  2. Run the following command:

    source BRM_home/source.me.csh
    
  3. Run the pin_tt_schema_gen utility with the -a parameter:

    ./pin_tt_schema_gen -a
    

    Note:

    If you do not specify the values for $MAIN_DB{'user'} and $MAIN_DB{'password'} in the pin_tt_schema_gen.values file, the pin_tt_schema_gen utility prompts you to enter these values.

    This updates the BRM database with unique indexes and non-null constraints and generates the following files:

    • tt_schema.sql

    • tt_load.sql

    • tt_drop.sql

Generating Scripts for a BRM System with Logical Partitioning

To generate your scripts for logical partitions, perform the following steps for each database schema on your system:

  1. Open the pin_tt_schema_gen.values file in a text editor.

  2. Set the $logical_partitioning_enabled entry to Yes.

  3. Set the $db_no_for_load_sql entry to the appropriate value for the first logical partition in your database schema.

  4. Save and close the file.

  5. Run the following command:

    source BRM_home/source.me.csh
    
  6. Run the pin_tt_schema_gen utility with the -a parameter:

    ./pin_tt_schema_gen -a
    

    This generates your tt_schema.sql, tt_drop.sql, and tt_load_0.M.0.N.sql scripts, where M is the partition number and N is the database schema number.

  7. Open the pin_tt_schema_gen.values file in a text editor.

  8. Set the $db_no_for_load_sql entry to the appropriate value for the second logical partition in your database schema.

  9. Save and close the file.

  10. Run the pin_tt_schema_gen utility with the -l parameter:

    ./pin_tt_schema_gen -l
    

    This generates the tt_load_0.M.0.N.sql script for loading BRM data into your second logical partition.