3 Database Configuration and Tuning

This chapter provides guidelines for configuring and tuning your Oracle Communications Billing and Revenue Management (BRM) database.

For information on installing BRM, see "BRM Installation Overview".

Generic Database Configuration

The basic tasks involved in creating a BRM database are described in the following sections.

Selecting the Storage Device Type

Databases can be stored in raw devices (partitions) or in files. Raw devices provide the best performance for most workloads, but file systems are easier to administer because standard system administration utilities are available for them. Depending on your system, however, the performance of files can be almost equal to that of raw partitions; this is true of some of the disk arrays that are available, for example.

In general, databases are stored in raw partitions on large systems.

Estimating the Database Size

The objects that require the most storage space in a default BRM installation are accounts, bills, and events. The tables that correspond to these objects are also the ones with the most activity:

  • ACCOUNT_T

  • BILL_T

  • EVENT_T

  • ITEM_T

Service objects also require a lot of storage.

Your customizations might create additional large tables. For example, if you store a lot of account profile data, you must account for data in the ACCOUNT_PROFILES_T table.

Your estimate should not include space gained by archiving.

It takes about 17 to 18 GB of disk space for one million accounts. Activities such as billing, invoicing, and rating consume the most disk space.
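On a running system, you can check these estimates against the actual space used by the largest tables. The following query is a sketch; it assumes access to the DBA views and the default BRM table names:

```sql
-- Space used by the largest default BRM tables, in MB
SELECT segment_name, ROUND(SUM(bytes)/1024/1024) AS mb
FROM   dba_segments
WHERE  segment_name IN ('ACCOUNT_T', 'BILL_T', 'EVENT_T', 'ITEM_T')
GROUP  BY segment_name
ORDER  BY mb DESC;
```

Repeating this query periodically shows how fast each table grows for your workload.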

International Version Sizing Considerations

English databases store VARCHARs as one byte per character. Localized versions of BRM can store these strings in AL32UTF8 or UTF8 format. BRM 7.5 uses the AL32UTF8 character set. Character strings such as names, addresses, descriptions, and notes that can be manipulated by BRM GUI tools can vary in size. To size the database, you must determine roughly what percentage of the database consists of strings that can vary in size.
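To confirm which character set your database uses before sizing variable-width strings, you can query the NLS settings (a quick check; requires access to the data dictionary):

```sql
-- Returns AL32UTF8, UTF8, or a single-byte character set name
SELECT value
FROM   nls_database_parameters
WHERE  parameter = 'NLS_CHARACTERSET';
```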

Assigning Disks for the Operating System and for BRM

Allocate separate disks for the BRM software, operating system, and operating system SWAP space. These disks should not be used for any other purpose.

Assigning Storage for Log Files, Rollback Segments, and Temporary Storage

Use separate disks for redo log files, rollback segments, and temporary space. Do not use the disks for any other purpose.

The redo, rollback, and temporary storage database objects are not created or modified by the BRM configuration scripts. Use a separate script to create these objects.

To determine the amount of storage that is used for redo logs, rollback segments, and temporary space, use the following guidelines.

Assigning Storage for Redo Log Files

The redo log files are the most heavily used objects in the database system. You usually create two redo logs.

The redo log files should be large enough to avoid excessive switching between log files. Each switch triggers a checkpoint, which slows performance. For most implementations, the size of each redo log file should be 1 to 5 GB.

If you are not using database archiving, you can put both redo logs on one disk. If you are using archiving, use at least two redo logs on separate disks. When archiving, the redo log files should be large enough that each file can be fully archived before it must become active again. You can increase the number of redo log files to allow enough time for archiving.
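As an illustration of these guidelines, the following statements report the current redo log configuration and add a 2 GB log group with mirrored members on two disks. The file paths are hypothetical; substitute your own:

```sql
-- Current redo log groups, sizes, and status
SELECT group#, ROUND(bytes/1024/1024) AS mb, members, status
FROM   v$log;

-- Add a log group with two members on separate disks (hypothetical paths)
ALTER DATABASE ADD LOGFILE GROUP 3
  ('/u03/oradata/pindb/redo03a.log',
   '/u04/oradata/pindb/redo03b.log') SIZE 2048M;
```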

Assigning Storage for Rollback Segments

Rollback segments are the second-most used objects in Oracle databases. All rollback segments can be in a single tablespace. The number of rollback segments should be approximately equal to the number of Data Manager (DM) back ends.

To prevent rollback segments from repeatedly growing and shrinking, use a storage clause such as the following:

STORAGE (INITIAL 256K NEXT 256K MINEXTENTS 20 OPTIMAL 5120K)

Assigning Storage for Temporary Storage

Temporary storage should be three to four times the size of the largest table (usually EVENT_T). In most cases, 1 GB is enough, since simulation programs and BRM do not sort this table.


By default, temporary storage is TEMP.
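A temporary tablespace sized per the guideline above can be created as follows. This is a sketch; the tablespace name TEMP matches the default, but the file path and extent size are assumptions:

```sql
CREATE TEMPORARY TABLESPACE temp
  TEMPFILE '/u05/oradata/pindb/temp01.dbf' SIZE 1G
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;
```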

Selecting the Storage Model

Choose a storage model based on the total size of your database, which you can determine by summing your data, index, rollback, and temporary tablespaces.

  • Use Test for test or demonstration databases smaller than 700 MB.

  • Use Small for test or demonstration databases smaller than 1.5 GB.

  • Use Medium for databases smaller than 30 GB.

  • Use Large for databases larger than 30 GB.

You set the storage option at installation by editing the pin_setup.values file. See "Installing BRM".

During installation, BRM tables are created using one of the storage models. Therefore, the default storage clause is not used at installation. (The default storage clause specifies the storage parameters to use if no storage clause is provided.) However, if you create custom tables, you must specify a storage clause, or else the default storage clause is used.
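For example, a custom table can name its tablespace and storage clause explicitly so that the default storage clause is not used. The table, column, and tablespace names below are hypothetical:

```sql
CREATE TABLE my_profile_ext_t (
  poid_id0     NUMBER NOT NULL,
  profile_data VARCHAR2(255)
)
TABLESPACE pin00
STORAGE (INITIAL 10M NEXT 10M);
```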

Creating Tablespace Storage

Tablespace storage is created from physical disks; however, the unit of storage assigned to a tablespace is a logical device (also called a logical drive or volume). A logical device can be an entire disk, a set of disks, or part of a disk.

You usually use a volume manager, a hardware-specific utility, or a hardware-level RAID manager to create logical devices. (Use either a volume manager or a RAID manager, but not both.) A disadvantage with using a RAID solution is the lack of controller redundancy.

There are various types of logical devices, for example, striped (RAID0), RAID1, and RAID5. Which one you choose depends on requirements for performance, availability, and price:

  • RAID0 has the best performance and price, but provides no fault tolerance.

  • RAID1 and RAID5 provide increased availability, but with reduced performance, especially for write operations.

  • RAID0+1 provides the performance advantage of striping, and the availability of mirroring. However, the cost is higher because you use twice as much disk space.

Before determining the number of tablespaces and how to assign them to logical devices, determine the number of available logical devices.

The number of logical devices that can be created on a UNIX system depends on the number and size of stripes used to construct each logical device. Each logical device is normally constructed from several 32 KB or 64 KB stripes made over four to seven physical disks using a volume manager. Each logical device consists of multiple stripes over the same set of disks. One or more logical devices can be created on one set of disks. Fewer than four disks might be used in small- or medium-sized databases.

For best performance:

  • Assign all logical devices created over the same set of disks to the same tablespace.

  • Each logical device should be on its own disks and controller, usually 6 to 14 disks per controller. Even three or four disks per tablespace can still be a good configuration.

Indexes occupy approximately one third of the space required by the database. A good initial size for a tablespace is 2 GB. Logical devices can then be added in 2 GB or larger increments. Logical devices smaller than 2 GB can lead to reduced performance due to maintenance overhead, although they may be required in smaller databases where the number of available disks is limited. The ability to create logical devices larger than 2 GB depends on disk size and number of disks.

If possible, create very large (10 GB) initial tablespaces instead of small (less than 2 GB) tablespaces. Use the Tables Sheet and Indexes Sheet to determine the size of these initial tablespaces.

From the pin_tables.values file, determine which tables map to which tablespaces, and then sum the storage required for each tablespace. This can be done for the larger tables and tablespaces; the smaller tablespaces do not need more than a few gigabytes for their initial storage. See "Tablespace Priorities".
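On an existing database, you can sum the storage actually consumed per tablespace to compare against these estimates (requires access to the DBA views):

```sql
-- Space consumed in each tablespace, in MB
SELECT tablespace_name, ROUND(SUM(bytes)/1024/1024) AS used_mb
FROM   dba_segments
GROUP  BY tablespace_name
ORDER  BY used_mb DESC;
```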

The size of the database, in turn, determines the minimum number of disks required for the database. Remember to consider disk space required for other purposes, for example, the operating system, BRM, log files, temporary storage, and swap files.

You should have enough disks to avoid performance bottlenecks. In addition, you can increase performance by spreading the most-used tables over multiple disks.


You can add disks and logical devices at any time after BRM has been installed.

After determining the number of disks available for the database, divide the tablespaces among those disks.

Half the remaining space will be used for mirroring. Put mirrors on different disks than their corresponding tables and indexes. The number of mirror logical devices will be equal, in number and size, to their corresponding table and index logical devices.

After allocating disk space for mirrors, divide the remaining disk space using the ratio of 2:1 between tables and indexes. That is, two-thirds of the space will be used for table logical devices and one-third of the space will be used for index logical devices.


The logical devices may not be the same size, because some tablespaces will be more active than others. See "Tablespace Priorities".

Create the logical devices over the remaining physical disks using the above guidelines. The next step is mapping tablespaces to logical devices.

Mapping Tablespaces to Logical Devices

Logical-to-physical table and index mapping is defined in the BRM_Home/setup/scripts/pin_tables.values file. This file lists 36 logical tablespaces for indexes and 36 logical tablespaces for data.

In the default BRM installation, all 72 BRM logical tablespaces map to two separate physical tablespaces: one for all the tables ($PIN_CONF_DM_DB_TABLES_GROUP) and one for all the indexes ($PIN_CONF_DM_DB_INDEX_GROUP). When you install BRM, the physical tablespace names PIN00 and PINX00 are substituted for the configuration variable names in the file.

The default logical-to-physical tablespace mappings defined in the pin_tables.values file are:

Default Tablespaces for Data


Default Tablespaces for Indexes


You can customize the tablespaces to improve performance.

To map logical and physical devices, you modify the entries in the right-hand column. You can edit the file before or after creating the logical devices, however, you must create logical and physical tablespaces before running the pin_setup script that uses the values in the pin_tables.values file.

Tablespace Priorities

BRM tablespaces are ranked according to highest level of activity. Table 3-1 shows the default priority:

Table 3-1 Tablespace Priorities

Priority | Table | Index

In most cases you should not change table to tablespace mappings. Instead, change the tablespace priorities, the number of logical priorities, or both. Your goal should be to maintain the best performance based on your customizations and customer usage patterns.

To change table-to-tablespace mapping, use the same tablespace for tables whose sizes are a function of different parameters. In this way, tables grow at different rates, resulting in more efficient space allocation within one tablespace. If tablespaces are not large enough, they fill up and an error is returned to BRM. This usually happens with the event tables first.

For best results, 36 tablespaces have been provided for data and another 36 tablespaces for indexes in the pin_tables.values file. Depending on your installation size, available hardware, and the functionality you are implementing, such as Rated Event (RE) Loader, some or all of these tablespaces might be used for the database. BRM recommends using at least 18 tablespaces for data and another 16 for indexes.


If you are using RE Loader, you may need additional tablespaces for related tables.

If there are fewer than 72 logical devices, you can choose your own method to map the logical tablespaces to the smaller number of physical tablespaces. For example, you can allocate a few physical tablespaces to the highest-priority logical tablespaces, and allocate the remaining logical tablespaces using round-robin. Or, you can simply map all tablespaces using round-robin.

For example, assume that you have 6 logical devices for tables, and 4 logical devices for indexes. These 10 logical devices are mapped to 10 physical tablespaces. The two highest priority tablespaces are mapped to their own physical tablespaces, using the priorities defined in the priority table. The remaining 30 logical tablespaces are then mapped to the remaining tablespaces using round-robin. To do this, first assign the physical tablespace to the physical tablespace configuration variable:

# default tablespace or filegroup names

Then change the default logical tablespace to physical tablespace mapping as follows, using the priority table:

For Data


For Indexes


The details on changing configuration variables in the configuration scripts can be found in the comments in those scripts.

In test installations, where the total number of disks is constrained on the database server, it is often easier to configure a single tablespace or file group for data (PIN00) and one for indexes (PINX00). For example, if a database system has 12 physical drives available for data and indexes, you could configure the database as follows:

  • Create 3 logical volumes (E:, F:, G:), each striped across 4 drives

  • Create the data tablespace PIN00 using a data file on E: and F:

  • Create the index tablespace PINX00 using a data file on G:

  • Map all the $PIN_CONF_TBLSPACE* entries to PIN00 and all the $PIN_CONF_TBLSPACEX* entries to PINX00

Because there are not enough disks to practically separate tablespaces into distinct logical volumes, it makes sense from a performance perspective to manage just two tablespaces. This way, the operating system manages the underlying parallel I/O operations.
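The tablespace creation for this test layout might look like the following. The file names and 8 GB data file sizes are assumptions; size the files to fit your drives:

```sql
-- One data tablespace spanning the E: and F: volumes
CREATE TABLESPACE pin00
  DATAFILE 'E:\oradata\pin\pin00_01.dbf' SIZE 8000M,
           'F:\oradata\pin\pin00_02.dbf' SIZE 8000M;

-- One index tablespace on the G: volume
CREATE TABLESPACE pinx00
  DATAFILE 'G:\oradata\pin\pinx00_01.dbf' SIZE 8000M;
```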

Be sure to assign custom tables (new tables created for new objects that are not part of the default BRM installation) to explicit tablespaces. These can be new tablespaces that you create or existing ones. If you do not make a tablespace assignment for these tables, the new tables will go into a default tablespace which may cause performance or storage problems as the new tables grow or are accessed.

Creating Tablespaces

Create the actual tablespaces before running the pin_setup configuration script, because the setup scripts require existing tablespaces. If you have not edited the pin_tables.values file to assign tables to nondefault tablespaces, you must create only the two default tablespaces.

Running Configuration Scripts to Create the BRM Database

See "Installing and Configuring the Oracle Database" for information about creating the BRM database.

Configuring Oracle Databases

For information about configuring the Oracle database, see the Oracle documentation. To increase performance, experiment with database configuration options (for example, increase the number of processes, rollbacks, shared pool size, index space, shared memory, and buffers).

Examine the init.ora parameters in TPC-C full disclosure reports for your hardware platform. Compare those parameters with your installation. Understand why the examples might be different than your values, and experiment with different values.

Table 3-2 provides guidelines for some Oracle and operating system configuration options.

Table 3-2 Configuration Options for Operating Systems

Configuration Options Guidelines


Set to FALSE for all BRM system configurations. This improves BRM database performance.


Set to TRUE to enable asynchronous database updates. Using async I/O means that you only have one dbwriter process.

block size

Use a minimum 8KB block size for the database.


Set to TRUE to increase performance, although this uses slightly more disk space in the SGA.



db_block_buffers

In general, the size of the SGA should be less than or equal to one-half of real memory. The two major components of the SGA are shared_pool_size and db_block_buffers. In other words, the SGA in bytes is roughly equal to (shared_pool_size + (db_block_buffers * db_block_size)).

Set the shared_pool_size to approximately 10 to 12 MB. On larger systems, you can set it to be twice as large. To find the optimal value for db_block_buffers, divide the size of available shared memory minus shared_pool_size by db_block_size.

The amount of RAM available on the database server sets an upper bound on the size of the SGA. The SGA should be roughly one-half the available RAM. SGAs can be up to 2 GB for large installations. The number of buffers (db_block_buffers) can be up to approximately 300000 on large installations.

Sample value setting for the db_block_buffers parameter: 340000.

Sample value setting for the shared_pool_size parameter: 30000000.
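Putting the sample values together, an init.ora fragment for these SGA components might look like this (the values are the samples above, not recommendations for every installation):

```
# SGA is roughly shared_pool_size + (db_block_buffers * db_block_size)
#   = 30000000 + (340000 * 4096), or about 1.4 GB
db_block_size    = 4096
db_block_buffers = 340000
shared_pool_size = 30000000
```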


db_writer_processes

You can improve I/O performance by increasing the number of DB writer processes from the default, single process. Setting db_writer_processes between 5 and 10 (for the largest systems) can improve I/O throughput. If db_writer_processes is set, dbwr_io_slaves must not be specified.

DML locks

Use 5000 for DML locks for very heavy workloads.

Sample value setting for the dml_locks parameter: 5000.

freelists and pctfree

Consider creating tables with additional room for inserts. The relevant storage parameters are pctfree and freelists. Although using more freelists requires more disk and memory, insert speed is greatly enhanced. The default number of freelists is 1. The most active tables should be created with at least 10 to 20 freelists, depending on the size of the installation.
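For example, an active table can be given more free lists and insert headroom with a statement such as the following (EVENT_T is shown for illustration; freelist changes apply to blocks in newly allocated extents):

```sql
ALTER TABLE event_t PCTFREE 10 STORAGE (FREELISTS 20);
```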


log_buffer

Set the log_buffer option to approximately 2 MB.

Sample value setting for the log_buffer parameter: 2621440.


log_checkpoint_interval

Make the log_checkpoint_interval higher for performance and lower for recoverability. The log_checkpoint_interval should be set to 3600 for small workloads; use a much larger number for large configurations. The performance impact of checkpointing can be up to 20%.


redo log file size

Set the size of the redo log files to between 1 and 5 GB for better performance. Configure at least two log groups, and put them on the fastest disks in the system.


open_cursors

A minimum number for open_cursors is 1080. There is no penalty, however, for setting this parameter to a high value.

Set open_cursors to match the value of the stmt_cache_entries entry in the DM Oracle configuration (pin.conf) file. If using a statement cache size of 1080, for example, the stmt_cache_entries entry appears as follows in the DM pin.conf file:

- dm stmt_cache_entries 1080

The statement-handle caching performance feature requires a large number of open cursors. Increase the open_cursors parameter to 4192 by adding the following line to the initSID.ora file (the minimum value is the statement cache size plus the number of DM back ends):

open_cursors = 4192

rollback segments

The number of rollback segments should be approximately equal to the number of shadow processes. For example, if you use 142 shadow processes, create approximately 142 rollback segments. The number of shadow processes is the same as the number of DM back-end processes. Because of the way BRM multiplexes client requests in the DMs, the client load on the database is usually at or under 100, even on large installations.


session_cached_cursors

Set the session_cached_cursors option to a nonzero value so that the database maintains cursors for repeatedly parsed SQL statements. Use 150 as a starting point.

shared pool size

Shared pool size should be close to 10000000 for high hit ratios. Twice this can be used on large installations.


Sample value setting for the db_files parameter: 300 to 1020.


Sample value setting for the max_rollback_segments parameter: 200.


Sample value setting for the processes parameter: 320.


Sample value setting for the db_block_size parameter: 4096.

The shared_pool_size is not the size of available shared memory. The SGA consists mainly of two pools (shared_pool_size and db_block_buffers) whose combined size is roughly the size of the SGA. The amount of shared memory should be larger than the result of the following formula:

shared_pool_size + (db_block_buffers * db_block_size)

Example of a Large-Scale Oracle Installation

Table 3-3 shows the tablespace mapping to physical array groups for a large-scale Oracle installation.

Table 3-3 Tablespace Mapping to Physical Array Groups

Tablespace Name | Array Group (LUN) | Storage Size




Using Rule-Based Optimization versus Cost-Based Optimization

If you have not partitioned any tables in your BRM database, you can use rule-based optimization or cost-based optimization. If you have partitioned tables, you can use only cost-based optimization.

BRM indexes are designed to work best when you use rule-based optimization. To specify rule-based optimization, enter this line in the init.ora initialization file:

OPTIMIZER_MODE = RULE
The alternative to using rule-based optimization is using CHOOSE optimization. This is the default optimization method. It requires some initialization and maintenance to achieve consistently high performance.

Setting OPTIMIZER_MODE=CHOOSE in the init.ora file causes Oracle to use the cost-based optimization algorithm when tables in the query have statistics and rule-based optimization when they do not. Cost-based optimization uses the actual relationships in the data. To support cost-based optimization (which can be enabled by specifying HINTs in the query or by running ANALYZE on a table), statistics must be updated by analyzing the actual data in the tables. This operation can take a long time on large tables, such as event tables. Cost-based optimization is always used when an SQL statement accesses partitioned tables or indexes; rule-based optimization is not available for partitioned tables and indexes.

In addition, using ANALYZE requires table statistics to be updated periodically. Unless this occurs, the optimizer can choose a poorly performing access path even though an index exists. In other words, the optimizer can actually skip an index if statistics are not updated.


After ANALYZE has been run on a table or set of tables, cost-based optimization is used. If the statistics are not kept up to date, the optimizer may not choose the best access plan and performance will be impacted.
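For example, statistics on a large event table can be gathered from a sample rather than computed exactly, which is much faster (the 10 percent sample size is an assumption; adjust it for your data volume):

```sql
ANALYZE TABLE event_t ESTIMATE STATISTICS SAMPLE 10 PERCENT;
```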

About Oracle Parallel Server

Oracle Parallel Server (OPS) is a version of Oracle that runs on a cluster. A single database is accessed through multiple host systems, each of which runs its own instance of Oracle. This configuration can be used for high availability or performance. It is not necessarily the best configuration for high availability, because other solutions based on a cluster and a single Oracle instance are easier to install and administer. With strong DBA expertise, however, OPS might be an option. If OPS is used, single-active-instance (primary node/backup node) mode is recommended.

From a performance standpoint, an OPS configuration is not recommended for BRM installations where write workloads predominate. An OPS installation can get much better performance than a single instance on read-only or read-mostly workloads such as basic authentication and authorization or basic customer service representative (CSR) operations. However, performance for write workloads, like event processing, is only slightly better than it is on a single instance. For increased performance and scalability, consider using a multischema system. See "A BRM Multischema Production System".

About Using Virtual Columns

Oracle Database 11g, by default, supports virtual columns (columns whose values are defined by an expression, are computed when you query the data, and are not physically stored in the database). You can use virtual columns in the BRM database if you have Oracle Database 11g (or later).

Implementations of BRM have shown that a high percentage of the BRM database storage space can be used by the event tables. BRM can use virtual columns in a way that results in space savings for event records. To enable virtual columns in the BRM database, you convert event classes (/event and its subclasses) in the BRM schema. The savings in database storage applies to event data that the system creates after the virtual columns are generated (not to existing event data). Virtual column functionality is transparent to BRM.

For information about virtual columns in general, see the Oracle Database documentation.

For information on generating virtual columns on BRM event tables, see the discussion on enabling virtual columns in BRM System Administrator's Guide.

Maintaining a BRM Database

You monitor and maintain the BRM database with standard database tools. For example, you can set up your database software to generate log files. For more information, see the documentation for your database software.

You can also use the sar utility to monitor performance.


Do not use SQL statements to insert, delete, or update BRM tables or objects. Always use the Portal Communications Module (PCM) interface, which guarantees the integrity of the BRM database.

For information about managing a multischema system, see "Managing a Multischema System" in BRM System Administrator's Guide.

Maintaining the Connection to the Database

If the connection to the database fails, BRM automatically attempts to reconnect, using the database name listed in the configuration file for the DM. If BRM can reestablish the connection, BRM generally restarts the operation. If BRM was in the middle of a transaction, BRM reports a PIN_ERR_STORAGE error in the log file for the DM. If BRM cannot reestablish the connection, it reports a PIN_ERR_STORAGE_DISCONNECT error.

If BRM is running on an OPS to increase fault tolerance, the OPS database appears to a DM to be multiple logical databases, each with a different host name, or SID. For the DM to be able to reconnect to a surviving host after a failure of one host, you must configure the DM to look for all hosts in an OPS system:

  1. Open the DM configuration file (BRM_Home/sys/dm_oracle/pin.conf).

  2. For each host in the OPS system, add an sm_database configuration entry below the existing entry.

    See the guidelines for that entry in the configuration file.

  3. Stop and restart the DM. See "Starting and Stopping the BRM System" in BRM System Administrator's Guide.

The DM looks for OPS hosts in the order in which they appear in the configuration file.
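Based on the entry format shown earlier for stmt_cache_entries, the additional sm_database entries might look like the following sketch. The database names are hypothetical; use the aliases for your own OPS hosts:

```
- dm sm_database pindb1
- dm sm_database pindb2
```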

For more information on failure recovery, see "Four-Tier Architecture and Failure Recovery" in BRM Concepts.

Scheduling Backups

You should back up the database every night. You can choose a backup solution from your database manufacturer or from a third party.

To keep your database always running, ready to respond to real-time events, use the database online backup utilities for routine backups.

You should also back up your system files (programs, scripts, source code, and documentation) and your data files and keep the backups in a secure, offsite location. To be safe, you should keep at least three iterations of your system backups and at least one month's worth of daily backups. Storage media are usually less expensive than customer problems.

Verify the data and system backup files to ensure that you can recover the data. At times, this secondary system can also serve as a fully functional test system. If BRM releases a patch that can significantly affect your installation, it is important to try it on a test system before installing it in your production system.


The only way to verify the data and system backup files is to restore them to another location to ensure that there are no errors in reading, writing, or formatting.

Monitoring Database Space

Before you installed BRM, you set up your database based on estimates of the size of tables for your business activity. The planning process also included forecasts of how fast the tables would grow. You should monitor the growth of tables not only to ensure that you maintain enough space on your system, but also to check for unexpected growth that might indicate a problem.

On a typical production system, you should check tables monthly. As part of this audit, you should match rows in each of the tables against the expected rows. If you spot discrepancies, checking the individual tables shows where the unexpected growth is coming from. For a list of the BRM tables, see "Storable Class-to-SQL Mapping" in BRM Developer's Reference.

If you have a multischema system, you can use growth information to revise your scheme for distributing accounts among your various schemas. See "Setting Database Priorities" in BRM System Administrator's Guide.

Monitoring Oracle Tablespace Usage

You should monitor the growth of tables so that you can add more extents or data files before the tablespaces are filled. For information about adding data files to tablespaces, see your Oracle documentation. To make a quick check of the free space remaining in the tablespaces, you can use this SQL command:

SQL> select * from user_free_space;

This command produces data about the available blocks and bytes for each tablespace. Your database administrator should provide a maximum value. If a tablespace grows past that maximum, you (or an automated script) should notify the database administrator for remedial action.
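A complementary query reports the free space per tablespace, which you can compare against the maximum values your database administrator provides:

```sql
-- Remaining free space in each tablespace, in MB
SELECT tablespace_name, ROUND(SUM(bytes)/1024/1024) AS free_mb
FROM   dba_free_space
GROUP  BY tablespace_name
ORDER  BY free_mb;
```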

Monitoring SQL Statements

You can collect debugging information by gathering the SQL statements generated by dm_oracle processes. The statements appear in the DM log file, not the DM pinlog file.

To get the SQL statements for a specific operation or sequence of events:

  1. In the environment from which the dm_oracle will be started, set the environment variable DM_DEBUG3 to 0xFFFF003F:

    Using the c-shell (csh):

    setenv DM_DEBUG3 0xFFFF003F

    Using the korn shell (sh/ksh):

    DM_DEBUG3=0xFFFF003F; export DM_DEBUG3

  2. Clear the old log file.

  3. Start the Oracle DM.

  4. Run the DM operation you are debugging to generate SQL statements.

  5. Stop the Oracle DM.

  6. Use the grep command on the Oracle DM log file for the "SQL_STMT" string.

  7. Unset the DM_DEBUG3 environment variable. Otherwise, subsequent DM operations generate huge log files.

    Using the c-shell (csh):

    % unsetenv DM_DEBUG3

    Using the korn shell (sh/ksh):

    $ unset DM_DEBUG3

Rebuilding Indexes

The structure of indexes influences the speed at which BRM can find records in the database. While you are using BRM in a production environment, especially when there is intensive inserting in the database, these indexes can become unbalanced, impeding access to BRM records. For best efficiency, rebuild the indexes frequently. For example, if you have a heavily used production system, you might want to rebuild the indexes weekly.
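An individual index can be rebuilt with a statement such as the following. The index name is illustrative, not a specific BRM index; the ONLINE keyword keeps the table available during the rebuild:

```sql
ALTER INDEX i_event_custom REBUILD ONLINE;
```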


Do not delete any of the standard BRM indexes without first consulting Oracle. Removing an index can lead to serious performance problems. Also, do not delete or change any of the standard stored procedures. Otherwise, the DM might malfunction.