27 Improving Data Manager and Queue Manager Performance

Learn how to improve Data Manager (DM) and Queue Manager (QM) performance in your Oracle Communications Billing and Revenue Management (BRM) system.

Topics in this document:

About Queuing-Based Processes

Configuring DM Front Ends and Back Ends

Setting the DM Time Interval between Opcode Requests

Setting How Long the DM Waits for the Background Startup Process to Complete

Setting DM Shared Memory Size

Reducing Resources Used for Search Queries

Load Balancing DMs

Optimizing Memory Allocation during Database Searches

Improving BRM Performance during Database Searches

Increasing DM CPU Usage

Examples of DM Configurations

About Queuing-Based Processes

Queuing improves system performance by lowering the number of connections to the database. This reduces the number of processes and therefore reduces the system load required to handle the connections.

Queuing is used in two different types of system components:

  • The CM Proxy and Web Interface daemons use queuing to connect incoming client connections to CMs. In this case, queuing reduces the number of client connections to CMs.

  • All DMs use queuing internally. Front-end processes pass requests and data through a queue to back-end processes. In this case, queuing reduces the number of connections to the database.

CMs and Connection Manager Master Processes (CMMPs) do not use queuing.

Figure 27-1 shows an example of where queuing takes place in the BRM system architecture. Note that queuing occurs in two locations.

Figure 27-1 BRM Queuing Locations


Example of Queuing in a Client-to-CM Connection

Figure 27-2 shows a daemon, such as CM Proxy, running on a single system. Front-end processes place incoming client connections on a shared-memory queue, where the connections wait for available back ends. The back ends connect to CMs.

Figure 27-2 CM Client Connection Queuing


Configuring DM Front Ends and Back Ends

You configure DM performance by specifying the number of front-end and back-end processes and the amount of shared memory the DM uses.

Note:

Queue Manager (QM) components, such as LDAP Manager, use the same types of configuration entries, but they have different names. For example, instead of dm_max_fe, the entry is named qm_max_fe. The functionality is the same.

Use the following DM and QM pin.conf entries to tune performance:

  • dm_n_fe: Specifies the number of DM front-end processes.

  • dm_n_be: Specifies the maximum number of DM back-end processes.

  • dm_max_per_fe: Specifies the maximum number of connections for each front end.

  • dm_trans_be_max: Specifies the maximum number of back ends that can be used for processing transactions.

  • dm_init_be_timeout: Specifies the time, in seconds, that the DM waits for the DM back-end startup process to complete.

  • dm_trans_timeout: Specifies the time in minutes that DM back-end processes wait for the next opcode in a transaction.
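For reference, these entries appear together in the DM configuration file. A combined example, using the illustrative values discussed later in this document, might look like this:

- dm dm_n_fe 4
- dm dm_max_per_fe 16
- dm dm_n_be 24
- dm dm_trans_be_max 22
- dm dm_init_be_timeout 60
- dm dm_trans_timeout 4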

To change one or more of these parameters:

  1. Open the DM configuration file (BRM_home/sys/dm_oracle/pin.conf).

  2. Change the configuration entry associated with the parameter. For tuning guidelines, see the topics following this procedure. For the syntax of each configuration entry, follow the guidelines in the configuration file.

  3. Save and close the file.

  4. Stop and restart the DM.

    Note:

    Besides configuring the number of connections for best performance, remember to keep the number of connections within the terms of your database license agreement.

Ratio of Front Ends to Back Ends

Oracle recommends that the total number of front-end connections (the dm_n_fe entry multiplied by the dm_max_per_fe entry) be two to four times the number of back ends (specified in the dm_n_be entry).

In the following example, the total number of front-end connections is 64 (4 times 16), which is 4 times the number of back ends:

- dm dm_n_fe 4
- dm dm_max_per_fe 16
- dm dm_n_be 16

Providing Enough Front-End Connections

If connection errors occur between the CM and DM, increase the values in the dm_n_fe and dm_max_per_fe entries. If there are not enough front ends, BRM reports an error. For example:

W Thu Aug 06 13:58:05 2001 dmhost dm:17446 dm_front.c(1.47):1498
DMfe #3: dropped connect from 194.176.218.1:45826, too full

Check the dm_database.log and dm_database.pinlog files for errors.

You must have enough DM front-end connections (number of processes times the number of connections for each process) to handle the expected number of connections from all of the CMs. Otherwise, you will see errors when the applications cannot connect to BRM.

Connections might be required for the following:

  • One connection for each CM Proxy thread.

  • One connection for each Web interface thread, plus one additional connection if customers create accounts with a Web interface.

  • One connection for each billing application thread, plus one additional connection for the main search thread.

  • Two connections for each instance of Billing Care.
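For example, suppose you expect 48 CM Proxy threads, 8 Web interface threads plus 1 account-creation connection, and a 6-thread billing application plus 1 main search thread. That is 64 connections in total, so the product of dm_n_fe and dm_max_per_fe must be at least 64. A configuration such as the following (the values are illustrative) provides exactly that capacity:

- dm dm_n_fe 4
- dm dm_max_per_fe 16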

The maximum number of connections each front-end process can handle depends on the activity of the connection and, on multi-processor machines, the processor speed. For intensive connections, such as a heavily utilized terminal server, a front end might be able to handle only 16 connections. For intermittent connections, such as through certain client tools, a single front end can handle 256 or 512 connections. For systems that use a combination of these activities (for example, real-time processing with some client tool activity), you can configure an intermediate value for the maximum connections per front end.

For a given number of connections, if you have too many front ends (too few connections for each front end), the DM process uses too much memory and there is too much context switching. Conversely, if you have too few front ends (too many connections for each front end), the system performs poorly.

Determining the Required Number of Back Ends

You configure the number of back ends to get maximum performance from your system, depending on the workload and the type of BRM activity. Here are some guidelines for various activities:

  • Authentication/authorization: For processing terminal server requests, which consist of many single operations without an explicit transaction, size the number of back ends to handle the traffic and leave the percentage of back ends available for transactions at the default value (50%).

    For example:

    - dm dm_n_be 48
    - dm dm_trans_be_max 24
    

    Normally, however, you configure the DM to perform a variety of tasks.

  • Account creation: This activity uses one transaction connection for a long time and a second regular connection intermittently. You must provide two back ends for each of the accounts you expect to be created simultaneously. Your system might lock up if you do not have enough back ends. You can leave the percentage of back ends available for transactions at the default.

    For example:

    - dm dm_n_be 48
    - dm dm_trans_be_max 46
    

    The example above allows you to have 23 account creation sessions active simultaneously.

  • Billing: Because all billing operations are transactions, ensure there is at least one back end capable of handling transactions for each billing program thread, plus one additional back end for the main thread searches.

    For example:

    - dm dm_n_be 24
    - dm dm_trans_be_max 22
    

    The example above allows you to have approximately 20 billing sessions (children) active simultaneously.

In general, if you need rapid response times, reduce the number of transactions waiting to be processed by adding more back ends, devoting a larger number of them to transactions, or both. For example, try increasing the number of back ends to 3 to 4 times the number of application processes. For performance, dedicate at least 80% of the back ends to processing transactions. For heavy updating and inserting environments, especially when billing is running, dedicate all but two of the back ends to transaction processing.

For example:

- dm dm_n_fe 4
- dm dm_max_per_fe 16
- dm dm_n_be 24
- dm dm_trans_be_max 22

If you configure too many back ends, the DM process uses too much memory and there is too much context switching. Conversely, if you have too few back ends, the system performs poorly and the network is overloaded as terminal servers retry the connection.

Note:

If there are not enough DM back ends, BRM may stop responding without reporting an error message.

On small BRM systems, where you might use a single DM for multiple activities, you can calculate the peak requirements for a combination of those activities and size the back ends accordingly. For example, you might need 32 connections for authentication and authorization and another 8 for the Web interface. If you run billing at hours when the rest of the system is relatively quiet, you do not need additional back ends.
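Continuing that example, a sketch of the back-end settings for such a system (40 back ends for the 32 authentication and authorization connections plus 8 Web interface connections, with 80% of them available for transactions per the general guideline) might be:

- dm dm_n_be 40
- dm dm_trans_be_max 32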

Note:

The number of back ends is independent of the number of front ends. That is, front ends are not tied to particular back ends because requests are transferred via the shared memory queue.

To help gauge the correct number of back ends, monitor database utilization. If it is under-utilized, you can increase the number of back ends.

Determining the Maximum Number of Back Ends Dedicated to Transactions

The maximum number of back ends dedicated to transactions (specified in the dm_trans_be_max entry) should be at least 80% of the number of back ends specified in the dm_n_be entry. For heavy transaction loads, such as when running billing, use a value that is 2 less than the dm_n_be entry. For example:

- dm dm_n_be   48
- dm dm_trans_be_max   46

Note:

You cannot specify more transaction back ends than there are total back ends.

Setting the DM Time Interval between Opcode Requests

By default, DM back-end processes wait indefinitely for each opcode request, but you can set a time interval after which the DM back end cancels the transaction if no opcode request has arrived. The following DM pin.conf entry specifies the maximum time, in minutes, that a back end waits for an opcode call before cancelling the transaction:

- dm dm_trans_timeout 4

Note:

To have DM back-end processes wait forever, set this entry to 0.
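For example, to restore the default wait-forever behavior explicitly:

- dm dm_trans_timeout 0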

Setting How Long the DM Waits for the Background Startup Process to Complete

The DM back-end startup process connects to the BRM database and initializes the BRM data dictionary into DM memory. By default, the DM waits 60 seconds for the DM back-end startup process to complete before timing out. You can modify how long the DM waits by adding the dm_init_be_timeout entry to the DM pin.conf file.

- dm dm_init_be_timeout 60

Note:

This entry is not included in the default DM pin.conf file, so you must add it manually.
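For example, if startup regularly times out on a system with a large data dictionary, you might double the wait. The value below is illustrative:

- dm dm_init_be_timeout 120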

Setting DM Shared Memory Size

BRM queuing increases system performance by lowering the number of connections to the database. This reduces the number of processes, which reduces the system load required to handle the connections. All DMs use shared memory for internal queuing. Front-end processes pass connections through a shared-memory queue to back-end processes.

To specify DM shared memory, you use the following entries in the DM configuration file (pin.conf):

  • dm_shmsize: Specifies the size of the shared memory segment, in bytes, that is shared between the front ends and back ends. The maximum allowed value of dm_shmsize in the DM's pin.conf file is 274877905920 bytes (256 GB).

  • dm_bigsize: Specifies the size of shared memory for "big" shared memory structures, such as those used for large searches (with more than 128 results) or for PIN_FLDT_BUF fields larger than 4 KB.

    The maximum allowed value of dm_bigsize in the DM's pin.conf file is 206158429184 bytes (192 GB). The value of dm_bigsize must always be less than the value of dm_shmsize.

To specify DM shared memory:

  1. Open the DM configuration file (BRM_home/sys/dm_oracle/pin.conf).

  2. Change the configuration entry associated with each parameter. For tuning guidelines, see the discussions following this procedure. For the syntax of each configuration entry, follow the guidelines in the configuration file.

    Note:

    You may have to increase the shmmax kernel parameter for your system. It should be at least as large as the dm_shmsize entry in the DM configuration file on any computer running a DM. Otherwise, the DM will not be able to attach to all of the shared memory it might require and BRM will fail to process some transactions. See your vendor-specific system administration guide for information about how to tune the shmmax parameter.

  3. Save and close the file.

  4. Stop and restart the DM.

    Note:

    Besides configuring the number of connections for best performance, remember to keep the number of connections within the terms of your database license agreement.

Determining DM Shared Memory Requirements

The amount of shared memory required by a DM depends on:

  • Number of front ends: Each front end takes about 32 bytes of shared memory for its status block.

  • Number of connections per front end: Each connection to a front end takes at least one 8-KB block of shared memory.

  • Number of back ends: Each back end takes about 32 bytes of shared memory for its status block.

  • Size and type of DM operations: Most of the shared memory used is taken by DM operations, and particularly by large searches. For example:

    • Running the pin_ledger_report utility.

    • Running searches that return large numbers of results.

    • Using large value maps. Allocate 1 MB of memory in the dm_bigsize entry for every 3000 lines in a value map.

    Operations that read objects, read fields, or write fields and involve a large BUF field can also be significant, but they are rare. Normal operations take 0 to 16 KB above the 8-KB-per-connection overhead.
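As a rough worked example of these rules: a DM with 6 front ends, 16 connections per front end, and 24 back ends uses about (6 + 24) x 32 = 960 bytes for status blocks and 6 x 16 x 8 KB = 768 KB for connection blocks. If every connection simultaneously performs a normal operation at the full 16 KB of additional overhead, that adds another 96 x 16 KB = 1.5 MB, for roughly 2.3 MB before any large searches or "big" structures are counted.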

You can also reduce the requirements for shared memory by using the PCM_OP_STEP_SEARCH opcode instead of the PCM_OP_SEARCH opcode.

You should monitor the shared memory usage and the transaction queues for each DM. See "Monitoring DM Shared Memory Usage" and "Monitoring DM Transaction Queues".

How BRM Allocates Shared Memory for Searches

The dm_shmsize entry sets the total size of the shared memory pool. The dm_bigsize entry sets the size of the portion of the shared memory reserved for "big" shared memory structures. Therefore, the memory available to front ends, back ends, and normal (not "big") operations is the value of the dm_shmsize entry minus the value of the dm_bigsize entry.

For example, with these entries, the shared memory available to normal operations is 25165824:

- dm dm_shmsize  33554432
- dm dm_bigsize  8388608

Note:

The value for dm_shmsize must be a multiple of 1024. The value of dm_bigsize must be a multiple of 8192.

To allocate memory for a search, BRM uses regular shared memory until the search returns more than 128 results. At that point, BRM reallocates the search to use the memory set aside for "big" structures. When allocating this type of memory, BRM doubles the size of the initial memory requirement in anticipation of increased memory need.

For example, consider a search that returns the POIDs of accounts that need billing. For 100,000 accounts, the memory allocated to the search is as follows:

  • Memory used by "big" structures: 3.2 MB.

    The 3.2 MB figure is derived by taking the size of a POID and the anticipated number of accounts read in a billing application and then doubling the amount of memory as a safety margin.

    100,000 accounts x 16 bytes per POID x 2 = 3,200,000 bytes (3.2 MB), which is rounded up to a multiple of 8192. For example, dm_bigsize would be set to 3203072 (391 x 8192).

    As a general rule, dm_shmsize should be approximately 4 to 6 times larger than dm_bigsize.

  • Memory used by "small" structures: 4 MB.

    This memory is allocated for the following:

    • 2 MB for the result account POIDs (100,000 accounts x 20-byte chunks).

    • 2 MB for the POID types (100,000 accounts x 20-byte chunks).

  • Total memory use: 7.2 MB.
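Putting these numbers together, an illustrative configuration for this workload might set dm_bigsize to the rounded value computed above and dm_shmsize to about five times that (a multiple of 1024, within the recommended 4-to-6 ratio), leaving roughly 13 MB for the 4 MB of "small" structures plus connection overhead:

- dm dm_shmsize 16777216
- dm dm_bigsize 3203072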

Shared Memory Guidelines

Billing applications, internet telephony, and searching can all increase DM shared memory requirements. Even so, it is usually best to start with a lower amount of shared memory (well below the 256 GB maximum described above) and increase it only as needed, to keep system resource usage minimal.

Shared memory for database servers can be from 512 MB for medium scale installations to several GB or more for the largest installations, depending upon activities. Some experimentation is necessary because more than 1 GB may not provide a performance increase, especially if there is a lot of update activity in the BRM database.

This example shows Solaris 2.6 kernel tuning parameters (for /etc/system) for the database server:

set bufhwm=2000
set autoup=600
set shmsys:shminfo_shmmax=0xffffffff
set shmsys:shminfo_shmseg=32
set semsys:seminfo_semmns=600
set semsys:seminfo_semmnu=600
set semsys:seminfo_semume=600
set semsys:seminfo_semmsl=100
forceload:drv/vxio
forceload:drv/vxspec

Note:

This example of a Solaris kernel configuration sets the maximum shared memory segment size to the largest value a 32-bit kernel supports (0xffffffff). With this setting, the system can allocate as much RAM as required for shared memory.

Reducing Resources Used for Search Queries

You can increase performance for search queries that retrieve objects with multiple rows from the database (for example, account searches for multiple customers) by setting the value of the dm_in_batch_size entry in the DM configuration file (pin.conf).

BRM interprets the value of dm_in_batch_size as the number of matching rows to retrieve in one search. When you start a search, BRM runs n+1 searches, where n is the number of batch searches needed to retrieve all matching rows. For example, if dm_in_batch_size is set to 25 and the search retrieves 100 matching rows, five searches are performed (100 ÷ 25 = 4 batch searches, plus 1). The default setting is 80, indicating that BRM runs two searches to retrieve up to 80 matching rows. The maximum value is 160.

To conserve resources, set dm_in_batch_size to match the size of the data sets being searched: the larger the value, the more resources each search query uses. To increase performance when searching large data sets, increase the number of retrieved rows in dm_in_batch_size. Conversely, if a typical user search query returns 10 rows from the database and dm_in_batch_size is set to 100, the search uses more resources than necessary.
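For example, if typical searches on your system return no more than about 40 rows, a smaller batch size (the value below is illustrative) conserves resources compared with the default of 80:

- dm dm_in_batch_size 40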

Load Balancing DMs

The dm_pointer entry in the CM configuration file (pin.conf) tells the CM which DM to connect to. Having pointers to several DMs provides reliability because the system will switch to another DM if one DM fails.

You can ensure a more even load among the available DMs by adding several identical pointers to each DM, even if the DMs are on the same machine. When a CM receives a connection request, it chooses one of the pointers at random. Or, you can increase the load on a particular DM by increasing the relative number of pointers to that DM.

For example, if you have two DMs and you want to ensure that most activity goes to one with the most powerful hardware, make three or four pointers to that DM and only one or two to the other DM. When new child CM processes or threads are created, more of them are configured to point to the first DM:

- cm  dm_pointer  0.0.0.1  ip  127.0.0.1 15950
- cm  dm_pointer  0.0.0.1  ip  127.0.0.1 15950
- cm  dm_pointer  0.0.0.1  ip  127.0.0.1 15950
- cm  dm_pointer  0.0.0.1  ip  127.0.0.3 11950
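Conversely, to distribute the load evenly across two equally powerful DMs, you might give each DM the same number of pointers (the addresses below are illustrative):

- cm  dm_pointer  0.0.0.1  ip  127.0.0.1 15950
- cm  dm_pointer  0.0.0.1  ip  127.0.0.3 11950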

Optimizing Memory Allocation during Database Searches

You can configure the Oracle DM to optimize memory allocation during database searches by using the extra_search entry in the DM configuration file. When this entry is set, the Oracle DM performs an extra search in the BRM database to calculate the number of database objects meeting the search criteria and then allocates the optimal amount of memory for the results.

Note:

Performing the extra search slows database search performance.

To optimize memory allocation by performing an extra search:

  1. Open the Oracle DM configuration file (BRM_home/sys/dm_oracle/pin.conf).

  2. Change the extra_search entry to 1:

    - dm extra_search 1
    
  3. Save and close the file.

  4. Stop and restart the Oracle DM.

Improving BRM Performance during Database Searches

Oracle databases can access tables that have nonbitmap indexes by performing an internal conversion from ROWIDs to bitmap and then from bitmap back to ROWIDs. This internal conversion process can significantly decrease BRM performance when a large number of rows are queried.

To increase search performance, Oracle recommends that you prevent the database from using bitmap access paths for nonbitmap indexes. To do so, add the following parameter to your database's init.ora file or spfile, and then restart your database:

_b_tree_bitmap_plans=false
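If your database uses an spfile, you can also set the parameter with a SQL statement such as the following, run as a DBA; this is a sketch of the standard Oracle syntax (hidden parameters must be enclosed in double quotation marks), not a BRM-specific command:

ALTER SYSTEM SET "_b_tree_bitmap_plans" = FALSE SCOPE=SPFILE;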

Increasing DM CPU Usage

If the CPU usage on a DM machine reaches 75% over a 60-second average, increase the CPU capacity by using a faster CPU, adding CPUs, or adding another machine to run the same type of DM.

Examples of DM Configurations

These examples show DM pin.conf file settings used with a variety of multiple CPU configurations. These examples are intended as guidelines; your settings depend on your system resources and workload.

Example 1: BRM 16-CPU database server configuration

Table 27-1 shows settings for a BRM system that uses:

  • A 16x450 MHz CPU database server.

  • Four 6x450 MHz CPU CM/DM/EM systems.

    Note:

    The dm_shmsize entry is set to 64 MB to handle a larger billing load.

    Table 27-1 Example 1 DM Configuration

    Daemon/program   pin.conf entry    Value
    dm_oracle        dm_n_fe           6
    dm_oracle        dm_n_be           22
    dm_oracle        dm_max_per_fe     16
    dm_oracle        dm_trans_be_max   20
    dm_oracle        dm_shmsize        67108864
    dm_oracle        dm_bigsize        1048576

Example 2: BRM 36-CPU database server configuration

Table 27-2 shows settings for a BRM system that uses:

  • A 36x336 MHz CPU database server.

  • Four 4x400 MHz CPU CM/DM/EM systems.

    Table 27-2 Example 2 DM Configuration

    Daemon/program   pin.conf entry    Value
    dm_oracle        dm_n_fe           4
    dm_oracle        dm_n_be           24
    dm_oracle        dm_max_per_fe     16
    dm_oracle        dm_trans_be_max   22
    dm_oracle        dm_shmsize        20971520
    dm_oracle        dm_bigsize        6291456