Sizing and Tuning the BEA MessageQ Environment

BEA MessageQ software uses the capabilities of the underlying operating system and networking software to perform message queuing. For this reason, you may need to change the system configuration to run BEA MessageQ and BEA MessageQ applications efficiently. You may also need to tune your system on a regular basis to accommodate the growth and changes in your BEA MessageQ environment.

The primary BEA MessageQ resource to configure is memory, both global and local virtual memory. Other system resources, such as disk files, network links, and system parameters, may need to be adjusted for large queuing networks to function properly. This chapter describes how to set OpenVMS and BEA MessageQ configuration parameters and quotas for proper BEA MessageQ operation in the topics that follow.

Sizing and Tuning Processes

This section describes how to configure process resources: virtual memory, global memory, I/O channels, files, network resources, and other system resources and quotas.

Virtual Memory

Virtual memory is allocated in OpenVMS by setting the page file quota allotted to an OpenVMS process. This quota must be less than the size of the system PAGEFILE and the system parameter VIRTUALPAGECNT.

The VIRTUALPAGECNT parameter limits the total number of pages that a single process can map to itself. The amount of virtual memory that is resident at any time is determined by the working set size, which is usually set much smaller than the page file quota.

The page file quota is passed as a parameter to the RUN command used to detach a BEA MessageQ Server process. The quota for each server is specified in the DMQ$USER:DMQ$SET_SERVER_QUOTAS.COM command procedure. Methods for determining the amount of memory needed for specific servers are described in Modeling Memory Usage for Each BEA MessageQ Server.
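For illustration, the following minimal sketch shows how such a detached RUN command might look, using the symbol names defined in DMQ$SET_SERVER_QUOTAS.COM. The actual startup procedure passes the full set of quotas and qualifiers, so treat this as a sketch only:

$ ! Simplified sketch; the real startup procedure passes every quota
$ ! symbol defined in DMQ$SET_SERVER_QUOTAS.COM
$ RUN 'img_file' -
      /DETACHED -
      /PROCESS_NAME='proc_name' -
      /PAGE_FILE='pgflquo' -
      /PRIORITY='prio'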

Global Memory

BEA MessageQ for OpenVMS uses four global sections to store messages. The MCS global section stores message headers; the other three global sections, called the small, medium, and large LLS sections, store the messages themselves.

The amount of global memory allotted to store messages is a fixed resource. Available global memory is determined by the number and size of the buffer pools specified in the %BUFFER section of the DMQ$INIT.TXT file, as sketched below.
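For illustration, a minimal sketch of a %BUFFER section follows. The pool names correspond to the small, medium, and large sections described above, but the sizes and counts are placeholders; take the exact column layout and defaults from the template DMQ$INIT.TXT shipped with your release.

%BUFFER
*   Pool      Byte Size    Number of Buffers
    SMALL          256           100
    MEDIUM        3584            50
    LARGE        64000            20
%EOS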

BEA MessageQ also uses global sections for storing group information. These global sections are sized based on entries in the DMQ$INIT.TXT file. BEA MessageQ run-time libraries also require global memory.

Adding a new BEA MessageQ message queuing group may require changes to OpenVMS system parameters to allow the addition of global sections. The OpenVMS system parameters affected by this type of change are GBLPAGES, GBLPAGFIL, and GBLSECTIONS.

BEA MessageQ has quotas to limit the amount of global memory used by a particular queue. Queue quotas limit the number of messages queued and the total number of bytes that can be queued.

I/O Channels

The OpenVMS CHANNELCNT parameter limits the number of I/O channels allotted to a single process. Channels are used by BEA MessageQ for network links and for message recovery journal files.

The COM Server allocates two channels for each cross-group link it manages. The TCP/IP and DECnet Link Driver processes allocate four channels per link. The MRS Server requires a channel for each open area file, which is a fixed size file used to store recoverable messages.
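For example, with 25 cross-group links and 30 open area files (hypothetical figures), the per-process channel counts work out as follows:

COM Server channels    = 2 * 25 = 50
Link Driver channels   = 4 * 25 = 100
MRS Server channels    = 30 (one per open area file)

CHANNELCNT must exceed the largest of these per-process counts, plus the channels each process uses for its other open files.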

Files

The BEA MessageQ message recovery system dynamically creates and deletes files as needed. The number of files that MRS can use is limited by the FILLM quota which is passed to the RUN command that starts the MRS Server. The total number of files used by the MRS Server is also limited by the OpenVMS CHANNELCNT parameter.

Network Resources

The primary system resource needed for cross-group messaging is buffer space associated with each cross group link. Buffers are stored in the local memory of the COM Server or Link Driver processes. Refer to Allocating Virtual Memory for BEA MessageQ Servers for more information.

To maintain simultaneous DECnet connections to more than 32 message queuing groups, you must adjust the NCP parameter MAXLINKS. Similarly, for TCP/IP networks, the maximum number of sockets must be increased if an insufficient number are available. Refer to Computing the Number of TCP/IP Sockets for information on how to adjust this setting. In addition, large networks with many network links require an increase in system nonpaged pool to provide more memory for device drivers.
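For example, to raise MAXLINKS for DECnet, NCP can be used as follows (the value 64 is illustrative). SET changes the running system, while DEFINE records the value in the permanent database so that it survives a reboot:

$ MCR NCP
NCP> SET EXECUTOR MAXIMUM LINKS 64
NCP> DEFINE EXECUTOR MAXIMUM LINKS 64
NCP> EXIT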

Other System Resources and Quotas

The COM Server and Link Driver processes use the $QIO interface to post AST service requests for network I/O. Each outstanding AST counts against the process quota called ASTLM. Timers are also associated with network requests and count against the TQELM quota.

The MRS Server uses the $QIO interface to post AST service requests for both read and write operations to a recoverable disk file. Timers are also used and associated with each unconfirmed message.

Modifying DMQ$SET_SERVER_QUOTAS.COM

Each server's process quotas and limits, process name, and server-specific output files are defined in the DMQ$SET_SERVER_QUOTAS.COM file located in the DMQ$USER: directory. The DMQ$SET_SERVER_QUOTAS.COM command procedure can be edited to modify the system quotas assigned to any BEA MessageQ Server process.

Listing 11-1 shows the quota information for the COM Server process.

Listing 11-1 COM Server Quotas


$ COM: 
$ proc_name == "DMQ_C_''comp_id'"
$ full_name == "COM Server"
$ img_file == "DMQ$EXE:DMQ$COM_SERVER.EXE"
$ log_file == "''dmq_log'DMQ$COM_SERVER_''full_id'.LOG"
$ prio == 6 !process software priority
$ biolm == 500 !buffered I/O limit (counts outstanding operations)
$ diolm == 500 !direct I/O limit (counts outstanding operations)
$ buflm == 500000 !buffered I/O byte limit
$ tqelm == 500 !timer queue elements
$ enqlm == 500 !enq / deq locks
$ fillm == 500 !open files
$ astlm == 500 !Pending ASTs
$ subprocs == 16 !child sub processes
$ pgflquo == 30000 !virtual memory
$ wsextent == 8192 !limit borrowing beyond wsquo
$ wsquo == 1024 !basic working set limit in pages
$ wsdef == 750 !starting working set size
$ goto FINISHED


Allocating Virtual Memory for BEA MessageQ Servers

Proper allocation of virtual memory resources is critical to successful and efficient processing in the BEA MessageQ environment. This section describes how to determine appropriate virtual memory allocation and shows how to model memory usage for each BEA MessageQ Server.

BEA MessageQ Servers are designed to continue operating if available virtual memory is exhausted. An operation requiring more memory than is available will fail; however, the server will continue to operate. If the server cannot deliver nonrecoverable messages, they are discarded. If the server cannot deliver a recoverable message, BEA MessageQ executes the Undeliverable Message Action.

To determine the appropriate amount of virtual memory for your BEA MessageQ configuration, first model your virtual memory needs and then test the configuration, as described in the following sections.

Modeling Virtual Memory Needs

A rough model of memory usage requirements can be constructed by adding the memory requirements of all components managed by a server. The objects a server must track are determined by the data flow and timing of the system. This section provides a sample calculation for the MRS Server.

The amount of virtual memory used by a server can be obtained using the DCL SHOW PROCESS /CONTINUOUS command. The maximum amount of memory used by a server is also written to a server's output log file when the group is shut down. Size the rough model by configuring a minimum server, measuring the memory it requires, and adding the memory requirements of its queues, groups, and links.
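For example, to watch the virtual memory use of the COM Server (the process name is defined in DMQ$SET_SERVER_QUOTAS.COM; the group suffix shown here is hypothetical):

$ SHOW PROCESS/CONTINUOUS DMQ_C_0001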

Performing Testing

After you model the system and estimate the amount of virtual memory required, you can build a network of simple message senders and receivers that send at rates that you expect the real application to encounter, or you can test the production applications under expected system load.

During testing, virtual memory exhaustion is logged as an error message in the group's event log. If errors are encountered, increase the virtual memory allocated to the server and rerun the tests until the error no longer occurs.

Modeling Memory Usage for Each BEA MessageQ Server

To see how memory use varies with the addition of a new group, measure the memory of a minimum configuration, add the new group, and measure the difference.

In the BEA MessageQ OpenVMS environment, each server tracks its own set of objects. Table 11-1 shows the objects tracked by the COM Server.

Table 11-1 COM Server

Object                  Memory Size Determined By
Code, fixed overhead    Measuring the minimum configuration
Groups                  Varies with the number of groups on the bus
Network buffers         Varies with the number of connected groups. Sized per
                        group in the %XGROUP section of the DMQ$INIT.TXT file.
Queues                  Local memory data structures used in attaching/detaching
                        queues

Table 11-2 shows the objects tracked by link drivers.

Table 11-2 Link Drivers

Object                  Memory Size Determined By
Code, fixed overhead    Measuring the minimum configuration
Groups                  Varies with the number of groups on the bus
Network buffers         Varies with the number of active links. Sized per
                        group in the %XGROUP section of the DMQ$INIT.TXT file.

The COM Server and link drivers share a common memory allocation mechanism to handle network buffers. The following formulas give a rough estimate of its size (multiplying kilobytes by 2 converts to 512-byte pages):

pool_size_in_pages = 2 * (sum of XGROUP pool buffer sizes, in KB, from the
                          %XGROUP section of DMQ$INIT.TXT)

network_buffers = 48 guard pages + pool_size_in_pages
                  + 2 * (large buffer size in pages)
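For example, with three connected groups, each sized with a 64 KB XGROUP pool buffer, and a large buffer of 64,000 bytes (125 pages), all hypothetical figures:

pool_size_in_pages = 2 * (64 + 64 + 64) = 384 pages
network_buffers    = 48 + 384 + (2 * 125) = 682 pages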

Table 11-3 shows the objects tracked by the MRS Server.

Table 11-3 MRS Server

Object                  Memory Size Determined By
Code, fixed overhead    Measuring the minimum configuration
Groups                  Varies with the number of groups on the bus
Queues                  Varies with the number of recoverable queues, both
                        local and remote
Messages                Varies with the number of unconfirmed messages
Internal buffers        Varies with the largest message size
I/O data structures     Varies with the size of the largest message. Assigned
                        per target queue with a recoverable message. To measure:
                        record the MRS Server virtual memory before sending any
                        recoverable message, send a recoverable message, and
                        measure the difference.

Table 11-4 shows the objects tracked by the Journal Server.

Table 11-4 Journal Server

Object                  Memory Size Determined By
Code, fixed overhead    Measuring the minimum configuration
Internal buffers        Varies with the largest message size
I/O data structures     One per stream. The Journal Server manages two streams,
                        the PCJ stream and the DLJ stream. The sizing is less
                        than that required for the MRS Server since the Journal
                        Server does not read the files. The size varies with
                        the size of the largest message.

Note: The Journal Server uses the same I/O mechanism as the MRS Server, but does not allocate read ahead buffers since it does not read.

Table 11-5 shows the objects tracked by the SBS Server.

Table 11-5 SBS Server

Object                    Memory Size Determined By
Code, fixed overhead      Measuring the minimum configuration
Groups                    Varies with the number of groups (maximum of 512)
Avail registrations       Varies with the number of avail/unavail registrations
Broadcast registrations   Varies with the number of broadcast registrations
Multicast targets         Indexes that allow quick access from a MOT to a
                          broadcast registration
Ethernet buffers          Varies with the number of MOTs assigned to a
                          multicast address

Example Memory Allocation Model for the MRS Server

Table 11-6 shows an example memory allocation model for the MRS Server using parameter values taken from a specific release of BEA MessageQ, together with an example configuration for a hypothetical network. Actual values are release dependent; therefore, it is important to check the product release notes.

Table 11-6 Memory Allocation Model for MRS Server

Component                      Values
Page size                      512 bytes
Code, RTLs, core messaging     < 10,000 pages (measured)
I/O buffer size                (large_message_size + page_size) / page_size
Cross group information        1/4 page per group
Per queue information          1 page per queue
Per message overhead           1/2 page per unconfirmed message
Overhead per open area,        32,000 or greater: 85 pages
by large_msg_size              16,000: 49 pages
                               8,000: 29 pages
                               4,000: 20 pages

BEA MessageQ for OpenVMS uses a strategy in which I/O is addressable at a per-block level and achieves speed through asynchronous $QIO calls. The overhead for each open area is determined by the number of RMS data structures and buffers needed to handle the largest logical operation, and by the number of read-ahead operations allowed. Large messages have the single greatest effect on the virtual memory requirements of the MRS Server.

For example, suppose the MRS Server requires from 1 to 5 open areas for each stream. The memory requirement is obtained by applying the model in Table 11-6, as sketched below.
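The following worked sum is illustrative only; the inputs are hypothetical, and the number of I/O buffers the server allocates is an assumption. Assume a group maximum message size of 4,194,304 bytes, two streams at the maximum of 5 open areas each (10 areas), 100 groups, 50 recoverable queues, and 1,000 unconfirmed messages:

code, RTLs, core messaging   < 10,000 pages (measured)
I/O buffers (assume 2)       2 * ((4,194,304 + 512) / 512) = 16,386 pages
open area overhead           10 * 85 = 850 pages
cross group information      100 * 1/4 = 25 pages
per queue information        50 * 1 = 50 pages
per message overhead         1,000 * 1/2 = 500 pages
total                        approximately 27,811 pages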

For this application, sizing the MRS virtual memory at 32,000 pages should be sufficient. The default provided is 30,000; therefore, the DMQ$SET_SERVER_QUOTAS.COM file must be modified.

Global Memory

All message queuing groups on a node use the shared images DMQ$EXECRTL and DMQ$ENTRYRTL. In addition, each individual group creates nine global sections.

Global Sections

The GBLSECTIONS parameter limits the total number of global sections that can be used at one time. The first message queuing group that you start up on your system uses eighteen global sections. Each additional group creates nine global sections for every COM Server that is running.

Global Pages

The GBLPAGES parameter defines the total number of pages of virtual memory that global sections can use.

Global Page File

The GBLPAGFIL parameter defines the total number of pages that global sections can take up in the page file. All dynamic BEA MessageQ global sections are paged to the page file.

Tuning TCP/IP for BEA MessageQ

If the network chosen for cross-group connections is DEC TCP/IP (formerly called UCX), then TCP/IP may need to be tuned to support the increased load of network traffic caused by running BEA MessageQ. In general, OpenVMS nonpaged pool and the number of TCP/IP sockets may need to be increased.

Approximating the Nonpaged Pool Needs

DEC TCP/IP requires a minimum of 500,000 bytes of additional nonpaged pool, and an additional 2,000 bytes for each socket. (For more information, see the DEC TCP/IP Services for OpenVMS Installation and Configuration Guide.) To determine the amount of additional nonpaged pool that will be needed for BEA MessageQ, you must allow for large buffers and group connections.

Use the following formula to determine the approximate worst case needs of BEA MessageQ:

npp = (6 * (large_DMQ_buffer + 323)) + (2,000 * (grp_connections + 1))

To adjust the amount of nonpaged pool in the system configuration, modify the following parameters in the MODPARAMS.DAT file:

ADD_NPAGEDYN = npp 
ADD_NPAGEVIR = npp
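For example, with a large DMQ buffer of 64,000 bytes and 10 group connections (hypothetical figures):

npp = (6 * (64,000 + 323)) + (2,000 * (10 + 1))
    = 385,938 + 22,000
    = 407,938 bytes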

Computing the Number of TCP/IP Sockets

To determine the number of additional sockets required, multiply the number of group connections by 2. Add this number to the total number of available sockets on the system. To view the current number of sockets, use the following command:

 $ UCX SHOW COMMUNICATIONS 

To change the value of the socket setting, use the following command:

$ UCX SET COMMUNICATIONS/DEVICE_SOCKET=n 
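For example, if a group maintains 25 cross-group connections (a hypothetical figure), it requires 2 * 25 = 50 additional sockets. If the system currently provides 128 sockets, set the limit to at least 178:

$ UCX SET COMMUNICATIONS/DEVICE_SOCKET=178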

Improper configuration of TCP/IP sockets may result in an EXCEEDQUOTA error logged by the TCP/IP Link Driver.

Configuring BEA MessageQ for Large Messages

There are several parameters that must be changed in the BEA MessageQ configuration files to provide support for large messages up to 4MB. In some cases, OpenVMS system parameters must also be changed.

Note: Although BEA MessageQ supports message sizes up to 4MB, SBS direct Ethernet broadcasting (optimized Ethernet mode) does not support messages larger than 32K. SBS Datagram Broadcasting supports large messages up to 4MB.

Maximum Message Size

In the %PROFILE section of the DMQ$INIT file, set the GROUP_MAX_MESSAGE_SIZE to the largest message size that will be used by the group. It must be less than or equal to 4,194,304.
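For illustration, a minimal sketch of the entry follows; only the relevant line is shown, and the full section layout should be taken from the template DMQ$INIT.TXT shipped with your release:

%PROFILE
*   Allow the group to carry messages up to the 4MB maximum
    GROUP_MAX_MESSAGE_SIZE    4194304
%EOS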

Message Pool Configuration

In the %BUFFER section of the DMQ$INIT file, the size and quantity of the message pool are important for large message processing. When a message is requested that is larger than the LARGE buffer size, which is normally 64,000 bytes, the required number of buffers is chained together to hold the message. For a message size of 4,194,304, a total of 66 LARGE buffers of 64,000 bytes are required to hold the message. This configuration of the buffer pool directly affects the size of the global section created by the COM Server at startup. The size of this buffer pool can exhaust a normal OpenVMS configuration, typically resulting in the following output:

DmQ T 28:21.1 Time Stamp -  2-NOV-1999 10:28:21.18
DmQ T 28:21.1 ------- COM Server Starting ----------
DmQ I 28:21.2 COM Server (V5.0-80) starting at 2-NOV-1999 10:28:21
DmQ F 28:21.4 Fatal error while creating Large LLS Pool
DmQ F 28:21.4 %SYSTEM-F-EXGBLPAGFIL, exceeded global page file limit
%SYSTEM-F-EXGBLPAGFIL, exceeded global page file limit

$help/message EXGBLPAGFIL

EXGBLPAGFIL, exceeded global page file limit

Facility: SYSTEM, System Services

Explanation: The attempt to allocate a global section with page file
backing store failed because the systemwide limit on these
pages is exceeded. No part of the section is allocated.

User Action: Delete some similar sections or ask the system manager
to increase the SYSGEN parameter GBLPAGFIL. Then, try the
operation again.

In this case, edit the OpenVMS system parameters to increase the SYSGEN parameter GBLPAGFIL.

In some cases, the COM Server cannot start and the system displays information similar to the following example:

DmQ T 41:09.8 Time Stamp -  2-NOV-1999 10:41:09.84
DmQ T 41:09.8 ------- COM Server Starting ----------
DmQ I 41:09.8 COM Server (V5.0-80) starting at 2-NOV-1999 10:41:09
DmQ F 41:10.1 Fatal error while creating Large LLS Pool
DmQ F 41:10.1 %SYSTEM-F-GPTFULL, global page table is full
%SYSTEM-F-GPTFULL, global page table is full

$help/message GPTFULL

GPTFULL, global page table is full

Facility: SYSTEM, System Services

Explanation: Not enough space is available in system memory to maintain
information about global sections. This message indicates a
system error resulting from insufficient allocated space in
the global page table.

User Action: Notify your system operator or system manager to increase the
SYSGEN parameter GBLPAGES.

In this case, edit the OpenVMS system parameters to increase the SYSGEN parameter GBLPAGES.
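Both failures are corrected in the same way. As a hedged sketch, add entries such as the following to SYS$SYSTEM:MODPARAMS.DAT (the values are hypothetical and must be sized to your buffer pools), then run AUTOGEN to compute and install the new parameters:

ADD_GBLPAGES = 150000
ADD_GBLPAGFIL = 150000

$ @SYS$UPDATE:AUTOGEN GETDATA SETPARAMS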

Buffer Pool Parameter

In the %XGROUP section of the DMQ$INIT file, the Buf Pool value must be greater than the largest message that will be sent to this group. The value is used only for sending messages to other groups. The purpose of the pool is to gain concurrency over network links; however, as CPU speeds have increased, this buffering is significant only in specialized cases. Nevertheless, for large cross-group messages, such as 4MB, the Buf Pool value must be set to at least 4,200 (the value is expressed in thousands of bytes).

Queue Quota

The queue quota must either be large enough to receive a large message buffer, or it must be disabled. For example, when sending messages between groups on OpenVMS systems, you can use DMQ$LOOP to test sending large messages. Because DMQ$LOOP uses a temporary queue, change the temporary queue quota in the DMQ$INIT file to either be large enough to handle the increased buffer size, or set the method to NONE.

Global Section Size

In the DMQ$USER area, edit the pgflquo parameters in the DMQ$SET_SERVER_QUOTAS.COM file to allow the BEA MessageQ processes to handle the increased global section size. For the COM Server and for link drivers, set the value to 90,000 or 100,000. If recoverable messages are being used, increase the MRS pgflquo value depending on the number of messages in the pipe; the value may need to be set to 200,000.
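For example, the COM Server entry shown in Listing 11-1 would change as follows (the exact value depends on your message traffic):

$ pgflquo == 100000 !virtual memory - raised for large messages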

Timeouts

Timeouts must be increased. BEA MessageQ applications that expect message throughput in seconds or fractions of seconds must be changed to allow for the increased time required for large message transfer.

Message Recovery Services

If the average message size will be 4MB, the message recovery parameters must also be adjusted; the specific parameters and values are release dependent, so check the product release notes.