To maintain simultaneous DECnet connections to more than 32 message
queuing groups, you must increase the NCP parameter MAXLINKS. Similarly,
for TCP/IP networks, the maximum number of sockets must be increased if
too few are available; refer to Section 12.5.2 for information on how to
adjust this setting. In addition, large networks with many network links
require an increase in system nonpaged pool to provide more memory for
device drivers.
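For example, the following NCP commands raise the DECnet link limit both on the running system and in the permanent database; the value 64 is illustrative only, and should be chosen to exceed the number of simultaneous connections you expect:

$ RUN SYS$SYSTEM:NCP
NCP> SET EXECUTOR MAXIMUM LINKS 64
NCP> DEFINE EXECUTOR MAXIMUM LINKS 64
NCP> EXIT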
12.2.6 Other System Resources and Quotas
The COM Server and Link Driver processes use the $QIO interface to post AST service requests for network I/O. Each outstanding AST counts against the process quota called ASTLM. Timers are also associated with network requests and count against the TQELM quota.
The MRS Server uses the $QIO interface to post AST service requests for
both read and write operations to a recoverable disk file. A timer is
also associated with each unconfirmed message, and these timers count
against the TQELM quota.
12.2.7 Modifying DMQ$SET_SERVER_QUOTAS.COM
Each server's process quotas and limits, process name, and
server-specific output files are defined in the
DMQ$SET_SERVER_QUOTAS.COM file located in the DMQ$USER: directory. Edit
this command procedure to modify the quotas assigned to any MessageQ
Server process.
Example 12-1 shows the quota information for the COM Server process.
Example 12-1 COM Server Quotas
-------------------------------------------------------------------------------
$ COM:
$   proc_name == "DMQ_C_''comp_id'"
$   full_name == "COM Server"
$   img_file  == "DMQ$EXE:DMQ$COM_SERVER.EXE"
$   log_file  == "''dmq_log'DMQ$COM_SERVER_''full_id'.LOG"
$   prio      == 6       !process software priority
$   biolm     == 500     !buffered I/O limit (counts outstanding operations)
$   diolm     == 500     !direct I/O limit (counts outstanding operations)
$   buflm     == 500000  !buffered I/O byte limit
$   tqelm     == 500     !timer queue elements
$   enqlm     == 500     !enq/deq locks
$   fillm     == 500     !open files
$   astlm     == 500     !pending ASTs
$   subprocs  == 16      !child subprocesses
$   pgflquo   == 30000   !virtual memory
$   wsextent  == 8192    !limit borrowing beyond wsquo
$   wsquo     == 1024    !basic working set limit in pages
$   wsdef     == 750     !starting working set size
$ goto FINISHED
--------------------------------------------------------------------------------
12.3 Virtual Memory
Proper allocation of virtual memory resources is critical to successful and efficient processing in the MessageQ environment. This section describes how to determine an appropriate virtual memory allocation and shows how to model memory usage for each MessageQ Server.
MessageQ Servers are designed to continue operating if available virtual memory is exhausted. An operation requiring more memory than is available will fail; however, the server will continue to operate. If the server cannot delivery nonrecoverable messages, they are discarded. If the server cannot deliver a recoverable message, MessageQ executes the Undeliverable Message Action.
To determine the appropriate amount of virtual memory for your MessageQ configuration, first model the memory usage of each server, then verify the estimate by testing under expected load.
12.3.1 Modeling Memory Usage
A rough model of memory usage requirements can be constructed by adding the memory requirements of all components managed by a server. The objects a server must track are determined by the data flow and timing of the system. This section provides a sample calculation for the MRS Server.
The amount of virtual memory used by a server can be obtained using the
DCL SHOW PROCESS/CONTINUOUS command. The maximum amount of memory used
by a server is also written to the server's output log file when the
group is shut down. Size the rough model by configuring a minimum
server, measuring the memory it requires, and then adding the memory
requirements of its queues, groups, and links.
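For example, to watch a running COM Server's memory use, name the server process in the SHOW PROCESS command. The process name shown here is hypothetical; actual names follow the proc_name pattern in DMQ$SET_SERVER_QUOTAS.COM (DMQ_C_ followed by the component ID):

$ SHOW PROCESS/CONTINUOUS DMQ_C_0101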
12.3.2 Performing Testing
After you model the system and arrive at an estimate of the virtual memory required, build a network of simple message senders and receivers that send at the rates you expect the real application to encounter, or test the production applications under the expected system load.
During testing, virtual memory exhaustion is logged as an error message
in the group's event log. If errors are encountered, increase the
virtual memory allocated to the server and rerun the tests until the
error no longer occurs.
12.3.3 A Memory Usage Model for Each MessageQ Server
In the MessageQ OpenVMS environment, each server tracks a set of objects, and a server's memory usage varies with the number of objects it must track; adding a new group to the bus, for example, increases the memory needs of every server that tracks groups. Table 12-1 shows the objects tracked by the COM Server.
Object | How to Size it |
---|---|
Code, fixed overhead | Measure the minimum configuration |
Groups | Varies with the number of groups on the bus |
Network buffers | Varies with the number of connected groups. This is sized per group in the XGROUP section of the DMQ$INIT.TXT file. |
Queues | Local memory data structures used in attaching/detaching queues |
Table 12-2 shows the objects tracked by link drivers.
Object | How to Size it |
---|---|
Code, fixed overhead | Measure the minimum configuration |
Groups | Varies with the number of groups on the bus |
Network buffers | Varies with the number of active links. This is sized per group in the XGROUP section of the DMQ$INIT.TXT file. |
The COM Server and link drivers share a common memory allocation mechanism to handle network buffers. Following is a formula for roughly calculating this value:
pool_size_in_pages = (sum of the XGROUP pool buffer sizes, in Kbytes, from the %XGROUP section of DMQ$INIT.TXT) * 2 pages per Kbyte

network_buffers = 48 guard pages + pool_size_in_pages + (2 * large_buffer_size_in_pages)
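As a worked sketch in DCL, assume three hypothetical XGROUP entries with pool buffer sizes of 100, 150, and 250 Kbytes, and a 64-page large buffer (all values are illustrative only):

$ pool_kbytes = 100 + 150 + 250
$ pool_pages = pool_kbytes * 2             ! 2 pages per Kbyte (512-byte pages)
$ large_buffer_pages = 64
$ network_buffers = 48 + pool_pages + (2 * large_buffer_pages)
$ SHOW SYMBOL network_buffers              ! NETWORK_BUFFERS = 1176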
Table 12-3 shows the objects tracked by the MRS Server.
Object | How to Size it |
---|---|
Code, fixed overhead | Measure the minimum configuration |
Groups | Varies with the number of groups on the bus |
Queues | Varies with the number of recoverable queues both local and remote |
Messages | Varies with the number of unconfirmed messages |
Internal buffers | Varies with largest message size |
I/O data structures | Assigned per target queue with a recoverable message. The size of the I/O data structures varies with the size of the largest message. |
Table 12-4 shows the objects tracked by the Journal Server.
Object | How to Size it |
---|---|
Code, fixed overhead | Measure the minimum configuration |
Internal buffers | Varies with largest message size |
I/O data structures | One per stream. The Journal Server manages two streams, the PCJ stream and the DLJ stream. The sizing is smaller than that required for the MRS Server because the Journal Server does not read the files. The size of the I/O data structures varies with the size of the largest message. |
Note:
The Journal Server uses the same I/O mechanism as the MRS Server, but does not allocate read-ahead buffers because it performs no reads.
Table 12-5 shows the objects tracked by the SBS Server.
Object | How to Size it |
---|---|
Code, fixed overhead | Measure the minimum configuration |
Groups | Varies with the number of groups (maximum 512 groups) |
Avail registrations | Varies with the number of avail/unavail registrations |
Broadcast registrations | Varies with the number of broadcast registrations |
Multicast targets | An index that allows quick access from a MOT to a broadcast registration |
Ethernet buffers | Varies with the number of MOTs assigned to a multicast address |
Table 12-6 shows an example memory allocation model for the MRS Server using parameter values taken from a specific release of MessageQ, together with an example configuration for a hypothetical network. This model serves only as an example. Actual values are release dependent; therefore, it is important to check the product release notes.
Component | Values |
---|---|
Page size | 512 bytes |
Code, RTLs, MessageQ core messaging | < 10000 pages (measured) |
I/O buffer size | (large_message_size + page_size) / page_size |
Cross group information | 1/4 page per group |
Per queue information | 1 page per queue |
Per message overhead | 1/2 page per unconfirmed msg |
Overhead per open area | Varies with large_msg_size; 85 pages per area in the example below |
This version of MessageQ for OpenVMS uses a strategy in which I/O is addressable at a per-block level and achieves speed through asynchronous $QIO calls. The overhead for each open area is determined by the number of RMS data structures and buffers needed to handle the largest logical operation, and by the number of read-ahead operations allowed. Large messages have the single greatest effect on the virtual memory requirements of the MRS Server.
To obtain the memory requirement, assume that the MRS Server requires from one to five open areas for each stream. For this example, assume the following values:
Component | Value |
---|---|
Local message overhead | 250 pages |
Fixed overhead | 10000 pages |
IO_buffer_size | 64 pages (1 large message) |
XGROUP connections | 3 pages (10 groups / 4) |
Queues | 50 pages (50 * 1 page per queue) |
Areas | 21250 pages (50 * 5 areas * 85 pages) |
Messages | 375 pages (see above discussion) |
Total | 31742 pages |
For this application, sizing the MRS virtual memory at 32000 pages
should be sufficient. The default provided is 30000; therefore, the
DMQ$SET_SERVER_QUOTAS.COM file must be modified.
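For example, the pgflquo line in the MRS Server section of DMQ$SET_SERVER_QUOTAS.COM (which follows the same layout as the COM Server section shown in Example 12-1) would be raised along these lines; the exact value is a sketch for this configuration:

$ pgflquo == 32000   !virtual memory (raised from the 30000 default)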
12.4 Global Memory
All message queuing groups on a node use the shared images DMQ$EXECRTL and DMQ$ENTRYRTL. In addition, each individual group creates nine global sections of its own.
The GBLSECTIONS parameter limits the total number of global sections that can be used at one time. The first message queuing group that you start up on your system uses eighteen global sections. Each additional group creates nine global sections for every COM Server that is running.
The GBLPAGES parameter defines the total number of pages that global sections use in the virtual memory.
The GBLPAGFIL parameter defines the total number of pages that global
sections can take up in the page file. All dynamic MessageQ global
sections are paged to the page file.
12.5 Tuning TCP/IP for MessageQ
If the network chosen for cross-group connections is DEC TCP/IP, then
TCP/IP may need to be tuned to support the increased load of network
traffic caused by running MessageQ. In general, OpenVMS nonpaged
pool and the number of TCP/IP sockets may need to be increased.
12.5.1 Approximating the Nonpaged Pool Needs
To determine the amount of additional nonpaged pool that will be needed, refer to the formula in the DEC TCP/IP Services for OpenVMS system management guide. This formula is used to compute the additional resources each socket will need.
Following is a simplified version of the formula that reflects the approximate worst-case needs of MessageQ:

npp = ((6 * (large_DMQ_buffer + 323)) + 3584) * (grp_connections + 1)
UCX requires an additional 342,000 bytes of overhead which may be reflected in the system configuration by modifying the MODPARAMS.DAT file as follows:
ADD_NPAGEDYN = npp
ADD_NPAGEVIR = npp
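To make the arithmetic concrete, the following DCL sketch computes npp for a hypothetical configuration with a 32256-byte large DMQ buffer and 10 group connections; substitute the values for your own configuration:

$ large_buffer = 32256
$ grp_connections = 10
$ npp = ((6 * (large_buffer + 323)) + 3584) * (grp_connections + 1)
$ npp = npp + 342000             ! fixed UCX overhead
$ SHOW SYMBOL npp                ! NPP = 2531638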
12.5.2 Increasing the Number of Sockets
To determine the number of additional sockets required, multiply the number of group connections by 2. Add this number to the total number of available sockets on the system. To view the current number of sockets, use the following command:
$ UCX SHOW COMMUNICATIONS
To change the value of the socket setting, use the following command:
$ UCX SET COMMUNICATIONS/DEVICE_SOCKET=n
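For example, with 10 group connections (a hypothetical figure), 20 additional sockets are needed; if the system currently provides 100 sockets, set the new total to 120:

$ UCX SET COMMUNICATIONS/DEVICE_SOCKET=120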
Improper configuration of TCP/IP sockets may result in an EXCEEDQUOTA error logged by the TCP/IP Link Driver.
13.1 Overview
MessageQ problems can affect an entire message queuing bus, one or more message queuing groups, or a single MessageQ application. This chapter describes how to troubleshoot problems that affect the operation of your MessageQ environment.
Processing problems in the MessageQ environment are generally caused either by the MessageQ environment itself or by a specific application.
The information in this chapter will help you to determine whether a problem lies with the MessageQ environment or is application-specific. It describes how MessageQ logs informational and error messages, how the MessageQ Servers operate, and how to pinpoint the source of the problems you encounter.
For more information on problems encountered during application
development, refer to the BEA MessageQ Programmer's Guide.
13.2 MessageQ Error Logging
Before you begin troubleshooting MessageQ problems, you need to
understand how MessageQ alerts you to error conditions. This
section describes how MessageQ logs informational and error
messages, how MessageQ Servers operate, and how to pinpoint the
source of the problems you encounter.
13.2.1 MessageQ Output
MessageQ offers several mechanisms to provide MessageQ system
managers, maintainers, application developers, and users with
information about the status of MessageQ and MessageQ
applications.
13.2.1.1 MessageQ Stream Output
MessageQ outputs messages to inform users about the current status of processing and to record system events. MessageQ can display informational, status, and error messages on a terminal screen, print them on the operator's console, or write them to a log file.
MessageQ messages are designed to serve a number of purposes. Some messages assist in debugging an application. Some messages alert the user to an event, error, or potentially serious problem affecting the entire MessageQ group. Output messages are grouped according to their output streams.
There are three MessageQ output streams. Each stream can be referred to by a logical name as follows:
Stream | Logical Name | Purpose/usage |
---|---|---|
Trace | DMQ$TRACE_OUTPUT | Debugging |
Process | DMQ$PROCESS_OUTPUT | Informing the application or user of events or errors of interest ONLY to that application or user. |
Group | DMQ$GROUP_OUTPUT | Informing MessageQ Servers or other applications of events or errors. |
Both the process and trace streams are of local interest only because
they report information or events pertaining to a single application.
The group stream output is of interest to the entire message queuing
group because it reports information or events which can affect
operation of the entire group. Each MessageQ process, whether an
internal server or user application process, uses these three output
streams.
13.2.1.2 Stream Destinations
A stream can send output to up to five destinations. A destination can be a terminal screen, operator console, or a log file. The assignment of a stream's destination(s) can be made at either group or application startup.
Stream output can also be changed and redirected using the MessageQ Manager utility (DMQ$MGR_UTILITY) Redirect Output (RO) menu option. Redirecting output includes adding or removing a destination to a stream's output or creating a new output file.
A stream can write to any of five destinations.
Stream output is directed to a destination based on the value set for the DMQ$TRACE_OUTPUT, DMQ$PROCESS_OUTPUT, or DMQ$GROUP_OUTPUT logical names. If you want to direct output to multiple destinations, assign a string containing the destinations (separated by commas) to the logical name.
For example, to direct the group stream output to your terminal, the
user log file, and the system console, you would assign
SYSOUT,USER_LOG,CONSOLE to the DMQ$GROUP_OUTPUT logical name. If the
stream logical names are not defined, the system uses default values.
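A minimal DCL sketch of that assignment:

$ DEFINE DMQ$GROUP_OUTPUT "SYSOUT,USER_LOG,CONSOLE"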
13.2.1.3 Stream Switches
Some types of stream data require switches to be set to enable the output. A switch is set if a particular string has been assigned to the corresponding logical name. The output types that fall into this category, with their corresponding logical names and values, are:
Output Type | Switch Logical | Values | Value Interpretation |
---|---|---|---|
Debugging Trace | DMQ$DEBUG | ERROR | Display error messages to DMQ$TRACE_OUTPUT. |
| | TRACE | Display trace messages to DMQ$TRACE_OUTPUT. |
| | ALL | Display both error and trace messages to DMQ$TRACE_OUTPUT. |
Display of Message Header Data | DMQ$HEADER | YES | Display MessageQ message headers. |
Server Tracing (for DMQ Server processes only) | DMQ$SERVER_TRACE | YES | Display server trace messages. |
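For example, a sketch that enables both error and trace output for a process and directs the trace stream to the terminal:

$ DEFINE DMQ$DEBUG "ALL"
$ DEFINE DMQ$TRACE_OUTPUT "SYSOUT"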
13.2.1.4 Event Logging
Each MessageQ group has an Event Logger Server, which logs events,
errors, and informational messages to a central file called
DMQ$LOG:DMQ$EVL_bbbb_ggggg.log. These messages
include announcements of server process startup, the failure of any
portion of a MessageQ group, and any log message sent to the event
logger by a MessageQ application. To uncover the source of a
MessageQ problem, always begin by reading the event log.
13.2.1.5 Console Output
Stream output sent to the console is normally sent to the central
operator console. This output can be redirected by setting the value
of the DMQ$ERROUT logical name. DMQ$ERROUT specifies the destination
for warning messages from the MessageQ Servers. Messages written
to DMQ$ERROUT are also logged to the appropriate log file.
13.2.2 MessageQ Servers
Internal processing of MessageQ messages is performed by several group server processes. Each server runs as an OpenVMS detached process. As separate processes, each server has its own process quotas and limits and its own stream output.
The main overseer of each MessageQ group is the COM Server. Upon
group startup, the COM Server creates the other server processes.
Whether the COM Server creates a particular server process depends on
that server's entry in the DMQ$INIT.TXT file.
13.2.2.1 MessageQ Server Output Files
Each server process has a log file where its output is written. The log files are contained in the directory specified by the DMQ$LOG logical name. The content of these files can be viewed using the OpenVMS TYPE command. Server log files are very useful for monitoring resources used by the servers and traceback information in the event of a server crash.
The servers and their corresponding output files are (where bbbb=bus ID, ggggg=group ID, and eeee=endpoint):
Server Process | File |
---|---|
Com Server | DMQ$COM_SERVER_bbbb_ggggg.LOG |
SBS Server | DMQ$SBS_SERVER_bbbb_ggggg.LOG |
MRS Server | DMQ$MRS_SERVER_bbbb_ggggg.LOG |
Event Logger | DMQ$EVENT_bbbb_ggggg.LOG |
Journal Server | DMQ$JRN_SERVER_bbbb_ggggg.LOG |
Naming Agent Server | DMQ$NA_SERVER_bbbb_ggggg.LOG |
QTransfer Server | DMQ$QTRANSFER_bbbb_ggggg.LOG |
TCPIP Link Driver | DMQ$TCPIP_LD_bbbb_ggggg.LOG |
DECnet Link Driver | DMQ$DECNET_LD_bbbb_ggggg.LOG |
Client Library Server | DMQ$CLS_D_eeee_bbbb_ggggg.LOG |
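For example, to review the COM Server log for bus 0001 and group 00001 (hypothetical IDs):

$ TYPE DMQ$LOG:DMQ$COM_SERVER_0001_00001.LOG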
13.2.2.2 MessageQ Server Logging/Debugging
MessageQ offers the same stream output mechanisms for server
processes as are available for user processes. The logical names used
to direct output streams are contained in the
DMQ$SET_SERVER_LOGICALS.COM procedure located in the DMQ$USER directory.
To isolate and diagnose a server problem, you can modify a server process's stream output to direct specific server output, such as server debug tracing. Example 13-1 shows the COM Server portion of the DMQ$SET_SERVER_LOGICALS.COM file.
Example 13-1 COM Server Logical Name Settings
$ COM:
$   trace_output   = "SYSOUT"
$   group_output   = "EVL_LOG,CONSOLE"
$   process_output = "SYSOUT"
$   debug          = ""
$   headers        = ""
$   server_trace   = ""
$   user_log       = ""
$ goto SET_LOGICALS
The symbol definitions in this file equate to the DMQ$* stream logical names for that particular server. For example, the trace_output symbol above equates to the DMQ$TRACE_OUTPUT logical name for the COM Server. When the DMQ$SERVER_TRACE logical name is set to YES, server-specific debug tracing is activated. The output is written to wherever trace_output is specified for that server.
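For example, here is a sketch of enabling server-specific tracing for the COM Server by editing its section of DMQ$SET_SERVER_LOGICALS.COM, based on the symbol-to-logical mapping described above:

$ server_trace = "YES"       ! equates to DMQ$SERVER_TRACE = YES
$ trace_output = "SYSOUT"    ! trace messages go to the terminal

After the group is restarted, the COM Server writes its debug trace messages to the destination named by trace_output.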