This section describes corrections to the MessageQ API for Version 4.0A.
2.2.1 PAMS_GET_MSGW() Fails to Wake Up on Multi-reader Queue (MRQ) Reads
When a message was queued to a multi-reader queue on an Alpha OpenVMS
system, the pams_get_msgw call would occasionally fail to wake up.
This has been corrected.
2.2.2 PAMS_ATTACH_Q() to a Temporary Queue Will Hang The Process If The Temporary Queues Are Exhausted
If all of the temporary queues were used up and a process attempted to
attach to a temporary queue, the process would hang for several
minutes. This has been corrected so that if all of the temporary queues
are exhausted the pams_attach_q function will return
PAMS__RESRCFAIL.
2.2.3 PAMS_PUT_MSG() Hangs
The pams_put_msg API was hanging when either a cross-group
message was sent with a delivery mode of WF_MEM or a recoverable
message was sent with any Wait-For (WF) recoverable delivery mode.
This has been corrected.
2.2.4 Q_NOTIFY_RESP Message Not Endian Converted
The Queue Notification Service was not correctly converting the
Q_NOTIFY_RESP message to the appropriate endian format of the
requesting system. This has been corrected.
2.2.5 Secondary Queue Names are Sometimes Deleted
Corrected a problem with the detach queue logic that caused permanent
Secondary Queue names to be cleared. This showed up as pams_locate_q
failing to find the queue name or as the MessageQ Monitor failing to
display the queue.
2.2.6 PAMS_ATTACH_Q()/PAMS_DETACH_Q() Could Cause The Process To Abort With An Access Violation
Corrected a problem with processes that perform
pams_attach_q/pams_detach_q in a loop while the
queues they are attached to are receiving messages. This could result in
an EXEC mode AST being delivered to address zero.
2.2.7 PAMS_PUT_MSG() Returns PAMS__NOACCESS
Corrected a problem with pams_put_msg not allowing a process
to send a message with a WF delivery mode and the response queue set to
a Secondary Queue. When this combination was attempted,
pams_put_msg returned PAMS__NOACCESS.
2.3 Communication (COM) Services and Link Drivers
This section describes corrections to the Communication Services and
Link Drivers for Version 4.0A.
2.3.1 Inbound Link Connection Request Is Rejected
This corrects a problem where under certain conditions the inbound link
connection request was being rejected. The indication in the EVL log
for this problem was: "link in transition."
2.3.2 COM Server Generates Unnecessary Tracebacks
The COM Server no longer generates unnecessary tracebacks when a cross-group message fails to be enqueued to its target queue. The tracebacks that appeared were similar to the following:
=============================================================================
DmQ E 52:12.5 Encountered an error return status
DmQ E 52:12.5 %PAMS-E-EXCEEDQUOTA, Target process's quota exceeded - message not sent
%PLI-W-ERROR, PL/I ERROR condition.
-PAMS-W-EXCEEDQUOTA, Target process's quota exceeded - message not sent
%TRACE-W-TRACEBACK, symbolic stack dump follows
module name     routine name                      line     rel PC    abs PC
                                                           000C73EF  000C73EF
DMQ$COM_SERVER  P920_CHECK_PAMS_CALL              20567    000000B7  0004DAEF
DMQ$COM_SERVER  P190_BUILD_INCOMING_XGROUP_MSG    11342    00000B8A  0003745E
DMQ$COM_SERVER  P180_CHECK_ALL_DATA_READ_IOSBS    10950    000000C2  00036882
DMQ$COM_SERVER  P170_HANDLE_XGROUP_EF             10883    0000005A  000367BE
DMQ$COM_SERVER  DMQ$COM_SERVER                    7257     000014EA  0002D26A
=============================================================================
2.3.3 MessageQ Monitor Link Detail Request Causes COM Server ACCVIO
Corrected a problem with the MessageQ Monitor utility Link Detail
request that caused the COM Server to fail with an ACCVIO.
2.3.4 COM Server Fails to Spawn Subprocesses
Corrected a problem with the COM Server's subprocesses that caused them
to be unable to translate DMQ$EXE and, therefore, fail to start. This
occurred primarily on systems with large logical name tables.
2.3.5 COM Server Immediately Drops New Connections
Corrected a problem where the COM Server was not correctly handling
unconfigured incoming DECnet connections. This resulted in the link
being dropped immediately after a successful connection was made due to
the channel being set to zero.
2.3.6 COM Server Fails with an ACCVIO During Startup
Corrected a problem that sometimes caused the COM Server to fail with an
ACCVIO during startup. The failure was due to memory corruption that
occurred when the COM Server attempted to load a group initialization
file whose Routing section contained more than 256 entries. MessageQ
now properly starts groups whose Routing table contains more than 256
entries.
2.3.7 COM Server Becomes Compute Bound on Alpha Processors
Corrected a problem on Alpha systems that caused the COM Server to
become stuck in a compute-bound loop while attempting to wake up a
receiver of an MRQ message.
2.3.8 Cross-Group Entries Added Following Startup Are Not Seen
Corrected a problem encountered when adding new cross-group entries
following initial group startup. The problem allowed new entries to be
displayed using the MessageQ Monitor utility but did not allow
cross-group links to be enabled.
2.3.9 COM Server Logs Wrong Group Number on "Forcing link down" Events
Corrected a problem with the COM Server that caused it to log the wrong
group number when it encountered a condition requiring it to take the
link down.
2.3.10 Link Driver Logs "Protocol failure" Errors
Corrected a problem with Link Drivers not properly translating some connection messages from remote Link Drivers. This problem caused ambiguous error messages to be displayed, such as:
Remote system returned LD error -53
%DMQCS-F-PROTO_FAIL, Protocol failure
The error messages are now properly translated into clear messages, such as:
Remote system returned LD error -53 (LD_DISABLED)
LD_DISABLED: link has not been enabled
2.3.11 COM Server Memory Leak When Processing UNDECLARE_SQ Messages
Corrected a memory leak associated with the processing of an
UNDECLARE_SQ message. This problem eventually led the COM Server
to exceed its memory limits, resulting in a PAMS__EXHAUSTBLKS error,
often followed by an ACCVIO abort.
2.3.12 DECnet/OSI Link Transitions Not Seen By The DECnet Link Drivers
Corrected a problem with the COM Server not detecting some link
transition events from DECnet/OSI. This could result in the COM Server
continually logging errors such as "file not accessible". In
addition, the DECnet Link Driver was not fully transitioning the link
to the down state.
2.3.13 COM Server Did Not Handle Message Visit Counts Correctly
Corrected a problem with the COM Server not handling invalid or
exhausted message visit counts. Message visit counts are used to
detect that a message has become caught in an endless routing loop. This
problem could lead to the COM Server logging exceeded-visit-count events
and/or failing with an integer overflow.
2.3.14 DECnet Link Ownership Sometimes Not Passed Correctly Between DECnet Link Drivers
Corrected a problem when link ownership is passed between the DECnet Link Driver and the COM Server. Link ownership passing occurs when a group switches between V3.X and V2.X MessageQ cross-group protocols, such as when a group is moved from a MessageQ V2.x OpenVMS node to a MessageQ V3.x OpenVMS node. This problem had a variety of symptoms but usually showed up as an unusual error status returned by a $QIO call, such as:
Error while posting X-group rcv QIO for group 35 - channel #400
%SYSTEM-F-MBTOOSML, Mailbox is too small for request
2.4 General Software Corrections
This section describes general software corrections for Version 4.0A.
2.4.1 Process termination via CONTROL-Y Followed By The DCL STOP Command Causes a System Level BUGCHECK
A kernel mode timer rundown handler was installed to cancel EXEC mode
timers that may be active when a process is terminated by a ^Y followed
by the DCL STOP command. Prior to this correction, it was possible for
an EXEC mode timer to be delivered to a process with the timer service
routine no longer present to handle the timer. This resulted in a
system level BUGCHECK, and either a process hang or a system crash,
depending on the setting of the SYSGEN parameter, BUGCHECKFATAL.
2.4.2 Message Byte Counters Overflowed On Fast AXP Processors
The counter reset code has been enhanced to prevent integer overflow.
2.4.3 Multiple Attachers Were Allowed To Access The Same Primary Queue
A race condition was removed where simultaneous multiple attachers were
allowed to connect to the same primary queue.
2.4.4 Loader Not Validating Permanent Queue Range
Permanent queue range checking has been added to the loader to reject
permanent queue definitions higher than the FIRST_TEMP_QUEUE
definition.
2.4.5 Link Management (LINKMGT_REQ) Connect Command Not Handling The "reconnect timer" Correctly
The LINKMGT connect command has been corrected so that the "reconnect
timer" is correctly set when a reconnect time in seconds is specified
rather than "PSYM_LINKMGT_USE_PREVIOUS".
2.4.6 Cross-Group Table Entries Ordering Restriction
In previous MessageQ versions, the cross-group table entries for a
particular group/transport were required to be grouped together;
interleaved transport entries for a group were not supported. This
restriction has been lifted.
2.4.7 MessageQ Command Procedures Now Handle "<>" Directory Syntax
MessageQ command procedures have been updated to correctly handle the
use of angle brackets (<>) as valid DCL directory syntax.
2.4.8 DMQ$STARTUP.COM Now Allows a User Settable Timeout
Added the capability for the user to set the amount of time that the
DMQ$STARTUP procedure will wait until all requested MessageQ Server
processes have started and finished initialization. Currently, the MRS
Server is the only MessageQ server known to require an extended period
to start up, due to the need to process all Destination Queue Files (DQF)
and Store-And-Forward Files (SAF) prior to completing its
initialization.
2.4.9 DMQ$SCRIPT Logical Name Limited To 32 Characters
Corrected a problem in the translation of the logical name DMQ$SCRIPT
which limited it to 32 characters. It has been restored to handling
file names of up to 255 characters.
2.4.10 Ethernet User Callback Returned DMQCS__AREATOSMALL Back To PAMS_GET_MSG()
The Ethernet User Callback handling of compressed headers was causing a
DMQCS__AREATOSMALL error to be returned from pams_get_msg. This
has been corrected.
2.4.11 AVAILMSGDEF.H Included Unknown .H File
Corrected a problem with availmsgdef.h, found in
[DMQ$V32.EXAMPLES.SBS], which attempted to include a file that was not
part of the distribution kit.
2.4.12 Product Installation Failed During Link of CLS Server
Corrected a problem with the CLS Server's linker command procedure that
made incorrect assumptions about the presence of a TCP/IP product's
logical names. The procedure has been changed to use a more reliable
method of determining when to link against TCP/IP product files.
2.4.13 Conversion Utility Failed to Convert RTO User Directory when DEV License Loaded
Corrected a problem with the V2 to V3 conversion utility which caused
it to fail when it attempted to convert a user directory when a
runtime-only MessageQ was installed but a Development license was loaded. The
procedure now correctly detects this condition and skips that section
of the conversion.
2.5 Client Library Services (CLS)
This section describes corrections to the Client Library Services for
Version 4.0A.
2.5.1 Single-client CLS does not properly close UCX BG devices
Corrections have been made so that single-client CLS closes UCX BG
devices properly.
2.5.2 CLS Server Exits Following an "unexpected signal 10" Error
When the CLS Server received a connection request from a client system
with a name longer than 24 characters, it would log the error message
"Unexpected signal 10 received, server exiting" and then exit. This has
been corrected.
2.5.3 CLS Server Reports "Endpoint Is Probably In Use"
The CLS Server was failing to restart correctly following a shutdown
with an active connection over DEC TCP/IP Services. When this occurred,
the error message "Cannot bind to server address, endpoint is probably
in use" was logged when the CLS Server was restarted. This has been
corrected and the CLS will now restart correctly.
2.5.4 Client Task ID Incorrectly Logged As Negative Numbers
Corrected a problem with the logging of client task ID numbers from
Windows 95 systems. These task IDs were logged incorrectly as negative
numbers.
2.6 Interoperability
This section describes corrections for interoperability problems
between MessageQ for OpenVMS and MessageQ for UNIX and Windows NT.
2.6.1 AVAIL Service Interoperability Between OpenVMS V3.2 and UNIX/Windows NT Was Not Working Correctly
Corrected a number of problems with AVAIL Services interoperating with UNIX/NT systems. They are:
NOTE:
As a result of these changes, the SBS Server's queue definition has been modified to be permanently active. This was done so that internal AVAIL Server messaging traffic from non-OpenVMS systems is not lost during startup. If either the SBS Server will not be started or AVAIL Services interoperability with non-OpenVMS platforms is not needed, the queue can be restored to active-on-attach.
3 Known Problems and Restrictions
This chapter describes known problems or restrictions for MessageQ for
OpenVMS Version 4.0A.
3.1 Limitations for the Version 4.0A Kit
This section describes the known limitations for the Version 4.0A
release of MessageQ for OpenVMS.
3.1.1 DMQ$CVT.COM uppercases node name in %XGROUP section
The DMQ$CVT.COM convert utility converts all node names to uppercase
when converting to V4.0A. This may prevent TCP/IP Xgroup connections
from working correctly. Please check your XGROUP tables after
converting your groups to V4.0A to correct this casing problem.
3.1.2 Routing limitations with AVAIL and SBS Services
All systems which will participate in AVAIL and SBS services must have
the SBS Server enabled, including routing groups. In addition, older
MessageQ Version 3.x systems wishing to use AVAIL services with Version
4.0A systems must be adjacent nodes.
3.1.3 Client Applications Using OpenVMS-based CLS via Temporary Queue Unable to Use Monitor Utility
Client applications which use an OpenVMS-based CLS and which attach to a
temporary queue are incorrectly displayed by the Monitor Utility. The
send counts for the application will always be displayed as 0 in the
monitor. The workaround for this restriction is to convert your
application to use a permanent queue.
3.1.4 MessageQ for OpenVMS Maximum Queue Number
MessageQ for OpenVMS Version 4.0A supports queue numbers 0 to 999. It does not support queue numbers 1000 to 3999, which are supported in MessageQ for UNIX/Windows NT Version 4.0A.
This restriction requires any MessageQ for UNIX/Windows NT group
which exchanges messages with a MessageQ for OpenVMS group to
have GROUP_MAX_USER_QUEUE set to 999 or less in the group init file.
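For illustration only, the setting might appear in the group initialization file like this (the section name shown is an assumption; consult the init file layout for your platform):

%PROFILE
GROUP_MAX_USER_QUEUE    999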
3.1.5 Known Problem With DECnet Cluster Aliases
MessageQ for OpenVMS does not support DECnet cluster aliases used to
generate cross-group links. However, DECnet can be configured to use
the cluster alias name as a link's source. This does not prevent the
link from being made to the correct MessageQ group, but it does prevent
the cross-group verification from being able to validate the link
correctly. The alias naming feature is controlled by setting the
characteristics of the DECnet TASK 0 object. Some customer
configurations require that aliases be enabled because other network
services also share the TASK 0 object. Currently there is no known
workaround or correction other than disabling MessageQ's cross-group
verification feature.
3.1.6 Known Problem with Temporary File Deletion
As part of MessageQ startup, temporary files are created for input to each detached server (DMQ$EXE:DMQ$SERVER_TEMP_*.COM). When MessageQ attempts to delete these files after server startup, they are often left in a DELHEADER state because a process still has the file opened or mapped. When the process that has the file open or mapped exits, the file is deleted.
However, if the system is rebooted before the last process exits, the
file remains as a DELHEADER until the ANALYZE/DISK/REPAIR utility is
run to complete the deletion. To determine which process still has the
file open, use the SHOW DEVICE/FILE disk_name: command. If you are
working in a clustered environment, you must issue this command from
each node.
3.1.7 Sending Messages With New Delivery Modes to Older MessageQ Groups
If a MessageQ 4.0A application sends a message to, or routes a message through, a Version 2.x MessageQ group using a delivery mode that is not supported in Version 2.x, the message will be delivered but the delivery options will be lost in the conversion. For example, a message sent with WF_DEQ from a Version 4.0A group to a Version 2.1 group will be received by the target process, but the sender will time out because the internal MessageQ status message will not be generated.
Another problem with sending recoverable messages between systems
running different versions of MessageQ is that a recoverable message
can be delivered to the target process even though the older MessageQ
group is unable to acknowledge receipt of the message. This results in
the message being periodically redelivered to the target queue.
3.1.8 Cross-group AK_CONF/UMA_SAF is Downgraded to NN_DQF/UMA_SAF
The recoverable delivery mode and UMA combination AK_CONF/UMA_SAF is not supported on OpenVMS but is supported on other MessageQ Server implementations. If a message using this delivery mode and UMA combination is sent to or routed through an OpenVMS Version 4.0A system, the SAF would stall due to the unsupported delivery mode. To prevent this problem, the OpenVMS Link Drivers detect this combination and automatically change it to NN_DQF/UMA_SAF.
The change has the following effects:
Translation error of recoverable IPI message - forcing NN_DQF/UMA_SAF
+ Message info - Src=27.10 Tgt=1.1 Class=7 Type=77
+ Org-Src=27.10 Org-Tgt=1.1 Size=150 Seq=005A0001:00002901
3.1.9 64-Bit Addressing Not Used by MessageQ
OpenVMS Version 7.0 extends the addressing range to the full 64 bits
supported by the Alpha platform. However, MessageQ has not been
enhanced to take advantage of 64-bit addresses; therefore,
applications are limited to a 32-bit address space.
3.1.10 POSIX and MessageQ
Applications developed for both POSIX and for OpenVMS execute
correctly; however, program development under the POSIX shell is not
currently supported. To run a MessageQ image under POSIX, the
DMQ$SET_LNM_TABLE.COM command procedure must first be executed under
DCL. This sets the logical names correctly.
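Assuming the procedure resides in the DMQ$EXE directory (an assumption; use the location from your installation), the DCL step looks like:

$ @DMQ$EXE:DMQ$SET_LNM_TABLE.COM   ! define MessageQ logical names before using POSIX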
3.1.11 Minimum Version of MessageQ for MRS Interoperability
The minimum version of MessageQ for UNIX and Windows NT for
interoperation of the message recovery systems with MessageQ for
OpenVMS 4.0A is MessageQ for UNIX and NT Version 3.2A-1.
3.1.12 Startup of MessageQ Will Fail if The Default Directory is Invalid
The startup of a MessageQ group via the DMQ$STARTUP.COM command
procedure will fail if the default directory is invalid or not write
accessible.
3.1.13 Script Parsing Does Not Recognize Group Names That Start With a Number
Group names which start with a number are not correctly parsed by the
script parser. For example, group 11X cannot be used in a SOURCE or
TARGET clause in a script file (i.e., TARGET=11X.QUEUE_1). Group names
must begin with a letter for script parsing to work correctly.
3.1.14 SBS Ethernet broadcasting limited to 32K messages
SBS direct Ethernet broadcasting does not support messages larger than
32K. SBS Datagram Broadcasting, however, does support large messages up
to 4MB.
3.1.15 PAMS API calls are NOT thread safe
OpenVMS MessageQ is NOT thread safe. Because the MessageQ API context information is maintained on a per-process (rather than a per-thread) basis, multiple threads performing context-based processing must be synchronized so that only one MessageQ API call of a given type is active at a time; otherwise, one thread could corrupt another thread's MessageQ operation.