This topic includes the following sections:
The BEA Tuxedo /Q component allows messages to be queued to persistent storage (disk) or to non-persistent storage (memory) for later processing or retrieval. The BEA Tuxedo Application-to-Transaction Monitor Interface (ATMI) provides functions that allow messages to be added to or read from queues. Reply messages and error messages can be queued for later return to client programs. An administrative command interpreter is provided for creating, listing, and modifying the queues. Servers are provided to accept requests to enqueue and dequeue messages, to forward messages from the queue for processing, and to manage the transactions that involve the queues.
The following figure shows the components of the queued message facility.
The figure illustrates how each component of the queuing system operates for queued service invocation. In this discussion, we use the figure to explain how administrators and programmers work with the BEA Tuxedo /Q component to define it and use it to queue a message for processing and get back a reply. The queuing service may also be used for simple peer-to-peer communication by using a subset of the components shown in the figure.
A queue space is a resource. Access to the resource is provided by an X/OPEN XA-compliant resource manager interface. This interface is necessary so that enqueuing and dequeuing can be done as part of a two-phase committed transaction in coordination with other XA-compliant resource managers.
The BEA Tuxedo administrator is responsible for defining servers and creating queue spaces and queues like those shown between the vertical dashed lines in the figure Queued Service Invocation.
The administrator must define at least one queue server group with TMS_QM as the transaction manager server for the group.
Two additional system-provided servers need to be defined in the configuration file. These servers perform the following functions:

The TMQUEUE server accepts requests from clients and servers to enqueue and dequeue messages for the queues in the queue space.

The TMQFORWARD server dequeues a message from one or more queues in the queue space, forwards the message to a server with a service that is named the same as the queue, waits for the reply, and enqueues the success reply or failure reply on the associated reply or failure queue, respectively, as specified by the originator of the message (if the originator specified a reply or failure queue).

The BEA Tuxedo system provides a main() for servers that handles server initialization and termination, allocates buffers to receive and dispatch incoming requests to service routines, and routes replies to the correct destination. All of this processing is transparent to the application. Existing servers do not dequeue their own messages or enqueue replies; one goal of BEA Tuxedo /Q is to be able to use existing servers to service queued messages, without change.
An administrator also must create a queue space using the queue administration program, qmadmin(1), or the APPQ_MIB(5) Management Information Base (MIB). The queue space contains a collection of queues. In the figure Queued Service Invocation, for example, four queues are present within the APP queue space. There is a one-to-one mapping of queue space to queue server group since each queue space is a resource manager (RM) instance and only a single RM can exist in a group.
The notion of queue space allows for reducing the administrative overhead associated with a queue by sharing the overhead among a collection of queues in the following ways:
A single message queue server, TMQUEUE in the figure Queued Service Invocation, can be used to enqueue and dequeue messages for multiple queues within a single queue space.

A single message forwarding server, TMQFORWARD in the figure Queued Service Invocation, can be used to dequeue and forward messages to services from multiple queues within a single queue space.

A single transaction manager server, TMS_QM in the figure Queued Service Invocation, can be used to complete transactions for multiple queues within a single queue space. One instance of the transaction manager server is reserved for non-blocking transactions so that they will be processed as quickly as possible and not be held up by blocking transactions. Blocking transactions are handled by the second instance of the transaction manager server.
The administrator can define a single server group in the application configuration for the queue space by specifying the group in UBBCONFIG or by using tmconfig(1) (see tmconfig, wtmconfig(1)) to add the group dynamically.
Part of the task of defining a queue is specifying the order for messages on the queue. Queue ordering can be determined by message availability time, expiration time, priority, FIFO or LIFO order, or a combination of these criteria. The administrator specifies one or more of these sort criteria for the queue, listing the most significant criteria first. The FIFO and LIFO values must be the least significant sort criteria. Messages are put on the queue according to the specified sort criteria and dequeued from the top of the queue. The administrator can configure as many message queuing servers as are needed to keep up with the requests generated by clients for the stable queues.
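The sort rules above, with the most significant criterion first and FIFO as the least significant tiebreaker, can be sketched in a few lines. This Python model is illustrative only; in a real application the ordering is a queue attribute set by the administrator, not application code.

```python
from dataclasses import dataclass, field
import heapq
import itertools

_seq = itertools.count()  # arrival order: the FIFO tiebreak

@dataclass(order=True)
class _Entry:
    sort_key: tuple
    msg: str = field(compare=False)

class OrderedQueue:
    """Toy queue ordered by (availability time, priority, FIFO)."""
    def __init__(self):
        self._heap = []

    def enqueue(self, msg, avail_time=0, priority=0):
        # Most significant criterion first; higher priority should
        # dequeue first, so negate it. FIFO sequence is least significant.
        key = (avail_time, -priority, next(_seq))
        heapq.heappush(self._heap, _Entry(key, msg))

    def dequeue(self):
        # Always remove from the top of the queue.
        return heapq.heappop(self._heap).msg

q = OrderedQueue()
q.enqueue("low", priority=1)
q.enqueue("high", priority=9)
q.enqueue("also-high", priority=9)
print(q.dequeue(), q.dequeue(), q.dequeue())
# high also-high low  (priority first; FIFO breaks the priority tie)
```

Because the tuple comparison checks criteria left to right, listing a criterion earlier in the key makes it more significant, which mirrors how the administrator lists sort criteria.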
Data-dependent routing can be used to route between multiple server groups with servers offering the same service.
For housekeeping purposes, the administrator can set up a command to be executed when a threshold is reached for a queue that does not routinely get drained. The threshold can be based on the bytes, blocks, or percentage of the queue space used by the queue, or on the number of messages on the queue. The command might boot a TMQFORWARD server to drain the queue, or send mail to the administrator for manual handling.
The BEA Tuxedo system uses the Queuing Services component of the BEA Tuxedo infrastructure for some operations. (The BEA Tuxedo infrastructure provides services such as security, scalability, message queuing, and transactions.) For example, administrative operations for shared memory are provided by the Queuing Services component. Some functions are not currently applicable to BEA Tuxedo applications; this is noted in the descriptions of those functions.
You can also use the queued message facility for peer-to-peer communication between clients, such that a client communicates with other clients without using any forwarding server. The peer-to-peer communication model is shown in the following figure.
In steps 1 through 3 of the figure Queued Service Invocation, a client enqueues a message to the SERVICE1 queue in the APP queue space using tpenqueue(3c). Optionally, the name of a reply queue and a failure queue can be included in the call to tpenqueue(). In the example they are the queues CLIENT_REPLY1 and FAILURE_Q. The client can specify a correlation identifier value to accompany the message. This value is persistent across queues so that any reply or failure message associated with the queued message can be identified when it is read from the reply or failure queue.
The client can use the default queue ordering (for example, a time after which the message should be made available for dequeuing), or can specify an override of the default queue ordering (asking, for example, that this message be put at the top of the queue or ahead of another message on the queue).
tpenqueue() sends the message to the TMQUEUE server; the message is queued, and an acknowledgment (step 3) is sent to the client. The acknowledgment is not seen directly by the client but can be assumed when the client gets a successful return. (A failure return includes information about the nature of the failure.)
A message identifier assigned by the queue manager is returned to the application. The identifier can be used to dequeue a specific message. It can also be used in another tpenqueue() call to identify a message already on the queue that the subsequent message should be enqueued ahead of.
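In the real interface these values travel in the TPQCTL structure passed to the C ATMI calls tpenqueue(3c) and tpdequeue(3c). Purely as an illustrative sketch of the semantics (the class and method names below are invented, not the Tuxedo API), the following Python models a queue-manager-assigned message identifier and a caller-supplied correlation identifier that lets a reply be matched later:

```python
import uuid

class QueueSpace:
    """Toy queue space: enqueue returns a msgid assigned by the queue
    manager; the caller's corrid travels with the message so a reply
    on another queue can be matched to the original request."""
    def __init__(self):
        self.queues = {}

    def enqueue(self, qname, data, corrid=None):
        msgid = uuid.uuid4().hex          # assigned by the queue manager
        self.queues.setdefault(qname, []).append(
            {"msgid": msgid, "corrid": corrid, "data": data})
        return msgid                      # returned to the application

    def dequeue_by_corrid(self, qname, corrid):
        # Read a reply/failure queue, matching on the correlation id.
        for i, m in enumerate(self.queues.get(qname, [])):
            if m["corrid"] == corrid:
                return self.queues[qname].pop(i)
        return None

qs = QueueSpace()
msgid = qs.enqueue("SERVICE1", b"request", corrid="order-42")
# ... a server processes the request and the reply is enqueued
# carrying the same correlation identifier ...
qs.enqueue("CLIENT_REPLY1", b"reply", corrid="order-42")
reply = qs.dequeue_by_corrid("CLIENT_REPLY1", "order-42")
print(reply["data"])  # b'reply'
```

The key point the sketch shows is that the msgid identifies one message on one queue, while the corrid persists across queues and so survives the request/reply round trip.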
Before an enqueued message is made available for dequeuing, the transaction in which the message is enqueued must be committed successfully.
When BEA Tuxedo /Q is used for queued service invocation and the message reaches the top of the queue, the TMQFORWARD server dequeues the message and forwards it, via tpcall(3c), to a service with the same name as the queue. In the figure Queued Service Invocation, the queue and the service are both named SERVICE1; steps 4, 5, and 6 in the figure show this forwarding. The client identifier and the application authentication key are set to those of the client that caused the message to be enqueued; they accompany the dequeued message as it is sent to the service.
When the service returns a reply, TMQFORWARD enqueues the reply (with an optional user-return code) on the reply queue (step 7 in the figure Queued Service Invocation).
Sometime later (steps 8, 9, and 10 in the figure Queued Service Invocation), the client uses tpdequeue(3c) to read the reply message from the reply queue CLIENT_REPLY1.
You can dequeue messages without removing them from the queue by using the TPQPEEK flag with tpdequeue(). Messages that have expired or have been deleted by an administrator are immediately removed from the queue.
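A peek, in the spirit of the TPQPEEK flag, returns the message at the top of the queue without removing it. The following sketch is illustrative only, not the ATMI interface:

```python
from collections import deque

class PeekableQueue:
    """Toy queue supporting a non-destructive read, like TPQPEEK."""
    def __init__(self):
        self._q = deque()

    def enqueue(self, msg):
        self._q.append(msg)

    def dequeue(self, peek=False):
        if not self._q:
            return None
        # A peek reads the top of the queue but leaves it in place.
        return self._q[0] if peek else self._q.popleft()

q = PeekableQueue()
q.enqueue("m1")
assert q.dequeue(peek=True) == "m1"   # message is still on the queue
assert q.dequeue() == "m1"            # now it is removed
assert q.dequeue() is None            # queue is empty
```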
With regard to transaction management, one goal is to ensure reliability by enqueuing and dequeuing messages within global transactions. However, a conflicting goal is to reduce the execution overhead by minimizing the number of transactions that are involved.
An option is provided for the caller to enqueue a message outside any transaction in which the caller is involved (decoupling the queuing from the caller's transaction). However, a timeout in this situation leaves it unknown whether or not the message was enqueued.
A better approach is to enqueue the message within the caller's transaction, as is shown in the following figure.
In the figure, the client starts a transaction, enqueues the message, and commits the transaction. The message is dequeued within a second transaction started by TMQFORWARD; the service is called with tpcall(3c) and executed, and the reply is enqueued, all within that same transaction. A third transaction, started by the client, is used to dequeue the reply (and possibly enqueue another request message). In ongoing processing, the third and first transactions can meld into one, since enqueuing the next request can be done in the same transaction as dequeuing the response from the previous request.
Note: The system allows you to dequeue a response from one message and enqueue the next request within the same transaction, but does not allow you to enqueue a request and dequeue the response within the same transaction. The transaction in which the request is enqueued must be successfully committed before the message is available for dequeuing.
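The rule behind this note, that a message enqueued within a transaction becomes available only after that transaction commits, can be sketched with a toy transactional queue. This is illustrative only; real /Q work runs under XA-coordinated transactions managed by TMS_QM, not application code like this.

```python
import itertools

class TxQueue:
    """Toy queue: messages enqueued under a transaction stay invisible
    until that transaction commits; a rollback discards them."""
    def __init__(self):
        self.available = []       # messages visible to tpdequeue-style reads
        self._pending = {}        # txid -> messages not yet committed
        self._txids = itertools.count(1)

    def begin(self):
        txid = next(self._txids)
        self._pending[txid] = []
        return txid

    def enqueue(self, txid, msg):
        self._pending[txid].append(msg)

    def commit(self, txid):
        # Only at commit do the messages become dequeuable.
        self.available.extend(self._pending.pop(txid))

    def rollback(self, txid):
        self._pending.pop(txid)   # nothing ever becomes visible

q = TxQueue()
tx = q.begin()
q.enqueue(tx, "request")
assert q.available == []             # not yet available for dequeuing
q.commit(tx)
assert q.available == ["request"]    # visible only after commit
```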
A reply queue can be either specified or not by the application when calling tpenqueue(). The effect is as follows:

If a reply queue is specified and the application dequeues the message directly using tpdequeue(), it is the responsibility of the application to call tpenqueue() to enqueue the reply. Normally, this would be done in the same transaction in which the request message is dequeued and executed, so the entire operation is handled atomically (that is, the reply is enqueued only if the transaction succeeds).

If a reply queue is specified and the message is processed by TMQFORWARD, TMQFORWARD enqueues a reply if the application service returns successfully (that is, the service routine called tpreturn(3c) with TPSUCCESS and TMQFORWARD's tpcall() did not return -1). If tpcall() receives data, then the typed buffer used is enqueued to the reply queue. If no data is received in tpcall(), then a message with no data (that is, a NULL message) is enqueued; the fact that a message is enqueued (even if NULL) is sufficient to signify that the operation has been completed.
Handling of errors requires both an understanding of the nature of the errors the application may encounter and careful planning and coordination between the BEA Tuxedo administrator and the application program developers. The way BEA Tuxedo /Q works, if a message is dequeued within a transaction and the transaction is rolled back, then (if the retry parameter is greater than 0) the message ends up back on the queue where it can be dequeued and executed again.
For a transient problem, it may be desirable to delay for a short period before retrying to dequeue and execute the message, allowing the transient problem to clear. For example, if there is a lot of activity against the application database, there may be occasions when all you need is a little time to allow locks in a database to be released by another transaction. Normally, a limit on the number of retries is also useful to ensure that some application flaw doesn't cause significant waste of resources. When a queue is configured by the administrator, both a retry count and a delay period (in seconds) can be specified. A retry count of 0 implies that no retries are done. After the retry count is reached, the message is moved to an error queue that can be configured by the administrator for the queue space.
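The retry-count and retry-delay behavior can be sketched as follows. This is illustrative only: in a real application, retries and delay are queue attributes set by the administrator with qmadmin, and the requeue/error-queue handling is done by the queue manager, not by code like this.

```python
import time

def process_with_retries(msg, handler, retries=3, delay=0.01, error_queue=None):
    """Toy version of /Q retry handling: each failed attempt models a
    rollback that puts the message back on the queue; after `retries`
    retries the message is moved to the error queue."""
    for attempt in range(retries + 1):   # retries == 0 -> one try, no retries
        try:
            return handler(msg)
        except Exception:
            if attempt < retries:
                time.sleep(delay)        # give a transient problem time to clear
    if error_queue is not None:
        error_queue.append(msg)          # error queue configured per queue space
    return None

attempts = []
def flaky(msg):
    # Fails twice (e.g., a database lock held by another transaction),
    # then succeeds once the transient condition clears.
    attempts.append(msg)
    if len(attempts) < 3:
        raise RuntimeError("transient lock conflict")
    return "done"

errors = []
print(process_with_retries("m1", flaky, retries=3, delay=0, error_queue=errors))
# done  (succeeded on the third attempt; nothing moved to the error queue)
```

With retries set to 0, a single failure sends the message straight to the error queue, matching the non-transient case described above.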
There are cases where the problem is not transient. For example, the queued message may request operations on an account that does not exist. In this case, it is desirable not to waste any resources by trying again. If the application programmer or administrator determines that failures for a particular operation are never transient, then it is simply a matter of setting the retry count to zero. It is more likely the case that for the same service some problems will be transient and some problems will be permanent; the administrator and application developers need to have more than a single approach to handle errors.
Other variations come about because the application may either dequeue messages directly or use the TMQFORWARD server, and because an error may cause a transaction to be rolled back and the message requeued when logic dictates that the transaction should be committed. These variations and ways to deal with them are discussed in BEA Tuxedo /Q Administration, BEA Tuxedo /Q C Language Programming, and BEA Tuxedo /Q COBOL Language Programming.
To summarize, BEA Tuxedo /Q provides the following features to BEA Tuxedo application programmers and administrators:
There are many application paradigms in which queued messages can be used. This feature can be used to queue requests when a machine, server, or resource is unavailable or unreliable (for example, in the case of a wide area or wireless networks). This feature can also be used for work flow provisioning where each step generates a queued request to do the next step in the process. Yet another use is for batch processing of potentially long running transactions, such that the initiator does not have to wait for completion but is assured that the message will eventually be processed. This facility may also be used to provide a data pipe between two otherwise unrelated applications in a peer-to-peer relationship.