OTMQ is implemented on top of the Oracle Tuxedo infrastructure, which follows a typical client-server model. The basic queuing features are provided by the central OTMQ server TuxMsgQ(). For more information, see the Oracle Tuxedo Message Queue Reference Guide.

User processes cannot access the QSpace directly; all requests that operate on the QSpace must go through the OTMQ server. Figure 1 shows the OTMQ and Oracle Tuxedo architecture.

Figure 1 OTMQ and Oracle Tuxedo Architecture

There are several server components that enrich the OTMQ queuing features. For more information, see the Oracle Tuxedo Message Queue UBB Server Reference.
The *RESOURCES section defines parameters that control the application as a whole and serve as system-wide defaults. The *MACHINES section defines parameters for each machine in an application. The *GROUPS section designates logically grouped sets of servers; at least one server group should be defined for each machine. The *SERVERS section contains information specific to server processes; each entry in this section represents a server process to be booted in the application. The *SERVICES section provides information on the services advertised by server processes.

For more information, see UBBCONFIG(5) in "Section 5 - File Formats, Data Descriptions, MIBs, and System Processes Reference" in the Oracle Tuxedo Reference Guide.

The DMCONFIG file is made up of the following specification sections: *DM_LOCAL, *DM_REMOTE, *DM_EXPORT, *DM_IMPORT, *DM_RESOURCES, *DM_ROUTING, *DM_ACCESS_CONTROL, and *DM_TDOMAIN.

The *DM_LOCAL section defines one or more local domain access point identifiers and their associated gateway groups. Correspondingly, the *DM_REMOTE section defines one or more remote domain access point identifiers and their characteristics. The *DM_EXPORT section provides information on the services exported by each individual local domain access point, and the *DM_IMPORT section provides information on services imported and made available to the local domain through remote domain access points defined in the *DM_REMOTE section.

For more information, see DMCONFIG(5) in "Section 5 - File Formats, Data Descriptions, MIBs, and System Processes Reference" in the Oracle Tuxedo Reference Guide.

The Workstation component of the Oracle Tuxedo system allows application clients to reside on a machine that does not have a full server-side installation, that is, a machine that does not support any administration or application servers.
All communication between a Workstation client (an application client running on a Workstation component) and the server application takes place over the network. For more information, see Using The Oracle Tuxedo ATMI Workstation Component.
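The overall shape of a UBBCONFIG file with the sections described above can be sketched as follows. This is an illustrative skeleton only: the machine name, IPCKEY, paths, and group name are hypothetical placeholders, not values mandated by OTMQ.

```
*RESOURCES
IPCKEY          52617              # hypothetical IPC key
MASTER          SITE1
MODEL           SHM

*MACHINES
"host1"         LMID=SITE1
                TUXCONFIG="/app/tuxconfig"   # hypothetical paths
                TUXDIR="/opt/tuxedo"
                APPDIR="/app"

*GROUPS
GROUP1          LMID=SITE1 GRPNO=1

*SERVERS
# server process entries for the groups above go here

*SERVICES
# services advertised by the server processes above
```

The same section-based layout applies to DMCONFIG, with the *DM_* sections listed above in place of the *RESOURCES/*MACHINES/*GROUPS sections.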
An OTMQ application on an Oracle Tuxedo SHM domain can create one or multiple QSpaces. Each QSpace maps to a service provided by the OTMQ server TuxMsgQ. Application clients first call the OTMQ API tpqattach() to attach to a queue and get access to a specific QSpace, then call tpenqplus() to enqueue messages to, or tpdeqplus() to dequeue messages from, the attached queue. A client can also enqueue a message to a queue that belongs to another QSpace that it is not attached to, as shown in Figure 2.

An OTMQ application on an Oracle Tuxedo MP domain can create one or multiple QSpaces on its master and slave nodes respectively. Each QSpace maps to a service provided by the OTMQ server TuxMsgQ. Application clients on the master or slave node can first call the OTMQ API tpqattach() to attach to a queue and get access to a specific QSpace on their own node, and then call tpenqplus() to enqueue messages or tpdeqplus() to dequeue messages from the attached queue. A client on node B can also enqueue a message to a queue that belongs to a QSpace on node A that it is not attached to, as shown in Figure 3.

An OTMQ application can be deployed across multiple Oracle Tuxedo domains. Each domain can have one or multiple QSpaces. Each QSpace maps to a service provided by the OTMQ server TuxMsgQ.
Application clients in domain A can first call the OTMQ API tpqattach() to attach to a queue and get access to a specific QSpace in their domain, and then call tpenqplus() to enqueue messages to queues that belong to the attached QSpace, or tpdeqplus() to dequeue messages from the attached queue. A client in domain B can also enqueue messages to a queue that belongs to the QSpace of domain A, as long as domain A has exported the service for its QSpace, as shown in Figure 4.

An OTMQ application can also have workstation clients by utilizing the Oracle Tuxedo Workstation component, as shown in Figure 5.

The administrator must also create a queue space using the queue administration program, tmqadmin(1), or the OTMQ_MIB() Management Information Base (MIB), which includes extended classes for OTMQ. There is a one-to-one mapping of queue space to queue server group, since each queue space is a resource manager (RM) instance and only a single RM can exist in a group. The administrator can define a single server group in the application configuration for the queue space by specifying the group in UBBCONFIG or by using tmconfig to add the group dynamically.

Part of the task of defining a queue is specifying the order of messages on the queue. Queue ordering can be determined by message availability time, expiration time, priority, FIFO, or LIFO. For more information, see the tmqadmin() qcreate sub-command in the Oracle Tuxedo Message Queue Command Reference.

Traditional Oracle Tuxedo /Q clients can communicate with OTMQ with only configuration changes, as shown in Figure 6. To take advantage of the new features introduced in OTMQ, however, application code must be changed: the traditional Tuxedo /Q tpenqueue(3c)/tpdequeue(3c) functions must be replaced with their OTMQ counterparts tpenqplus(3c)/tpdeqplus(3c).
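The attach-enqueue-dequeue flow described above can be outlined in C. This is an illustrative sketch, not a compilable program: the OTMQ calls are shown as comments with abbreviated argument lists because their exact signatures are defined by the OTMQ reference pages, and the queue space and queue names are hypothetical.

```c
/* Outline of the OTMQ client flow; QSPACE1/QUEUE1 are hypothetical
 * names, and "..." marks argument lists omitted here -- see the
 * Oracle Tuxedo Message Queue Reference Guide for the signatures. */
#include <atmi.h>          /* standard Tuxedo ATMI declarations */

int main(void)
{
    tpinit(NULL);          /* join the Tuxedo application */

    /* 1. Attach: mandatory before any other OTMQ queue operation. */
    /* tpqattach("QSPACE1", "QUEUE1", ...); */

    /* 2. Enqueue to the attached queue, or to a queue in another
     *    QSpace the client is not attached to. */
    /* tpenqplus(...); */

    /* 3. Dequeue from the attached queue. */
    /* tpdeqplus(...); */

    /* 4. Detach from the queue and leave the application. */
    /* tpqdetach(...); */
    tpterm();
    return 0;
}
```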
Traditional Tuxedo /Q clients that use the APPQ_MIB classes (T_APPQ, T_APPQMSG, T_APPQSPACE, and T_APPQTRANS) must replace them with the corresponding OTMQ_MIB(5) classes (T_OTMQ, T_OTMQMSG, T_OTMQSPACE, and T_OTMQTRANS).

To support cross-group connection with OMQ, OTMQ provides the Link Driver Server TuxMsgQLD(), as shown in Figure 7. With this server deployed, OTMQ and OMQ can have message-level compatibility, with the following limitations:

To support interoperability between OMQ client applications and OTMQ, OTMQ provides the Client Library Server TuxCls(). With this server deployed, OTMQ and OMQ clients can have message-level compatibility. A traditional OMQ client can work with an OTMQ server without any code change, recompilation or relinking, or configuration change, except for the following limitations:

To integrate the OTMQ global naming service with OMQ naming, the global naming file, which is indicated by the environment variables DMQNS_DEFAULTPATH and DMQNS_DEVICE, must have read and write permissions for both the OTMQ and OMQ naming services.
• If you are using tpenqplus/tpdeqplus, you must call tpqattach before invoking tpenqplus/tpdeqplus, and the attached qspace/qname must be an OTMQ qspace/qname.
• Use buildqclient to re-compile the code.

The core servers TMS_TMQM(), TuxMsgQ(), TuxMQFWD(), and TMQEVT() are provided by OTMQ. TMS_TMQM() manages global transactions for the queued message facility; it must be defined in the *GROUPS section of the configuration file. TuxMsgQ() and TuxMQFWD() provide message queuing services to users; they must be defined in the *SERVERS section of the configuration file. TMQEVT() provides publish/subscribe services to users; it must also be defined in the *SERVERS section of the configuration file. The supplemental servers TMQ_NA(), TuxMsgQLD(), and TuxCls() can be configured at the machine level for one or more OTMQ queue spaces.

In addition to the standard requirements of a group name tag and a value for GRPNO, a server group must be defined for each OTMQ queue space the application will use, and the TMSNAME and OPENINFO parameters must be set. TMS_TMQM is the name of the transaction manager server for OTMQ. In the OPENINFO parameter, TUXEDO/TMQM is the literal name of the resource manager as it appears in $TUXDIR/udataobj/RM. The values for <device_name> and <queue_space_name> are instance-specific and must be set to the pathname of the universal device list and the name associated with the queue space, respectively. These values are specified by the administrator using tmqadmin(1). Listing 1 shows an example.

Listing 1 OTMQ Server Group Configuration

TuxMsgQ() takes the same CLOPT as the TMQUEUE server of Oracle Tuxedo /Q. The TuxMsgQ() reference page gives a full description of the *SERVERS section of the configuration file. In addition, TuxMsgQ() has several unique properties that can be specified.

Attach is the mandatory operation for an OTMQ application to access the OTMQ queue space. The attach operation can be configured with a default timeout value per queue space.
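The server group requirements described above (group name, GRPNO, TMSNAME, and OPENINFO) can be sketched as a *GROUPS entry like the following. The group name, LMID, GRPNO, device pathname /app/QUE, and queue space name QSPACE1 are hypothetical placeholders for <device_name> and <queue_space_name>; check the OPENINFO format against your installation.

```
*GROUPS
TMQGRP1  LMID=SITE1 GRPNO=2
         TMSNAME=TMS_TMQM
         OPENINFO="TUXEDO/TMQM:/app/QUE:QSPACE1"
```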
To configure the default attach timeout for the queue space, add a service named TuxMQATH<qspace_name> in the *SERVICES section, and set the BLOCKTIME property of this service to the default attach timeout value.

TuxMsgQ() provides a sanity check function to remove invalid queue owners. To configure the sanity check interval for the TuxMsgQ() server, set "-i <interval>" in CLOPT. The <interval> value means that the TuxMsgQ() server performs a sanity check after receiving that number of messages.

The OTMQ Offline Trade Driver Server TuxMQFWD() is part of the OTMQ Reliable Message Delivery feature. It is responsible for resending recoverable messages to the target if the OTMQ Message Queue Manager Server TuxMsgQ() fails to deliver a message to the target on the first attempt. Refer to the TuxMQFWD() reference page for a full description of the *SERVERS section of the configuration file for this server.

Any improper configuration that prevents the TuxMQFWD() server from dequeuing or forwarding messages causes the server to fail to boot. Some important items should be emphasized:
• The *SRVGRP of the TuxMQFWD() server must have TMSNAME set to TMS_TMQM, and the OPENINFO must be set to associate with the proper device and queue space.
• REPLYQ of TuxMQFWD() server should be set to N.
• TuxMQFWD() server does not advertise any service.
• Only one TuxMQFWD() server process can be configured for one OTMQ queue space.

The OTMQ Event Broker TMQEVT() is required for the publish/subscribe feature. It is responsible for notifying subscribers when topics are published. It must be configured in a separate server group from TuxMsgQ() and TuxMQFWD(). Refer to the TMQEVT() reference page for a full description of the *SERVERS section of the configuration file for this server.

The OTMQ Naming Server TMQ_NA() can be configured to provide the naming feature. It allows the application to bind a queue alias to an actual queue name, and also supports looking up the actual queue name through a provided queue alias. Refer to the TMQ_NA() reference page for a full description of the *SERVERS section of the configuration file for this server. One limitation of the naming server is that only one TMQ_NA() server process can be configured for one OTMQ queue space.

TMQ_NA() may boot with a pre-defined name space file, which is specified when creating or updating the queue space. The following example shows the content of a static name space file, which defines the association between a user-defined queue alias and the actual queue name.
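A static name space file of the kind described above might look like the following sketch. The alias and queue names are hypothetical, and the one-pair-per-line layout is an assumption for illustration; the exact syntax accepted by TMQ_NA() is defined by the qspacecreate/qspacechange documentation.

```
# hypothetical alias-to-queue associations
ORDER_ALIAS     ORDER_QUEUE
BILLING_ALIAS   BILLING_QUEUE
```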
Table 1 Static Name Space File Content

Refer to the qspacecreate or qspacechange command of tmqadmin(1) for specifying the static name space file.

The OTMQ command tmqadmin(1) is used to establish the resources of OTMQ. The OTMQ_MIB Management Information Base also provides an alternative method of administering OTMQ programmatically. See Using MIB for more information on MIB operations.

Most of the key commands of tmqadmin have positional parameters. If the positional parameters (those not specified with a dash (-) preceding the option) are not specified on the command line when the command is invoked, tmqadmin prompts you for the required information.

If you are going to add an entry to an existing Oracle Tuxedo UDL, the value of the QMCONFIG variable must be the same pathname specified in TUXCONFIG. Once you have invoked tmqadmin(1), it is recommended that you run a lidl command to see where space is available before creating your new entry.

Listing 2 qspacecreate

The program does not prompt you to specify the size of the area to reserve in shared memory for storing non-persistent messages for all queues in the queue space. When you require non-persistent (memory-based) messages, you must specify the size of the memory area on the qspacecreate command line with the -n option.
• Each TMS_TMQM server in the group that uses this queue space
• For the number of concurrent processes, count one for each TMS_TMQM, TuxMsgQ, or TuxMQFWD server in the group that uses this queue space, plus one as a fudge factor.

Most of these prompts are the same as those of the qspacecreate sub-command of the Oracle Tuxedo /Q qmadmin command. The last three prompts are specific to OTMQ. If the SAF and DQF queues are created by default, the recoverable delivery features using SAF and DQF are enabled. Otherwise, they cannot be used until the user creates these two queues manually with the qcreate command and enables them through OTMQ_MIB.

You can choose to initialize the queue space as you use the qspacecreate command, or you can let it be done by the qopen command when you first open the queue space.

QSpace high availability is supported by the Oracle Tuxedo automatic failover feature. The solution is to enable server group migration and configure master and backup machines for the OTMQ server group. When the master machine goes down, the OTMQ server group migrates to the backup machine automatically. The QSpace must be located under NFS so that it can be accessed by both the master and backup machines. Listing 3 shows a UBBCONFIG file example.

When the QSpace is opened for the first time, it is loaded into shared memory. During migration, if the QSpace is already in shared memory, it is not reloaded, and new messages will be lost. It is therefore not recommended to run the tmqadmin qopen command to open the QSpace on the backup machine. If needed, after closing the QSpace, it must be removed from shared memory using ipcrm.

Besides enabling server group migration, the key configuration is to set DBBLFAILOVER and SGRPFAILOVER in the *RESOURCES section of the UBBCONFIG file, and to set RESTART=Y and MAXGEN greater than 0 for each server in the migration group in the *SERVERS section.

Listing 3 UBBCONFIG File Example

Each queue that you intend to use must be created with the tmqadmin(1) qcreate command. First you have to open the queue space with the qopen command. If you do not provide a queue space name, qopen prompts for it.
(You can also use the T_OTMQ class of the OTMQ_MIB to create a queue.) The prompt sequence for qcreate looks like the following:

Listing 4 qcreate Prompt Sequence

The program does not prompt you for the default delivery policy and memory threshold options. The default delivery policy option allows you to specify whether messages with no specified delivery mode are delivered to persistent (disk-based) or non-persistent (memory-based) storage. The memory threshold option allows you to specify values used to trigger command execution when a non-persistent memory threshold is reached. To use these options, you must specify them on the qcreate command line with -d and -n, respectively. When the delivery policy is specified as persistent, the -q option can be used to specify the storage threshold.

Most of these prompts and options are the same as those of the qcreate sub-command of the Oracle Tuxedo /Q qmadmin command. In addition, OTMQ provides more specific options:

After proper configuration, an application that attaches to the queue space QS1 can directly enqueue messages, via tpenqplus(), to a remote queue of queue space QS2 that resides in a remote domain.

• WSC_JOURNAL_ENABLE: If set to 1, WS SAF is enabled. If set to 0, WS SAF is disabled. The default value is 0.
• WSC_JOURNAL_PATH: Specifies the journal file path. The default value is "./" on UNIX and ".\\" on Windows.
• WSC_JOURNAL_SIZE: Initial size of the journal file. The default value is 49150.
• WSC_JOURNAL_CYCLE_BLOCKS: Determines whether the journal cycles (reuses) disk blocks when full and overwrites previous messages. The default value is 0.
• WSC_JOURNAL_FIXED_SIZE: Determines whether the journal size is fixed or allowed to grow. The default value is 0.
• WSC_JOURNAL_PREALLOC: Determines whether the journal file disk blocks are pre-allocated when the journal is initially opened. The default value is 1.
• WSC_JOURNAL_MSG_BLOCK_SIZE: Defines the file I/O block size, in bytes. The default value is 0.

OTMQ provides a utility, ConvertQSPACE(), to upgrade an Oracle Tuxedo /Q Qspace to an OTMQ Qspace, so that customers who have already deployed Oracle Tuxedo /Q applications can benefit from the new OTMQ features without data loss. Refer to the ConvertQSPACE(1) reference page for a full description of its usage.
3. Make sure the QMCONFIG environment variable is set to the /Q device.
4. Run the utility with a) the OTMQ device name, b) the Qspace to be migrated, and c) the OTMQ Qspace ipckey:
ConvertQSPACE -d [OTMQ device name] -s [Qspace name] -i [OTMQ ipckey]
6. Make sure the QMCONFIG environment variable is set to the OTMQ device. Configure the OTMQ servers in the Oracle Tuxedo UBBCONFIG file according to the OTMQ device and Qspace name.

After the Tuxedo /Q Qspace is upgraded to an OTMQ Qspace and the OTMQ servers are booted, Tuxedo /Q clients can communicate with OTMQ without any change. Alternatively, Tuxedo /Q clients can be re-compiled and re-linked with the OTMQ libraries to use the /Q-compatible APIs tpenqueue() and tpdequeue() provided by OTMQ. To take advantage of new OTMQ features, the application code must be changed to use the OTMQ APIs. Within one application, or across applications communicating with each other, the /Q-compatible APIs (for example, tpenqueue() and tpdequeue()) should not be mixed with other OTMQ APIs (for example, tpenqplus(), tpdeqplus(), tpqconfirmmsg()); otherwise the result is unpredictable.

OTMQ provides the Link Driver Server TuxMsgQLD() as the counterpart of the OMQ Link Driver, to achieve message-level compatibility for cross-group communication between OTMQ and OMQ applications. TuxMsgQLD() also provides routing functionality like the traditional OMQ Link Driver, but with limitations. For more information, see Interoperability.

TuxMsgQLD() requires a pre-created link table and routing table in the corresponding OTMQ Qspace. The link table consists of entries that each stand for one remote OMQ group. The routing table consists of entries that each stand for one combination of target group and routing-through group. The link table and routing table are created by the tmqadmin(1) qspacecreate command. The default size of the link table is 200; a different size can be specified with the -L option.
The default size of the routing table is 200; a different size can be specified with the -R option. Refer to the qspacecreate command of tmqadmin(1) for specifying the link table and routing table sizes.

To notify the remote OMQ that it is on the same bus, the environment variable DMQ_BUS_ID must be defined before booting TuxMsgQLD().

TuxMsgQLD() requires a configuration file placed under APPDIR. The configuration file name is specified by the CLOPT -f parameter. Refer to the TuxMsgQLD() reference page for a full description of the *SERVERS section of the configuration file for this server.

Listing 6 Link Driver Server Configuration File
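A *SERVERS entry for the Link Driver Server might be sketched as follows. The group name, server ID, configuration file name, and environment file path are hypothetical; DMQ_BUS_ID would be set in an environment file read before the server boots.

```
*SERVERS
TuxMsgQLD  SRVGRP=TMQGRP1 SRVID=30
           CLOPT="-A -- -f ld.conf"   # -f names the config file under APPDIR
           ENVFILE="/app/ld.env"      # contains DMQ_BUS_ID=<bus_id>
```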
OTMQ provides the Client Library Server TuxCls() as the counterpart of the OMQ Client Library Server, to achieve message-level compatibility with OMQ workstation clients. With TuxCls() deployed, OMQ workstation clients can work with OTMQ servers without any change.

TuxCls() acts as a proxy for OMQ clients, so the maximum number of supported OMQ clients is configured by the MIN and MAX parameters of TuxCls() in the UBBCONFIG *SERVERS section.
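The client-capacity configuration described above can be sketched as a *SERVERS entry; the group name, server ID, and MIN/MAX counts here are hypothetical values chosen for illustration.

```
*SERVERS
TuxCls   SRVGRP=TMQGRP1 SRVID=40
         MIN=2 MAX=50    # bounds the number of concurrent OMQ workstation clients
```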