OTMQ is implemented on top of the Oracle Tuxedo infrastructure, which follows a typical client-server model. The basic queuing features are provided by the central OTMQ server TuxMsgQ(). For more information, see the Oracle Tuxedo Message Queue Reference Guide.

User processes cannot access the QSpace directly; all requests that operate on the QSpace must go through the OTMQ server. Figure 1 shows the OTMQ and Oracle Tuxedo architecture.

Figure 1 OTMQ and Oracle Tuxedo Architecture

Several server components enrich the OTMQ queuing features. For more information, see the Oracle Tuxedo Message Queue UBB Server Reference.
• For SHM/MP domains, UBBCONFIG is the configuration file. It is the text version of the configuration file and can be created and edited with any text editor. Before booting the application, the binary version of the configuration file, TUXCONFIG, must be created from the text version with the tmloadcf(1) command.

For applications consisting of multiple domains, an additional configuration file for domain connections is required. Similar to the UBBCONFIG file, the DMCONFIG file is the text version; it describes how multiple domains are connected and which services they make accessible to each other. Use the dmloadcf(1) utility to generate the binary version of the domain configuration file, BDMCONFIG.

The *RESOURCES section defines parameters that control the application as a whole and serve as system-wide defaults.

The *MACHINES section defines parameters for each machine in an application.

The *GROUPS section designates logically grouped sets of servers. At least one server group must be defined for each machine.

The *SERVERS section contains information specific to a server process. Each entry in this section represents a server process to be booted in the application.

The *SERVICES section provides information on the services advertised by server processes.

For more information, see UBBCONFIG(5) in "Section 5 - File Formats, Data Descriptions, MIBs, and System Processes Reference" in the Oracle Tuxedo Reference Guide.

The DMCONFIG file is made up of the following specification sections: *DM_LOCAL, *DM_REMOTE, *DM_EXPORT, *DM_IMPORT, *DM_RESOURCES, *DM_ROUTING, *DM_ACCESS_CONTROL, and *DM_TDOMAIN.

The *DM_LOCAL section defines one or more local domain access point identifiers and their associated gateway groups. Correspondingly, the *DM_REMOTE section defines one or more remote domain access point identifiers and their characteristics.

The *DM_EXPORT section provides information on the services exported by each individual local domain access point.
The *DM_IMPORT section provides information on services imported and made available to the local domain through the remote domain access points defined in the *DM_REMOTE section.

For more information, see DMCONFIG(5) in "Section 5 - File Formats, Data Descriptions, MIBs, and System Processes Reference" in the Oracle Tuxedo Reference Guide.

The Workstation component of the Oracle Tuxedo system allows application clients to reside on a machine that does not have a full server-side installation, that is, a machine that does not support any administration or application servers. All communication between a Workstation client (an application client running on a Workstation component) and the server application takes place over the network. For more information, see Using The Oracle Tuxedo ATMI Workstation Component.
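The section layout described above can be illustrated with a minimal, hypothetical UBBCONFIG skeleton. All names here (SITE1, GROUP1, the paths, and the IPCKEY value) are placeholders for illustration, not values taken from this guide:

```
*RESOURCES
IPCKEY          123456
MASTER          SITE1
MODEL           SHM

*MACHINES
"myhost"        LMID=SITE1
                TUXCONFIG="/home/app/tuxconfig"
                TUXDIR="/opt/tuxedo"
                APPDIR="/home/app"

*GROUPS
GROUP1          LMID=SITE1 GRPNO=1

*SERVERS
TuxMsgQ         SRVGRP=GROUP1 SRVID=1

*SERVICES
```

A multi-domain application would pair this with a DMCONFIG file whose *DM_LOCAL, *DM_REMOTE, *DM_EXPORT, and *DM_IMPORT sections describe the access points and the services each domain exposes; see DMCONFIG(5) for the exact parameters of each section.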
3. An OTMQ application on an Oracle Tuxedo SHM domain can create one or multiple QSpaces. Each QSpace maps to a service provided by the OTMQ server TuxMsgQ. Application clients first call the OTMQ API tpqattach() to attach to a queue and get access to a specific QSpace, then call tpenqplus() to enqueue messages or tpdeqplus() to dequeue messages from the attached queue. A client can also enqueue a message to a queue that belongs to another QSpace to which it is not attached, as shown in Figure 2.

An OTMQ application on an Oracle Tuxedo MP domain can create one or multiple QSpaces on its master and slave nodes respectively. Each QSpace maps to a service provided by the OTMQ server TuxMsgQ. Application clients on the master or slave node can first call the OTMQ API tpqattach() to attach to a queue and get access to a specific QSpace on their own node, and then call tpenqplus() to enqueue messages or tpdeqplus() to dequeue messages from the attached queue. A client on node B can also enqueue a message to a queue that belongs to another QSpace on node A to which it is not attached, as shown in Figure 3.

An OTMQ application can be deployed across multiple Oracle Tuxedo domains. Each domain can have one or multiple QSpaces. Each QSpace maps to a service provided by the OTMQ server TuxMsgQ.
Application clients in domain A can first call the OTMQ API tpqattach() to attach to a queue and get access to a specific QSpace in their domain, and then call tpenqplus() to enqueue messages to queues that belong to the attached QSpace, or call tpdeqplus() to dequeue messages from the attached queue. A client in domain B can also enqueue a message to a queue that belongs to the QSpace of domain A, as long as domain A has exported the service for its QSpace, as shown in Figure 4.

An OTMQ application can also have workstation clients by utilizing the Oracle Tuxedo Workstation component, as shown in Figure 5.

The administrator must also create a queue space using the queue administration program tmqadmin(1), or the OTMQ_MIB() Management Information Base (MIB), which includes extended classes for OTMQ. There is a one-to-one mapping of queue space to queue server group, since each queue space is a resource manager (RM) instance and only a single RM can exist in a group. The administrator can define a single server group in the application configuration for the queue space by specifying the group in UBBCONFIG or by using tmconfig to add the group dynamically.

Part of the task of defining a queue is specifying the order of messages on the queue. Queue ordering can be determined by message availability time, expiration time, priority, FIFO, or LIFO. For more information, see the tmqadmin() qcreate sub-command in the Oracle Tuxedo Message Queue Command Reference.

Traditional Oracle Tuxedo /Q clients can communicate with OTMQ with only a configuration change, as shown in Figure 6. Of course, to take advantage of the new features introduced in OTMQ, application code needs to be changed: the traditional Tuxedo /Q tpenqueue(3c)/tpdequeue(3c) functions need to be replaced with their OTMQ counterparts tpenqplus(3c)/tpdeqplus(3c).
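The attach-then-enqueue flow described above can be summarized in pseudocode. The argument lists are deliberately elided: the full signatures of these APIs are given in the Oracle Tuxedo Message Queue Reference Guide, and this sketch only shows the required call order.

```
/* Pseudocode sketch of the OTMQ client flow; arguments elided. */
tpqattach(...)    /* mandatory first step: attach to a queue in the target QSpace */
tpenqplus(...)    /* enqueue a message to a queue in the attached QSpace          */
tpdeqplus(...)    /* dequeue a message from the attached queue                    */
```

The key design point is that tpqattach() is mandatory before any tpenqplus()/tpdeqplus() call, although an attached client may still enqueue to a queue in a QSpace it is not attached to, as described above.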
Traditional Tuxedo /Q clients that use the APPQ_MIB classes (T_APPQ, T_APPQMSG, T_APPQSPACE, and T_APPQTRANS) need to replace them with the corresponding OTMQ_MIB(5) classes (T_OTMQ, T_OTMQMSG, T_OTMQSPACE, and T_OTMQTRANS).

To support cross-group connection with OMQ, OTMQ provides the Link Driver Server TuxMsgQLD(), as shown in Figure 7. With this server deployed, OTMQ and OMQ can have message-level compatibility, with the following limitations:

To support interoperability between OMQ client applications and OTMQ, OTMQ provides the Client Library Server TuxCls(). With this server deployed, OTMQ and OMQ clients can have message-level compatibility. A traditional OMQ client can work with an OTMQ server without any code change, recompiling and relinking, or configuration change, except for the limitations below:

To integrate the OTMQ global naming service with OMQ naming, the global naming file, which is indicated by the environment variables DMQNS_DEFAULTPATH and DMQNS_DEVICE, should have read and write permissions for both the OTMQ and OMQ naming services.
• If you are using tpenqplus/tpdeqplus, you must add tpqattach before invoking tpenqplus/tpdeqplus, and the attached qspace/qname must be an OTMQ qspace/qname.
• Use buildqclient to re-compile the code.

Core servers TMS_TMQM(), TuxMsgQ(), TuxMQFWD(), and TMQEVT() are provided by OTMQ. TMS_TMQM() manages global transactions for the queued message facility; it must be defined in the *GROUPS section of the configuration file. TuxMsgQ() and TuxMQFWD() provide message queuing services to users; they must be defined in the *SERVERS section of the configuration file. TMQEVT() provides publish/subscribe services to users; it must be defined in the *SERVERS section of the configuration file.

Supplemental servers TMQ_NA(), TuxMsgQLD(), and TuxCls() can be configured at the machine level for one or more OTMQ queue spaces.

In addition to the standard requirements of a group name tag and a value for GRPNO, there must be a server group defined for each OTMQ queue space the application will use. The TMSNAME and OPENINFO parameters need to be set. TMS_TMQM is the name of the transaction manager server for OTMQ. In the OPENINFO parameter, TUXEDO/TMQM is the literal name of the resource manager as it appears in $TUXDIR/udataobj/RM. The values for <device_name> and <queue_space_name> are instance-specific and must be set to the pathname of the universal device list and the name associated with the queue space, respectively. These values are specified by the administrator using tmqadmin(1).

Listing 1 OTMQ Server Group Configuration

TuxMsgQ() takes the same CLOPT as the TMQUEUE server of Oracle Tuxedo /Q. The TuxMsgQ() reference page gives a full description of the *SERVERS section of the configuration file. In addition, TuxMsgQ() has several unique properties that can be specified:

Attach is the mandatory operation for an OTMQ application to access the OTMQ queue space. The attach operation can be configured with a default timeout value per queue space.
To configure the default attach timeout for the queue space, add a service named TuxMQATH<qspace_name> in the *SERVICES section, and set the BLOCKTIME property of this service as the default attach timeout value.

TuxMsgQ() provides a sanity check function to remove invalid queue owners. To configure the sanity check interval for the TuxMsgQ() server, set "-i <interval>" in CLOPT. The <interval> value means the TuxMsgQ() server performs a sanity check after receiving that number of messages.

The OTMQ Offline Trade Driver Server TuxMQFWD() is part of the OTMQ Reliable Message Delivery feature. It is responsible for resending recoverable messages to the target if the OTMQ Message Queue Manager Server TuxMsgQ() fails to deliver a message to the target the first time. Refer to the TuxMQFWD() reference page for a full description of the *SERVERS section of the configuration file for this server.

Any improper configuration that prevents the TuxMQFWD() server from dequeuing or forwarding messages will cause the server boot to fail. Some important items should be emphasized:
• The *SRVGRP of the TuxMQFWD() server must have TMSNAME set to TMS_TMQM, and the OPENINFO must be set to associate with the proper device and queue space.
• REPLYQ of TuxMQFWD() server should be set to N.
• TuxMQFWD() server does not advertise any service.
• Only one TuxMQFWD() server process can be configured for one OTMQ queue space.

The OTMQ Event Broker TMQEVT() is required for the publish/subscribe feature. It is responsible for notifying subscribers when topics are published. It must be configured in a separate server group from TuxMsgQ() and TuxMQFWD(). Refer to the TMQEVT() reference page for a full description of the *SERVERS section of the configuration file for this server.

The OTMQ Naming Server TMQ_NA() can be configured to provide the naming feature. It allows the application to bind a queue alias to an actual queue name, and also supports looking up the actual queue name through a provided queue alias. Refer to the TMQ_NA() reference page for a full description of the *SERVERS section of the configuration file for this server. One limitation of the naming server is that only one TMQ_NA() server process can be configured for one OTMQ queue space.

The TMQ_NA() server may boot with a pre-defined name space file, which is specified when creating or updating the queue space. The following example shows the content of a static name space file, which defines the association between a user-defined queue alias and the actual queue name.
Table 1 Static Name Space File Content

Refer to the qspacecreate or qspacechange command of tmqadmin(1) for specifying the static name space file.

The OTMQ command tmqadmin(1) is used to establish the resources of OTMQ. The OTMQ_MIB Management Information Base also provides an alternative method of administering OTMQ programmatically. See the Oracle Tuxedo Message Queue MIB Reference for more information on MIB operations.

Most of the key commands of tmqadmin have positional parameters. If the positional parameters (those not specified with a dash (-) preceding the option) are not specified on the command line when the command is invoked, tmqadmin prompts you for the required information.

The universal device list (UDL) is a VTOC file under the control of the Oracle Tuxedo system. It maps the physical storage space on a machine where the Oracle Tuxedo system is run. An entry in the UDL points to the disk space where the queues and messages of a queue space are stored; the Oracle Tuxedo system manages the input and output for that space. The UDL is created by tmloadcf(1) when the configuration file is first loaded.

# The QMCONFIG variable points to an existing device where the UDL

If you are going to add an entry to an existing Oracle Tuxedo UDL, the value of the QMCONFIG variable must be the same pathname specified in TUXCONFIG. Once you have invoked tmqadmin(1), it is recommended that you run the lidl command to see where space is available before creating your new entry.

Do the following steps to store a queue space in Oracle Database. Unlike creating an entry in the universal device list, you do not need to create a queue device with the tmqadmin crdl command to create an entry in Oracle Database.
1. Install the Oracle Database 10g client (or a later release) and set the Oracle Database environment. For example, on Linux platforms, create the link libclntsh.so for libclntsh.so.x.x (for example, libclntsh.so.10.1), and set LD_LIBRARY_PATH for the libclntsh.so link.
2. Set the QMCONFIG variable in the format "DB:Oracle_XA:..." (see the example below). QMCONFIG points to the Oracle Database schema. The database table is named "QS_" + the OTMQ tablespace name; the database table name TUX_VTOC_UDL is reserved for Tuxedo.
3. Define OPENINFO in UBBCONFIG. OPENINFO starts with $QMCONFIG and is followed by ":$QSPACE_NAME". For example:

For more information, see OPENINFO in "UBBCONFIG(5)" in Oracle Tuxedo File Formats, Data Descriptions, MIBs, and System Processes Reference.
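The original example listing is not reproduced in this excerpt. A hypothetical sketch of a QMCONFIG setting and its matching OPENINFO entry follows; the connection values (orauser, orapwd, the ORCL net service name) and the queue space name QSPACE1 are placeholders, and the exact Oracle_XA open-string fields are described in the Oracle Database XA documentation:

```
# Shell environment: QMCONFIG points to the Oracle Database schema
QMCONFIG="DB:Oracle_XA:Oracle_XA+Acc=P/orauser/orapwd+SesTm=120+SqlNet=ORCL"
export QMCONFIG

# UBBCONFIG *GROUPS entry: OPENINFO is $QMCONFIG followed by ":$QSPACE_NAME"
*GROUPS
QGRP1   LMID=SITE1 GRPNO=1
        TMSNAME=TMS_TMQM
        OPENINFO="DB:Oracle_XA:Oracle_XA+Acc=P/orauser/orapwd+SesTm=120+SqlNet=ORCL:QSPACE1"
```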
Note: Listing 2 qspacecreate

The program does not prompt you to specify the size of the area to reserve in shared memory for storing non-persistent messages for all queues in the queue space. When you require non-persistent (memory-based) messages, you must specify the size of the memory area on the qspacecreate command line with the -n option.

Parameter Descriptions describes the required parameters of the qspacecreate command in detail. Besides manually inputting each required parameter, you can also leverage template files to set those required parameters; see Easy Configuration for Creating a Queue Space for more information. OTMQ reports an error message immediately if you set an invalid value; see Error Report for more information.

You can choose to initialize the queue space when you use the qspacecreate command, or you can let it be done by the qopen command when you first open the queue space. For more information, see QSpace High Availability.
• Each TMS_TMQM server in the group that uses this queue space
• For the number of concurrent processes, count one for each TMS_TMQM, TuxMsgQ, or TuxMQFWD server in the group that uses this queue space, plus one as a fudge factor.

Most of these prompts are the same as those of the qspacecreate sub-command of the Oracle Tuxedo /Q qmadmin command. The last three prompts are specific to OTMQ. If the SAF and DQF queues are created by default, the recoverable delivery features using SAF and DQF are enabled. Otherwise, they cannot be used until the user creates these two queues manually with the qcreate command and enables them through OTMQ_MIB.

You can choose to initialize the queue space when you use the qspacecreate command, or you can let it be done by the qopen command when you first open the queue space.

You can use either of the following ways to set the parameters of the qspacecreate command.
1. qs_high_capacity_template (see Listing 5 for its context)
2. qs_general_capacity_template (see Listing 6 for its context)
3. qs_low_capacity_template (see Listing 7 for its context)
• Input the number of the template file (see Listing 3 for an example). Here, number "1" stands for qs_high_capacity_template, number "2" stands for qs_general_capacity_template, and number "3" stands for qs_low_capacity_template.
• Input the value of the template file. Here, value "H" stands for qs_high_capacity_template, value "G" stands for qs_general_capacity_template, and value "L" stands for qs_low_capacity_template.

For more information about the qspacecreate command, see "tmqadmin" in the Oracle Tuxedo Message Queue Command Reference.

Listing 5 Context of qs_high_capacity_template

Listing 6 Context of qs_general_capacity_template

Listing 7 Context of qs_low_capacity_template

You can create your own template file by specifying the -D option of the qspacecreate command. The template file is located at $TUXDIR/udataobj/OTMQ/template/. You should either name this template file or give this template file an absolute file path. See Listing 8 and Listing 9 for examples.

For more information about the qspacecreate command, see "tmqadmin" in the Oracle Tuxedo Message Queue Command Reference.

Listing 10 Example A: Error Report Usage

Listing 11 Example B: Error Report Usage

QSpace high availability is supported by the Oracle Tuxedo automatic failover feature. The solution is to enable server group migration and configure master and backup machines for the OTMQ server group. When the master machine goes down, the OTMQ server group migrates to the backup machine automatically. The QSpace must be located on NFS storage that can be accessed by both the master and backup machines. Listing 12 shows a UBBCONFIG file example.

When the QSpace is opened for the first time, it is loaded into shared memory. During migration, if the QSpace is already in shared memory, it will not be reloaded, and new messages will be lost. On the backup machine, it is not recommended to run the tmqadmin qopen command to open the QSpace.
If needed, after closing the QSpace, it must be removed from shared memory using ipcrm.

Besides enabling server group migration, the key configuration is to set DBBLFAILOVER and SGRPFAILOVER in the *RESOURCES section of the UBBCONFIG file, and to set RESTART=Y and MAXGEN greater than 0 for each server in the migration group in the *SERVERS section.

Listing 12 UBBCONFIG File Example

Each queue that you intend to use must be created with the tmqadmin(1) qcreate command. First you have to open the queue space with the qopen command. If you do not provide a queue space name, qopen prompts for it. (You can also use the T_OTMQ class of the OTMQ_MIB to create a queue.) The prompt sequence for qcreate looks like the following:

Listing 13 qcreate Prompt Sequence

The program does not prompt you for the default delivery policy and memory threshold options. The default delivery policy option allows you to specify whether messages with no specified delivery mode are delivered to persistent (disk-based) or non-persistent (memory-based) storage. The memory threshold option allows you to specify values used to trigger command execution when a non-persistent memory threshold is reached. To use these options, you must specify them on the qcreate command line with -d and -n, respectively. When the delivery policy is specified as persistent, the -q option can specify the storage threshold.

Most of these prompts and options are the same as those of the qcreate sub-command of the Oracle Tuxedo /Q qmadmin command. In addition, OTMQ provides more specific options:

Besides manually inputting each required parameter, you can also leverage template files to create queues; see Easy Configuration for Creating a Queue for more information. See Using Default Template File to Create a Queue for more information.
• Use default template files and the -t option to create different queues: queue MRQ, PQ, SQ, and UNLIMITQ. See Using Default Template Files and -t Option to Create Different Queues for more information.

See Using Existing Queue as Template File to Create a Queue for more information. See Define Custom Template File to Create a Queue for more information.

You can use the default template file q.ini to create a queue in FIFO (First In, First Out) order by specifying the -F option of the qcreate command; the q.ini template file is located at $TUXDIR/udataobj/OTMQ/template/q.ini. See Listing 14 for its context and Listing 15 for a usage example.

Listing 15 Example of Using -F Option to Create a Queue

You can use the default template files and the -t option of the qcreate command to create different queues: queue MRQ, PQ, SQ, and UNLIMITQ. A usage example is listed as follows.

You can define your own template file to create a queue by specifying the -D option of the qcreate command. You should either assign an absolute path for your own template file or place your named template file in $TUXDIR/udataobj/OTMQ/template/. Usage examples are listed as follows.

When you use the qspacecreate command to create a queue space on shared storage, you can specify whether the newly created queue space is clustered. If it is, then within one queue space cluster, the first created queue space is called the cluster's primary queue space; all changes to this primary queue space (such as creating a queue) are propagated to the other groups. Queue spaces created after this primary queue space are called physical queue spaces; each group has its dedicated physical queue space. Therefore, a queue space cluster has one primary queue space and at least one physical queue space.
• Queue spaces for all groups in a cluster must reside on shared storage; they use the same QMCONFIG and the same device file.

You can use the qspacecreate command -c option in the following format to create a queue space cluster and all its groups. group1,group2,...,groupn are comma-separated group names; those group names should be the same as the group names specified in *GROUPS in UBBCONFIG. The first group name should be the name of the primary group. The maximum length of queue_space_name + group_name is 14.

To add new queue space groups to an existing cluster, use the qsgroupadd (qsga) queue_space_name group_name command. Each cluster can accommodate 32 groups at maximum.

TMS_TMQM is the name of the transaction manager server for OTMQ. For the OPENINFO parameter, TUXEDO/TMQM is the literal name of the resource manager as it appears in $TUXDIR/udataobj/RM. <device_name> must be set to the universal device list pathname; <queue_space_name> must be set to the name that qspacecreate specifies.

The following listing is an example of server group UBBCONFIG configuration for a queue space cluster.
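The original listing is not reproduced in this excerpt. As a sketch, a hypothetical server-group configuration for a two-group cluster of queue space QSPACE1 might look like the following; the machine IDs, group names, and NFS device path are placeholders, and the colon-separated OPENINFO layout follows the <device_name>/<queue_space_name> convention described above (check the TMS_TMQM reference page for the exact string format):

```
*GROUPS
# Primary group of the cluster; group names must match those given to
# qspacecreate -c (for example: GRP1,GRP2 with GRP1 as the primary group)
GRP1    LMID=SITE1 GRPNO=1
        TMSNAME=TMS_TMQM
        OPENINFO="TUXEDO/TMQM:/nfs/otmq/QUE:QSPACE1"

# Physical group of the cluster on the second machine; same shared device
GRP2    LMID=SITE2 GRPNO=2
        TMSNAME=TMS_TMQM
        OPENINFO="TUXEDO/TMQM:/nfs/otmq/QUE:QSPACE1"
```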
• Group names specified by LMID in UBBCONFIG must be the same as the group names provided in qspacecreate and qsgroupadd.

The configuration of the OTMQ server (TuxMsgQ) in the *SERVERS section of UBBCONFIG is the same as for an ordinary non-clustered queue space. For more information, see UBBCONFIG in File Formats, Data Descriptions, MIBs, and System Processes Reference.

Application clients call the OTMQ API tpqattach() first to attach to a queue and get access to a specific queue space, and then call tpenqplus() to enqueue messages or tpdeqplus() to dequeue messages from that attached queue. Attach is the mandatory operation for an OTMQ application to access the OTMQ queue space. OTMQ leverages the Tuxedo client/server affinity feature to guarantee that an application client's subsequent requests are routed to the same OTMQ server in the same group that it attached to with tpqattach(). To use this Tuxedo client/server affinity feature, set AFFINITYSCOPE=GROUP in the *SERVICES section of UBBCONFIG.

Only metadata changes, made by the qcreate, qchange, or qdestroy command and executed on the primary queue space, are propagated to the other queue spaces in the cluster. All other commands on the primary queue space, and all commands on physical queue spaces, are applied locally. If all queue space groups are running, the propagation takes effect immediately through the MIB (the TUXCONFIG environment variable must be configured). The OTMQ server on a physical group checks with the OTMQ server on the primary group at startup; if it finds that its queue space metadata is not consistent with the primary queue space, it automatically synchronizes with the primary queue space.

The OTMQ server is an Oracle Tuxedo server; when a queue space group fails, the OTMQ server uses the Tuxedo automatic failover feature as normal Tuxedo application servers do. With this feature, queue space high availability is supported.
For more information, see QSpace High Availability.

After proper configuration, an application that attaches to the queue space QS1 can then directly enqueue messages, via tpenqplus(), to a remote queue of queue space QS2 that resides in a remote domain.

This section describes how to configure the OTMQ WS client. The OTMQ WS client must set some environment variables to take advantage of the WS SAF feature. The following configuration options can be set using a script, WSENVFILE, or default values. WSENVFILE is the name of a file containing environment variable settings to be set in the client's environment. The description of its format can be found in the Oracle Tuxedo documentation.

WSC_JOURNAL_ENABLE: If set to 1, WS SAF is enabled. If set to 0, WS SAF is disabled. The default value is 0.

WSC_JOURNAL_PATH: Specifies the journal file path. The default value is "./" on UNIX and ".\\" on Windows.

WSC_JOURNAL_SIZE: Initial size of the journal file. The default value is 49150.

WSC_JOURNAL_CYCLE_BLOCKS: If set, the journal cycles (reuses) disk blocks when full and overwrites previous messages. The default value is 0.

WSC_JOURNAL_FIXED_SIZE: Determines whether the journal size is fixed or allowed to grow. The default value is 0.

WSC_JOURNAL_PREALLOC: If set, the journal file disk blocks are pre-allocated when the journal is initially opened. The default value is 1.

WSC_JOURNAL_MSG_BLOCK_SIZE: Defines the file I/O block size, in bytes. The default value is 0.

OTMQ provides a utility, ConvertQSPACE(), to upgrade an Oracle Tuxedo /Q Qspace to an OTMQ Qspace, so that customers who have already deployed Oracle Tuxedo /Q applications can benefit from OTMQ new features without data loss. Refer to the ConvertQSPACE(1) reference page for a full description of its usage.
3. Make sure the QMCONFIG environment variable is configured as the /Q device.
4. Run the utility with a) the OTMQ device name, b) the Qspace to be migrated, and c) the OTMQ Qspace ipckey:
ConvertQSPACE -d [OTMQ device name] -s [Qspace name] -i [OTMQ ipckey]
6. Make sure the QMCONFIG environment variable is configured as the OTMQ device. Configure the OTMQ servers in the Oracle Tuxedo UBBCONFIG file according to the OTMQ device and Qspace name.

After the Tuxedo /Q Qspace is upgraded to an OTMQ Qspace and the OTMQ servers are booted, Tuxedo /Q clients can communicate with OTMQ without any change. Alternatively, Tuxedo /Q clients can be re-compiled and re-linked with the OTMQ libraries to use the /Q-compatible APIs tpenqueue() and tpdequeue() provided by OTMQ. To take advantage of OTMQ new features, the application code needs to be changed to use OTMQ APIs. Within one application, or among applications communicating with each other, the /Q-compatible APIs (for example, tpenqueue() and tpdequeue()) should not be mixed with other OTMQ APIs (for example, tpenqplus(), tpdeqplus(), tpqconfirmmsg()); otherwise the result is unpredictable.

OTMQ provides the Link Driver Server TuxMsgQLD() as the counterpart of the OMQ Link Driver to achieve message-level compatibility for cross-group communication between OTMQ and OMQ applications. TuxMsgQLD() also provides routing functionality like the traditional OMQ Link Driver, but with limitations. For more information, see Interoperability.

TuxMsgQLD() requires a pre-created link table and routing table in the corresponding OTMQ Qspace. The link table consists of entries that each stand for one remote OMQ group. The routing table consists of entries that each stand for one combination of target group and routing-through group. The link table and routing table are created by the tmqadmin(1) qspacecreate command. The default size of the link table is 200; a different size can be specified with the -L option. The default size of the routing table is 200; a different size can be specified with the -R option.

To notify remote OMQ that we are on the same bus, the environment variable DMQ_BUS_ID should be defined before booting up TuxMsgQLD().

TuxMsgQLD() requires a configuration file placed under APPDIR. The configuration file name is specified by the CLOPT -f parameter.
Refer to the TuxMsgQLD() reference page for a full description of the *SERVERS section of the configuration file for this server.

Listing 24 Link Driver Server Configuration File
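The content of Listing 24 is not reproduced in this excerpt. As a sketch of how the pieces fit together, a hypothetical deployment might set the bus ID and name the configuration file as follows; the bus ID value, group/server IDs, and the file name tuxmsgqld.cfg are placeholders, and the directives inside the configuration file itself are described on the TuxMsgQLD() reference page:

```
# Shell environment: announce the OMQ bus ID before booting TuxMsgQLD
DMQ_BUS_ID=25
export DMQ_BUS_ID

# UBBCONFIG *SERVERS entry: -f names the configuration file under APPDIR
*SERVERS
TuxMsgQLD   SRVGRP=QGRP1 SRVID=20
            CLOPT="-A -- -f tuxmsgqld.cfg"
```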
• OTMQ provides the Client Library Server TuxCls() as the counterpart of the OMQ Client Library Server to achieve message-level compatibility with OMQ workstation clients. With TuxCls() deployed, OMQ workstation clients can work with OTMQ servers without any change.

TuxCls() works as a proxy for OMQ clients, so the maximum number of supported OMQ clients is configured by the MIN and MAX parameters of TuxCls() in the UBBCONFIG *SERVERS section.
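As a sketch, a hypothetical *SERVERS entry sizing TuxCls() for up to 50 concurrent OMQ workstation clients might look like the following; the group name, server ID, and MIN/MAX values are placeholders chosen for illustration:

```
*SERVERS
# MIN/MAX bound the number of TuxCls proxy server processes, and
# therefore the number of OMQ workstation clients that can be served
TuxCls      SRVGRP=QGRP1 SRVID=30
            MIN=10 MAX=50
            CLOPT="-A"
```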