Chapter 14. Tuning Applications

This chapter discusses the following topics:

- Maximizing Your Application Resources
- When to Use MSSQ Sets
- Enabling Load Balancing
- Assigning Priorities to Interfaces or Services
- Bundling Services into Servers
- Enhancing Efficiency with Application Parameters
- Determining IPC Requirements
- Measuring System Traffic
- Detecting Bottlenecks on UNIX Platforms
- Detecting Bottlenecks on Windows NT Platforms

Maximizing Your Application Resources

Making correct decisions in response to the following questions can improve the functioning of your BEA TUXEDO application:

When to Use MSSQ Sets

When is it beneficial to use MSSQ sets?

When to use MSSQ sets:

- There are several, but not too many, servers.
- Buffer sizes are not too large.
- The servers offer identical sets of services.
- The messages involved are reasonably sized.
- Optimization and consistency of service turnaround time are paramount.

When not to use MSSQ sets:

- There is a large number of servers. (A compromise is to use many MSSQ sets.)
- Buffer sizes are large enough to exhaust one queue.
- Services are different for each server.
- Long messages are being passed to the services, causing the queue to be exhausted. Queue exhaustion causes nonblocking sends to fail, or blocking sends to block.
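
If you decide to use an MSSQ set, you configure it in the SERVERS section of the UBBCONFIG file by giving several instances of a server the same symbolic queue address. A minimal sketch, assuming a hypothetical server BANKSVR and queue name bankq:

    *SERVERS
    # Three instances of BANKSVR read requests from a single shared
    # queue, forming an MSSQ set. RQADDR names the shared request
    # queue; REPLYQ=Y gives each instance its own reply queue.
    BANKSVR  SRVGRP=BANKGRP  SRVID=1  MIN=3  MAX=3  RQADDR="bankq"  REPLYQ=Y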

Two analogies from everyday life may help to show why using MSSQ sets is sometimes, but not always, beneficial:

Enabling Load Balancing

You can control whether a load balancing algorithm is used on the system as a whole. With load balancing, a load factor is applied to each service within the system, and you can track the total load on every server. Every service request is sent to the qualified server that is least loaded.

This algorithm, although effective, is expensive and should be used only when necessary, that is, only when a service is offered by servers that use more than one queue. Services offered by only one server, or by multiple servers belonging to a single MSSQ (Multiple Server, Single Queue) set, do not need load balancing. For these services, set the LDBAL parameter to N. In other cases, you may want to set LDBAL to Y.
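
In the UBBCONFIG file, LDBAL is set in the RESOURCES section. A minimal sketch:

    *RESOURCES
    # Set LDBAL to N if no service is offered by servers on more
    # than one queue.
    LDBAL    Y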

To figure out how to assign load factors (located in the SERVICES section), run the application for a long period of time and note the average time it takes for each service to be performed. Assign a LOAD value of 50 (LOAD=50) to any service that takes roughly the average amount of time. Any service taking longer than the average amount of time to execute should have a LOAD greater than 50; any service taking less than the average amount of time to execute should have a LOAD less than 50.
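
For example, the resulting load factors might be recorded in the SERVICES section as follows; the service names and values are illustrative only:

    *SERVICES
    # DEPOSIT takes roughly the average service time; AUDIT is
    # noticeably slower than average; BALANCE is faster.
    DEPOSIT    LOAD=50
    AUDIT      LOAD=80
    BALANCE    LOAD=30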

Two Ways to Measure Service Performance Time

You can measure service performance time in one of the following ways:

Assigning Priorities to Interfaces or Services

You can exert significant control over the flow of data in an application by assigning priorities to BEA TUXEDO services using the PRIO parameter.

For an application running on a BEA TUXEDO system, you can specify the PRIO parameter for each service named in the SERVICES section of the application's UBBCONFIG file.

For example, Server 1 offers Interfaces A, B, and C. Interfaces A and B have a priority of 50 and Interface C has a priority of 70. A request for Interface C is always dequeued before a request for A or B. Requests for A and B are dequeued equally with respect to one another. To prevent a message from waiting indefinitely on the queue, the system dequeues every tenth request in first-in, first-out (FIFO) order.
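
In the SERVICES section of the UBBCONFIG file, this example (with hypothetical interface names) would look like the following:

    *SERVICES
    # C is dequeued ahead of A and B, except for every tenth
    # request, which is taken in FIFO order.
    A    PRIO=50
    B    PRIO=50
    C    PRIO=70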

You can also dynamically change a priority with the tpsprio() call. Only preferred clients should be able to increase the service priority. In a system on which servers perform service requests, a server can call tpsprio() to increase the priority of its interface or service calls so that the user does not wait in line for every interface or service request that is required.
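
The following C sketch shows one way a client might raise the priority of a single request; the service name AUDIT is hypothetical and error handling is minimal:

    #include <stdio.h>
    #include <atmi.h>     /* BEA TUXEDO ATMI definitions */

    /* Send one urgent request. buf must be a typed buffer
     * obtained from tpalloc().
     */
    void
    send_urgent(char *buf, long len)
    {
        /* Raise the priority of the next request only. Without the
         * TPABSOLUTE flag, the value is added to the configured
         * PRIO of the service being called.
         */
        if (tpsprio(20, 0) == -1)
            fprintf(stderr, "tpsprio: %s\n", tpstrerror(tperrno));

        if (tpcall("AUDIT", buf, len, &buf, &len, 0) == -1)
            fprintf(stderr, "tpcall: %s\n", tpstrerror(tperrno));
    }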

Characteristics of the PRIO Parameter

The PRIO parameter should be used cautiously. Depending on the order of messages on the queue (for example, A, B, and C), requests for lower-priority interfaces (such as A and B) may be dequeued only once in every ten requests. The result is reduced performance and potentially slow turnaround time for those services.

The characteristics of the PRIO parameter are as follows:

Assigning priorities enables you to provide faster service to the most important requests and slower service to the less important requests. You can also give priority to specific users or in specific circumstances.

Bundling Services into Servers

The easiest way to package services into server executables is to not package them at all. Unfortunately, if you do not package services, the number of server executables, and also message queues and semaphores, rises beyond an acceptable level. There is a trade-off between no bundling and too much bundling.

When to Bundle Services

You should bundle services for the following reasons:

Enhancing Efficiency with Application Parameters

You can set the following application parameters to enhance the efficiency of your system:

Setting the MAXACCESSERS, MAXSERVERS, MAXINTERFACES, and MAXSERVICES Parameters

The MAXACCESSERS, MAXSERVERS, MAXINTERFACES, and MAXSERVICES parameters increase semaphore and shared memory costs, so you should choose the minimum value that satisfies the needs of the system. You should also allow for the variation in the number of clients accessing the system at the same time. Defaults may be appropriate for a generous allocation of IPC resources; however, it is prudent to set these parameters to the lowest appropriate values for the application.
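
These limits appear in the RESOURCES section of the UBBCONFIG file. A sketch with illustrative values only:

    *RESOURCES
    # Allow headroom for clients that connect at peak times.
    MAXACCESSERS    100
    MAXSERVERS       50
    MAXINTERFACES   150
    MAXSERVICES     200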

Setting the MAXGTT, MAXBUFTYPE, and MAXBUFSTYPE Parameters

You should increase the value of the MAXGTT parameter if the number of clients in the system multiplied by the percentage of time they are committing a transaction is close to 100. Reaching this threshold may require a large number of clients, depending on the speed of commit. If you increase MAXGTT, you should also increase TLOGSIZE accordingly on every machine. Set MAXGTT to 0 for applications that do not use distributed transactions.

You can limit the number of buffer types and subtypes allowed in the application with the MAXBUFTYPE and MAXBUFSTYPE parameters, respectively. The current default for MAXBUFTYPE is 16. Unless you are creating many user-defined buffer types, you can omit MAXBUFTYPE. However, if you intend to use many different VIEW subtypes, you may want to set MAXBUFSTYPE to exceed its current default of 32.
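
Continuing the illustrative RESOURCES sketch, with values chosen only for the sake of example:

    *RESOURCES
    # MAXGTT would be 0 if no distributed transactions were used;
    # MAXBUFSTYPE is raised above its default of 32 to allow for
    # many VIEW subtypes.
    MAXGTT          100
    MAXBUFTYPE       16
    MAXBUFSTYPE      64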

Setting the SANITYSCAN, BLOCKTIME, BBLQUERY, and DBBLWAIT Parameters

If a system is running on slow processors (for example, due to heavy usage), you can increase the timing parameters: SANITYSCAN, BLOCKTIME, and the individual transaction timeouts. If networking is slow, you can increase the value of the BLOCKTIME, BBLQUERY, and DBBLWAIT parameters.

Setting Application Parameters

The following table describes the system parameters available for tuning an application.

MAXACCESSERS, MAXSERVERS, MAXINTERFACES, and MAXSERVICES
    Set the smallest satisfactory value because of IPC cost. Allow for extra clients.

MAXGTT, MAXBUFTYPE, and MAXBUFSTYPE
    Increase MAXGTT for many clients; set MAXGTT to 0 for nontransactional applications. Use MAXBUFTYPE only if you create eight or more user-defined buffer types. If you use many different VIEW subtypes, increase the value of MAXBUFSTYPE.

BLOCKTIME, TRANTIME, and SANITYSCAN
    Increase the values for a slow system.

BLOCKTIME, TRANTIME, BBLQUERY, and DBBLWAIT
    Increase the values for slow networking.

Determining IPC Requirements

The values of several system parameters determine an application's IPC requirements. You can use the tmboot -c command to test a configuration's IPC needs. Table 14-1 describes the system parameters that affect the IPC needs of an application.

Table 14-1 Tuning IPC Parameters

MAXACCESSERS
    Equals the number of semaphores. The number of message queues is almost equal to MAXACCESSERS + the number of servers with reply queues (the number of servers in an MSSQ set + the number of MSSQ sets).

MAXSERVERS, MAXSERVICES, and MAXGTT
    While MAXSERVERS, MAXSERVICES, MAXGTT, and the overall size of the ROUTING, GROUP, and NETWORK sections affect the size of shared memory, an attempt to devise formulas that correlate these parameters can become complex. Instead, simply run tmboot -c or tmloadcf -c to calculate the minimum IPC resource requirements for your application.

Queue-related kernel parameters
    These parameters need to be tuned to manage the flow of buffer traffic between clients and servers. The maximum total size of a queue in bytes must be large enough to handle the largest message in the application, and the queue should typically be 75 to 85 percent full. A smaller percentage is wasteful; a larger percentage causes message sends to block too frequently.

    Set the maximum size for a message to handle the largest buffer that the application sends.

    The maximum queue length (the largest number of messages allowed to sit on a queue at once) must be adequate for the application's operations.

    Simulate or run the application to measure the average fullness of a queue or its average length. This may be a trial-and-error process in which tunables are estimated before the application is run and adjusted after running under performance analysis.

    For a large system, analyze the effect of parameter settings on the size of the operating system kernel. If the results are unacceptable, reduce the number of application processes or distribute the application across more machines to reduce MAXACCESSERS.

Measuring System Traffic

As on any road on which traffic runs at finite speed, bottlenecks can occur in your system. On a highway, cars can be counted with a cable strung across the road that increments a counter each time a car drives over it. You can measure service traffic in a similar way. For example, at boot time (that is, when tpsvrinit() is invoked), you can initialize a global counter and record a starting time. Subsequently, each time a particular service is called, the counter is incremented. When the server is shut down (that is, when the tpsvrdone() function is invoked), the final count and the ending time are recorded. This mechanism allows you to determine how busy a particular service is over a specified period of time.
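
A minimal C sketch of such a counting server follows; the service name TRANSFER is hypothetical, and a real server would probably log to a file rather than to stderr:

    #include <stdio.h>
    #include <time.h>
    #include <atmi.h>     /* BEA TUXEDO ATMI definitions */

    static long   call_count;    /* incremented on each request */
    static time_t start_time;    /* recorded at boot */

    /* Called once when the server boots: reset the counter and
     * record the starting time.
     */
    int
    tpsvrinit(int argc, char *argv[])
    {
        call_count = 0;
        start_time = time(NULL);
        return 0;
    }

    /* Hypothetical service routine: count each invocation. */
    void
    TRANSFER(TPSVCINFO *rqst)
    {
        call_count++;
        /* ... the actual work of the service goes here ... */
        tpreturn(TPSUCCESS, 0, rqst->data, 0L, 0);
    }

    /* Called at shutdown: record the final count and elapsed time. */
    void
    tpsvrdone(void)
    {
        fprintf(stderr, "TRANSFER: %ld calls in %ld seconds\n",
                call_count, (long)(time(NULL) - start_time));
    }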

In the BEA TUXEDO system, bottlenecks can originate from data flow patterns. The quickest way to detect bottlenecks is to begin with the client and measure the amount of time required by relevant services.

Example: Detecting a System Bottleneck

Suppose Client 1 requires 4 seconds to print to the screen. Calls to time(2) reveal that the tpcall to service A is the culprit, with a 3.7-second delay. Monitoring service A at its top and bottom shows that the service itself takes only 0.5 seconds, which implies that a queue may be clogged, a condition you can confirm with the pq command.

On the other hand, suppose service A takes 3.2 seconds. The individual parts of service A can be bracketed and measured. Perhaps service A issues a tpcall to service B, which requires 2.8 seconds. It should then be possible to isolate queue time or message send blocking time. Once the relevant amount of time has been identified, the application can be retuned to handle the traffic.
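
The bracketing itself can be as simple as the following C sketch; the service name A is hypothetical, and because time(2) has only one-second resolution, a finer-grained clock such as gettimeofday() may be preferable for short calls:

    #include <stdio.h>
    #include <time.h>
    #include <atmi.h>

    /* Time a single request to the (hypothetical) service A.
     * buf must be a typed buffer obtained from tpalloc().
     */
    void
    timed_call(char *buf, long len)
    {
        time_t before, after;

        before = time(NULL);
        if (tpcall("A", buf, len, &buf, &len, 0) == -1)
            fprintf(stderr, "tpcall: %s\n", tpstrerror(tperrno));
        after = time(NULL);

        /* Wall-clock time for the request, including any time the
         * message spent waiting on a queue.
         */
        fprintf(stderr, "service A took %ld seconds\n",
                (long)(after - before));
    }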

Using time(2), you can measure the duration of the following:

Detecting Bottlenecks on UNIX Platforms

The UNIX system sar(1) command provides valuable performance information that can be used to find system bottlenecks. You can use sar(1) to do the following:

The following table describes the sar(1) command options.

Use This Option To
-u Gather CPU utilization numbers, including the portion of the time running in user mode, running in system mode, idle with some process waiting for block I/O, and otherwise idle.
-b Report buffer activity, including transfers per second of data between system buffers and disk, or other block devices.
-c Report system call activity. This includes system calls of all types, as well as specific system calls such as fork(2) and exec(2).
-w Monitor system swapping and switching activity. This includes the number of transfers for swapins and swapouts.
-q Report average queue lengths while occupied and the percent of time occupied.
-m Report message and system semaphore activities, including the number of primitives per second.
-p Report paging activity, including the address translation page faults, page faults and protection errors, and the valid pages reclaimed for free lists.
-r Report unused memory pages and disk blocks, including the average number of pages available to user processes and the disk blocks available for process swapping.

Note: Some flavors of the UNIX system do not provide the sar(1) command, but offer equivalent commands instead. BSD, for example, offers the iostat(1) command; Sun offers perfmeter(1).

Detecting Bottlenecks on Windows NT Platforms

On Windows NT platforms, use the Performance Monitor to collect system information and detect bottlenecks. Select the following options from the Start menu.

Start -> Programs -> Administrative Tools -> Performance Monitor

