BEA WebLogic Enterprise 4.2 Developer Center

Administration
This chapter discusses the following topics:
Maximizing Your Application Resources

Making correct decisions in response to the following questions can improve the functioning of your WLE or BEA TUXEDO application:

When to Use MSSQ Sets (BEA TUXEDO Servers)

Note: MSSQ sets are not supported in the WLE system.

When is it beneficial to use MSSQ sets? Two analogies from everyday life may help to show why using MSSQ sets is sometimes, but not always, beneficial:
Enabling Load Balancing

On BEA TUXEDO systems, you can control whether a load balancing algorithm is used on the system as a whole. With load balancing, a load factor is applied to each service within the system, and you can track the total load on every server. Every service request is sent to the qualified server that is least loaded.

Note: On WLE systems, load balancing is always enabled. In other words, while you can specify LDBAL=N, the parameter is ignored for WLE systems.

This algorithm, although effective, is expensive and should be used only when necessary, that is, only when a service is offered by servers that use more than one queue. Services offered by only one server, or by multiple servers that all belong to an MSSQ (multiple server, single queue) set, do not need load balancing. The LDBAL parameter for these services should be set to N. In other cases, you may want to set LDBAL to Y.

To figure out how to assign load factors (located in the SERVICES section), run the application for a long period of time and note the average time it takes for each service to be performed. Assign a LOAD value of 50 (LOAD=50) to any service that takes roughly the average amount of time. Any service taking longer than the average time to execute should have a LOAD greater than 50; any service taking less than the average time to execute should have a LOAD less than 50.

Two Ways to Measure Service Performance Time

You can measure service performance time in one of the following ways:

Specify the -r option from servopts(5) in the configuration file. The -r option causes a log of services performed to be written to standard error. You can then use the txrpt(1) command to analyze this information. (For details about servopts(5) and txrpt(1), see the BEA TUXEDO Reference Manual.)
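As a minimal sketch of the load-factor assignment described above (the service names and LOAD values here are hypothetical illustrations, not taken from this chapter), LDBAL is enabled in the RESOURCES section and per-service load factors are set in the SERVICES section of the UBBCONFIG file:

```
*RESOURCES
LDBAL     Y          # enable load balancing (BEA TUXEDO only)

*SERVICES
INQUIRY   LOAD=30    # takes less than the average service time
DEPOSIT   LOAD=50    # takes roughly the average service time
TRANSFER  LOAD=70    # takes longer than the average service time
```

The values would be derived from the timing measurements described in the text: run the application for a long period, compute the average service time, and scale each service's LOAD around 50.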
Using Multithreaded JavaServers

The WLE Java system supports the ability to configure multithreaded WLE applications written in Java. A multithreaded WLE JavaServer can service multiple object requests simultaneously, while a single-threaded WLE JavaServer runs only one request at a time.

Running the WLE JavaServer in multithreaded mode or in single-threaded mode is transparent to the application programmer. In the current version of WLE Java, you should not establish multiple threads in your object implementation code. Programs written to WLE Java run without modification in both modes.

The potential for a performance gain from a multithreaded JavaServer depends on:

If the application is running on a single-processor machine and the application is CPU-intensive only, without any I/O or delays, in most cases the multithreaded JavaServer will not perform better. In fact, due to the overhead of switching between threads, the multithreaded JavaServer in this configuration may perform worse than a single-threaded JavaServer. A performance gain is more likely with a multithreaded JavaServer when the application has some delays or is running on a multiprocessor machine.

You can establish the number of threads for a Java server application by using the -M option to the JavaServer parameter. This parameter is used in the SERVERS section of the application's UBBCONFIG file. The -M options are described in the section "JavaServer Command Line Options" on page 3-40.

For multithreaded WLE JavaServers, you must account for the number of worker threads that each server is configured to run. A worker thread is a thread that is started and managed by the WLE Java software, as opposed to threads started and managed by an application program. Internally, WLE Java manages a pool of available worker threads. When a client request is received, an available worker thread from the thread pool is scheduled to execute the request. When the request is done, the worker thread is returned to the pool of available threads.

The MAXACCESSERS parameter in the application's UBBCONFIG file sets the maximum number of concurrent accessers of a WLE system. Accessers include native and remote clients, servers, and administration processes. A single-threaded server counts as one accesser.

For a multithreaded JavaServer, the number of accessers can be up to twice the maximum number of worker threads that the server is configured to run, plus one for the server itself. However, to calculate a MAXACCESSERS value for a WLE system running multithreaded servers, do not simply double the existing MAXACCESSERS value of the whole system. Instead, add up the accessers for each multithreaded server.

For example, assume that you have three multithreaded JavaServers in your system. JavaServer A is configured to run three worker threads, JavaServer B is configured to run four worker threads, and JavaServer C is configured to run five worker threads. The accesser requirement of these servers is calculated by using the following formula:

[(3*2) + 1] + [(4*2) + 1] + [(5*2) + 1] = 27 accessers

Assigning Priorities to Interfaces or Services

You can exert significant control over the flow of data in an application by assigning priorities to BEA TUXEDO services using the PRIO parameter. For an application running on a BEA TUXEDO system, you can specify the PRIO parameter for each service named in the SERVICES section of the application's UBBCONFIG file.

For example, Server 1 offers Interfaces A, B, and C. Interfaces A and B have a priority of 50 and Interface C has a priority of 70. A request for Interface C is always dequeued before a request for A or B. Requests for A and B are dequeued equally with respect to one another. The system dequeues every tenth request in first-in, first-out (FIFO) order to prevent a message from waiting indefinitely on the queue.

Assigning priorities enables you to provide faster service to the most important requests and slower service to the less important requests. You can also give priority to specific users or in specific circumstances.

You can also dynamically change a priority with the tpsprio() call. Only preferred clients should be able to increase the service priority. In a system in which servers perform service requests, a server can call tpsprio() to increase the priority of its interface or service calls so that the user does not wait in line for every interface or service request that is required.

Characteristics of the PRIO Parameter

The PRIO parameter should be used cautiously. Depending on the order of messages on the queue (for example, A, B, and C), some messages (such as A and B) may be dequeued only one in ten times. This means reduced performance and potentially slow turnaround time on those services.

The characteristics of the PRIO parameter are as follows:
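As a sketch of the priority example above (the service names are hypothetical stand-ins for Interfaces A, B, and C), the PRIO parameter is set per service in the SERVICES section of the UBBCONFIG file:

```
*SERVICES
INTERFACE_A    PRIO=50    # dequeued equally with INTERFACE_B
INTERFACE_B    PRIO=50
INTERFACE_C    PRIO=70    # dequeued ahead of A and B
```

With this configuration, requests for INTERFACE_C are normally dequeued first; the system's every-tenth-request FIFO rule keeps lower-priority requests from waiting indefinitely.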
Bundling Services into Servers (BEA TUXEDO Servers)

The easiest way to package services into server executables is to not package them at all. Unfortunately, if you do not package services, the number of server executables, and also the number of message queues and semaphores, rises beyond an acceptable level. There is a trade-off between no bundling and too much bundling.

When to Bundle Services

You should bundle services for the following reasons: For example, consider the bankapp application, in which the WITHDRAW, DEPOSIT, and INQUIRY services are all teller operations. Administration of services becomes simpler.
Enhancing Efficiency with Application Parameters

You can set the following application parameters to enhance the efficiency of your system:

MAXACCESSERS, MAXSERVERS, MAXINTERFACES, and MAXSERVICES

MAXGTT, MAXBUFTYPE, and MAXBUFSTYPE

Setting the MAXACCESSERS, MAXSERVERS, MAXINTERFACES, and MAXSERVICES Parameters

The MAXACCESSERS, MAXSERVERS, MAXINTERFACES, and MAXSERVICES parameters increase semaphore and shared memory costs, so you should choose the minimum values that satisfy the needs of the system. You should also allow for variation in the number of clients accessing the system at the same time. Defaults may be appropriate for a generous allocation of IPC resources; however, it is prudent to set these parameters to the lowest values appropriate for the application.

The MAXACCESSERS parameter sets the maximum number of concurrent accessers of a WLE system. Accessers include native and remote clients, servers, and administration processes. A single-threaded server counts as one accesser.

For multithreaded WLE JavaServers, you must account for the number of worker threads that each server is configured to run. For a multithreaded JavaServer, the number of accessers can be up to twice the maximum number of worker threads that the server is configured to run, plus one for the server itself. However, to calculate a MAXACCESSERS value for a WLE system running multithreaded servers, do not simply double the existing MAXACCESSERS value of the whole system. Instead, add up the accessers for each multithreaded server.

For example, assume that you have three multithreaded JavaServers in your system. JavaServer A is configured to run three worker threads, JavaServer B four worker threads, and JavaServer C five worker threads. The accesser requirement of these servers is calculated by using the following formula:

[(3*2) + 1] + [(4*2) + 1] + [(5*2) + 1] = 27 accessers

Setting the MAXGTT, MAXBUFTYPE, and MAXBUFSTYPE Parameters

You should increase the value of the MAXGTT parameter if the product of the number of clients in the system and the percentage of time they are committing a transaction is close to 100. This may require a large number of clients, depending on the speed of commit. If you increase MAXGTT, you should also increase TLOGSIZE accordingly for every machine. You should set MAXGTT to 0 for applications that do not use distributed transactions.

You can limit the number of buffer types and subtypes allowed in the application with the MAXBUFTYPE and MAXBUFSTYPE parameters, respectively. The current default for MAXBUFTYPE is 16. Unless you are creating many user-defined buffer types, you can omit MAXBUFTYPE. However, if you intend to use many different VIEW subtypes, you may want to set MAXBUFSTYPE to exceed its current default of 32.

Setting the SANITYSCAN, BLOCKTIME, BBLQUERY, and DBBLWAIT Parameters

If a system is running on slow processors (for example, due to heavy usage), you can increase the timing parameters: SANITYSCAN, BLOCKTIME, and individual transaction timeouts. If networking is slow, you can increase the value of the BLOCKTIME, BBLQUERY, and DBBLWAIT parameters.

Setting Application Parameters

The following table describes the system parameters available for tuning an application.

Determining IPC Requirements

The values of different system parameters determine IPC requirements. You can use the tmboot -c command to test a configuration's IPC needs. The values of the following parameters affect the IPC needs of an application:

Table 17-1 describes the system parameters that affect the IPC needs of an application.
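The accesser arithmetic for multithreaded JavaServers can be sketched as follows. This is a minimal illustration only: the worker-thread counts are the ones from the three-server example in the text, and the rule applied (twice the worker threads, plus one for the server itself, summed per server) is the one the text gives.

```python
def server_accessers(worker_threads: int) -> int:
    """Accesser requirement for one multithreaded JavaServer:
    up to twice its worker threads, plus one for the server itself."""
    return worker_threads * 2 + 1

# Worker-thread counts for JavaServers A, B, and C from the example.
worker_counts = [3, 4, 5]

# Sum the per-server requirements rather than doubling a
# system-wide MAXACCESSERS value.
total = sum(server_accessers(n) for n in worker_counts)
print(total)  # 27 accessers
```

Clients, single-threaded servers, and administration processes would each add one more accesser on top of this total when sizing MAXACCESSERS.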
Table 17-1 Tuning IPC Parameters

Measuring System Traffic

As on any road on which traffic runs at finite speed, bottlenecks can occur in your system. On a highway, cars can be counted with a cable strung across the road that causes a counter to be incremented each time a car drives over it. Similarly, you can measure service traffic. For example, at boot time (that is, when tpsvrinit() is invoked), you can initialize a global counter and record a starting time. Subsequently, each time a particular service is called, the counter is incremented. When the server is shut down (by invoking the tpsvrdone() function), the final count and the ending time are recorded. This mechanism allows you to determine how busy a particular service is over a specified period of time.

In the BEA TUXEDO system, bottlenecks can originate from data flow patterns. The quickest way to detect bottlenecks is to begin with the client and measure the amount of time required by relevant services.

Example: Detecting a System Bottleneck

Suppose Client 1 requires 4 seconds to print to the screen. Calls to time(2) determine that the tpcall to service A is the culprit, with a 3.7-second delay. Service A is monitored at the top and bottom and takes 0.5 seconds. This implies that a queue may be clogged, which can be determined by using the pq command.

On the other hand, suppose service A takes 3.2 seconds. The individual parts of service A can be bracketed and measured. Perhaps service A issues a tpcall to service B, which requires 2.8 seconds. It should then be possible to isolate queue time or message send blocking time. Once the relevant amount of time has been identified, the application can be retuned to handle the traffic.

Using time(2), you can measure the duration of the following:
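The boot-time counter mechanism described above can be sketched as follows. This is a hedged illustration only: in a real BEA TUXEDO server the hooks are the C functions tpsvrinit() and tpsvrdone(), whereas the class and method names here are hypothetical stand-ins used to show the bookkeeping.

```python
import time

class ServiceTrafficCounter:
    """Tracks how busy one service is between boot and shutdown."""

    def __init__(self):
        # Analogous to tpsvrinit(): zero the global counter and
        # record the starting time.
        self.count = 0
        self.start = time.time()

    def on_service_call(self):
        # Increment the counter each time the service is invoked.
        self.count += 1

    def on_shutdown(self):
        # Analogous to tpsvrdone(): report the final count and the
        # elapsed time, from which calls-per-second can be derived.
        elapsed = time.time() - self.start
        return self.count, elapsed

counter = ServiceTrafficCounter()
for _ in range(5):            # simulate five service requests
    counter.on_service_call()
calls, elapsed = counter.on_shutdown()
print(calls)  # 5
```

Dividing the final count by the elapsed time gives the service's average traffic over the measurement period.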
Detecting Bottlenecks on UNIX Platforms

The UNIX system sar(1) command provides valuable performance information that can be used to find system bottlenecks. You can use sar(1) to do the following:

The following table describes the sar(1) command options.

Note: Some flavors of the UNIX system do not provide the sar(1) command, but offer equivalent commands instead. BSD, for example, offers the iostat(1) command; Sun offers perfmeter(1).

Detecting Bottlenecks on Windows NT Platforms

On Windows NT platforms, use the Performance Monitor to collect system information and detect bottlenecks. Select the following options from the Start menu:

Start -> Programs -> Administration Tools -> NT Performance Monitor