Scaling, Distributing, and Tuning CORBA Applications


Tuning CORBA Applications

This topic includes the following sections:

For more information about monitoring BEA Tuxedo applications, see "Monitoring a Running System" in Administering a BEA Tuxedo Application at Run Time.

Notes: The BEA Tuxedo CORBA Java client and BEA Tuxedo CORBA Java client ORB were deprecated in Tuxedo 8.1 and are no longer supported. All text references to, and code samples for, the BEA Tuxedo CORBA Java client and BEA Tuxedo CORBA Java client ORB are retained for programmer reference only and to help you implement and run third-party Java ORB libraries.
Note: Technical support for third party CORBA Java ORBs should be provided by their respective vendors. BEA Tuxedo does not provide any technical support or documentation for third party CORBA Java ORBs.

 


Maximizing Application Resources

Making correct decisions in the following areas can improve the functioning of your BEA Tuxedo applications:

 


When to Use MSSQ Sets (BEA Tuxedo ATMI Servers Only)

Note: Multiple Servers, Single Queue (MSSQ) sets are not supported in BEA Tuxedo CORBA servers.

Table 4-1 describes when to use MSSQ sets with BEA Tuxedo servers.

Table 4-1 When and When Not to Use MSSQ Sets

Use MSSQ sets when:

  There are several, but not too many, servers.
  Buffer sizes are not too large.
  The servers offer identical sets of services.
  The messages involved are reasonably sized.
  Optimization and consistency of service turnaround time is paramount.

Do not use MSSQ sets when:

  There is a large number of servers. (A compromise is to use many MSSQ sets.)
  Buffer sizes are large enough to exhaust one queue.
  Services are different for each server.
  Long messages are being passed to the services, exhausting the queue. This causes nonblocking sends to fail, or blocking sends to block.
  Optimization and consistency of service turnaround time is not critical.

The following two analogies help to show why using MSSQ sets is sometimes, but not always, beneficial:

 


Enabling System-controlled Load Balancing

You can control whether a load-balancing algorithm is used on the BEA Tuxedo system as a whole. When load balancing is used, a load factor is applied to each service within the system, allowing you to track the total load on every server. Every service request is sent to the qualified server that is least loaded.

Note: On BEA Tuxedo CORBA systems, system-controlled load balancing is enabled automatically. You cannot disable load balancing by specifying LDBAL=N.

To determine how to assign load factors (located in the SERVICES section), run an application continually and calculate the average time it takes for each service to be performed. Assign a LOAD value of 50 (LOAD=50) to any service that requires the average amount of time that you calculated. Any service taking longer to execute than the calculated average should have a LOAD>50. Any service taking less to execute than the calculated average should have a LOAD<50.

A LOAD factor is assigned to each service performed, and the system keeps track of the cumulative load of the services that each server has performed. Each service request is routed to the server with the smallest total load; routing the request increases that server's total by the LOAD factor of the requested service.
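
The following sketch shows how such LOAD values might appear in the SERVICES section of a UBBCONFIG file; the service names and values are illustrative:

*SERVICES
# takes about the calculated average time
WITHDRAWAL    LOAD=50
# measurably slower than the average
AUDIT         LOAD=80
# faster than the average
BALANCE       LOAD=30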

You can also apply LOAD factors to interfaces. For more information about LOAD factors, see "Creating a Configuration File" in Administering a BEA Tuxedo Application at Run Time.

 


Configuring Replicated Server Processes and Groups

To configure replicated server processes and groups in the BEA Tuxedo domain, complete the following steps:

  1. Edit the application's UBBCONFIG file using a text editor.
  2. In the GROUPS section, specify the names of the groups you want to configure.
  3. In the SERVERS section, specify the parameters in Table 4-2 for the server process you want to replicate.
  Table 4-2 Parameters Specified in the SERVERS Section
    Parameter
    Description
    Server application name
    Specifies the name of the executable file that contains the application server.
    GROUP
    Specifies the name of the group to which the server process belongs. If you are replicating a server process across multiple groups, specify the server process once for each group.
    SRVID
    Specifies a numeric identifier, giving the server process a unique identity.
    MIN
    Specifies the number of instances of the server process to start when you start the application.
    MAX
    Specifies the maximum number of server processes that can be running at any one time.

    The MIN and MAX parameters determine the degree to which a given server application can process requests on a given interface in parallel. During run time, the system administrator can examine resource bottlenecks and start additional server processes, if necessary, thereby scaling the application. For more information, see "Monitoring a Running Application" in Administering a BEA Tuxedo Application at Run Time.

    Note: The MAX parameter controls the maximum number of instances. However, BEA Tuxedo does not spawn instances automatically. The system will automatically start up to the specified MIN number of instances. Between MIN and MAX, the system administrator will need to spawn new instances manually. Once MAX is reached, an error will be returned by tmboot, tmadmin, or the TMIB API.
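
For example, a UBBCONFIG sketch that replicates one server process across two groups might look like the following; the group names, machine identifier, server name, and numeric values are illustrative:

*GROUPS
APP_GRP1    LMID=SITE1    GRPNO=1
APP_GRP2    LMID=SITE1    GRPNO=2

*SERVERS
# the same executable in each group: two instances at boot, up to five per group
tellerp    SRVGRP=APP_GRP1    SRVID=10    MIN=2    MAX=5
tellerp    SRVGRP=APP_GRP2    SRVID=10    MIN=2    MAX=5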

 


Configuring Multithreaded Servers

This topic includes the following sections:

  Setting the OPENINFO Parameter for Database Interoperation
  Parameters Used to Configure Multithreaded Servers

For more information about multithreaded servers, see Using Multithreaded Servers.

Setting the OPENINFO Parameter for Database Interoperation

To enable the use of threads by a multithreaded server when interoperating with the Oracle XA database software, you must add Threads=true to the OPENINFO parameter in the GROUPS section of the UBBCONFIG file, as shown in Listing 4-1. For more information, see the Oracle XA online documentation.

Listing 4-1 Adding Threads=true to the OPENINFO Parameter
OPENINFO="ORACLE_XA:Oracle_XA+Acc=P/scott/tiger+SesTm=100+LogDir=.+MaxCur=5+Threads=true"

Parameters Used to Configure Multithreaded Servers

The following parameters are used to configure multithreaded CORBA servers. These parameters are set in the UBBCONFIG file:

For a description of how to set these parameters, see the following topics:

Assigning Priorities to Interfaces

This topic includes the following sections:

  About Priorities to Interfaces
  Characteristics of the PRIO Parameter

About Priorities to Interfaces

You can exert significant control over the flow of data in an application by assigning priorities to BEA Tuxedo Interfaces using the PRIO parameter. For a CORBA application running on a BEA Tuxedo system, you can specify the PRIO parameter for each interface named in the INTERFACES section of the application's UBBCONFIG file.

For example, Server 1 offers Interfaces A, B, and C. Interfaces A and B have a priority of 50 and Interface C has a priority of 70. A request for Interface C is always dequeued before a request for Interface A or B. Requests for A and B are dequeued equally with respect to one another. The system dequeues every tenth request in first-in, first-out (FIFO) order to prevent a message from waiting indefinitely on the queue.
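
The priorities described above might be declared in the INTERFACES section as in the following sketch; the interface repository IDs are illustrative:

*INTERFACES
"IDL:example.com/InterfaceA:1.0"    PRIO=50
"IDL:example.com/InterfaceB:1.0"    PRIO=50
"IDL:example.com/InterfaceC:1.0"    PRIO=70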

You can also change a priority dynamically with the tpsprio() call. Only preferred clients should be allowed to increase the interface priority. In a system in which servers themselves issue interface requests, a server can call tpsprio() to increase the priority of its calls so that the user does not wait in line for every interface request that is required.
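
The following is a minimal ATMI C sketch of raising the priority of the next request before issuing it; the service name, priority value, and buffer handling are illustrative:

#include <atmi.h>
#include <userlog.h>

/* Issue one high-priority request. tpsprio() affects only the next
 * request sent; TPABSOLUTE makes 80 an absolute priority. */
void send_urgent(char *buf, long len)
{
    long olen;

    if (tpsprio(80, TPABSOLUTE) == -1)
        userlog("tpsprio failed: %s", tpstrerror(tperrno));

    if (tpcall("URGENT_REQUEST", buf, len, &buf, &olen, 0) == -1)
        userlog("tpcall failed: %s", tpstrerror(tperrno));
}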

Characteristics of the PRIO Parameter

The PRIO parameter should be used carefully. Depending on the order of messages on the queue (for example, A, B, and C), some messages (such as those for A and B) may be dequeued only once in every ten dequeue operations. This reduces performance and can slow turnaround time for those interfaces.

The characteristics of the PRIO parameter are as follows:

Assigning priorities enables you to provide more efficient service to the most important requests and slower service to the less important requests. You can also give priority to specific users or in specific circumstances.

 


Bundling Services into Servers (BEA Tuxedo ATMI Servers Only)

This topic includes the following sections:

  About Bundling Services
  When to Bundle Services

About Bundling Services

The easiest way to package services into server executables is to not package them at all. Unfortunately, if you do not package services, the number of server executables, and also message queues and semaphores, rises beyond an acceptable level. There is a trade-off between not bundling services and bundling services too much.

When to Bundle Services

You should bundle services for the following reasons:

 


Performance Options

Performance options were added to BEA Tuxedo in release 8.0. These options enable you to turn off specific features in the BEA Tuxedo infrastructure. You should turn off these features only if they are not required by your CORBA or ATMI applications. Table 4-3 describes these options.

Table 4-3 Performance Options 
Option
Description
How to set . . .
Service and Interface Caching options (SICACHEENTRIESMAX and TMSICACHEENTRIESMAX)
This option enables you to cache service and interface entries, and to use the cached copies of the service or interface without locking the bulletin board.
For more information about these options, see Administering a BEA Tuxedo Application at Run Time and UBBCONFIG(5) and TM_MIB(5), and tuxenv(5) in the File Formats, Data Descriptions, MIBs, and System Processes Reference.
Turning off threads (TMNOTHREADS)
Set this option to yes to turn off multithreaded processing. For applications that do not use threads, turning them off should significantly improve performance.
You use tuxenv(5) to set this option. For more information, see Administering a BEA Tuxedo Application at Run Time and tuxenv(5) in the File Formats, Data Descriptions, MIBs, and System Processes Reference.
Turning off auditing and authorization (OPTIONS NO_AA)
Setting this option disables the auditing and authorization functions on a per application basis.
You set this option in the RESOURCES section of the UBBCONFIG file. For more information, see Administering a BEA Tuxedo Application at Run Time and OPTION in the RESOURCES section of UBBCONFIG(5) in the File Formats, Data Descriptions, MIBs, and System Processes Reference.
Turning off XA Transactions (NO_XA)
Setting this option turns off XA transactions.
For more information about the NO_XA option, see Administering a BEA Tuxedo Application at Run Time and UBBCONFIG(5) and TM_MIB(5) in the File Formats, Data Descriptions, MIBs, and System Processes Reference.
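
For example, the auditing/authorization and XA options might be turned off in the RESOURCES section, and threads might be turned off through the environment, as in the following sketch; whether each option is safe to disable depends entirely on your application:

*RESOURCES
# disable auditing/authorization and XA transaction processing
OPTIONS    NO_AA,NO_XA

and, in the environment of the BEA Tuxedo processes before booting (see tuxenv(5)):

TMNOTHREADS=yes
export TMNOTHREADS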

 


Enhancing Efficiency with Application Parameters

This topic includes the following sections:

  MAXDISPATCHTHREADS
  MINDISPATCHTHREADS
  Setting the MAXACCESSERS, MAXOBJECTS, MAXSERVERS, MAXINTERFACES, and MAXSERVICES Parameters
  Setting the MAXGTT, MAXBUFTYPE, and MAXBUFSTYPE Parameters
  Setting the SANITYSCAN, BLOCKTIME, BBLQUERY, and DBBLWAIT Parameters

You can set these application parameters to enhance the efficiency of your system.

MAXDISPATCHTHREADS

The MAXDISPATCHTHREADS parameter determines the maximum number of concurrently dispatched threads that each server process can spawn. When specifying this parameter, consider the following:

The value of the MAXDISPATCHTHREADS parameter affects other parameters. For example, the MAXACCESSORS parameter controls the number of simultaneous accesses to the BEA Tuxedo system, and each thread counts as one accessor. For a multithreaded server application, you must account for the number of system-managed threads that each server is configured to run. A system-managed thread is a thread that is started and managed by the BEA Tuxedo software, as opposed to threads started and managed by an application. Internally, BEA Tuxedo manages a pool of available system-managed threads. When a client request is received, an available system-managed thread from the thread pool is scheduled to execute the request. When the request is completed, the system-managed thread is returned to the pool of available threads.

For example, if you have 4 multithreaded servers in your system and each server is configured to run 50 system-managed threads, the accessor requirement for these servers is the sum of the accessors, calculated as follows:

50 + 50 + 50 + 50 = 200 accessors
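
A UBBCONFIG sketch of this accounting might look like the following; the server name and numbers are illustrative, and MAXACCESSERS must also cover clients and administrative processes in addition to the 200 dispatch threads:

*RESOURCES
MAXACCESSERS    300

*SERVERS
# four instances, each running up to 50 system-managed dispatch threads
appsrv    SRVGRP=APP_GRP1    SRVID=10    MIN=4    MAX=4    MINDISPATCHTHREADS=5    MAXDISPATCHTHREADS=50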

MINDISPATCHTHREADS

Use the MINDISPATCHTHREADS parameter to specify the number of server dispatch threads that are started when the server is initially booted. When you specify this parameter, consider the following:

Setting the MAXACCESSERS, MAXOBJECTS, MAXSERVERS, MAXINTERFACES, and MAXSERVICES Parameters

The MAXACCESSERS, MAXOBJECTS, MAXSERVERS, MAXINTERFACES, and MAXSERVICES parameters increase semaphore and shared memory costs, so you should choose the minimum value that satisfies the needs of the system. You should also allow for the variation in the number of clients accessing the system at the same time. Defaults may be appropriate for a generous allocation of IPC resources. However, it is prudent to set these parameters to the lowest appropriate values for the application.

For multithreaded servers, you must account for the number of threads that each server is configured to run. The MAXACCESSERS parameter sets the maximum number of concurrent accessors of a BEA Tuxedo system. Accessors include native and remote clients, servers, and administration processes. For more information on setting the MAXACCESSERS parameter, see MAXDISPATCHTHREADS.

Setting the MAXGTT, MAXBUFTYPE, and MAXBUFSTYPE Parameters

You should increase the value of the MAXGTT parameter if the product of the number of clients in the system and the percentage of time they spend committing a transaction is close to 100. This may require a great number of clients, depending on the speed of commit. If you increase MAXGTT, you should also increase TLOGSIZE accordingly for every machine. Set MAXGTT to 0 for applications that do not use distributed transactions.

You can limit the number of buffer types and subtypes allowed in the application with the MAXBUFTYPE and MAXBUFSTYPE parameters, respectively. The current default for MAXBUFTYPE is 16. Unless you are creating many user-defined buffer types, you can omit MAXBUFTYPE. However, if you intend to use many different VIEW subtypes, you may want to set MAXBUFSTYPE to exceed its current default of 32.
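
A RESOURCES section sketch for these parameters might look like the following; the values are illustrative and should be derived from your own transaction and buffer usage:

*RESOURCES
# raised because many clients commit transactions concurrently
MAXGTT         100
# default is adequate unless many user-defined buffer types exist
MAXBUFTYPE     16
# raised above the default of 32 to allow many VIEW subtypes
MAXBUFSTYPE    64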

Setting the SANITYSCAN, BLOCKTIME, BBLQUERY, and DBBLWAIT Parameters

If a system is running on slower processors (for example, due to heavy usage), you can increase the timing parameters: SANITYSCAN, BLOCKTIME, and individual transaction timeouts. If networking is slow, you can increase the value of the BLOCKTIME, BBLQUERY, and DBBLWAIT parameters.

 


Setting Application Parameters

Table 4-4 describes the system parameters available for tuning an application.

Table 4-4 System Parameters for Application Tuning 
Parameters
Action
MAXACCESSERS, MAXOBJECTS, MAXSERVERS, MAXINTERFACES, and MAXSERVICES
Set the smallest satisfactory value because of IPC cost.
Allow for extra clients.
MAXGTT, MAXBUFTYPE, and MAXBUFSTYPE
Increase MAXGTT for many clients; set MAXGTT to 0 for nontransactional applications.
Use MAXBUFTYPE only if you create eight or more user-defined buffer types.
If you use many different VIEW subtypes, increase the value of MAXBUFSTYPE.
BLOCKTIME, TRANTIME, and SANITYSCAN
Increase the value for a slow system.
BLOCKTIME, TRANTIME, BBLQUERY, and DBBLWAIT
Increase values for slow networking.

 


Determining IPC Requirements

The values of different system parameters determine IPC requirements. You can use the tmboot -c command to test a configuration's IPC needs. The values of the following parameters affect the IPC needs of an application:

Table 4-5 describes the system parameters that affect the IPC needs of an application.

Table 4-5 Tuning IPC Parameters 
Parameter(s)
Action
MAXACCESSERS
Equals the number of semaphores.
The number of message queues is almost equal to MAXACCESSERS + the number of servers with reply queues (the number of servers in an MSSQ set + the number of MSSQ sets).
MAXSERVERS, MAXSERVICES, and MAXGTT
While MAXSERVERS, MAXSERVICES, MAXGTT, and the overall size of the ROUTING, GROUP, and NETWORK sections affect the size of shared memory, an attempt to devise formulas that correlate these parameters can become complex. Instead, simply run tmboot -c or tmloadcf -c to calculate the minimum IPC resource requirements for your application.
Queue-related kernel parameters
These parameters need to be tuned to manage the flow of buffer traffic between clients and servers. The maximum total size of a queue (in bytes) must be large enough to handle the largest message in the application, and a queue should typically run 75 to 85 percent full. A smaller percentage is wasteful; a larger percentage causes message sends to block too frequently.
Set the maximum size for a message to handle the largest buffer that the application sends.
Maximum queue length (the largest number of messages that are allowed to sit on a queue at once) must be adequate for the application's operations.
Simulate or run the application to measure the average fullness of a queue or its average length. This may be a trial and error process in which tunables are estimated before the application is run and are adjusted after running under performance analysis.
For a large system, analyze the effect of parameter settings on the size of the operating system kernel. If unacceptable, reduce the number of application processes or distribute the application to more machines to reduce MAXACCESSERS.
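
For example, the minimum IPC requirements can be reported directly from the configuration; the configuration file name below is illustrative:

# estimate IPC resources from the text configuration file
tmloadcf -c ubbconfig
# estimate IPC resources from the already-loaded configuration
tmboot -c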

 


Measuring System Traffic

This topic includes the following sections:

  About System Traffic and Bottlenecks
  Example of Detecting a System Bottleneck
  Detecting Bottlenecks on UNIX
  Detecting Bottlenecks on Windows

For more information about monitoring BEA Tuxedo applications and measuring traffic, see "Monitoring a Running System" in Administering a BEA Tuxedo Application at Run Time.

About System Traffic and Bottlenecks

Bottlenecks can occur in your system when traffic volume nears resource capacity. You can measure service traffic using a global counter in your implementation code.

For example, in Tuxedo applications, when tpsvrinit() is invoked at boot time, you can initialize a global counter and record a starting time. Subsequently, each time a particular service is called, the counter is incremented. When the server is shut down by invoking the tpsvrdone() function, the final count and the ending time are recorded. This mechanism allows you to determine how busy a particular service is over a specified period of time.

Note: For CORBA C++ applications, use the Server::initialize() and Server::release() operations.
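
The following is a minimal ATMI C sketch of the counter technique described above; the service name and log message are illustrative:

#include <time.h>
#include <atmi.h>
#include <userlog.h>

static long call_count;       /* number of times the service has run */
static time_t start_time;     /* recorded when the server boots */

int tpsvrinit(int argc, char *argv[])
{
    call_count = 0;
    start_time = time(NULL);
    return 0;
}

/* Each call to this service increments the global counter. */
void SOME_SERVICE(TPSVCINFO *rqst)
{
    call_count++;
    /* ... application work ... */
    tpreturn(TPSUCCESS, 0, rqst->data, 0L, 0);
}

void tpsvrdone(void)
{
    userlog("SOME_SERVICE: %ld calls in %ld seconds",
            call_count, (long)(time(NULL) - start_time));
}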

In BEA Tuxedo, bottlenecks can originate from data flow patterns. The quickest way to detect bottlenecks is to begin with the client and measure the amount of time required by relevant services.

Example of Detecting a System Bottleneck

Suppose Client 1 requires 4 seconds to print to the screen. Calls to time(2) reveal that the tpcall to Service A is the culprit, with a 3.7-second delay. Service A, monitored at its top and bottom, takes only 0.5 seconds. This implies that a queue may be clogged, which can be confirmed by using the pq command.

On the other hand, suppose service A takes 3.2 seconds. The individual parts of Service A can be bracketed and measured. Perhaps Service A issues a tpcall to Service B, which requires 2.8 seconds. It should then be possible to isolate queue time or message send blocking time. Once the relevant amount of time has been identified, the application can be retuned to handle the traffic.
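
The following C sketch brackets a single tpcall with time(2), in the spirit of the example above; the service name and buffer handling are illustrative:

#include <time.h>
#include <atmi.h>
#include <userlog.h>

/* Measure the round-trip time of one request to Service A. */
void timed_call(char *buf, long len)
{
    long olen;
    time_t before = time(NULL);

    if (tpcall("SERVICE_A", buf, len, &buf, &olen, 0) == -1)
        userlog("tpcall to SERVICE_A failed: %s", tpstrerror(tperrno));

    userlog("SERVICE_A round trip: %ld seconds",
            (long)(time(NULL) - before));
}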

Using time(2), you can measure the duration of the following:

Detecting Bottlenecks on UNIX

On UNIX systems, the sar(1) command provides valuable performance information that can be used to find system bottlenecks. You can use the sar(1) command to:

Table 4-6 describes the sar(1) command options.

Table 4-6 sar(1) Command Options 
Option
Description
-u
Gathers CPU utilization numbers, including the portion of the time running in user mode, running in system mode, idle with some process waiting for block I/O, and otherwise idle.
-b
Reports buffer activity, including transfers per second of data between system buffers and disk, or other block devices.
-c
Reports system call activity. This includes system calls of all types, as well as specific system calls such as fork(2) and exec(2).
-w
Monitors system swapping activity. This includes the number of transfers for swap-ins and swap-outs.
-q
Reports average queue lengths while occupied and the percent of time occupied.
-m
Reports message and system semaphore activities, including the number of primitives per second.
-p
Reports paging activity, including the address translation page faults, page faults and protection errors, and the valid pages reclaimed for free lists.
-r
Reports unused memory pages and disk blocks, including the average number of pages available to user processes and the disk blocks available for process swapping.
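
For example, the following command reports CPU utilization (-u) every 5 seconds for 10 intervals:

sar -u 5 10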

Note: Some UNIX platforms do not provide the sar(1) command, but offer equivalent commands instead. BSD, for example, offers the iostat(1) command. Sun Microsystems, Inc. offers perfmeter(1).

Detecting Bottlenecks on Windows

On Windows, use the Performance Monitor to collect system information and detect bottlenecks. Click the Start button and select Programs, then Administration Tools, and then click Performance Monitor.

