Some of the Bankapp examples in this chapter include sample code that is not implemented in the product sample's Bankapp files.
This chapter discusses the following topics:
Overview of the Scalability Features Available in the WLE System

Supporting highly scalable applications is one of the strengths of the WLE system. Many applications perform well in an environment characterized by 1 to 10 server processes and 10 to 100 client applications. However, in an enterprise environment, applications need to support:
Other features provided in the WLE system to make an application highly scalable include the IIOP Listener/Handler, which is summarized in Getting Started and described fully in the Administration Guide.
Scaling a WLE Server Application

Using the JDBC Bankapp sample application as an example, this section explains how to scale an application to meet a significantly greater processing capability. The basic design goal for the JDBC Bankapp sample application is to greatly scale up the number of client applications it can accommodate by doing the following:
To accommodate these design goals, the JDBC Bankapp sample application has been extended as follows:
The sections that follow describe how the JDBC Bankapp sample application uses replicated server processes and server groups, object state management, and factory-based routing to meet its scalability goals. The first of these sections describes the OMG IDL changes implemented in the Bankapp sample application.
Replicating Server Processes and Server Groups

The WLE system offers a wide variety of choices for how you may configure your server applications, such as:
The following sections describe replicated server processes and groups, and also explain how you can configure them in the WLE system.
Replicated Server Processes

When you replicate the server processes in your application:
To achieve the full benefit of replicated server processes, make sure that the objects instantiated by your server application generally have unique IDs. This way, a client invocation on an object can cause the object to be instantiated on demand, within the bounds of the number of server processes that are available, and not queued up for an already active object.
As you design your application, keep in mind that there is a tradeoff between providing:
Better failover occurs only by adding processes, and not by adding threads. This section discusses the technique of adding processes. For information about the tradeoffs of single-threaded JavaServers versus multithreaded JavaServers, see the section Enabling Multithreaded JavaServers.
Figure 4-1 shows the Bankapp server application replicated in the BANK_GROUP1 group. The replicated servers are running on a single machine.
When a request arrives for this group, the WLE domain has several server processes available that can process the request, and the WLE domain can choose the server process that is least busy.
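The "least busy" selection described above can be sketched in plain Java. This is a conceptual model only; the class and method names are illustrative assumptions, and the real dispatching is performed inside the WLE infrastructure, not by application code:

```java
import java.util.ArrayList;
import java.util.List;

// Conceptual sketch: how a dispatcher might pick the least-busy
// process among the replicated servers in a group. None of these
// names are part of the WLE API.
public class LeastBusyDispatcher {
    public static final class ServerProcess {
        public final String id;
        public int activeRequests;
        public ServerProcess(String id, int activeRequests) {
            this.id = id;
            this.activeRequests = activeRequests;
        }
    }

    // Choose the replica with the fewest requests in progress.
    public static ServerProcess choose(List<ServerProcess> group) {
        ServerProcess best = group.get(0);
        for (ServerProcess s : group) {
            if (s.activeRequests < best.activeRequests) {
                best = s;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        List<ServerProcess> bankGroup1 = new ArrayList<>();
        bankGroup1.add(new ServerProcess("BankappServer-1", 4));
        bankGroup1.add(new ServerProcess("BankappServer-2", 1));
        bankGroup1.add(new ServerProcess("BankappServer-3", 2));
        // The second replica has the fewest active requests.
        System.out.println(choose(bankGroup1).id);
    }
}
```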
Figure 4-1 Replicated Servers in the Bankapp Sample

In Figure 4-1, note the following:
Replicated Server Groups

The notion of server groups is specific to the WLE system and adds value to a CORBA implementation; server groups are an important part of the scalability features of the WLE system. Basically, to add more machines to a deployment, you add more groups.

Figure 4-2 shows the Bankapp sample application groups replicated on another machine, as specified in the application's UBBCONFIG file.

Figure 4-2 Replicating Server Groups Across Machines

The way in which server groups are configured, where they run, and the ways in which they are replicated are specified in the UBBCONFIG file. When you replicate a server group, you can do the following:

Note: In the simple example shown in Figure 4-2, the content of the databases on Production Machines 1 and 2 is identical. Each database contains all of the account records for all of the account IDs. Only the processing is distributed, based on the ATM (the atmID field). A more realistic example, though one not readily adapted to the Bankapp sample application, would distribute the data and processing based on ranges of bank account IDs.
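For such an account-based scheme, the ROUTING entry might look like the following sketch. The field name accountID and the ranges shown here are hypothetical, not part of the shipped sample:

```
*ROUTING
accountID
        TYPE = FACTORY
        FIELD = "accountID"
        FIELDTYPE = LONG
        RANGES = "10000-49999:BANK_GROUP1,50000-99999:BANK_GROUP2,*:BANK_GROUP1"
```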
The effect of having multiple server groups includes the following:
The section Factory-based Routing shows how the Bankapp sample application uses factory-based routing to spread the application's processing load across multiple machines.
Configuring Replicated Server Processes and Groups

To configure replicated server processes and groups in your WLE domain:
Thus the MIN and MAX parameters determine the degree to which a given server application can process requests on a given interface in parallel. During run time, the system administrator can examine resource bottlenecks and start additional server processes, if necessary. In this sense, the application is designed so that the system administrator can scale it.
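For example, a SERVERS entry might use MIN and MAX as follows. This fragment is hypothetical and is not part of the shipped sample configuration:

```
*SERVERS
JavaServer
        SRVGRP = BANK_GROUP1
        SRVID = 5
        # Boot two server processes initially; allow the
        # administrator to start up to five in all.
        MIN = 2
        MAX = 5
        CLOPT = "-A -- -M 10 BankApp.jar TellerFactory_1"
```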
Note: The following example shows lines from the UBBCONFIG file for a Bankapp sample application, including the GROUPS, SERVERS, INTERFACES, and ROUTING sections. These configuration settings are not used with the Bankapp sample provided with the WLE software.
*RESOURCES
IPCKEY 55432
DOMAINID simple
MASTER SITE1
MODEL SHM
LDBAL Y
*MACHINES
"TRIXIE"
LMID = SITE1
APPDIR = "c:\bankapp\jdbc\."
TUXCONFIG = "c:\bankapp\jdbc\.\tuxconfig"
TUXDIR = "c:\m3dir"
MAXCLIENTS = 10
*GROUPS
SYS_GRP
LMID = SITE1
GRPNO = 1
BANK_GROUP1
LMID = SITE1
GRPNO = 2
BANK_GROUP2
LMID = SITE1
GRPNO = 3
*SERVERS
# By default, restart a server if it crashes, up to 5 times
# in 24 hours.
#
DEFAULT:
RESTART = Y
MAXGEN = 5
# Start the Tuxedo System Event Broker. This event broker
# must be started before any servers providing the
# NameManager Service.
#
TMSYSEVT
SRVGRP = SYS_GRP
SRVID = 1
# TMFFNAME is an M3-provided server that runs the
# object-transactional management services. This includes the
# NameManager and FactoryFinder services.
# The NameManager service is an M3-specific service
# that maintains a mapping of application-supplied names to
# object references.
# Start the NameManager Service (-N option). This name
# manager is being started as a Master (-M option).
#
TMFFNAME
SRVGRP = SYS_GRP
SRVID = 2
CLOPT = "-A -- -N -M"
# Start a slave NameManager Service
#
TMFFNAME
SRVGRP = SYS_GRP
SRVID = 3
CLOPT = "-A -- -N"
# Start the FactoryFinder (-F) service
#
TMFFNAME
SRVGRP = SYS_GRP
SRVID = 4
CLOPT = "-A -- -N -F"
# Start the JavaServer in Bank_Group1
#
JavaServer
SRVGRP = BANK_GROUP1
SRVID = 5
CLOPT = "-A -- -M 10 BankApp.jar TellerFactory_1"
SYSTEM_ACCESS = FASTPATH
RESTART = N
# Start the JavaServer in Bank_Group2
#
JavaServer
SRVGRP = BANK_GROUP2
SRVID = 6
CLOPT = "-A -- -M 10 BankApp.jar TellerFactory_1"
SYSTEM_ACCESS = FASTPATH
RESTART = N
# Start the listener for IIOP clients
#
# Specify the host name of your server machine as
# well as the port number on which the IIOP Listener
# accepts requests (2468 in this example).
#
ISL
SRVGRP = SYS_GRP
SRVID = 7
CLOPT = "-A -- -n //TRIXIE:2468"
*SERVICES
*INTERFACES
"IDL:beasys.com/BankApp/Teller:1.0"
FACTORYROUTING = atmID
*ROUTING
atmID
TYPE = FACTORY
FIELD = "atmID"
FIELDTYPE = LONG
RANGES = "1-5:BANK_GROUP1,6-10:BANK_GROUP2,*:BANK_GROUP1"
As stated in Java Server Application Concepts, object state management is a fundamental concern of large-scale client/server systems, because such systems must achieve optimized throughput and response time. This section explains how you can use object state management to increase the scalability of the objects managed by a WLE server application, using the Teller objects in the Bankapp sample applications as an example.
The following table summarizes how you can use the object state management models supported in the WLE system to achieve major gains in scalability in your WLE applications.
Factory-based Routing

Factory-based routing is a powerful feature that provides a means to send a client request to a specific server group. Using factory-based routing, you can spread the processing load for a given application across multiple machines, because you can determine the group, and thus the machine, in which a given object is instantiated.

You can use factory-based routing to expand upon the variety of load-balancing and scalability capabilities in the WLE system. In the case of the Bankapp sample application, you can use factory-based routing to send requests for one subset of ATMs to one machine, and requests for another subset of ATMs to another machine. As you add machines to ramp up your application's processing capability, the WLE system makes it easy to modify the factory-based routing in your application to accommodate the new machines.

The chief benefit of factory-based routing is that it provides a simple means to scale up an application, and invocations on a given interface in particular, across a growing deployment environment. Spreading the deployment of an application across additional machines is strictly an administrative function that does not require any recoding or rebuilding of the application.

The chief design consideration in implementing factory-based routing in your client/server application is choosing the value on which routing is based. The sections that follow describe how factory-based routing works, using the extended JDBC Bankapp sample application, which uses factory-based routing in the following way: client application requests to the Teller object are routed based on a teller number. Requests for one subset of teller numbers go to one group, and requests on behalf of another subset of teller numbers go to another group.

How Factory-based Routing Works

Your factories implement factory-based routing by changing the way they create object references. All object references contain a group ID; by default, the group ID is that of the factory that creates the object reference. With factory-based routing, however, the factory creates an object reference that includes routing criteria that determine the group ID. When client applications send an invocation using such an object reference, the WLE system routes the request to the group ID specified in the object reference. This section focuses on how the group ID is generated for an object reference.

To implement factory-based routing, you need to coordinate the following:

To describe the data that needs to be coordinated, the following two sections discuss configuring for factory-based routing in the UBBCONFIG file, and implementing factory-based routing in the factory.
Configuring for Factory-based Routing in the UBBCONFIG File

For each interface for which requests are routed, you need to establish the following information in the UBBCONFIG file:
*INTERFACES
"IDL:beasys.com/BankApp/Teller:1.0"
FACTORYROUTING = atmID
The preceding example shows the fully qualified Interface Repository ID for an interface in the extended Bankapp sample in which factory-based routing is used. The FACTORYROUTING identifier specifies the name of the routing value, atmID.
The following example shows the ROUTING section of the UBBCONFIG file used in the Bankapp sample application:
*ROUTING
atmID
TYPE = FACTORY
FIELD = "atmID"
FIELDTYPE = LONG
RANGES = "1-5:BANK_GROUP1,6-10:BANK_GROUP2,*:BANK_GROUP1"
The preceding example shows that Teller object references for ATMs in one range are routed to one server group, and Teller object references for ATMs in other ranges are routed to other groups. As illustrated in Figure 4-2, BANK_GROUP1 and BANK_GROUP2 reside on different production machines.
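The effect of these RANGES rules can be modeled in plain Java. The following sketch is illustrative only; the actual matching is performed by the WLE infrastructure, not by application code:

```java
// Illustrative model of the RANGES rule from the *ROUTING section:
//   "1-5:BANK_GROUP1,6-10:BANK_GROUP2,*:BANK_GROUP1"
// Not part of the WLE API; the system performs this matching itself.
public class AtmRouting {
    public static String groupFor(long atmID) {
        if (atmID >= 1 && atmID <= 5) {
            return "BANK_GROUP1";
        }
        if (atmID >= 6 && atmID <= 10) {
            return "BANK_GROUP2";
        }
        return "BANK_GROUP1"; // the "*" (default) range
    }

    public static void main(String[] args) {
        System.out.println(groupFor(3));  // BANK_GROUP1
        System.out.println(groupFor(7));  // BANK_GROUP2
        System.out.println(groupFor(42)); // BANK_GROUP1 via the default range
    }
}
```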
Implementing Factory-based Routing in the Factory

Factories implement factory-based routing through the way in which they invoke the com.beasys.Tobj.TP.create_object_reference method.
This operation has the following Java binding:
public static org.omg.CORBA.Object
create_object_reference(java.lang.String interfaceName,
java.lang.String stroid,
org.omg.CORBA.NVList criteria)
throws InvalidInterface,
InvalidObjectId
The criteria argument specifies a list of named values that can be used to provide factory-based routing for the object reference. The use of factory-based routing is optional and depends on this argument; if you do not want to use factory-based routing, pass null for this argument. The work of implementing factory-based routing in a factory is in building the NVList.
As stated previously, the TellerFactory object in the Bankapp sample application specifies the routing value atmID. This value must exactly match the following in the UBBCONFIG file:
Note: The following example is not part of the Bankapp sample code, but is shown here to illustrate the factory-based routing feature. The TellerFactory object inserts the ATM ID into the NVList using the following code:
// Extract the atmID (the routing criterion) from the
// tellerName that is passed in as an input parameter;
// tellerName has the form: Teller<atmID>
int atmID = Integer.parseInt(tellerName.substring(6));

// Put the atmID into a CORBA Any.
org.omg.CORBA.Any any = TP.orb().create_any();
any.insert_long(atmID);
// Create the NVlist and add the atmID to the list.
org.omg.CORBA.NVList criteria = TP.orb().create_list(1);
criteria.add_value("atmID", any, 0);
// Create the object reference.
org.omg.CORBA.Object teller_oref =
TP.create_object_reference(
BankApp.TellerHelper.id(), // Repository ID
tellerName, // Object ID
criteria // Routing Criteria
);
Note: It is possible for an object with a given interface and OID to be simultaneously active in two different groups, if those two groups both contain the same object implementation. (However, if your factories generate unique OIDs, this situation is very unlikely.) If you need to guarantee that only one object instance of a given interface name and OID is available at any one time in your domain, either use factory-based routing to ensure that objects with a particular OID are always routed to the same group, or configure your domain so that a given object implementation is in only one group. This ensures that if multiple clients have an object reference containing a given interface name and OID, the reference is always routed to the same object instance.
To enable routing on an object's OID, specify the OID as the routing criterion in the com.beasys.Tobj.TP.create_object_reference method, and set up the UBBCONFIG file appropriately.
What Happens at Run Time

When you implement factory-based routing in a factory, the WLE system generates an object reference. The following example shows how the client application gets an object reference to a Teller object when factory-based routing is implemented:
When the client application subsequently does an invocation on an object using the object reference, the WLE system routes the request to the group specified in the object reference.
Note: Be careful how you implement factory-based routing if you use the process-entity design pattern. The object can service only those entities that are contained in the group's database.
Enabling Multithreaded JavaServers

WLE supports the ability to configure multithreaded JavaServers. For each JavaServer, you can establish the maximum number of worker threads in the application's UBBCONFIG file.
A worker thread is a thread that is started and managed by the WLE Java software, as opposed to threads started and managed by an application program. Internally, WLE Java manages a pool of available worker threads. When a client request is received, an available worker thread from the thread pool is scheduled to execute the request. When the request is done, the worker thread is returned to the pool of available threads.
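The pool behavior described above can be illustrated with standard Java threading facilities. This sketch models the concept only; WLE Java manages its own worker threads internally, and none of the code below is WLE code:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Conceptual sketch of a worker-thread pool: a fixed set of threads
// takes requests from a queue, runs each one, and returns to the pool.
public class WorkerPoolSketch {
    public static int run(int workers, int requests) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        AtomicInteger completed = new AtomicInteger();
        for (int i = 0; i < requests; i++) {
            // Each submitted task stands in for one client request.
            pool.submit(completed::incrementAndGet);
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return completed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        // Four worker threads service one hundred queued requests.
        System.out.println(run(4, 100));
    }
}
```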
In the current WLE Java release, BEA recommends that you not establish threads programmatically. Only worker threads that are created by the run-time WLE JavaServer may access the WLE Java infrastructure. This restriction means that your Java application should not create a Java thread from a worker thread and then try to begin a new transaction in the thread. You can, however, start threads in your application to perform other, non-WLE work.
Deploying multithreaded JavaServers may not be appropriate for all applications. The potential for a performance gain from a multithreaded JavaServer depends on:
If the application is running on a single-processor machine and the application is CPU-intensive only, without any I/O or delays, in most cases the multithreaded JavaServer will not perform better. In fact, due to the overhead of switching between threads, the multithreaded JavaServer in this configuration may perform worse than a single-threaded JavaServer.
A performance gain is more likely with a multithreaded JavaServer when the application has some delays or is running on a multiprocessor machine.
In terms of code, multithreaded WLE server applications look the same as single-threaded applications. However, if you plan to configure your Java server applications to be multithreaded, or if you want the flexibility to do so at some point in the future, keep the following recommendations in mind when writing your object implementations in Java:
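One recommendation of this kind is to synchronize access to any state that is shared across requests, because several worker threads may execute object implementations at the same time. The following sketch is illustrative only and is not WLE-specific code:

```java
// Illustrative only: if object implementations running in a
// multithreaded server share any state (for example, a static
// counter), access to that state must be synchronized.
public class SharedCounter {
    private static int count = 0;

    // Synchronized so that concurrent worker threads cannot
    // interleave the read-increment-write sequence.
    public static synchronized void increment() {
        count++;
    }

    public static synchronized int get() {
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[8];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) increment();
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        // With synchronization, all 8000 increments are counted.
        System.out.println(get());
    }
}
```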
For more information about Java synchronization techniques, see the Java Language Specification, available at the Sun Microsystems, Inc. Web site at the following URL:
http://java.sun.com
For information about defining the UBBCONFIG parameters to implement a multithreaded JavaServer, see Chapter 3 of the Administration Guide.
Additional Design Considerations for the Teller Object

The principal considerations that influence the design of the Teller object include:
The primary implications of these considerations are that these objects must:
The remainder of this section discusses these considerations and implications in detail.
Instantiating the Teller Object

Because the extended Bankapp server is now replicated, the WLE domain must have a means to differentiate between multiple instances of the Teller object. That is, if there are two Bankapp server processes running in a group, the WLE domain must have a means to distinguish between, say, the Teller object running in the first Bankapp server process and the Teller object running in the second Bankapp server process.

The way to give the WLE domain the ability to distinguish among multiple instances of these objects is to make each object instance unique.

To make each Teller object unique, the factories for those objects must change the way in which they create object references. For example, when the TellerFactory object in the original Bankapp sample application created an object reference to the Teller object, the com.beasys.Tobj.TP.create_object_reference method specified an OID that consisted only of the string tellerName. However, in the extended Bankapp sample application discussed in this chapter, the same create_object_reference method uses a generated unique OID instead.

A consequence of giving each Teller object a unique OID is that there may be multiple instances of these objects running simultaneously in the WLE domain. This characteristic is typical of the stateless object model, and is an example of how the WLE domain can be highly scalable while offering high performance.

Finally, because unique Teller objects are brought into memory for each client request on them, it is critical that these objects be deactivated when the invocations on them are completed, so that any object state associated with them does not remain idle in memory. The Bankapp server application addresses this issue by assigning the method activation policy to the Teller object in the XML-based Server Description File.
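One simple way to generate a unique OID is to append a unique suffix to the teller name. The following sketch is illustrative; the naming scheme is an assumption, not necessarily the one used by the shipped sample:

```java
import java.util.UUID;

// Sketch of unique OID generation for Teller objects. Giving each
// object reference a unique OID lets the WLE domain activate a
// separate instance per request. The scheme shown here (teller name
// plus a random UUID) is illustrative only.
public class UniqueOid {
    public static String makeOid(String tellerName) {
        return tellerName + ":" + UUID.randomUUID();
    }

    public static void main(String[] args) {
        String a = makeOid("Teller1");
        String b = makeOid("Teller1");
        // Two references to the "same" teller still get distinct OIDs.
        System.out.println(a.equals(b)); // false
    }
}
```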
Ensuring That Account Updates Occur in the Correct Server Group

The chief scalability advantage of having replicated server groups is being able to distribute processing across multiple machines. However, if your application interacts with a database, which is the case with the JDBC Bankapp sample application, it is critical that you consider the impact of these multiple server groups on the database interactions.

In many cases, you may have one database associated with each machine in your deployment. If your server application is distributed across multiple machines, you must consider how you set up your databases.

The JDBC Bankapp sample application uses factory-based routing to send one set of requests to one machine, and another set to the other machine. As mentioned earlier, factory-based routing is implemented in the TellerFactory object by the way in which references to Teller objects are created.

How the Bankapp Server Application Can Be Scaled Further

In the future, the system administrator of the Bankapp sample application may want to add capacity to the WLE domain. For example, the bank may eventually have a large increase in automated teller machines (ATMs). This capacity can be added without modifying or rebuilding the application.

The system administrator has the following tools available to continually add capacity:
Doing this requires modifying the UBBCONFIG file to specify the additional groups, what server processes run in those groups, and what machines they run on.
For example, instead of routing to only the two groups shown earlier in this chapter, the system administrator can modify the routing rules in the UBBCONFIG file to partition the application further among the new groups added to the WLE domain. Any modification to the routing tables must be consistent with any changes or additions made to the server groups and machines configured in the UBBCONFIG file.
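For instance, if two more groups (here hypothetically named BANK_GROUP3 and BANK_GROUP4) were added to the domain, the ROUTING section might be extended as follows; the new group names and ranges are illustrative, not part of the shipped sample:

```
*ROUTING
atmID
        TYPE = FACTORY
        FIELD = "atmID"
        FIELDTYPE = LONG
        RANGES = "1-5:BANK_GROUP1,6-10:BANK_GROUP2,11-15:BANK_GROUP3,16-20:BANK_GROUP4,*:BANK_GROUP1"
```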
Note: If you add capacity to an application that uses a database, you must also consider the impact on how the database is set up, particularly when you are using factory-based routing. For example, if the Bankapp sample application is spread across six machines, the database on each machine must be set up appropriately and in accordance with the routing tables in the UBBCONFIG file.
Copyright © 1999 BEA Systems, Inc. All rights reserved.