BEA Logo BEA WebLogic Enterprise Release 5.0


Scaling a Java Server Application

This chapter shows how you can take advantage of several key scalability features of the WLE system. The descriptions demonstrate scalability features that achieve the following goals:

Some of the Bankapp examples in this chapter include sample code that is not implemented in the product sample's Bankapp files.

This chapter discusses the following topics:

Overview of the Scalability Features Available in the WLE System

Supporting highly scalable applications is one of the strengths of the WLE system. Many applications may perform well in an environment characterized by 1 to 10 server processes, and 10 to 100 client applications. However, in an enterprise environment, applications need to support:

Deploying a Java application with such demands quickly reveals the resource shortcomings and performance bottlenecks in your application. The WLE system supports such large-scale deployments in several ways, including:

Other features provided in the WLE system to make an application highly scalable include the IIOP Listener/Handler, which is summarized in Getting Started and described fully in the Administration Guide.

Scaling a WLE Server Application

Using the JDBC Bankapp sample application as an example, this section explains how to scale an application to meet a significantly greater processing capability. The basic design goal for the JDBC Bankapp sample application is to greatly scale up the number of client applications it can accommodate by doing the following:

To accommodate these design goals, the JDBC Bankapp sample application has been extended as follows:

The sections that follow describe how the JDBC Bankapp sample application uses replicated server processes and server groups, object state management, and factory-based routing to meet its scalability goals. The first of these sections describes the OMG IDL changes implemented in the Bankapp sample application.

Replicating Server Processes and Server Groups

The WLE system offers a wide variety of choices for how you may configure your server applications, such as:

In summary:

The following sections describe replicated server processes and groups, and also explain how you can configure them in the WLE system.

Replicated Server Processes

When you replicate the server processes in your application:

To achieve the full benefit of replicated server processes, make sure that the objects instantiated by your server application generally have unique IDs. This way, a client invocation on an object can cause the object to be instantiated on demand, within the bounds of the number of server processes that are available, and not queued up for an already active object.

As you design your application, keep in mind that there is a tradeoff between providing:

Failover protection improves only when you add server processes, not threads. This section discusses the technique of adding processes. For information about the tradeoffs between single-threaded and multithreaded JavaServers, see the section Enabling Multithreaded JavaServers.

Figure 4-1 shows the Bankapp server application replicated in the BANK_GROUP1 group. The replicated servers are running on a single machine.

Figure 4-1 Replicated Servers in the Bankapp Sample

When a request arrives for this group, the WLE domain has several server processes available that can process the request, and the WLE domain can choose the server process that is least busy.

In Figure 4-1, note the following:

Replicated Server Groups

The notion of server groups is specific to the WLE system and adds value to a CORBA implementation; server groups are an important part of the scalability features of the WLE system. Basically, to add more machines to a deployment, you need to add more groups.

Figure 4-2 shows the Bankapp sample application groups replicated on another machine, as specified in the application's UBBCONFIG file.

Figure 4-2 Replicating Server Groups Across Machines

Note: In the simple example shown in Figure 4-2, the content of the databases on Production Machines 1 and 2 is identical. Each database would contain all of the account records for all of the account IDs. Only the processing would be distributed, based on the ATM (atmID field). A more realistic example, one not readily adapted to the Bankapp sample application, would distribute the data and processing based on ranges of bank account IDs.

The way in which server groups are configured, where they run, and the ways in which they are replicated is specified in the UBBCONFIG file. When you replicate a server group, you can do the following:

The effect of having multiple server groups includes the following:

The section Factory-based Routing shows how the Bankapp sample application uses factory-based routing to spread the application's processing load across multiple machines.

Configuring Replicated Server Processes and Groups

To configure replicated server processes and groups in your WLE domain:

  1. Open your application's UBBCONFIG file in a text editor, such as WordPad.

  2. In the GROUPS section, specify the names of the groups you want to configure.

  3. In the SERVERS section, enter the following information for the server process you want to replicate:
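Assuming the group and server names used in the Bankapp sample, the resulting GROUPS and SERVERS entries might be sketched as follows. The LMID, GRPNO, MIN, and MAX values here are illustrative assumptions, not values taken from the product sample:

```text
*GROUPS
BANK_GROUP1
        LMID   = SITE1        # logical machine on which the group runs
        GRPNO  = 1

*SERVERS
JavaServer
        SRVGRP = BANK_GROUP1
        SRVID  = 1
        MIN    = 2            # start two replicated server processes
        MAX    = 5            # allow up to five under load
```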

Scaling the Application Via Object State Management

As stated in Java Server Application Concepts, object state management is a fundamental concern of large-scale client/server systems, because such systems must achieve optimized throughput and response time. This section explains how you can use object state management to increase the scalability of the objects managed by a WLE server application, using the Teller object in the Bankapp sample applications as an example.

The following table summarizes how you can use the object state management models supported in the WLE system to achieve major gains in scalability in your WLE applications.

State Model

How You Can Use It to Achieve Scalability

Method-bound

Method-bound objects are brought into the machine's memory only for the duration of the client invocation on them. When the invocation is complete, the object is deactivated and any state data for that object is flushed from memory.

You can use method-bound objects to create a stateless server model in your application, in which thousands of objects are managed by your application. From the client application view, all the objects are available to service requests. However, because the server application is mapping objects into memory only for the duration of client invocations on them, only comparatively few of the objects managed by the server application are in memory at any given moment.

In this document, a method-bound object is referred to as a stateless object.

Process-bound

Process-bound objects remain in memory from the time they are first invoked until the server process in which they are running is shut down. If appropriate for your application, process-bound objects with a large amount of state data can remain in memory to service multiple client invocations, and the system's resources need not be tied up reading and writing the object's state data on each client invocation.

In this document, a process-bound object is referred to as a stateful object. (Transaction-bound objects can also be considered stateful, because they remain in memory between invocations within the scope of a transaction.)

As an example of achieving scalability, the Bankapp sample Teller object could use the method activation policy. The method activation policy assigned to this object means that the object is activated whenever a client request arrives for it. The Teller object stays in memory only for the duration of one client invocation, which is appropriate in cases where the Process-Entity design pattern is recommended. As the number of clients issuing requests on the Teller object increases, the WLE domain is able to:
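The method-bound (stateless) model can be illustrated with a plain-Java sketch. This is not WLE API code: the class name and the in-memory store are invented for illustration. The point is that state is activated (read) at the start of each invocation and flushed back at the end, so nothing stays in object memory between requests:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical illustration of the stateless (method-bound) model.
// The "database" stands in for durable storage; a real Teller would
// read and write account state through JDBC.
public class StatelessTellerSketch {
    private static final Map<String, Integer> database = new HashMap<>();

    public static int deposit(String account, int amount) {
        // Activate: load state only for the duration of this invocation.
        int balance = database.getOrDefault(account, 0);
        balance += amount;
        // Deactivate: write state back; the object retains nothing.
        database.put(account, balance);
        return balance;
    }

    public static void main(String[] args) {
        System.out.println(deposit("10001", 50));   // 50
        System.out.println(deposit("10001", 25));   // 75
    }
}
```

Because no state survives between invocations, any available replicated server process can service the next request on the same account.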

Factory-based Routing

Factory-based routing is a powerful feature that provides a means to send a client request to a specific server group. Using factory-based routing, you can spread that processing load for a given application across multiple machines, because you can determine the group, and thus the machine, in which a given object is instantiated.

You can use factory-based routing to expand upon the variety of load-balancing and scalability capabilities in the WLE system. In the Bankapp sample application, you can use factory-based routing to send requests from one subset of ATMs to one machine, and requests from another subset of ATMs to another machine. As you add machines to increase your application's processing capability, the WLE system makes it easy to modify the factory-based routing in your application to accommodate them.

The chief benefit of factory-based routing is that it provides a simple means to scale up an application, and invocations on a given interface in particular, across a growing deployment environment. Spreading the deployment of an application across additional machines is strictly an administrative function that does not require any recoding or rebuilding of the application.

The chief design consideration in implementing factory-based routing in your client/server application is choosing the value on which routing is based. The sections that follow describe how factory-based routing works, using the extended JDBC Bankapp sample application, which routes client requests to the Teller object based on an ATM identifier (atmID): requests for one subset of ATM IDs go to one group, and requests for another subset go to another group.

How Factory-based Routing Works

Your factories implement factory-based routing by changing the way they create object references. Every object reference contains a group ID; by default, the group ID is that of the factory that creates the object reference. With factory-based routing, however, the factory creates an object reference that includes routing criteria that determine the group ID. When a client application subsequently sends an invocation using such an object reference, the WLE system routes the request to the group ID specified in the object reference. This section focuses on how the group ID is generated for an object reference.

To implement factory-based routing, you need to coordinate the following:

To describe the data that needs to be coordinated, the following two sections discuss configuring for factory-based routing in the UBBCONFIG file, and implementing factory-based routing in the factory.

Configuring for Factory-based Routing in the UBBCONFIG File

For each interface for which requests are routed, you need to establish the following information in the UBBCONFIG file:

To configure for factory-based routing, the UBBCONFIG file needs to specify the following data in the INTERFACES and ROUTING sections, and also in how groups and machines are identified:

  1. The INTERFACES section lists the names of the interfaces for which you want to enable factory-based routing. For each interface, this section specifies the kind of criteria on which the interface routes, via the FACTORYROUTING identifier, as in the following example:

    *INTERFACES
    "IDL:beasys.com/BankApp/Teller:1.0"
    FACTORYROUTING = atmID

    The preceding example shows the fully qualified Interface Repository ID for an interface in the extended Bankapp sample in which factory-based routing is used. The FACTORYROUTING identifier specifies the name of the routing value, atmID.

  2. The ROUTING section specifies the following data for each routing value:
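A ROUTING entry for the atmID criteria might be sketched as follows. The group names follow the earlier figures, but the exact ranges, field type, and layout are illustrative assumptions rather than values taken from the product sample:

```text
*ROUTING
"atmID"
        FIELD     = "atmID"                          # routing value name
        TYPE      = FACTORY                          # factory-based routing
        FIELDTYPE = LONG                             # data type of the value
        RANGES    = "1-5:BANK_GROUP1,6-10:BANK_GROUP2"
```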

Implementing Factory-based Routing in a Factory

Factories implement factory-based routing through the way they invoke the com.beasys.Tobj.TP.create_object_reference method.

This operation has the following Java binding:

public static org.omg.CORBA.Object
create_object_reference(java.lang.String interfaceName,
                        java.lang.String stroid,
                        org.omg.CORBA.NVList criteria)
    throws InvalidInterface,
           InvalidObjectId

The criteria argument specifies a list of named values that the WLE system can use to provide factory-based routing for the object reference. The use of factory-based routing is optional and depends on this argument; if you do not want to use factory-based routing, pass null for this argument. The work of implementing factory-based routing in a factory is in building the NVList.
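The following sketch shows how a factory might build the NVList and create the object reference. It assumes an initialized ORB, an int atmID, and a tellerOid string; the exception package names are assumptions, and the code cannot run outside a WLE deployment because it depends on the com.beasys.Tobj.TP class:

```java
import org.omg.CORBA.Any;
import org.omg.CORBA.NVList;
import org.omg.CORBA.ORB;

// Sketch of a factory method that routes on atmID (illustrative only).
public org.omg.CORBA.Object createTeller(ORB orb, String tellerOid, int atmID)
        throws com.beasys.Tobj.InvalidInterface,
               com.beasys.Tobj.InvalidObjectId {
    // Build the NVList that carries the routing criteria.
    NVList criteria = orb.create_list(1);
    Any atmValue = orb.create_any();
    atmValue.insert_long(atmID);               // the value routed on
    criteria.add_value("atmID", atmValue, 0);  // name must match FACTORYROUTING

    // The WLE system uses the criteria to choose the group ID that is
    // embedded in the returned object reference.
    return com.beasys.Tobj.TP.create_object_reference(
        "IDL:beasys.com/BankApp/Teller:1.0",   // interface repository ID
        tellerOid,                             // unique object ID
        criteria);
}
```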

As stated previously, the TellerFactory object in the Bankapp sample application specifies the value atmID. This value must exactly match the name of the routing criteria specified in the ROUTING section of the UBBCONFIG file.

What Happens at Run Time

When you implement factory-based routing in a factory, the WLE system generates an object reference. The following example shows how the client application gets an object reference to a Teller object when factory-based routing is implemented:

  1. The client application invokes the TellerFactory object, requesting a reference to a Teller object. Included in the request is a teller name that includes an atmID .

  2. The TellerFactory inserts the atmID into an NVList, which is used as the routing criteria.

  3. The TellerFactory invokes the com.beasys.Tobj.TP.create_object_reference method, passing the Teller Interface Repository ID, a unique OID, and the NVList.

  4. The WLE system compares the content of the routing tables with the value in the NVList to determine a group ID.

  5. The WLE system inserts the group ID into the object reference.

When the client application subsequently does an invocation on an object using the object reference, the WLE system routes the request to the group specified in the object reference.
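The routing decision in step 4 can be illustrated with a plain-Java sketch of a range-based routing table. The class and its RANGES-style input string are hypothetical; the WLE system performs this lookup internally using the ROUTING section of the UBBCONFIG file:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: map a routing value to a group ID using a
// RANGES-style specification such as "1-5:BANK_GROUP1,6-10:BANK_GROUP2".
public class RoutingTable {
    private final Map<int[], String> ranges = new LinkedHashMap<>();

    // Parse a specification of the form "lo-hi:GROUP,lo-hi:GROUP,...".
    public RoutingTable(String rangeSpec) {
        for (String entry : rangeSpec.split(",")) {
            String[] parts = entry.split(":");
            String[] bounds = parts[0].split("-");
            ranges.put(new int[] {Integer.parseInt(bounds[0]),
                                  Integer.parseInt(bounds[1])}, parts[1]);
        }
    }

    // Return the group ID whose range contains the routing value.
    public String route(int value) {
        for (Map.Entry<int[], String> e : ranges.entrySet()) {
            if (value >= e.getKey()[0] && value <= e.getKey()[1]) {
                return e.getValue();
            }
        }
        throw new IllegalArgumentException("no group for value " + value);
    }

    public static void main(String[] args) {
        RoutingTable table =
            new RoutingTable("1-5:BANK_GROUP1,6-10:BANK_GROUP2");
        System.out.println(table.route(3));  // BANK_GROUP1
        System.out.println(table.route(7));  // BANK_GROUP2
    }
}
```

The group ID chosen this way is what gets embedded in the object reference in step 5, so all subsequent invocations on that reference land in the same group.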

Note: Be careful how you implement factory-based routing if you use the process-entity design pattern. The object can service only those entities that are contained in the group's database.

Enabling Multithreaded JavaServers

WLE supports the ability to configure multithreaded JavaServers. For each JavaServer, you can establish the maximum number of worker threads in the application's UBBCONFIG file.

A worker thread is a thread that is started and managed by the WLE Java software, as opposed to threads started and managed by an application program. Internally, WLE Java manages a pool of available worker threads. When a client request is received, an available worker thread from the thread pool is scheduled to execute the request. When the request is done, the worker thread is returned to the pool of available threads.
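The worker-thread model described above can be sketched with standard java.util.concurrent facilities. This is only an illustration of the pool concept (a fixed set of threads servicing queued requests and returning to the pool), not the WLE Java scheduler itself:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical illustration of a worker-thread pool servicing requests.
public class WorkerPoolSketch {
    public static int serveRequests(int maxWorkers, int requestCount) {
        ExecutorService pool = Executors.newFixedThreadPool(maxWorkers);
        AtomicInteger completed = new AtomicInteger();
        for (int i = 0; i < requestCount; i++) {
            // Each submitted task stands in for one client request; an
            // available worker thread executes it and returns to the pool.
            pool.submit(completed::incrementAndGet);
        }
        pool.shutdown();  // stop accepting new requests
        try {
            pool.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return completed.get();
    }

    public static void main(String[] args) {
        System.out.println(serveRequests(4, 100));  // 100
    }
}
```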

In the current WLE Java release, BEA recommends that you not establish threads programmatically. Only worker threads that are created by the run-time WLE JavaServer may access the WLE Java infrastructure. This restriction means that your Java application should not create a Java thread from a worker thread and then try to begin a new transaction in the thread. You can, however, start threads in your application to perform other, non-WLE work.

Deploying multithreaded JavaServers may not be appropriate for all applications. The potential for a performance gain from a multithreaded JavaServer depends on:

If the application is running on a single-processor machine and the application is CPU-intensive only, without any I/O or delays, in most cases the multithreaded JavaServer will not perform better. In fact, due to the overhead of switching between threads, the multithreaded JavaServer in this configuration may perform worse than a single-threaded JavaServer.

A performance gain is more likely with a multithreaded JavaServer when the application has some delays or is running on a multiprocessor machine.

In code, multithreaded WLE server applications look the same as single-threaded applications. However, if you plan to configure your Java server applications to be multithreaded, or if you want the flexibility to do so in the future, keep the following recommendations in mind when writing your object implementations in Java:

For information about defining the UBBCONFIG parameters to implement a multithreaded JavaServer, see Chapter 3 of the Administration Guide.

Additional Design Considerations for the Teller Object

The principal considerations that influence the design of the Teller object include:

The primary implications of these considerations are that these objects must:

The remainder of this section discusses these considerations and implications in detail.

Instantiating the Teller Object

Because the extended Bankapp server is now replicated, the WLE domain must be able to distinguish among multiple instances of the Teller object. For example, if two Bankapp server processes are running in a group, the WLE domain must be able to distinguish between the Teller object running in the first Bankapp server process and the Teller object running in the second.

The way to provide the WLE domain with the ability to distinguish among multiple instances of these objects is to make each object instance unique.

To make each Teller object unique, the factories for those objects must change the way in which they create object references to them. For example, when the TellerFactory object in the original Bankapp sample application created an object reference to the Teller object, the com.beasys.Tobj.TP.create_object_reference method specified an OID that consisted only of the string tellerName. However, in the extended Bankapp sample application discussed in this chapter, the same create_object_reference method uses a generated unique OID instead.
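A minimal sketch of the kind of change described here follows, using a hypothetical helper that appends a generated UUID to the teller name. The helper name and OID format are illustrative assumptions, not the sample's actual code:

```java
import java.util.UUID;

// Hypothetical sketch: generate a unique OID per object reference so the
// WLE domain can distinguish among multiple Teller instances.
public class OidSketch {
    public static String uniqueTellerOid(String tellerName) {
        return tellerName + ":" + UUID.randomUUID();
    }

    public static void main(String[] args) {
        String a = uniqueTellerOid("Teller1");
        String b = uniqueTellerOid("Teller1");
        System.out.println(a.equals(b));  // false: each reference is unique
    }
}
```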

A consequence of giving each Teller object a unique OID is that there may be multiple instances of these objects running simultaneously in the WLE domain. This characteristic is typical of the stateless object model, and is an example of how the WLE domain can be highly scalable and at the same time offer high performance.

Finally, because unique Teller objects must be brought into memory for each client request on them, it is critical that these objects be deactivated when the invocations on them are completed, so that idle object state does not remain in memory. The Bankapp server application addresses this issue by assigning the method activation policy to the Teller object in the XML-based Server Description File.

Ensuring That Account Updates Occur in the Correct Server Group

The chief scalability advantage of having replicated server groups is to be able to distribute processing across multiple machines. However, if your application interacts with a database, which is the case with the JDBC Bankapp sample application, it is critical that you consider the impact of these multiple server groups on the database interactions.

In many cases, you may have one database associated with each machine in your deployment. If your server application is distributed across multiple machines, you must consider how you set up your databases.

The JDBC Bankapp sample application uses factory-based routing to send one set of requests to one machine, and another set to the other machine. As mentioned earlier, factory-based routing is implemented in the TellerFactory object by the way in which references to Teller objects are created.

How the Bankapp Server Application Can Be Scaled Further

In the future, the system administrator of the Bankapp sample application may want to add capacity to the WLE domain; for example, the bank may eventually add a large number of automated teller machines (ATMs). Capacity can be added without modifying or rebuilding the application.

The system administrator has the following tools available to continually add capacity: