


Scaling Oracle Tuxedo CORBA Applications
This topic introduces key concepts and tasks for scaling Oracle Tuxedo CORBA applications. This topic includes the following sections:
For more detailed information and examples for Oracle Tuxedo CORBA applications, see Chapter 2, “Scaling CORBA Server Applications.”
Notes:
Technical support for third-party CORBA Java ORBs is provided by their respective vendors; Oracle Tuxedo does not provide technical support or documentation for third-party CORBA Java ORBs.
About Scaling Oracle Tuxedo CORBA Applications
This topic includes the following sections:
Application Scalability Requirements
Many applications perform adequately in an environment where between 1 and 10 server processes and 10 to 100 client applications are running. However, in an enterprise environment, applications may need to support hundreds of execution contexts (where a context can be a thread or a process), tens of thousands of client applications, and millions of objects at satisfactory performance levels.
Subjecting an application to exponentially increasing demands quickly reveals any resource shortcomings and performance bottlenecks in the application. Scalability is therefore an essential characteristic of Oracle Tuxedo applications.
You can build highly scalable Oracle Tuxedo applications by:
Oracle Tuxedo Scalability Features
Oracle Tuxedo supports large-scale application deployments by:
Using Object State Management
This topic includes the following sections:
Object state management is a fundamental concern of large-scale client/server systems, because such systems must achieve optimal throughput and response time. For more detailed information about using object state management, see “Using a Stateless Object Model” on page 2‑3 and the technical article Process-Entity Design Pattern.
CORBA Object State Models
Oracle Tuxedo CORBA supports three object state management models:
For more information about these models, see “Server Application Concepts” in Creating CORBA Server Applications.
Method-bound Objects
Method-bound objects are loaded into the machine’s memory only for the duration of the client invocation. When the invocation is complete, the object is deactivated and any state data for that object is flushed from memory. In this document, a method-bound object is considered to be a stateless object.
You can use method-bound objects to create a stateless server model in your application. In a stateless server model, requests can be directed to any available server rather than being tied to the server on which an object was last active, which allows concurrent execution for thousands and even millions of objects. From the client application view, all the objects are available to service requests. However, because the server application maps objects into memory only for the duration of client invocations, few of the objects managed by the server application are in memory at any given moment.
Process-bound Objects
Process-bound objects remain in memory beginning when they are first invoked until the server process in which they are running is shut down. A process-bound object can be activated upon a client invocation or explicitly before any client invocation (a preactivated object). Applications can control the deactivation of process-bound objects. In this document, a process-bound object is considered to be a stateful object.
When appropriate, process-bound objects with a large amount of state data can remain in memory to service multiple client invocations, thereby avoiding reading and writing the object’s state data on each client invocation.
Transaction-bound Objects
Transaction-bound objects can also be considered stateful because, within the scope of a transaction, they can remain in memory between invocations. If the object is activated within the scope of a transaction, the object remains active until the transaction is either committed or rolled back. If the object is activated outside the scope of a transaction, its behavior is the same as that of a method-bound object (it is loaded for the duration of the client invocation).
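The state model for each CORBA object implementation is selected with an activation policy in the server’s Implementation Configuration File (ICF). The following fragment is a minimal sketch; the module, implementation, and interface names are illustrative placeholders rather than names from this documentation:

// Illustrative ICF fragment showing one implementation per state model.
module POA_Bankapp
{
    // Method-bound (stateless): activated only for the duration of a request
    implementation Teller_i
    {
        activation_policy  ( method );
        transaction_policy ( optional );
        implements         ( Bankapp::Teller );
    };
    // Process-bound (stateful): remains active until the server shuts down
    implementation TellerFactory_i
    {
        activation_policy  ( process );
        transaction_policy ( optional );
        implements         ( Bankapp::TellerFactory );
    };
    // Transaction-bound: stateful within a transaction, method-bound outside one
    implementation Account_i
    {
        activation_policy  ( transaction );
        transaction_policy ( always );
        implements         ( Bankapp::Account );
    };
};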
Implementing Stateless and Stateful Objects
In general, application developers need to balance the costs of implementing stateless objects against the costs of implementing stateful objects.
About Stateless and Stateful Objects
The decision to use stateless or stateful objects depends on various factors. In the case where the cost to initialize an object with its durable state is expensive—because, for example, the object’s data takes up a great deal of space, or the durable state is located on a disk very remote from the servant that activates it—it may make sense to keep the object stateful, even if the object is idle during a conversation. In the case where the cost to keep an object active is expensive in terms of machine resource usage, it may make sense to make such an object stateless.
By managing object state in a way that is efficient and appropriate for your application, you can maximize your application’s ability to support large numbers of simultaneous client applications that use large numbers of objects. The way that you manage object state depends on the specific characteristics and requirements of your application. For CORBA applications, you manage object state by assigning activation policies to objects; for example, the method activation policy deactivates idle object instances so that machine resources can be allocated to other object instances.
When to Use Stateless Objects
Stateless objects generally provide good performance and optimal usage of server resources, because server resources are never used when objects are idle. Using stateless objects is a good approach to implementing server applications and is particularly appropriate when:
By making an object stateless, you can generally assure that server application resources are not being reserved unnecessarily while waiting for input from the client application.
An application that employs a stateless object model has the following characteristics:
When to Use Stateful Objects
A stateful object, once activated, remains in memory until a specific event occurs, such as the shutdown of the process in which the object exists or the completion of the transaction in which the object is activated.
Using stateful objects is recommended when:
Note:
Stateful objects have the following behavior:
For example, if an object has a lock on a database and is caching large amounts of data in memory, that database and the memory used by that stateful object are unavailable to other objects, potentially for the entire duration of a transaction.
Parallel Objects
Parallel objects are, by definition, stateless objects, so they can exist concurrently on more than one server. In release 8.0 of Oracle Tuxedo, you can use the Implementation Configuration File (ICF) to force all objects in a specific implementation to be parallel objects, which improves performance. For more information on parallel objects, see “Using Parallel Objects” on page 1‑14.
Replicating Server Processes and Server Groups
This topic includes the following sections:
For more detailed information about replicating server processes and server groups, see the following topics:
About Replicating Server Processes and Server Groups
The Oracle Tuxedo CORBA environment allows CORBA objects to be deployed across multiple servers to provide additional failover reliability and to split the client’s workload through load balancing. Oracle Tuxedo CORBA load balancing is enabled by default. For more information about configuring load balancing, see “Enabling System-controlled Load Balancing” on page 4‑5. For more information about distributing the application workload using Oracle Tuxedo CORBA features, see Chapter 3, “Distributing CORBA Applications.”
The Oracle Tuxedo architecture provides the following server organization:
This architecture allows new servers, groups, or machines to be dynamically added or removed, to adapt the application to high- or low-demand periods, or to accommodate internal changes required by the application. The Oracle Tuxedo run time provides load balancing and failover by routing requests across available servers.
System administrators can scale an Oracle Tuxedo application by:
Replicating Server Processes. Increasing the number of server processes to support more active objects within a group and to enable load balancing among servers.
Replicating Server Groups. Increasing the number of server groups so that Oracle Tuxedo can balance the load by distributing processing requests across multiple server machines.
Configuration Options
You can configure server applications as:
You can add more parallel processing capability to client/server applications by replicating server processes or adding more threads. You can add more server groups to split processing across resource managers. For CORBA applications, you can also implement factory-based routing, as described in “Using Factory-based Routing (CORBA Servers Only)” on page 1‑11.
Replicating Server Processes
System administrators can scale an application by replicating the servers to support more concurrent active objects, or process more concurrent requests, on the server node. To configure replicated server processes, see “Configuring Replicated Server Processes and Groups” on page 4‑5.
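As a sketch, a server process can be replicated by setting the MIN and MAX parameters for its entry in the SERVERS section of the UBBCONFIG file. The server and group names and the values shown here are illustrative only:

*SERVERS
    # Boot two copies of the server; allow up to five to be started
    # later (for example, with tmboot -i or tmadmin) as demand grows.
    BankObjectServer
        SRVGRP = BANK_GROUP1
        SRVID  = 10
        MIN    = 2
        MAX    = 5
        CLOPT  = "-A"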
Note:
Benefits
The benefits of using replicated server processes include:
Guidelines
To achieve the maximum benefit of using replicated server processes, make sure that the CORBA objects instantiated by your server application have unique object IDs. This allows a client invocation on an object to cause the object to be activated on demand in any available server process, rather than being queued for an already active object.
You should also consider the trade-off between providing better application recovery by using multiple processes versus more efficient performance using threads (for some types of application patterns and processing environments).
Better failover occurs only when you add processes, not threads. For information about using single-threaded and multithreaded servers, see “When to Use Multithreaded CORBA Servers” on page 1‑10.
Replicating Server Groups
Server groups are unique to Oracle Tuxedo and are key to its scalability features. A group contains one or more servers on a single node. System administrators can scale an Oracle Tuxedo application by replicating server groups and configuring load balancing within a domain.
Replicating a server group involves defining another server group with the same type of servers and resource managers to provide parallel access to a shared resource (such as a database). CORBA applications, for example, can use factory-based routing to split processing across the database partitions.
The UBBCONFIG file specifies how server groups are configured and where they run. By using multiple server groups, Oracle Tuxedo can:
To configure replicated server groups, see “Configuring Replicated Server Processes and Groups” on page 4‑5.
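As a sketch, replicating a server group amounts to adding a second GROUPS entry of the same kind on another machine (LMID) in the UBBCONFIG file. The group names, machine identifiers, TMS server name, and the elided OPENINFO string are illustrative placeholders:

*GROUPS
    # Two equivalent groups, one per machine, each with its own
    # connection to the shared resource manager (OPENINFO omitted).
    BANK_GROUP1
        LMID     = SITE1
        GRPNO    = 1
        TMSNAME  = TMS_ORA
        OPENINFO = "..."
    BANK_GROUP2
        LMID     = SITE2
        GRPNO    = 2
        TMSNAME  = TMS_ORA
        OPENINFO = "..."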
Using Multithreaded Servers
This topic includes the following sections:
For instructions on how to configure servers for multithreading, see “Configuring Multithreaded Servers” on page 4‑6.
About Multithreaded CORBA Servers
System administrators can scale an Oracle Tuxedo application by enabling multithreading in CORBA servers and by tuning configuration parameters (such as the maximum number of server threads that can be created) in the application’s UBBCONFIG file.
Oracle Tuxedo CORBA supports the configuration of multithreaded CORBA applications. A multithreaded CORBA server can service multiple object requests simultaneously, while a single-threaded CORBA server processes only one request at a time.
Server threads are started and managed by the Oracle Tuxedo CORBA software rather than by the application program. Internally, Oracle Tuxedo CORBA manages a pool of available server threads. If a CORBA server is configured to be multithreaded, then when a client request is received, an available server thread from the thread pool is scheduled to execute the request. While the object is active, the thread is busy. When the request is complete, the thread is returned to the pool of available threads.
When to Use Multithreaded CORBA Servers
Designing an application to use multiple, independent threads provides concurrency within an application and can improve overall throughput. Using multiple threads enables applications to be structured efficiently with threads servicing several independent tasks in parallel. Multithreading is particularly useful when:
Some computer operations take a substantial amount of time to complete. A multithreaded application design can significantly reduce the wait time between the request and completion of operations. This is especially true when operations perform a large amount of I/O, such as accessing a database or invoking operations on remote objects, or when they are CPU-bound on a multiprocessor machine. Implementing multithreading in a server process can increase the number of requests the server processes in a fixed amount of time.
The primary requirement for multithreaded server applications is the simultaneous handling of multiple client requests. For more information on the requirements and benefits of using multithreaded servers, see the Oracle Tuxedo documentation.
Coding Recommendations
To make it possible to analyze the performance of multithreaded servers, if your client or server application sends messages to the user log (ULOG), include one of the following identifiers in each message:
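For example, a C++ server application might tag every ULOG entry with the identifier of the dispatching thread so that interleaved messages from concurrent requests can be separated during analysis. This is a sketch only; the message format is an assumption, and the cast of pthread_self() is illustrative rather than portable to every platform:

#include <pthread.h>
#include <userlog.h>   /* Oracle Tuxedo ULOG API, userlog(3c) */

/* Write a ULOG entry tagged with the calling thread so that entries
   produced by different dispatch threads can be told apart. */
static void log_step(const char *operation, const char *step)
{
    unsigned long tid = (unsigned long) pthread_self();  /* illustrative cast */
    userlog((char *) "APPTRACE thread=%lu op=%s step=%s", tid, operation, step);
}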
Configuring a Multithreaded CORBA Server
To configure a multithreaded CORBA server, you change settings in the application’s UBBCONFIG file. For information about defining the UBBCONFIG parameters to implement a multithreaded server, see “Configuring Multithreaded Servers” on page 4‑6.
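A minimal sketch follows, assuming a CORBA server that has been built to support multithreading (for example, with the buildobjserver -t option); the server name, group, and thread counts are illustrative:

*SERVERS
    # MINDISPATCHTHREADS sets the number of dispatch threads started with
    # the server; MAXDISPATCHTHREADS caps the size of the thread pool.
    CorbaBankServer
        SRVGRP = APP_GRP1
        SRVID  = 20
        MINDISPATCHTHREADS = 2
        MAXDISPATCHTHREADS = 50
        CLOPT  = "-A"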
Using Factory-based Routing (CORBA Servers Only)
This topic includes the following sections:
This topic introduces factory-based routing in Oracle Tuxedo CORBA applications. For more detailed information about using factory-based routing, see “Configuring Factory-based Routing in the UBBCONFIG File” on page 2‑11.
About Factory-based Routing
Factory-based routing enables you to specify which server group is associated with an object reference. As a result, you can define the group and machine in which a given object is instantiated and then distribute the processing load for a given application across multiple machines.
With factory-based routing, routing is performed when a factory creates an object reference. The factory specifies field information in its call to the Oracle Tuxedo CORBA TP Framework to create an object reference. The TP Framework executes the routing algorithm based on the routing criteria defined in the ROUTING section of the application’s UBBCONFIG file. The resulting object reference has, as its target, an appropriate server group for handling method invocations on the object reference. Any server in that server group that implements the interface is eligible to activate the servant for the object reference.
Thus, the activation of CORBA objects can be distributed by server group based on the defined criteria, and different implementations of CORBA interfaces can be supplied in different groups. This means you can replicate the same CORBA interface across multiple server groups, based on defined, group-specific differences.
The primary benefit of factory-based routing is that it provides a simple means to scale an application, and invocations on a given interface in particular, across a growing deployment environment. Distributing the deployment of an application across additional machines is strictly an administrative function that does not require you to recode or rebuild the application.
Characteristics of Factory-based Routing
Factory-based routing has the following characteristics:
Not all server processes in a particular server group need to use the same CORBA interfaces.
How Factory-based Routing Is Implemented
To implement factory-based routing, you must change the way your factories create object references. First, you must coordinate with the system designer to determine the fields and values to be used as the basis for routing. Then, for each interface, you must configure factory-based routing such that the interface definition for the factory specifies the parameter that represents the routing criteria that is used to determine the group ID.
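For example, a C++ factory might pass the routing field to the TP Framework when it creates an object reference. The following is a minimal sketch, assuming an application-defined IDL interface Bankapp::Teller and a LONG routing field named atmID that matches a criterion in the ROUTING section; the class, interface, header, and field names are illustrative:

#include "Bankapp_s.h"   // generated skeleton header (name is illustrative)
#include <TP.h>          // Oracle Tuxedo CORBA TP Framework

Bankapp::Teller_ptr TellerFactory_i::createTeller(CORBA::Long atm_id)
{
    // Build the routing criteria as a CORBA NVList with one named value
    // whose name matches the FIELD in the ROUTING section.
    CORBA::NVList_ptr criteria;
    TP::orb()->create_list(1, criteria);

    CORBA::Any atm_any;
    atm_any <<= atm_id;
    criteria->add_value("atmID", atm_any, 0);

    // The TP Framework applies the routing criteria when the reference is
    // created, embedding the selected server group in the reference.
    CORBA::Object_var obj = TP::create_object_reference(
        Bankapp::_tc_Teller->id(),   // interface repository ID
        "teller_oid",                // application-chosen object ID
        criteria);

    CORBA::release(criteria);
    return Bankapp::Teller::_narrow(obj.in());
}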
To configure factory-based routing, define the following information in the UBBCONFIG file:
Routing criteria in the ROUTING section.
Notes:
When implementing factory-based routing, remember that an object with a given interface and OID can be simultaneously active in two different groups if those two groups both contain the same object implementation. You can avoid this by having your factories generate unique OIDs. To guarantee that only one object instance of a given interface name and OID is available at any one time in your domain, you must either:
If multiple clients have an object reference that contains a given interface name and OID, the reference will always be routed to the same object instance.
Factory-based routing is performed once per CORBA object, when the object reference is created. Thereafter, the object reference contains additional information that indicates where the target server exists.
Configuring Factory-based Routing in the UBBCONFIG File
Routing criteria specify the data values used to route requests to a particular server group. To configure factory-based routing, you define routing criteria in the ROUTING section of the UBBCONFIG file (for each interface for which requests are routed). For more detailed information about configuring factory-based routing, see “Configuring Factory-based Routing in the UBBCONFIG File” on page 2‑11.
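As a sketch, the routing criterion and its mapping to server groups might look like the following UBBCONFIG fragment; the interface repository ID, field name, and group names are illustrative:

*INTERFACES
    "IDL:Bankapp/Teller:1.0"
        FACTORYROUTING = atmID

*ROUTING
    # Requests carrying atmID 1-5 are routed to BANK_GROUP1,
    # 6-10 to BANK_GROUP2, and anything else to BANK_GROUP1.
    atmID
        FIELD     = "atmID"
        TYPE      = FACTORY
        FIELDTYPE = LONG
        RANGES    = "1-5:BANK_GROUP1,6-10:BANK_GROUP2,*:BANK_GROUP1"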
To configure factory-based routing across multiple domains, you must also configure the factory_finder.ini file to identify factory objects that are used in the current (local) domain but that are resident in a different (remote) domain. For more information, see “Configuring Multiple Domains for CORBA Applications” in Using the Oracle Tuxedo Domains Component.
Using Parallel Objects
This topic includes the following sections:
About Parallel Objects
Support for parallel objects was added in release 8.0 of Oracle Tuxedo as a performance enhancement. The parallel objects feature enables you to designate all business objects in a particular application as stateless objects. The effect is that, unlike stateful business objects, which can run on only one server in a single domain, stateless business objects can run on all servers in a single domain. Thus, the benefits of parallel objects are as follows:
Note:
As illustrated in Figure 1‑1, if a stateful business object is active on a server on Machine 2, all subsequent requests to that business object will be sent to Group 2 on Machine 2. If the active object on Machine 2 is busy processing another request, the request is queued. Even after the business object stops processing requests on Machine 2, all subsequent requests on that stateful business object will still be sent to Group 2. After the object is deactivated on Machine 2, subsequent requests will be sent to Group 2 on Machine 2 and can be processed by other servers in Group 2.
Figure 1‑1 Using Stateful Business Objects
As illustrated in Figure 1‑2, if a parallel object is running on all the servers in Group 1 on Machine 1 (multiple instances of stateless, user-controlled business objects can run on multiple servers at the same time), subsequent requests to that business object will be sent to Machine 2 and distributed to the servers in Group 2 until a server becomes available in Group 1. As long as there is a server available on the local machine, requests will be distributed to the servers on Machine 1, unless the Oracle Tuxedo load-balancing feature determines that, due to loads on the servers, the request should be serviced by a server in Group 2. To make this determination, the load-balancing feature uses the LOAD parameter, which is set in the INTERFACES section of the UBBCONFIG file. For information on the LOAD parameter, see “Modifying the INTERFACES Section” on page 3‑10.
Figure 1‑2 Using Stateless Business Objects
Configuring Parallel Objects
Support for parallel objects was added to Oracle Tuxedo in release 8.0. You use the ICF file to implement parallel objects for a particular CORBA application. The ICF file includes a user-controlled concurrency policy option that designates all business objects implemented in the application to which the ICF file applies as stateless objects.
The concurrency policy determines whether the Active Object Map (AOM) is used to guarantee that an object is active in only one server at any one time. In previous releases, use of the AOM was mandatory, not optional. Use of the AOM is referred to as system-controlled concurrency. Unlike the system-controlled concurrency model, the user-controlled model, which does not use the AOM, allows the same object to be active in more than one server at a time. Thus, user-controlled concurrency can be used to improve performance and load balancing. For more information about configuring user-controlled concurrency for parallel objects, see “Parallel Objects” in the CORBA Programming Reference.
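A minimal sketch of an ICF entry that selects user-controlled concurrency follows; the module, implementation, and interface names are illustrative, and the option spelling should be confirmed against the “Parallel Objects” section of the CORBA Programming Reference:

// Illustrative ICF fragment for a parallel (stateless) object.
module POA_Bankapp
{
    implementation TellerFinder_i
    {
        activation_policy  ( method );           // parallel objects are stateless
        transaction_policy ( optional );
        concurrency_policy ( user_controlled );  // bypass the AOM so the object
                                                 // can be active in more than one
                                                 // server at a time
        implements         ( Bankapp::TellerFinder );
    };
};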
Multiplexing Incoming Client Connections
This topic includes the following sections:
System administrators can scale an Oracle Tuxedo application by increasing, in the UBBCONFIG file, the number of incoming client connections that an application site supports. Oracle Tuxedo provides a multicontexted, multistated gateway of listener/handlers to handle the multiplexing of all the requests issued by the client.
IIOP Listener and Handler
The IIOP Listener (ISL) enables access to Oracle Tuxedo CORBA objects by remote Oracle Tuxedo CORBA clients that use IIOP. The ISL is a process that listens for remote CORBA clients requesting IIOP connections. The IIOP Handler (ISH) is a multiplexor process that acts as a surrogate on behalf of the remote CORBA client. Both the ISL and ISH run on the application site. An application site can have one or more ISL processes and multiple associated ISH processes. Each ISH is associated with a single ISL.
The client connects to the ISL process using a known network address. The ISL balances the load among ISH processes by selecting the best available ISH and passing the connection directly to it. The ISL/ISH manages the context on behalf of the application client. For more information about ISL and ISH, see the description of ISL in the File Formats, Data Descriptions, MIBs, and System Processes Reference.
Increasing the Number of ISH Processes
System administrators can scale an Oracle Tuxedo CORBA application by increasing the number of ISH processes on an application site, thereby enabling the ISL to load balance among more ISH processes. By default, an ISH can handle up to 10 client connections. To increase this number, pass the optional CLOPT -x mpx-factor parameter to the ISL command, specifying in mpx-factor the number of client connections each ISH can handle (up to 4096), and therefore the degree of multiplexing for each ISH. Increasing the number of ISH processes may affect application performance as the application site services more concurrent processes.
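As a sketch, the multiplexing factor is raised through the ISL entry in the SERVERS section of the UBBCONFIG file; the group, server ID, host, and port shown are illustrative:

*SERVERS
    # -n gives the ISL listening address; -x raises the per-ISH connection
    # limit from the default of 10 to 40.
    ISL
        SRVGRP = SYS_GRP
        SRVID  = 5
        CLOPT  = "-A -- -n //myhost:2468 -x 40"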
System administrators can tune other ISH options as well to scale Oracle Tuxedo applications. For more information, see the description of ISL in the File Formats, Data Descriptions, MIBs, and System Processes Reference.

Copyright © 1994, 2017, Oracle and/or its affiliates. All rights reserved.