WebLogic Event Server supports Jetty as a Java Web server for deploying HTTP servlets and static resources.
WebLogic Event Server support for Jetty is based on Version 1.2 of the OSGi HTTP Service. This API provides the ability to dynamically register and unregister javax.servlet.Servlet objects and static resources with the runtime. The specification requires at minimum version 2.1 of the Java Servlet API.
WebLogic Event Server supports the following features for Jetty:
For details about configuring Jetty, see Configuring a Jetty Server Instance.
In addition to supporting typical (synchronous) Java servlets, WebLogic Event Server supports asynchronous servlets. An asynchronous servlet receives a request, uses a thread to initiate some work, and then releases the thread while the work completes; when the work is done, the servlet acquires another thread and sends the response.
WebLogic Event Server uses network I/O (NetIO) to configure the port and listen address of Jetty services.
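For example, a minimal network I/O configuration defines a named NetIO service and the port it listens on; a Jetty service then references it by name. The element names below follow the sample configuration at the end of this section:

```xml
<netio>
    <!-- the name a <jetty> service references via <network-io-name> -->
    <name>JettyNetIO</name>
    <!-- the port the Jetty service listens on -->
    <port>9002</port>
</netio>
```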
Note: Jetty has a built-in capability for multiplexed network I/O. However, it does not support multiple protocols on the same port.
WebLogic Event Server Jetty services use the WebLogic Event Server Work Manager to provide scalable thread pooling; see work-manager Configuration Object.
Note: Jetty provides its own thread pooling capability. However, BEA recommends using the WebLogic Event Server self-tuning thread pool to minimize footprint and configuration complexity.
WebLogic Event Server allows you to configure how your application prioritizes the execution of its work. Based on rules you define and by monitoring actual runtime performance, you can optimize the performance of your application and maintain service level agreements. You define the rules and constraints for your application by defining a work manager.
WebLogic Event Server uses a single thread pool, in which all types of work are executed. WebLogic Event Server prioritizes work based on rules you define and on run-time metrics, including the actual time it takes to execute a request and the rate at which requests enter and leave the pool.
The common thread pool changes its size automatically to maximize throughput. The queue monitors throughput over time and based on history, determines whether to adjust the thread count. For example, if historical throughput statistics indicate that a higher thread count increased throughput, WebLogic Event Server increases the thread count. Similarly, if statistics indicate that fewer threads did not reduce throughput, WebLogic Event Server decreases the thread count.
WebLogic Event Server prioritizes work and allocates threads based on an execution model that takes into account defined parameters and run-time performance and throughput.
You can configure a set of scheduling guidelines and associate them with one or more applications, or with particular application components. For example, you can associate one set of scheduling guidelines with one application, and another set with a different application. At run time, WebLogic Event Server uses these guidelines to assign pending work and enqueued requests to execution threads.
To manage work in your applications, you define one or more of the following work manager components:
fairshare—Specifies the average thread-use time required to process requests.

For example, assume that WebLogic Event Server is running two modules. The work manager for ModuleA specifies a fairshare of 80 and the work manager for ModuleB specifies a fairshare of 20. During a period of sufficient demand, with a steady stream of requests for each module such that the number of requests exceeds the number of threads, WebLogic Event Server allocates 80% and 20% of the thread-usage time to ModuleA and ModuleB, respectively.

Note: The value of a fair share request class is specified as a relative value, not a percentage. Therefore, in the above example, if the request classes were defined as 400 and 100, they would still have the same relative values.
max-threads-constraint—Limits the number of concurrent threads executing requests from the constrained work set. The default is unlimited. For example, consider a constraint defined with a maximum of 10 threads and shared by three entry points. The scheduling logic ensures that no more than 10 threads are executing requests from the three entry points combined.

A max-threads-constraint can be defined in terms of the availability of a resource that requests depend upon, such as a connection pool.

A max-threads-constraint might, but does not necessarily, prevent a request class from taking its fair share of threads or meeting its response time goal. Once the constraint is reached, WebLogic Event Server does not schedule requests of this type until the number of concurrent executions falls below the limit. WebLogic Event Server then schedules work based on the fair share or response time goal.
min-threads-constraint—Guarantees the number of threads the server allocates to affected requests to avoid deadlocks. The default is zero. A min-threads-constraint value of one is useful, for example, for a replication update request, which is called synchronously from a peer.

A min-threads-constraint does not necessarily increase a fair share. This type of constraint has an effect primarily when the WebLogic Event Server instance is close to a deadlock condition. In that case, the constraint causes WebLogic Event Server to schedule a request even if requests in the service class have recently received more than their fair share.
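As a sketch, the parameters above map onto elements of a <work-manager> configuration object such as the following. The two constraint elements appear in the sample configuration at the end of this section; the <fairshare> element name is an assumption for illustration:

```xml
<work-manager>
    <name>ModuleAWM</name>
    <!-- assumed element name: relative fair share of thread-use time -->
    <fairshare>80</fairshare>
    <!-- never execute more than 10 concurrent requests from this work set -->
    <max-threads-constraint>10</max-threads-constraint>
    <!-- always keep at least 1 thread available, to avoid deadlock -->
    <min-threads-constraint>1</min-threads-constraint>
</work-manager>
```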
You use the following configuration objects to configure an instance of the Jetty HTTP server in the config.xml file that describes your WebLogic Event Server domain:

<jetty>: See jetty Configuration Object for details.

<netio>: See netio Configuration Object for details.

<work-manager>: See work-manager Configuration Object for details.
Use the <jetty-web-app>
configuration object to define a Web application in the Jetty instance; see jetty-web-app Configuration Object for details.
See Example Jetty Configuration for a sample of using each of the preceding configuration objects.
Use the parameters described in the following table to define a <jetty>
configuration object in your config.xml
file.
See netio Configuration Object for details.
Use the parameters described in the following table to define a <netio>
configuration object in your config.xml
file.
Use the parameters described in the following table to define a <work-manager>
configuration object in your config.xml
file.
Use the following configuration object to define a Web application for use by Jetty:
Overrides the scratch-directory parameter of the Jetty server instance (see Configuring a Jetty Server Instance). If not specified, a default directory is created.
The name of the Jetty service where this Web application is deployed. It must match the name of an existing Jetty service; see Configuring a Jetty Server Instance.
You develop servlets for deployment to Jetty by creating a standard J2EE Web application and configuring it with the <jetty-web-app> configuration object; see jetty-web-app Configuration Object.
WebLogic Event Server supports deployments packaged either as WAR files or as exploded WAR files, as described in version 2.4 of the Java Servlet Specification.
You can deploy pre-configured web apps from an exploded directory or WAR file by including them in the server configuration.
Security constraints specified in the standard web.xml
file are mapped to the Common Security Services security provider. The Servlet API specifies declarative role-based security, which means that particular URL patterns can be mapped to security roles.
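For example, a standard Servlet 2.4 web.xml fragment that maps a URL pattern to a security role, which the Common Security Services provider then enforces, might look like this:

```xml
<security-constraint>
    <web-resource-collection>
        <web-resource-name>Admin pages</web-resource-name>
        <!-- all requests under /admin are constrained -->
        <url-pattern>/admin/*</url-pattern>
    </web-resource-collection>
    <auth-constraint>
        <!-- only callers in the "admin" role may access /admin/* -->
        <role-name>admin</role-name>
    </auth-constraint>
</security-constraint>
```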
The following snippet of a config.xml
file provides an example Jetty configuration; only Jetty-related configuration information is shown:
<config>
    <netio>
        <name>JettyNetIO</name>
        <port>9002</port>
    </netio>
    <work-manager>
        <name>WM</name>
        <max-threads-constraint>64</max-threads-constraint>
        <min-threads-constraint>3</min-threads-constraint>
    </work-manager>
    <jetty>
        <name>TestJetty</name>
        <work-manager-name>WM</work-manager-name>
        <network-io-name>JettyNetIO</network-io-name>
        <debug-enabled>false</debug-enabled>
        <scratch-directory>JettyWork</scratch-directory>
    </jetty>
    <jetty-web-app>
        <name>test</name>
        <context-path>/test</context-path>
        <path>testWebApp.war</path>
        <jetty-name>TestJetty</jetty-name>
    </jetty-web-app>
</config>