6 Tuning WebLogic Server

This chapter describes how to tune WebLogic Server 12.1.3 to match your application needs.

Setting Java Parameters for Starting WebLogic Server

Java parameters must be specified whenever you start WebLogic Server. For simple invocations, this can be done from the command line with the weblogic.Server command. However, because the arguments needed to start WebLogic Server from the command line can be lengthy and prone to error, Oracle recommends that you incorporate the command into a script. To simplify this process, you can modify the default values in the sample scripts that are provided with the WebLogic distribution to start WebLogic Server, as described in "Specifying Java Options for a WebLogic Server Instance" in Administering Server Startup and Shutdown for Oracle WebLogic Server.

If you used the Configuration Wizard to create your domain, the WebLogic startup scripts are located in the domain-name directory where you specified your domain. By default, this directory is ORACLE_HOME\user_projects\domains\domain-name, where ORACLE_HOME is the directory you specified as the Oracle Home when you installed Oracle WebLogic Server, and domain-name is the name of the domain directory defined by the selected configuration template.

You need to modify some default Java values in these scripts to fit your environment and applications. The important performance tuning parameters in these files are the JAVA_HOME parameter and the Java heap size parameters:

  • Change the value of the variable JAVA_HOME to the location of your JDK. For example:

    set JAVA_HOME=myjdk_location
    

    where myjdk_location is the path to your supported JDK for this release. See "Oracle Fusion Middleware Supported System Configurations."

  • For higher performance throughput, set the minimum Java heap size equal to the maximum heap size. For example:

    "%JAVA_HOME%\bin\java" -server –Xms512m –Xmx512m -classpath %CLASSPATH% -
    

See Specifying Heap Size Values for details about setting heap size options.
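If your domain was created with the Configuration Wizard, the generated setDomainEnv script typically honors a USER_MEM_ARGS environment variable that overrides its default memory arguments. As a minimal sketch (assuming your version of the scripts uses this variable; verify against the script itself), the same heap settings could be supplied as:

set USER_MEM_ARGS=-Xms512m -Xmx512m

On UNIX systems, the equivalent is to set and export USER_MEM_ARGS="-Xms512m -Xmx512m" before running the startup script.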

Development vs. Production Mode Default Tuning Values

You can indicate whether a domain is to be used in a development environment or a production environment. WebLogic Server uses different default values for various services depending on the type of environment you specify. Specify the startup mode for your domain as shown in the following table.

Table 6-1 Startup Modes

Development: Choose this mode when you are creating your applications. In this mode, the configuration of security is relatively relaxed, allowing you to auto-deploy applications.

Production: Choose this mode when your application is running in its final form. In this mode, security is fully configured.


The following table lists the performance-related configuration parameters that differ when switching from development to production startup mode.

Table 6-2 Differences Between Development and Production Modes

SSL

In development mode, you can use the demonstration digital certificates and the demonstration keystores provided by the WebLogic Server security services. With these certificates, you can design your application to work within environments secured by SSL. For more information about managing security, see "Configuring SSL" in Securing WebLogic Server.

In production mode, you should not use the demonstration digital certificates and the demonstration keystores. If you do so, a warning message is displayed.

Deploying Applications

In development mode, WebLogic Server instances can automatically deploy and update applications that reside in the domain_name/autodeploy directory (where domain_name is the name of a domain). It is recommended that this method be used only in a single-server development environment. For more information, see "Auto-Deploying Applications in Development Domains" in Deploying Applications to Oracle WebLogic Server.

In production mode, the auto-deployment feature is disabled, so you must use the WebLogic Server Administration Console, the weblogic.Deployer tool, or the WebLogic Scripting Tool (WLST). For more information, see "Understanding WebLogic Server Deployment" in Deploying Applications to Oracle WebLogic Server.

Web Services Test Client

In development mode, the Web Services Test Client is enabled by default.

In production mode, the Web Services Test Client is disabled (and undeployed) by default. See "Enabling and Disabling the Web Services Test Client" in Administering Web Services.


For information on switching the startup mode from development to production, see "Domain Modes" in Administering Server Environments for Oracle WebLogic Server.

Deployment

The following sections provide information on how to improve deployment performance:

On-demand Deployment of Internal Applications

WebLogic Server deploys many internal applications during startup. Many of these internal applications are not needed by every user. You can configure WebLogic Server to wait and deploy these applications on first access (on demand) instead of always deploying them during server startup. This conserves memory and CPU time during deployment, improves startup time, and decreases the base memory footprint for the server. For a development-mode domain, the default is for WebLogic Server to deploy internal applications on demand. For a production-mode domain, the default is for WebLogic Server to deploy internal applications as part of server startup. For more information on how to use and configure this feature, see "On-demand Deployment of Internal Applications" in Deploying Applications to Oracle WebLogic Server.
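If you need to change this behavior, the setting can also be changed at the domain level in the WebLogic Server Administration Console. As a rough sketch only, the same switch can be expressed in config.xml through the domain-level element corresponding to the DomainMBean attribute InternalAppsDeployOnDemandEnabled; the element name below follows the usual MBean-to-config.xml naming convention and should be verified for your release:

<domain>
  <!-- other domain configuration omitted -->
  <internal-apps-deploy-on-demand-enabled>true</internal-apps-deploy-on-demand-enabled>
</domain>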

Use FastSwap Deployment to Minimize Redeployment Time

In development mode, you can set WebLogic Server to redefine Java classes in place without reloading the ClassLoader. This means that you do not have to wait for an application to redeploy and then navigate back to wherever you were in the Web page flow. Instead, you can make your changes, auto compile, and then see the effects immediately. For more information on how to use and configure this feature, see "Using FastSwap Deployment to Minimize Redeployment" in Deploying Applications to Oracle WebLogic Server.

Generic Overrides

Generic overrides allow you to override application-specific property files without having to crack open a JAR file, by placing the application-specific files to be overridden into the optional AppFileOverrides subdirectory. For more information on how to use and configure this feature, see "Generic File Loading Overrides" in Deploying Applications to Oracle WebLogic Server.

Thread Management

WebLogic Server provides the following mechanisms for managing the threads that perform work.

Tuning a Work Manager

In this release, WebLogic Server allows you to configure how your application prioritizes the execution of its work. Based on rules you define and by monitoring actual runtime performance, WebLogic Server can optimize the performance of your application and maintain service level agreements (SLA).

You tune the thread utilization of a server instance by defining rules and constraints for your application in a Work Manager, which you apply either globally to the WebLogic Server domain or to a specific application component. The primary tuning considerations are described in the sections that follow.

See "Using Work Managers to Optimize Scheduled Work" in Administering Server Environments for Oracle WebLogic Server.

How Many Work Managers are Needed?

Each distinct SLA requirement needs a unique work manager.

What are the SLA Requirements for Each Work Manager?

Service level agreement (SLA) requirements are defined by instances of request classes. A request class expresses a scheduling guideline that a server instance uses to allocate threads. See "Understanding Work Managers" in Administering Server Environments for Oracle WebLogic Server.
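As an illustrative sketch, a Work Manager that pairs a fair-share request class with a maximum-threads constraint can be declared in an application's META-INF/weblogic-application.xml (the names and numeric values below are hypothetical; a similar definition can be made globally through the console or config.xml):

<work-manager>
  <name>HighPriorityWM</name>
  <fair-share-request-class>
    <name>HighPriorityRC</name>
    <fair-share>80</fair-share>
  </fair-share-request-class>
  <max-threads-constraint>
    <name>HighPriorityMaxThreads</name>
    <count>25</count>
  </max-threads-constraint>
</work-manager>

Components then reference the Work Manager by name through their dispatch-policy setting.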

Understanding the Differences Between Work Managers and Execute Queues

The easiest way to conceptually visualize the difference between the execute queues of previous releases and work managers is to correlate execute queues (or rather, execute-queue managers) with work managers and decouple the one-to-one relationship between execute queues and thread pools.

For releases prior to WebLogic Server 9.0, incoming requests are put into a default execute queue or a user-defined execute queue. Each execute queue has an associated execute queue manager that controls an exclusive, dedicated thread-pool with a fixed number of threads in it. Requests are added to the queue on a first-come-first-served basis. The execute-queue manager then picks the first request from the queue and an available thread from the associated thread-pool and dispatches the request to be executed by that thread.

For releases of WebLogic Server 9.0 and higher, there is a single priority-based execute queue in the server. Incoming requests are assigned an internal priority based on the configuration of work managers you create to manage the work performed by your applications. The server increases or decreases threads available for the execute queue depending on the demand from the various work-managers. The position of a request in the execute queue is determined by its internal priority:

  • The higher the priority, the closer the request is placed to the head of the execute queue.

  • The closer a request is to the head of the queue, the more quickly it is dispatched to a thread.

Work managers give you better control over thread utilization (and therefore server performance) than execute queues, primarily because of the many ways that you can specify scheduling guidelines for the priority-based thread pool. These scheduling guidelines can be set either as numeric values or as the capacity of a server-managed resource, such as a JDBC connection pool.

Migrating from Previous Releases

If you upgrade application domains from prior releases that contain execute queues, the resulting domain will still contain execute queues.

  • Migrating application domains from a previous release does not automatically convert execute queues to work managers.

  • If execute queues are present in the upgraded application configuration, the server instance assigns work requests appropriately to the execute queue specified in the dispatch-policy.

  • Requests without a dispatch-policy use the self-tuning thread pool.

See "Roadmap for Upgrading Your Application Environment" in Upgrading Oracle WebLogic Server.

Tuning the Stuck Thread Detection Behavior

WebLogic Server automatically detects when a thread in an execute queue becomes "stuck." Because a stuck thread cannot complete its current work or accept new work, the server logs a message each time it diagnoses a stuck thread.

WebLogic Server diagnoses a thread as stuck if it is continually working (not idle) for a set period of time. You can tune a server's thread detection behavior by changing the length of time before a thread is diagnosed as stuck, and by changing the frequency with which the server checks for stuck threads. Although you can change the criteria WebLogic Server uses to determine whether a thread is stuck, you cannot change the default behavior of setting the "warning" and "critical" health states when all threads in a particular execute queue become stuck. For more information, see "Configuring WebLogic Server to Avoid Overload Conditions" in Administering Server Environments for Oracle WebLogic Server. To configure stuck thread detection behavior, see "Tuning execute thread detection behavior" in Oracle WebLogic Server Administration Console Online Help.

Tuning Network I/O

The following sections provide information on network communication between clients and servers (including T3 and IIOP protocols, and their secure versions):

Tuning Muxers

WebLogic Server uses software modules called muxers to read incoming requests on the server and incoming responses on the client. WebLogic Server supports the following muxer types:

Non-Blocking IO Muxer

WebLogic Server provides a non-blocking IO muxer implementation as the default muxer configuration. In the default configuration, MuxerClass is set to weblogic.socket.NIOSocketMuxer.

Other Muxers

Native Muxers and the Java Muxer are not recommended for most environments. If you must enable these muxers, the value of the MuxerClass attribute must be explicitly set:

  • Solaris/HP-UX Native Muxer: weblogic.socket.DevPollSocketMuxer

  • POSIX Native Muxer: weblogic.socket.PosixSocketMuxer

  • Windows Native Muxer: weblogic.socket.NTSocketMuxer

  • Java Muxer: weblogic.socket.JavaSocketMuxer

For example, switching to the native NT Socket Muxer on Windows platforms may improve performance for larger messages/payloads when there is one socket connection to the WebLogic Server instance.

-Dweblogic.MuxerClass=weblogic.socket.NTSocketMuxer

The POSIX Native Muxer provides similar performance improvements for larger messages/payloads in UNIX-like systems that support poll system calls, such as Solaris and HP-UX:

-Dweblogic.MuxerClass=weblogic.socket.PosixSocketMuxer

Native Muxers

Native muxers use platform-specific native binaries to read data from sockets. Most platforms provide some mechanism to poll a socket for data. For example, Unix systems use the poll system call and the Windows architecture uses completion ports. Native muxers provide superior scalability because they implement a non-blocking thread model. When a native muxer is used, the server creates a fixed number of threads dedicated to reading incoming requests.

With native muxers, you may be able to improve throughput for some CPU-bound applications (for example, SpecJAppServer) by using the following:

-Dweblogic.socket.SocketMuxer.DELAY_POLL_WAKEUP=xx

where xx is the amount of time, in microseconds, to delay before checking if data is available. The default value is 0, which corresponds to no delay.

Java Muxer

A Java muxer has the following characteristics:

  • Uses pure Java to read data from sockets.

  • Is the only muxer available for RMI clients.

  • Blocks on reads until there is data to be read from a socket. This behavior does not scale well when there are a large number of sockets and/or when data arrives infrequently at sockets. This is typically not an issue for clients, but it can create a huge bottleneck for a server.

These characteristics may be acceptable if there are a small number of clients and the rate at which requests arrive at the server is fairly high. Under these conditions, the Java muxer performs as well as a native muxer and eliminates Java Native Interface (JNI) overhead. Unlike native muxers, the number of threads used to read requests is not fixed and is tunable for Java muxers by configuring the Percent Socket Readers parameter setting in the WebLogic Server Administration Console. Ideally, you should configure this parameter so the number of threads roughly equals the number of remote concurrently connected clients up to 50 percent of the total thread pool size. Each thread waits for a fixed amount of time for data to become available at a socket. If no data arrives, the thread moves to the next socket.

Network Channels

Network channels, also called network access points, allow you to specify different quality of service (QOS) parameters for network communication. Each network channel is associated with its own exclusive socket using a unique IP address and port. By default, T3 requests from a multi-threaded client are multiplexed over the same remote connection and the server instance reads requests from the socket one at a time. If the request size is large, this becomes a bottleneck.

Although the primary role of a network channel is to control the network traffic for a server instance, you can leverage the ability to create multiple custom channels to allow a multi-threaded client to communicate with a server instance over multiple connections, reducing the potential for a bottleneck. To configure custom multi-channel communication, use the following steps:

  1. Configure multiple network channels using different IP and port settings. See "Configure custom network channels" in Oracle WebLogic Server Administration Console Online Help.

  2. In your client-side code, use a JNDI URL pattern similar to the pattern used in clustered environments. The following is an example for a client using two network channels:

    t3://<ip1>:<port1>,<ip2>:<port2>
    

See "Understanding Network Channels" in Administering Server Environments for Oracle WebLogic Server.

Reducing the Potential for Denial of Service Attacks

To reduce the potential for Denial of Service (DoS) attacks while simultaneously optimizing system availability, WebLogic Server allows you to specify the following settings:

  • Maximum incoming message size

  • Complete message timeout

  • Number of file descriptors (UNIX systems)

For optimal system performance, each of these settings should be appropriate for the particular system that hosts WebLogic Server and should be in balance with each other, as explained in the sections that follow.

Tuning Message Size

WebLogic Server allows you to specify a maximum incoming request size to prevent a server from being bombarded by a series of large requests. You can set a global value or set specific values for different protocols and network channels. Although it does not directly impact performance, JMS applications that aggregate messages before sending to a destination may be refused if the aggregated size is greater than the specified value. See "Servers: Protocols: General" in Oracle WebLogic Server Administration Console Online Help and Tuning Applications Using Unit-of-Order.

Tuning Complete Message Timeout

Make sure that the complete message timeout parameter is configured properly for your system. This parameter sets the maximum number of seconds that a server waits for a complete message to be received.

The default value is 60 seconds, which applies to all connection protocols for the default network channel. This setting might be appropriate if the server has a number of high-latency clients. However, you should tune this to the smallest possible value without compromising system availability.

If you need a complete message timeout setting for a specific protocol, you can alternatively configure a new network channel for that protocol.

For information about displaying the WebLogic Server Administration Console page from which the complete message timeout parameter can be set, see "Configure protocols" in the Oracle WebLogic Server Administration Console Online Help.

Tuning Number of File Descriptors

On UNIX systems, each socket connection to WebLogic Server consumes a file descriptor. To optimize availability, the number of file descriptors for WebLogic Server should be appropriate for the host machine. By default, WebLogic Server configures 1024 file descriptors. However, this setting may be low, particularly for production systems.

Note that when you tune the number of file descriptors for WebLogic Server, your changes should be in balance with any changes made to the complete message timeout parameter. A higher complete message timeout setting results in a socket not closing until the message timeout occurs, which therefore results in a longer hold on the file descriptor. So if the complete message timeout setting is high, the file descriptor limit should also be set high. This balance provides optimal system availability with reduced potential for denial-of-service attacks.

For information about how to tune the number of available file descriptors, consult your UNIX vendor's documentation.
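As a rough illustration on Linux and many UNIX shells (the command syntax, the appropriate limit, and the way to make the change persistent vary by platform, so treat this as a sketch and follow your vendor's documentation):

ulimit -n          # display the current per-process file descriptor limit
ulimit -n 4096     # raise the limit for processes started from this shell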

Tuning Connection Backlog Buffering

You can tune the number of connection requests that a WebLogic Server instance will accept before refusing additional requests. The Accept Backlog parameter specifies how many Transmission Control Protocol (TCP) connections can be buffered in a wait queue. This fixed-size queue is populated with requests for connections that the TCP stack has received, but the application has not accepted yet.

To tune the number of connection requests that a WebLogic Server instance accepts before refusing additional requests, see "Tune connection backlog buffering" in Oracle WebLogic Server Administration Console Online Help.

Tuning Cached Connections

Use the http.keepAliveCache.socketHealthCheckTimeout system property to tune how a socket connection is returned from the cache when keep-alive is enabled with the HTTP 1.1 protocol. By default, the cache does not check the health condition before returning the cached connection to the client for use. Under some conditions, such as an unstable network connection, the system needs to check a connection's health condition before returning it to the client. To enable this behavior (checking the health condition), set http.keepAliveCache.socketHealthCheckTimeout to a value greater than 0.
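For example, the property is supplied as a Java system property on the server start command; the timeout value below is only an illustration, so check the documentation for the expected units and an appropriate value for your environment:

-Dhttp.keepAliveCache.socketHealthCheckTimeout=1000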

Optimize Java Expressions

Set the optimize-java-expression element to optimize Java expressions to improve runtime performance. See jsp-descriptor in Developing Web Applications, Servlets, and JSPs for Oracle WebLogic Server.
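As a minimal sketch, the element is set to true in the jsp-descriptor section of the Web application's weblogic.xml:

<jsp-descriptor>
  <optimize-java-expression>true</optimize-java-expression>
</jsp-descriptor>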

Using WebLogic Server Clusters to Improve Performance

A WebLogic Server cluster is a group of WebLogic Server instances that together provide fail-over and replicated services to support scalable, high-availability operations for clients within a domain. A cluster appears to its clients as a single server but is in fact a group of servers acting as one to provide increased scalability and reliability.

A domain can include multiple WebLogic Server clusters and non-clustered WebLogic Server instances. Clustered WebLogic Server instances within a domain behave similarly to non-clustered instances, except that they provide failover and load balancing. The Administration Server for the domain manages all the configuration parameters for the clustered and non-clustered instances.

For more information about clusters, see "Understanding WebLogic Server Clustering" in Administering Clusters for Oracle WebLogic Server.

Scalability and High Availability

Scalability is the ability of a system to grow in one or more dimensions as more resources are added to the system. Typically, these dimensions include, among other things, the number of concurrent users that can be supported and the number of transactions that can be processed in a given unit of time.

Given a well-designed application, it is entirely possible to increase performance by simply adding more resources. To increase the load handling capabilities of WebLogic Server, add another WebLogic Server instance to your cluster—without changing your application. Clusters provide two key benefits that are not provided by a single server: scalability and availability.

WebLogic Server clusters bring scalability and high-availability to Java EE applications in a way that is transparent to application developers. Scalability expands the capacity of the middle tier beyond that of a single WebLogic Server or a single computer. The only limitation on cluster membership is that all WebLogic Servers must be able to communicate by IP multicast. New WebLogic Servers can be added to a cluster dynamically to increase capacity.

A WebLogic Server cluster guarantees high-availability by using the redundancy of multiple servers to insulate clients from failures. The same service can be provided on multiple servers in a cluster. If one server fails, another can take over. The ability to have a functioning server take over from a failed server increases the availability of the application to clients.

Note:

Provided that you have resolved all application and environment bottleneck issues, adding additional servers to a cluster should provide linear scalability. When doing benchmark or initial configuration test runs, isolate issues in a single server environment before moving to a clustered environment.

Clustering in the Messaging Service is provided through distributed destinations; connection concentrators and connection load balancing (determined by connection factory targeting); and clustered Store-and-Forward (SAF). Client load balancing with respect to distributed destinations is tunable on connection factories. Distributed destination Message Driven Beans (MDBs) that are targeted to the same cluster that hosts the distributed destination automatically deploy only on the cluster servers that host the distributed destination members, and only process messages from their local destination. Distributed queue MDBs that are targeted to a different server or cluster than the host of the distributed destination automatically create consumers for every distributed destination member. For example, each running MDB has a consumer for each distributed destination queue member.

How to Ensure Scalability for WebLogic Clusters

In general, any operation that requires communication between the servers in a cluster is a potential scalability hindrance. The following sections provide information on issues that impact the ability to linearly scale clustered WebLogic servers:

Database Bottlenecks

In many cases where a cluster of WebLogic servers fails to scale, the database is the bottleneck. In such situations, the only solutions are to tune the database or reduce load on the database by exploring other options. See Chapter 8, "Database Tuning" and Chapter 11, "Tuning Data Sources".

Session Replication

User session data can be stored in two standard ways in a Java EE application: stateful session EJBs or HTTP sessions. By themselves, they rarely impact cluster scalability. However, when coupled with the session replication mechanism required to provide high availability, bottlenecks are introduced. If a Java EE application has Web and EJB components, you should store user session data in HTTP sessions:

  • HTTP session management provides more options for handling fail-over, such as replication, a shared DB, or a file.

  • HTTP session management provides superior scalability.

  • Replication of the HTTP session state occurs outside of any transactions. Stateful session bean replication occurs in a transaction, which is more resource intensive.

  • The HTTP session replication mechanism is more sophisticated and provides optimizations for a wider variety of situations than stateful session bean replication.

See Session Management.

Asynchronous HTTP Session Replication

Asynchronous replication of HTTP sessions can be configured in either of the following ways:

Asynchronous HTTP Session Replication using a Secondary Server

Set the PersistentStoreType to async-replicated or async-replicated-if-clustered to specify asynchronous replication of data between a primary server and a secondary server. See the session-descriptor section of Developing Web Applications, Servlets, and JSPs for Oracle WebLogic Server. To tune batched replication, adjust the SessionFlushThreshold parameter.
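For example, a minimal weblogic.xml sketch (check the session-descriptor documentation cited above for the full set of related elements):

<session-descriptor>
  <persistent-store-type>async-replicated</persistent-store-type>
</session-descriptor>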

Replication behavior depends on cluster type. The following table describes how asynchronous replication occurs for a given cluster topology.

Table 6-3 Asynchronous Replication Behavior by Cluster Topology

LAN: Replication to a secondary server within the same cluster occurs asynchronously with the "async-replication" setting in the webapp.

MAN: Replication to a secondary server in a remote cluster occurs asynchronously with the "async-replication" setting in the webapp.

WAN: Replication to a secondary server within the cluster occurs asynchronously with the "async-replication" setting in the webapp. Persistence to a database through a remote cluster happens asynchronously regardless of whether "async-replication" or "replication" is chosen.


The following outlines asynchronous session replication behavior:

  • During undeployment or redeployment:

    • The session is unregistered and removed from the update queue.

    • The session on the secondary server is unregistered.

  • If the application is moved to admin mode, the sessions are flushed and replicated to the secondary server. If the secondary server is down, the system attempts to fail over to another server.

  • A server shutdown or failure state triggers the replication of any batched sessions to minimize the potential loss of session information.

Asynchronous HTTP Session Replication using a Database

Set the PersistentStoreType to async-jdbc to specify asynchronous replication of data to a database. See the session-descriptor section of Developing Web Applications, Servlets, and JSPs for Oracle WebLogic Server. To tune batched replication, adjust the SessionFlushThreshold and the SessionFlushInterval parameters.
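A minimal weblogic.xml sketch; the persistent-store-pool element naming the JDBC data source is carried over from the synchronous JDBC store type and should be verified against the session-descriptor documentation:

<session-descriptor>
  <persistent-store-type>async-jdbc</persistent-store-type>
  <persistent-store-pool>MySessionDataSource</persistent-store-pool>
</session-descriptor>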

The following outlines asynchronous session replication behavior:

  • During undeployment or redeployment:

    • The session is unregistered and removed from the update queue.

    • The session is removed from the database.

  • If the application is moved to admin mode, the sessions are flushed and replicated to the database.

Invalidation of Entity EJBs

This applies to entity EJBs that use a concurrency strategy of Optimistic or ReadOnly with a read-write pattern.

Optimistic: When an Optimistic concurrency bean is updated, the EJB container sends a multicast message to other cluster members to invalidate their local copies of the bean. This is done to avoid optimistic concurrency exceptions being thrown by the other servers, and hence the need to retry transactions. If updates to the EJBs are frequent, the work done by the servers to invalidate each other's local caches becomes a serious bottleneck. A flag called cluster-invalidation-disabled (default false) is used to turn off such invalidations. It is set in the RDBMS deployment descriptor file.
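As a sketch only, the flag is set per bean in the weblogic-cmp-rdbms-jar.xml deployment descriptor; the bean name is hypothetical, and the exact position of the element within weblogic-rdbms-bean should be verified against the EJB documentation:

<weblogic-rdbms-bean>
  <ejb-name>AccountBean</ejb-name>
  <!-- table and field mappings omitted -->
  <cluster-invalidation-disabled>true</cluster-invalidation-disabled>
</weblogic-rdbms-bean>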

ReadOnly with a read-write pattern: In this pattern, persistent data that would otherwise be represented by a single EJB are actually represented by two EJBs: one read-only and the other updatable. When the state of the updatable bean changes, the container automatically invalidates the corresponding read-only EJB instance. If updates to the EJBs are frequent, the work done by the servers to invalidate the read-only EJBs becomes a serious bottleneck.

Invalidation of HTTP sessions

Similar to Invalidation of Entity EJBs, HTTP sessions can also be invalidated. This is not as expensive as entity EJB invalidation, since only the session data stored in the secondary server needs to be invalidated. HTTP sessions should be invalidated if they are no longer in use.

JNDI Binding, Unbinding and Rebinding

In general, JNDI binds, unbinds and rebinds are expensive operations. However, these operations become a bigger bottleneck in clustered environments because JNDI tree changes have to be propagated to all members of a cluster. If such operations are performed too frequently, they can reduce cluster scalability significantly.

Running Multiple Server Instances on Multi-Core Machines

With multi-core machines, additional consideration must be given to the ratio of the number of available cores to clustered WebLogic Server instances. Because WebLogic Server has no built-in limit to the number of server instances that can reside in a cluster, large multi-core machines can potentially host very large clusters or multiple clusters.

Consider the following when determining the optimal ratio of cores to WebLogic server instances:

  • The memory requirements of the application. Choose the heap sizes of the individual instances and the total number of instances to ensure that you are providing sufficient memory for the application and achieving good GC performance. For some applications, allocating very large heaps to a single instance may lead to longer GC pause times. In this case, performance may benefit from increasing the number of instances and giving each instance a smaller heap.

  • Maximizing CPU utilization. While WebLogic Server is capable of utilizing multiple cores per instance, for some applications increasing the number of instances on a given machine (reducing the number of cores per instance) can improve CPU utilization and overall performance.

Improving Cluster Throughput using XA Transaction Cluster Affinity

XA transaction cluster affinity allows server instances that are participating in a global transaction to service related requests rather than load-balancing these requests to other member servers. When Enable Transaction Affinity=true, cluster throughput is increased by:

  • Reducing inter-server transaction coordination traffic

  • Improving resource utilization, such as reducing JDBC connections

  • Simplifying asynchronous processing of transactions

See "Configure clusters" in Oracle WebLogic Server Administration Console Online Help and "XA Transaction Affinity" in Administering Clusters for Oracle WebLogic Server.

Monitoring a WebLogic Server Domain

The following sections provide information on how to monitor WebLogic Server domains:

Using the Administration Console to Monitor WebLogic Server

The tool for monitoring the health and performance of your WebLogic Server domain is the Administration Console. See "Monitor servers" in Oracle WebLogic Server Administration Console Online Help.

Using the WebLogic Diagnostic Framework

The WebLogic Diagnostic Framework (WLDF) is a monitoring and diagnostic framework that defines and implements a set of services that run within WebLogic Server processes and participate in the standard server life cycle. See "Overview of the WLDF Architecture" in Configuring and Using the Diagnostics Framework for Oracle WebLogic Server.

Using JMX to Monitor WebLogic Server

WebLogic Server provides its own set of MBeans that you can use to configure, monitor, and manage WebLogic Server resources. See "Understanding WebLogic Server MBeans" in Developing Custom Management Utilities Using JMX for Oracle WebLogic Server.

Using WLST to Monitor WebLogic Server

The WebLogic Scripting Tool (WLST) is a command-line scripting interface that system administrators and operators use to monitor and manage WebLogic Server instances and domains. See "Understanding WebLogic Server MBeans" in Developing Custom Management Utilities Using JMX for Oracle WebLogic Server.

Resources to Monitor WebLogic Server

The Oracle Technology Network at http://www.oracle.com/technology/index.html provides product downloads, articles, sample code, product documentation, tutorials, white papers, news groups, and other key content for WebLogic Server.

Tuning Class and Resource Loading

The default class and resource loading behavior in WebLogic Server is to search the classloader hierarchy beginning with the root. As a result, the full system classpath is searched for every class or resource loading request, even if the class or resource belongs to the application. For classes and resources that are looked up only once (for example, classloading during deployment), the cost of the full classpath search is typically not a serious problem. For classes and resources that are requested repeatedly by an application at runtime (explicit application calls to loadClass or getResource), the CPU and memory overhead of repeatedly searching a long system and application classpath can be significant. The worst-case scenario is when the requested class or resource is missing. A missing class or resource results in the cost of a full scan of the classpath and is compounded by the fact that if an application fails to find the class or resource, it is likely to request it repeatedly. This problem is more common for resources than for classes.

Ideally, application code is optimized to avoid requests for missing classes and resources and frequent repeated calls to load the same class/resource. While it is not always possible to fix the application code (for example, a third party library), an alternative is to use WebLogic Server's "Filtering Loader Mechanism".

Filtering Loader Mechanism

WebLogic Server provides a filtering loader mechanism that allows the system classpath search to be bypassed when looking for specific application classes and resources that are on the application classpath. This mechanism requires a user configuration that specifies the specific classes and resources that bypass the system classpath search. See "Using a Filtering Classloader" in Developing Applications for Oracle WebLogic Server.
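For comparison with the resource-filtering example that follows, class filtering is declared in META-INF/weblogic-application.xml with the prefer-application-packages element; the package name below is only an illustration:

<prefer-application-packages>
  <package-name>org.apache.log4j.*</package-name>
</prefer-application-packages>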

New for this release is the ability to filter resource loading requests. The basic configuration of resource filtering is specified in the META-INF/weblogic-application.xml file and is similar to class filtering. The syntax for filtering resources is shown in the following example:

<prefer-application-resources>
  <resource-name>x/y</resource-name>
  <resource-name>z*</resource-name>
</prefer-application-resources>

In this example, resource filtering has been configured for the exact resource name "x/y" and for any resource whose name starts with "z". '*' is the only wildcard pattern allowed. Resources with names matching these patterns are searched for only on the application classpath; the system classpath search is skipped.

Note:

If you add a class or resource to the filtering configuration and subsequently get exceptions indicating the class or resource isn't found, the most likely cause is that the class or resource is on the system classpath, not on the application classpath.

Class Caching

WebLogic Server allows you to enable class caching for faster start ups. Once you enable caching, the server records all the classes loaded until a specific criterion is reached and persists the class definitions in an invisible file. When the server restarts, the cache is checked for validity with the existing code sources and the server uses the cache file to bulk load the same sequence of classes recorded in the previous run. If any change is made to the system classpath or its contents, the cache will be invalidated and re-built on server restart.

The advantages of using class caching are:

  • Reduces server startup time.

  • The package level index reduces search time for all classes and resources.

For more information, see Configuring Class Caching in Developing Applications for Oracle WebLogic Server.

Note:

Class caching is supported in development mode when starting the server using a startWebLogic script. Class caching is disabled by default and is not supported in production mode. The decrease in startup time varies among different JRE vendors.
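As a minimal sketch of enabling class caching for a development-mode server started from the domain directory (the CLASS_CACHE environment variable is the switch as best recalled here; confirm it in Configuring Class Caching):

CLASS_CACHE=true      # enable class caching for this server start
export CLASS_CACHE
./startWebLogic.sh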

SSL Considerations

If WebLogic Server is configured with JDK 7, you may find that the out-of-the-box SSL performance is slower than in previous WebLogic Server releases. This performance change is due to the stronger cipher and MAC algorithm used by default when JDK 7 is used with the JSSE-based SSL provider in WebLogic Server. See "SSL Performance Considerations" in Administering Security for Oracle WebLogic Server.