Java parameters must be specified whenever you start WebLogic Server.
For simple invocations, this can be done from the command line with the
weblogic.Server command. However, because the arguments needed to start WebLogic Server from the command line can be lengthy and prone to error, Oracle recommends that you incorporate the command into a script. To simplify this process, you can modify the default values in the sample scripts that are provided with the WebLogic Server distribution, as described in Specifying Java Options for a WebLogic Server Instance in Administering Server Startup and Shutdown for Oracle WebLogic Server.
If you used the Configuration Wizard to create your domain, the WebLogic startup scripts are located in the domain-name directory where you specified your domain. By default, this directory is determined by the following values:
ORACLE_HOME is the directory you specified as the Oracle Home when you installed Oracle WebLogic Server.
domain-name is the name of the domain directory defined by the selected configuration template.
You need to modify some default Java values in these scripts to fit your environment and applications. The important performance tuning parameters in these files are the
JAVA_HOME parameter and the Java heap size parameters:
Change the value of the variable
JAVA_HOME to the location of your
JDK. For example:
where myjdk_location is the path to your supported JDK for this release. See Oracle Fusion Middleware Supported System Configurations.
For higher performance throughput, set the minimum Java heap size equal to the maximum heap size. For example:
"%JAVA_HOME%\bin\java" -server -Xms512m -Xmx512m -classpath %CLASSPATH% ...
See Specifying Heap Size Values for details about setting heap size options.
You can indicate whether a domain is to be used in a development environment or a production environment. WebLogic Server uses different default values for various services depending on the type of environment you specify.
Specify the startup mode for your domain as shown in the following table.
Table 6-1 Startup Modes
|Choose this mode . . .|when . . .|
|Development|You are creating your applications. In this mode, the configuration of security is relatively relaxed, allowing you to auto-deploy applications.|
|Production|Your application is running in its final form. In this mode, security is fully configured.|
|Secured production|Your application is running in its final form and you want rigid policies and configuration to ensure a highly secure environment for your production domain.|
For information about how the security and performance-related configuration parameters differ when switching from one domain mode to another, see How Domain Mode Affects the Default Security Configuration in Securing a Production Environment for Oracle WebLogic Server.
Learn techniques to improve deployment performance.
WebLogic Server deploys many internal applications during startup. Many of these internal applications are not needed by every user. You can configure WebLogic Server to wait and deploy these applications on first access (on-demand) instead of always deploying them during server startup. This conserves memory and CPU time during deployment, improves startup time, and decreases the base memory footprint for the server. For a development-mode domain, the default is for WLS to deploy internal applications on demand. For a production-mode domain, the default is for WLS to deploy internal applications as part of server startup. For more information on how to use and configure this feature, see On-demand Deployment of Internal Applications in Deploying Applications to Oracle WebLogic Server.
In development mode, you can set WebLogic Server to redefine Java classes in place without reloading the ClassLoader. This means that you do not have to wait for an application to redeploy and then navigate back to wherever you were in the Web page flow. Instead, you can make your changes, auto compile, and then see the effects immediately. For more information on how to use and configure this feature, see Using FastSwap Deployment to Minimize Redeployment in Deploying Applications to WebLogic Server.
Generic overrides allow you to override application-specific property files without having to crack open a JAR file, by placing the files to be overridden into the
AppFileOverrides optional subdirectory. For more information on how to use and configure this feature, see Generic File Loading Overrides in Deploying Applications to WebLogic Server.
WebLogic Server provides the following mechanisms to manage threads to perform work.
In this release, WebLogic Server allows you to configure how your application prioritizes the execution of its work. Based on rules you define and by monitoring actual runtime performance, WebLogic Server can optimize the performance of your application and maintain service level agreements (SLA).
You tune the thread utilization of a server instance by defining rules and constraints for your application in a Work Manager and applying it either globally to a WebLogic Server domain or to a specific application component. The primary tuning considerations are:
See Using Work Managers to Optimize Scheduled Work in Administering Server Environments for Oracle WebLogic Server.
The thread pool allocates threads to process requests from service providers and service consumers. The default value of the
selfTuningThreadPoolSizeMax MBean attribute is 400. Depending on the provider and consumer requests, you can increase the pool size to a maximum of 65534.
We recommend that you increase the pool size if:
The service provider and the service consumer share the same WebLogic server.
The number of concurrent requests from the service consumer is greater than the maximum thread pool size of the work manager.
Service consumer requests occupy all the threads from the thread pool, and no thread is available for the service provider to respond to the requests.
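As a sketch, the pool ceiling can be raised in the server's config.xml (the server name is hypothetical, and the self-tuning-thread-pool-size-max element name should be verified against your WebLogic Server release):

```xml
<!-- config.xml fragment (hypothetical server name); raises the
     self-tuning pool maximum from the default 400 to 1000 threads -->
<server>
  <name>myServer</name>
  <self-tuning-thread-pool-size-max>1000</self-tuning-thread-pool-size-max>
</server>
```

Restart the server instance for the new pool size to take effect.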
See Self-Tuning Thread Pool in Administering Server Environments for Oracle WebLogic Server.
Service level agreement (SLA) requirements are defined by instances of request classes. A request class expresses a scheduling guideline that a server instance uses to allocate threads. See Understanding Work Managers in Administering Server Environments for Oracle WebLogic Server.
The easiest way to conceptually visualize the difference between the execute queues of previous releases and work managers is to correlate execute queues (or rather, execute-queue managers) with work managers and decouple the one-to-one relationship between execute queues and thread pools.
For releases prior to WebLogic Server 9.0, incoming requests are put into a default execute queue or a user-defined execute queue. Each execute queue has an associated execute queue manager that controls an exclusive, dedicated thread-pool with a fixed number of threads in it. Requests are added to the queue on a first-come-first-served basis. The execute-queue manager then picks the first request from the queue and an available thread from the associated thread pool and dispatches the request to be executed by that thread.
For releases of WebLogic Server 9.0 and higher, there is a single priority-based execute queue in the server. Incoming requests are assigned an internal priority based on the configuration of work managers you create to manage the work performed by your applications. The server increases or decreases threads available for the execute queue depending on the demand from the various work-managers. The position of a request in the execute queue is determined by its internal priority:
The higher the priority, the closer the request is placed to the head of the execute queue.
The closer a request is to the head of the queue, the sooner it is dispatched to a thread for execution.
Work managers give you better control over thread utilization (and therefore server performance) than execute queues, primarily because of the many ways you can specify scheduling guidelines for the priority-based thread pool. These scheduling guidelines can be set either as numeric values or as the capacity of a server-managed resource, like a JDBC connection pool.
If you upgrade application domains from prior releases that contain execute queues, the resulting 9.x domain will contain execute queues.
Migrating application domains from a previous release to WebLogic Server 9.x does not automatically convert execute queues to work managers.
If execute queues are present in the upgraded application configuration, the server instance assigns work requests appropriately to the execute queue specified in the dispatch-policy.
Requests without a
dispatch-policy use the self-tuning thread pool.
See Roadmap for Upgrading Your Application Environment in Upgrading Oracle WebLogic Server.
WebLogic Server automatically detects when a thread in an execute queue becomes "stuck." Because a stuck thread cannot complete its current work or accept new work, the server logs a message each time it diagnoses a stuck thread.
WebLogic Server diagnoses a thread as stuck if it is continually working (not idle) for a set period of time. You can tune a server's thread detection behavior by changing the length of time before a thread is diagnosed as stuck, and by changing the frequency with which the server checks for stuck threads. Although you can change the criteria WebLogic Server uses to determine whether a thread is stuck, you cannot change the default behavior of setting the "warning" and "critical" health states when all threads in a particular execute queue become stuck. See Configuring WebLogic Server to Avoid Overload Conditions in Administering Server Environments for Oracle WebLogic Server. To configure stuck thread detection behavior, see Tuning execute thread detection behavior in Oracle WebLogic Server Administration Console Online Help.
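As an illustration, both thresholds can be tightened in config.xml (the server name is hypothetical, and the element names are assumed from the ServerMBean StuckThreadMaxTime and StuckThreadTimerInterval attributes; verify them against your release before relying on them):

```xml
<!-- config.xml fragment; flags a thread as stuck after 300 seconds of
     continuous work and checks for stuck threads every 30 seconds -->
<server>
  <name>myServer</name>
  <stuck-thread-max-time>300</stuck-thread-max-time>
  <stuck-thread-timer-interval>30</stuck-thread-timer-interval>
</server>
```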
Learn about network communication between clients and servers (including T3 and IIOP protocols, and their secure versions).
WebLogic Server uses software modules called muxers to read incoming requests on the server and incoming responses on the client. WebLogic Server supports the following muxer types:
WebLogic Server provides a non-blocking IO muxer implementation as the default muxer configuration. In the default configuration,
MuxerClass is set to weblogic.socket.NIOSocketMuxer.
Native muxers are not recommended for most environments. If you must enable these muxers, the value of the
MuxerClass attribute must be explicitly set:
Solaris/HP-UX Native Muxer: weblogic.socket.DevPollSocketMuxer
POSIX Native Muxer: weblogic.socket.PosixSocketMuxer
Windows Native Muxer: weblogic.socket.NTSocketMuxer
For example, switching to the native NT Socket Muxer on Windows platforms may improve performance for larger messages/payloads when there is one socket connection to the WebLogic Server instance.
The POSIX Native Muxer provides similar performance improvements for larger messages/payloads on UNIX-like systems that support the poll system call, such as Solaris and HP-UX.
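A typical way to switch muxers is through the server start options. The following is a sketch, assuming the -Dweblogic.MuxerClass system property and the weblogic.socket.PosixSocketMuxer class name apply to your release; verify both before use:

```shell
# startWebLogic.sh fragment (illustrative); forces the POSIX native muxer
JAVA_OPTIONS="${JAVA_OPTIONS} -Dweblogic.MuxerClass=weblogic.socket.PosixSocketMuxer"
```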
Native muxers use platform-specific native binaries to read data from sockets. Most platforms provide some mechanism to poll a socket for data. For example, Unix systems use the poll system call and the Windows architecture uses completion ports. Native muxers implement a non-blocking thread model. When a native muxer is used, the server creates a fixed number of threads dedicated to reading incoming requests. Prior to WebLogic Server 12.1.2, Oracle recommended using native muxers, which were referred to as performance packs.
For WebLogic Server 12.1.2 and subsequent releases, the non-blocking IO (NIO) muxer is the default and recommended muxer. However, Oracle still provides native muxers as an option for users upgrading from WebLogic Server versions prior to 12.1.2, to maximize consistency of the runtime environment after upgrading. See Enable Native IO in Oracle WebLogic Server Administration Console Help.
With native muxers, you may be able to improve throughput for some CPU-bound applications by setting the following option:
where xx is the amount of time, in microseconds, to delay before checking if data is available. The default value is 0, which corresponds to no delay.
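As a hedged sketch, the option would be passed as a JVM system property at server startup. The property name shown here is an assumption based on commonly documented native-muxer tuning and must be confirmed against your release's tuning guide:

```shell
# Hypothetical property name -- verify against your WLS release.
# Delays each poll by 20 microseconds before checking whether data is available.
JAVA_OPTIONS="${JAVA_OPTIONS} -Dweblogic.socket.SocketMuxer.DELAY_POLL_WAKEUP=20"
```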
The following example lists the native libraries included in a WebLogic Server installation:
[/home/gmcdermo/Oracle/src1221_GA] find wlserver/server/native/ -type f
wlserver/server/native/linux/x86_64/libjmsc.so
wlserver/server/native/linux/x86_64/libcloudstore1.so
wlserver/server/native/linux/x86_64/libwlenv.so
wlserver/server/native/linux/x86_64/libwlfileio3.so
wlserver/server/native/linux/x86_64/libnodemanager.so
wlserver/server/native/linux/x86_64/libweblogicunix1.so
wlserver/server/native/linux/x86_64/libipc1.so
wlserver/server/native/linux/x86_64/libwlrepstore1.so
wlserver/server/native/linux/x86_64/libmuxer.so
wlserver/server/native/linux/x86_64/wlkeytool
wlserver/server/native/linux/x86_64/rs_daemon
wlserver/server/native/linux/x86_64/rs_admin
wlserver/server/native/linux/x86_64/libstackdump.so
wlserver/server/native/linux/x86_64/libmql1.so
The native library supports the following platforms:
oracle.wls.core.app.server.nativelib/template.xml: dest="server/native/solaris/sparc64/libmuxer.so" source="wlserver/server/native/solaris/sparc64/libmuxer.so"
oracle.wls.core.app.server.nativelib/template.xml: dest="server/native/solaris/sparc/libmuxer.so" source="wlserver/server/native/solaris/sparc/libmuxer.so"
oracle.wls.core.app.server.nativelib/template.xml: dest="server/native/solaris/x64/libmuxer.so" source="wlserver/server/native/solaris/x64/libmuxer.so"
oracle.wls.core.app.server.nativelib/template.xml: dest="server/native/solaris/x86/libmuxer.so" source="wlserver/server/native/solaris/x86/libmuxer.so"
oracle.wls.core.app.server.nativelib/template.xml: dest="server/native/linux/s390x/libmuxer.so" source="wlserver/server/native/linux/s390x/libmuxer.so"
oracle.wls.core.app.server.nativelib/template.xml: dest="server/native/linux/ia64/libmuxer.so" source="wlserver/server/native/linux/ia64/libmuxer.so"
oracle.wls.core.app.server.nativelib/template.xml: dest="server/native/aix/ppc64/libmuxer.so" source="wlserver/server/native/aix/ppc64/libmuxer.so"
oracle.wls.core.app.server.nativelib/template.xml: dest="server/native/aix/ppc/libmuxer.so" source="wlserver/server/native/aix/ppc/libmuxer.so"
oracle.wls.core.app.server.nativelib/template.xml: dest="server/native/macosx/libmuxer.jnilib" source="wlserver/server/native/macosx/libmuxer.jnilib"
oracle.wls.core.app.server.nativelib/template.xml: dest="server/native/hpux11/IPF64/libmuxer.so" source="wlserver/server/native/hpux11/IPF64/libmuxer.so"
oracle.wls.core.app.server.tier1nativelib/template.xml: dest="server/native/linux/i686/libmuxer.so" source="wlserver/server/native/linux/i686/libmuxer.so"
oracle.wls.core.app.server.tier1nativelib/template.xml: dest="server/native/linux/x86_64/libmuxer.so" source="wlserver/server/native/linux/x86_64/libmuxer.so"
Network channels, also called network access points, allow you to specify different quality of service (QOS) parameters for network communication. Each network channel is associated with its own exclusive socket using a unique IP address and port. By default, T3 requests from a multi-threaded client are multiplexed over the same remote connection and the server instance reads requests from the socket one at a time. If the request size is large, this becomes a bottleneck.
Although the primary role of a network channel is to control the network traffic for a server instance, you can leverage the ability to create multiple custom channels to allow a multi-threaded client to communicate with a server instance over multiple connections, reducing the potential for a bottleneck. To configure custom multi-channel communication, use the following steps:
See Understanding Network Channels in Administering Server Environments for Oracle WebLogic Server.
To reduce the potential for Denial of Service (DoS) attacks while simultaneously optimizing system availability, WebLogic Server allows you to specify the following settings:
Maximum incoming message size
Complete message timeout
Number of file descriptors (UNIX systems)
For optimal system performance, each of these settings should be appropriate for the particular system that hosts WebLogic Server and should be in balance with each other, as explained in the sections that follow.
WebLogic Server allows you to specify a maximum incoming request size to prevent the server from being bombarded by a series of large requests. You can set a global value or set specific values for different protocols and network channels. Although it does not directly impact performance, JMS applications that aggregate messages before sending them to a destination may be refused if the aggregated size is greater than the specified value. See Servers: Protocols: General in Oracle WebLogic Server Administration Console Online Help and Tuning Applications Using Unit-of-Order.
Make sure that the complete message timeout parameter is configured properly for your system. This parameter sets the maximum number of seconds that a server waits for a complete message to be received.
The default value is 60 seconds, which applies to all connection protocols for the default network channel. This setting might be appropriate if the server has a number of high-latency clients. However, you should tune this to the smallest possible value without compromising system availability.
If you need a complete message timeout setting for a specific protocol, you can alternatively configure a new network channel for that protocol.
For information about displaying the WebLogic Server Administration Console page from which the complete message timeout parameter can be set, see Configure protocols in the Oracle WebLogic Server Administration Console Online Help.
On UNIX systems, each socket connection to WebLogic Server consumes a file descriptor. To optimize availability, the number of file descriptors for WebLogic Server should be appropriate for the host machine. By default, WebLogic Server configures 1024 file descriptors. However, this setting may be low, particularly for production systems.
Note that when you tune the number of file descriptors for WebLogic Server, your changes should be in balance with any changes made to the complete message timeout parameter. A higher complete message timeout setting results in a socket not closing until the message timeout occurs, which therefore results in a longer hold on the file descriptor. So if the complete message timeout setting is high, the file descriptor limit should also be set high. This balance provides optimal system availability with reduced potential for denial-of-service attacks.
For information about how to tune the number of available file descriptors, consult your UNIX vendor's documentation.
You can tune the number of connection requests that a WebLogic Server instance will accept before refusing additional requests. The
Accept Backlog parameter specifies how many Transmission Control Protocol (TCP) connections can be buffered in a wait queue. This fixed-size queue is populated with requests for connections that the TCP stack has received, but the application has not accepted yet.
For details, see Tune connection backlog buffering in Oracle WebLogic Server Administration Console Online Help.
WebLogic Server provides the http.keepAliveCache.socketHealthCheckTimeout system property for tuning how a socket connection is returned from the cache when keep-alive is enabled with the HTTP 1.1 protocol. By default, the cache does not check the health of a connection before returning it to the client for use. Under some conditions, such as an unstable network connection, the system needs to check the connection's health before returning it to the client. To enable this behavior (checking the health condition), set
http.keepAliveCache.socketHealthCheckTimeout to a value greater than 0.
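For instance, the check can be enabled in the server start script. This is a sketch; the property name comes from the text above, but the unit of the value (assumed here to be milliseconds) should be verified against your release:

```shell
# Enables a health check (assumed 2000 ms budget) before a cached
# keep-alive socket is handed back to the client for reuse
JAVA_OPTIONS="${JAVA_OPTIONS} -Dhttp.keepAliveCache.socketHealthCheckTimeout=2000"
```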
If you are running multiple partitions under heavy load on a system with a large number of cores, you might need to modify some of the WLS thread pool settings to improve CPU utilization. Tuning the thread pools can result in increased throughput, reduced response times, and better CPU utilization.
The following guidelines should be considered a starting point. You will likely need to experiment with different values to find optimal settings for your specific work load.
Muxer Threads (Socket Readers)
Muxer threads are responsible for reading incoming network requests and dispatching them to appropriate worker threads. For the native muxer (typically the default muxer), WLS uses a fairly low number of muxer threads (4), but this value might not be sufficient to keep up with the throughput capacity of a high-core-count system.
To increase the number of muxer threads on a WLS server you can:
Set it using
Set it by passing the following JAVA_OPTION on server startup:
To confirm the number of socket readers, look for the following log message at server startup:
Allocating N reader threads
For a starting point, try setting the number of muxer threads to be roughly 20% of the number of system hardware threads.
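Following that guideline, a startup-option sketch might look like the one below. The property name is an assumption (verify it against your WLS release); the value illustrates the 20% rule:

```shell
# For a machine with ~128 hardware threads, ~20% => about 26 reader threads.
# Property name assumed; confirm against your release's documentation.
JAVA_OPTIONS="${JAVA_OPTIONS} -Dweblogic.SocketReaders=26"
```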
The thread pool is responsible for allocating threads to do the work in WLS after requests have been dispatched by the muxer threads. By default, WLS uses a self-tuning pool that generally works well for a variety of work loads. Under constant, heavy load, however, it can be more efficient to tune the thread pool size to better match the core count of the server.
To modify the thread pool settings on a WLS server you can:
Set it using
Set it by passing the following JAVA_OPTIONs on server startup:
For a starting point, try setting the thread pool min and max size to be roughly 80% of the number of system hardware threads.
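Applying that guideline, the pool bounds could be pinned at startup as follows. The property names are assumptions based on commonly documented self-tuning pool options and should be verified against your release:

```shell
# ~80% of 128 hardware threads => about 102; pinning min == max keeps the
# self-tuning pool from resizing under load. Property names assumed.
JAVA_OPTIONS="${JAVA_OPTIONS} -Dweblogic.threadpool.MinPoolSize=102 -Dweblogic.threadpool.MaxPoolSize=102"
```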
To improve performance when Resource Consumption Management (RCM) is enabled with a varying number of partitions, WLS has introduced a system property,
weblogic.work.rcm.perPartitionPoolSize, for tuning the per partition pool size.
When RCM is enabled, the WLS self-tuning thread pool will try to use threads that have previously executed work requests for a partition to perform the next work request for that same partition. WLS maintains a cache of threads for each partition. The default size of this cache is 16 threads. You can configure the cache size using the
weblogic.work.rcm.perPartitionPoolSize system property. When specified, its value should be between 1 and 256, and will be rounded up to the next power of 2. A smaller value reduces memory usage while a larger value increases the chance of finding a cached thread from the same partition for executing the work request.
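For example, the cache size can be set at server startup (the property name comes from the text above; the value shown is illustrative):

```shell
# Doubles the per-partition thread cache from the default 16 to 32.
# Non-power-of-2 values (e.g. 24) would be rounded up to the next power of 2.
JAVA_OPTIONS="${JAVA_OPTIONS} -Dweblogic.work.rcm.perPartitionPoolSize=32"
```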
For additional resource sharing topics, see Configuring Resource Consumption Management in Using WebLogic Server MT.
By default, the queue size for the Work Manager’s maximum threads constraint is 8,192 (8K). During times of high load (when the machine CPU runs at 100% utilization), Work Manager instances may be unable to process messages in the queue quickly enough using this default setting.
In multitenant environments, you may want to increase the queue size, particularly if you anticipate running Managed Servers under high load for long periods of time. You may also need to increase the queue size in response to the following runtime exception:
java.lang.RuntimeException: [WorkManager:002943]Maximum Threads Constraint "ClusterMessaging" queue for work manager "ClusterMessaging" reached maximum capacity of 8,192 elements. Consider setting a larger queue size for the maximum threads constraint.
In the following example, the target is specified by the server name (Server-0), and the queue size is increased to 65,536 (64K).
<max-threads-constraint>
  <name>ClusterMessaging-max</name>
  <target>Server-0</target>
  <count>1</count>
  <queue-size>65536</queue-size>
</max-threads-constraint>
<work-manager>
  <name>ClusterMessaging</name>
  <target>Server-0</target>
  <max-threads-constraint>ClusterMessaging-max</max-threads-constraint>
</work-manager>
You can specify the target using either the server name or the cluster name.
Set the optimize-java-expression element to optimize Java expressions and improve runtime performance. See
jsp-descriptor in Developing Web Applications, Servlets, and JSPs for Oracle WebLogic Server.
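As a sketch, the element is set inside the jsp-descriptor of the application's weblogic.xml deployment descriptor (surrounding elements omitted here for brevity):

```xml
<!-- weblogic.xml fragment; enables Java expression optimization for JSPs -->
<jsp-descriptor>
  <optimize-java-expression>true</optimize-java-expression>
</jsp-descriptor>
```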
A WebLogic Server cluster is a group of WebLogic Server instances that together provide fail-over and replicated services to support scalable high-availability operations for clients within a domain. A cluster appears to its clients as a single server but is in fact a group of servers acting as one to provide increased scalability and reliability.
A domain can include multiple WebLogic Server clusters and non-clustered WebLogic Server instances. Clustered WebLogic Server instances within a domain behave similarly to non-clustered instances, except that they provide failover and load balancing. The Administration Server for the domain manages all the configuration parameters for the clustered and non-clustered instances.
For more information about clusters, see Understanding WebLogic Server Clustering in Administering Clusters for Oracle WebLogic Server.
For information about improving cluster throughput of global transactions, see Improving Throughput Using XA Transaction Cluster Affinity.
Scalability is the ability of a system to grow in one or more dimensions as more resources are added to the system. Typically, these dimensions include (among other things), the number of concurrent users that can be supported and the number of transactions that can be processed in a given unit of time.
Given a well-designed application, it is entirely possible to increase performance by simply adding more resources. To increase the load handling capabilities of WebLogic Server, add another WebLogic Server instance to your cluster—without changing your application. Clusters provide two key benefits that are not provided by a single server: scalability and availability.
WebLogic Server clusters bring scalability and high-availability to Java EE applications in a way that is transparent to application developers. Scalability expands the capacity of the middle tier beyond that of a single WebLogic Server or a single computer. The only limitation on cluster membership is that all WebLogic Servers must be able to communicate by IP multicast. New WebLogic Servers can be added to a cluster dynamically to increase capacity.
A WebLogic Server cluster guarantees high-availability by using the redundancy of multiple servers to insulate clients from failures. The same service can be provided on multiple servers in a cluster. If one server fails, another can take over. The ability to have a functioning server take over from a failed server increases the availability of the application to clients.
Provided that you have resolved all application and environment bottleneck issues, adding additional servers to a cluster should provide linear scalability. When doing benchmark or initial configuration test runs, isolate issues in a single server environment before moving to a clustered environment.
Clustering in the Messaging Service is provided through distributed destinations; connection concentrators, and connection load-balancing (determined by connection factory targeting); and clustered Store-and-Forward (SAF). Client load-balancing with respect to distributed destinations is tunable on connection factories. Distributed destination Message Driven Beans (MDBs) that are targeted to the same cluster that hosts the distributed destination automatically deploy only on cluster servers that host the distributed destination members and only process messages from their local destination. Distributed queue MDBs that are targeted to a different server or cluster than the host of the distributed destination automatically create consumers for every distributed destination member. For example, each running MDB has a consumer for each distributed destination queue member.
In general, any operation that requires communication between the servers in a cluster is a potential scalability hindrance. The following sections provide information on issues that impact the ability to linearly scale clustered WebLogic servers:
User session data can be stored in two standard ways in a Java EE application: stateful session EJBs or HTTP sessions. By themselves, they rarely impact cluster scalability. However, when coupled with the session replication mechanism required to provide high availability, bottlenecks are introduced. If a Java EE application has Web and EJB components, you should store user session data in HTTP sessions:
HTTP session management provides more options for handling fail-over, such as replication, a shared DB or file.
Replication of the HTTP session state occurs outside of any transactions. Stateful session bean replication occurs in a transaction, which is more resource-intensive.
The HTTP session replication mechanism is more sophisticated and provides optimizations for a wider variety of situations than stateful session bean replication.
See Session Management.
You can choose asynchronous replication of HTTP sessions using one of the following options:
Set the PersistentStoreType to async-replicated or async-replicated-if-clustered to specify asynchronous replication of data between a primary server and a secondary server. See session-descriptor section of Developing Web Applications, Servlets, and JSPs for Oracle WebLogic Server. To tune batched replication, adjust the SessionFlushThreshold parameter.
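As a sketch, the store type is configured in the session-descriptor of the application's weblogic.xml (surrounding weblogic-web-app elements omitted):

```xml
<!-- weblogic.xml fragment; enables batched asynchronous in-memory
     replication between the primary and secondary servers -->
<session-descriptor>
  <persistent-store-type>async-replicated</persistent-store-type>
</session-descriptor>
```

For asynchronous replication to a database instead, the same element would carry the async-jdbc value described later in this section.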
Replication behavior depends on cluster type. The following table describes how asynchronous replication occurs for a given cluster topology.
Table 6-2 Asynchronous Replication Behavior by Cluster Topology
Replication to a secondary server within the same cluster occurs asynchronously with the "async-replication" setting in the webapp.
Replication to a secondary server in a remote cluster. This happens asynchronously with the "async-replication" setting in the webapp.
Replication to a secondary server within the cluster happens asynchronously with the "async-replication" setting in the webapp. Persistence to a database through a remote cluster happens asynchronously regardless of whether "async-replication" or "replication" is chosen.
The following section outlines asynchronous replication session behavior:
During undeployment or redeployment:
The session is unregistered and removed from the update queue.
The session on the secondary server is unregistered.
If the application is moved to admin mode, the sessions are flushed and replicated to the secondary server. If the secondary server is down, the system attempts to fail over to another server.
A server shutdown or failure state triggers the replication of any batched sessions to minimize the potential loss of session information.
Set the PersistentStoreType to async-jdbc to specify asynchronous replication of data to a database. See session-descriptor section of Developing Web Applications, Servlets, and JSPs for Oracle WebLogic Server. To tune batched replication, adjust the SessionFlushThreshold and the SessionFlushInterval parameters.
The following section outlines asynchronous replication session behavior:
During undeployment or redeployment:
The session is unregistered and removed from the update queue.
The session is removed from the database.
If the application is moved to admin mode, the sessions are flushed and replicated to the database.
This applies to entity EJBs that use the Optimistic concurrency strategy, or a concurrency strategy of ReadOnly with a read-write pattern.
When an Optimistic concurrency bean is updated, the EJB container sends a multicast message to other cluster members to invalidate their local copies of the bean. This is done to avoid optimistic concurrency exceptions being thrown by the other servers, and hence the need to retry transactions. If updates to the EJBs are frequent, the work done by the servers to invalidate each other's local caches becomes a serious bottleneck. A flag called
cluster-invalidation-disabled (default false) is used to turn off such invalidations. This is set in the
rdbms descriptor file.
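A minimal sketch of this flag in the RDBMS deployment descriptor; the bean name is illustrative, and the exact element placement should be verified against the descriptor schema for your release:

```xml
<!-- weblogic-rdbms-jar.xml (sketch): disable cluster-wide cache invalidation -->
<weblogic-rdbms-jar>
  <weblogic-rdbms-bean>
    <ejb-name>AccountBean</ejb-name> <!-- hypothetical entity bean -->
    <!-- ... other CMP mappings ... -->
    <cluster-invalidation-disabled>true</cluster-invalidation-disabled>
  </weblogic-rdbms-bean>
</weblogic-rdbms-jar>
```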
ReadOnly with a read-write pattern—In this pattern, persistent data that would otherwise be represented by a single EJB is actually represented by two EJBs: one read-only and the other updatable. When the state of the updatable bean changes, the container automatically invalidates the corresponding read-only EJB instance. If updates to the EJBs are frequent, the work done by the servers to invalidate the read-only EJBs becomes a serious bottleneck.
Similar to Invalidation of Entity EJBs, HTTP sessions can also be invalidated. This is not as expensive as entity EJB invalidation, since only the session data stored in the secondary server needs to be invalidated. HTTP sessions should be invalidated if they are no longer in use.
In general, JNDI binds, unbinds and rebinds are expensive operations. However, these operations become a bigger bottleneck in clustered environments because JNDI tree changes have to be propagated to all members of a cluster. If such operations are performed too frequently, they can reduce cluster scalability significantly.
With multi-core machines, additional consideration must be given to the ratio of the number of available cores to clustered WebLogic Server instances. Because WebLogic Server has no built-in limit to the number of server instances that can reside in a cluster, large multi-core servers can potentially host very large clusters or multiple clusters.
Consider the following when determining the optimal ratio of cores to WebLogic Server instances:
The memory requirements of the application. Choose the heap size of each individual instance and the total number of instances to ensure that you provide sufficient memory for the application and achieve good GC performance. For some applications, allocating a very large heap to a single instance may lead to longer GC pause times; in this case, performance may benefit from increasing the number of instances and giving each instance a smaller heap.
Maximizing CPU utilization. While WebLogic Server is capable of utilizing multiple cores per instance, for some applications, increasing the number of instances on a given machine (reducing the number of cores per instance) can improve CPU utilization and overall performance.
Learn several different ways to monitor a WebLogic Server domain.
The tool for monitoring the health and performance of your WebLogic Server domain is the Administration Console. See Monitor servers in Oracle WebLogic Server Administration Console Online Help.
The WebLogic Diagnostic Framework (WLDF) is a monitoring and diagnostic framework that defines and implements a set of services that run within WebLogic Server processes and participate in the standard server life cycle. See Overview of the WLDF Architecture in Configuring and Using the Diagnostics Framework for Oracle WebLogic Server.
WebLogic Server provides its own set of MBeans that you can use to configure, monitor, and manage WebLogic Server resources. See Understanding WebLogic Server MBeans in Developing Custom Management Utilities Using JMX for Oracle WebLogic Server.
The WebLogic Scripting Tool (WLST) is a command-line scripting interface that system administrators and operators use to monitor and manage WebLogic Server instances and domains. See Understanding the WebLogic Scripting Tool.
The Oracle Technology Network at
http://www.oracle.com/technetwork/index.html provides product downloads, articles, sample code, product documentation, tutorials, white papers, news groups, and other key content for WebLogic Server.
The default class and resource loading behavior in WebLogic Server is to search the classloader hierarchy beginning with the root. As a result, the full system
classpath is searched for every class or resource loading request, even if the class or resource belongs to the application.
For classes and resources that are looked up only once (for example, classloading during deployment), the cost of the full
classpath search is typically not a serious problem. For classes and resources that are requested repeatedly by an application at runtime (explicit application calls to
getResource), the CPU and memory overhead of repeatedly searching a long system and application
classpath can be significant. The worst-case scenario is when the requested class or resource is missing. A missing class or resource results in the cost of a full scan of the
classpath, compounded by the fact that an application that fails to find a class or resource is likely to request it repeatedly. This problem is more common for resources than for classes.
Ideally, application code is optimized to avoid requests for missing classes and resources and frequent repeated calls to load the same class/resource. While it is not always possible to fix the application code (for example, a third party library), an alternative is to use WebLogic Server's Filtering Loader Mechanism.
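Where the application code can be changed, one mitigation is to memoize lookups, caching misses as well as hits so that a missing resource triggers only one full classpath scan. A minimal sketch, assuming a single class loader; the class name is illustrative:

```java
import java.net.URL;
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

public final class ResourceCache {
    // Cache both hits and misses: a missing resource is recorded as
    // Optional.empty() so repeated requests skip the classpath scan.
    private static final Map<String, Optional<URL>> CACHE = new ConcurrentHashMap<>();

    public static URL find(String name) {
        return CACHE.computeIfAbsent(name, n ->
                Optional.ofNullable(ResourceCache.class.getClassLoader().getResource(n)))
            .orElse(null);
    }
}
```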
WebLogic Server provides a filtering loader mechanism that allows the system
classpath search to be bypassed when looking for specific application classes and resources that are on the application
classpath. This mechanism requires a user configuration that specifies the specific classes and resources that bypass the system
classpath search. See Using a Filtering Classloader in Developing Applications for Oracle WebLogic Server.
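For classes, the filtering configuration goes in the META-INF/weblogic-application.xml file; the package name below is illustrative:

```xml
<!-- weblogic-application.xml (sketch): classes in these packages are loaded
     only from the application classpath, skipping the system classpath -->
<prefer-application-packages>
  <package-name>org.example.lib.*</package-name>
</prefer-application-packages>
```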
New for this release is the ability to filter resource loading requests. The basic configuration of resource filtering is specified in the
META-INF/weblogic-application.xml file and is similar to class filtering. The syntax for filtering resources is shown in the following example:
<prefer-application-resources>
  <resource-name>x/y</resource-name>
  <resource-name>z*</resource-name>
</prefer-application-resources>
In this example, resource filtering has been configured for the exact resource name "x/y" and for any resource whose name starts with "z". '*' is the only wild card pattern allowed. Resources with names matching these patterns are searched for only on the application
classpath; the system
classpath search is skipped.
If you add a class or resource to the filtering configuration and subsequently get exceptions indicating the class or resource isn't found, the most likely cause is that the class or resource is on the system
classpath, not on the application classpath.
WebLogic Server allows you to enable class caching for faster startups. Once caching is enabled, the server records all the classes loaded until a specific criterion is reached, and persists the class definitions in an invisible file. When the server restarts, the cache is checked for validity against the existing code sources, and the server uses the cache file to bulk load the same sequence of classes recorded in the previous run. If any change is made to the system classpath or its contents, the cache is invalidated and rebuilt on server restart.
The advantages of using class caching are:
Reduces server startup time.
The package level index reduces search time for all classes and resources.
See Configuring Class Caching in Developing Applications for Oracle WebLogic Server.
Class caching is supported in development mode when starting the server using a
startWebLogic script. Class caching is disabled by default and is not supported in production mode. The decrease in startup time varies among different JRE vendors.
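As a sketch, class caching is enabled by exporting an environment variable before running the startup script. This assumes a UNIX domain layout and that the startWebLogic script honors the CLASS_CACHE variable, as described in Configuring Class Caching:

```shell
# Sketch: enable class caching for a development-mode server start.
# Assumes startWebLogic.sh is run from the domain directory (path illustrative).
export CLASS_CACHE=true
# ./startWebLogic.sh
echo "$CLASS_CACHE"   # prints "true"
```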
If WebLogic Server is configured with JDK 7, you may find that out-of-the-box SSL performance is slower than in previous WebLogic Server releases. This performance change is due to the stronger cipher and MAC algorithm used by default when JDK 7 is used with the JSSE-based SSL provider in WebLogic Server.
See SSL Performance Considerations in Administering Security for Oracle WebLogic Server.