The following sections contain guidelines and instructions for configuring a WebLogic Server cluster:
This section summarizes prerequisite tasks and information for setting up a WebLogic Server Cluster.
The information in this section will be most useful to you if you have a basic understanding of the cluster configuration process and how configuration tasks are accomplished.
For information about the configuration facilities available in WebLogic Server and the tasks they support, see Understanding Cluster Configuration.
Determine what cluster architecture best suits your needs. Key architectural decisions include:
To guide these decisions, see Cluster Architectures and Load Balancing in a Cluster.
The architecture you choose affects how you set up your cluster. The cluster architecture may also require that you install or configure other resources, such as load balancers, HTTP servers, and proxy plug-ins.
Your security requirements form the basis for designing the appropriate security topology. For a discussion of several alternative architectures that provide varying levels of application security, see Security Options for Cluster Architectures.
Note: Some network topologies can interfere with multicast communication. If you are deploying a cluster across a WAN, see If Your Cluster Spans Multiple Subnets In a WAN.
Note: Avoid deploying server instances in a cluster across a firewall. For a discussion of the impact of tunneling multicast traffic through a firewall, see Firewalls Can Break Multicast Communication.
Identify the machine or machines where you plan to install WebLogic Server—throughout this section we refer to such machines as “hosts”—and ensure that they have the resources required. WebLogic Server allows you to set up a cluster on a single, non-multihomed machine. This new capability is useful for demonstration or development environments.
Note: Do not install WebLogic Server on machines that have dynamically assigned IP addresses.
BEA WebLogic Server has no built-in limit for the number of server instances that can reside in a cluster. Large, multi-processor servers such as the Sun Microsystems Sun Enterprise 10000 can host very large clusters or multiple clusters.
In most cases, WebLogic Server clusters scale best when deployed with one WebLogic Server instance for every two CPUs. However, as with all capacity planning, you should test the actual deployment with your target Web applications to determine the optimal number and distribution of server instances. See BEA WebLogic Server Performance and Tuning for additional information.
For best socket performance, configure the WebLogic Server host machine to use the native socket reader implementation for your operating system, rather than the pure-Java implementation. To understand why, and for instructions for configuring native sockets or optimizing pure-Java socket communications, see Peer-to-Peer Communication Using IP Sockets.
If you want to demonstrate a WebLogic Server cluster on a single, disconnected Windows machine, you must force Windows to load the TCP/IP stack. By default, Windows does not load the TCP/IP stack if it does not detect a physical network connection.
To force Windows to load the TCP/IP stack, disable the Windows media sensing feature using the instructions in “How to Disable Media Sense for TCP/IP in Windows.”
During the cluster configuration process, you supply addressing information—IP addresses or DNS names, and port numbers—for the server instances in the cluster.
For information on intra-cluster communication, and how it enables load balancing and failover, see WebLogic Server Communication In a Cluster.
When you set up your cluster, you must provide location information for:
Read the sections that follow for an explanation of the information you must provide, and factors that influence the method you use to identify resources.
As you configure a cluster, you can specify address information using either IP addresses or DNS names.
Consider the purpose of the cluster when deciding whether to use DNS names or IP addresses. For production environments, the use of DNS names is generally recommended. The use of IP addresses can result in translation errors if:
You can avoid translation errors by binding the address of an individual server instance to a DNS name. Make sure that a server instance’s DNS name is identical on each side of firewalls in your environment, and do not use a DNS name that is also the name of an NT system on your network.
For more information about using DNS names instead of IP addresses, see Firewall Considerations.
If the internal and external DNS names of a WebLogic Server instance are not identical, use the
ExternalDNSName attribute for the server instance to define the server's external DNS name. Outside the firewall the
ExternalDNSName should translate to external IP address of the server. If clients are accessing WebLogic Server over the default channel and T3, do not set the
ExternalDNSName attribute, even if the internal and external DNS names of a WebLogic Server instance are not identical.
If you identify a server instance’s Listen Address as localhost, non-local processes will not be able to connect to the server instance. Only processes on the machine that hosts the server instance will be able to connect to the server instance. If the server instance must be accessible as localhost (for instance, if you have administrative scripts that connect to localhost), and must also be accessible by remote processes, leave the Listen Address blank. The server instance will determine the address of the machine and listen on it.
Make sure that each configurable resource in your WebLogic Server environment has a unique name. Each domain, server, machine, cluster, JDBC data source, virtual host, or other resource must have a unique name.
Identify the DNS name or IP address and Listen Port of the Administration Server you will use for the cluster.
The Administration Server is the WebLogic Server instance used to configure and manage all the Managed Servers in its domain. When you start a Managed Server, you identify the host and port of its Administration Server.
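For example, with the startManagedWebLogic script that WebLogic Server generates for a domain, you pass the Managed Server name and the Administration Server URL on the command line. This is a sketch only; the server name, host, and port below are placeholder values:

```shell
# Start Managed Server "managed1", identifying its Administration Server.
# "managed1" and adminhost.example.com:7001 are illustrative values.
./startManagedWebLogic.sh managed1 http://adminhost.example.com:7001
```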
Identify the DNS name or IP address of each Managed Server planned for your cluster.
Each Managed Server in a cluster must have a unique combination of address and Listen Port number. Clustered server instances on a single non-multihomed machine can have the same address, but must use a different Listen Port.
Identify the address and port you will dedicate to multicast communications for your cluster. A multicast address is an IP address between 224.0.0.0 and 239.255.255.255.
Note: The default multicast value used by WebLogic Server is 239.192.0.0. You should not use any multicast address with the value x.0.0.1.
Server instances in a cluster communicate with each other using multicast—they use multicast to announce their services, and to issue periodic heartbeats that indicate continued availability.
The multicast address for a cluster should not be used for any purpose other than cluster communications. If the machine where the cluster multicast address exists hosts or is accessed by cluster-external programs that use multicast communication, make sure that those multicast communications use a different port than the cluster multicast port.
Multiple clusters on a network may share a multicast address and multicast port combination if necessary.
If you are setting up the Recommended Multi-Tier Architecture, described in Cluster Architectures, with a firewall between the clusters, you will need two dedicated multicast addresses: one for the presentation (servlet) cluster and one for the object cluster. Using two multicast addresses ensures that the firewall does not interfere with cluster communication.
In a WebLogic Server cluster, the cluster address is used in entity and stateless beans to construct the host name portion of request URLs.
You can explicitly define the cluster address when you configure a cluster; otherwise, WebLogic Server dynamically generates the cluster address for each new request. Allowing WebLogic Server to dynamically generate the cluster address is simplest, in terms of system administration, and is suitable for both development and production environments.
If you do not explicitly define a cluster address when you configure a cluster, then when a clustered server instance receives a remote request, WebLogic Server generates the cluster address dynamically. Each listen address:listen port combination in the cluster address corresponds to the Managed Server and network channel that received the request.
If the request was an SSL request, the listen address:listen port combinations in the cluster address reflect the ListenPort values from the associated SSLMBean instances. For more information, see Configuring WebLogic Server Environments.
If the request was received on a custom network channel, the listen address:listen port combinations in the cluster address reflect the values from the NetworkAccessPointMBean that defines the channel. For more information about network channels in a cluster, see Configuring WebLogic Server Environments.
The number of ListenAddress:ListenPort combinations included in the cluster address is governed by the value of the NumberOfServersInClusterAddress attribute on the ClusterMBean, which is 3 by default.
You can modify the value of NumberOfServersInClusterAddress on the Environments > Clusters > ClusterName > Configuration > General page of the Administration Console.
If fewer Managed Servers are running than the value of NumberOfServersInClusterAddress, the dynamically generated cluster address contains a ListenAddress:ListenPort combination for each of the running Managed Servers.
If more Managed Servers are running than the value of NumberOfServersInClusterAddress, WebLogic Server randomly selects a subset of the available instances—equal to the value of NumberOfServersInClusterAddress—and uses the ListenAddress:ListenPort combinations for those instances to form the cluster address.
The order in which the ListenAddress:ListenPort combinations appear in the cluster address is random—from request to request, the order will vary.
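If you prefer scripting to the Administration Console, the attribute can also be changed with WLST. This is a sketch only; the WLST commands and the NumberOfServersInClusterAddress attribute are documented, but the URL, credentials, and cluster name are placeholders:

```python
# WLST (Jython) sketch: adjust NumberOfServersInClusterAddress on the ClusterMBean.
# The admin URL, credentials, and cluster name are illustrative values.
connect('weblogic', 'welcome1', 't3://adminhost:7001')
edit()
startEdit()
cd('/Clusters/MyCluster')
cmo.setNumberOfServersInClusterAddress(5)  # the default is 3
save()
activate()
disconnect()
```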
If you explicitly define a cluster address for a cluster in a production environment, specify the cluster address as a DNS name that maps to the IP addresses or DNS names of each WebLogic Server instance in the cluster.
If you define the cluster address as a DNS name, the Listen Ports for the cluster members are not specified in the cluster address—it is assumed that each Managed Server in the cluster has the same Listen Port number. Because each server instance in a cluster must have a unique combination of address and Listen Port, if a cluster address is a DNS name, each server instance in the cluster must have a unique address and the same Listen Port number.
When clients obtain an initial JNDI context by supplying the cluster DNS name, weblogic.jndi.WLInitialContextFactory obtains the list of all addresses that are mapped to the DNS name. This list is cached by WebLogic Server instances, and new initial context requests are fulfilled using addresses in the cached list with a round-robin algorithm. If a server instance in the cached list is unavailable, it is removed from the list. The address list is refreshed from the DNS service only if the server instance is unable to reach any address in its cache.
Using a cached list of addresses avoids certain problems with relying on DNS round-robin alone. For example, DNS round-robin continues using all addresses that have been mapped to the domain name, regardless of whether or not the addresses are reachable. By caching the address list, WebLogic Server can remove addresses that are unreachable, so that connection failures aren't repeated with new initial context requests.
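As an illustration, a Java client might obtain its initial context against the cluster DNS name as follows. This is a sketch only: the cluster host name and port are placeholders, and weblogic.jar must be on the client classpath at run time; the factory class name is the one given in the text above.

```java
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class ClusterContextExample {
    public static void main(String[] args) throws NamingException {
        Hashtable<String, String> env = new Hashtable<>();
        // The WebLogic initial context factory named in the text.
        env.put(Context.INITIAL_CONTEXT_FACTORY,
                "weblogic.jndi.WLInitialContextFactory");
        // Cluster DNS name and port are illustrative values.
        env.put(Context.PROVIDER_URL, "t3://mycluster.example.com:7001");
        Context ctx = new InitialContext(env);
        // ... perform lookups against the cluster-wide JNDI tree ...
        ctx.close();
    }
}
```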
Note: The Administration Server should not participate in a cluster. Ensure that the Administration Server's IP address is not included in the cluster-wide DNS name. For more information, see Administration Server Considerations.
If you explicitly define a cluster address for use in development environments, you can use a cluster DNS name for the cluster address, as described in the previous section.
Alternatively, you can define the cluster address as a list that contains the DNS name (or IP address) and Listen Port of each Managed Server in the cluster, as shown in the examples below:
Note that each cluster member has a unique address and port combination.
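As an illustration of such a list, a development cluster address might look like the following; the host names and ports are placeholder values:

```
dnsname1:7001,dnsname2:7001,dnsname3:7001
```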
If your cluster runs on a single, multihomed machine, and each server instance in the cluster uses a different IP address, define the cluster address using a DNS name that maps to the IP addresses of the server instances in the cluster. If you define the cluster address as a DNS name, specify the same Listen Port number for each of the Managed Servers in the cluster.
This section describes how to get a clustered application up and running, from installation of WebLogic Server through initial deployment of application components.
This section lists typical cluster implementation tasks, and highlights key configuration considerations. The exact process you follow is driven by the unique characteristics of your environment and the nature of your application. These tasks are described:
Not every step is required for every cluster implementation. Additional steps may be necessary in some cases.
If you have not already done so, install WebLogic Server. For instructions, see Installing WebLogic Server.
/bea directory to use for all clustered instances.
Note: Do not use a shared filesystem and a single installation to run multiple WebLogic Server instances on separate machines. Using a shared filesystem introduces a single point of contention for the cluster. All server instances must compete to access the filesystem (and possibly to write individual log files). Moreover, should the shared filesystem fail, you might be unable to start clustered server instances.
There are multiple methods of creating a clustered domain. For a list, see Methods of Configuring Clusters.
For instructions to create a cluster using the:
There are multiple methods of starting a cluster—available options include the command line interface, scripts that contain the necessary commands, and Node Manager.
Note: Node Manager eases the process of starting servers, and restarting them after failure.
Note: To use Node Manager, you must first configure a Node Manager process on each machine that hosts Managed Servers in the cluster. See Configure Node Manager.
Regardless of the method you use to start a cluster, start the Administration Server first, then start the Managed Servers in the cluster.
Follow the instructions below to start the cluster from a command shell. Note that each server instance is started in a separate command shell.
Note: After you start a Managed Server, it listens for heartbeats from other running server instances in the cluster. The Managed Server builds its local copy of the cluster-wide JNDI tree, as described in How WebLogic Server Updates the JNDI Tree, and displays status messages when it has synchronized with each running Managed Server in the cluster. The synchronization process can take a minute or so.
Node Manager is a standalone Java program provided with WebLogic Server that is useful for starting a Managed Server that resides on a different machine than its Administration Server. Node Manager also provides features that help increase the availability of Managed Servers in your cluster. For more information, and for instructions to configure and use Node Manager, see Designing and Configuring WebLogic Server Environments.
Follow the instructions in this section to select the load balancing algorithm for EJBs and RMI objects.
Unless you explicitly specify otherwise, WebLogic Server uses the round-robin algorithm as the default load balancing strategy for clustered object stubs. To understand alternative load balancing algorithms, see Load Balancing for EJBs and RMI Objects. To change the default load balancing algorithm:
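One way to change the algorithm is with WLST, by setting the ClusterMBean's DefaultLoadAlgorithm attribute. This is a sketch only; the attribute and its values (round-robin, weight-based, random) are documented, but the URL, credentials, and cluster name below are placeholders:

```python
# WLST (Jython) sketch: set the default load balancing algorithm for a cluster.
# The admin URL, credentials, and cluster name are illustrative values.
connect('weblogic', 'welcome1', 't3://adminhost:7001')
edit()
startEdit()
cd('/Clusters/MyCluster')
cmo.setDefaultLoadAlgorithm('random')  # alternatives: round-robin, weight-based
save()
activate()
disconnect()
```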
You can enable a timeout option when making calls to the ReplicationManager by setting the ReplicationTimeoutEnabled in the ClusterMBean to true.
The timeout value is equal to the multicast heartbeat timeout. Although you can customize the multicast timeout value, the ReplicationManager timeout cannot be changed. This restriction exists because the ReplicationManager timeout does not affect cluster membership. A missing multicast heartbeat causes the member to be removed from the cluster and the timed out ReplicationManager call will choose a new secondary server to connect to.
Note: It is possible that a cluster member will continue to send multicast heartbeats, but will be unable to process replication requests. This could potentially cause an uneven distribution of secondary servers. When this situation occurs, a warning message is recorded in the server logs.
To understand the server affinity support provided by WebLogic Server for JMS, see Load Balancing for JMS.
Load balancers that support passive cookie persistence can use information from the WebLogic Server session cookie to associate a client with the WebLogic Server instance that hosts the session. The session cookie contains a string that the load balancer uses to identify the primary server instance for the session.
For a discussion of external load balancers, session cookie persistence, and the WebLogic Server session cookie, see Load Balancing HTTP Sessions with an External Load Balancer
To configure the load balancer to work with your cluster, use the facilities of the load balancer to define the offset and length of the string constant.
Assuming that the Session ID portion of the session cookie is the default length of 52 bytes, on the load balancer, set:
If your application or environmental requirements dictate that you change the length of the Random Session ID from its default value of 52 bytes, set the string offset on the load balancer accordingly. The string offset must equal the length of the Session ID plus 1 byte for the delimiter character.
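The offset arithmetic is simple; this small Python sketch (the 52-byte default comes from the text above) shows how the load balancer's string offset follows from the Session ID length:

```python
def session_cookie_offset(session_id_length):
    """Load balancer string offset: the Session ID length plus 1 byte
    for the delimiter character that precedes the server identifier."""
    return session_id_length + 1

# With the default 52-byte Session ID, the offset is 53.
print(session_cookie_offset(52))  # prints 53
```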
Note: For vendor-specific instructions for configuring BIG-IP™ load balancers, see Configuring BIG-IP™ Hardware with Clusters.
Refer to the instructions in this section if you wish to load balance servlets and JSPs using a proxy plug-in. A proxy plug-in proxies requests from a web server to WebLogic Server instances in a cluster, and provides load balancing and failover for the proxied HTTP requests.
For information about load balancing using proxy plug-ins, see Load Balancing with a Proxy Plug-in. For information about connection and failover using proxy plug-ins, see Replication and Failover for Servlets and JSPs, and Accessing Clustered Servlets and JSPs Using a Proxy.
Configure the HttpClusterServlet using the instructions in Set Up the HttpClusterServlet.
Note: Each web server that proxies requests to a cluster must have an identically configured plug-in.
To use the HTTP cluster servlet, configure it as the default web application on your proxy server machine, as described in the steps below. For an introduction to web applications, see Developing Web Applications for WebLogic Server.
Create the web.xml deployment descriptor file for the servlet. This file must reside in the \WEB-INF subdirectory of the web application directory. A sample deployment descriptor for the proxy servlet is provided in Sample web.xml. For more information on web.xml, see Developing Web Applications, Servlets, and JSPs for WebLogic Server.
web.xml. The servlet name is
HttpClusterServlet. The servlet class is
web.xml, by defining the <KeyStore> initialization parameters, to use two-way SSL with your own identity certificate and key. If no <KeyStore> is specified in the deployment descriptor, the proxy will assume one-way SSL.
<KeyStore> – The key store location in your Web application.
<KeyStoreType> – The key store type. If it is not defined, the default type will be used instead.
<PrivateKeyAlias> – The private key alias.
<KeyStorePasswordProperties> – A property file in your Web application that defines encrypted passwords to access the key store and private key alias. The file contents look like this:
Use the weblogic.security.Encrypt command-line utility to encrypt the password. For more information on the Encrypt utility, as well as the der2pem utility, see the WebLogic Server Command Reference.
Create <servlet-mapping> stanzas to specify the requests that the servlet will proxy to the cluster, using the <url-pattern> element to identify specific file extensions, for example *.jsp or *.html. Define each pattern in a separate <servlet-mapping> stanza.
You can set the <url-pattern> to “/” to proxy any request that cannot be resolved by WebLogic Server to the remote server instance. If you do so, you must also specifically map the following extensions: *.jsp, *.htm, and *.html, to proxy files ending with those extensions. For an example, see Sample web.xml.
Create the weblogic.xml deployment descriptor file for the servlet. This file must reside in the \WEB-INF subdirectory of the web application directory.
Assign the proxy servlet as the default web application for the Managed Server on the proxy machine by setting the <context-root> element to a forward slash character (/) in the <weblogic-web-app> stanza. For an example, see Sample weblogic.xml.
This section contains a sample deployment descriptor file (web.xml) for the proxy servlet. The sample web.xml defines parameters that specify the location and behavior of the proxy servlet:
The DOCTYPE stanza specifies the DTD used by WebLogic Server to validate web.xml.
The servlet class is packaged in weblogic.jar, in the WL_HOME/server/lib directory. You do not have to specify the servlet’s full directory path in web.xml because weblogic.jar is put in your CLASSPATH when you start WebLogic Server.
The servlet-mapping stanzas specify that the servlet will proxy URLs that end in '/', 'htm', 'html', or 'jsp' to the cluster.
For parameter definitions see Proxy Servlet Deployment Parameters.
<!DOCTYPE web-app PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN" "http://java.sun.com/dtd/web-app_2_3.dtd">
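The descriptor body that follows a DOCTYPE declaration like the one above might look like the following minimal sketch. The HttpClusterServlet class name is the documented one, but the WebLogicCluster host:port pairs are placeholder values you would replace with your own cluster members:

```xml
<web-app>
  <servlet>
    <servlet-name>HttpClusterServlet</servlet-name>
    <servlet-class>weblogic.servlet.proxy.HttpClusterServlet</servlet-class>
    <init-param>
      <param-name>WebLogicCluster</param-name>
      <!-- Placeholder cluster members; replace with your own host:port pairs -->
      <param-value>host1:7001|host2:7001|host3:7001</param-value>
    </init-param>
  </servlet>
  <servlet-mapping>
    <servlet-name>HttpClusterServlet</servlet-name>
    <url-pattern>/</url-pattern>
  </servlet-mapping>
  <servlet-mapping>
    <servlet-name>HttpClusterServlet</servlet-name>
    <url-pattern>*.jsp</url-pattern>
  </servlet-mapping>
</web-app>
```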
This section contains a sample weblogic.xml file. The <context-root> deployment parameter is set to "/". This makes the proxy servlet the default web application for the proxy server.
<!DOCTYPE weblogic-web-app PUBLIC "-//BEA Systems, Inc.//DTD Web Application 9.1//EN" "http://www.bea.com/servers/wls810/dtd/weblogic
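A minimal body for this descriptor, consistent with the description above, might look like the following sketch:

```xml
<weblogic-web-app>
  <!-- "/" makes the proxy servlet the default web application -->
  <context-root>/</context-root>
</weblogic-web-app>
```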
Key parameters for configuring the behavior of the proxy servlet in web.xml are listed in Table 10-1.
The parameters for the proxy servlet are the same as those used to configure WebLogic Server plug-ins for Apache, Microsoft, and Netscape web servers. For a complete list of parameters for configuring the proxy servlet and the plug-ins for third-party web servers, see Using WebLogic Server with Plug-ins.
The syntax for specifying the parameters, and the file where they are specified, is different for the proxy servlet and for each of the plug-ins.
For the proxy servlet, specify the parameters in web.xml, each in its own <init-param> stanza within the <servlet> stanza of web.xml. For example:
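The following sketch shows the stanza shape; DebugConfigInfo is one of the documented plug-in parameters, and the value shown is illustrative:

```xml
<servlet>
  <servlet-name>HttpClusterServlet</servlet-name>
  <!-- One <init-param> stanza per parameter -->
  <init-param>
    <param-name>DebugConfigInfo</param-name>
    <param-value>ON</param-value>
  </init-param>
</servlet>
```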
If set to ON, you can query the
Interval in seconds that the servlet will sleep between attempts to connect to a server instance. Assign a value less than
The number of connection attempts the servlet makes before returning an
Maximum time in seconds that the servlet will attempt to connect to a server instance. Assign a value greater than
String trimmed by the plug-in from the beginning of the original URL, before the request is forwarded to the cluster.
This setting is useful if user authentication is performed on the proxy server—setting
String that the servlet prepends to the original URL, after
Ensure that the applications clients will access via the proxy server are deployed to your cluster. Address client requests to the listen address and listen port of the proxy server.
If you have problems:
To support automatic failover for servlets and JSPs, WebLogic Server replicates HTTP session states in memory. You can further control where secondary states are placed using replication groups. A replication group is a preferred list of clustered instances to be used for storing session state replicas.
If your cluster will host servlets or stateful session EJBs, you may want to create replication groups of WebLogic Server instances to host the session state replicas.
For instructions on how to determine which server instances should participate in each replication group, and to determine each server instance’s preferred replication group, follow the instructions in Using Replication Groups.
Then follow these steps to configure replication groups for each WebLogic Server instance:
WebLogic Server enables you to configure an optional migratable target, which is a special target that can migrate from one server in a cluster to another. As such, a migratable target provides a way to group pinned services that should move together. When the migratable target is migrated, all services hosted by that target are migrated. Pinned services include JMS-related services (e.g., JMS servers, SAF agents, path services, and persistent stores) or the JTA Transaction Recovery Service.
If you want to use a migratable target, configure the target server list before deploying or activating the service in the cluster. If you do not configure a migratable target in the cluster, migratable services can be migrated to any available WebLogic Server instance in the cluster. For more details on migratable targets, see Understanding Migratable Targets In a Cluster.
This section provides instructions for configuring JDBC components using the Administration Console. The choices you make as you configure the JDBC components are reflected in the configuration files for the WebLogic Server domain that contains the cluster.
First, you create the data sources and, optionally, create a multi data source.
Perform these steps to set up a basic data source in a cluster:
Perform these steps to create a clustered multi data source for increased availability, and optionally, load balancing.
Note: Multi data sources are typically used to provide increased availability and load balancing of connections to replicated, synchronized instances of a database. For more information, see JDBC Connections.
You must package applications before you deploy them to WebLogic Server. For more information, see the packaging topic in Developing Applications for WebLogic Server.
Clustered objects in WebLogic Server should be deployed homogeneously. To ensure homogeneous deployment, when you select a target use the cluster name, rather than individual WebLogic Server instances in the cluster.
The console automates deploying replica-aware objects to clusters. When you deploy an application or object to a cluster, the console automatically deploys it to all members of the cluster (whether they are local to the Administration Server machine or they reside on remote machines). For a discussion of application deployment in clustered environments, see Methods of Configuring Clusters. For a broad discussion of deployment topics, see Deploying WebLogic Server Applications.
Note: All server instances in your cluster should be running when you deploy applications to the cluster using the Administration Console.
Deploying an application to an individual server instance, rather than to all cluster members, is called a pinned deployment. Although a pinned deployment targets a specific server instance, all server instances in the cluster must be running during the deployment process.
You can perform a pinned deployment using the Administration Console or from the command line, using weblogic.Deployer.
From a command shell, use the following syntax to target a server instance:
java weblogic.Deployer -activate -name ArchivedEarJar -source C:/MyApps/JarEar.ear -target server1
You can cancel a deployment using the Administration Console or from the command line, using weblogic.Deployer.
From a command shell, use the following syntax to cancel the deployment task ID:
java weblogic.Deployer -adminurl http://admin:7001 -cancel -id tag
In the Administration Console, open the Tasks node to view and to cancel any current deployment tasks.
To view a deployed application in the Administration Console:
To undeploy a deployed application from the WebLogic Server Administration Console:
The sections that follow provide guidelines and instructions for deploying, activating, and migrating migratable services.
The migratable target that you create defines the scope of server instances in the cluster that can potentially host a migratable service. You must deploy or activate a pinned service on one of the server instances listed in the migratable target in order to migrate the service within the target server list at a later time. Use the instructions that follow to deploy a JMS service on a migratable target, or activate the JTA transaction recovery system so that you can migrate it later.
Note: If you did not configure a migratable target, simply deploy the JMS server to any WebLogic Server instance in the cluster; you can then migrate the JMS server to any other server instance in the cluster (no migratable target is used).
Before you begin, create a migratable target for the cluster. Next, deploy JMS-related services to a migratable target, as described in the following topics in the Administration Console Online Help:
The JTA recovery service is automatically started on one of the server instances listed in the migratable target for the cluster; you do not have to deploy the service to a selected server instance.
If you did not configure a JTA migratable target, WebLogic Server activates the service on any available WebLogic Server instance in the cluster. To change the current server instance that hosts the JTA service, use the instructions in Migrating a Pinned Service to a Target Server Instance.
After you have deployed a migratable service, you can use the Administration Console to manually migrate the service to another server instance in the cluster. If you configured a migratable target for the service, you can migrate to any other server instance listed in the migratable target, even if that server instance is not currently running. If you did not configure a migratable target, you can migrate the service to any other server instance in the cluster.
If you migrate a service to a stopped server instance, the server instance will activate the service upon the next startup. If you migrate a service to a running WebLogic Server instance, the migration takes place immediately.
Before you begin, use the instructions in Deploying JMS to a Migratable Target Server Instance to deploy a pinned service to the cluster. Next, migrate the pinned service using the Administration Console by following the appropriate instructions in the Administration Console Online Help:
Here are some additional steps that are not covered in the console help instructions:
Please ensure that server MyServer-1 is NOT running! If the administration server cannot reach server MyServer-1 due to a network partition, inspect the server directly to verify that it is not running. Continue the migration only if MyServer-1 is not running. Cancel the migration if MyServer-1 is running, or if you do not know whether it is running.
If this message is displayed, perform the procedure described in Migrating When the Currently Active Host is Unavailable.
Use this migration procedure if a clustered Managed Server that was the active server for the migratable service crashes or becomes unreachable.
This procedure purges the failed Managed Server’s configuration cache. The purpose of purging the cache is to ensure that, when the failed server instance is once again available, it does not re-deploy a service that you have since migrated to another Managed Server. Purging the cache eliminates the risk that Managed Server which was previously the active host for the service uses local, out-of-date configuration data when it starts up again.
To support automatic failover for servlets and JSPs, WebLogic Server replicates HTTP session states in memory.
|Note:||WebLogic Server can also maintain the HTTP session state of a servlet or JSP using file-based or JDBC-based persistence. For more information on these persistence mechanisms, see Developing Web Applications, Servlets, and JSPs for WebLogic Server.|
In-memory HTTP session state replication is controlled separately for each application you deploy. The parameter that controls it, PersistentStoreType, appears within the session-descriptor element in the WebLogic deployment descriptor file, weblogic.xml, for the application.
To use in-memory HTTP session state replication across server instances in a cluster, set the PersistentStoreType parameter to replicated. The fragment below shows the appropriate XML from weblogic.xml:
<param-name> PersistentStoreType </param-name>
<param-value> replicated </param-value>
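For context, the parameter sits inside a session-param block within the session-descriptor element. The sketch below shows a minimal weblogic.xml; the element nesting follows the 8.1-style deployment descriptor and should be verified against the DTD or schema for your server release:

```xml
<!-- weblogic.xml (sketch): enable in-memory session replication for this web app -->
<weblogic-web-app>
  <session-descriptor>
    <session-param>
      <param-name>PersistentStoreType</param-name>
      <param-value>replicated</param-value>
    </session-param>
  </session-descriptor>
</weblogic-web-app>
```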
The sections below contain useful tips for particular cluster configurations.
For best socket performance, BEA recommends that you use the native socket reader implementation, rather than the pure-Java implementation, on machines that host WebLogic Server instances.
If you must use the pure-Java socket reader implementation for host machines, you can still improve the performance of socket communication by configuring the proper number of socket reader threads for each server instance and client machine.
The sections that follow have instructions on how to configure native socket reader threads for host machines, and how to set the number of reader threads for host and client machines.
To configure a WebLogic Server instance to use the native socket reader threads implementation:
By default, a WebLogic Server instance creates three socket reader threads upon booting. If you determine that your cluster system may utilize more than three sockets during peak periods, increase the number of socket reader threads:
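These server-side settings can also be expressed as attributes on the Server element in config.xml. The attribute names below (NativeIOEnabled, ThreadPoolSize, ThreadPoolPercentSocketReaders) follow the 8.1-era ServerMBean naming and are an assumption here; confirm them against the MBean reference for your release:

```xml
<!-- config.xml (sketch): native I/O on, 15 execute threads,
     one third of them acting as socket readers -->
<Server Name="MyServer-1"
        NativeIOEnabled="true"
        ThreadPoolSize="15"
        ThreadPoolPercentSocketReaders="33"/>
```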
On client machines, you can configure the number of socket reader threads in the Java Virtual Machine (JVM) that runs the client. Specify the socket readers by defining the relevant system properties as -Dname=value options in the Java command line for the client.
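As an illustrative sketch only: the property names below (weblogic.ThreadPoolSize and weblogic.ThreadPoolPercentSocketReaders) are the names used by some WebLogic releases for client-side thread tuning, and the client class name is hypothetical; check the documentation for your release before relying on either:

```
java -Dweblogic.ThreadPoolSize=5 \
     -Dweblogic.ThreadPoolPercentSocketReaders=50 \
     examples.MyClusterClient
```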
If your cluster spans multiple subnets in a WAN, the value of the Multicast Time-To-Live (TTL) parameter for the cluster must be high enough to ensure that routers do not discard multicast packets before they reach their final destination. The Multicast TTL parameter sets the number of network hops a multicast message makes before the packet can be discarded. Configuring the Multicast TTL parameter appropriately reduces the risk of losing the multicast messages that are transmitted among server instances in the cluster.
For more information about planning your network topology to ensure that multicast messages are reliably transmitted, see If Your Cluster Spans Multiple Subnets In a WAN.
To configure the Multicast TTL for a cluster, change the Multicast TTL value in the Multicast tab for the cluster in the Administration Console. The config.xml excerpt below shows a cluster with a Multicast TTL value of three. This value ensures that the cluster’s multicast messages can pass through three routers before being discarded:
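A minimal sketch of such an excerpt follows. The MulticastTTL attribute name matches the 8.1-era ClusterMBean, and the multicast address is an example; treat both as assumptions and verify against your own config.xml:

```xml
<!-- config.xml (sketch): multicast messages survive three router hops -->
<Cluster Name="MyCluster"
         MulticastAddress="237.0.0.1"
         MulticastTTL="3"/>
```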
|Note:||When relying upon the Multicast TTL value, it is important to remember that within a clustered environment it is possible that timestamps across servers may not always be synchronized. This can occur in replicated HTTP sessions and EJBs for example.|
|Note:||When the ClusterDebug flag is enabled, an error is printed to the server log when cluster members’ clocks are not synchronized.|
If multicast storms occur because server instances in a cluster are not processing incoming messages on a timely basis, you can increase the size of multicast buffers. For information on multicast storms, see If Multicast Storms Occur.
TCP/IP kernel parameters can be configured with the UNIX ndd utility. The udp_max_buf parameter controls the size of send and receive buffers (in bytes) for a UDP socket. The appropriate value for udp_max_buf varies from deployment to deployment. If you are experiencing multicast storms, increase the value of udp_max_buf by 32K, and evaluate the effect of this change.
Do not change udp_max_buf unless necessary. Before changing udp_max_buf, read the Sun warning in the “UDP Parameters with Additional Cautions” section in the “TCP/IP Tunable Parameters” chapter of the Solaris Tunable Parameters Reference Manual.
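A sketch of the adjustment, assuming Solaris ndd syntax and an example current value of 262144 bytes (both are assumptions; query your own system as root before changing anything):

```shell
# Query the current value (Solaris, as root):
#   ndd -get /dev/udp udp_max_buf
# Set a new value:
#   ndd -set /dev/udp udp_max_buf 294912

# The 32K increment the text recommends, as arithmetic:
current=262144                 # example current udp_max_buf (bytes)
new=$((current + 32 * 1024))   # increase by 32K
echo "$new"                    # 294912
```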
WebLogic Server allows you to encrypt multicast messages that are sent between clusters. You can enable this option in the Administration Console by navigating to the Environment —> Clusters —> <cluster_name> —> Multicast node, selecting the Advanced options, and checking Enable Multicast Data Encryption.
Only the data portion of the multicast message is encrypted. Information contained in the multicast header is not encrypted.
Configure a Machine Name if:
WebLogic Server uses configured machine names to determine whether two server instances reside on the same physical hardware. Machine names are generally used with machines that host multiple server instances. If you do not define machine names for such installations, each instance is treated as if it resides on its own physical hardware. This can negatively affect the selection of server instances to host secondary HTTP session state replicas, as described in Using Replication Groups.
If your cluster has a multi-tier architecture, see the configuration guidelines in Configuration Considerations for Multi-Tier Architecture.
In its default configuration, WebLogic Server uses client-side cookies to keep track of the primary and secondary server instances that host the client’s servlet session state. If client browsers have disabled cookie usage, WebLogic Server can also keep track of primary and secondary server instances using URL rewriting. With URL rewriting, both locations of the client session state are embedded into the URLs passed between the client and proxy server. To support this feature, you must ensure that URL rewriting is enabled on the WebLogic Server cluster. For instructions on how to enable URL rewriting, see Developing Web Applications for WebLogic Server.
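As a sketch, URL rewriting is typically toggled in the same session-descriptor element of weblogic.xml that controls session persistence. The UrlRewritingEnabled parameter name below follows the 8.1-style descriptor and is an assumption to verify against your release (in many releases it defaults to true):

```xml
<!-- weblogic.xml (sketch): allow session tracking via URL rewriting -->
<weblogic-web-app>
  <session-descriptor>
    <session-param>
      <param-name>UrlRewritingEnabled</param-name>
      <param-value>true</param-value>
    </session-param>
  </session-descriptor>
</weblogic-web-app>
```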