Building Networked Applications

 

This topic includes the following sections:

  Terms and Definitions
  Configuring Networked Applications
  Example: A Network Configuration
  Example: A Network Configuration with Multiple Netgroups
  Running a Networked Application

 


Terms and Definitions

asynchronous connections

Virtual circuits set up to execute independently of each other or asynchronously. An asynchronous connection does not block the processing of working circuits while attempts are being made to reconnect failed circuits. The BEA Tuxedo system BRIDGE allows the use of nonfailing network paths by listening and transferring data using multiple network address endpoints.

failover and failback

Network failover occurs when a redundant unit seamlessly takes over the network load for the primary unit. Some operating system and hardware bundles transparently detect a problem on one network card and automatically replace it with a spare. When the replacement happens quickly enough, application-level TCP virtual circuits see no indication that a fault occurred.

In the BEA WebLogic Enterprise or BEA Tuxedo system, data flows over the highest available priority circuit. If network groups have the same priority, data travels over all networks simultaneously. If all circuits at the current priority fail, data is sent over the next lower priority circuit. This is called failover.

When a higher priority circuit becomes available, the data flow is shifted to flow over the higher priority circuit. This is called failback.

When a failover condition is detected, all higher priority circuits are retried periodically. After connections to all network addresses have been tried and failed, connections are tried again the next time data needs to be sent between machines.

multiple listening addresses

Having addresses available on separate networks means that even if one virtual circuit is disrupted, the other circuit can continue undisturbed. Only a failure on all configured networks makes reconnection of the BRIDGES impossible. For example, when a high priority network fails, its load can be switched to an alternate network that has a lower priority. When the higher priority network returns to service, the network load returns to it.

parallel data circuits

Parallel data circuits enable data to flow simultaneously on more than one circuit. When you configure parallel data circuits, network traffic is scheduled over the circuit with the largest network group number (NETGRPNO). When this circuit is busy, the traffic is scheduled automatically over the circuit with the next lower network group number. When all circuits are busy, data is queued until a circuit is available.

Note: Alternate scheduling algorithms may be introduced in future releases.

 


Configuring Networked Applications

To configure a networked application, make the following changes in the configuration file; a minimal sketch follows this list.

  1. Check the following settings in the RESOURCES section:

  2. Check the following settings in the MACHINES section:

  3. Check the following settings in the NETGROUPS section:

  4. Check the following settings in the NETWORK section:
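The specific parameters depend on your application. As a point of reference only, the following is a minimal sketch of a two-machine UBBCONFIG file showing where the settings for steps 1 through 4 typically appear; the machine names, directories, IPCKEY, and network addresses are placeholders, not values recommended by this guide.

*RESOURCES
IPCKEY     39211
MASTER     SITE1
MODEL      MP                 # multiple-machine (networked) configuration
OPTIONS    LAN                # machines communicate over a local or wide area network
LDBAL      Y

*MACHINES
mach1      LMID=SITE1
           TUXCONFIG="/home/apps/bankapp/tuxconfig"
           TUXDIR="/opt/tuxedo"
           APPDIR="/home/apps/bankapp"
mach2      LMID=SITE2
           TUXCONFIG="/home/apps/bankapp/tuxconfig"
           TUXDIR="/opt/tuxedo"
           APPDIR="/home/apps/bankapp"

*NETGROUPS
DEFAULTNET NETGRPNO=0  NETPRIO=100

*NETWORK
SITE1      NETGROUP=DEFAULTNET   NADDR="//mach1:5723"   NLSADDR="//mach1:5724"
SITE2      NETGROUP=DEFAULTNET   NADDR="//mach2:5723"   NLSADDR="//mach2:5724"

After editing the text configuration file, compile it into the binary TUXCONFIG file with tmloadcf(1) before booting.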

 


Example: A Network Configuration

The following example illustrates the configuration of a simple network:

# The following configuration file excerpt shows a NETWORK
# section for a 2-site configuration.
*NETWORK
SITE1   NADDR="//mach1:80952"
        NLSADDR="//mach1:serve"
SITE2   NADDR="//mach386:80952"
        NLSADDR="//mach386:serve"

 


Example: A Network Configuration with Multiple Netgroups

The hypothetical First State Bank has a network of five machines (A through E). It best serves the bank's business interests to have four network groups and to have each machine belong to two or three of the four groups.

Note: Configuration of multiple NETGROUPS has both hardware and system software prerequisites that are beyond the scope of this document. For example, NETGROUPS configurations commonly require machines with more than one directly attached network. Each TCP/IP symbolic address must be identified in the /etc/hosts file or in DNS (the Domain Name System). In the example that follows, addresses in the form "//A_CORPORATE:5345" assume that the string "A_CORPORATE" is in the /etc/hosts file or in DNS.

The four groups in the First State Bank example are DEFAULTNET (the corporate WAN), MAGENTA_GROUP, GREEN_GROUP, and BLUE_GROUP.

All machines belong to DEFAULTNET (the corporate WAN). In addition, each machine is associated with either the MAGENTA_GROUP or the BLUE_GROUP. Finally, some machines in the MAGENTA_GROUP also belong to the GREEN_GROUP. Figure 6-1 illustrates group assignments for the network.

Figure 6-1 Example of a Network Grouping

In this example, machines A and B have addresses for the corporate WAN (DEFAULTNET), the MAGENTA_GROUP LAN, and the GREEN_GROUP LAN.

Machine C has addresses for the corporate WAN (DEFAULTNET) and the MAGENTA_GROUP LAN.

Machines D and E have addresses for the corporate WAN (DEFAULTNET) and the BLUE_GROUP LAN.

Because the local area networks are not routed among the locations, machine D (in the BLUE_GROUP LAN) may contact machine A (in the GREEN_GROUP LAN) only by using the single address they have in common: the corporate WAN network address.

The UBBCONFIG File for the Network Example

To set up the configuration described in the preceding section, the First State Bank administrator defined each group in the NETGROUPS and NETWORK sections of the UBBCONFIG file as follows:

*NETGROUPS

DEFAULTNET      NETGRPNO = 0      NETPRIO = 100   # default
BLUE_GROUP      NETGRPNO = 9      NETPRIO = 100
MAGENTA_GROUP   NETGRPNO = 125    NETPRIO = 200
GREEN_GROUP     NETGRPNO = 13     NETPRIO = 200

*NETWORK

A   NETGROUP=DEFAULTNET       NADDR="//A_CORPORATE:5723"
A   NETGROUP=MAGENTA_GROUP    NADDR="//A_MAGENTA:5724"
A   NETGROUP=GREEN_GROUP      NADDR="//A_GREEN:5725"

B   NETGROUP=DEFAULTNET       NADDR="//B_CORPORATE:5723"
B   NETGROUP=MAGENTA_GROUP    NADDR="//B_MAGENTA:5724"
B   NETGROUP=GREEN_GROUP      NADDR="//B_GREEN:5725"

C   NETGROUP=DEFAULTNET       NADDR="//C_CORPORATE:5723"
C   NETGROUP=MAGENTA_GROUP    NADDR="//C_MAGENTA:5724"

D   NETGROUP=DEFAULTNET       NADDR="//D_CORPORATE:5723"
D   NETGROUP=BLUE_GROUP       NADDR="//D_BLUE:5726"

E   NETGROUP=DEFAULTNET       NADDR="//E_CORPORATE:5723"
E   NETGROUP=BLUE_GROUP       NADDR="//E_BLUE:5726"

Assigning Priorities for Each Network Group

Appropriately assigning priorities for each NETGROUP enables you to maximize the capability of network BRIDGE processes. When determining your NETGROUP priorities, keep in mind the following considerations:

Figure 6-2 illustrates how the First State Bank administrator can assign priorities to the network groups.

Figure 6-2 Assigning Priorities to Network Groups

The UBBCONFIG Example Considerations

You can specify the value of NETPRIO for DEFAULTNET just as you do for any other netgroup. If you do not specify a NETPRIO for DEFAULTNET, a default of 100 is used, as in the following example:

*NETGROUPS
DEFAULTNET      NETGRPNO = 0      NETPRIO = 100

For DEFAULTNET, the value of the network group number (NETGRPNO) must be zero; any other number is invalid. If BLUE_GROUP's NETPRIO entry were commented out, its priority would default to 100. Each network group number must be unique. Network priority values may be equal, as in the case of MAGENTA_GROUP and GREEN_GROUP (200).

Each network address is associated by default with the network group DEFAULTNET. The netgroup may be specified explicitly, either for uniformity or to associate the network address with another netgroup, as in the following entry:

*NETWORK
D NETGROUP=BLUE_GROUP NADDR="//D_BLUE:5726"

In this case, MAGENTA_GROUP and GREEN_GROUP have the same network priority of 200. Note that a lower priority network, such as DEFAULTNET, could be a charge-per-minute satellite link.

 


Running a Networked Application

For the most part, the work of running a BEA WebLogic Enterprise or BEA Tuxedo networked application takes place in the configuration phase. Once you have defined the network for an application and you have booted the system, the software automatically takes care of running the network for you.

In this section, we discuss some aspects of running a networked application to give you a better understanding of how the software works. Knowledge of how the software works can often make configuration decisions easier.

Scheduling Network Data Over Parallel Data Circuits

If you have configured a networked application that uses parallel data circuits, scheduling network data proceeds as follows:

Figure 6-3 is a flow diagram that illustrates how the BRIDGE processes data by priority.

Figure 6-3 Flow of Data over the BRIDGE

Figure 6-3 illustrates the flow of data when machine A attempts to contact machine B. First, the BRIDGE determines which network groups are common to both machine A and machine B. They are the MAGENTA_GROUP, the GREEN_GROUP, and the DEFAULTNET.

The highest priority network addresses originate from the network groups with the highest network priority. Network groups with the same NETPRIO value carry network data in parallel. All network groups with a higher priority than the groups currently carrying data are retried periodically.

Once network connections with different NETPRIO values have been established, no further data is scheduled for the lower priority connection, and that connection is disconnected in an orderly fashion.

Network Data in Failover and Failback

Data flows over the highest available priority circuit. If network groups have the same priority, data travels over all networks simultaneously. If all circuits at the current priority fail, data is sent over the next lower priority circuit. This is called failover.

When a higher priority circuit becomes available, data flow is restored to the higher priority circuit. This is called failback.

All unavailable higher priority circuits are retried periodically. After connections to all network addresses have been tried and have failed, connections are tried again the next time data needs to be sent between machines.

Using Data Compression for Network Data

When data is sent between processes of an application, you can elect to have it compressed. Several aspects of data compression are described in the sections that follow.

Taking Advantage of Data Compression

Data compression is useful in most applications and is in fact vital to supporting large configurations. Following is a list of recommendations for when to use data compression and for how the limits should be set.

When should I set remote data compression and what setting should be used?

You should always use remote data compression as long as all of your sites are running BEA Tuxedo Release 4.2.1 or later. The setting used depends on the speed of your network. In general, you can separate the decision into high-speed (for example, Ethernet) and low-speed (for example, X.25) networks.

High-speed networks. Set remote data compression to the lowest limit for BEA WebLogic Enterprise or BEA Tuxedo generated file transfers (see note below on file transfers). That is, compress only the messages that are large enough to be candidates for file transfer either on the sending site or on the receiving site. Note that each machine in an application may have a different limit and the lowest limit should be chosen.

Low-speed networks. Set remote data compression to zero on all machines; that is, compress all application and system messages.

When should I set local data compression and what setting should be used?

You should always set local data compression for sites running BEA Tuxedo Release 4.2.1 or later, even if they are interoperating with pre-4.2.1 sites. The setting should be the local limit for file transfers generated by the BEA Tuxedo system (see note below). This setting enables you to avoid file transfers in many cases that might otherwise have required a transfer, and greatly reduces the size of files used if file transfers are still necessary.

Note: For high-traffic applications that involve a large volume of timeouts and discarding of messages due to queue blocking, you may want to set local compression to always occur, thus lowering the demand of the application on the queuing subsystem.
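The remote and local thresholds described above are expressed with the CMPLIMIT parameter in the MACHINES section of the configuration file (see ubbconfig(5) for the exact syntax in your release). The following excerpt is a sketch only; the 8192-byte remote threshold and the local threshold of zero are assumed values, not recommendations from this guide.

*MACHINES
mach1      LMID=SITE1
           CMPLIMIT="8192,0"   # compress remote messages larger than 8192 bytes;
                               # a local limit of 0 compresses all local messages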

Setting the Compression Level

An environment variable, TMCMPPRFM, can be used to set the level of compression. This variable adds further control to data compression by allowing you to go beyond the simple choice of "compress or do not compress" that is provided by CMPLIMIT. You can specify any of nine levels of compression. The TMCMPPRFM environment variable takes as its value a single digit in the range of 1 through 9. A value of 1 specifies the lowest level of compression; 9 is the highest. A low number makes the compression routine run faster, at the cost of less thorough compression. (See tuxenv(5) in the BEA Tuxedo Reference Manual for details.)
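For example, to favor compression speed over compression ratio, you might set the variable in a Bourne-style shell before booting the application; the value 1 here is only illustrative:

TMCMPPRFM=1; export TMCMPPRFM   # 1 = fastest, least compression; 9 = slowest, most compression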

Balancing Network Request Loads

If load balancing is on (LDBAL set to Y in the RESOURCES section of the configuration file), the BEA WebLogic Enterprise or BEA Tuxedo system attempts to balance requests across the network. Because load information is not updated globally, each site will have its own view of the load at remote sites. This means the local site views will not all be the same.

The TMNETLOAD environment variable (or the NETLOAD parameter in the MACHINES section) can be used to force more requests to be sent to local queues. The value expressed by this variable is added to the remote values to make them appear to have more work. This means that load balancing can be on, but that local requests will be sent to local queues more often.
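For example, the variable can be set in the environment of a machine's processes before booting; the value 200 below is arbitrary:

TMNETLOAD=200; export TMNETLOAD   # remote queues appear 200 load units busier than they actually are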

NETLOAD

The NETLOAD parameter affects the load balancing behavior of a system when a service is available on both local and remote machines. NETLOAD is a numeric value (of arbitrary units) that is added to the load factor of services remote from the invoking client. This provides a bias for choosing a local server over a remote server.

As an example, assume servers A and B offer a service with load factor 50. Server A is running on the same machine as the calling client (local), and server B is running on a different machine (remote). If NETLOAD is set to 100, approximately three requests will be sent to A for every one sent to B.
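The NETLOAD parameter is set per machine in the MACHINES section of the configuration file. The following sketch uses placeholder names and repeats the arithmetic from the example above:

*MACHINES
mach1      LMID=SITE1
           NETLOAD=100   # local load: 50; apparent remote load: 50 + 100 = 150,
                         # so roughly three requests stay local for each one sent remotely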

Another enhancement to load balancing is local idle server preference. Requests are preferentially sent to a server on the same machine as the client, assuming it offers the desired service and is idle. This decision overrides any load balancing considerations, since the local server is known to be immediately available.

SPINCOUNT

SPINCOUNT determines the number of times a process tries to obtain the shared memory latch before it stops spinning and waits for the latch to be released. Setting SPINCOUNT to a value greater than 1 gives the process holding the latch time to finish its work while the waiting process is still spinning.
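SPINCOUNT is likewise set per machine in the MACHINES section; the value below is illustrative only, and an appropriate setting depends on the number of processors and on how heavily the application contends for the bulletin board.

*MACHINES
mach1      LMID=SITE1
           SPINCOUNT=5000   # retry the bulletin board latch up to 5000 times before waiting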

Using Link-level Encryption (BEA Tuxedo Servers)

Note: This section is specific to BEA Tuxedo servers; however, see the note below for benefits to BEA WebLogic Enterprise servers.

Link-level encryption (LLE) is the encryption of messages going across network links. This functionality is provided in the BEA Tuxedo system Security Package, which is offered in two versions: U.S./Canada and International. The difference between the two versions consists solely in the number of bits of the 128-bit encryption key that remain private. The U.S./Canada version has a key length of 128 bits; the International version now has an effective key length of 56 bits.

The Security Package allows encryption of data that flows over BEA Tuxedo system network links. The objective is to ensure data privacy, so a network-based eavesdropper cannot learn the content of BEA Tuxedo system messages or application-generated messages.

Link-level encryption applies to the following types of BEA Tuxedo links:

How LLE Works

Link-level encryption control parameters and underlying communication protocols are different for various link types, but there are some common themes, as follows:

Encryption Key Size Negotiation

The first step in negotiating the key size is for the two processes to agree on the largest common key size supported by both. This negotiation need not be encrypted or hidden.

Once encryption is negotiated, it remains in effect for the lifetime of the network connection.

A preprocessing step temporarily reduces the configured maximum key size parameter to agree with the capabilities of the installed software. This must be done at link negotiation time, because at configuration time it may not be possible to verify a particular machine's installed encryption package. For example, the administrator may configure (0, 128) encryption for an unbooted machine that has only a 40-bit encryption package installed. When the machine actually negotiates a key size, it should represent itself as (0, 40). In some cases this may cause a run-time error; for example, (128, 128) is not possible with a 40-bit encryption package.

In some cases, the international link-level encryption is upgraded automatically from 40 bits to 56 bits. The encryption strength upgrade requires that both sides of a network connection be running BEA Tuxedo Release 6.5 software, with the optional U.S./Canada or International Encryption Security Add-on Package installed. You can verify a server machine's encryption package by running the tmadmin -v command. Both machines must also be configured to accept 40-bit encryption. When these conditions are met, the encryption strength is upgraded automatically to 56 bits.
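For example, running the following command on each machine reports the installed BEA Tuxedo version and encryption package (the exact output varies by release):

tmadmin -v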

Table 6-1 shows the outcome for all possible combinations of min/max parameters.

Table 6-1 Encryption Key Matrix (Interprocess Negotiation Results)

             (0,0)     (0,40)    (0,128)   (40,40)   (40,128)  (128,128)
(0,0)        0         0         0         ERROR     ERROR     ERROR
(0,40)       0         56        56        56        56        ERROR
(0,128)      0         56        128       56        128       128
(40,40)      ERROR     56        56        56        56        ERROR
(40,128)     ERROR     56        128       56        128       128
(128,128)    ERROR     ERROR     128       ERROR     128       128

Note: In Table 6-1, cells that show 56 are the result of an automatic upgrade from 40-bit to 56-bit encryption when both machines are running BEA Tuxedo Release 6.5. When communicating with an older release, encryption in those cases remains at 40-bit strength.

MINENCRYPTBITS/MAXENCRYPTBITS

When a network link is established to the machine identified by the LMID for the current entry, the MIN and MAX parameters are used to specify the number of significant bits of the encryption key. MINENCRYPTBITS says, in effect, "at least this number of bits are meaningful." MAXENCRYPTBITS, on the other hand, says, "encryption should be negotiated up to this level." The possible values are 0, 40, and 128. A value of zero means no encryption is used, while 40 and 128 specify the number of significant bits in the encryption key.

The BEA Tuxedo system U.S./Canada security package permits use of up to 128 bits; the International package allows specification of no more than 56 bits.
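These parameters are set in the NETWORK section entry for an LMID (see ubbconfig(5) for the exact syntax in your release). The following sketch uses a placeholder address and requests at least 40-bit encryption while allowing negotiation up to 128 bits:

*NETWORK
SITE1      NADDR="//mach1:5723"
           MINENCRYPTBITS=40
           MAXENCRYPTBITS=128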

How to Change Network Configuration Parameters

Use tmconfig(1) to change configuration parameters while the application is running. In effect, tmconfig is a shell-level interface to the BEA Tuxedo system Management Information Base (MIB). See the tmconfig(1), MIB(5), and TM_MIB(5) reference pages in the BEA Tuxedo Reference Manual.
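A minimal sketch of starting a tmconfig session follows; the TUXCONFIG path is a placeholder. tmconfig prompts interactively for the section to edit, the operation to perform, and the field values to change.

TUXCONFIG=/home/apps/bankapp/tuxconfig; export TUXCONFIG
tmconfig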