

Chapter 2 Planning Your Environment

This chapter explains how to fit Netscape Application Server (NAS) into your overall enterprise. Planning your environment is one of the first phases of deployment. You may realize during this process that you must change the structure and components of your network to accommodate your NAS performance needs. Or, if your network structure is not flexible at this time, use the environment planning process to determine how you can best deploy NAS to fit within the existing network setup while maximizing server performance.

This chapter contains the following sections:


Establishing Performance Goals
Before beginning any discussion on how to deploy Netscape Application Server (NAS), make sure you understand your performance goals and what you want to achieve when you integrate NAS into your enterprise. As explained in Chapter 1, "Overview of Netscape Application Server Deployment," one of your main goals in deployment is to maximize performance. This translates into maximizing throughput and reducing response time. You will make decisions about your network configuration and how NAS fits into it based on your expectations of throughput and response time.

Maximizing Throughput
Throughput refers to capacity, or the number of requests per minute that your system can process. As explained in Chapter 1, "Overview of Netscape Application Server Deployment," a request consists of a single user's request for data, and the return of that data by the server. The request makes a round trip, from the user submitting the request, to the server, and then back from the server returning the result of that request to the user.

A simple example would be a shopping cart application, in which the user must click OK on a web page to submit a request to purchase an item. The result of this click, the purchase being processed, is considered a single request. Another example is a 401(k) application, where a user clicks OK to request a snapshot of an account balance, and the snapshot is returned to the user. A request is not a running application, but rather a transaction generated by an application to read or write data.

Improving throughput means increasing the number of requests per minute that can be handled by the server. A lower throughput capacity means that some or even many users are unable to process their transactions immediately, causing them to wait longer to obtain request results.

When planning your network environment and determining how NAS fits into your overall enterprise, consider what you can do to your network to increase the number of requests per minute that the system can handle.

Improving Response Time
Response time refers to the number of seconds it takes for request results to be returned to the user. Consider the 401(k) application example provided in "Maximizing Throughput." When a user requests a 401(k) account balance on a web page, ideally the information should be displayed on the page within a few milliseconds from the moment the user clicks OK. However, if performance is not optimal, the user may have to wait several seconds, perhaps even minutes, for the account balance to appear. Response time is discussed in detail in Chapter 3, "Determining System Capacity," but when considering how to integrate Netscape Application Server into your overall network, think about what you can do to your network to improve the average response time of user requests.


Assessing Your Overall Network Configuration
When planning how to integrate Netscape Application Server (NAS) into your network so that performance is maximized, consider the following areas:

Assessing Network Components
Network components affect the performance of your NAS system. As you decide on the desired size and bandwidth of each, first determine your network traffic and identify its peak. See whether there is a particular hour, day of the week, or day of the month in which overall volume peaks, and then determine the duration of that peak. Consider the additional traffic that adding NAS to your overall network will generate. At all times, consult network experts at your site about the size and type of all network components you are considering adding.

Internet Access Lines
In making decisions about how to accommodate or improve Internet access, remember that your primary goal should be to accommodate the traffic of as many packets of data as possible. User requests and the responses to them are bundled into packets that travel across Internet access lines. A variety of Internet access lines exist, and what you use at your site affects traffic. Some examples of access lines include T-1 at 1.544 Megabits per second (Mbps), T-3 at 44.736 Mbps, fiber optic at 10-100 Mbps, and ISDN at 128-150 Kilobits per second (Kbps). The more access line bandwidth you have on any of these, the greater the number of packets that can travel back and forth along your lines.

Plan for Peak Load Times

During peak load times, the number of packets that are being sent is at its highest level. In general, scale your system with the goal of handling 100 percent of peak volume. But bear in mind that any network behaves unpredictably and that despite your scaling efforts, 100 percent of peak volume cannot always be handled.

For example, assume that at peak load 5 percent of your users occasionally do not have immediate Internet access when running their NAS applications. Of that 5 percent, determine how many users retry access after the first attempt. Again, not all of those users may get through, and of that unsuccessful portion, another percentage will retry. As a result, the peak appears longer because peak use is spread out over time as users continue to attempt access.
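The way retries stretch out the peak can be sketched with a few lines of Python. The 5 percent failure rate and 50 percent retry rate below are illustrative assumptions, not measured figures:

```python
# Illustrative sketch of how retries spread peak load over later time
# slots. Assumes 5% of attempts fail and half of the failed users retry
# in the next slot; both rates are hypothetical.
def retry_tail(users: int, fail_rate: float = 0.05,
               retry_rate: float = 0.5, slots: int = 5) -> list:
    """Return the number of retry attempts arriving in each later slot."""
    tail = []
    blocked = users * fail_rate          # users turned away at the peak
    for _ in range(slots):
        retries = blocked * retry_rate   # portion that tries again
        tail.append(round(retries))
        blocked = retries * fail_rate    # some retries fail once more
    return tail

retry_tail(10_000)  # retry traffic trailing off after the peak
```

Even with modest rates, the tail of retries shows why the peak appears longer than the raw traffic numbers alone would suggest.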

To ensure optimal access during the peak, start by verifying that your Internet service provider (ISP) has a backbone network connection that can reach an Internet hub without degradation.

Increase Bandwidth

Next, determine by how much you must increase bandwidth. Depending on your method of access (T-1 lines, ISDN, and so on), you can calculate the amount of increased capacity you require to handle your estimated load. For example, suppose your site uses T-1 or the higher-speed T-3 links for Internet access. Given their bandwidth, you can estimate how many lines you'll need on your network based on the average number of requests generated per second at your site and the maximum peak load. You can calculate both of these figures using any of the web-site analysis and site-monitoring tools currently available on the market.

A single T-1 line can handle 1.544 Mbps, so a network of four T-1 lines carrying 1.544 Mbps each can handle approximately 6.2 Mbps (6,176,000 bits per second) of data. Assuming that the average HTML page sent back to a client is 30 kilobytes (KB), this network of four T-1 lines can handle the following traffic per second:

6,176,000 bits per second / 8 bits per byte = 772,000 bytes per second

772,000 bytes per second / 30 KB per page = approximately 25 client requests for pages per second

At traffic of 25 pages per second, this system can handle 90,000 pages per hour (25 x 60 seconds x 60 minutes), and therefore 2,160,000 pages per day maximum, assuming an even load throughout the day.
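The arithmetic above can be wrapped in a small Python helper for experimenting with other line counts and page sizes. The figures follow the chapter's example (1.544 Mbps per T-1 line, 30 KB pages):

```python
# Rough capacity estimate for a bank of T-1 lines, using the chapter's
# figures: 1.544 Mbps per line and an average 30 KB HTML page.
T1_BPS = 1_544_000        # bits per second carried by one T-1 line
PAGE_BYTES = 30 * 1024    # average HTML response size (30 KB)

def pages_per_second(num_t1_lines: int) -> float:
    """Approximate pages per second a bank of T-1 lines can carry."""
    total_bytes_per_sec = num_t1_lines * T1_BPS / 8
    return total_bytes_per_sec / PAGE_BYTES

pps = pages_per_second(4)   # ~25 pages per second for four T-1 lines
```

The chapter rounds this to 25 pages per second, which works out to 90,000 pages per hour and 2,160,000 pages per day at a constant load.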

Accommodate for Peak Load

Keep in mind, however, that having an even load throughout the day is probably not realistic. You need to determine when peak load occurs, how long it lasts, and what percentage of the total load it is. For example, in the scenario outlined here, if peak load lasts for two hours and takes up 30 percent of the total load of 2,160,000 pages, this means that 648,000 pages must be carried over the T-1 lines during two hours of the day. Therefore, to accommodate peak load during those two hours, you must increase the number of T-1 lines from four to 16:

648,000 pages / 120 minutes = 5,400 pages per minute

5,400 pages per minute / 60 seconds = 90 pages per second

If four lines can handle 25 pages per second, then approximately four times that many pages requires four times as many lines, in this case 16 lines. The 16 lines are meant to handle the realistic maximum of a 30 percent peak load; naturally, the same lines can easily handle the other 70 percent of your load throughout the rest of the day. Note that instead of installing 16 T-1 lines, you get more value by installing a single T-3 line, which at a bandwidth of 44.736 Mbps is equivalent to 28 T-1 lines but costs about as much as eight T-1 lines.
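The peak-load sizing above follows directly from the baseline figures, as a short sketch shows. The 4-line, 25-pages-per-second baseline is the chapter's example; the sketch rounds up to whole multiples of that baseline:

```python
import math

# Sketch of the chapter's peak-load sizing: given the pages that must
# be served during the peak window, scale up from a measured baseline
# (here, four T-1 lines handling about 25 pages per second).
def t1_lines_for_peak(peak_pages: int, peak_minutes: int,
                      baseline_lines: int = 4,
                      baseline_pps: float = 25.0) -> int:
    peak_pps = peak_pages / (peak_minutes * 60)      # pages per second
    multiplier = math.ceil(peak_pps / baseline_pps)  # scale-up factor
    return baseline_lines * multiplier

t1_lines_for_peak(648_000, 120)  # 90 pages/sec -> 4x baseline -> 16 lines
```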

As a general rule, given existing Internet access technology, you should design NAS applications that return HTML pages of no more than 30 KB each. In addition to improving overall performance, keeping the maximum size of your output pages constant lets you better calculate and predict traffic over Internet access lines. And remember that as ADSL, ISDN, and cable modem technologies become more popular, you'll get better throughput and speed from your applications.

Routers and Subnets
Routers and subnets have an impact on load balancing in NAS. Chances are, your network is already divided into subnets using routers. Or, perhaps, as a result of deploying NAS, you are planning to split up your network into new subnets. For example, you may wish to cordon off certain servers because the databases they communicate with are used by a specific functional group, for instance accounting or customer service. Consider the effect of subnets and routers on the load-balancing capabilities of NAS.

Load Balancing and Subnets

NAS scalability is increased by dynamic load balancing, a feature that distributes user request loads across designated servers by selecting the "best" server to process incoming requests. The best server is one that is determined to be least loaded according to the load-balancing system's server statistics. You can add servers as needed and balance loads so that all incoming user requests are processed immediately.

There are several advantages to load balancing:

If you intend to use load balancing, then all the servers that participate in the process must be in the same physical subnet. If they are not, you must enable multicasting across the subnets by enabling it on the router that connects the servers in the separate subnets. Consult your network router documentation and your network administrator for details about configuring subnets.
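Conceptually, the "best server" choice reduces to picking the least-loaded server from the cluster's statistics. The sketch below is only an illustration; NAS computes its own weighted load metrics, and the server names and load values here are made up:

```python
# Hypothetical sketch of least-loaded server selection. NAS maintains
# its own server statistics; the names and load values here are made up.
def pick_best_server(stats: dict) -> str:
    """Return the server reporting the lowest load."""
    return min(stats, key=stats.get)

servers = {"nas1": 0.72, "nas2": 0.31, "nas3": 0.55}
pick_best_server(servers)  # -> "nas2", the least-loaded server
```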

Network Cards
For greater bandwidth and optimal network performance, use NAS with 10 Mbps Ethernet cards or, preferably, 100 Mbps Ethernet cards.

Preparing for Network Bottlenecks
Being prepared for network bottlenecks can help you avoid problems with throughput and response time, especially during peak load times. To avoid bottlenecks, focus on the following areas:

Improve your Internet access lines. Make sure that you have enough bandwidth on the physical media used to carry packets to and from your web browser client and NAS, via the web server.

Reduce the number of routers. Routers can create bottlenecks, particularly when too many are configured into one network. Consult a network expert at your site about how to set up your network topology and avoid installing more routers than are absolutely necessary.

Reduce the number of subnets. One of the outcomes of having more routers is that your network becomes a composite of multiple subnets. For this reason, it's best to have as few subnets as possible. A high number of subnets results in a complicated network path for packet traffic, sometimes negatively affecting response time.

Planning Firewall Location
Typically, you deploy NAS with a web server front end, such as Netscape Enterprise Server (NES). A user accesses NAS data by sending a client request from a web browser through a web server to the Executive Server (KXS) process running inside NAS. In turn, the KXS manages the request and sends data back to the client through the web server.

The communication between the web server and NAS is enabled by the Web Connector plug-in installed on the web server. If you are not using one of the NAS-recommended web servers, the communication is handled through a CGI process. It is strongly recommended that you use NES.

Any entry into NAS from the Internet, or even from an intranet, exposes some or all of your network to a far-reaching audience that can go beyond your enterprise. To protect your network, it is strongly recommended that you deploy one or more firewalls. Typical firewall schemes are described in "Firewall Topologies" on page 29.

Where you place the firewall and how you configure it to allow connections to pass through depends on your security priorities and existing network framework. Consider two issues in particular:

Communication Protocols
Two communication protocols exist between the web server and NAS: Transmission Control Protocol (TCP/IP) and User Datagram Protocol (UDP):

TCP/IP and UDP connections also exist between NAS installations, along with an IP Multicasting protocol connection, which is used for load balancing across the servers. Multicasting connections are always blocked on firewalls and can occur only through multicast-enabled routers.

Firewall Filters
The most common type of firewall technology is packet filtering. When a firewall is set up, connections into your network from the outside, and replies to these connections leaving the network, must pass through a packet filter. Think of a connection as a packet that has been sent over a network line. When it arrives at its destination, the connection has occurred. A reply to this packet is the response.

A packet filter examines the information contained in the header of each packet. Headers usually contain source and destination data, such as address and port number, as well as the direction or flow of the packet. The flow of these connections and replies in and out of the firewall through these filters is as follows:

Initially, the filters are configured with the following defaults for enabling or blocking the connections and replies:


Direction                                Connection   Reply
Incoming from web server to firewall     Blocked      Allowed
Outgoing from firewall to NAS            Blocked      Allowed
Incoming from NAS to firewall            Allowed      Allowed
Outgoing from firewall to web server     Allowed      Allowed
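The default filter behavior can be captured as a small lookup table, which is handy when auditing firewall rules. The directions and defaults come from the defaults listed above; the helper function itself is only an illustration:

```python
# Default packet-filter behavior, as listed above. Keys are
# (direction, kind); values are the firewall's default action.
DEFAULTS = {
    ("web server -> firewall", "connection"): "Blocked",
    ("firewall -> NAS",        "connection"): "Blocked",
    ("NAS -> firewall",        "connection"): "Allowed",
    ("firewall -> web server", "connection"): "Allowed",
    ("web server -> firewall", "reply"):      "Allowed",
    ("firewall -> NAS",        "reply"):      "Allowed",
    ("NAS -> firewall",        "reply"):      "Allowed",
    ("firewall -> web server", "reply"):      "Allowed",
}

def passes(direction: str, kind: str) -> bool:
    """True if the firewall's default lets this packet through."""
    return DEFAULTS[(direction, kind)] == "Allowed"
```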

Opening Ports to Allow Firewall Connections
To allow a particular connection or reply to pass through the firewall, you need to open up a port on the firewall. Each kind of connection, depending on the protocol used, requires opening up a specific port on the firewall.

The following table lists the connections and replies that pass through the firewall, the protocols used for each, and their default ports.

Connection                                                                          Protocol      Default port
Web Connector plug-in to KXS engine                                                 TCP/IP        10818
Web Connector plug-in return connection between the web server and NAS              TCP/IP        Any port greater than 1024
Web Connector ping request from web server to the KXS engine                        UDP           9610
Web Connector ping return from KXS engine to the web server                         UDP           Any port greater than 32768
KXS load-balancing information from one KXS engine to another server's KXS engine   Multicasting  9607
Administration information from one KAS engine to another server's KAS engine       Multicasting  9608
Netscape Application Server Administrator UI to KAS engine                          TCP/IP        10817
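For scripting firewall configuration or documenting openings, the same information can be kept as data. The port numbers are the NAS defaults listed above; the shorthand "1024+" and "32768+" stand for "any port greater than" that value, and your installation's ports may differ:

```python
# Default NAS firewall openings from the table above, keyed by
# connection type. Values are (protocol, default port); "1024+" and
# "32768+" mean "any port greater than" that number.
NAS_FIREWALL_PORTS = {
    "web connector -> KXS":     ("TCP/IP", 10818),
    "web connector return":     ("TCP/IP", "1024+"),
    "ping request -> KXS":      ("UDP", 9610),
    "ping return <- KXS":       ("UDP", "32768+"),
    "KXS load balancing":       ("Multicasting", 9607),
    "KAS administration":       ("Multicasting", 9608),
    "Administrator UI -> KAS":  ("TCP/IP", 10817),
}
```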


Firewall Topologies
A topology is a schematic layout of your network. It is the logical map of your servers, hosts, clients, and other elements, and it shows the connections made between them. A firewall topology focuses on where the firewall exists within your network and how the rest of your enterprise interacts with it. The topologies described here are high-level, logical organizations of where the firewall falls within the system.

When deciding on a firewall type and where to place it, think about your security needs and existing network framework. For example, a firewall may already be in place at your site. If so, you should decide where you want to place NAS in relation to the existing firewall.

The following topologies are discussed in this section:

These topologies represent some of the more common ways that firewalls are configured within networks in which NAS is installed. No one topology is better than the others; however, one may make better sense for your deployment needs.

Single Firewall Topology

The most common firewall configuration is the single firewall, located either between the web server and the Internet, as shown next, or between the web server and NAS.

In this topology, you must configure the firewall in the following manner to allow HTTP requests to traverse through the firewall from the web browser client to NAS:

When the firewall is located between the web server and NAS, as shown next, the web server is exposed to the Internet and is thus vulnerable to attacks.

In this topology, you must configure the firewall in the following manner:

DMZ Firewall Topology

The increasingly popular DMZ ("demilitarized zone") topology is useful if you want to open up your private network to business partners and customers. It adds an extra layer of security beyond the single-firewall layer. In this topology, not only do you get a double layer of security from two firewalls, you also benefit from additional monitoring in the zone between the two firewalls.

The outer firewall, that is, the one between the web browser and the web server, creates a public DMZ. The inner firewall, the one between the web server and Netscape Application Server, creates a private DMZ. You can set up a proxy server on this inner firewall that creates alias IP addresses of the NAS machines that are exposed, via the web server, to the web browser client.

In this topology, configure the outer firewall to allow incoming TCP/IP requests from the web client to the web server by enabling port 80.

Configure the inner firewall to allow incoming TCP/IP requests from the web server to the KXS engine, outgoing TCP/IP replies from the KXS engine to the web server, incoming UDP requests from the web server to the KXS engine, and outgoing UDP replies from the KXS engine to the web server by enabling the various ports that allow these connections. See the previous section "Opening Ports to Allow Firewall Connections" for details.

DMZ-Database Protection Firewall Topology

On rare occasions, you may want to structure your enterprise so that corporate databases reside on their own subnets. For example, if you want to protect sensitive financial data, you may decide to separate into a subnet the database or databases used by the finance department. In addition to the deployment issues described earlier in "Routers and Subnets" on page 24, you should also consider deployment issues around setting up a firewall between the database subnet and the rest of your network. Consult your network administrator for details.


Netscape Application Server Topology
There are many ways you can set up your Netscape Application Server (NAS) system. This section describes some common, as well as recommended, configurations and contains the following topics:

Examples of NAS Topologies
The following topologies range from a very simple scheme to the most complex. No single topology is best for all companies, since every enterprise has its own unique set of requirements and circumstances. Deciding which topology is best suited for your site is something you'll determine over time, as you fine-tune and continue to use NAS within your organization.

In all of the following topologies, the "back-end data source" represents any type of back end that provides and stores data. Some examples of a back end include a mainframe system, a database, another application server, a legacy application, and so on.

Topology 1: Single-Machine Configuration
The simplest NAS topology is one that consists of a single machine on which you have installed a web server, a Netscape Application Server, and a back-end data source.

This topology has the following advantages:

This topology has the following disadvantages:

Table 2.1 summarizes deployment issues and how they rate for Topology 1. A high score (5 is the highest; 1 is the lowest) means that for this topology, the particular issue rates well for deployment. All the topologies described in this section are rated according to the same criteria so that you can easily compare and decide which topology is best suited for your enterprise.

Table 2.1 Ratings for Topology 1
Deployment issue                            Score   Details
System administration                       3
                                            5       Ease of daily administrative tasks
                                            1       Ease of troubleshooting system-level problems
Hardware/OS resource usage                  1
                                            1       Competition for memory
                                            1       Competition for I/O
                                            1       Competition for CPU
Availability                                2
                                            1       System availability during regular maintenance
                                            1       System availability during machine-level failure
                                            5       System availability during process-level failure
Impact on existing (legacy) environment     3
                                            3       Impact on stability of legacy systems and applications
                                            3       Impact on performance of legacy systems and applications

Topology 2: Two Machines
Another simple topology that improves performance by easing the load is one in which the web server and a single installation of NAS reside on the same machine, while the back-end data source is on another machine.

As with Topology 1, this topology is adequate in a setting with few concurrent users, such as in a simple intranet. The advantages and disadvantages of this topology are similar to Topology 1, with the exception of the following three points:

Table 2.2 summarizes deployment issues and how they rate for Topology 2. A high score (5 is the highest; 1 is the lowest) means that for this topology, the particular issue rates well for deployment. All the topologies described in this section are rated according to the same criteria so that you can easily compare and decide which topology is best suited for your enterprise.

Table 2.2 Ratings for Topology 2
Deployment issue                            Score   Details
System administration                       4
                                            5       Ease of daily administrative tasks
                                            3       Ease of troubleshooting system-level problems
Hardware/OS resource usage                  3
                                            3       Competition for memory
                                            3       Competition for I/O
                                            3       Competition for CPU
Availability                                2
                                            1       System availability during regular maintenance
                                            1       System availability during machine-level failure
                                            5       System availability during process-level failure
Impact on existing (legacy) environment     4
                                            4       Impact on stability of legacy systems and applications
                                            4       Impact on performance of legacy systems and applications

Topology 3: Three Machines
The following topology consists of three separate machines: the web server is installed on one machine, a single installation of NAS is installed on the second machine, and the back-end data source is installed on the third machine.

This topology lends itself to more accurately analyzing performance and identifying bottlenecks as each of the components resides on a separate machine. If performance problems arise, you can more easily isolate them and determine where the bottleneck exists.

One or more web servers reside on separate machines and thus can handle more concurrent requests than in Topologies 1 and 2. In this topology, if the web server is to be used strictly for front-ending NAS requests, install the web server on a low-end machine, since the processing the web server performs is simple traffic control of requests. This frees up your more powerful machine for the NAS installation, which typically requires far more resources than the web server.

In this topology, if you find that the web server cannot handle the required number of concurrent requests, it is recommended that you run more web server instances on the web server machine. Note that Unix supports multiple web server instances running on a single machine (via the MaxProcs registry setting), whereas NT does not. On NT, if you intend to run more than one web server instance, install each web server on a separate NT system.

As long as a set of web server installations or instances are serving a single web site, you can also improve web server load by placing a web server load balancer, such as Cisco's LocalDirector, between the client browser and the web servers or instances. The purpose of such a load balancer is to distribute the request load evenly among the web servers or web server instances.

Table 2.3 summarizes deployment issues and how they rate for Topology 3. A high score (5 is the highest; 1 is the lowest) means that for this topology, the particular issue rates well for deployment. All the topologies described in this section are rated according to the same criteria so that you can easily compare and decide which topology is best suited for your enterprise.

Table 2.3 Ratings for Topology 3
Deployment issue                            Score   Details
System administration                       5
                                            5       Ease of daily administrative tasks
                                            5       Ease of troubleshooting system-level problems
Hardware/OS resource usage                  5
                                            5       Competition for memory
                                            5       Competition for I/O
                                            5       Competition for CPU
Availability                                2
                                            1       System availability during regular maintenance
                                            1       System availability during machine-level failure
                                            5       System availability during process-level failure
Impact on existing (legacy) environment     5
                                            5       Impact on stability of legacy systems and applications
                                            5       Impact on performance of legacy systems and applications

Topology 4: Scaling to Two NAS Machines
In addition to increasing web server resources, you may also want to increase NAS resources to handle your system's throughput. By adding another NAS machine, the two systems can form a cluster (shown here), which is a group of NAS installations that participate in synchronization of state and session data.

Topology 4 has the following advantages:

This topology can be further augmented by dedicating multiple web servers to each application server. It is recommended that you upgrade to this topology if stress testing proves that Topology 3, described on page 38, is inadequate for your processing needs. Whether to use this variation also depends on whether the web server is dedicated to serving NAS requests only or also serves static and dynamic content of its own.

Table 2.4 summarizes deployment issues and how they rate for Topology 4. A high score (5 is the highest; 1 is the lowest) means that for this topology, the particular issue rates well for deployment. All the topologies described in this section are rated according to the same criteria so that you can easily compare and decide which topology is best suited for your enterprise.

Table 2.4 Ratings for Topology 4
Deployment issue                            Score   Details
System administration                       5
                                            4       Ease of daily administrative tasks
                                            5       Ease of troubleshooting system-level problems
Hardware/OS resource usage                  5
                                            5       Competition for memory
                                            5       Competition for I/O
                                            5       Competition for CPU
Availability                                4
                                            4       System availability during regular maintenance
                                            4       System availability during machine-level failure
                                            5       System availability during process-level failure
Impact on existing (legacy) environment     5
                                            5       Impact on stability of legacy systems and applications
                                            5       Impact on performance of legacy systems and applications

Topology 5: Scaling to More Than Two NAS Machines
The final topology is one in which the cluster consists of more than two Netscape Application Servers. This configuration is typically accompanied by a proportional increase in the number of web servers, as shown here.

There are two important reasons for increasing the number of NAS servers beyond two:

Do not assume that scaling up the number of NAS servers in a cluster results in a linear scaling of throughput. Some of the resources of each machine are used to maintain load balancing and state and session management information across the various servers. Adding more servers does improve throughput, but each additional server introduces more overhead in the form of load balancing and data synchronization. This overhead is referred to as the scaling factor. The added servers must devote resources to the scaling factor as well as to the goal of improving performance.
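One way to picture the scaling factor is a simple model in which every server beyond the first costs the cluster a fixed fraction of per-server capacity for synchronization and load-balancing overhead. The 5 percent overhead and 100-requests-per-second figures below are purely illustrative, not NAS measurements:

```python
# Illustrative model of sub-linear cluster scaling: each server beyond
# the first spends a fixed fraction of capacity on load balancing and
# data synchronization. The 0.05 overhead figure is hypothetical.
def cluster_throughput(servers: int, per_server: float = 100.0,
                       overhead: float = 0.05) -> float:
    usable = per_server * (1 - overhead * (servers - 1))
    return servers * max(usable, 0.0)

[cluster_throughput(n) for n in (1, 2, 4, 8)]
# throughput rises with each added server, but less than linearly
```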

Table 2.5 summarizes deployment issues and how they rate for Topology 5. A high score (5 is the highest, 1 is the lowest) means that for this topology, the particular issue rates well for deployment. All the topologies described in this section are rated according to the same criteria, so that you can easily compare and decide which topology is best suited for your enterprise.

Table 2.5 Ratings for Topology 5
Deployment issue                            Score   Details
System administration                       5
                                            4       Ease of daily administrative tasks
                                            5       Ease of troubleshooting system-level problems
Hardware/OS resource usage                  5
                                            5       Competition for memory
                                            5       Competition for I/O
                                            5       Competition for CPU
Availability                                5
                                            5       System availability during regular maintenance
                                            5       System availability during machine-level failure
                                            5       System availability during process-level failure
Impact on existing (legacy) environment     5
                                            5       Impact on stability of legacy systems and applications
                                            5       Impact on performance of legacy systems and applications

Determining Which Topology to Use
You should test different NAS configurations and machine-CPU mixes to determine which combination works best for you. Naturally, performance, particularly response time, is also affected by the nature of your applications and what kinds of processing power they require. For example, do the applications involve simple I/O activity, or are complicated calculations used? This and other application capacity issues are explained in detail in Chapter 3, "Determining System Capacity" and in Chapter 4, "Performance Testing and Fine-Tuning Your System."

Topologies 4 and 5 consist of more than one installation of NAS on your network. You can choose one of these topologies as a way of handling greater loads, or you can elect to scale the number of CPUs within each machine.

A single-machine topology (Topology 1) may not perform as well as you would like. If your main concern is to maximize throughput, then as a general rule of thumb, adding a second machine with NAS installed on it is a desirable solution. A second machine can handle a larger volume of user requests than a single machine can. However, if your main concern is response time, then consider adding CPUs to the machine or machines you already have. Having more CPUs speeds processing. But remember that system performance is also related to application design, so these suggestions will help improve performance in varying degrees, depending on how you design and deploy your applications.

Determining Backup Requirements
Your server backup requirements may affect the number of NAS installations at your site. Your server backup objective is to achieve fault tolerance, eliminating any single point of failure in your NAS system. If one or more servers fail, fault tolerance and fail-over capabilities ensure that requests continue to be processed without interruption. This is achieved through data synchronization, which enables distributed state and session management services among servers in the same cluster. Distributed state and session management services preserve data generated during a user session so that user sessions continue without interruption, and with no loss of data, even if one or more servers or processes in the cluster become unavailable.

A cluster is a group of NAS machines that participate as a group in synchronization of state and session data. Each server within a cluster can assume one of several Sync Server roles:

If your configuration consists of only one NAS machine, then cluster planning is not necessary. However, if you want to perform data synchronization, you must plan your cluster carefully and consider scaling the number of machines you deploy according to how you want to structure the cluster.

For information about how to assign a Sync Server role to one of the servers in your cluster, see Chapter 14, "Managing Distributed Data Synchronization," in the Administration Guide.

Adding a Sync Backup to the Cluster
Adding another machine as a Sync Backup ensures that you always have a copy of your data, regardless of what happens to your Sync Primary; but it also means that overall performance is affected by the added load of data synchronization processing. Every change in the state of the Sync Primary must be replicated on the Sync Backup. This replication can significantly increase network traffic and memory allocation and deallocation. Therefore, do not use a Sync Backup unless it's absolutely necessary.

For any cluster, the Sync Server role of each server depends on your data synchronization needs. For additional scalability, increase the number of CPUs within a machine, or increase the number of machines. Another option, if you want to increase the number of machines, is to add another cluster that contains the additional machines. Your course of action will depend on the demands your application makes on your system.

For more information about failover capacity planning, see Chapter 3, "Determining System Capacity."

Integrating a Database Back End Database integration and connectivity are two very important areas of NAS deployment. There are two main issues to consider: avoiding database bottlenecks, and deciding whether your applications should use global or local transactions.

Avoiding Database Bottlenecks
When integrating a database back end into your overall NAS topology, make sure that the database does not become the system bottleneck. Talk to your database administrator and discuss which performance-enhancing steps are feasible for your installation.

Using Global Transactions or Local Transactions
If your NAS system has a variety of database back ends (for example, Oracle, Microsoft SQL Server, Sybase, and DB2), consider designing your Java-based applications to work with global transactions, which can update multiple database types. Likewise, if your NAS system integrates databases in different locations, global transactions can update these distributed databases.

Global Transactions

A global transaction updates a database through one or more Enterprise JavaBeans (EJBs) running concurrently within the same global transaction, from within one or more KJS processes. Multiple-EJB processing occurs when one EJB triggers another EJB to run and both participate in the same transaction. A global transaction can also update multiple databases of different types (Oracle, Sybase, and so on) that are distributed over different geographic locations. A local transaction, on the other hand, is managed not by an external transaction manager, but by the database itself.

As part of the deployment process, decide whether you want to design your EJB applications to use global transactions for databases. Global transactions, also called distributed transactions, are managed by an external transaction manager, a feature you can enable at installation time or later by updating the NAS registry. For details about how to configure the transaction manager at installation time, see the Installation Guide. For information about how to maintain and use the transaction manager, see Chapter 10, "Administering Transactions," in the Administration Guide.

If you decide to use global transactions for certain databases (for example, those of Oracle and Sybase), keep in mind that you are allowed one connection per thread. Make sure that each KJS process is configured with an equal number of threads and database connections. Likewise, make sure that your database is configured to accept at least as many connections as the total number of threads (and therefore connections) across all your KJS processes.
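
The one-connection-per-thread rule makes connection sizing simple arithmetic: the database must accept the sum of the thread counts across all KJS processes. The process names, thread counts, and the helper itself below are hypothetical illustrations; NAS does not ship such a utility.

```java
// Hypothetical sizing helper for global (XA) transactions, where each
// KJS thread may hold one database connection. The process names and
// thread counts are illustrative only.
import java.util.LinkedHashMap;
import java.util.Map;

public class XaConnectionSizing {
    // Minimum number of connections the database must accept: the sum of
    // the configured threads across every KJS process, since each thread
    // can hold one connection for the duration of a global transaction.
    static int requiredDatabaseConnections(Map<String, Integer> kjsThreadCounts) {
        return kjsThreadCounts.values().stream().mapToInt(Integer::intValue).sum();
    }

    public static void main(String[] args) {
        Map<String, Integer> kjs = new LinkedHashMap<>();
        kjs.put("kjs1", 32);   // threads configured for the first KJS process
        kjs.put("kjs2", 32);
        kjs.put("kjs3", 16);
        System.out.println(requiredDatabaseConnections(kjs)); // prints 80
    }
}
```

If the database's connection limit is lower than this total, requests can stall waiting for a connection even though KJS threads are available.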

For information about how to increase the number of threads and processes, see Chapter 7, "Increasing Fault Tolerance and Server Resources," in the Administration Guide.

Local Transactions

Local transactions handle connections differently: you can specify a connection pool of unlimited size, so performance can at times be better than with global transactions. Depending on the nature of your applications, you may decide that enabling local transactions is the better option.
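
The contrast between the capped pool of the global-transaction case and the unbounded pool allowed for local transactions can be illustrated with a minimal stand-in. This sketch is not NAS's actual connection-pool implementation, and the sizes shown are arbitrary.

```java
// Illustrative contrast between a bounded pool (the global-transaction
// case, where connections are capped at the thread count) and an
// effectively unbounded pool (the local-transaction case). A stand-in
// sketch only, not NAS's real connection pool.
import java.util.concurrent.Semaphore;

public class PoolSketch {
    private final Semaphore permits;

    // maxSize <= 0 means "unbounded", as a local-transaction pool may be.
    PoolSketch(int maxSize) {
        this.permits = maxSize > 0 ? new Semaphore(maxSize) : null;
    }

    // Returns true if a connection could be handed out immediately.
    boolean tryAcquire() {
        return permits == null || permits.tryAcquire();
    }

    void release() {
        if (permits != null) permits.release();
    }

    public static void main(String[] args) {
        PoolSketch global = new PoolSketch(2);  // capped, like one connection per thread
        PoolSketch local = new PoolSketch(0);   // unbounded

        // A third concurrent request fails on the bounded pool...
        System.out.println(global.tryAcquire() && global.tryAcquire()); // true
        System.out.println(global.tryAcquire());                        // false
        // ...but always succeeds on the unbounded one.
        System.out.println(local.tryAcquire());                         // true
    }
}
```

The trade-off is that an unbounded pool shifts the limiting factor to the database itself, which is why the bottleneck discussion earlier in this chapter still applies.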

Deciding Between Global and Local Transactions

Here are some application factors to consider when deciding between global and local transactions:

Depending on all of these factors and your database structure and integration with NAS, you may decide to use a mix of both local and global transactions, if your database client supports this. Consult with your database administrator before making any decisions.

Before using the transaction manager and resource manager, configure your database back ends for XA transactions. Consult your database vendor documentation for details.

Integrating NAS with Directory Server The NAS installation program allows you to install Directory Server, Netscape's implementation of the Lightweight Directory Access Protocol (LDAP). NAS uses Directory Server to store NAS configuration information, most of which was stored in the registry of earlier NAS releases, and also uses it as a central repository for user and group information. Integrating NAS with Directory Server is particularly useful if you are installing multiple NAS machines at your site, because the configuration information for all your NAS installations is centralized in one place rather than being distributed across the registries of each NAS installation.

LDAP provides an open directory access protocol running over TCP/IP. Netscape Directory Server 4.0 supports LDAP versions 2 and 3, and provides the software necessary for an entire directory solution, including the server-side software that implements the LDAP protocol. Other LDAP clients are also available, including the Users and Groups area in Netscape Administration Server and the address book feature in Netscape Communicator 4.0.
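
As a small illustration of how LDAP clients address directory entries, the sketch below composes a distinguished name (DN) with the JDK's javax.naming.ldap.LdapName class (a much newer JDK API than the software described in this chapter). The base DN and the ou=People layout are hypothetical examples, not NAS's actual directory schema.

```java
// Sketch of composing an LDAP distinguished name with JNDI's LdapName.
// The "o=airius.com" base DN and "ou=People" subtree are hypothetical.
import javax.naming.InvalidNameException;
import javax.naming.ldap.LdapName;

public class DnSketch {
    // Builds the DN of a user entry under a hypothetical people subtree.
    static String userDn(String baseDn, String uid) throws InvalidNameException {
        LdapName dn = new LdapName(baseDn); // validates the base DN syntax
        dn.add("ou=People");                // descend into the subtree
        dn.add("uid=" + uid);               // the user's own RDN
        return dn.toString();
    }

    public static void main(String[] args) throws Exception {
        // prints uid=jdoe,ou=People,o=airius.com
        System.out.println(userDn("o=airius.com", "jdoe"));
    }
}
```

Note that DN components read from most specific (left) to least specific (right), which is why the user's RDN, added last, appears first in the printed string.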

NAS Settings Stored on Directory Server
When you install Directory Server, many NAS configuration settings are stored there. If you are upgrading from an earlier version of NAS, remember that these settings were previously stored in the NAS registry of each NAS installation at your site.

The NAS configuration settings that now reside on Directory Server are listed in Table 2.6.

Table 2.6 NAS configuration settings residing on Directory Server

ClassDef
    All the registered applications that all NAS installations at your site use.
NameTrans
    The list of user-specified names for all applications registered to all your NAS servers, and their corresponding GUIDs (globally unique identifiers).
Clusters
    All the clusters you've created on your network, and the servers within each cluster.
ACL
    The access control lists (ACLs) that you can use to perform access checks on each application resource.
EJB-Components
    The list of user-specified names for all Enterprise JavaBeans (EJBs) registered to all your NAS servers, and their corresponding GUIDs.
GMS
    The Global Message Service (GMS) multicasting parameters. The load balancer module in each server uses these multicast messages to communicate with the load balancer modules in other servers.
NLS
    The international environment settings for National Language Support (NLS), used for developing single- or multilingual applications with legacy or Unicode character sets. Use this flag to enable or disable the NLS application programming interfaces (APIs).
Principal
    The user and group security information for all installed NAS machines.
DAE\DataSources
    The mapping of data-source names to drivers.
DAE2\DataSources
    The mapping of data-source names to drivers (JDBC).
EB
    The settings that control how enterprise beans are handled across all your NAS installations.
Extensions
    The extensions that are loaded into all your NAS installations when the servers start up.
LoadB
    The load-balancing parameters that control how requests are handled across all your NAS installations.
REQ
    The request manager settings used to configure threads in the thread pool.
Security
    The encryption parameters that control encryption between your web servers and your NAS installations.

NAS-Directory Server Deployment Considerations
In addition to knowing what areas of NAS you can configure from Directory Server, note the following deployment tasks:

Read the Installation Guide for Netscape Directory Server 4.0 and the Installation Guide for Netscape Application Server 4.0 for details about LDAP issues.

 

© Copyright 1999 Netscape Communications Corp.