Netscape Enterprise Server Administrator's Guide: Configuring the Server for Performance


Chapter 10 Configuring the Server for Performance

This chapter is intended for advanced administrators only. Be cautious when you tune your server. Do not change any values except in exceptional circumstances. Read this chapter and other relevant server documentation before making any changes. Always back up your configuration files first.

Note. Some internal Enterprise Server 4.0 tuning parameters are different from those in previous versions of Netscape Enterprise Server.

This chapter includes the following sections:


About Server Performance
Web servers have become increasingly important for both internal and external business communications. As Web servers become more and more business-critical, server performance takes on added significance. Netscape's Enterprise Server continues to lead in this area, by setting a new standard for performance.

Netscape Enterprise Server 4.0 was designed to meet the needs of the most demanding, high traffic sites in the world. It flexibly runs on both Unix and Windows NT and can serve both static and dynamically generated content. Enterprise Server can also run in SSL mode, enabling the secure transfer of information.

Because Netscape Enterprise Server is such a flexible tool for publishing, customer needs vary significantly. This document guides you through the process of defining your server workload and sizing a system to meet your performance needs. It addresses miscellaneous configuration and Unix platform-specific issues, CGI-related performance tuning problems, and other common situations. It also describes the perfdump performance utility and the tuning parameters built into Netscape Enterprise Server 4.0, and concludes with a discussion of two web server benchmarking packages: SpecWeb and Webstone.


Performance Issues
The first step toward sizing your Enterprise 4.0 server is to determine your requirements. Performance means different things to users and to webmasters. Users want fast response times (typically less than 100 ms), high availability (no "connection refused" messages), and as much interface control as possible. Webmasters and system administrators, on the other hand, want to see high connection rates, high data throughput, and uptime approaching 100%. You need to define what performance means for your particular situation.

Here are some areas to consider:


Unix Platform-Specific Issues
The various Unix platforms all have limits on the number of files that can be open in a single process at one time. For busy sites, increase that number to 1024.

These Unix platforms have proprietary sites for additional information about tuning their systems for web servers:


Performance Buckets
Performance buckets allow you to define buckets and link them to various server functions. Every time one of these functions is invoked, the server collects statistical data and adds it to the bucket. For example, "send-cgi" and "NSServletService" are the functions used to serve CGI and Java servlet requests, respectively. You can either define two buckets to maintain separate counters for CGI and servlet requests, or create one bucket that counts requests for both types of dynamic content. The cost of collecting this information is low, and the impact on server performance is negligible. This information can later be accessed using The perfdump Utility. The following information is stored in a bucket:

The following buckets are pre-defined by the server:

Configuration
All the configuration information for performance buckets is specified in the obj.conf file. By default the feature is disabled. To enable performance measurement, add the following line to the obj.conf file:

The following examples show how to define new buckets.

Init fn="define-perf-bucket" name="acl-bucket" description="ACL bucket"
Init fn="define-perf-bucket" name="file-bucket" description="Non-cached responses"
Init fn="define-perf-bucket" name="cgi-bucket" description="CGI Stats"

The prior example creates three buckets: acl-bucket, file-bucket, and cgi-bucket. To associate these buckets with functions, add bucket=bucket-name in front of the function in the obj.conf file for which you wish to measure performance.

PathCheck fn="check-acl" acl="default" bucket="acl-bucket"
Service method="(GET|HEAD|POST)" type="*~magnus-internal/*" fn="send-file" bucket="file-bucket"
<Object name="cgi">
ObjectType fn="force-type" type="magnus-internal/cgi"
Service fn="send-cgi" bucket="cgi-bucket"
</Object>

Performance Report
The server statistics in buckets can be accessed using The perfdump Utility. The performance buckets information is located in the last section of the report that perfdump returns. To enable reports on performance buckets, complete the following steps:

  1. Define an extension for the performance bucket report. Add the following line to the mime.types file:
  2. Associate the type you declared in mime.types with the service-dump function in the obj.conf file:
  3. Use the URL http://server_name:port_number/.perf to view the performance report.
Note. You must include a period (.) before the extension you defined in the mime.types file (in this case, .perf).
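Concretely, steps 1 and 2 usually amount to two lines like these (the type name "perf" is an assumption; any otherwise unused type works, as long as service-dump, the function named in step 2, is associated with it):

```
# mime.types: map the .perf extension to a type
type=perf exts=perf

# obj.conf: serve that type with the built-in service-dump function
Service fn="service-dump" type="perf"
```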

The report contains the following information:


Miscellaneous magnus.conf Directives
You can use the following magnus.conf directives to configure your server to function more effectively:

Multi-process Mode
The Enterprise Server can be configured to handle requests using multiple processes and multiple threads in each process. This flexibility provides optimal performance for sites using threads and also provides backward compatibility to sites running legacy applications that are not ready to run in a threaded environment. Because applications on Windows NT generally already take advantage of multi-process considerations, this feature mostly applies to Unix platforms.

The advantage of multiple processes is that legacy applications which are not thread-aware or thread safe can now be run more effectively in Enterprise Server 4.0. However, because all the Netscape extensions are built to support a single-process, threaded environment, they cannot run in the multi-process mode. WAI, LiveWire, Java, Server-side JavaScript, LiveConnect and the Web Publishing and Search plugins fail on startup if the server is in multi-process mode. There are two approaches to multi-thread design:

In the single-process design, the server receives requests from web clients to a single process. Inside the single server process, many threads are running which are waiting for new requests to arrive. When a request arrives, it is handled by the thread receiving the request. Because the server is multi-threaded, all extensions written to the server (NSAPI) must be thread-safe. This means that if the NSAPI extension uses a global resource (like a shared reference to a file or global variable) then the use of that resource must be synchronized so that only one thread accesses it at a time. All plugins provided by Netscape are thread-safe and thread-aware, providing good scalability and concurrency. However, your legacy applications may be single-threaded. When the server runs the application, it can only execute one at a time. This leads to severe performance problems when put under load. Unfortunately, in the single-process design, there is no real workaround.

In the multi-process design, the server spawns multiple server processes at startup. Each process contains one or more threads (depending on the configuration) which receive incoming requests. Since each process is completely independent, each one has its own copies of global variables, shared libraries, caches, and other resources. Using multiple processes requires more resources from your system. Also, if you try to install an application which requires shared state, it has to synchronize that state across multiple processes. NSAPI provides no helper functions for implementing cross-process synchronization.

If you are not running any NSAPI in your server, you should use the default settings: one process and many threads. If you are running an application which is not scalable in a threaded environment, you should use a few processes and many threads, for example, 4 or 8 processes and 256 or 512 threads per process.
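As a magnus.conf sketch, that guidance might read as follows (the assumption here is that RqThrottle sets the per-process thread count in multi-process mode; treat both values as starting points, not recommendations):

```
MaxProcs 4        # four server processes
RqThrottle 256    # threads per process (assumed per-process in this mode)
```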

Prior to HPUX 10.30, HPUX thread support consisted of DCE threads. DCE threads were implemented as "user level" threads, which means they are scheduled only in user space. Since the kernel does not know anything about these user threads, they are cooperatively scheduled within the context of a single process. This means that on multi-processor systems, all the DCE threads are scheduled on only one processor at a time, which prevents MP scaling. With HPUX 10.30, HPUX introduced its first kernel-based threading model (pthreads). Since the kernel has knowledge of these threads and is responsible for scheduling them, programs written to the pthreads API can be scheduled on all processors in a multi-processor system. Enterprise Server 4.0 now uses HP's kernel-level threads.

MaxProcs (Unix)

Use this directive to set your Unix server in multi-process mode, which allows for higher scalability on multi-processor machines. If, for example, you are running on a four-processor machine, setting MaxProcs to 4 improves performance: one process per processor.

If you are running Enterprise Server 4.0 in multi-process mode, you cannot run LiveWire, Web Publisher, and WAI.

For example, MaxProcs 4 results in one primordial process and four active processes.

Note. This value is not tunable from the server.

Accept Thread Information
MinAcceptThreadsPerSocket / MaxAcceptThreadsPerSocket

Use this directive to specify how many threads you want in accept mode on a listen socket at any time. It is good practice to set this equal to the number of processes. You can set it to twice (2x) the number of processes, but setting it much higher (such as 10x or 50x) allows too many threads to be created and slows the server down.
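With MaxProcs set to 4 as in the earlier example, this guidance translates into a magnus.conf fragment like the following (values illustrative):

```
MaxProcs 4
MinAcceptThreadsPerSocket 4   # equal to the number of processes
MaxAcceptThreadsPerSocket 8   # at most twice the number of processes
```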

CGIStub Processes (Unix)
You can adjust the CGIStub parameters on Unix systems. In Enterprise Server 4.0, the CGI engine creates CGIStub processes as needed to handle CGI processes. On systems that serve a large load and rely heavily on CGI-generated content, it is possible for the CGIStub processes spawned to consume all system resources. If this is happening on your server, the CGIStub processes can be tuned to restrict how many new CGIStub processes can be spawned, their timeout value, and the minimum number of CGIStub processes that will be running at any given moment.

Note. If you have an init-cgi function in the obj.conf file and you are running in multi-process mode, you must add LateInit = yes to the init-cgi line.

MinCGIStubs/MaxCGIStubs/CGIStubIdleTimeout

The three directives (and their defaults) that can be placed in the magnus.conf file to control CGIStub are:

MinCGIStubs 2
MaxCGIStubs 10
CGIStubIdleTimeout 45

MinCGIStubs controls the number of processes that are started by default. The first CGIStub process is not started until a CGI program has been accessed. The default value is 2. Note that if you have an init-cgi directive in the obj.conf file, the minimum number of CGIStub processes are spawned at startup.

MaxCGIStubs controls the maximum number of CGIStub processes the server can spawn. This is the maximum concurrent CGIStub processes in execution, not the maximum number of pending requests. The default value shown should be adequate for most systems. Setting this too high may actually reduce throughput. The default value is 10.

CGIStubIdleTimeout causes the server to kill any CGIStub processes that have been idle for the number of seconds set by this directive. Once the number of processes drops to MinCGIStubs, the server does not kill any more.

Buffer Size
SndBufSize/RcvBufSize

You can specify the size of the send buffer (SndBufSize) and the receiving buffer (RcvBufSize) at the server's sockets. For more information regarding these buffers, see your Unix documentation.

Native Thread Pool Size
NativePoolStackSize/NativePoolQueueSize/NativePoolMaxThreads/NativePoolMinThreads

In previous versions of Enterprise Server, you could control the native thread pool by setting the system environment variables NSCP_POOL_STACKSIZE, NSCP_POOL_THREADMAX, and NSCP_POOL_WORKQUEUEMAX. In Enterprise Server 4.0, you can use the directives in magnus.conf to control the size of the native kernel thread pool.

NativePoolStackSize determines the stack size of each thread in the native (kernel) thread pool.

NativePoolQueueSize determines the number of threads that can wait in the queue for the thread pool. If all threads in the pool are busy, the next request-handling thread that needs to use a thread in the native pool must wait in the queue. If the queue is full, the next request-handling thread that tries to get in the queue is rejected, with the result that it returns a busy response to the client. It is then free to handle another incoming request instead of being tied up waiting in the queue.

NativePoolMaxThreads determines the maximum number of threads in the native (kernel) thread pool.

NativePoolMinThreads determines the minimum number of threads in the native (kernel) thread pool.
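A magnus.conf sketch setting all four directives (the values are illustrative only, loosely matching the pool limits shown in the perfdump output later in this chapter, and the meaning of a zero stack size is an assumption):

```
NativePoolStackSize 0        # 0 assumed to keep the platform default stack size
NativePoolMinThreads 1
NativePoolMaxThreads 100
NativePoolQueueSize 1024
```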


About RqThrottle
The RqThrottle parameter in the magnus.conf file specifies the maximum number of simultaneous transactions the web server can handle. The default value is 512. Changes to this value can be used to throttle the server, minimizing latencies for the transactions that are performed. The RqThrottle value acts across multiple virtual servers, but does not attempt to load-balance.

To compute the number of simultaneous requests, the server counts the number of active requests, adding one to the number when a new request arrives, subtracting one when it finishes the request. When a new request arrives, the server checks to see if it is already processing the maximum number of requests. If it has reached the limit, it defers processing new requests until the number of active requests drops below the maximum amount.

In theory, you could set the maximum simultaneous requests to 1 and still have a functional server. Setting this value to 1 would mean that the server could only handle one request at a time, but since HTTP requests generally have a very short duration (response time can be as low as 5 milliseconds), processing one request at a time would still allow you to process up to 200 requests per second.

However, in actuality, Internet clients frequently connect to the server and then do not complete their requests. In these cases, the server waits 30 seconds or more for the data before timing out. (You can define this timeout period in obj.conf. It has a default of 5 minutes.) Also, some sites do heavyweight transactions that take minutes to complete. Both of these factors add to the maximum simultaneous requests that are required. If your site is processing many requests that take many seconds, you may need to increase the number of maximum simultaneous requests.

In the 3.0 server the defaults were 48/128. With 4.0, the limits are increased to 48/512. This is because 128 can be a gating factor for performance even on sites with as few as 750,000 hits per day. If your site is experiencing slowness and the ActiveThreads count remains close to the limit, consider increasing the maximum threads limit. To find out the active thread count, use The perfdump Utility.

A suitable RqThrottle value ranges from 200-2000 depending on the load. If you want your server to use all the available resources on the system (that is, you don't run other server software on the same machine), then you can increase RqThrottle to a larger value than necessary without negative consequences.

Note. If you are using older NSAPI plugins that are not reentrant, they will not work with the multithreading model described in this document. To continue using them, you should revise them so that they are reentrant. If this is not possible, you can configure your server to work with them by setting RqThrottle to 1 and then using a high value for MaxProcs, such as 48 or greater, but this will adversely impact your server's performance.

Tuning

There are two ways to tune the thread limit: through editing the magnus.conf file and through the Server Manager.

If you edit the magnus.conf file, RqThrottleMinPerSocket is the minimum value and RqThrottle is the maximum value.

The minimum limit is a goal for how many threads the server attempts to keep in the WaitingThreads state. This number is just a goal. The number of actual threads in this state may go slightly above or below this value. The default value is 48. The maximum threads represents a hard limit for the maximum number of active threads that can run simultaneously, which can become a bottleneck for performance. The default value is 512.
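In magnus.conf, the two limits are set directly; the values below are the defaults described above:

```
RqThrottleMinPerSocket 48   # goal for threads kept in the WaitingThreads state
RqThrottle 512              # hard limit on simultaneously active threads
```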

If you use the Server Manager, follow these steps:

  1. Go to the Preferences tab.
  2. Click the Performance Tuning link.
  3. Enter the desired value in the Maximum simultaneous requests field.
For additional information, see The Performance Tuning Page.


The perfdump Utility
The perfdump utility is a service function built into Enterprise Server. It collects various pieces of performance data from the web server internal statistics and displays them in ASCII text.

To install perfdump, you need to make the following modifications in obj.conf in the netscape/server4/https-server_name/config directory:

  1. Add the following object to your obj.conf file (after the default object):
  2. Edit the "ppath=" line if your document root is different from the example above. Make sure to put ".perf" after the path to the document root, as shown above.
  3. Restart your server software, and access perfdump by accessing this URL:
You can request the perfdump statistics and inform the browser to automatically refresh the statistics every n seconds by using this URL, which sets the refresh to every 5 seconds:
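For a server reachable at server_name:port_number, the two URLs typically take the following form (the refresh query parameter syntax is an assumption):

```
http://server_name:port_number/.perf
http://server_name:port_number/.perf?refresh=5
```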

Sample Output
ns-httpd pid: 133

ListenSocket #0:

------------------

Address https://INADDR_ANY:80

ActiveThreads 48

WaitingThreads 47

BusyThreads 1

Thread limits 48/512

KeepAliveInfo:

------------------

KeepAliveCount 0/200

KeepAliveHits 0

KeepAliveFlushes 0

KeepAliveTimeout 30 seconds

CacheInfo:

------------------

enabled yes

CacheEntries 0/4096

CacheSize(bytes) 0/10485760

Hit Ratio 0/1 ( 0.00)

pollInterval 5

maxFileSize 537600

Native Thread Pool Data:

------------------------

Idle/Peak/Limit 1/1/100

Work queue length/Limit 0/2147483647

Peak work queue length 1

Work queue rejections 0

Server DNS cache disabled


Using perfdump Statistics to Tune Your Server
This section describes the information available through the perfdump utility and discusses how to tune some parameters to improve your server's performance. The default tuning parameters are appropriate for all sites except those with very high volume. The only parameter that large sites may regularly need to change is the RqThrottle parameter, which is tunable from Server Manager.

The perfdump utility monitors these statistics:

ListenSocket Information
ListenSocket reports the listen-queue size, a socket-level parameter that specifies the number of incoming connections the system will accept for that socket. The default setting is 128 (for Unix) or 100 (for Windows NT) incoming connections.

Make sure your system's listen-queue size is large enough to accommodate the ListenSocket size set in Enterprise Server. The ListenSocket size set from Enterprise Server changes the listen-queue size requested by your system. If Enterprise Server requests a ListenSocket size larger than your system's maximum listen-queue size, the size defaults to the system's maximum.

Warning. Setting ListenSocket too high can degrade server performance. ListenSocket was designed to prevent the server from becoming overloaded with connections it cannot handle. If your server is overloaded and you increase ListenSocket, the server will only fall further behind.

The first set of perfdump statistics is the ListenSocket information. For each hardware virtual server you have enabled in your server, there is one ListenSocket structure. For most sites, only one is listed.

ListenSocket #0:

------------------

Address https://INADDR_ANY:80

ActiveThreads 48

WaitingThreads 47

BusyThreads 1

Thread limits 48/512

Note. The "thread" fields specify the current thread use counts and limits for this listen socket. Keep in mind that the idea of a "thread" does not necessarily reflect the use of a thread known to the operating system. "Thread" in these fields really means an HTTP session. If you check the operating system to see how many threads are running in the process, it is not going to be the same as the numbers reported in these fields.

Tuning

There are two ways to create virtual servers: Using the virtual.conf file and using the obj.conf file. If you use the virtual.conf method, the 512 default maximum threads are available to all virtual servers on an as-needed basis. If you use the obj.conf method, the 512 threads are allocated equally to each of the defined virtual servers. For example, if you had two servers, each would have 256 threads available. This is less efficient. To maximize performance in this area, use the virtual.conf method. You can also configure the listen-queue size in The Performance Tuning Page of the Server Manager.

Address
This field contains the base address that this listen socket is listening to. For most sites that are not using hardware virtual servers, the URL is:

http://INADDR_ANY:80

The constant INADDR_ANY is a value known internally to the server; it specifies that this listen socket is listening on all IP addresses of this machine.

Tuning

This setting is not tunable except as described above.

ActiveThreads
The total number of "threads" (HTTP sessions) that are in any state for this listen socket. This is equal to WaitingThreads + BusyThreads.

This setting is not tunable.

WaitingThreads
The number of "threads" (HTTP sessions) waiting for a new TCP connection for this listen socket.

Tuning

This is not directly tunable, but it is loosely equivalent to the RqThrottleMinPerSocket. See Thread limits <min/max>.

BusyThreads
The number of "threads" (HTTP sessions) actively processing requests which arrived on this listen socket.

This setting is not tunable.

Thread limits <min/max>
The minimum thread limit is a goal for how many threads the server attempts to keep in the WaitingThreads state. This number is just a goal. The number of actual threads in this state may go slightly above or below this value.

The maximum threads represents a hard limit for the maximum number of active threads that can run simultaneously, which can become a bottleneck for performance. Enterprise Server 3.6 has default limits of 48/512. For more information, see About RqThrottle.

Tuning

See About RqThrottle.

KeepAlive Information
KeepAliveInfo:
------------------
KeepAliveCount 0/200
KeepAliveHits 0
KeepAliveFlushes 0
KeepAliveTimeout 30 seconds

This section reports statistics about the server's HTTP-level KeepAlive system.

Note. The name "KeepAlive" should not be confused with TCP "KeepAlives." Also, note that the name "KeepAlive" was changed to "Persistent Connections" in HTTP/1.1, but for clarity this document continues to refer to them as "KeepAlive" connections.

Both HTTP/1.0 and HTTP/1.1 support the ability to send multiple requests across a single HTTP session. A web server can receive hundreds of new HTTP requests per second. If every request was allowed to keep the connection open indefinitely, the server could become overloaded with connections. On Unix systems, this could lead to a file table overflow very easily.

To deal with this problem, the server maintains a "Maximum number of `waiting' keepalive connections" counter. A `waiting' keepalive connection is a connection that has fully completed processing of the previous request over the connection and is now waiting for a new request to arrive on the same connection. If the server has more than the maximum waiting connections open when a new connection starts to wait for a keepalive request, the server closes the oldest connection. This algorithm keeps an upper bound on the number of open, waiting keepalive connections that the server can maintain.

Enterprise Server does not always honor a KeepAlive request from a client. The following conditions cause the server to close a connection even if the client has requested a KeepAlive connection:

When SSL is enabled, KeepAliveTimeout defaults to 0, which effectively disables persistent connections. If you want to use persistent connections with SSL, set KeepAliveTimeout to a non-zero value.
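A magnus.conf sketch that re-enables persistent connections on an SSL server (30 seconds matches the non-SSL default shown in the sample perfdump output; 200 matches the default KeepAliveCount maximum):

```
KeepAliveTimeout 30           # any non-zero value re-enables KeepAlive under SSL
MaxKeepAliveConnections 200   # maximum waiting keepalive connections
```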

You can also change KeepAliveTimeout in The Performance Tuning Page in the Server Manager.

AcceptTimeout
The number of seconds the server will wait for data from the client after accepting the connection before sending the request to the unaccelerated path.

Tuning

If AcceptTimeout is set to a small number, the initial read from clients with slow responses may time out and have to be read on a second attempt. This may impact performance.

KeepAliveCount <KeepAliveCount/KeepAliveMaxCount>
The number of sessions currently waiting for a keepalive connection and the maximum number of sessions that the server allows to wait at one time.

Tuning

Edit the MaxKeepAliveConnections parameter in the magnus.conf file.

KeepAliveHits
The number of times a request was successfully received from a connection that had been kept alive.

This setting is not tunable.

KeepAliveFlushes
The number of times the server had to close a connection because the KeepAliveCount exceeded the KeepAliveMaxCount.

This setting is not tunable.

Cache Information
CacheInfo:
------------------
enabled yes
CacheEntries 0/4096
CacheSize(bytes) 0/10485760

Hit Ratio 0/1 ( 0.00)
pollInterval 5
maxFileSize 537600

This section describes the server's cache information. The server caches the contents of static files on disk, with cache entries keyed by the file's URI. If multiple virtual servers are set up, the key also includes the virtual server's host ID and the port number.

enabled
If the cache is disabled, the rest of this section is not displayed.

Tuning

To disable the server cache, add the following line to the obj.conf file:

Init fn=cache-init disable=true

CacheEntries <CurrentCacheEntries / MaxCacheEntries>
The number of current cache entries and the maximum number of cache entries. A single cache entry represents a single URI.

Tuning

To set the maximum number of cached files in the cache, add the following line to the obj.conf file:

Init fn=cache-init MaxNumberOfCachedFiles=xxxxx

CacheSize <CurrentCacheSize / MaxCacheSize>
The current size of the cache in bytes and the maximum size of the cache in bytes. The default size is 10MB, and the cache cannot insert new entries once that size is reached.

Tuning

To set the maximum size of the cache (in kilobytes), add the following line to the obj.conf file:

Init fn=cache-init MaxTotalCachedFileSize=xxxxx

Hit Ratio <CacheHits / CacheLookups (Ratio)>
The hit ratio value tells you how efficient your site is. The hit ratio should be above 90%. If the number is 0, you need to optimize your site. See the troubleshooting section for more information on how to improve your site.

This setting is not tunable.

pollInterval
When a file is in the cache, the server constantly goes back to the disk to make sure that it hasn't changed since it was last cached. The pollInterval represents a maximum amount of time that can pass before the server checks the disk again. The default value is 5 seconds. If you'd like to check the file with every access, you can set this value to 0.

Tuning

To set the polling interval (in seconds), add the following line to the obj.conf file:

Init fn=cache-init PollInterval=xxxxx

maxFileSize
The maxFileSize is the maximum size of a file that the server will cache. The default size is 537600 bytes, which means that a file of 600K will not be cached. It is recommended that you avoid caching large files unless you have lots of RAM available.

Tuning

To set the maximum cachable file size (in bytes), add the following line to the obj.conf file:

Init fn=cache-init MaxCachedFileSize=xxxxx

DNS Cache Information
Server DNS cache disabled

The DNS cache caches IP addresses and DNS names.

enabled
If the cache is disabled, the rest of this section is not displayed.

Tuning

By default, the DNS cache is off. Add the following line to obj.conf to enable the cache:

Init fn=dns-cache-init

CacheEntries <CurrentCacheEntries / MaxCacheEntries>
The number of current cache entries and the maximum number of cache entries. A single cache entry represents a single IP address or DNS name lookup.

Tuning

To set the maximum size of the DNS cache, add the following line to the obj.conf file:

Init fn=dns-cache-init cache-size=xxxxx

HitRatio <CacheHits / CacheLookups (Ratio)>
The hit ratio displays the number of cache hits and the number of cache lookups. A good hit ratio for the DNS cache is ~60-70%.

This setting is not tunable.

Native Threads Pool
Native Thread Pool Data:
------------------------
Idle/Peak/Limit 1/1/100
Work queue length/Limit 0/2147483647
Peak work queue length 1
Work queue rejections 0

The native thread pool is used internally by the server to execute NSAPI functions that require a native thread for execution.

Enterprise Server uses NSPR, which is an underlying portability layer that provides access to the host OS services. This layer provides abstractions for threads that are not always the same as those for the OS-provided threads. These non-native threads have lower scheduling overhead, so their use improves performance. However, these threads are sensitive to blocking calls to the OS, such as I/O calls. To make it easier to write NSAPI extensions that can make use of blocking calls, the server keeps a pool of threads that safely support blocking calls (usually this means it is a native OS thread). During request processing, any NSAPI function that is not marked as being safe for execution on a non-native thread is scheduled for execution on one of the threads in the native thread pool.

If you have written your own NSAPI plug-ins such as NameTrans, Service, or PathCheck functions, these execute by default on a thread from the native thread pool. If your plug-in makes use of the NSAPI functions for I/O exclusively or does not use the NSAPI I/O functions at all, then it can execute on a non-native thread. For this to happen, the function must be loaded with a "NativeThread=no" option indicating that it does not require a native thread. To do this, add this to the "load-modules" Init line in the obj.conf file:

Init funcs="pcheck_uri_clean_fixed_init" shlib="C:/Netscape/p186244/P186244.dll" fn="load-modules" NativeThread="no"

The NativeThread flag affects all functions in the funcs list, so if you have more than one function in a library but only some of them use native threads, use separate Init lines.
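For example, a library with one function that needs a native thread and two that do not could be split across two Init lines (the library path and function names below are placeholders, not from an actual configuration):

```
Init fn="load-modules" shlib="C:/mysrvfns/corpfns.dll" funcs="nonblocking_pcheck" NativeThread="no"
Init fn="load-modules" shlib="C:/mysrvfns/corpfns.dll" funcs="db_nametrans,db_service" NativeThread="yes"
```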

Idle/Peak/Limit
Idle indicates the number of threads that are currently idle. Peak indicates the peak number in the pool. Limit indicates the maximum number of native threads allowed in the thread pool, and is determined by the setting of the NSCP_POOL_THREADMAX environment variable.

Tuning

Modify the NSCP_POOL_THREADMAX environment variable.

Work queue length/Limit
These numbers refer to a queue of server requests that are waiting for the use of a native thread from the pool. The Work Queue Length is the current number of requests waiting for a native thread. Limit is the maximum number of requests that can be queued at one time to wait for a native thread, and is determined by the setting of the NSCP_POOL_WORKQUEUEMAX environment variable.

Tuning

Modify the NSCP_POOL_WORKQUEUEMAX environment variable.
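For example, on Windows NT both pool variables can be set in the server's environment before startup (the values shown are purely illustrative, not recommendations):

```
set NSCP_POOL_THREADMAX=128
set NSCP_POOL_WORKQUEUEMAX=400
```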

Peak work queue length
This is the highest number of requests that were ever queued up simultaneously for the use of a native thread since the server was started. This value can be viewed as the maximum concurrency for requests requiring a native thread.

This setting is not tunable.

Work queue rejections
This is the cumulative number of requests that have needed the use of a native thread, but that have been rejected due to the work queue being full. By default, these requests are rejected with a "503 - Service Unavailable" response.

This setting is not tunable.

PostThreadsEarly
This advanced tuning parameter changes the thread allocation algorithm by causing the server to check for threads available for accept before executing a request. The default is set to Off. It is recommended only when the load on the server consists primarily of lengthy transactions, such as LiveWire, the Netscape Application Server, or custom applications that access databases and other complex back-end systems. Turning this on allows the server to grow its thread pool more rapidly.

Tuning

Turn this parameter on by adding this directive to magnus.conf:

PostThreadsEarly 1

Thread Pool Environmental Variables
NSCP_POOL_WORKQUEUEMAX

This value defaults to 0x7FFFFFFF (a very large number). Setting this below the RqThrottle value causes the server to execute a busy function instead of the intended NSAPI function whenever the number of requests waiting for service by pool threads exceeds this value. The default returns a "503 Service Unavailable" response and logs a message if LogVerbose is enabled. Setting this above RqThrottle causes the server to reject connections before a busy function can execute.

This value represents the maximum number of concurrent requests for service which require a native thread. If your system is unable to fulfill requests due to load, letting more requests queue up increases the latency for requests and could result in all available request threads waiting for a native thread. In general, set this value to be high enough to avoid rejecting requests under "normal" conditions, which would be the anticipated maximum number of concurrent users who would execute requests requiring a native thread.

The difference between this value and RqThrottle is the number of requests reserved for non-native thread requests (such as static html, gif, and jpeg files). Keeping a reserve (and rejecting requests) ensures that your server continues to fill requests for static files, which prevents it from becoming unresponsive during periods of very heavy dynamic content load. If your server consistently rejects connections, this value is set too low or your server hardware is overloaded.
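To illustrate the reserve described above with illustrative values only: with RqThrottle at 512 and the work queue limit at 400, roughly 112 request threads remain reserved for non-native requests such as static files.

```
# magnus.conf
RqThrottle 512

# environment (Windows NT)
set NSCP_POOL_WORKQUEUEMAX=400

# 512 - 400 = 112 requests reserved for static content
```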

NSCP_POOL_THREADMAX

This value represents the maximum number of threads in the pool. Set this value as low as possible to sustain the optimal volume of requests. A higher value allows more requests to execute concurrently, but has more overhead due to context switching, so "bigger is not always better." If you are not saturating your CPU but you are seeing requests queue up, then increase this number. Typically, you will not need to increase this number.

Busy Functions
The default busy function returns a "503 Service Unavailable" response and logs a message if LogVerbose is enabled. You may wish to modify this behavior for your application. You can specify your own busy functions for any NSAPI function in the obj.conf file by including a service function in the configuration file in this format:

For example, you could use this sample service function:
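A hedged sketch of the format and sample, based on the NSAPI busy-parameter convention (the function names are illustrative; verify the exact syntax against the NSAPI programmer's guide for your release):

```
# Format: add a busy parameter to the directive
busy="<my-busy-function>"

# Example: a custom busy function for the send-cgi Service function
Service fn="send-cgi" busy="service-toobusy"
```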

This allows different responses if the server becomes too busy in the course of processing a request that includes a number of types (such as Service, AddLog, and PathCheck). Note that your busy function applies to all functions that require a native thread to execute when the default thread type is non-native.

To use your own busy function instead of the default busy function for the entire server, you can write an NSAPI init function that includes a func_insert call as shown below:
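In C-style pseudocode, such an init function might look like the following (function names and exact signatures are illustrative assumptions; consult the NSAPI programmer's guide for your release):

```
/* Pseudocode sketch: my_custom_busy_function and my_init are hypothetical names */
NSAPI_PUBLIC int my_custom_busy_function(pblock *pb, Session *sn, Request *rq);

NSAPI_PUBLIC int my_init(pblock *pb, Session *sn, Request *rq)
{
    /* Register our function as the server-wide busy handler */
    func_insert("service-toobusy", &my_custom_busy_function);
    return REQ_PROCEED;
}
```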

Busy functions are never executed on a pool thread, so you must be careful to avoid using function calls that could cause the thread to block.

Asynchronous DNS Lookup (Unix)
You can configure the server to use Domain Name System (DNS) lookups during normal operation. By default, DNS is not enabled; if you enable DNS, the server looks up the host name for a system's IP address. Although DNS lookups can be useful for server administrators when looking at logs, they can impact performance. When the server receives a request from a client, the client's IP address is included in the request. If DNS is enabled, the server must look up the hostname for the IP address for every client making a request.

DNS lookups cause multiple threads to be serialized when you use DNS services. If you do not want this serialization, enable asynchronous DNS. You can enable it only if you have also enabled DNS. Enabling asynchronous DNS can improve your system's performance if you are using DNS.

Note. If you turn off DNS lookups on your server, host name restrictions will not work, and hostnames will not appear in your log files. Instead, you'll see IP addresses.

You can also specify whether to cache the DNS entries. If you enable the DNS cache, the server can store hostname information after receiving it. If the server needs information about the client in the future, the information is cached and available without further querying. You can specify the size of the DNS cache and an expiration time for DNS cache entries. The DNS cache can contain 32 to 32768 entries; the default value is 1024 entries. Values for the time it takes for a cache entry to expire can range from 1 second to 1 year (specified in seconds); the default value is 1200 seconds (20 minutes).
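For example, both the cache size and the entry expiration time might be set on the dns-cache-init line in obj.conf (the expire parameter name is an assumption; verify it against the NSAPI reference for your release):

```
Init fn="dns-cache-init" cache-size=1024 expire=1200
```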

It is recommended that you do not use DNS lookups in server processes because they are so resource intensive. If you must include DNS lookups, be sure to make them asynchronous. For more information on asynchronous DNS, see The Performance Tuning Page.

enabled
If asynchronous DNS is disabled, the rest of this section will not be displayed.

Tuning

Add "AsyncDNS on" to magnus.conf.

NameLookups
The number of name lookups (DNS name to IP address) that have been done since the server was started.

This setting is not tunable.

AddrLookups
The number of address lookups (IP address to DNS name) that have been done since the server was started.

This setting is not tunable.

LookupsInProgress
The current number of lookups in progress.

This setting is not tunable.


File Cache in Enterprise Server 4.0
Enterprise Server 4.0 uses a new file cache module, NSFC, which caches static HTML, image, and sound files. In previous versions of Enterprise Server, the file cache was integrated with the accelerator cache for static pages. An HTTP request was therefore either serviced by the accelerator or passed to the NSAPI engine for full processing, and requests that could not be accelerated did not have the benefit of file caching. This prevented many sites that used NSAPI plug-ins, customized logs, or server-parsed HTML from taking advantage of the accelerator.

The NSFC module implements an independent file cache used in the NSAPI engine to cache static files that could not be accelerated. It is also used by the accelerator cache, replacing its previously integrated file cache. NSFC has also been used to cache information that is used to speed up processing of server-parsed HTML.

File Cache Configuration
The file cache is configured in the nsfc.conf configuration file located in the server_root/https-server-id/config directory. You can tune the file cache configuration parameters for improved performance.

FileCacheEnable
Whether the file cache is enabled or not.

By default, this is set to true.

FileCacheEnable=true

MaxAge
The maximum age (in seconds) of a valid cache entry. This setting controls how long cached information will continue to be used once a file has been cached. An entry older than MaxAge is replaced by a new entry for the same file if the same file is referenced through the cache. This setting works in conjunction with FlushInterval.

By default, this is set to 30.

MaxAge=30

MaxFiles
The maximum number of files that may be in the cache at once.

By default, this is set to 256.

MaxFiles=256

FlushInterval
The interval (in seconds) at which a reaper thread looks for cache entries that are older than MaxAge and deletes them.

By default, this is set to 60.

FlushInterval=60

SmallFileSizeLimit
The size (in bytes) of the largest file considered to be "small". The contents of "small" files are cached by allocating heap space and reading the file into it.

By default, this is set to 2048.

SmallFileSizeLimit=2048

SmallFileSpace
The size of heap space (in bytes) used for the cache, including heap space used to cache small files.

By default, this is set to 128000 (128 KB).

SmallFileSpace=128000

MediumFileSizeLimit (Unix)
The size (in bytes) of the largest file considered to be "medium" size (that is, larger than a "small" file). The contents of medium files are cached by mapping the file into virtual memory (currently only on Unix platforms). The contents of "large" files (larger than "medium") are not cached, although information about large files is cached.

By default, this is set to 128000 (128 KB).

MediumFileSizeLimit=128000

MediumFileSpace
The size (in bytes) of the virtual memory used to map all medium sized files.

By default, this is set to 4000000 (4 MB).

MediumFileSpace=4000000

TransmitFile
When TransmitFile is set to "true," open file descriptors are cached for files in the file cache, rather than the file contents, and PR_TransmitFile is used to send the file contents to a client. When set to "true," the distinction normally made by the file cache between small, medium, and large files no longer applies, since only the open file descriptor is being cached. By default, TransmitFile is "false" on Unix and "true" on Windows NT.

This directive is intended to be used on Unix platforms that have native OS support for PR_TransmitFile, which currently includes HPUX and AIX. It is not recommended for other Unix platforms.

TransmitFile="true"

File Cache Dynamic Control and Monitoring
An object can be added to obj.conf to enable the NSFC file cache to be dynamically monitored and controlled while the server is running. Typically this would be done by first adding a NameTrans directive to the "default" object:

Then add a new object definition:

This enables the file cache control and monitoring function to be accessed via the URI, "/nsfc." By changing the "from" parameter in the NameTrans directive, a different URI can be used.
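Based on the description above, the two obj.conf additions might look like the following sketch (the assign-name and service-nsfc-dump function names are assumptions based on the NSFC module; verify them against your release):

```
# In the "default" object:
NameTrans fn="assign-name" from="/nsfc" name="nsfc"

# New object definition:
<Object name="nsfc">
Service fn="service-nsfc-dump"
</Object>
```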

Accessing this URI displays the following information:

The file listing includes the file name, a set of flags, the current number of references to the cache entry, the size of the file, and an internal file ID value. The flags are as follows:

A query string can be included when the "/nsfc" URI is accessed. The following values are recognized:

For sites with scheduled updates to content, consider shutting down the cache while the content is being updated, and starting it again after the update is complete. Although performance will slow down, the server operates normally when the cache is off.

Cache-init
The cache-init function controls file caching. The server caches files to improve performance. To optimize server speed, you should have enough RAM for the server and cache because swapping can be slow. Do not allocate a cache that is greater in size than the amount of memory on the system.

Note. In Enterprise Server 4.0, much of the functionality of the file cache is controlled by a new configuration file called nsfc.conf.

Parameters:
cache-size
(optional) Specifies the size of the cache. Valid values for the number of elements in the cache are 32 to 32768; the default is 512. The cache-size value should be greater than the number of documents on your server. You should include any static files such as HTML, text, images, sounds, or any other unchanging data in the count. URLs that are dynamic (such as CGI or NSAPI routines) return different data, depending on who calls them, and should not be counted.
mmap-max
(optional) Specifies the maximum amount of memory set aside for memory-mapped (mmap) files the server will keep open at any point. Acceptable values range from 512 KB to (512*1024) KB; the default is 10000 KB (10 MB). To get maximum speed, the cache keeps many mmap files open. To estimate the optimal value for mmap-max on your system, approximately compute the total number of bytes of "static" data on your system. For example, if you have 200 files that are 10K in size, then 2 MB should be sufficient for mmap-max.
disable
(optional) Specifies whether the file cache is disabled or not. If set to anything but "false" the cache is disabled. By default, the cache is enabled.
PollInterval
(optional) How often the files in the file cache are checked for changes. The default is 5 seconds. In Enterprise Server 4.0, this parameter is ignored -- use the MaxAge parameter in the nsfc.conf file instead.
MaxNumberOfCachedFiles
(optional) Maximum number of entries in the accelerator cache. The default is 4096, the minimum is 32, and the maximum is 32K.
MaxNumberOfOpenCachedFiles
(optional) Maximum number of cached files that can be open simultaneously (accelerator cache entries with file cache entries). The default is 512, the minimum is 32, and the maximum is 32K.
MaxCachedFileSize
(optional) Maximum size of a file that can be cached. Files larger than this size are not cached. The default is 525K.
In Enterprise Server 4.0, this parameter is ignored. Use the MediumFileSizeLimit parameter in nsfc.conf instead.
MaxTotalCachedFileSize
(optional) Total size of all files in the cache. The default is 10K, the minimum is 1K, and the maximum is 16M.
In Enterprise Server 4.0, this parameter is ignored on Unix; use the MediumFileSpace parameter in nsfc.conf instead. On other platforms it is ignored because it no longer applies.
CacheHashSize
(optional) Size of the hash table for the file cache accelerator. The default is 8192, the minimum is 32, and the maximum is 32K.
NoOverflow
(optional) IRIX only.
IsGlobal
(optional) IRIX only.

Example
Init fn=cache-init cache-size=16000 mmap-max=10000

File Cache Tuning
MaxFiles, SmallFileSpace, and MediumFileSpace
Size the cache appropriately for the content being served, using parameters such as MaxFiles, SmallFileSpace, and MediumFileSpace.

MaxAge
Set MaxAge based on whether the content is updated (existing files are modified) on a regular schedule or not. For example, if content is updated four times a day at regular intervals, MaxAge could be set to 21600 seconds (6 hours). Otherwise, consider setting MaxAge to the longest time you are willing to serve the previous version of a content file, after the file has been modified.

FlushInterval
Set FlushInterval to a short enough time that the cache does not become full of expired entries.

SmallFileSizeLimit
The idea of distinguishing between small files and medium files is to avoid wasting part of many pages of virtual memory when there are lots of small files. So the SmallFileSizeLimit would typically be a slightly lower value than the VM page size.


Improving Servlet Performance
Using the NSAPI cache improves servlet performance in cases where the obj.conf configuration file has many directives. To enable the NSAPI cache, include the following line in obj.conf:

It is advisable to make the servlet engine NameTrans directive (NameTrans fn="NSServletNameTrans" name="servlet") the first in the list.

This directive uses a highly optimized URI cache for loaded servlets and returns REQ_PROCEED when a match is found, eliminating the need to execute the other NameTrans directives.

The jvm.conf/jvm12.conf file has a configuration parameter called jvm.stickyAttach. Setting the value of this parameter to "1" causes threads to remember that they are attached to the JVM, speeding up request processing by eliminating AttachCurrentThread and DetachCurrentThread calls. It can, however, have a side effect: recycled threads that may be doing other processing can be suspended arbitrarily by the garbage collector. Thread pools can be used to eliminate this side effect for other subsystems.
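For example, in jvm.conf or jvm12.conf:

```
jvm.stickyAttach=1
```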

Thread Pools
Enterprise Server 4.0 allows you to specify a number of configurable native thread pools by adding the following directives to obj.conf:

A pool must be declared before it is used. To use the pool, add a pool=name_of_the_pool parameter to the load-modules directive of the appropriate subsystem. The older parameter NativeThread=yes always engages one default native pool, called NativePool.
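A sketch of such a declaration and its use, assuming the standard thread-pool-init interface (the parameter names, values, and library path here are assumptions; verify them against your release):

```
Init fn="thread-pool-init" name="my-custom-pool" maxthreads=5 minthreads=1 queueSize=200
Init fn="load-modules" shlib="C:/mydir/myplugin.dll" funcs="myservice" pool="my-custom-pool"
```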

In addition to configuring the native pool parameters on Windows NT with the environment variables beginning with "NSCP_POOL" (as in Enterprise Server 3.6), the following parameters can be added to magnus.conf for convenience:
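The magnus.conf equivalents are believed to be the NativePool* directives (the directive names and the values shown are assumptions; verify them against your release):

```
NativePoolStackSize 131072
NativePoolMaxThreads 100
NativePoolMinThreads 1
NativePoolQueueSize 200
```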

Any of the parameters can be omitted to reflect the default behavior.

The native pool on Unix is normally not engaged, as all threads are OS-level threads. Using native pools on Unix may introduce a small performance overhead because they require an additional context switch; however, they can be used to localize the jvm.stickyAttach effect or for other purposes, such as resource control and management, or to emulate the single-threaded behavior of plug-ins (by setting maxThreads=1).

On Windows NT, however, at least the default native pool is always used, and Enterprise Server uses fibers (user-scheduled threads) for initial request processing. Using custom or additional pools on Windows NT introduces no additional overhead.


Common Performance Problems
This section discusses a few common performance problems to check for on your web site:

Low-Memory Situations
If you need Enterprise Server to run in low-memory situations, try reducing the thread limit to a bare minimum by lowering the value of RqThrottle in your magnus.conf file. You may also want to reduce the maximum number of processes that Enterprise Server spawns by lowering the value of MaxProcs in the magnus.conf file.
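For example, a minimal-footprint magnus.conf might include lines like these (the values are purely illustrative, not recommendations):

```
RqThrottle 128
MaxProcs 1
```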

Under-Throttled Server
The server does not allow the number of active threads to exceed the Thread Limit value. If the number of simultaneous requests reaches that limit, the server stops servicing new connections until the old connections are freed up. This can lead to increased response time.

In Enterprise Server, the server's default RqThrottle value is 512. If you want your server to accept more connections, you need to increase the RqThrottle value.

Checking
The symptom of an under-throttled server is a server with a long response time. Making a request from a browser establishes a connection fairly quickly to the server, but on under-throttled servers it may take a long time before the response comes back to the client.

The best way to tell if your server is being throttled is to look at the WaitingThreads count. If this number is getting close to 0 or is 0, then the server is not accepting new connections right now. Also check to see if the number of ActiveThreads and BusyThreads are close to their limits. If so, the server is probably limiting itself.

Tuning
See About RqThrottle .

Cache Not Utilized
If the cache is not utilized, your server is not performing optimally. Since most sites have lots of GIF or JPEG files (which should always be cacheable), you need to use your cache effectively.

Some sites, however, do almost everything through CGIs, shtml, or other dynamic sources. Dynamic content is generally not cacheable and inherently yields a low cache hit rate. Don't be too alarmed if your site has a low cache hit rate. The most important thing is that your response time is low. You can have a 0% cache hit rate and still have very good response time. As long as your response time is good, you may not care that the cache hit rate is low.

Checking
Begin by checking your Hit Ratio. This is the percentage of all hits to your server that were served from the cache. A good cache hit rate is anything above 50%. Some sites may even achieve 98% or higher.

In addition, if you are doing a lot of CGI or NSAPI calls, you may have a low cache hit rate.

Tuning
If you have custom NSAPI functions (nametrans, pathcheck, etc), you may have a low cache hit rate. If you are writing your own NSAPI functions, be sure to see the programmer's guide for information on making your NSAPI code cacheable as well.

KeepAlive Connections Flushed
A web site that can service 75 requests per second without keepalives may be able to do 200-300 requests per second when keepalives are enabled. Therefore, as a client requests various items from a single page, it is important that keepalives are used effectively. If the KeepAliveCount exceeds the KeepAliveMaxCount, subsequent KeepAlive connections are closed (or "flushed") instead of being honored and kept alive.

Checking
Check the KeepAliveFlushes and KeepAliveHits values. On a site where KeepAlives are running well, the ratio of KeepAliveFlushes to KeepAliveHits is very low. If the ratio is high (greater than 1:1), your site is probably not utilizing the HTTP KeepAlives as well as it could.

Tuning
To reduce KeepAlive flushes, increase the MaxKeepAliveConnections value in the magnus.conf file. The default value is 200. By raising the value, you keep more waiting keepalive connections open.

Warning. On Unix systems, if you increase the MaxKeepAliveConnections value too high, the server can run out of open file descriptors. Typically 1024 is the limit for open files on Unix, so increasing this value above 500 is not recommended.
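For example, a moderate increase that stays within the Unix file-descriptor limits noted in the warning (the value is illustrative):

```
MaxKeepAliveConnections 400
```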

Log File Modes
Keeping the log files in verbose mode can have a significant effect on performance.

The internal "accelerated" path can provide only these variables: Client-Host, Full-Request, Method, Protocol, Query-String, URI, Referer, User-Agent, Authorization, and Auth-User. If you log any other, more "obscure" variable, it cannot be provided by the accelerated path, so the accelerated path is not used at all. Performance therefore decreases significantly for requests that would typically benefit from the accelerator, for example static files and images.

Enterprise Server 4.0 has a relaxed logging mode that eases the requirements on the log subsystem. Adding "relaxed.logname=anything" to the "flex-init" line in obj.conf changes the behavior of the server in the following way: logging variables other than the "blessed few" no longer prevents the accelerated path from being used. If the accelerator is used, any "non-blessed" variable (which is then not available internally) is logged as "-". The server does not use the accelerator for dynamic content such as CGIs or SHTML, so all variables are logged correctly for those requests.
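For example, the parameter is appended to the existing flex-init line (your line will also carry its access-log path and format parameters; the path shown here is a placeholder):

```
Init fn="flex-init" access="logs/access" relaxed.logname=anything
```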

Using Local Variables
The JavaScript virtual machine in Enterprise Server 4.0 implements significant improvements in processing local variables (variables declared inside a function). Therefore, you should minimize the use of global variables (variables declared between the <server> and </server> tags), and write applications to use functions as much as possible. This can improve the application performance significantly.


Benchmarking the Netscape Enterprise Server
This section describes how to benchmark the Enterprise Server using SpecWeb and WebStone.

For optimal performance in benchmark situations, make sure that Web Publishing and Search are disabled. To do this, complete the following steps in your Server Manager:

  1. Go to the Web Publishing tab.
  2. Click the Web Publishing State page.
  3. Change the state to off if it is on, and apply the change.
  4. Go to the Search tab.
  5. Click the Search State page.
  6. Change the state to off if it is on, and apply the change.
SPECweb96 Tuning
SPECweb96 uses a very large data fileset. This fileset far exceeds the Netscape Enterprise Server's expected fileset size. For instance, on the Netscape home web site, you can obtain more than an 80% cache-hit rate with a 10MB cache. With SPECweb96, a 10MB cache can yield cache-hit rates as low as 20% (this number varies depending on the SPECweb96 load requested; larger loads use larger filesets).

Note. To optimize performance in a SPECweb96 test, use a machine that has enough RAM to cache the entire file set and increase the web server's cache size (see Cache Information).

  1. Calculate the number of files used in the SPECweb96 fileset. The SPECweb96 fileset size is based on the number of SPECweb96 OPS requested in the test run. The following formula calculates the number of files:
  2. Increase the number of files that the web server's cache will hold to the number of files calculated in step 1.
  3. In the obj.conf configuration file, use the line:

    Append these values to the cache-init line:

  4. Increase the web server's total cache size to be large enough to hold the entire SPECweb96 fileset.
  5. Increase the web server's largest cached file size to be large enough to hold all SPECweb files.
  6. Increase the cache poll interval.
  7. Change the PollInterval directive to increase the cache refresh interval (default is 5 seconds). For example, a value of 30000 seconds (approximately 8 hours) should keep cache checks from happening within a SPECweb96 run.

  8. Change some magnus.conf directives.
  9. Change these additional directives in magnus.conf to further improve performance:

    DaemonStats is on by default. A DaemonStats off directive in magnus.conf disables the collection of server statistics. DaemonStats is probably the most important directive for HTTP GET benchmarks; it disables some of the daemon statistics gathering in the web server.

    ACLFile is a magnus.conf directive to load ACLs. Disable all ACLFile directives for maximum performance.

    RqThrottle specifies the maximum number of simultaneous transactions the web server can handle. For more information, see About RqThrottle.

Table 10.1 SPECweb96 OPS, files, and fileset sizes

Max Requested OPS    Number of files    Fileset size (bytes)
500                  3636               517,066,470
1000                 5112               726,964,740
2000                 7236               1,029,013,470

WebStone Tuning
For WebStone performance benchmarking, there is no additional web server tuning required as long as the fileset size is relatively small. If the WebStone fileset size is large, you should increase the web server's cache size (See the SPECweb tuning guide for information on tuning the cache).


Sizing Issues
This section examines subsystems of your server and makes some recommendations for optimal performance:

Processors
On Solaris and Windows NT, Enterprise Server transparently takes advantage of multiple CPUs. The effectiveness of multiple CPUs varies with the OS and with the nature of the workload. In general, Solaris scales better than Windows NT. The CGI, static SSL, and mixed SSL workloads scale dramatically better on Solaris than on NT, while the NSAPI, static HTML, and mixed workloads scale within a few percent of each other on the two platforms.

With static HTML, Windows NT has some non-standard extensions to the WinSock API that allow static files to be transmitted more efficiently than the BSD sockets API can accomplish. These extensions also allow the system to spend more time in the NT kernel, where SMP scaling has apparently been better optimized than on Solaris. The mixed workload is clearly dominated by static HTML, so it scales similarly. In the case of NSAPI, more time is spent in the NSAPI shared library, which is functionally identical on Windows NT and Solaris - so the scaling is similar.

The Solaris SSL test indicates that Solaris can actually exceed ideal SMP scaling - this is probably due to improved L2 cache coherency, so that the additional 3 CPUs effectively yield an L2 cache four times as large.

In general, SSL and CGI gained the most from additional CPUs, while static pages gained the least. This is partly due to the fact that static performance is very fast, and couldn't get much faster without saturating the test network. It's also due in part to the fact that static HTML pages are mapped to shared memory, so cache flushes impact performance more heavily than in the more parallelizable SSL case.

Scaling of less than 25% would mean that the four-CPU server performs more slowly than the one-CPU server. This does not happen in our testing, but the 4-CPU static workload comes closest. Again, this is because the 100BaseT network was close to saturation, so the server CPU was not the bottleneck. This should not be regarded as a defect in Enterprise Server, but rather as a limitation of the test environment.

Memory
As a baseline, Enterprise Server requires 64MB RAM. If you have multiple CPUs, get at least 64MB per CPU. For example, if you have four CPUs, you should install at least 256MB RAM for optimal performance. At high numbers of peak concurrent users, also allow extra RAM for the additional threads. After the first 50 concurrent users, add an extra 512KB per peak concurrent user.
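As an illustrative back-of-envelope check of the sizing rules above (this helper is not part of the server, just arithmetic on the stated guidance):

```python
def ram_mb(cpus, peak_users):
    """Estimate RAM (MB): 64 MB per CPU, plus 512 KB per peak
    concurrent user beyond the first 50."""
    base = cpus * 64
    extra_kb = max(0, peak_users - 50) * 512
    return base + extra_kb // 1024

# 4 CPUs, 250 peak concurrent users
print(ram_mb(4, 250))  # -> 356
```

So a four-CPU server expecting 250 peak concurrent users would want roughly 356 MB of RAM under these rules.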

Drive Space
You need to have enough drive space for your OS, document tree, and log files. In most cases 2GB total is sufficient.

Put the OS, swap/paging file, Enterprise Server logs, and document tree each on separate hard drives. Thus, if your log files fill up the log drive, your OS will not suffer. Also, you'll be able to tell whether, for example, the OS paging file is causing drive activity.

Your OS vendor may have specific recommendations for how much swap or paging space you should allocate. Based on our testing, Enterprise Server 4.0 performs best with swap space equal to RAM, plus enough to map the document tree.

Networking
For an Internet site, decide how many peak concurrent users you need the server to handle, and multiply that number of users by the average request size on your site. Your average request may include multiple documents. If you're not sure, try using your home page and all its associated subframes and graphics.

Next decide how long the average user will be willing to wait for a document, at peak utilization. Divide by that number of seconds. That's the WAN bandwidth your server needs.

For example, to support a peak of 50 users with an average document size of 24kB, and transferring each document in an average of 5 seconds, we need 240 kB/s - or 1920 kbit/s. So our site needs two T1 lines (each 1544 kbit/s). This allows some overhead for growth, too.
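The worked example above can be sketched as a small calculation (illustrative only):

```python
T1_KBITS = 1544  # capacity of one T1 line, kbit/s

def wan_kbits(peak_users, avg_request_kb, seconds):
    """WAN bandwidth needed: users * request size / acceptable wait,
    converted from kB/s to kbit/s."""
    return peak_users * avg_request_kb * 8 / seconds

needed = wan_kbits(50, 24, 5)
print(needed)             # -> 1920.0
print(needed / T1_KBITS)  # a little over one T1 line, hence two T1s
```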

Your server's network interface card should support more than the WAN it's connected to. For example, if you have up to 3 T1 lines, you can get by with a 10BaseT interface. Up to a T3 line (45 Mbit/s) you can use 100BaseT. But if you have more than 50 Mbit/s of WAN bandwidth, consider configuring multiple 100BaseT interfaces, or look at Gigabit Ethernet technology.

For an Intranet site, your network is unlikely to be a bottleneck. However, you can use the same calculations as above to decide.

 

Copyright © 1999 Sun Microsystems, Inc. Some preexisting portions Copyright © 1999 Netscape Communications Corp. All rights reserved.