

Glossary

application component.   A programmable element of an application. Examples of application components are servlets, JSPs, EJBs, and AppLogics.

cluster.   A group of NAS installations that participate in distributed synchronization of state and session data. A cluster is not simply a group of NAS servers that load-balance incoming requests among themselves; in NAS usage, the term refers specifically to data synchronization, in which servers do not participate by default. You set up a cluster by assigning a role to each server in the cluster, either when you install the server or later by adjusting settings in the registries of all the cluster members.

concurrent users.   Users who are on the system simultaneously, whether they are actively submitting requests or merely in think time (viewing data rather than submitting requests). Even a user who is not submitting a request adds load to system resources simply by being logged on to the system.

deployment.   The process of setting up NAS at your site. In this book, deployment refers not to deployment of applications but to deployment of the entire server system.

firewall topology.   A schematic layout of where the firewall exists within your network and how the rest of your enterprise interacts with it. See also topology.

load balancing.   The process by which user requests are sent to the server with the least load, and away from servers too busy to handle additional requests.

name-value pair list (application).   See request/response objects.

name-value pair list (session).   See session object.

peak capacity.   The maximum number of concurrent users that the system can sustain before requests per minute start to decline and response time starts to increase.

peak load.   The maximum number of concurrent users that the system should ideally support based on the pattern of activity that typically exists on your system.

performance.   Performance consists of two measurable factors: throughput, the primary measure, and, in some instances, response time.

request.   A single user's request for data and the return of that data by the server. The request makes a round trip: from the user who submits the request, to the server, and back from the server, which returns the result of the request to the user.

requests per minute.   The number of round-trip transactions that the system can handle per minute. Used as a measurement of NAS performance.

request/response objects.   The input and output lists sent to and from an application component, such as a servlet. These lists contain information such as standard HTTP headers (including the server name, server port, path information, query strings, and so on) and any cookies sent by the requesting client, such as a session ID. The lists are created automatically as part of your application processing.
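A minimal sketch of reading from and writing to these objects, assuming the standard Java Servlet API (HttpServletRequest and HttpServletResponse) rather than any NAS-specific interface; the class name is hypothetical:

    import java.io.IOException;
    import java.io.PrintWriter;
    import javax.servlet.ServletException;
    import javax.servlet.http.Cookie;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Hypothetical servlet that echoes a few request-object entries
    // back to the client through the response object.
    public class RequestEchoServlet extends HttpServlet {
        protected void doGet(HttpServletRequest req, HttpServletResponse res)
                throws ServletException, IOException {
            res.setContentType("text/plain");
            PrintWriter out = res.getWriter();

            // Standard HTTP header information carried in the request object.
            out.println("Server name:  " + req.getServerName());
            out.println("Server port:  " + req.getServerPort());
            out.println("Path info:    " + req.getPathInfo());
            out.println("Query string: " + req.getQueryString());

            // Cookies sent by the requesting client, such as a session ID.
            Cookie[] cookies = req.getCookies();
            if (cookies != null) {
                for (int i = 0; i < cookies.length; i++) {
                    out.println("Cookie: " + cookies[i].getName()
                            + "=" + cookies[i].getValue());
                }
            }
        }
    }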

response time.   The time it takes to process a request. The faster the response time, the more requests per minute the system can process. Measured in seconds (or minutes).

scalability.   The ability to support an increase in users and requests (an increase in load) by proportionally increasing hardware resources, without degrading response time or system stability. If response time degrades despite the addition of hardware resources in proportion to the increase in concurrent users, the system is not scalable.

scaling factor.   The overhead that results from adding more servers to a cluster. Adding servers improves throughput, but each additional server introduces more overhead in the form of load balancing and data synchronization, so each added server must devote some of its resources to this overhead rather than solely to servicing requests.

session.   A grouping of information associated with an application user. The information is created only if your application explicitly calls the getSession() method (for servlets) or the CreateSession() method (for AppLogics).
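A minimal sketch of the servlet case, assuming the standard Java Servlet API (the AppLogic CreateSession() variant is not shown); the class name is hypothetical:

    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import javax.servlet.http.HttpSession;

    // Hypothetical servlet: no session information exists for the user
    // until getSession() is called.
    public class SessionStartServlet extends HttpServlet {
        protected void doGet(HttpServletRequest req, HttpServletResponse res)
                throws ServletException, IOException {
            // Passing true creates a session if one does not already exist.
            HttpSession session = req.getSession(true);

            res.setContentType("text/plain");
            res.getWriter().println("Session " + session.getId()
                    + (session.isNew() ? " was just created" : " already existed"));
        }
    }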

session object.   A list of name-value pair entries of data associated with a session. The entries hold user-specific and interaction-specific information, such as user preferences, security credentials, shopping cart contents, and so on.
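Using the standard servlet HttpSession interface as an illustration, entries are stored and retrieved by name with putValue() and getValue(); the entry names and helper class below are hypothetical:

    import javax.servlet.http.HttpSession;

    // Hypothetical helper showing name-value entries on a session object.
    public class SessionEntries {
        public static void store(HttpSession session) {
            // User-specific and interaction-specific entries, stored by name.
            session.putValue("userName", "jdoe");
            session.putValue("cartItemCount", new Integer(3));
        }

        public static String readUserName(HttpSession session) {
            // Entries come back as Object and must be cast to their stored type.
            return (String) session.getValue("userName");
        }
    }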

steady state.   The median level of traffic on your NAS system or over your network. When your NAS system reaches steady state, it maintains capacity, that is, a steady number of requests per minute, even as the number of concurrent users continues to increase. In steady state, the system is not processing at peak capacity; it simply sustains that steady capacity as the number of concurrent users on the system rises.

sticky application component.   An application component that has been intentionally marked to be processed by the same NAS machine or process where it was initially invoked.

think time.   The time a user spends reviewing and analyzing the result of a processed request before submitting a new request. The user is not performing any actions against the server, and thus a request is not being generated during think time.

throughput.   Capacity, or the number of requests that NAS can service in a given time period. A request consists of a single user's attempt to access data, and the return of that data by the server. Throughput is measured in requests per minute or requests per second.
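As a rough, back-of-the-envelope illustration (an assumption for clarity, not a figure from this guide): if each concurrent user submits a request, waits for the response, and then spends some think time before submitting the next request, then approximately

    requests per minute ≈ (concurrent users × 60) / (think time + response time)

with think time and response time in seconds. For example, 300 concurrent users with 28 seconds of think time and a 2-second response time generate roughly 300 × 60 / 30 = 600 requests per minute.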

topology.   A schematic layout of your network. It is the logical map of your servers, hosts, clients, and other elements, and it shows the connections made between them.

 

© Copyright 1999 Netscape Communications Corp.