
Sun Java System Application Server Enterprise Edition 8 2004Q4 Beta Performance Tuning Guide 

Chapter 1
About Application Server Performance

You can significantly improve the performance of your applications and of the Application Server itself by adjusting a few deployment and server configuration settings. However, it is important to understand your environment and performance goals: an optimal configuration for a production environment is not necessarily optimal for a development environment.

This guide helps you understand the tuning and sizing options available, and describes the capabilities and practices for obtaining the best performance out of your Application Server.

This chapter discusses the following topics:

  - Process Overview
  - General Tuning Concepts
  - User Expectations
  - Further Information

Process Overview

The following table outlines the overall administration process and shows where performance tuning fits in the sequence.

Table 1-1 Performance Tuning Roadmap

| Step | Description of Task | Location of Instructions |
|------|---------------------|--------------------------|
| 1 | Design: Decide on your high-availability topology and set up your systems. | Deployment Planning Guide |
| 2 | Capacity Planning: Make sure the systems have sufficient resources to perform well. | Deployment Planning Guide, Appendix X |
| 3 | Installation: Install the HADB software with or without the Application Server software. | Installation Guide |
| 4 | Deployment: Install and run your applications. Familiarize yourself with how to configure and administer the Application Server subsystems and components. | Administration Guide |
| 5 | Tuning: Tune your application, Java Runtime System, operating system, high-availability database (HADB), and the Application Server. | Performance Tuning Guide |

Performance Tuning Sequence

Ideally, do performance tuning in the following sequence:

  1. Tune your application.
  2. Tune the Application Server.
  3. Tune the high-availability database (HADB).
  4. Tune the Java runtime system.
  5. Tune the operating system.

Application developers should tune applications prior to production use. Application tuning often produces dramatic improvements in performance.

The remaining steps in the preceding list are performed by the system administrator. Take those steps when the application has already been tuned, or when application tuning has to wait and you want to improve performance as much as possible in the meantime.


General Tuning Concepts

The previous discussion guides the administrator towards defining a preferred deployment architecture. However, the actual size of the deployment is determined by a process called capacity planning.

How can you predict either the capacity of a given hardware configuration, or the hardware resources required to sustain a specified application load and customer criteria? The answer is a careful performance benchmarking process, using the real application with realistic data sets and workload simulation. The basic steps are briefly described below.

  1. Determine performance on a single CPU.

     First, determine the largest load that can be sustained with a known amount of processing power. You can obtain this figure by measuring the performance of the application on a uniprocessor machine. Either leverage the performance numbers of an existing application with similar processing characteristics or, ideally, use the actual application and workload in a testing environment. Make sure that the application and data resources are configured in a tiered manner, exactly as they would be in the final deployment.

  2. Determine vertical scalability.

     Find out exactly how much additional performance is gained when you add processors. That is, you are indirectly measuring the amount of shared-resource contention that occurs on the server for a specific workload. Either obtain this information from additional load testing of the application on a multiprocessor system, or leverage existing information from a similar application that has already been load tested. Running a series of performance tests on one to eight CPUs, in incremental steps, generally provides a sense of the vertical scalability characteristics of the system. Make sure that the application, application server, backend database resources, and operating system are properly tuned so that they do not skew the results of this study.

  3. Determine horizontal scalability.

     If sufficiently powerful hardware resources are available, a single hardware node may meet the performance requirements. However, for better service availability, two or more systems may be clustered. Using an external load balancer and workload simulation, determine the performance benefit of replicating one well-tuned application server node, as determined in step 2.
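The vertical scalability study in step 2 can be sketched in code. The following is an illustrative example only (the class, method names, and sample numbers are this discussion's own inventions, not part of the Application Server product): given throughput measured while incrementally adding CPUs, it computes the marginal gain per added CPU and flags the "knee" where an extra CPU no longer pays for itself.

```java
// Illustrative sketch: locate the "knee" of a vertical-scalability curve.
public class ScalabilityCurve {

    // throughputPerCpuCount[i] is the measured throughput with i+1 CPUs.
    // Returns the CPU count after which the marginal gain drops below
    // minGainFraction of single-CPU throughput (e.g. 0.10 = 10%), or the
    // full CPU count if no knee is found in the measured range.
    public static int findKnee(double[] throughputPerCpuCount,
                               double minGainFraction) {
        double base = throughputPerCpuCount[0]; // single-CPU throughput
        for (int i = 1; i < throughputPerCpuCount.length; i++) {
            double marginalGain =
                throughputPerCpuCount[i] - throughputPerCpuCount[i - 1];
            if (marginalGain < minGainFraction * base) {
                return i; // the (i+1)th CPU gave an uneconomical gain
            }
        }
        return throughputPerCpuCount.length;
    }

    public static void main(String[] args) {
        // Hypothetical TPM measurements for 1..6 CPUs from a load test.
        double[] tpm = {100, 190, 270, 330, 350, 355};
        System.out.println("Knee at " + findKnee(tpm, 0.10) + " CPUs");
    }
}
```

With the sample data above, the sixth CPU adds only 5 TPM against a 10 TPM threshold, so the knee is reported at 5 CPUs. In practice you would repeat this measurement at each tier, as the table below notes.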

The following table describes the steps in capacity planning and the factors that affect performance. The leftmost column names the general concept, the second column gives the practical ramifications of the concept, the third column describes the measurements, and the rightmost column describes the value sources.

Table 1-2 Factors That Affect Performance - Applying Concepts

| Concept | Applying the Concept | Measurement | Value Sources |
|---------|----------------------|-------------|---------------|
| User Load | Concurrent sessions at peak load | Transactions Per Minute (TPM); Web Interactions Per Second (WIPS) | ((Number of concurrent users at peak load) * (Expected response time)) / (Time between clicks). For example, (100 concurrent users * 2 seconds response time) / (10 seconds between clicks) = 20. |
| Application Scalability | Transaction rate measured on one CPU | TPM or WIPS | Measured from a workload benchmark. Needs to be performed at each tier. |
| | Vertical scalability (additional performance per additional CPU) | Percentage gain per additional CPU | Based on curve fitting from the benchmark. Perform tests while gradually increasing the number of CPUs. Identify the "knee" of the curve, where additional CPUs provide uneconomical gains in performance. Requires tuning as described in later chapters of this guide. Needs to be performed at each tier and iterated if necessary. Stop here if this meets performance requirements. |
| | Horizontal scalability (additional performance per additional server) | Percentage gain per additional server process and/or hardware node | Use a well-tuned single application server instance, as in the previous step. Measure how much each additional server instance and/or hardware node improves performance. |
| Safety Margins | High availability requirements | If the system must cope with failures, size it to meet performance requirements assuming that one or more application server instances are not functional. | Different equations are used if high availability is required. |
| | Slack for unexpected peaks | It is desirable to operate a server at less than its benchmarked peak, for some safety margin. | 80% system capacity utilization at peak loads may work for most installations. Measure your deployment under real and simulated peak loads. |
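The user-load and safety-margin arithmetic from Table 1-2 can be sketched as a small calculation. This is a hypothetical helper written for this guide's example numbers; the class and method names are inventions, not an Application Server API.

```java
// Hypothetical helper illustrating the Table 1-2 capacity arithmetic.
public class CapacityPlanner {

    // (Concurrent users at peak load * expected response time in seconds)
    // / (think time between clicks in seconds) = required load.
    public static double requiredThroughput(int concurrentUsers,
                                            double responseTimeSec,
                                            double thinkTimeSec) {
        return (concurrentUsers * responseTimeSec) / thinkTimeSec;
    }

    // Apply the safety margin: size the system so the required load is
    // reached at the target utilization (e.g. 0.80 = 80% of benchmarked peak).
    public static double sizedCapacity(double requiredThroughput,
                                       double utilizationTarget) {
        return requiredThroughput / utilizationTarget;
    }

    public static void main(String[] args) {
        double load = requiredThroughput(100, 2.0, 10.0); // 20.0, as in Table 1-2
        System.out.println("Required load: " + load);
        // Sizing for 80% utilization leaves slack for unexpected peaks.
        System.out.println("Sized capacity at 80% utilization: "
                + sizedCapacity(load, 0.80));
    }
}
```

For the table's example of 100 concurrent users, a 2-second response time, and 10 seconds between clicks, this yields a required load of 20, and sizing for 80% utilization means provisioning for roughly 25.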

User Expectations

Every application user has expectations about application performance, and these can often be quantified numerically. The server administrator must understand these expectations clearly and use them in capacity planning, to ensure that the completed deployment meets customer needs.

With regard to performance, you need to consider the following:


Further Information





Copyright 2004 Sun Microsystems, Inc. All rights reserved.