
Capacity Planning


Examining Results from the Baseline Applications

The following sections provide information about the baseline applications:

  MedRec Benchmark Overview
  Changes to MedRec and Transactions Used for Benchmarking
  Overview of the Baseline Applications
  Configuration for MedRec Baseline Applications (Light and Heavy)
  Measured TPS for Light MedRec Application on UNIX
  Measured TPS for Heavy MedRec Application on UNIX
  Measured TPS for Light MedRec Application on Windows 2000
  Measured TPS for Heavy MedRec Application on Windows 2000
  Next Steps

 


MedRec Benchmark Overview

The MedRec sample application was used for the Light and Heavy-weight application tests. The difference between the two tests is the communication protocol used between client and server: the Light-weight test uses the HTTP 1.1 protocol from the clients, while the Heavy-weight test uses HTTPS 1.1, which encrypts the data exchanged between client and server.
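The difference is confined to the client-side transport. The following minimal Java sketch shows that contrast; the host, port, and page URL are placeholders rather than details taken from the benchmark harness, and the HTTPS call assumes the server certificate is already trusted by the client JVM:

    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import javax.net.ssl.HttpsURLConnection;

    public class ProtocolProbe {
        public static void main(String[] args) throws Exception {
            // Light-weight test: plain HTTP 1.1 (placeholder host, port, and page)
            HttpURLConnection http = (HttpURLConnection)
                new URL("http://medrec-host:7001/start").openConnection();
            drain(http.getInputStream());
            http.disconnect();

            // Heavy-weight test: the same request over HTTPS, so the payload is encrypted
            HttpsURLConnection https = (HttpsURLConnection)
                new URL("https://medrec-host:7002/start").openConnection();
            drain(https.getInputStream());
            https.disconnect();
        }

        // Read and discard the response body
        private static void drain(InputStream in) throws Exception {
            byte[] buffer = new byte[4096];
            while (in.read(buffer) != -1) { /* discard */ }
            in.close();
        }
    }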

Avitek Medical Records (or MedRec) is a WebLogic Server sample application suite that concisely demonstrates all aspects of the J2EE platform. MedRec is designed as an educational tool for all levels of J2EE developers; it showcases the use of each J2EE component, and illustrates best practice design patterns for component interaction and client development.

The MedRec application provides a framework for patients, doctors, and administrators to manage patient data using a variety of different clients. Patient data includes:

The MedRec application suite consists of two main J2EE applications, medrecEAR and physicianEAR. Each application supports one or more user scenarios for MedRec.

The medrecEAR application is deployed to a single WebLogic Server instance called MedRecServer. The physicianEAR application is deployed to a separate server, PhysicianServer, which communicates with the controller components of medrecEAR by using Web services.

 


Changes to MedRec and Transactions Used for Benchmarking

For capacity planning benchmarking purposes, the WebLogic Server 8.1 MedRec application was optimized: the patient registration process was made synchronous, and two-phase commit was omitted from the modified application. The client machine running the LoadRunner benchmarking software generates the load using the following sequence of transactions:

  1. A patient connects to the MedRec application and opens the Start page.
  2. The patient completes the registration process.
  3. A physician connects to the online system and signs in.
  4. The physician performs a lookup for a patient.
  5. The physician creates a new visit for the patient, creates and deletes prescriptions.
  6. The physician logs out.
  7. A patient signs in and edits the user profile.
  8. The patient makes changes to her profile and updates the information.
  9. The patient logs out.

Each action in the sequence above is treated as a transaction for computing the throughput. For each client, a unique patient ID is generated for every iteration.
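As an illustration only, the following Java sketch shows how a load driver of this kind might count each action as a transaction, generate a fresh patient ID per iteration, and compute throughput. The step URLs, host, and port are hypothetical stand-ins for the MedRec pages; the actual measurements were produced with LoadRunner, not with custom code:

    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.UUID;

    public class MedRecDriverSketch {
        public static void main(String[] args) throws Exception {
            String base = "http://medrec-host:7001";      // placeholder host and port
            long transactions = 0;
            long start = System.currentTimeMillis();

            for (int iteration = 0; iteration < 100; iteration++) {
                // A unique patient ID is generated for every iteration
                String patientId = "patient-" + UUID.randomUUID();

                String[] steps = {                          // hypothetical URLs for the nine actions
                    base + "/start",
                    base + "/register?id=" + patientId,
                    base + "/physician/login",
                    base + "/physician/search?id=" + patientId,
                    base + "/physician/visit?id=" + patientId,
                    base + "/physician/logout",
                    base + "/patient/login?id=" + patientId,
                    base + "/patient/profile?id=" + patientId,
                    base + "/patient/logout"
                };

                for (String step : steps) {
                    HttpURLConnection conn =
                        (HttpURLConnection) new URL(step).openConnection();
                    conn.getInputStream().close();          // issue the request, discard the response
                    conn.disconnect();
                    transactions++;                         // each action counts as one transaction
                    // no think time: the next request is issued immediately
                }
            }

            double elapsedSeconds = (System.currentTimeMillis() - start) / 1000.0;
            System.out.printf("TPS = %.2f%n", transactions / elapsedSeconds);
        }
    }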

 


Overview of the Baseline Applications

BEA produced baseline capacity planning results as a starting point. These baseline numbers were generated using three different applications:

You can use the metrics provided for each of these applications to set expectations for WebLogic Server throughput, response time, number of concurrent users, and the overall impact on performance of your choice of hardware and configuration.

The measurements in the systems under study were captured without adding a think-time factor to the transaction mix.

Note: For a capacity planning profile using the WebAuction application on standard hardware, see Chapter 15, "Analysis of Capacity Planning," in J2EE Applications and BEA WebLogic Server, Prentice Hall, 2001, ISBN 0-13-091111-9 at http://www.phptr.com.

 


Configuration for MedRec Baseline Applications (Light and Heavy)

For all measurements, the client applications and the Oracle database were kept on a 4x400 MHz Solaris system. A WebLogic Server instance was started on each processor/configuration combination listed. Baseline numbers from the capacity measurement runs for each of the three applications are listed in tabular form, after the description of the application.

The baseline numbers were collected using a three-tier configuration, separating the client applications (running LoadRunner benchmarking software), WebLogic Server containing the MedRec baseline application, and the Oracle database server. The database in the systems under study is an Oracle 8.1.7 database. All measurements were taken over a one-gigabit (1 Gb) network.

Figure 2-1 Three-Tier Configuration

WebLogic Server Configurations

The baseline numbers were collected using the following product versions and settings:

Measured Configurations

The following configurations were measured:

Database Configurations

Oracle 9.2.0.2.0 was used on an eight-way 750 MHz E6800. To improve performance, the Oracle logs (8 GB) were placed on one RAID array of four striped disks, and all other database files were placed on a second RAID array of four striped disks.

Database configurations were as follows:

Client Configurations

Client configurations were as follows:

Network Configuration

A one-gigabit (1 Gb) network was used.

LoadRunner Configurations

LoadRunner 7.02 was used for client load generation on a 4x700 MHz Windows 2000 machine with 4 GB of memory. No think time was used for the tests. HTTP/HTTPS version 1.1 was used.

About Baseline Numbers

The baseline numbers produced by the benchmarks used in this study should not be used to compare WebLogic Server with other application servers or hardware running similar benchmarks.  The benchmark methodology and tuning used in this study are unique.

A number of benchmarks show how a well-designed application can perform on WebLogic Server. These benchmarks are available from BEA Systems. For more information, contact your BEA Systems sales representative.

 


Measured TPS for Light MedRec Application on UNIX

The measurements for the light MedRec application on UNIX were obtained using Solaris hardware and the HTTP protocol. Table 2-1 lists the number of clients in the first column, and the hardware configuration with transactions per second in the remaining columns. For this application on UNIX, the configuration designation is light medrec UNIX n (lmUn), where n is the configuration number.

(TPS = Transactions Per Second)

Table 2-1 Number of Clients x TPS for Light MedRec Application on UNIX

Config #                                    lmU1    lmU2    lmU3    lmU4    lmU5     lmU6     lmU7     lmU8
Processor (MHz)                             1x750   2x750   4x750   8x750   4x400    4x400    4x400    4x400
                                                                            1-node   2-node   3-node   4-node

Number of Clients                           TPS     TPS     TPS     TPS     TPS      TPS      TPS      TPS
1 client                                    16.20   19.15   21.45   22.63   15.69    -        -        -
4 clients                                   17.20   29.80   47.95   58.12   34.54    53.57    58.67    63.33
10 clients                                  16.74   30.12   53.87   80.96   38.87    76.82    102.48   -
20 clients                                  17.04   30.54   55.76   83.84   39.17    81.14    120.73   154.76
40 clients                                  16.72   30.81   54.02   84.66   39.32    81.65    122.50   163.23
80 clients                                  16.83   30.71   54.33   84.22   39.02    80.64    121.47   163.84
100 clients                                 16.85   30.71   54.36   84.09   38.78    80.24    122.26   163.86
150 clients                                 -       -       -       -       -        -        121.38   162.71
200 clients                                 -       -       -       -       -        -        -        163.10

Max TPS                                     17.20   30.81   55.76   84.66   39.32    81.65    122.50   163.86
Appserver CPU Utilization for Max TPS       98%     97%     95%     91%     95.09%   95.58%   95.35%   95.22%
DB Server CPU Utilization for Max TPS       <1%     1.30%   2.25%   3.91%   1.70%    3.64%    6.21%    9.70%
DB Server Disk Utilization for Max TPS      6%      12%     19%     28.24%  13.94%   27.79%   40.58%   53.25%

(- = not measured)

Table 2-2 summarizes the TPS achieved for each processor type/configuration combination measured, and identifies the number of concurrent clients running to achieve the measured TPS result.

Table 2-2 Measured TPS for Light MedRec Application on UNIX

Processor Type, Configuration    Config #    Measured TPS    Number of Clients
Solaris 1x750 MHz                lmU1        17.20           4
Solaris 2x750 MHz                lmU2        30.81           40
Solaris 4x750 MHz                lmU3        55.76           20
Solaris 8x750 MHz                lmU4        84.66           40
Solaris 4x400 - 1 Node           lmU5        39.32           40
Solaris 4x400 - 2 Node           lmU6        81.65           40
Solaris 4x400 - 3 Node           lmU7        122.50          40
Solaris 4x400 - 4 Node           lmU8        163.86          100
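To illustrate how these figures can be read, the clustered 4x400 MHz results scale almost linearly: the four-node configuration (lmU8) delivers 163.86 TPS, or about 163.86 / (4 x 39.32) = 1.04 times four single-node (lmU5) results, and the two- and three-node configurations show a similar ratio.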

 


Measured TPS for Heavy MedRec Application on UNIX

The measurements for the heavy MedRec application on UNIX were obtained using Solaris hardware and the HTTPS protocol. Table 2-3 lists the number of clients in the first column, and the hardware configuration with transactions per second in the remaining columns. For this application on UNIX, the designation for the configuration number is heavy medrec UNIX n (hmUn).

(TPS = Transactions Per Second)

Table 2-3 Number of Clients x TPS for Heavy MedRec Application on UNIX

Config #                                    hmU1     hmU2
Processor (MHz)                             4x750    4x400

Number of Clients                           TPS      TPS
1 client                                    12.18    8.76
4 clients                                   27.49    20.70
10 clients                                  32.13    23.00
20 clients                                  31.08    23.07
40 clients                                  31.39    23.35
80 clients                                  31.43    23.44
100 clients                                 31.81    23.29

Max TPS                                     32.13    23.44
Appserver CPU Utilization for Max TPS       93.00%   94.06%
DB Server CPU Utilization for Max TPS       1.16%    0.89%
DB Server Disk Utilization for Max TPS      12.23%   9.22%

 


Measured TPS for Light MedRec Application on Windows 2000

The measurements for the light MedRec application on Windows 2000 were obtained using the HTTP protocol. Table 2-4 lists the number of concurrently running clients in the first column, and hardware configuration with transactions per second in the remaining columns. For this application on Windows 2000, the designation for the configuration number is light medrec Windows n (lmWn).

Table 2-4 Number of Clients x TPS for Light MedRec Application on Windows 2000

Config #                                    lmW1     lmW2     lmW3     lmW4
Processor (MHz)                             Win2K    Win2K    Win2K    Win2K
                                            4x700    4x700    4x700    4x700
                                            1-Node   2-Node   3-Node   4-Node

Number of Clients                           TPS      TPS      TPS      TPS
1 client                                    33.50    -        -        -
4 clients                                   78.53    107.58   112.36   122.83
10 clients                                  80.24    149.09   205.13   -
20 clients                                  79.28    148.12   226.53   297.35
40 clients                                  77.36    148.78   223.53   299.11
80 clients                                  76.16    146.75   224.38   291.82
100 clients                                 76.38    144.99   220.02   292.22
150 clients                                 -        -        219.38   291.17
200 clients                                 -        -        -        292.87

Max TPS                                     80.24    149.09   226.53   299.11
Appserver CPU Utilization for Max TPS       92.51%   92.42%   92.40%   92.82%
DB Server CPU Utilization for Max TPS       3.32%    7.63%    14.63%   23.44%
DB Server Disk Utilization for Max TPS      26.32%   44.63%   70.25%   88.98%

(- = not measured)

Table 2-5 summarizes the TPS achieved for each processor type/configuration combination measured, and identifies the number of concurrent clients running to achieve the measured TPS result.

Table 2-5 Measured TPS for Light MedRec on Windows 2000

Processor Type, Configuration       Config #    Measured TPS    Number of Clients
Windows 2000 4x700 MHz - 1 Node     lmW1        80.24           10
Windows 2000 4x700 MHz - 2 Node     lmW2        149.09          10
Windows 2000 4x700 MHz - 3 Node     lmW3        226.53          20
Windows 2000 4x700 MHz - 4 Node     lmW4        299.11          40
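Read the same way as the UNIX results, the Windows 2000 cluster scales slightly below linear: the four-node configuration (lmW4) delivers 299.11 TPS, or about 299.11 / (4 x 80.24) = 0.93 times four single-node (lmW1) results.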

 


Measured TPS for Heavy MedRec Application on Windows 2000

The following are measurements for the heavy MedRec application on Windows 2000. All tests on all operating systems use the HTTPS (secure) protocol. The JVM used for the heavy MedRec application is the Sun Microsystems JDK.

Table 2-6 lists the number of clients in the first column, and the hardware configuration with transactions per second in the remaining column. For this application on Windows 2000, the designation for the configuration number is heavy medrec Windows n (hmWn).

Table 2-6 Number of Clients x TPS for Heavy MedRec Application on Windows 2000

Config #                 hmW1
Processor (MHz)          Win2K 4x700

Number of Clients        TPS
1 client                 16.29
4 clients                38.62
10 clients               38.27
20 clients               37.58
40 clients               37.37
80 clients               36.86
100 clients              36.92

Max TPS                  38.62

 


Next Steps

After examining the characteristics and baseline results from sample applications, the next step is to compare your application to one or more of the baseline samples. Then proceed to Determining Hardware Capacity Requirements. These steps can assist you in generating capacity planning requirements for your application.
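For example, if your application resembles the light MedRec workload on Solaris 4x400 MHz hardware and you anticipate a sustained peak of roughly 120 transactions per second (a hypothetical target used only for illustration), dividing by the single-node maximum of 39.32 TPS from Table 2-2 gives 120 / 39.32, or about 3.1 nodes; since the measured three-node result (122.50 TPS) sits at that limit with no headroom, you would plan for at least four nodes.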

 
