Sun Java™ System Web Proxy Server software (known as Proxy Server) is designed to meet the needs of the most demanding high-traffic sites. It can serve both static and dynamically generated content. Proxy Server can also run in Secure Sockets Layer (SSL) mode, enabling secure transfer of information.
This guide helps you to define your server workload and size a system to meet your performance needs. Your environment is unique, so the impact of the suggestions provided in this guide will depend on your specific environment.
This chapter provides a general discussion of server performance considerations, and more specific information about monitoring server performance.
This chapter includes the following topics:
You must first determine your requirements. Users want fast response times (typically less than 100 milliseconds), high availability with no "connection refused" messages, and significant control. Webmasters and proxy server administrators, on the other hand, need high connection rates, high data throughput, and uptime approaching 100%. You need to define what performance means for your particular situation based on these requirements.
The following factors have an impact on performance:
Number of peak concurrent users
Security requirements
Encrypting your proxy server's data streams with SSL makes an enormous difference to your site's credibility for electronic commerce and other security-conscious applications, but can seriously increase your CPU load. For more information, see SSL Performance.
Disk Cache hits or misses
A high percentage of cache hits indicates efficient utilization of cached objects, which in turn leads to improved performance.
Custom configurations to increase cache efficiency
You can configure Proxy Server to increase cache efficiency, and therefore performance, at the cost of specification compliance. For example, you can use configurations that ignore page reload requests, or that ignore cache directives in response headers that disallow caching the response.
Disk cache location
Using RAM-based file systems to hold the disk cache can significantly improve performance.
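On Solaris, one way to do this is to mount a tmpfs (RAM-backed) file system over the cache directory. The following /etc/vfstab entry is a sketch only: the mount point /proxycache is a hypothetical path standing in for your configured cache directory, and the 512m size cap is an arbitrary example value. Note that a tmpfs cache is emptied on every reboot.

```
# /etc/vfstab entry (hypothetical mount point /proxycache, example size cap)
#device to mount  device to fsck  mount point  FS type  fsck pass  mount at boot  mount options
swap              -               /proxycache  tmpfs    -          yes            size=512m
```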
Hardware bottlenecks
Care should be taken to ensure that hardware factors such as network speed and disk throughput match the processing power of the CPU as well as the request handling capacity of the proxy server.
Behavior of origin servers
The origin server's response time has a crucial effect on the proxy server's performance numbers.
SSL always has a significant impact on throughput. Therefore, for optimum performance, minimize your use of SSL or consider using a multi-CPU server to handle it.
For SSL, Proxy Server uses the Network Security Services (NSS) library. However, you can use other options for SSL:
If you are using the Solaris™ 10 operating system, kernel SSL (KSSL) is available. It does not provide all of the algorithms that NSS does, but it often delivers better performance.
A cryptographic card hardware accelerator for SSL can also improve performance.
You must measure the system behavior before and after a change to check performance. You can monitor the performance of Proxy Server in different ways.
Table 1–1 Methods of Monitoring Performance
Monitoring Method | How to Enable | How to Access | Advantages and Requirements
---|---|---|---
Statistics through the Admin console | Enabled by default | In the Admin console, for a configuration, click the Monitor tab | Accessible when session threads are hanging. Administration Server must be running.
XML-formatted statistics (stats-xml) by using a browser | Enable through Admin console or by editing a configuration file | Through a URI | Administration Server need not be running.
perfdump by using a browser | Enable through Admin console or by editing a configuration file | Through a URI | Administration Server need not be running.
Java ES monitoring | Enabled by default | Through the Java ES Monitoring Console | Only for Java ES installations. Administration Server must be running.
Monitoring the server does have some impact on computing resources. In general, using perfdump through a URI is the least expensive method, followed by using stats-xml through a URI. Because using the Administration Server requires computing resources, using the Admin console is an expensive monitoring method.
For more information about the monitoring methods, see the following sections:
You can monitor performance statistics by using the Admin console user interface, the stats-xml URI, or perfdump. All of these monitoring methods rely on the statistics the server collects; none of them will work if statistics are not collected.
The statistics give you information at the configuration level, the server instance level, or the virtual server level. The statistics are broken up into functional areas.
For configuration, statistics are available in the following areas:
Requests
Errors
Response Time
For the server instance, statistics are available in the following areas:
Requests
Errors
Response Time
General
Java Virtual Machine (JVM™)
Connection Queue
Keep Alive
Host DNS Cache
Client DNS Cache
In-Memory File Cache
Thread Pools
Session Threads, including profiling data (exists if profiling is enabled)
Some statistics are set to zero if Quality of Service (QoS) is not enabled. For example, the count of open connections, the maximum open connections, the rate of bytes transmitted, and the maximum byte transmission rate are all zero if QoS is disabled.
To enable statistics, use the Admin console.
Collecting statistics causes a slight hit to performance.
Select the Proxy Server instance.
Click the Server Status tab.
Click the Monitor Current Activity sub tab.
Choose Yes for Activate Statistics/Profiling?
Save and apply changes.
You can display statistics in XML format by using stats-xml. You can view the stats-xml output through a URI, which you need to enable, or through the CLI, which is enabled by default.
Select the Proxy Server instance.
Click the Server Status tab.
Click the Monitor Current Activity sub tab.
Ensure that Statistics is enabled (see above).
Select the required Statistics from the dropdown list under Monitor Proxy Server Statistics and click Submit.
You can modify the stats-xml URI to limit the data it provides.
Modify the stats-xml URI to limit the information by setting elements to 0 or 1.
An element set to 0 is not displayed on the stats-xml output. For example:
http://yourhost:port/stats-xml?thread=0&process=0
This syntax limits the stats-xml output so that thread and process statistics are not included. By default all statistics are enabled (set to 1).
Most of the statistics are available at the server level, but some are available at the process level.
Use the following syntax elements to limit stats-xml statistics:
cache-bucket
connection-queue
connection-queue-bucket (process-level)
cpu-info
host-dns-bucket
client-dns-bucket
keepalive-bucket
process
profile
profile-bucket (process-level)
request-bucket
thread
thread-pool
thread-pool-bucket (process-level)
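As a sketch of how these query parameters combine, the following helper builds a stats-xml URI that disables the named elements. The host name yourhost and port 8080 are placeholders; the element names are the ones listed above.

```python
from urllib.parse import urlencode, urlunsplit

def stats_xml_uri(host, port, disabled=()):
    """Build a stats-xml URI that sets each named element to 0.

    Elements not listed keep their default of 1, so they stay
    in the stats-xml output.
    """
    query = urlencode({element: 0 for element in disabled})
    return urlunsplit(("http", f"{host}:{port}", "/stats-xml", query, ""))

# Exclude thread and process statistics, as in the example above.
print(stats_xml_uri("yourhost", 8080, ["thread", "process"]))
# → http://yourhost:8080/stats-xml?thread=0&process=0
```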
Add the following object to your obj.conf file after the default object:
<Object name="perf">
Service fn="service-dump"
</Object>
Add the following line to the default object:
NameTrans fn=assign-name from="/.perf" name="perf"
Restart your server software.
Go to http://computer_name:proxyport/.perf and access perfdump.
You can specify a refresh interval for the perfdump statistics. The browser automatically refreshes the statistics at the interval you specify. The following example sets the refresh interval to 5 seconds:
http://computer_name:proxyport/.perf?refresh=5
Performance buckets enable you to define buckets and link them to various server functions. Every time one of these functions is invoked, the server collects statistical data and adds the data to the bucket. The cost of collecting this information is minimal, and the impact on the server performance is usually negligible. You can access this information by using perfdump. The following information is stored in a bucket:
Name of the bucket. This name associates the bucket with a function.
Description. A description of the functions with which the bucket is associated.
Number of requests for this function. The total number of requests that caused this function to be called.
Number of times the function was invoked. This number might not coincide with the number of requests for the function, because some functions might be executed more than once for a single request.
Function latency or the dispatch time. The time taken by the server to invoke the function.
Function time. The time spent in the function itself.
default-bucket is predefined by the server. It records statistics for the functions not associated with any user-defined bucket.
You must specify all configuration information for performance buckets in the obj.conf file. Only the default-bucket is automatically enabled.
You must enable performance statistics collection and perfdump.
The following example shows how to define a new bucket in obj.conf:

Init fn="define-perf-bucket" name="acl-bucket" description="ACL bucket"

The above example creates a bucket named acl-bucket. To associate this bucket with functions, add bucket=bucket-name to the obj.conf function for which to measure performance.
Example
PathCheck fn="check-acl" acl="default" bucket="acl-bucket"
...
Service method="(GET|HEAD|POST)" type="*~magnus-internal/*" fn="send-file" bucket="file-bucket"
...
<Object name="cgi">
ObjectType fn="force-type" type="magnus-internal/cgi"
Service fn="send-cgi" bucket="cgi-bucket"
</Object>
The server statistics collected in buckets can be accessed by using perfdump. The performance bucket information is located in the last section of the report returned by perfdump.
The report contains the following information:
Average, Total, and Percent columns show data for each requested statistic.
Request Processing Time is the total time required by the server to process all requests received.
Number of Requests is the total number of requests for the function.
Number of Invocations is the total number of times that the function was invoked. This number differs from the number of requests because a function can be called multiple times while processing one request. The percentage column for this row is calculated in reference to the total number of invocations for all of the buckets.
Latency is the time in seconds that Proxy Server takes to prepare for calling the function.
Function Processing Time is the time in seconds that Proxy Server spends in the function. The percentage of Function Processing Time and Total Response Time is calculated with reference to the total Request Processing Time.
Total Response Time is the sum in seconds of Function Processing Time and Latency.
The following example shows performance bucket information in perfdump:
Performance Counters:
------------------------------------------------
                           Average         Total      Percent

Total number of requests:               62647125
Request processing time:    0.0343 2147687.2500

default-bucket (Default bucket)
Number of Requests:                     62647125    (100.00%)
Number of Invocations:                3374170785    (100.00%)
Latency:                    0.0008    47998.2500    (  2.23%)
Function Processing Time:   0.0335  2099689.0000    ( 97.77%)
Total Response Time:        0.0343  2147687.2500    (100.00%)
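If you want to track bucket counters over time rather than read them by eye, the text is simple enough to parse. The following sketch extracts the labeled counters from a perfdump counters section; the sample text is the example shown above, and the exact layout may vary between releases, so treat the regular expression as an assumption to verify against your own output.

```python
import re

# Sample perfdump "Performance Counters" section (from the example above).
PERFDUMP = """\
Performance Counters:
------------------------------------------------
                           Average         Total      Percent

Total number of requests:               62647125
Request processing time:    0.0343 2147687.2500

default-bucket (Default bucket)
Number of Requests:                     62647125    (100.00%)
Number of Invocations:                3374170785    (100.00%)
Latency:                    0.0008    47998.2500    (  2.23%)
Function Processing Time:   0.0335  2099689.0000    ( 97.77%)
Total Response Time:        0.0343  2147687.2500    (100.00%)
"""

def parse_counters(text):
    """Map each 'Label:' line to its first numeric column.

    For rows with Average and Total columns, the first number
    is the average; single-number rows yield that number.
    """
    counters = {}
    for line in text.splitlines():
        match = re.match(r"^\s*([A-Za-z ]+):\s+([\d.]+)", line)
        if match:
            counters[match.group(1).strip()] = float(match.group(2))
    return counters

counters = parse_counters(PERFDUMP)
print(counters["Total number of requests"])  # → 62647125.0
```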
The statistics displayed through the Proxy Server Admin console are also accessible through the Java ES Monitoring Console. Though the information is the same, it is presented in a different format by using the Common Monitoring Data Model (CMM). You can also monitor your server by using the Java ES monitoring tools. For more information about using the Java ES monitoring tools, see the Sun Java Enterprise System 5 Monitoring Guide at http://docs.sun.com/app/docs/doc/819-5081. Use the same settings to tune the server, irrespective of the monitoring method used.