Sun ONE Web Server 6.1 Performance Tuning, Sizing, and Scaling Guide
Chapter 1
Performance and Monitoring Overview

Sun ONE Web Server is designed to meet the needs of the most demanding, high-traffic sites in the world. It runs flexibly on UNIX, Linux, and Windows, and can serve both static and dynamically generated content. Sun ONE Web Server can also run in SSL mode, enabling the secure transfer of information.
This guide helps you to define your server workload and size a system to meet your performance needs. Your environment is unique, however, so the impact of the suggestions provided here also depends on your specific environment. Ultimately you must rely on your own judgment and observations to select the adjustments that are best for you.
This chapter provides a general discussion of server performance considerations, and more specific information about monitoring server performance.
This chapter includes the following topics:

- Performance Issues
- Monitoring Server Performance
- Monitoring Current Activity Using the Server Manager
- Monitoring Current Activity Using the perfdump Utility
- Using Performance Buckets
Performance Issues

The first step toward sizing your server is to determine your requirements. Performance means different things to users than to webmasters. Users want fast response times (typically less than 100 milliseconds), high availability (no “connection refused” messages), and as much interface control as possible. Webmasters and system administrators, on the other hand, want to see high connection rates, high data throughput, and uptime approaching 100%. In addition, for virtual servers the goal might be to provide a targeted level of performance at different price points. You need to define what performance means for your particular situation.
Here are some areas to consider:
Encrypting your Sun ONE Web Server’s data streams with SSL makes an enormous difference to your site’s credibility for electronic commerce and other security-conscious applications, but it can also seriously increase your CPU load. SSL always has a significant impact on throughput, so for best performance minimize your use of SSL, or consider using a multi-CPU server to handle it.
Virtual Servers

Virtual servers add another layer to the performance improvement process. Certain settings are tunable for the entire server, while others are based on an individual virtual server. You can also use the quality of service (QOS) features to set resource utilization constraints for an individual virtual server or class of virtual servers. For example, you can use QOS features to limit the number of connections allowed for a virtual server or class of virtual servers.
For more information about using the quality of service features, see the Sun ONE Web Server 6.1 Administrator’s Guide.
Monitoring Server Performance

Making the adjustments described in this guide without measuring their effects doesn’t make sense. If you don’t measure the system’s behavior before and after making a change, you won’t know whether the change was a good idea, a bad idea, or merely irrelevant. You can monitor the performance of Sun ONE Web Server in several different ways, as discussed in the following topics:
See Also
General Tuning Tips
Solaris-specific Performance Monitoring

Monitoring Current Activity Using the Server Manager
You can monitor many performance statistics through the Server Manager user interface, and through stats-xml. Once statistics are activated, you can monitor the following areas:
Activating Statistics
You must activate statistics on Sun ONE Web Server before you can monitor performance. This can be done through the Server Manager, or by editing the obj.conf and magnus.conf files.
Caution
When you activate statistics/profiling, statistics information is made available to any user of your server.
Activating Statistics from the Server Manager
To activate statistics from the user interface:
Activating Statistics with stats-xml
You can also activate statistics directly by editing the obj.conf and magnus.conf files. Users who create automated tools or write customized programs for monitoring and tuning may prefer to work directly with stats-xml.
To activate statistics using stats-xml:
- Under the default object in obj.conf, add the following line:
NameTrans fn="assign-name" from="/stats-xml/*" name="stats-xml"
- Add the following Service function to obj.conf:
<Object name="stats-xml">
Service fn="stats-xml"
</Object>

- Add the stats-init SAF to magnus.conf.
Here's an example of stats-init in magnus.conf:
Init fn="stats-init" update-interval="5" virtual-servers="2000" profiling="yes"
As this example shows, you can also specify the following parameters:
- update-interval. The period in seconds between statistics updates. A higher setting (less frequent) will be better for performance. The minimum value is 1; the default value is 5.
- virtual-servers. The maximum number of virtual servers for which you track statistics. This number should be set equal to or higher than the number of virtual servers configured. Smaller numbers result in lower memory usage. The minimum value is 1; the default is 1000.
- profiling. Activate NSAPI performance profiling. The default is "no," which results in slightly better server performance. However, if you activate statistics through the user interface, profiling is turned on by default.
Monitoring Statistics
Once you’ve activated statistics, you can get a variety of information on how your server instance and your virtual servers are running. The statistics are broken up into functional areas.
To monitor statistics from the Server Manager:
- From the Server Manager, click the Monitor tab, and then click Monitor Current Activity.
- Make sure that statistics/profiling is activated ("Yes" is selected and applied for "Activate Statistics/Profiling?").
- From the drop-down list, select a refresh interval.
This is the interval, in seconds, at which the displayed statistics are refreshed in your browser.
- From the drop-down list, select the type of web server statistics to display.
- Click Submit.
A page appears displaying the type of statistics you selected. The page is updated every 5-15 seconds, depending on the refresh interval. All pages will display a bar graph of activity, except for Connections.
- Select the process ID from the drop-down list.
You can view current activity through the Server Manager, but these categories are not fully relevant for tuning your server. The perfdump statistics are recommended for tuning your server. For more information, see "Using Statistics to Tune Your Server."
Virtual Server Statistics
Virtual server statistics can be viewed from the Server Manager. You can choose to display statistics for the server instance, for an individual virtual server, or for all. This information is not provided through perfdump.
Monitoring Current Activity Using the perfdump Utility
The perfdump utility is a Server Application Function (SAF) built into Sun ONE Web Server that collects various pieces of performance data from the Web Server’s internal statistics and displays them in ASCII text. The perfdump utility allows you to monitor a greater variety of statistics than those available through the Server Manager.
With perfdump, the statistics are unified. Rather than reflecting a single process, the statistics are aggregated across all server processes, giving you a more accurate view of the server as a whole.
Installing the perfdump Utility
To install perfdump, make the following modifications in obj.conf:
- Add the following object to your obj.conf file after the default object:
<Object name="perf">
Service fn="service-dump"
</Object>

- Add the following to the default object:

NameTrans fn="assign-name" from="/.perf" name="perf"
Make sure that the .perf NameTrans directive is specified before the document-root NameTrans directive in the default object.
- If not already activated, activate stats-xml.
For more information, see "Activating Statistics."
- Restart your server software.
- Access perfdump by entering this URL:
http://yourhost/.perf
You can request the perfdump statistics and specify how frequently (in seconds) the browser should automatically refresh. The following example sets the refresh to every 5 seconds:
http://yourhost/.perf?refresh=5
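The ASCII output can also be consumed programmatically, for example by automated monitoring scripts. Below is a minimal Python sketch that scrapes simple label/value counter lines of the kind shown in the sample output later in this section; the helper function and its parsing rules are illustrative, not part of the server.

```python
import re

def parse_perfdump(text):
    """Collect simple 'Label   value' counter lines from perfdump's
    ASCII output into a dict. Section headers and separator lines,
    which lack a run of two or more spaces, are skipped."""
    stats = {}
    for line in text.splitlines():
        # e.g. "KeepAliveHits                        0"
        m = re.match(r"^(\S.*?\S)\s{2,}(\S+)", line)
        if m:
            stats[m.group(1)] = m.group(2)
    return stats

# In practice the text would come from http://yourhost/.perf, for
# example via urllib.request.urlopen(...).read().decode().
sample = (
    "KeepAliveCount          0/256\n"
    "KeepAliveHits           0\n"
    "KeepAliveTimeout        30 seconds\n"
)
print(parse_perfdump(sample)["KeepAliveTimeout"])  # prints "30"
```

Only the first whitespace-delimited token after the label is kept, which is enough for counters such as KeepAliveHits or Current/Peak/Limit values.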
See Also
"Using Statistics to Tune Your Server"
Sample perfdump Output
The following is sample perfdump output:
------------------------------------------------------------
webservd pid: 2408
ConnectionQueue:
----------------------------------
Current/Peak/Limit Queue Length 0/0/4096
Total Connections Queued 0
Average Queueing Delay 0.00 milliseconds
ListenSocket ls1:
------------------------
Address http://0.0.0.0:8080
Acceptor Threads 1
Default Virtual Server https-iws-files2.red.iplanet.com
KeepAliveInfo:
--------------------
KeepAliveCount 0/256
KeepAliveHits 0
KeepAliveFlushes 0
KeepAliveRefusals 0
KeepAliveTimeouts 0
KeepAliveTimeout 30 seconds
SessionCreationInfo:
------------------------
Active Sessions 1
Total Sessions Created 48/128
CacheInfo:
------------------
enabled yes
CacheEntries 0/1024
Hit Ratio 0/0 ( 0.00%)
Maximum Age 30
Native pools:
----------------------------
NativePool:
Idle/Peak/Limit 1/1/128
Work Queue Length/Peak/Limit 0/0/0
Server DNS cache disabled
Async DNS disabled
Performance Counters:
------------------------------------------------
Average Total Percent
Total number of requests: 0
Request processing time: 0.0000 0.0000
default-bucket (Default bucket)
Number of Requests: 0 ( 0.00%)
Number of Invocations: 0 ( 0.00%)
Latency: 0.0000 0.0000 ( 0.00%)
Function Processing Time: 0.0000 0.0000 ( 0.00%)
Total Response Time: 0.0000 0.0000 ( 0.00%)
Sessions:
----------------------------
Process Status Function
2408 response service-dump
------------------------------------------------------------

Using Performance Buckets
Performance buckets allow you to define buckets and link them to various server functions. Every time one of these functions is invoked, the server collects statistical data and adds it to the bucket. For example, send-cgi and NSServletService are the functions used to serve CGI and Java servlet requests, respectively. You can either define two buckets to maintain separate counters for CGI and servlet requests, or create one bucket that counts requests for both types of dynamic content. The cost of collecting this information is low, and the impact on server performance is usually negligible. This information can later be accessed using the perfdump utility. The following information is stored in a bucket:
- Name of the bucket. This name is used for associating the bucket with a function.
- Description. A description of the functions that the bucket is associated with.
- Number of requests for this function. The total number of requests that caused this function to be called.
- Number of times the function was invoked. This number may not coincide with the number of requests for the function because some functions may be executed more than once for a single request.
- Function latency or the dispatch time. The time taken by the server to invoke the function.
- Function time. The time spent in the function itself.
The default-bucket is predefined by the server. It records statistics for the functions not associated with any user-defined bucket.
Configuration
You must specify all configuration information for performance buckets in the magnus.conf and obj.conf files. Only the default bucket is automatically enabled.
First, you must enable performance measurement as described in "Monitoring Current Activity Using the perfdump Utility."
The following examples show how to define new buckets in magnus.conf:
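A sketch of such definitions, using the define-perf-bucket Init function (the bucket names match those referenced below; the description strings are illustrative):

```
Init fn="define-perf-bucket" name="acl-bucket" description="ACL bucket"
Init fn="define-perf-bucket" name="file-bucket" description="Non-cached responses"
Init fn="define-perf-bucket" name="cgi-bucket" description="CGI stats"
```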
The example above creates three buckets: acl-bucket, file-bucket, and cgi-bucket. To associate these buckets with functions, add bucket=bucket-name to the obj.conf function for which you wish to measure performance.
Example
PathCheck fn="check-acl" acl="default" bucket="acl-bucket"
...
Service method="(GET|HEAD|POST)" type="*~magnus-internal/*" fn="send-file" bucket="file-bucket"
...
<Object name="cgi">
ObjectType fn="force-type" type="magnus-internal/cgi"
Service fn="send-cgi" bucket="cgi-bucket"
</Object>
Performance Report
The server statistics in buckets can be accessed using the perfdump utility. The performance buckets information is located in the last section of the report returned by perfdump.
The report contains the following information:
- Average, Total, and Percent columns give data for each requested statistic.
- Request Processing Time is the total time required by the server to process all requests it has received so far.
- Number of Requests is the total number of requests for the function.
- Number of Invocations is the total number of times that the function was invoked. This differs from the number of requests in that a function could be called multiple times while processing one request. The percentage column for this row is calculated in reference to the total number of invocations for all of the buckets.
- Latency is the time in seconds Sun ONE Web Server takes to prepare for calling the function.
- Function Processing Time is the time in seconds Sun ONE Web Server spent inside the function. The percentage of Function Processing Time and Total Response Time is calculated with reference to the total Request Processing Time.
- Total Response Time is the sum in seconds of Function Processing Time and Latency.
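To make the percentage arithmetic concrete, here is a small sketch; the counter values are hypothetical, not taken from a real report.

```python
# Hypothetical bucket counters (illustrative values, not real server data).
request_processing_time = 12.0   # total seconds spent processing all requests
function_time = 3.0              # seconds spent inside the bucket's functions
latency = 1.5                    # seconds spent dispatching to those functions
total_response_time = function_time + latency

# Function Processing Time and Total Response Time percentages are taken
# relative to the total Request Processing Time, as described above.
print(round(100 * function_time / request_processing_time, 2))        # 25.0
print(round(100 * total_response_time / request_processing_time, 2))  # 37.5
```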
The following is an example of the performance bucket information available through perfdump:
Performance Counters:
------------------------------------------------
Average Total Percent
Total number of requests: 0
Request processing time: 0.0000 0.0000
default-bucket (Default bucket)
Number of Requests: 0 ( 0.00%)
Number of Invocations: 0 ( 0.00%)
Latency: 0.0000 0.0000 ( 0.00%)
Function Processing Time: 0.0000 0.0000 ( 0.00%)
Total Response Time: 0.0000 0.0000 ( 0.00%)