Sun Java System Portal Server 7.1 Deployment Planning Guide

Chapter 8 Basics of Portal Performance

Understanding the basic factors that affect Portal Server performance helps you architect a well-designed portal for your enterprise. For a well-tuned portal, performance is determined by the performance drivers described in this chapter: arrival rates, user activity, and the types of channels on the desktop.

For an improperly tuned or sized portal, performance is limited by the performance constraints described in this chapter: CPU, memory, and the capability of the garbage collector.

The following sections cover these topics and provide guidelines for getting the highest possible performance from Portal Server deployments. Performance analysis is based on many inputs or factors that determine Portal Server performance. Throughput is a principal consideration: it provides the data for determining how many concurrent users your Portal Server configuration and hardware can support. It is advisable to run a baseline load test to determine the characteristics of the architected system. A baseline test measures throughput for the most common user/portal interaction. Most performance testing tools (such as SLAMD) display this data as RPS (rounds per second); RPS and logins per second convey the same meaning. Throughput is mainly CPU bound and is driven by user activity.

Performance Constraints

Portal Server performance can be CPU bound, memory bound, or even bound by the capability of the garbage collector. The scenario where the portal is CPU bound is the easiest to detect and resolve: if CPU utilization at peak load is greater than 75%, the deployment will benefit from additional CPUs. Portal Server scales well on servers with up to 4 CPUs, and it scales linearly when additional servers are added, as long as the network itself is not an issue. Even if a Portal Server installation is neither CPU nor memory bound at the time of deployment, a successful portal induces stickiness and the load increases over time. Consequently, as with any web application, it is paramount to monitor CPU, memory, and all other logs consistently.
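For example, a quick way to relate peak CPU utilization to the number of processors is to count the CPUs and watch per-processor utilization during the peak period. The commands below are a minimal sketch for Solaris; the Monitoring CPU Utilization section later in this chapter describes the mpstat output in detail.


# Count the processors in the system
psrinfo | wc -l

# Watch per-processor utilization during peak load (10-second samples);
# sustained utilization above roughly 75% suggests additional CPUs are needed
mpstat 10
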

Performance Drivers

Some important general factors affect portal performance and are not release dependent. These are the factors that every customer and portal architect should consider on every deployment.

Arrival Rates

Arrival rate is defined as the rate at which portal users (employees or customers) log in over a period of time. Arrival rates with steep peaks can cause Portal Server to become CPU bound even if CPU utilization is minimal at other times of the day. Consider the example where employees or customers of an enterprise start connecting to the portal early in the morning, remain mostly idle for the rest of the day, and either are logged out by session timeout or log out manually at the end of business. This is the most common scenario for business-to-employee portals and for some outward-facing portals with high locality of their user base. The rate at which users log in in the morning, log out at the end of the day, and interact with the portal in between determines the throughput demanded from Portal Server. A baseline throughput test gives a reliable measure of this requirement.
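As a hedged illustration of how arrival rate translates into required throughput, assume a hypothetical enterprise whose 30,000 employees all log in during a 30-minute morning window. The required login throughput is roughly 30,000 logins divided by 1,800 seconds, or about 17 logins per second, which is the RPS figure a baseline load test should be sized against.


# Hypothetical sizing arithmetic: 30,000 logins over a 30-minute (1,800-second) peak
expr 30000 / 1800     # prints 16 (integer division); plan for roughly 17 logins per second
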

User Activity

User activity is defined as the interaction that a portal user has with the Portal Server. It depends on the number of tabs, links, and sites that a user has to go through to reach a specific channel or piece of information.

User activity is also governed by the types of channels that the portal architect provides to users. For example, if the communications channels (the default Mail and Calendar channels shipped with the Portal Server software) are placed on a user's portal desktop, the user tends to request updated content regularly, so portal sites with communication channels or connections to back-end systems see far shorter reload intervals. Reloading the desktop consumes CPU cycles and generates JVM heap garbage. Because Portal Server caches aggressively, sites that promote reloads can become memory bound before the CPU is stressed. Enabling the concurrent garbage collector through the corresponding JVM option is recommended to alleviate this issue, as in the sketch below.
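The concurrent collector is enabled through JVM options on the web container. The options below are taken from the amtune recommendations shown later in this chapter; treat them as a sketch and apply them through your web container's administration tools rather than by editing files directly.


# Enable the concurrent collector for the old generation and the parallel
# collector for the young generation (see the amtune output later in this chapter)
-XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+DisableExplicitGC
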

Desktop Channel Types

The configuration of the portal desktop can make a difference in Portal Server performance. While designing a portal desktop, use the baseline data as a starting point to guide the design process. A feature-rich desktop helps entice users to your portal, but you should also consider portal response time for the overall user experience; it is necessary to keep the desktop within a defined performance envelope. The types of channels used on the desktop make a large difference. Some channels are considered cheap and others expensive: adding cheap channels does not affect performance, while adding expensive channels shows measurable performance degradation. In general, the expensive channels are those that require a back-end service or are computationally expensive, and connection pooling is advised for them. Adding a URL scraper channel to the desktop, for example, generally shows almost no impact on throughput.
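One simple way to compare a cheap channel with an expensive one is to measure desktop response time before and after adding the channel. The sketch below assumes curl is available on the system, uses a hypothetical portal host and session cookie file, and assumes the default /portal/dt desktop URI.


# Rough desktop response-time check (hypothetical host and cookie file)
curl -s -o /dev/null -w "%{time_total}\n" \
    -b /tmp/portal-cookies.txt http://portal.example.com/portal/dt
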

Portal Server Tuning

Tuning allows you to increase your Portal Server's performance.

Web Container Tuning

Of the tunings that can be applied to a Portal Server deployment, the JVM tunings of the web container are the most important. It is important to use the tuning scripts to apply this configuration and to understand the changes those scripts make. To run the scripts, you must first edit a configuration file.

The tunable parameters in this file are crucial for Portal Server performance. Review those parameters carefully, especially those that determine the amount of memory that will be given to the web container.

With significant improvements in JVM scalability, the Portal application can take advantage of the new JVM parameters, resulting in increased performance. An optimal configuration of the web container calls for 2 GB of Java heap. As far as binding Portal Server instances to CPUs is concerned, scalability is good up to 4 CPUs on a single instance and extends to more CPUs with multiple Portal Server instances.
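As a sketch, a 2 GB heap for the web container might be expressed with the JVM options below. The heap sizes follow the guidance above, while the new-generation size is an illustrative assumption that should be validated against your own garbage collection logs; the amtune output later in this chapter shows a complete example for a smaller-memory system.


# Illustrative 2 GB heap settings for the Portal web container
# (NewSize/MaxNewSize values here are assumptions, not certified defaults)
-server -Xms2048M -Xmx2048M -XX:NewSize=256M -XX:MaxNewSize=256M -Xss128k
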


Note –

Each version of Portal Server is certified for JVM tunings with the JVM version that ships with the product. Using a non-certified JVM is neither supported nor recommended. When deploying Portal Server in a production environment where all the component products are deployed on the same server, a machine with at least 4 GB of RAM is recommended.



Note –

It is assumed that the system is used exclusively for Portal Server. If Access Manager and Directory Server resources are also consumed by other applications, adjust the tunings accordingly.


Tuning Scripts

The Access Manager performance utility amtune can be used to tune Directory Server, Access Manager, the web container, and operating system configurations. It is located at /opt/SUNWam/bin/amtune/amtune for a default install of Java Enterprise System software. In addition, the Portal Server product ships with a utility script called perftune (in /opt/SUNWportal/bin/) that tunes Directory Server, Access Manager, the web container, TCP/IP settings, kernel settings, and the Portal Server configuration. In fact, perftune calls amtune to apply that part of the tuning when Access Manager is deployed in the same web container. The Portal Server administrator must understand the tunings recommended by these utilities, and the tunings should first be validated in a staging and quality-assurance environment before being pushed to a production system.

AMTUNE

By default, amtune is configured to run in REVIEW mode. In this mode, amtune suggests tuning recommendations but does not make any changes to the deployment; this is a safe mode of operation. This and other parameters are defined in the file /opt/SUNWam/bin/amtune/amtune-env. The amtune script is a useful starting point for defining the correct tuning parameters for the portal.
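For example, the mode is controlled by the AMTUNE_MODE variable in amtune-env. The excerpt below is a sketch; verify the exact variable names and permitted values against the file shipped with your Access Manager version.


# Excerpt (sketch) from /opt/SUNWam/bin/amtune/amtune-env
AMTUNE_MODE=REVIEW        # suggest changes only; set to CHANGE to apply them
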

To invoke amtune, use the following command:

amtune directory-server-admin-password web-server-admin-password

The following is displayed:


Debug information log can be found in file: 
/var/opt/SUNWam/debug/amtune-20061124-6300
############################################################
./amtune : 11/24/06 18:25:41
############################################################
Initializing...
-------------------------------------------------------------
Checking System Environment...
Checking User...
Checking Web Server JVM mode (32-bit or 64-bit) for 
web server 7...
--------------------------------------------------------------
amtune Information...
--------------------------------------------------------------
amtune Mode      : REVIEW
OS               : true
Access Manager   : true
Directory        : true
Web Container    : true
WS Mode          : 32-bit
---------------------------------------------------------------
Detecting System Environment...
---------------------------------------------------------------
Number of CPUs in the system :  2
WS Acceptor Threads : 2
Memory Available (MB) :  2048
Memory to Use (MB) : 1536
There is enough memory.
----------------------------------------------------------------
Calculating Tuning Parameters...
----------------------------------------------------------------
Max heap size (MB) : 1344
Min Heap size (MB) : 1344
Max new size (MB) : 168
Cache Size (MB) : 448
SDK Cache Size (KB) : 298
Number of SDK Cache Entries : 38144
Session Cache Size (KB) : 149
Number of Session Cache Entries : 38144
Maximum Number of Java Threads : 672
Maximum Number of Thread Pool : 280
LDAP Auth Threads : 28
SM LDAP Threads : 28
Notification Threads : 14
Notification Queue Size : 38144
=================================================================
Access Manager Tuning Script
-----------------------------------------------------------------
Solaris Tuning Script
-----------------------------------------------------------------
Solaris Kernel Tuning...
 
File                 : /etc/system
Parameter tuning     :
 
1.   rlim_fd_max
Current Value        :   rlim_fd_max=
Recommended Value    :   rlim_fd_max=65536
 
2.   rlim_fd_cur
Current Value        :   rlim_fd_cur=
Recommended Value    :   rlim_fd_cur=65536
 
 
-----------------------------------------------------------------
Solaris TCP Tuning using ndd...
 
File                 : /etc/rc2.d/S71ndd_tcp
Parameter tuning     :
 
1.   /dev/tcp tcp_fin_wait_2_flush_interval
Current Value        :   /dev/tcp tcp_fin_wait_2_flush_interval 675000
Recommended Value    :   /dev/tcp tcp_fin_wait_2_flush_interval 67500
 
2.   /dev/tcp tcp_conn_req_max_q
Current Value        :   /dev/tcp tcp_conn_req_max_q 128
Recommended Value    :   /dev/tcp tcp_conn_req_max_q 8192
 
3.   /dev/tcp tcp_conn_req_max_q0
Current Value        :   /dev/tcp tcp_conn_req_max_q0 1024
Recommended Value    :   /dev/tcp tcp_conn_req_max_q0 8192
 
4.   /dev/tcp tcp_keepalive_interval
Current Value        :   /dev/tcp tcp_keepalive_interval 7200000
Recommended Value    :   /dev/tcp tcp_keepalive_interval 90000
 
5.  /dev/tcp tcp_smallest_anon_port
Current Value        :   /dev/tcp tcp_smallest_anon_port 32768
Recommended Value    :   /dev/tcp tcp_smallest_anon_port 1024
 
6.  /dev/tcp tcp_slow_start_initial
Current Value        :   /dev/tcp tcp_slow_start_initial 4
Recommended Value    :   /dev/tcp tcp_slow_start_initial 2
 
7.  /dev/tcp tcp_xmit_hiwat
Current Value        :   /dev/tcp tcp_xmit_hiwat 49152
Recommended Value    :   /dev/tcp tcp_xmit_hiwat 65536
 
8.  /dev/tcp tcp_recv_hiwat
Current Value        :   /dev/tcp tcp_recv_hiwat 49152
Recommended Value    :   /dev/tcp tcp_recv_hiwat 65536
 
9.  /dev/tcp tcp_ip_abort_cinterval
Current Value        :   /dev/tcp tcp_ip_abort_cinterval 180000
Recommended Value    :   /dev/tcp tcp_ip_abort_cinterval 10000
 
10.  /dev/tcp tcp_deferred_ack_interval
Current Value        :   /dev/tcp tcp_deferred_ack_interval 100
Recommended Value    :   /dev/tcp tcp_deferred_ack_interval 5
 
11.  /dev/tcp tcp_strong_iss
Current Value        :   /dev/tcp tcp_strong_iss 1
Recommended Value    :   /dev/tcp tcp_strong_iss 2
 
 
=====================================================================
Access Manager - Web Server Tuning Script
---------------------------------------------------------------------
Tuning Web Server Instance...
 
File                    : /var/opt/SUNWwbsvr7/https-xxxxxx.pstest.com/config/
server.xml (using wadm command line tool)
Parameter tuning     :
 
1.   Minimum Threads
Current Value        : min-threads=16
Recommended Value    : min-threads=10
 
2.   Maximum Threads
Current Value        : max-threads=128
Recommended Value    : max-threads=280
 
3.   Queue Size
Current Value        : queue-size=1024
Recommended Value    : queue-size=8192
 
4.   Native Stack Size
Current Value        : stack-size=131072
Recommended Value    : Use current value
 
5.   Acceptor Threads
Current Value        : acceptor-threads=1
Recommended Value    : acceptor-threads=2
 
6.   Statistic
Current Value        : enabled=true
Recommended Value    : enabled=false
 
7.   nativelibrarypathprefix
Current Value        : nativelibrarypathprefix=<No value set>
Recommended Value    : Append /usr/lib/lwp to nativelibrarypathprefix 
(if Solaris 8)
 
8.   Max and Min Heap Size
Current Value        : Min Heap: -Xms512M Max Heap: -Xmx768M
Recommended Value    : -Xms1344M -Xmx1344M
 
9.   LogGC Output
Current Value        : <No value set>
Recommended Value    : -Xloggc:/var/opt/SUNWwbsvr7/https-xxxxxx.pstest.com/logs/gc.log
 
10.   JVM in Server mode
Current Value        : <No value set>
Recommended Value    : -server
 
11.   JVM Stack Size
Current Value        : -Xss128k
Recommended Value    : -Xss128k
 
12.  New Size
Current Value        : -XX:NewSize=168M
Recommended Value    : -XX:NewSize=168M
 
13.  Max New Size
Current Value        : -XX:MaxNewSize=168M
Recommended Value    : -XX:MaxNewSize=168M
 
14.  Disable Explicit GC
Current Value        : -XX:+DisableExplicitGC
Recommended Value    : -XX:+DisableExplicitGC
 
15.  Use Parallel GC
Current Value        : <No value set>
Recommended Value    : -XX:+UseParNewGC
 
16.  Print Class Histogram
Current Value        : <No value set>
Recommended Value    : -XX:+PrintClassHistogram
 
17.  Print GC Time Stamps
Current Value        : <No value set>
Recommended Value    : -XX:+PrintGCTimeStamps
 
18.  OverrideDefaultLibthread (if Solaris 8)
Current Value        : <No value set>
Recommended Value    : -XX:+OverrideDefaultLibthread
 
19.  Enable Concurrent Mark Sweep GC
Current Value        : <No value set>
Recommended Value    : -XX:+UseConcMarkSweepGC
 
 
=====================================================================
Access Manager - Directory Server Tuner Preparation Script
Preparing Directory Server Tuner...
---------------------------------------------------------------------
Determining Current Settings...
Creating Directory Server Tuner tar file: ./amtune-directory.tar
a amtune-directory 29K
a amtune-utils 45K
 
Directory Server Tuner tar file: ./amtune-directory.tar
Steps to tune directory server:
1. Copy the DS Tuner tar to the DS System
2. Untar the DS Tuner in a temporary location
3. Execute the following script in 'REVIEW' mode : amtune-directory
4. Review carefully the recommended tunings for DS
5. If you are sure of applying these changes to DS, modify the following 
lines in amtune-directory
a. AMTUNE_MODE=
These parameters can also be modified or left unchange to use default values
b. AMTUNE_LOG_LEVEL=
c. AMTUNE_DEBUG_FILE_PREFIX=
d. DB_BACKUP_DIR_PREFIX=
Its highly recommended to run dsadm backup before running amtune-directory
 
=====================================================================
Access Manager - Access Manager Server Tuning Script
---------------------------------------------------------------------
Tuning /etc/opt/SUNWam/config/AMConfig.properties...
 
File                 : /etc/opt/SUNWam/config/AMConfig.properties
Parameter tuning     :
 
1.   com.iplanet.am.stats.interval
Current Value        : com.iplanet.am.stats.interval=60
Recommended Value    : com.iplanet.am.stats.interval=60
 
2.   com.iplanet.services.stats.state
Current Value        : com.iplanet.services.stats.state=file
Recommended Value    : com.iplanet.services.stats.state=file
 
3.   com.iplanet.services.debug.level
Current Value        : com.iplanet.services.debug.level=error
Recommended Value    : com.iplanet.services.debug.level=error
 
4.   com.iplanet.am.sdk.cache.maxSize
Current Value        : com.iplanet.am.sdk.cache.maxSize=10000
Recommended Value    : com.iplanet.am.sdk.cache.maxSize=38144
 
5.   com.iplanet.am.notification.threadpool.size
Current Value        : com.iplanet.am.notification.threadpool.size=10
Recommended Value    : com.iplanet.am.notification.threadpool.size=14
 
6.   com.iplanet.am.notification.threadpool.threshold
Current Value        : com.iplanet.am.notification.threadpool.threshold=100
Recommended Value    : com.iplanet.am.notification.threadpool.threshold=38144
 
7.   com.iplanet.am.session.maxSessions
Current Value        : com.iplanet.am.session.maxSessions=5000
Recommended Value    : com.iplanet.am.session.maxSessions=38144
 
8.   com.iplanet.am.session.httpSession.enabled
Current Value        : com.iplanet.am.session.httpSession.enabled=true
Recommended Value    : com.iplanet.am.session.httpSession.enabled=false
 
9.   com.iplanet.am.session.purgedelay
Current Value        : com.iplanet.am.session.purgedelay=60
Recommended Value    : com.iplanet.am.session.purgedelay=1
 
10.  com.iplanet.am.session.invalidsessionmaxtime
Current Value        : com.iplanet.am.session.invalidsessionmaxtime=10
Recommended Value    : com.iplanet.am.session.invalidsessionmaxtime=1
 
 
---------------------------------------------------------------------
Tuning /etc/opt/SUNWam/config/serverconfig.xml...
 
File                 : /etc/opt/SUNWam/config/serverconfig.xml
 
Recomended tuning parameters only. These paramters will not be tuned by the script.
You need to modify them manually in /etc/opt/SUNWam/config/serverconfig.xml.
The number should depend on number of Access Manager instances and the memory of
Directory Server.  Please refer to Access Manager Performance Tuning Guide.
 
1.   minConnPool
Current Value        : minConnPool=1
Recommended Value    : minConnPool=1
 
2.   maxConnPool
Current Value        : maxConnPool=10
Recommended Value    : maxConnPool=28
 
 
---------------------------------------------------------------------
Tuning LDAP Connection Pool in Global iPlanetAMAuthService...
 
Service              : iPlanetAMAuthService
SchemaType           : global
 
Recomended tuning parameters only. These paramters will not be tuned by the script.
If you want to tune these parameters, review data file /tmp/dsame-auth-core-tune.xml
and run it with amadmin command.  The number should depend on number of Access Manager
instances and the memory of Directory Server.  Please refer to Access Manager
Performance Tuning Guide.
 
1.   iplanet-am-auth-ldap-connection-pool-default-size
Recommended Value    : iplanet-am-auth-ldap-connection-pool-default-size=28:28
 
=====================================================================
Tuning Complete
#####################################################################

PERFTUNE

The perftune script runs the amtune script, but it also tunes the Portal Server. The following is an example of the perftune output:


Portal Tuning Script
---------------------------------------------------------------------
Tuning /var/opt/SUNWportal/portals/portal1/config/desktopconfig.properties...
 
File                 : /var/opt/SUNWportal/portals/portal1/config/
desktopconfig.properties
Parameter tuning     :
 
1.   callerPoolMinSize
Current Value        : callerPoolMinSize=0
Recommended Value    : callerPoolMinSize=128
 
2.   callerPoolMaxSize
Current Value        : callerPoolMaxSize=0
Recommended Value    : callerPoolMaxSize=256
 
3.   callerPoolPartitionSize
Current Value        : callerPoolPartitionSize=0
Recommended Value    : callerPoolPartitionSize=32
 
4.   templateScanInterval
Current Value        : templateScanInterval=30
Recommended Value    : templateScanInterval=3600
 
---------------------------------------------------------------------
Tuning /var/opt/SUNWportal/portals/portal1/config/PSLogConfig.properties...
 
File                 : /var/opt/SUNWportal/portals/portal1/config/
PSLogConfig.properties
Parameter tuning     :
 
1.   debug.com.sun.portal.level
Current Value        : debug.com.sun.portal.level=SEVERE
Recommended Value    : debug.com.sun.portal.level=FINE
 
=====================================================================

Thread Pools

The Java Virtual Machine (JVM) can support many threads of execution at once. To help performance, both Access Manager and Portal Server maintain one or more thread pools. Thread pools allow you to limit the total number of threads assigned to a particular task; the callerPool tuning parameters in the perftune output above are one example. When a request is passed from a browser into the web container, it flows through several thread pools. A thread pool contains an array of WorkerThread objects, which are the individual threads that make up the pool. WorkerThread objects start and stop as work arrives for them. If there is more work than there are WorkerThreads, the work backlogs until WorkerThreads free up. Assigning an insufficient number of threads to a thread pool can cause a bottleneck in the system that is hard to see. Assigning too many threads to a thread pool is also undesirable, but normally is not critical.
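For reference, the desktop caller thread pool appears in desktopconfig.properties as shown below; the values mirror the perftune recommendations listed earlier in this chapter.


# /var/opt/SUNWportal/portals/portal1/config/desktopconfig.properties
# (values from the perftune example above)
callerPoolMinSize=128
callerPoolMaxSize=256
callerPoolPartitionSize=32
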

RqThrottle — The RqThrottle parameter specifies the maximum number of simultaneous transactions that a Java Enterprise System web container can handle. This maximum is considered the thread limit.

Low-Memory Situations — If you need the web container to run in low-memory situations, reduce the thread limit to a bare minimum by lowering the value of RqThrottle. You can also reduce the maximum number of processes by lowering the MaxProcs value. Typically this value is 1 for the Java Enterprise System Web Server.

Under-Throttled Server — The server does not allow the number of active threads to exceed the thread limit. If the number of simultaneous requests reaches that limit, the server stops servicing new connections until old connections are freed up; waiting for old connections to be freed increases response time. In Sun Java Enterprise System web containers, the default RqThrottle value is 128. If you want your server to process more requests concurrently, increase the RqThrottle value. The symptom of an under-throttled server is long response time: a request from a browser establishes a connection to the server fairly quickly, but on an under-throttled server it can take a long time before the response comes back to the client. The best way to tell whether your server is being throttled is to see whether the number of active sessions is close to, or equal to, the maximum allowed by RqThrottle.
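In Web Server 7, the equivalent of RqThrottle is the max-threads setting of the thread pool in server.xml, which the amtune output earlier in this chapter adjusts through the wadm command-line tool. The fragment below is a sketch built from those recommended values.


<!-- server.xml thread-pool fragment (sketch; values from the amtune output above) -->
<thread-pool>
  <min-threads>10</min-threads>
  <max-threads>280</max-threads>
  <queue-size>8192</queue-size>
</thread-pool>
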

Access Manager Tuning

Portal Server takes advantage of the identity management and policy evaluation capabilities of Access Manager and is tightly integrated with the Access Manager solution. During the Portal Server installation process, the administrator is prompted for the passwords and credentials of the Access Manager server.

Access Manager must also be tuned properly for Portal applications. The amtune script provided with Access Manager takes care of most of the tuning an administrator needs for production systems, and it is highly recommended to study these changes and understand their applicability. The amtune output included earlier shows the recommended changes in a format that makes it easy for an administrator to find and make those changes as required.

Directory Server Tuning

Portal Server and Access Manager both store their schema and user data in Directory Server. As users of Portal Server or Access Manager customize their profiles, the desktop profile is saved as XML in Directory Server. It is important to tune the directory, index the most frequently searched attributes for faster response, and tune the cache sizes for optimized performance.
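As a hedged example, an equality index for a frequently searched attribute can be added with the Directory Server command-line tools. The command below assumes Directory Server 6 (dsconf) and uses placeholder host, suffix, and attribute names; verify the syntax against your Directory Server documentation.


# Sketch: add an index for a frequently searched attribute (placeholders shown);
# existing entries must then be reindexed -- see the Directory Server documentation
dsconf create-index -h ds-host -p 389 dc=example,dc=com attribute-name
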

Monitoring CPU Utilization

The mpstat utility on Solaris can be used to monitor CPU utilization, especially with multi-threaded applications running on multiprocessor machines, which is a typical configuration for enterprise solutions.

The mpstat command reports processor statistics in tabular form. Each row of the table represents the activity of one processor. The first table summarizes all activity since boot. Each subsequent table summarizes activity for the preceding interval. All values are rates listed as events per second unless otherwise noted.

Use mpstat with an interval argument of 5 to 10 seconds. A smaller interval might be more difficult to analyze, while a larger interval smooths the data by removing the spikes that could otherwise mislead the analysis.

Input

mpstat 10

Output


CPU minf mjf xcal  intr ithr  csw icsw migr smtx  srw syscl  usr sys  wt idl
  0    1   0 5529   442  302  419  166   12  196    0   775   95   5   0   0
  1    1   0  220   237  100  383  161   41   95    0   450   96   4   0   0
  4    0   0   27   192  100  178   94   38   44    0   100   99   1   0   0

What to Look For

CPU utilization is 100 percent minus the idle time shown in the idl column; that is, 0% idl means 100% CPU utilization.
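A rough way to turn the mpstat output into a utilization figure is to subtract the idl column (the last field) from 100 for each per-CPU row, as in the sketch below. Note that the first block of mpstat output reports averages since boot.


# Print per-CPU utilization (100 - idl) for each sample row; header lines
# are skipped because their first field is not numeric
mpstat 10 | awk '$1 ~ /^[0-9]+$/ { printf "CPU %s utilization: %d%%\n", $1, 100 - $NF }'
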

Monitoring Memory Utilization

The vmstat utility on UNIX can be used to monitor memory utilization. After having sufficient physical memory in the machine, the most important criterion is having sufficient swap space to sustain the virtual memory requirements of the Portal Server and Access Manager web containers, the Directory Server processes, and other associated processes.

The following example shows the input and output.

Input

% vmstat 5

Output


 kthr      memory            page            disk          faults      cpu
 r b w   swap  free  re  mf pi po fr de sr s0 s1 s2 s3   in   sy   cs us sy id
 0 0 0  11456  4120   1  41 19  1  3  0  2  0  4  0  0   48  112  130  4 14 82
 0 0 1  10132  4280   0   4 44  0  0  0  0  0 23  0  0  211  230  144  3 35 62
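
In the vmstat output, watch the swap and free columns and the page scan rate (sr): a sustained non-zero scan rate with shrinking free memory indicates memory pressure. Solaris swap usage can also be checked directly, as in the sketch below.


# Summary of allocated, reserved, and available swap space
swap -s

# Per-device swap listing
swap -l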