Sun Java System Portal Server 7.1 Deployment Planning Guide

Developing a Portal Prototype

To Develop a Portal Prototype

  1. Identify and remove obvious bottlenecks in the processor, memory, network, and disk.

  2. Set up a controlled environment to minimize the margin of error, defined as less than ten percent variation between identical runs.

    By knowing the starting data measurement baseline, you can measure the differences in data performance between sample gathering runs. Be sure measurements are taken over an adequate period of time and that you are able to capture and evaluate the results of these tests.

    Plan to have a dedicated machine for generating simulated load, separate from the Portal Server machine. A dedicated load-generation machine makes it easier to isolate the origin of performance problems.
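The ten percent margin-of-error criterion in step 2 can be checked with simple arithmetic. The following sketch is illustrative only and is not part of Portal Server; the two throughput figures are hypothetical measurements from two identical load runs.

```python
def percent_variation(run_a, run_b):
    """Percent difference between two measurements, relative to their mean."""
    mean = (run_a + run_b) / 2
    return abs(run_a - run_b) / mean * 100

# Hypothetical throughput (requests per second) from two identical runs.
run1, run2 = 412.0, 439.0
variation = percent_variation(run1, run2)
print(f"variation = {variation:.1f}%")
print("environment is controlled" if variation < 10
      else "investigate sources of variance")
```

If the variation between identical runs exceeds ten percent, tune the test environment before drawing conclusions from the measurements.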

  3. Define baseline performance for your deployment before you add the full complexity of the project.

  4. Using this initial benchmark, define the transaction volume your organization is committed to supporting in the short term and in the long run.

    Determine whether your current physical infrastructure is capable of supporting the transaction volume requirement you have defined.

    Identify the services that max out first as you increase activity to the portal. These services indicate how much headroom you have, and where to focus your tuning efforts.

  5. Develop and refine a prototype workload that closely simulates the anticipated production environment, as agreed upon by you, the portal administrators, and the portal developers.

  6. Measure and monitor your traffic regularly to verify your prototype.

    Track CPU utilization over time. Load usually comes in spikes, and keeping ahead of spikes requires a careful assessment of availability capabilities.

    Most organizations find that portal sites are “sticky” in nature. This means that site usage grows over time, even when the size of the user community is fixed, as users become more comfortable with the site. When the size of the user community also grows over time, a successful portal site can see substantial growth in CPU requirements over a short period of time.

    When monitoring a portal server’s CPU utilization, determine the average web page latency during peak load and how that latency differs from the overall average latency.

    Expect peak loads to be four to eight times higher than the average load, but over short periods of time.
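The four-to-eight-times rule of thumb above translates directly into a capacity estimate. The following sketch is illustrative and not part of Portal Server; the average load figure is a hypothetical measurement.

```python
def peak_capacity_range(avg_load, low_factor=4, high_factor=8):
    """Estimate the peak-load range implied by an observed average load,
    using the four-to-eight-times rule of thumb."""
    return avg_load * low_factor, avg_load * high_factor

# Hypothetical measured average load of 50 requests per second.
low, high = peak_capacity_range(50)
print(f"plan for peaks between {low} and {high} requests per second")
```

Size the deployment so that short bursts at the high end of this range do not exhaust CPU headroom.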

  7. Use the model for long-range scenario planning. The prototype can help you understand how dramatically you need to change your deployment to meet your overall growth projections for upcoming years.

  8. Keep the logging level at ERROR rather than MESSAGE. The MESSAGE level is verbose and can quickly exhaust disk space on the file system. The ERROR level still logs all error conditions and exceptions.
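In Access Manager-based deployments, the debug level is typically controlled through a property in AMConfig.properties. The excerpt below is a sketch; the file path shown is the default location on a Solaris installation and should be verified against your own install.

```properties
# Assumed default location: /etc/opt/SUNWam/config/AMConfig.properties
# Valid values include off, error, warning, and message.
com.iplanet.services.debug.level=error
```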

  9. Monitor customized portal applications such as portlets.

  10. Monitor the following areas:

    • Portal Desktop

    • Channel rendering time

    • Sun Java™ System Access Manager

    • Sun Java System Directory Server

    • Java Virtual Machine (JVM)

    • Web container

    The following sections explain issues in terms of portal performance variables and provide guidelines for determining portal efficiency.

Access Manager Cache and Sessions

The performance of a portal system is affected to a large extent by the cache hit ratio of the Access Manager cache. This cache is highly tunable, but a trade-off exists between memory used by this cache and the available memory in the rest of the heap.

You can enable the amSDKStats logs to monitor the number of active sessions on the server and the efficiency of the Directory Server cache. These logs are located by default in the /var/opt/SUNWam/stats directory. Use the com.iplanet.am.stats.interval parameter to set the logging interval. Do not use a value less than five (5) seconds. Values of 30 to 60 seconds give good output without impacting performance.

The com.iplanet.services.stats.directory parameter specifies the log location, whether output goes to a file or to the Portal Server administration console, and can also be used to turn the logs off. You must restart the server for changes to take effect. Logs are not created until the system detects activity.
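A minimal sketch of the corresponding AMConfig.properties entries follows, assuming the default stats directory named above. Verify the exact property names and file location against your installed Access Manager version.

```properties
# Logging interval in seconds; 30 to 60 gives good output
# without impacting performance. Do not go below 5.
com.iplanet.am.stats.interval=60

# Default stats log location from the text above.
com.iplanet.services.stats.directory=/var/opt/SUNWam/stats
```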


Note –

Multiple web container instances write logs to the same file.


The cache hit ratio displayed in the amSDKStats file gives both an interval value and an overall value since the server was started. Once a user logs in, the user’s session information remains in the cache until the cache fills up. When the cache is full, the oldest entries are removed first. If the server has not needed to remove a user’s entry, a subsequent login, even days later, can still retrieve the user’s information from the cache. Higher hit ratios yield much better performance. A hit ratio of at least 80 percent is a good target, although an even higher ratio is desirable if possible.
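The 80 percent target can be checked with simple arithmetic against the counters reported in the amSDKStats file. The sketch below is illustrative; the hit and miss counts are hypothetical values, not actual Portal Server output.

```python
def cache_hit_ratio(hits, misses):
    """Cache hit ratio as a percentage of total lookups."""
    total = hits + misses
    return 0.0 if total == 0 else hits / total * 100

# Hypothetical counters read from an amSDKStats log entry.
ratio = cache_hit_ratio(hits=9200, misses=800)
print(f"hit ratio = {ratio:.0f}%")
print("meets the 80% target" if ratio >= 80
      else "consider enlarging the Access Manager cache")
```

A persistently low ratio suggests the cache is too small for the active user population, at the cost of heap memory if you enlarge it.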

Thread Usage

Use the web container tools to monitor the number of threads being used to service requests. The number of threads actually used is generally lower than many estimates, especially on production sites where CPU utilization is usually far less than 100 percent.

Portal Usage Information

Portal Server includes a built-in reporting mechanism for monitoring portal usage by portal users. This mechanism reports which channels are accessed and how long the channels are accessed, and supports building a behavioral pattern of portal use. Portal monitoring can be administered by using the psconsole GUI or the psadmin command-line interface.

User Based Tracking (UBT) is also administered by using the psconsole GUI or the psadmin command-line interface. UBT is disabled by default. To enable user based tracking, set the property com.sun.portal.ubt.enable=true in the file /var/opt/SUNWportal/portals/portal1/config. The logging level (INFO, FINE, FINER, FINEST, or OFF) controls the extent of the data captured. Logs are routed to the file /var/opt/SUNWportal/portals/portal1/logs/%instance/ubt.%u.%g.log, and this location is configurable.
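As a sketch, the relevant properties might look as follows. Only com.sun.portal.ubt.enable is confirmed by the text above; the level property name and the exact file name under the config location are assumptions to verify against your installation.

```properties
# Assumed file under /var/opt/SUNWportal/portals/portal1/config
com.sun.portal.ubt.enable=true
# Assumed property name; one of OFF, INFO, FINE, FINER, FINEST.
com.sun.portal.ubt.level=INFO
```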