Sun Java System Access Manager 7.1 Performance Tuning and Troubleshooting Guide

Chapter 6 Best Practices for Performance Tuning and Testing

Using a planned, systematic approach to tuning will help you avoid most performance troubleshooting pitfalls. This chapter includes the following topics:

  • Avoiding Common Performance Testing and Tuning Mistakes

  • Using a Systematic Approach to Performance Tuning

Avoiding Common Performance Testing and Tuning Mistakes

Don't make the mistakes that deployment engineers and performance test teams commonly make. Deployment engineers usually construct the system and perform the functional tests. Next, the engineers hand the system over to the performance testing team. The testing team develops test plans and test scripts based on the targeted load assumptions. The project manager usually gives the testing team only a few hours or a few days to conduct the performance tests. Using this approach, the testing team usually encounters unexpected behaviors during the tests.

The testing team then realizes that performance tuning was not done before the tests. Tuning is hastily done, but problems still persist. The testing team starts to experiment with different parameter settings and configurations. This frequently leads to more problems, which jeopardize the schedule. Even when the testing team successfully produces a performance report, the report usually fails to cover test cases and information crucial to capacity planning and production launch support. For example, the report often does not capture the system capacity, request breakdowns, and the system stability under stress.

You can avoid these performance testing and tuning mistakes by using a systematic approach, and by allocating adequate project resources and time.

Using a Systematic Approach to Performance Tuning

The best practice is a systematic approach to performance testing with an allocation of a minimum of three weeks testing time. A good performance tuning plan includes the following phases:

  1. Constructing the System

  2. Automated Performance Tuning

  3. Related Systems Tuning

  4. Baseline Modular Performance Testing

  5. Advanced Performance Tuning

  6. Targeted Performance Testing

Constructing the System

During the system construction phase, the entire system is built step by step in a modular fashion. For a detailed example, see the document Deployment Example 1: Access Manager 7.1 Load Balancing, Distributed Authentication UI, and Session Failover. Each module in the example is built and then verified. It's always easier to verify a module build than to troubleshoot an entire system. The modular verification tests prevent configuration problems from being buried in the system. Some of these verification steps are performance related. For example, there are steps to verify that sticky load balancing is working properly. See To Configure the Access Manager Load Balancer in Deployment Example 1: Access Manager 7.1 Load Balancing, Distributed Authentication UI, and Session Failover.

Automated Performance Tuning

In this phase, you tune the system using the automated tuning script amtune that comes with the product. The amtune script automates most of the performance tuning and addresses most, if not all, Access Manager tuning needs. Manual tweaking is unnecessary and can cause harm unless you run into one of the known extreme cases.
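
For reference, a typical amtune session looks like the following minimal sketch. The install path and the password-file location are assumptions based on a default Solaris install; see Chapter 2, Access Manager Tuning Scripts, for the exact procedure for your platform.

# Sketch only -- the amtune location and arguments depend on your
# platform and install; see Chapter 2, Access Manager Tuning Scripts.
cd /opt/SUNWam/bin/amtune     # default Solaris install path (assumption)
vi amtune-env                 # set AMTUNE_MODE=REVIEW first, then CHANGE
./amtune /tmp/password_file   # password file location is an example

Run amtune in REVIEW mode first to inspect the recommended changes, then rerun it in CHANGE mode to apply them.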

Related Systems Tuning

In this phase, you manually tune Directory Server, any Web Servers that host Web Policy Agents, and any Application Servers that host J2EE Policy Agents. The typical tuning steps are as follows:

  1. Run amtune to tune the Access Manager system. For more detailed information, see Chapter 2, Access Manager Tuning Scripts.

  2. Follow the amtune onscreen prompts to tune the related Directory Server configuration instances. The following is an overview of the primary tuning steps you must complete (a hedged ldapmodify sketch follows this list):

    1. Increase the nsslapd-dbcachesize value.

    2. Relocate nsslapd-db-home-directory to the /tmp directory.

    For detailed information, see the Directory Server documentation.

  3. Manually tune the Directory Server user database instance if one is used. The following is an overview of the primary tuning steps you must complete:

    1. Increase the nsslapd-dbcachesize value.

    2. Relocate the nsslapd-db-home-directory to the /tmp directory.

  4. If the Access Manager sub-realm is pointing to an external Directory Server user database, then manually tune the sub-realm LDAP connection pool.

    The amtune script tunes only the LDAP connection pools of the root realm. See Tuning the LDAP Connection Pool and LDAP Configurations. You can configure the following parameters on the LDAPv3Repo data store:

    • LDAP Connection Pool Minimum Size

    • LDAP Connection Pool Maximum Size

  5. If you have installed a Web Policy Agent on a Sun Web Server, then manually tune that Web Server. You must configure the following parameters in magnus.conf (an illustrative snippet follows this list):

    • RqThrottle

    • RqThrottleMin

    • RqThrottleIncrement

    • ConnQueueSize

    If Access Manager is deployed on a Sun Web Server, the amtune script will modify the Web Server magnus.conf file. You can copy those changes and apply the same values on the Web Server that hosts the Web Policy Agent.

  6. If you have installed a J2EE Policy Agent on an application server, see Third-Party Web Containers for instructions on manually tuning both the J2EE Policy Agent and the application server. You must configure settings for heap sizes and for garbage collection (GC) behavior (an example of the JVM options involved follows this list).
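
For steps 2 and 3 above, the Directory Server cache attributes live in the ldbm database plug-in entry. The following is a hedged sketch of one way to apply them with ldapmodify; the host, port, bind credentials, and cache size are placeholder values, so size the cache for your own deployment and consult the Directory Server documentation first.

# Placeholder values -- adjust host, port, credentials, and sizes.
ldapmodify -h ds1host -p 389 -D "cn=Directory Manager" -w password <<EOF
dn: cn=config,cn=ldbm database,cn=plugins,cn=config
changetype: modify
replace: nsslapd-dbcachesize
nsslapd-dbcachesize: 1073741824
-
replace: nsslapd-db-home-directory
nsslapd-db-home-directory: /tmp/slapd-ds1
EOF
# Restart the Directory Server instance for the changes to take effect.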
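
For step 5, the relevant magnus.conf directives look like the following. The values shown are illustrative only; copy the values that amtune wrote to the Access Manager Web Server's magnus.conf rather than inventing your own.

# Illustrative values -- reuse the values amtune produced on the
# Access Manager Web Server instance.
RqThrottle 256
RqThrottleMin 16
RqThrottleIncrement 16
ConnQueueSize 4096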
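
For step 6, heap and GC tuning is done with standard JVM options in the application server configuration. The following example shows the kind of options involved; the values are illustrative, and the correct ones depend on the host's memory and the container's own tuning guide.

# Illustrative JVM options -- actual values depend on the host and container.
-Xms2048m -Xmx2048m                          # fixed heap avoids resize pauses
-XX:NewSize=512m -XX:MaxNewSize=512m         # young generation sizing
-XX:+UseParNewGC -XX:+UseConcMarkSweepGC     # low-pause collector combination
-XX:+PrintGCDetails -XX:+PrintGCTimeStamps   # log GC behavior for analysis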

Baseline Modular Performance Testing

The system is largely performance tuned after you've run the amtune script. But it is still too early to perform the final complex performance tests. It's always more difficult to troubleshoot performance problems in the entire system than to troubleshoot individual system components performing basic transactions. So in this phase, you perform several baseline tests, making sure the baseline test scripts you write follow the guidelines in the sections below.

Conducting Baseline Authentication Tests

You will need the following test scripts to generate the basic authentication workload:

  • A script that logs the user in and then logs the user out

  • A script that logs the user in and then lets the session time out

For all tests, randomly pick user IDs from a large user pool, with at least 100,000 and up to one million users. The load test script should first log the user in, then either log the user out or simply drop the session and let the session time out. A good practice is to remove all static page and graphics requests from the scripts. This makes the workload cleaner and well-defined, and the results easier to interpret.
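
The following is a minimal sketch of this per-user flow using curl. The host name, user naming scheme, and password are hypothetical, and the sketch assumes the default /amserver deploy URI with zero-page login (IDToken1/IDToken2) enabled. A real test uses a load generator such as JMeter or LoadRunner to run many of these flows in parallel.

#!/bin/ksh
# Hypothetical host and credentials; assumes zero-page login is allowed.
AM=http://am1host.example.com:8080
TESTUSER=u$(( (RANDOM * 32768 + RANDOM) % 1000000 ))  # random user from a large pool
JAR=/tmp/cookies.$$

# Log in; the session is returned in the iPlanetDirectoryPro cookie.
curl -s -c $JAR -b $JAR \
  "$AM/amserver/UI/Login?IDToken1=$TESTUSER&IDToken2=password" > /dev/null

# Log out (or skip this step and let the session time out instead).
curl -s -b $JAR "$AM/amserver/UI/Logout" > /dev/null
rm -f $JAR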

The test scripts should have zero think time to put the maximum workload on the system. The tests are not focused on response times in this phase. The baseline tests should determine the maximum system capacity based on maximum throughput. The number of test users, sometimes called test threads, is usually a few hundred. The exact number is unimportant. What is important is to achieve as close to 100% Access Manager CPU usage as possible while keeping the average response time at or above 500 ms. The 500 ms minimum is used to minimize the impact of relatively small network latencies. If the average response time is too low (for example, 50 ms), a large portion of it is likely to be caused by network latency, and the data will be contaminated with unnecessary noise.

Determine the Number of Test Users

In the following example baseline test, 200 users per Access Manager instance are used. For your tests, you could use 200 users for one Access Manager instance, 400 users for two Access Manager instances, 600 users for three Access Manager instances, and so forth. If the workload is too low, start with 100 users and increase in 100-user increments to find the minimum number. Once you have determined the minimum number of test users per Access Manager instance, use this number for the rest of the tests to make the results more comparable.

Determine the System Steady State

In the example baseline tests, the performance data is captured at the steady state. The system can take anywhere from 5 to 15 minutes to reach its steady state. Watch the tests. The following indicators will settle into predictable patterns when the system has reached its steady state:

  • Access Manager CPU usage

  • Transaction throughput (requests per second)

  • Average response time
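
While the tests run, you can watch these indicators settle with standard Solaris tools. For example:

# Sample per-CPU utilization every 5 seconds until it settles (Solaris).
mpstat 5
# Watch run-queue length and overall CPU idle time.
vmstat 5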

The following are examples of capturing transactions by categories on different systems.

On each Access Manager host, parse the container access log to gather the number of different transactions received. For example, if Access Manager is deployed on Sun Web Server, use the following command to obtain the result:


cd /opt/SUNWwbsvr/https-<am1host>/logs
cp access a    # snapshot the live access log
# One count (lines, words, characters) per request type, in this order:
# Login, naming, session, policy, jaxrpc, notification, Logout, total.
grep Login a | wc; grep naming a | wc; grep session a | wc
grep policy a | wc; grep jaxrpc a | wc; grep notifi a | wc
grep Logout a | wc; wc a

On each LDAP server, parse the LDAP access log to gather the number of different transactions received. For example, use the following command to obtain the result:


cd <slapd-xxx>/logs
cp access a    # snapshot the live access log
# Counts, in order: user BINDs, all BINDs, UNBINDs, searches, results, total.
grep BIND a | grep "uid=u" | wc; grep BIND a | wc
grep UNBIND a | wc; grep SRCH a | wc; grep RESULT a | wc; wc a

Conduct the Baseline Test

In this example, the baseline test follows this sequence:

  1. Log in and log out on each individual Access Manager instance directly.

  2. Log in and time out on each individual Access Manager instance directly.

  3. Log in and log out using a load balancer with one Access Manager instance.

  4. Log in and time out using a load balancer with one Access Manager instance.

  5. Log in and log out using a load balancer with two Access Manager instances.

  6. Log in and time out using a load balancer with two Access Manager instances.

If you have two Access Manager instances behind a load balancer, the above tests actually involve at least ten individual test runs: two test runs each for tests 1 through 4, one test run for test 5, and one test run for test 6.


Note –

To perform any login and timeout test, you must reduce the maximum session timeout value below the default. For example, change the default 30 minutes to one minute. Otherwise, at maximum throughput, so many sessions will linger on the system that memory will be exhausted quickly.


Analyze the Baseline Test Results

The data you capture will help you identify possible trouble spots in the system. The following are examples of things to look for in the baseline test results.

Compare the maximum authentication throughput of individual Access Manager instances with no load balancer in place.

If identical hardware is used in the test, the number of authentication transactions per second should be roughly the same for each Access Manager instance. If there is a large variance in throughput, investigate why one server behaves differently than another.

Compare the maximum authentication throughput of individual Access Manager instances that have a load balancer in front of them.

Using a load balancer should not cause a decrease in the maximum throughput. In the example above, test 3 should yield results similar to test 1 results, and test 4 should yield results similar to test 2 results. If the maximum throughput numbers go down when a load balancer is added to the system, investigate why the load balancer introduces significant overhead. For example, you could conduct a further test with static pages through the load balancer.

Verify that the maximum throughput on a load balancer with two Access Manager instances is roughly twice the throughput on a load balancer with one Access Manager instance behind it.

If the throughput numbers do not increase proportionately with the number of Access Manager instances, you have not configured sticky load balancing properly. Users logged in to one Access Manager instance are being redirected to another instance for logout. You must correct the load balancer configuration. For related information, see Configuring the Access Manager Load Balancer in Deployment Example 1: Access Manager 7.1 Load Balancing, Distributed Authentication UI, and Session Failover.

Verify that for each test, the Access Manager transaction counts report indicates no unexpected Access Manager requests.

For example, if you perform the Access Manager login and logout test, your test results may look similar to this. The rows correspond to the grep commands in order: Login, naming, session, policy, jaxrpc, notification, Logout, and then the total line count:


    1581   15810  139128
       0       0       0
       0       0       0
       0       0       0
       0       0       0
       0       0       0
    1609   16090  146419
    3198   31972  286043 a

This output indicates three important pieces of information. First, the system processed 1581 login requests and 1609 logout requests. These numbers are roughly equal, as expected, because each login is followed by one logout. Second, all other types of Access Manager requests were absent, which is also expected. Last, the total number of requests received, 3198, is roughly the sum of 1581 and 1609. This indicates there are no unexpected requests that we didn't grep in the command.

Troubleshoot the Problems You Find

A common problem is that when two Access Manager instances are both running, you see not only login and logout requests, but session requests as well. The test results may look similar to this:


    3159   31590  277992
       0       0       0
    5096   50960  486676
       0       0       0
       0       0       0
    1305   13050  127890
    3085   30850  280735
   12664  126621 1174471 a

In this example, for each logout request, there are now extra session and notification requests. The total number of requests does add up, which means there are no other unexpected requests. The reason for the session requests is that sticky load balancing is not working properly. A user logs in on one Access Manager instance, then is sent to another instance for logout. The second Access Manager instance must generate an extra session request to the originating instance to perform the logout. The extra session requests increase the system workload and reduce the maximum throughput the system can provide. In this case, the two Access Manager instances cannot deliver double the throughput of a single instance. Instead, there is a mere 20% increase. You can address the problem at this point by reconfiguring the load balancer. This is an example of a problem that should have been caught during the modular verification steps in the system construction phase.

Run Extended Tests for System Stability

Once the system has passed all the basic authentication tests, it is a good practice to put the system under the test workload for an extended period of time to test its stability. You can use test 6 and let it run for several hours. You may need to set up automated scripts to periodically remove the excess access logs generated so that they do not fill up the file systems.
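
For example, a cron entry along the following lines (the path is hypothetical) can keep the Web Server access log from filling the file system during a long run:

# Truncate the access log hourly without disturbing the server's open
# file handle. Adjust the path for your instance before using this.
0 * * * * cp /dev/null /opt/SUNWwbsvr/https-<am1host>/logs/access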

Conducting Baseline Authorization Tests

You will need the following test script to generate the basic authorization workload:

  • A script that logs the user in, accesses a protected static page twice, and then logs the user out

In this example, the baseline authorization test follows this sequence:

  1. Log in, access the protected page twice, and log out on each individual Access Manager instance directly.

  2. Log in, access the protected page twice, and log out using a load balancer with one Access Manager instance.

  3. Log in, access the protected page twice, and log out using a load balancer with two Access Manager instances.

It is a good practice to set up a single URL policy that allows all authenticated users to access the wildcard URL protected by the policy agent. This keeps the baseline tests simple.

For all tests, randomly pick user IDs from a large user pool, with at least 100,000 and up to one million users. The load test script logs the user in, accesses a protected static page twice, and then logs the user out. A good practice is to remove all other static page and graphics requests from the scripts. This makes the workload cleaner and well-defined, and the results easier to interpret.
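
The following is a minimal curl sketch of this flow, under the same assumptions as the authentication sketch earlier (hypothetical host names, default /amserver deploy URI, zero-page login enabled, and an agent-protected static page):

#!/bin/ksh
# Hypothetical hosts; the protected page sits behind the policy agent.
AM=http://am1host.example.com:8080
APP=http://agenthost.example.com
TESTUSER=u$(( (RANDOM * 32768 + RANDOM) % 1000000 ))
JAR=/tmp/cookies.$$

curl -s -c $JAR -b $JAR \
  "$AM/amserver/UI/Login?IDToken1=$TESTUSER&IDToken2=password" > /dev/null

# First access: the agent validates the session and evaluates policy
# against Access Manager (uncached, slower).
curl -s -b $JAR "$APP/protected/index.html" > /dev/null
# Second access: served from the agent cache, with no Access Manager
# round trip (fast).
curl -s -b $JAR "$APP/protected/index.html" > /dev/null

curl -s -b $JAR "$AM/amserver/UI/Logout" > /dev/null
rm -f $JAR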

The test scripts should have zero think time to put the maximum workload on the system. The tests are not focused on response times in this phase. The baseline tests should determine the maximum system capacity based on maximum throughput. The number of test users, sometimes called test threads, is usually a few hundred. The exact number is unimportant. What is important is to achieve as close to 100% Access Manager CPU usage as possible while keeping the average response time at or above 500 milliseconds. A well-executed test indicates the maximum system capacity while minimizing the impact of network latencies.

Determine the Number of Test Users

A typical test uses 200 users per Access Manager instance. For example, you could use 200 users for one Access Manager instance, 400 users for two Access Manager instances, 600 users for three Access Manager instances, and so on. If the workload is too low, start with 100 users and increase in 100-user increments to find the minimum number. Once the number of test users per Access Manager instance is determined, continue to use this number for the rest of the tests to make the results more comparable. If you have two Access Manager instances behind a load balancer, the above tests actually involve at least five individual test runs: two runs each for tests 1 and 2, and one run for test 3.

Verify that for each test, the response time of the second protected resource access is significantly lower than the response time of the first access. On the first access to a protected resource, the agent must perform uncached session validation and authorization, which requires the agent to communicate with the Access Manager servers. On the second access, the agent can perform cached session validation and authorization without contacting the Access Manager servers, so the second access tends to be significantly faster. It is common to see the first page access take one second (this depends heavily on the number of test users), while the second page access takes less than 10 ms (this does not depend much on the number of test users).

If the second page access is not as fast as it should be compared with the first page access, investigate why. Is it because the first page access is relatively too fast? If so, increase the number of test users to raise the response time of the first page access. Is it because the agent machine is undersized, so that no matter how much load you put on the system, Access Manager never reaches full capacity because the agent machine reaches full capacity first? In that case the agent machine, not Access Manager, is the bottleneck, and you can expect both the first and second page accesses to be slow while Access Manager responds quickly.

Analyze the Test Results

The data you capture will help you identify possible trouble spots in the system. The following are examples of things to look for in the baseline test results.

Compare the maximum authorization throughput of individual Access Manager instances with no load balancer in place.

If identical hardware is used in the test, the number of authorization transactions per second should be roughly the same for each Access Manager instance. If there is a large variance in throughput, investigate why one server behaves differently than another.

Compare the maximum authorization throughput of individual Access Manager instances that have a load balancer in front of them.

Using a load balancer should not cause a decrease in the maximum throughput. In the example above, test 2 should yield results similar to test 1 results. If the maximum throughput numbers go down when a load balancer is added to the system, investigate why the load balancer introduces significant overhead. For example, you could conduct a further test with static pages through the load balancer.

Verify that the maximum throughput on a load balancer with two Access Manager instances is roughly twice the throughput on a load balancer with one Access Manager instance behind it.

If the throughput numbers do not increase proportionately with the number of Access Manager instances, you have not configured sticky load balancing properly. Users logged in to one Access Manager instance are being redirected to another instance for logout, and you must correct the load balancer configuration. When sticky load balancing is properly configured, each Access Manager instance serves requests independently, so the system scales nearly linearly. For related information, see Configuring the Access Manager Load Balancer in Deployment Example 1: Access Manager 7.1 Load Balancing, Distributed Authentication UI, and Session Failover.

Verify that for each test, the Access Manager transaction counts report indicates no unexpected Access Manager requests.

For example, if you perform the Access Manager login and logout test, your test results should look similar to this:


    1079   10790   94952
    1032   10320   99072
    1044   10440  101268
    1064   10640  101080
       0       0       0
       0       0       0
    1066   10660   97006
    5312   53093  495052 a

This output indicates three pieces of information. First, the system processed 1079 login, 1032 naming, 1044 session, 1064 policy, and 1066 logout requests. These numbers are roughly equal: for each login, there is one naming call, one session call (to validate the user's session), one policy call (to authorize the user's access), and one logout. Second, all other types of Access Manager requests were absent, which is expected. Last, the total number of requests received, 5312, is roughly the sum of the login, naming, session, policy, and logout requests. This indicates there are no unexpected requests that we didn't grep in the command.

Troubleshoot Problems You Find

A common problem is that when two Access Manager instances are running, you see that the number of session requests exceeds the number of logins. For example, the test output may look similar to this:


    4075   40750  358600
    4167   41670  400032
   19945  199450 1913866
    3979   39790  381984
       0       0       0
    3033   30330  297234
    3946   39460  359086
   39194  391891 3713840 a

Note that for each login request, there are now about five session requests and 0.75 notification requests. The total number of requests does add up, though, which indicates there are no other unexpected requests. There are more session requests per login because sticky load balancing is not working properly. A user logged in on one Access Manager instance is sometimes sent to another Access Manager instance for session validation and logout. The second Access Manager instance must generate extra session and notification requests to the originating Access Manager instance to perform the request. The extra requests increase the system workload and reduce the maximum throughput the system can provide. In this case, the two Access Manager instances cannot deliver double the throughput of a single instance. You can address the problem by reconfiguring the load balancer. The problem should have been caught during the modular verification steps in the system construction phase.

Conduct Extended Stability Tests

Once you've passed all the basic authorization tests, it is a good idea to put the system under the workload for an extended period of time to test its stability. You can use test 3 and let it run for several hours. You may need to set up automated scripts to periodically remove the excess access logs generated so that they do not fill up the file systems.

Advanced Performance Tuning

The amtune script is specifically designed to address most, if not all, performance tuning needs. This means that you almost never need to manually tweak performance parameters. Given the large number of performance-related parameters, tweaking them invites more problems instead of solving them. However, there are a few special situations that amtune currently does not tune, or does not tune well. These are documented in Chapter 7, Advanced Performance Tuning. For each special situation, there is an explanation of what amtune does today, how to identify whether you need to manually tune the parameters, and how to tune them. It is worth repeating that most, if not all, of your performance tuning should be addressed by the amtune script. Performance problems are usually caused by poor system configuration. The special tuning cases should be used only if they actually apply to your specific situation.

Targeted Performance Testing

By the time you've reached this test phase, you've already done enough baseline tests to give you both confidence that the system performs properly and a rough idea of how the system should perform in your targeted performance test scenarios. Targeted performance tests typically have the projected real-world workload in mind. They usually include many more test users, but slower ones (by introducing realistic think time). The tests also try not to drive the system at maximum CPU usage. Instead, they usually focus on several targeted scenarios.

Regardless of which scenarios you are testing, if a problem occurs, it always helps to go back to the baseline tests to validate whether anything has changed in the environment, and to isolate the new elements (hardware or software configuration changes) that may have contributed to the problem. Until you've isolated the problem, haphazardly tweaking performance-related parameters is not productive, and usually does more harm and causes more confusion. Detailed troubleshooting methodology and techniques are beyond the scope of this document. See the troubleshooting chapters of this guide for suggestions on troubleshooting some common performance problems.