By the time you reach this test phase, you have already run enough baseline tests to give you both confidence that the system performs properly and a rough idea of how it should perform in your targeted performance test scenarios. Targeted performance tests typically have the projected real-world workload in mind. They usually involve many more test users, but also slower users (by introducing realistic think time). These tests also deliberately avoid driving the system to maximum CPU usage. Instead, they usually focus on several scenarios. Examples:
Average workload tests that gauge the users' experience in terms of average response time.
Peak workload tests, run when demand peaks or when one or more servers are down and load transfer has occurred, that gauge the users' perceived average response time and the system's stability.
Stability tests that run an average or peak workload for an extended period of time, such as a day or a week.
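The scenarios above all depend on driving the system with realistic think time rather than back-to-back requests. The sketch below shows one way a single virtual user might be modeled; it is illustrative only, with a hypothetical `simulated_request` stub standing in for a real client call, and made-up latency and think-time ranges that you would replace with values from your own baseline tests.

```python
import random
import statistics
import time

def simulated_request():
    # Hypothetical stand-in for a real request to the system under test;
    # returns a fake response time in seconds. Replace with an actual
    # client call that measures elapsed time.
    return random.uniform(0.010, 0.050)

def run_virtual_user(num_requests, think_time=(0.5, 2.0), sleep=time.sleep):
    """Issue num_requests, pausing a randomized think time between each.

    Without think time, every virtual user hammers the server
    back-to-back, and the test measures maximum-CPU behavior instead of
    the projected real-world workload.
    """
    response_times = []
    for _ in range(num_requests):
        response_times.append(simulated_request())
        sleep(random.uniform(*think_time))
    return response_times

if __name__ == "__main__":
    # Demo run: sleep is stubbed out so the example finishes instantly.
    samples = run_virtual_user(100, sleep=lambda s: None)
    print(f"average response time: {statistics.mean(samples) * 1000:.1f} ms")
```

In a real test you would run many such users concurrently (for example, one per thread or via a load-testing tool) and aggregate the per-user response times into the average and peak figures the scenarios above call for.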
Regardless of which scenarios you are testing, if a problem occurs, it always helps to go back to the baseline tests to verify whether anything in the environment has changed, and to isolate the new elements (hardware or software configuration changes) that may have contributed to the problem. Until you have isolated the problem, haphazardly tweaking performance-related parameters is not productive; it usually does more harm than good and causes more confusion. Detailed troubleshooting methodology and techniques are beyond the scope of this document. See section name for suggestions on troubleshooting some common performance problems.