The following is a common approach that is fraught with peril. Deployment engineers construct the system and perform functional tests. Next, the engineers hand the entire system over to the performance testing team. The testing team develops test plans and test scripts based on the targeted load assumptions. The project manager usually gives the testing team only a few hours or a few days to conduct the performance tests.
The testing team then realizes that performance tuning was not done before the tests were run. Tuning is done hastily, but problems persist. The testing team starts to experiment with different parameter settings and configurations, which frequently leads to more problems and jeopardizes the schedule. Even when the testing team successfully produces a performance report, the report usually fails to cover test cases and information crucial to capacity planning and production launch support. For example, the report often does not capture system capacity, request breakdowns, or system stability under stress.
A deployment often consists of a half-dozen or more systems with complex behaviors. Testing the entire system directly and troubleshooting problems as they occur unnecessarily complicates issue resolution. Even a trivial problem in a lower-level component can take great effort to identify, isolate, and address. Problems buried further down may not be observable at a high level at all, and can remain hidden until discovered in production.
In application development, this approach would be similar to conducting system integration tests without first conducting unit tests. Every application developer knows how counter-productive it is to skip unit tests. It should come as no surprise that skipping the unit-level equivalent is just as counter-productive in performance testing.
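To make the analogy concrete, a unit-level performance test exercises a single lower-level component in isolation and checks its latency distribution before the whole system is put under load. The sketch below is a minimal, hypothetical example: `lookup_user` is a stand-in for any single component call, and the iteration count and metrics are illustrative assumptions, not a prescribed methodology.

```python
import statistics
import time

def lookup_user(user_id):
    """Stand-in for one lower-level component under test.

    Hypothetical: in practice this would be a real call, such as a
    cache lookup or a database query, exercised in isolation.
    """
    return {"id": user_id, "name": "user%d" % user_id}

def measure_latency(func, iterations=1000):
    """Time individual calls and summarize the latency distribution."""
    samples = []
    for i in range(iterations):
        start = time.perf_counter()
        func(i)
        samples.append(time.perf_counter() - start)
    samples.sort()
    return {
        "median_ms": statistics.median(samples) * 1000.0,
        "p95_ms": samples[int(len(samples) * 0.95)] * 1000.0,
    }

if __name__ == "__main__":
    stats = measure_latency(lookup_user)
    # Compare these numbers against the component's latency budget
    # before moving on to whole-system load tests.
    print("median=%.3f ms  p95=%.3f ms" % (stats["median_ms"], stats["p95_ms"]))
```

Running such a check per component first means that when the full-system test later misbehaves, the lower layers have already been characterized and can be ruled out quickly.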
You can avoid these performance testing and tuning mistakes by using a systematic, bottom-up approach, and by allocating adequate project resources and time.