Performance Test Cases

You accomplish performance testing by simulating system activity with automated testing tools. Oracle has several software partners who provide load-testing tools that have been validated to integrate with Siebel business applications. Automated load-testing tools are important because they let you control the load level precisely and correlate observed behavior with system tuning. This topic describes the process of authoring test cases using an automation framework.

When you are authoring a performance test case, first document the key performance indicators (KPIs) that you want to measure. The KPIs can drive the structure of the performance test and also provide direction for tuning activities. Typical KPIs include resource utilization (CPU, memory) of any server component, uptime, response time, and transaction throughput.
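As a minimal sketch of how such KPIs might be computed from raw load-test measurements, the example below derives throughput, average response time, and a 95th-percentile response time from a list of sampled response times. The sample values and the 60-second measurement window are hypothetical; in practice these figures come from your load-testing tool's result logs.

```python
# Sketch: computing typical KPIs from raw load-test measurements.
# The response-time samples and test duration are illustrative values.
import statistics

response_times_ms = [120, 135, 150, 110, 480, 140, 130, 125, 900, 145]
test_duration_s = 60                      # length of the measurement window
completed_transactions = len(response_times_ms)

# Transaction throughput: completed transactions per second
throughput_tps = completed_transactions / test_duration_s

# Response-time KPIs
avg_response_ms = statistics.mean(response_times_ms)
p95_response_ms = sorted(response_times_ms)[int(0.95 * len(response_times_ms)) - 1]

print(f"Throughput: {throughput_tps:.2f} transactions/sec")
print(f"Average response time: {avg_response_ms:.1f} ms")
print(f"95th percentile response time: {p95_response_ms} ms")
```

Percentile-based response-time KPIs are often more informative than averages, because a small number of slow transactions (like the 900 ms outlier above) can dominate the mean.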

The performance test case describes the types of users, and the number of users of each type, that will be simulated in a performance test. The following image shows a typical test profile for a performance test, which includes the following information: Test Case #, Test Case Name, Test Case Description, Application, KPIs, User Type, Number Users, and Total Number of Users and Business Transactions.

Performance Test Profile
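A test profile like the one in the image can also be captured as structured data, which makes it easy to feed into an automation framework. The sketch below is a hypothetical example; the user types, counts, and field values are placeholders for the entries in your own profile.

```python
# Sketch: a performance test profile expressed as data.
# All field values below are hypothetical placeholders.
test_profile = {
    "test_case_id": "PT-01",
    "test_case_name": "Typical business day load",
    "application": "Siebel Call Center",
    "kpis": ["CPU utilization", "response time", "transaction throughput"],
    # User types and the number of simulated users of each type
    "user_types": {
        "call center agent": 400,
        "sales representative": 150,
        "manager": 50,
    },
}

# Total number of users is derived from the per-type counts
total_users = sum(test_profile["user_types"].values())
print(f"Total number of users: {total_users}")
```

Keeping the profile in one structure means the per-type counts and the total number of users cannot drift out of sync.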

Create test cases that mirror the various states of your system usage, including the following:

  • Response time or throughput. Simulate the expected typical usage level of the system to measure system performance at a typical load. This allows evaluation against response time and throughput KPIs.

  • Scalability. Simulate loads at peak times (for example, end of quarter or early morning) to verify system scalability. Scalability (stress test) scenarios allow evaluation of system sizing and scalability KPIs.

  • Reliability. Determine the duration for which the application can be run without the need to restart or recycle components. Run the system at the expected load level for a long period of time and monitor system failures.
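The three scenario types above differ mainly in load level and duration. As a rough sketch, the load levels might be derived from a single typical concurrent-user count; the 1.5x peak multiplier and 72-hour soak duration below are illustrative assumptions, not Siebel sizing guidance.

```python
# Sketch: deriving load levels for the three scenario types.
# The peak factor (1.5x) and soak duration (72 h) are assumptions.
typical_users = 600

scenarios = {
    "response time / throughput": {"users": typical_users, "duration_h": 1},
    "scalability (stress)": {"users": int(typical_users * 1.5), "duration_h": 1},
    "reliability (soak)": {"users": typical_users, "duration_h": 72},
}

for name, cfg in scenarios.items():
    print(f"{name}: {cfg['users']} users for {cfg['duration_h']} h")
```

The stress scenario raises the user count at a fixed duration, while the soak scenario holds the typical load for an extended period to expose failures such as memory leaks or component crashes.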