Testing Siebel Business Applications > Design and Develop Tests > Test Case Authoring >

Performance Test Cases

You accomplish performance testing by simulating system activity using automated testing tools. Siebel Systems has several software partners who provide load-testing tools that have been validated to integrate with Siebel 7. Automated load-testing tools are important because they allow you to control the load level accurately and correlate observed behavior with system tuning. This section describes the process of authoring test cases using an automation framework.

When you are authoring a performance test case, first document the key performance indicators (KPIs) that you want to measure. The KPIs can drive the structure of the performance test and also provide direction for tuning activities. Typical KPIs include resource utilization (CPU, memory) of any server component, uptime, response time, and transaction throughput.
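The KPIs above can be derived from raw timing samples collected during a test run. The following is a minimal sketch, assuming samples are response times in seconds gathered over a fixed measurement window; the function names and sample values are illustrative, not part of any Siebel tool.

```python
# Hypothetical sketch: computing response-time and throughput KPIs
# from response-time samples collected during one load-test window.
# Sample values and the nearest-rank percentile method are assumptions.

def percentile(values, pct):
    """Return the pct-th percentile of a list using the nearest-rank method."""
    ordered = sorted(values)
    rank = max(0, int(round(pct / 100.0 * len(ordered))) - 1)
    return ordered[rank]

def summarize_kpis(samples, window_seconds):
    """Summarize response-time and throughput KPIs for one test window."""
    return {
        "avg_response_s": sum(samples) / len(samples),   # mean response time
        "p95_response_s": percentile(samples, 95),       # tail response time
        "throughput_tps": len(samples) / window_seconds, # transactions/second
    }

kpis = summarize_kpis([0.8, 1.1, 0.9, 2.4, 1.0], window_seconds=60)
print(kpis)
```

Resource-utilization KPIs (CPU, memory) would come from server monitoring rather than from the test script itself; a driver typically merges both sources when reporting against targets.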

The performance test case describes the types of users and number of users of each type that will be simulated in a performance test. Figure 13 presents a typical test profile for a performance test.

Figure 13. Performance Test Profile

Test cases should be created to mirror various states of your system usage, including:

  • Response time or throughput. Simulate the expected typical usage level of the system to measure system performance at a typical load. This allows evaluation against response time and throughput KPIs.
  • Scalability. Simulate loads at peak times (for example, end of quarter or early morning) to verify system scalability. Scalability (stress test) scenarios allow evaluation of system sizing and scalability KPIs.
  • Reliability. Determine the duration for which the application can be run without the need to restart or recycle components. Run the system at the expected load level for a long period of time and monitor system failures.
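The three test-case types above can be captured as declarative profiles that a load-test driver consumes. The sketch below is a hypothetical example; the user counts, durations, and ramp-up scheme are illustrative assumptions, not recommended Siebel values.

```python
# Hypothetical sketch: the three performance test-case types expressed
# as declarative profiles. All numbers are illustrative assumptions.

TEST_PROFILES = {
    "response_time": {"users": 500,  "duration_min": 60,
                      "goal": "measure response time and throughput at typical load"},
    "scalability":   {"users": 1500, "duration_min": 120,
                      "goal": "verify system sizing and scalability at peak load"},
    "reliability":   {"users": 500,  "duration_min": 4320,
                      "goal": "run at expected load for days and monitor failures"},
}

def ramp_schedule(profile, steps=5):
    """Split the target user count into equal ramp-up steps,
    so load is applied gradually rather than all at once."""
    per_step = profile["users"] // steps
    return [per_step * i for i in range(1, steps + 1)]

print(ramp_schedule(TEST_PROFILES["scalability"]))  # [300, 600, 900, 1200, 1500]
```

Keeping the profiles declarative makes it easy to rerun the same scenarios at different load levels when comparing tuning changes.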

User Scenarios

The user scenario defines the type of user, as well as the actions that the user performs. The first step in authoring performance test cases is to identify the user types that are involved. A user type is a category of typical business user. You arrive at a list of user types by categorizing all users based on the transactions they perform. For example, you may have call center users who respond to service requests, and call center users who make outbound sales calls.

For each user type, define a typical scenario. It is important that scenarios accurately reflect the typical set of actions taken by a typical user, because scenarios that are too simple or too complex skew the test results. There is a trade-off to balance between the effort to create and maintain a complex scenario and the accuracy with which it simulates a typical user. Complex scenarios require more time-consuming scripting, while scenarios that are too simple may cause excessive database contention, because all the simulated users attempt simultaneous access to the small number of tables that support a few operations.

Most user types fall into one of two usage patterns:

  • Multiple-iteration users tend to log in once, and then cycle through a business process multiple times (for example, call center representatives). The Siebel application has a number of optimizations that take advantage of persistent application state during a user session, and it is important to accurately simulate this behavior to obtain representative scalability results. The scenario should show the user logging in, iterating over a set of transactions, and then logging out.
  • Single-iteration scenarios emulate the behavior of occasional users such as e-commerce buyers, partners at a partner portal, or employees accessing ERM functions such as employee locator. These users typically execute an operation once and then leave the Siebel environment, and so do not take advantage of the persistent state optimizations for multiple-iteration users. The scenario should show the user logging in, performing a single transaction, and then logging out.
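The two usage patterns above can be sketched as script skeletons. In this hypothetical example, `login`, `transaction`, and `logout` are stand-in callables; a real script would drive the application through a load-testing tool's API instead.

```python
# Hypothetical sketch of the two usage patterns. The callables passed
# in (login, transaction, logout) are stand-ins, not a real tool's API.

def multiple_iteration_user(login, transaction, logout, iterations):
    """Log in once, cycle through the business process several times,
    then log out -- keeping one session alive across iterations, which
    exercises the persistent application state held during a session."""
    session = login()
    for _ in range(iterations):
        transaction(session)
    logout(session)

def single_iteration_user(login, transaction, logout):
    """Log in, perform a single transaction, and leave the environment,
    as an occasional user (e-commerce buyer, portal partner) would."""
    session = login()
    transaction(session)
    logout(session)
```

Scripting the two patterns separately matters because a multiple-iteration script that logs in and out on every cycle would miss the session-state optimizations and understate scalability.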

Figure 14. Sample Test Case Excerpt with Wait Time

As shown in Figure 14, the user wait times are specified in the scenario. It is important that wait times be distributed throughout the scenario, and reflect the times that an actual user takes to perform the tasks.
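Distributed wait times can be scripted as randomized "think" pauses between scenario steps. This is a minimal sketch; the interval range and the injectable `sleep` parameter (which makes the helper testable without real delays) are assumptions for illustration.

```python
# Hypothetical sketch: randomized user think time between scenario
# steps. The 5-15 second range below is an illustrative assumption.

import random
import time

def think(min_s, max_s, sleep=time.sleep):
    """Pause for a random interval in [min_s, max_s] seconds to
    simulate a real user's think time; sleep is injectable for tests."""
    delay = random.uniform(min_s, max_s)
    sleep(delay)
    return delay

# In a scenario script, a wait would follow each step, for example:
#   open_service_request(session)   # stand-in step name
#   think(5, 15)                    # 5-15 s of user think time
```

Randomizing the waits also staggers the simulated users so they do not hit the server in lockstep, which would create artificial load spikes.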

Data Sets

The data in the database, and the data used in the performance scenarios, can affect test results because both influence database performance. It is important to define the data shape to be similar to what is expected in the production system. Many customers find it easiest to use a snapshot of actual production data sets to do this.
