Performance Test Cases

You accomplish performance testing by simulating system activity with automated testing tools. Oracle has several software partners whose load-testing tools have been validated to integrate with Siebel business applications. Automated load-testing tools are important because they allow you to control the load level precisely and to correlate observed behavior with system tuning. This topic describes the process of authoring test cases using an automation framework.

When you author a performance test case, first document the key performance indicators (KPIs) that you want to measure. The KPIs can drive the structure of the performance test and also provide direction for tuning activities. Typical KPIs include resource utilization (CPU, memory) of each server component, uptime, response time, and transaction throughput.

The performance test case describes the types of users, and the number of users of each type, that will be simulated in a performance test. Figure 12 presents a typical test profile for a performance test. Test cases should be created to mirror the various states of your system usage.
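As a minimal sketch of how documented KPIs might be recorded and checked after a test run, the snippet below defines target values and evaluates measured results against them. The KPI names and thresholds are illustrative assumptions, not values from the Siebel documentation:

```python
# Illustrative KPI targets for a performance test run.
# Throughput is a floor (higher is better); the others are ceilings.
KPI_TARGETS = {
    "avg_response_time_sec": 2.0,    # maximum acceptable average response time
    "app_server_cpu_pct": 75.0,      # maximum CPU utilization on the app server
    "transactions_per_min": 500.0,   # minimum required transaction throughput
}

def evaluate_kpis(measured: dict) -> dict:
    """Return a pass/fail result for each KPI in the measured results."""
    results = {}
    for name, target in KPI_TARGETS.items():
        value = measured[name]
        if name == "transactions_per_min":
            results[name] = value >= target   # throughput must meet the floor
        else:
            results[name] = value <= target   # utilization/latency must stay under the ceiling
    return results

print(evaluate_kpis({
    "avg_response_time_sec": 1.4,
    "app_server_cpu_pct": 68.0,
    "transactions_per_min": 540.0,
}))
```

Capturing KPIs in a machine-readable form like this makes it straightforward to compare runs before and after a tuning change.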
User Scenarios

A user scenario defines a type of user and the actions that the user performs. The first step in authoring performance test cases is to identify the user types involved. A user type is a category of typical business user. You arrive at a list of user types by categorizing all users based on the transactions they perform. For example, you may have call center users who respond to service requests and call center users who make outbound sales calls.

For each user type, define a typical scenario. It is important that scenarios accurately reflect the set of actions taken by a typical user, because scenarios that are too simple or too complex skew the test results. You must balance the effort of creating and maintaining a complex scenario against the need to simulate a typical user accurately. Complex scenarios require more time-consuming scripting, while overly simple scenarios may cause excessive database contention, because all the simulated users attempt simultaneous access to the small number of tables that support a few operations. Most user types fall into one of two usage patterns.
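Once user types are identified, the test profile allocates a share of the total simulated load to each type. The sketch below shows one hedged way to size the virtual-user population per type; the user types and percentages are illustrative assumptions, not figures from the documentation:

```python
# Illustrative transaction mix: the fraction of total load driven by each user type.
USER_MIX = {
    "call_center_service": 0.60,   # agents responding to service requests
    "call_center_sales": 0.30,     # agents making outbound sales calls
    "sales_manager": 0.10,         # managers reviewing reports
}

def users_per_type(total_users: int) -> dict:
    """Allocate virtual users per type; any rounding remainder goes to the largest group."""
    counts = {t: round(total_users * p) for t, p in USER_MIX.items()}
    remainder = total_users - sum(counts.values())
    largest = max(USER_MIX, key=USER_MIX.get)
    counts[largest] += remainder
    return counts

print(users_per_type(1000))
# {'call_center_service': 600, 'call_center_sales': 300, 'sales_manager': 100}
```

Deriving the per-type counts from a single total makes it easy to scale the same mix up or down when testing different load levels.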
As shown in Figure 13, the user wait times are specified in the scenario. It is important that wait times be distributed throughout the scenario and reflect the time an actual user takes to perform each task.

Data Sets

The data in the database, and the data used in the performance scenarios, can affect test results because they influence the performance of the database. It is important to define a data shape similar to what is expected in the production system. Many customers find it easiest to do this by using a snapshot of a true production data set.
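The wait times described above (Figure 13) are usually randomized so that simulated users do not act in lockstep. The sketch below illustrates that idea with hypothetical scenario steps and wait ranges; real load-testing tools provide their own think-time controls, so this is only a conceptual model:

```python
import random

# Illustrative scenario: (step name, minimum wait in seconds, maximum wait in seconds).
SCENARIO = [
    ("log_in", 5, 10),
    ("query_service_requests", 10, 20),
    ("update_service_request", 30, 60),
    ("log_out", 3, 5),
]

def think_time(min_s: float, max_s: float, rng: random.Random) -> float:
    """Pick a randomized wait within the range so users do not pause identically."""
    return rng.uniform(min_s, max_s)

rng = random.Random(42)  # fixed seed for a repeatable demonstration
for step, lo, hi in SCENARIO:
    wait = think_time(lo, hi, rng)
    # A real load driver would execute the step, then sleep(wait) before the next one.
    print(f"{step}: wait {wait:.1f}s")
```

Spreading randomized waits across every step, rather than concentrating them at the start or end, keeps the simulated load pattern closer to real user behavior.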
Testing Siebel Business Applications | Copyright © 2013, Oracle and/or its affiliates. All rights reserved.