
Design and Develop Tests

This chapter describes the process of developing the tests that you perform during your project.

Overview of Test Development

It is important that test cases are developed in close cooperation among the tester, the business analyst, and the business user. The following process illustration shows some of the activities that should take place during test development.

Develop Tests Process

To be valid and complete, test cases must be written with a full understanding of the requirements, specifications, and usage scenarios.

The deliverables of the test development process include:

  • Requirement gaps. As a part of the design review process, the business analyst should identify business requirements that have incomplete or missing designs. This can be a simple list of gaps tracked in a spreadsheet. Gaps must be prioritized and critical issues scoped and reflected in the updated design. Lower priority gaps enter the change management process.

  • Approved technical design. This is an important document that the development team produces (not a testing-specific activity) that outlines the approach to solving a business problem. It should provide detailed process-flow diagrams, UI mock-ups, pseudo-code, and integration dependencies. The technical design should be reviewed by both business analysts and the testing team, and approved by business analysts.

  • Detailed test cases. Step-by-step instructions for how testers execute a test.

  • Test automation scripts. If test automation is a part of the testing strategy, the test cases need to be recorded as actions in the automation tool. The testing team develops the functional test automation scripts, while the IT team typically develops the performance test scripts.

Design Evaluation

The earliest form of testing is design evaluation. Testing during this stage of the implementation is often neglected. Development work should not start until the requirements are well understood and the design fully addresses them. All stakeholders should be involved in reviewing the design. Both the business analyst and the business user, who defined the requirements, should approve the design. The design evaluation process is illustrated in the following:

Evaluate Design Process

Reviewing Design and Usability

Two tools for identifying issues or defects are the Design Review and Usability Review. These early stage reviews serve two purposes. First, they provide a way for the development team to describe the components of the solution to the requirements. Second, they allow the team to identify missing or incomplete requirements early in the project. Many critical issues are introduced by incomplete or incorrect design. These reviews can be as formal or informal as deemed appropriate. Many customers have used design documents, white board sessions, and paper-based user interface mock-ups for these reviews.

Once the design is available, the business analyst should review it to make sure that the business objectives can be achieved with the system design. This review identifies functional gaps or inaccuracies. Usability reviews use the UI mock-ups to determine design effectiveness and help identify design inadequacies.

Task-based usability tests are the most effective. In this type of usability testing, the tester gives a user a task to complete (for example, create an activity), and using the user interface prototype or mock-up, the user describes the steps that he or she would perform to complete the task. Let the user continue without any prompting, and then measure the task completion rate. This UI testing approach allows you to quantify the usability of specific UI designs.
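For illustration only, the following minimal Python sketch shows how task completion rates could be tallied from such a session; the task names and counts are hypothetical examples, not measured data.

    # Minimal sketch: quantifying task-based usability results.
    # Task names and counts are hypothetical examples, not measured data.
    usability_results = {
        "Create an activity": {"attempted": 10, "completed": 8},
        "Log a service request": {"attempted": 10, "completed": 6},
    }

    for task, result in usability_results.items():
        rate = result["completed"] / result["attempted"]
        print(f"{task}: {rate:.0%} task completion rate")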

The development team is responsible for completing the designs for all business requirements. Having a rigorous design and review process can help avoid costly oversights.

Test Case Authoring

Based on the test case objective, requirements, design, and usage scenarios, the process of authoring test cases can begin. Typically, this activity is performed in close cooperation between the testing team and the business analysts. The following illustrates the process for authoring a test case.

Test Authoring Process

As you can see from the process, functional and performance test cases have different structures based on their nature.

Functional Test Cases

Functional test cases test a common business operation or scenario. The following table shows examples of functional test cases for each test phase.

Table: Common Functional Test Cases (test phase and example)

Unit test

  • Test common control-level navigation through a view. Test any field validation or default logic.

  • Invoke methods on an applet.

Module test

  • Test common module-level user scenarios (for example, create an account and add an activity).

  • Verify correct interaction between two related Siebel components (for example, Workflow Process and Business Service).

Process test

  • Test proper support for a business process.

User interface

  • Verify that a view has all specified applets, and each applet has specified controls with correct type and layout.

Data entity

  • Verify that a data object or control has the specified data fields with correct data types.

A functional test case may verify common control navigation paths through a view. Functional test cases typically have two components: test paths and test data.

Test Case

A test case describes the actions and objects that you want to test. A case is presented as a list of steps and the expected behavior at the completion of each step. The following shows an example of a test case. In the Detailed Step column, there are no data values in the steps. Instead, you see a parameter name in brackets as a placeholder. This parameterization approach is a common technique used with automation tools, and is helpful for creating reusable test cases.

Sample Test Case

Test Data

Frequently, you can use a single path to test many scenarios by simply changing the data that is used. For example, you can test the processing of both high-value and low-value opportunities by changing the opportunity data entered, or you can test the same path on two different language versions of the application. For this reason, it can be helpful to define the test path separately from the test data.
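As an illustration of this separation, the following minimal Python sketch runs one parameterized test path against several data rows. The step wording, parameter names, and data values are invented examples, not a real test plan.

    # Minimal sketch of keeping the test path separate from the test data:
    # the same parameterized steps run once per data row. Step wording,
    # parameter names, and values are illustrative only.
    test_path = [
        "Navigate to the Opportunities screen",
        "Create an opportunity named [OpportunityName] with revenue [Revenue]",
        "Verify the opportunity appears in the My Opportunities list",
    ]

    test_data = [
        {"OpportunityName": "High-value deal", "Revenue": "5,000,000"},
        {"OpportunityName": "Low-value deal", "Revenue": "5,000"},
    ]

    for row in test_data:
        for step in test_path:
            for name, value in row.items():
                step = step.replace(f"[{name}]", value)  # bind [Parameter] placeholders
            print(step)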

System Test Cases

System test cases are typically used in the system integration test phase to make sure that a component or module is built to specification. Functional tests focus on validating support for a scenario, while system tests make sure that the structure of the application is correct. The following table shows some examples of typical system tests.

Table: Common System Test Cases (object type and example)

Interface

Verify that an interface data structure has the correct data elements and correct data types.

Business Rule

Verify that a business rule (for example, assignment rule) handles all inputs and outputs correctly.
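As a hedged illustration of the interface check described in the table, the following Python sketch verifies that a record contains the expected elements with the expected types. The field names and sample record are assumed examples, not an actual Siebel integration object definition.

    # Hedged sketch of a system test that verifies an interface data structure
    # has the expected elements and types. Field names and the sample record
    # are assumed examples, not an actual Siebel integration object.
    expected_fields = {"AccountId": str, "AccountName": str, "Revenue": float, "Active": bool}

    def verify_interface_record(record):
        errors = []
        for field, expected_type in expected_fields.items():
            if field not in record:
                errors.append(f"missing element: {field}")
            elif not isinstance(record[field], expected_type):
                errors.append(f"{field}: expected {expected_type.__name__}, "
                              f"got {type(record[field]).__name__}")
        return errors

    print(verify_interface_record(
        {"AccountId": "1-ABC", "AccountName": "ACME", "Revenue": "not-a-number", "Active": True}))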

Performance Test Cases

You accomplish performance testing by simulating system activity using automated testing tools. Oracle has several software partners who provide load testing tools that have been validated to integrate with Siebel business applications. Automated load-testing tools are important because they allow you to accurately control the load level, and correlate observed behavior with system tuning. This topic describes the process of authoring test cases using an automation framework.

When you are authoring a performance test case, first document the key performance indicators (KPIs) that you want to measure. The KPIs can drive the structure of the performance test and also provide direction for tuning activities. Typical KPIs include resource utilization (CPU, memory) of any server component, uptime, response time, and transaction throughput.
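For example, the KPI list can be captured in a simple structure such as the following Python sketch; the KPI names and targets shown are placeholder assumptions, not recommended values.

    # Minimal sketch of documenting KPIs before building the performance test.
    # KPI names and targets are placeholder assumptions, not recommendations.
    kpis = [
        {"name": "Application server CPU utilization", "target": "< 70% at expected load"},
        {"name": "Application server memory utilization", "target": "< 80% at expected load"},
        {"name": "Average end-user response time", "target": "< 3 seconds per view transition"},
        {"name": "Transaction throughput", "target": ">= 1,000 orders per hour"},
        {"name": "Uptime during the test window", "target": "no component restarts"},
    ]
    for kpi in kpis:
        print(f"{kpi['name']}: target {kpi['target']}")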

The performance test case describes the types of users and number of users of each type that will be simulated in a performance test. The following information presents a typical test profile for a performance test.

Performance Test Profile
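As an illustration, a typical profile records each user type, the scenario that type runs, and the number of virtual users of that type. The following Python sketch uses invented names and counts, not sizing guidance.

    # Hedged sketch of a performance test profile: the user types to simulate
    # and the number of virtual users of each type. Names and counts are
    # illustrative, not sizing guidance.
    test_profile = {
        "Call center - service": {"users": 600, "scenario": "Respond to service requests"},
        "Call center - sales": {"users": 300, "scenario": "Make outbound sales calls"},
        "Partner portal": {"users": 100, "scenario": "Browse catalog and place an order"},
    }
    total_users = sum(entry["users"] for entry in test_profile.values())
    print(f"Total simulated users: {total_users}")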

Test cases should be created to mirror various states of your system usage, including:

  • Response time or throughput. Simulate the expected typical usage level of the system to measure system performance at a typical load. This allows evaluation against response time and throughput KPIs.

  • Scalability. Simulate loads at peak times (for example, end of quarter or early morning) to verify system scalability. Scalability (stress test) scenarios allow evaluation of system sizing and scalability KPIs.

  • Reliability. Determine the duration for which the application can be run without the need to restart or recycle components. Run the system at the expected load level for a long period of time and monitor system failures.

User Scenarios

The user scenario defines the type of user, as well as the actions that the user performs. The first step in authoring performance test cases is to identify the user types that are involved. A user type is a category of typical business user; you arrive at the list of user types by categorizing all users based on the transactions they perform. For example, you may have call center users who respond to service requests, and call center users who make outbound sales calls. For each user type, define a typical scenario.

It is important that scenarios accurately reflect the typical set of actions taken by a typical user, because scenarios that are too simple or too complex skew the test results. There is a trade-off between the effort to create and maintain a complex scenario and the accuracy of simulating a typical user. Complex scenarios require more time-consuming scripting, while overly simple scenarios may result in excessive database contention because all the simulated users attempt simultaneous access to the small number of tables that support a few operations.

Most user types fall into one of two usage patterns:

  • Multiple-iteration users tend to log in once, and then cycle through a business process multiple times (for example, call center representatives). The Siebel application has a number of optimizations that take advantage of persistent application state during a user session, and it is important to accurately simulate this behavior to obtain representative scalability results. The scenario should show the user logging in, iterating over a set of transactions, and then logging out.

  • Single-iteration scenarios emulate the behavior of occasional users such as e-commerce buyers, partners at a partner portal, or employees accessing ERM functions such as employee locator. These users typically execute an operation once and then leave the Siebel environment, and so do not take advantage of the persistent state optimizations for multiple-iteration users. The scenario should show the user logging in, performing a single transaction, and then logging out.

Sample Test Case Excerpt with Wait Time

As shown in the preceding table, the user wait times are specified in the scenario. It is important that wait times be distributed throughout the scenario, and reflect the time that an actual user takes to perform the tasks.
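The following minimal Python sketch contrasts the two usage patterns described above, with stub transaction functions standing in for recorded tool actions and illustrative wait times rather than measured values.

    import time

    # Hedged sketch contrasting the two usage patterns. The transaction
    # functions are stubs standing in for recorded tool actions, and the
    # wait times are illustrative, not measured values.
    def login(): print("log in")
    def logout(): print("log out")
    def create_service_request(): print("create service request")
    def place_order(): print("place order")

    def multiple_iteration_user(iterations=20):
        """Call center style: log in once, repeat the business process, log out."""
        login()
        for _ in range(iterations):
            create_service_request()
            time.sleep(30)   # user wait (think) time between iterations
        logout()

    def single_iteration_user():
        """Occasional user: log in, perform one transaction, and leave."""
        login()
        place_order()
        time.sleep(15)       # wait time within the single pass
        logout()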

Data Sets

The data already in the database and the data used in the performance scenarios can affect test results, because both influence database performance. It is important to define the data shape so that it is similar to what is expected in the production system. Many customers find it easiest to do this by using a snapshot of actual production data.
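As one way to capture the data shape, the following Python sketch records expected row counts and notes per entity; the entity names and volumes are assumed examples.

    # Hedged sketch of documenting the expected production data shape so that
    # the test database can be seeded (or a production snapshot validated)
    # against it. Entity names and volumes are assumed examples.
    production_data_shape = {
        "Accounts": {"rows": 1_000_000, "notes": "about 10% inactive"},
        "Contacts": {"rows": 4_000_000, "notes": "roughly 4 contacts per account"},
        "Service Requests": {"rows": 2_500_000, "notes": "volume skewed to the last 90 days"},
    }
    for entity, shape in production_data_shape.items():
        print(f"{entity}: {shape['rows']:,} rows ({shape['notes']})")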

Test Case Automation

Oracle partners with the leading test automation tool vendors, who provide validated integrations with Siebel business applications. Automation tools can be a very effective way to execute tests. In the case of performance testing, automation tools are critical to provide controlled, accurate test execution. When you have defined test cases, you can automate them using third-party tools.

Functional Automation

Using automation tools for functional or system testing can cost less than performing manual test execution. You should consider which tests to automate because there is a cost in creating and maintaining functional test scripts. Acceptance regression tests benefit the most from functional test automation technology.

For functional testing, automation provides the greatest benefit when testing relatively stable functionality. Typically, automating a test case takes approximately five to seven times as long as manually executing it once. Therefore, if a test case is not expected to be run more than seven times, the cost of automating it may not be justified.
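A simple break-even calculation under illustrative assumptions (a two-hour manual run, the upper end of the five-to-seven-times estimate, and negligible tester time per automated run) is sketched below.

    # Minimal break-even sketch for the 5x-7x rule of thumb above. The manual
    # execution time and the assumption that automated runs consume negligible
    # tester time are illustrative simplifications.
    manual_run_hours = 2                           # time to execute the test once by hand
    automation_cost_hours = 7 * manual_run_hours   # upper end of the 5x-7x estimate

    for planned_runs in (3, 7, 15):
        manual_total = planned_runs * manual_run_hours
        saving = manual_total - automation_cost_hours
        print(f"{planned_runs} planned runs: {saving:+} hours saved by automating")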

Performance Automation

Automation is necessary to conduct a successful performance test. Performance testing tools virtualize real users, allowing you to simulate thousands of users. In addition, these virtual users are less expensive, more precise, and more tolerant than actual users. The process of performance testing and tuning is iterative, so it is expected that a test case will be run multiple times to first identify performance issues, and then verify that any tuning changes have corrected observed performance issues.

Performance testing tools virtualize real users by simulating the HTTP requests made by the client for the given scenario. The Siebel Smart Web Client Architecture separates the client-to-server communication into two channels, one for layout and one for data. The protocol for the data channel communication is highly specialized; therefore Oracle has worked closely with leading test tool vendors so that their products support Siebel business applications. Because the communication protocol is highly specialized and subject to change, it is strongly recommended that you use a validated tool.

At a high level, the process of developing automated test scripts for performance testing has four steps. Please refer to the instructions provided by your selected tool vendor for details:

  • Record scripts for each of the defined user types. Use the automation tool’s recording capability to record the scenario documented in the test case for each user type. Keep in mind the multiple-iteration versus single-iteration distinction between user types. Many tools automatically record user wait times. Modify these values, if necessary, to make sure that the recorded values accurately reflect what was defined in the user type scenario.

  • Insert parameterization. Typically, the recorded script must be modified for parameterization. Parameterization allows you to pass in data values for each running instance of the script. Because each virtual user runs in parallel, this is important for segmenting data and avoiding uniqueness constraint violations.

  • Insert dynamic variables. Dynamic variables are generated based on data returned in a prior response. Dynamic variables allow your script to intelligently build requests that accurately reflect the server state. For example, if you execute a query, your next request should be based on a record returned in the query result set. Examples of dynamic variables in Siebel business applications include session IDs, row IDs, and timestamps. All validated load test tool vendors provide details on how dynamic variables can be used in their product (see the sketch after this list).

  • Verify scripts. After you have recorded and enhanced your scripts, run each script with a single user to verify that it functions as expected.
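A tool-agnostic Python sketch of the parameterization and dynamic-variable steps follows. The request format and response text are invented illustrations, not the actual Siebel data-channel protocol or any vendor's scripting API.

    import re

    # Tool-agnostic sketch of the parameterization and dynamic-variable steps
    # above. The response text and request format are invented illustrations,
    # not the actual Siebel data-channel protocol or any vendor's API.
    def extract_session_id(response_body):
        """Dynamic variable: capture the session id returned by the prior request."""
        match = re.search(r"SessionId=(\w+)", response_body)
        return match.group(1) if match else None

    def build_query_request(session_id, account_name):
        """Parameterization: each virtual user supplies its own data values."""
        return f"/app/query?SessionId={session_id}&AccountName={account_name}"

    previous_response = "OK SessionId=abc123 RowId=1-2XYZ"
    print(build_query_request(extract_session_id(previous_response), "ACME-042"))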

Oracle offers testing services that can help you design, build, and execute performance tests if you need assistance.

Best Practice

Using test automation tools can reduce the effort required to execute tests and allows a project team to achieve greater test coverage. Test automation is critical for performance testing because it provides an accurate way to simulate large numbers of users.