A test case represents an application behavior that needs to be verified. For each component, application, and business process, you can identify one or more test cases that need verification. Figure 6 shows a sample test case list. Each test plan contains multiple test cases.
Figure 6. Sample Test Plan: Test Case List
This example uses the following numbering schema for the Test Case ID:
TC1.x - New records and seed data required to support other test cases
TC2.x - Positive test cases
TC3.x - Negative test cases
TC4.x - Data Conversion testing
TC5.x - System integration testing
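As a hypothetical illustration of how the numbering schema above might be used when organizing a test case inventory (the helper and category strings below are invented, not part of the Siebel product):

```python
# Hypothetical sketch: mapping the Test Case ID numbering schema above
# to its categories. Function and dictionary names are illustrative only.

TEST_CASE_CATEGORIES = {
    "TC1": "New records and seed data",
    "TC2": "Positive test cases",
    "TC3": "Negative test cases",
    "TC4": "Data conversion testing",
    "TC5": "System integration testing",
}

def categorize(test_case_id: str) -> str:
    """Return the category for an ID such as 'TC3.2' based on its prefix."""
    prefix = test_case_id.split(".")[0]
    return TEST_CASE_CATEGORIES.get(prefix, "Unknown")
```

A consistent prefix makes it easy to filter a large test case list by purpose, for example when scheduling all negative test cases into a later functional cycle.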
Note how the test schedule is included in Figure 6. For example, TC1.0 - New Contact is performed during Functional Cycle 1 (Functional C1) of functional testing, whereas TC3.0 - Contracts occurs during Functional Cycle 2 (Functional C2) and during system integration testing.
During the design phase of the test plan, you must define a number of test types:
- Functional test cases. Functional test cases are designed to validate that the application performs a specified business function. The majority of these test cases take the form of user or business scenarios that resemble common transactions. Testers and business users should work together to compile a list of scenarios. Following the business process testing practice, functional test cases should be derived directly from the business process, where each step of the business process is clearly represented in the test case.
For example, if the test plan objective is to validate support for the Manage Quotes Business Process, then there should be test cases specified based on the process definition. Typically, this means that each process or subprocess has one or more defined test cases, and each step in the process is specified within the test case definition. Figure 7 illustrates the concept of a process-driven test case. Consideration must also be given to negative test cases that test behaviors when unexpected actions are taken (for example, creation of a quote with a create date before the current date).
Figure 7. Business Process-Driven Test Case with its Corresponding Process Diagram
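One way to picture a process-driven functional test case is as an ordered list of verifiable steps, one per business process step. The sketch below is hypothetical: the step descriptions are invented examples, not the actual Manage Quotes Business Process definition, and the class names are illustrative:

```python
# Hypothetical sketch of a process-driven functional test case: each step
# of the business process becomes one verifiable step in the test case.
# Step descriptions and class names are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class TestStep:
    description: str
    passed: bool = False

@dataclass
class FunctionalTestCase:
    test_case_id: str
    process: str
    steps: list = field(default_factory=list)

    def run(self, execute_step) -> bool:
        """Execute each step in process order; stop at the first failure."""
        for step in self.steps:
            step.passed = execute_step(step)
            if not step.passed:
                return False
        return True

quote_case = FunctionalTestCase(
    test_case_id="TC2.4",
    process="Manage Quotes",
    steps=[
        TestStep("Create a new quote"),
        TestStep("Add a line item to the quote"),
        TestStep("Submit the quote for approval"),
    ],
)
```

Keeping a one-to-one mapping between process steps and test steps makes it straightforward to trace a test failure back to the exact point in the business process where the application fell short.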
- Structural test cases. Structural test cases are designed to verify that the application structure is correct. They differ from functional cases in that structural test cases are based on the structure of the application, not on a scenario. Typically, each component has an associated structural test case that verifies that the component has the correct layout and definition (for example, verify that a view contains all the specified applets and controls).
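A minimal sketch of the structural check described above, assuming the specified and actual applet lists are available as sets (the view and applet names are invented; a real test would read them from the repository metadata):

```python
# Hypothetical sketch of a structural test case: verify that a view
# contains every applet named in its specification. Names are invented;
# a real check would query the application's repository definition.

def missing_applets(specified: set, actual: set) -> set:
    """Return applets required by the specification but absent from the view."""
    return specified - actual

spec = {"Contact List Applet", "Contact Form Applet"}
deployed = {"Contact List Applet", "Contact Form Applet", "Activity List Applet"}
gaps = missing_applets(spec, deployed)  # empty set means the structure passes
```

Because structural test cases compare against a fixed specification rather than a scenario, they lend themselves well to this kind of simple, repeatable comparison.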
- Performance test cases. Performance test cases are designed to verify the performance of the system or a transaction. There are three categories of performance test cases commonly used:
- Response time or throughput. Verifies the time for a set of specified actions. For example, tests the time for a view to paint or a process to run. Response time tests are often called performance tests.
- Scalability. Verifies the capacity of a specified system or component. For example, test the number of users that the system can support. Scalability tests are often called load or stress tests.
- Reliability. Verifies the duration for which a system or component can be run without the need for restarting. For example, test the number of days that a particular process can run without failing.
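The response-time category above can be sketched as a simple timing harness. This is a hypothetical illustration only: the timed action and the 5-second threshold are invented stand-ins, and real performance tests would drive the application under representative load:

```python
# Hypothetical sketch of a response-time performance test: time an action
# once and compare the elapsed time against a target threshold. The action
# and threshold are invented placeholders for illustration.

import time

def measure_response_time(action) -> float:
    """Run the action and return elapsed wall-clock seconds."""
    start = time.perf_counter()
    action()
    return time.perf_counter() - start

elapsed = measure_response_time(lambda: sum(range(100_000)))
within_target = elapsed < 5.0  # e.g. a view must render within 5 seconds
```

Scalability and reliability tests follow the same pattern at a different scale: rather than timing one action, they repeat or parallelize actions to find the user capacity or the duration the system sustains without restart.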
Each test case should have a primary testing phase identified. You can run a given test case several times in multiple testing phases, but typically the first phase in which you run it is considered the primary phase. The following describes how standard testing phases typically apply to Siebel business application deployments:
- Unit test. The objective of the unit test is to verify that a unit (also called a component) functions as designed. The definition of a unit is discussed in Component Inventory. In this phase of testing, a single component is functionally and structurally tested in depth.
For example, during the unit test the developer of a newly configured view verifies that the view structure meets specification and validates that common user scenarios, within the view, are supported.
- Module test. The objective of the module test is to validate that related components fit together to meet specified application design criteria. In this phase of testing, functional scenarios are primarily used. For example, testers will test common navigation paths through a set of related views. The objective of this phase of testing is to verify that related Siebel components function correctly as a module.
- Process test. The objective of the process test is to validate that business processes are supported by the Siebel application. During the process test, the previously tested modules are strung together to validate an end-to-end business process. Functional test cases, based on the defined business processes, are used in this phase.
- Data conversion test. The objective of the data conversion test is to validate that converted data is correct and meets all requirements. This testing should be performed before the integration test phase.
- Integration test. In the integration test phase, the integration of the Siebel business application with other back-end, middleware, or third-party components is tested. This phase includes functional test cases and system test cases specific to integration logic. For example, in this phase the integration of Siebel Orders with an ERP order processing system is tested.
- Acceptance test. The objective of the acceptance test is to validate that the system is able to meet user requirements. This phase consists primarily of formal and ad-hoc functional tests.
- Performance test. The objective of the performance test is to validate that the system will support specified performance KPIs, maintenance, and reliability requirements. This phase consists of performance test cases.