Plan Testing Strategy

This chapter describes the process of planning your tests. It includes the following topics:

  • Overview of Test Planning

  • Test Objectives

  • Test Plans

  • Test Cases

  • Test Phase

  • Component Inventory

  • Risk Assessment

  • Test Plan Schedule

  • Test Environments

  • Performance Test Environment

Overview of Test Planning

The objective of the test planning process is to create the strategy and tactics that provide the proper level of test coverage for your project. The test planning process is illustrated in the following figure.

The inputs to this process are the business requirements and the project scope. The outputs, or deliverables, of this process include:

  • Test objectives. The high-level objectives for a quality release. The test objectives are used to measure project progress and deployment readiness. Each test objective has a corresponding business or design requirement.

  • Test plans. The test plan is an end-to-end test strategy and approach for testing the Siebel application. A typical test plan contains the following sections:

    • Strategy, milestones, and responsibilities. Sets the expectations for how testing is performed, how success is measured, and who is responsible for each task.

    • Test objectives. Defines and validates the test goals, objectives, and scope.

    • Approach. Outlines how and when to perform testing.

    • Entrance and exit criteria. Defines the inputs required to perform a test and the success criteria for passing a test.

    • Results reporting. Outlines the type and schedule of reporting.

  • Test cases. A test plan contains a set of test cases. Test cases are detailed, step-by-step instructions about how to perform a test. The instructions should be specific and repeatable by anyone who typically performs the tasks being tested. In the planning process, you identify the number and type of test cases to be performed.

  • Definition of test environments. The number, type, and configuration for test environments should also be defined. Clear entry and exit criteria for each environment should be defined.

Plan Testing Strategy Process

Test Objectives

The first step in the test planning process is to document the high-level test objectives. The test objectives provide a prioritized list of verification or validation objectives for the project. You use this list of objectives to measure testing progress, and verify that testing activity is consistent with project objectives.

Test objectives can typically be grouped into the following categories:

  • Functional correctness. Validation that the application correctly supports required business processes and transactions. List all of the business processes that the application is required to support. Also list any standards for which there is required compliance.

  • Authorization. Verification that actions and data are available only to those users with correct authorization. List any key authorization requirements that must be satisfied, including access to functionality and data.

  • Service level. Verification that the system will support the required service levels of the business. This includes system availability, load, and responsiveness. List any key performance indicators (KPIs) for service level, and the level of operational effort required to meet KPIs.

  • Usability. Validation that the application meets required levels of usability. List the required training level and the user KPIs that must be met.
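
Because the objectives are prioritized and used to measure testing progress, it can be useful to track verification status for each objective. The following is a minimal sketch in Python, using hypothetical objectives and a hypothetical record layout, of how progress could be summarized by priority; it is an illustration rather than a prescribed format.

```python
# Hypothetical objective list: (objective description, priority, verified?)
objectives = [
    ("Support Manage Quotes business process", 1, True),
    ("Enforce position-based data visibility", 1, False),
    ("Meet 3-second view response KPI", 2, False),
]

def progress_by_priority(objs):
    """Percentage of verified objectives for each priority level."""
    summary = {}
    for _, priority, verified in objs:
        done, total = summary.get(priority, (0, 0))
        summary[priority] = (done + (1 if verified else 0), total + 1)
    return {p: 100.0 * done / total for p, (done, total) in summary.items()}

print(progress_by_priority(objectives))  # {1: 50.0, 2: 0.0}
```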

The testing team, development team, and the business unit agree upon the list of test objectives and their priority. The following shows a sample Test Objectives document.

A test case covers one or more test objectives and contains the specific steps that testers follow to verify or validate the stated objectives. The details of the test plan are described in Test Plans.

Sample Test Objectives
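
To make the relationship between objectives and test cases concrete, the following sketch (Python, with hypothetical identifiers and field names) shows one way to record the mapping so that any objective not yet covered by a test case can be flagged. It illustrates the traceability idea only; the actual documents can be maintained in whatever tool the project uses.

```python
from dataclasses import dataclass, field

@dataclass
class TestObjective:
    objective_id: str      # for example, "OBJ-01" (hypothetical ID scheme)
    description: str
    requirement_id: str    # the business or design requirement it maps to
    priority: int          # 1 = highest

@dataclass
class TestCase:
    case_id: str           # for example, "TC2.1"
    title: str
    objective_ids: list = field(default_factory=list)  # objectives this case covers

def uncovered_objectives(objectives, cases):
    """Return objectives that no test case currently covers."""
    covered = {oid for case in cases for oid in case.objective_ids}
    return [obj for obj in objectives if obj.objective_id not in covered]

# Example usage with hypothetical data
objectives = [
    TestObjective("OBJ-01", "Validate quote creation process", "REQ-101", 1),
    TestObjective("OBJ-02", "Verify data visibility by position", "REQ-203", 2),
]
cases = [TestCase("TC2.1", "Create quote from opportunity", ["OBJ-01"])]

for obj in uncovered_objectives(objectives, cases):
    print(f"No test case covers {obj.objective_id}: {obj.description}")
```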

Test Plans

The purpose of the test plan is to define a comprehensive approach to testing. This includes a detailed plan for verifying the stated objectives, identifying any issues, and communicating the schedule and progress toward the stated objectives. The test plan has the following components:

  • Project scope. Outlines the goals and what is included in the testing project.

  • Test cases. Detailed test scenarios. Each test plan is made up of a list of test cases, their relevant test phases (schedule), and their relationship to requirements (traceability matrix).

  • Business process script inventory and risk assessment. A list of components (business process scripts) that require testing. Also describes related components and identifies high-risk components or scenarios that may require additional test coverage.

  • Test schedule. A schedule that describes when test cases will be executed.

  • Test environment. Outlines the recommendations for the various test environments (for example, Functional, System Integration, and Performance). This includes both software and hardware.

  • Test data. Identifies the data required to support testing.

Business process testing is an important best practice. Business process testing drives the test case definition from the definition of the business process. In business process testing, coverage is measured based on the percentage of validated process steps.
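
As a small illustration of step-based coverage, the following sketch (Python, with hypothetical process and step names) computes coverage as the percentage of business process steps validated by at least one passing test case.

```python
# Each business process is a list of steps; a step counts as validated when
# at least one passing test case exercises it. Hypothetical data for illustration.
process_steps = {
    "Manage Quotes": ["Create quote", "Add line items", "Apply pricing", "Submit for approval"],
}
validated_steps = {
    "Manage Quotes": {"Create quote", "Add line items", "Apply pricing"},
}

def step_coverage(process):
    """Percentage of process steps that have been validated."""
    steps = process_steps[process]
    validated = validated_steps.get(process, set())
    return 100.0 * sum(1 for step in steps if step in validated) / len(steps)

print(f"Manage Quotes coverage: {step_coverage('Manage Quotes'):.0f}%")  # 75%
```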

Best Practice

Functional testing based on a required business process definition provides a structured way to design test cases, and a meaningful way to measure test coverage based on business process steps.

Business process testing is described in more detail in the topics that follow.

Test Cases

A test case represents an application behavior that needs to be verified. For each component, application, and business process you can identify one or more test cases that need verification. The following shows a sample test case list. Each test plan contains multiple test cases.

Sample Test Plan: Test Case List

This example uses the following numbering schema for the Test Case ID:

TC1.x – New records and seed data required to support other test cases

TC2.x – Positive test cases

TC3.x – Negative test cases

TC4.x – Data Conversion testing

TC5.x – System integration testing

Note how the test schedule is included in the figure. For example, TC1.0 – New Contact is performed during Functional Cycle 1 (Functional C1) of functional testing, whereas TC3.0 – Contracts occurs during Functional Cycle 2 (Functional C2) and during system integration testing.
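
The numbering schema above can also be applied programmatically, for example to group test cases by category when reporting the schedule. The following sketch uses Python; the cycle assignments are hypothetical entries that mirror the examples in this topic.

```python
import re

# Categories implied by the Test Case ID numbering schema described above.
TC_CATEGORIES = {
    "TC1": "New records and seed data",
    "TC2": "Positive test cases",
    "TC3": "Negative test cases",
    "TC4": "Data conversion testing",
    "TC5": "System integration testing",
}

def categorize(test_case_id):
    """Map a test case ID such as 'TC3.0' to its schema category."""
    match = re.fullmatch(r"(TC[1-5])\.\d+", test_case_id)
    if not match:
        raise ValueError(f"Unrecognized test case ID: {test_case_id}")
    return TC_CATEGORIES[match.group(1)]

# Hypothetical schedule entries.
schedule = {"TC1.0": ["Functional C1"], "TC3.0": ["Functional C2", "System Integration"]}

for tc_id, phases in schedule.items():
    print(f"{tc_id} ({categorize(tc_id)}): {', '.join(phases)}")
```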

During the Design phase of the test plan, there are a number of test types that you must define:

  • Functional test cases. Functional test cases are designed to validate that the application performs a specified business function. The majority of these test cases take the form of user or business scenarios that resemble common transactions. Testers and business users should work together to compile a list of scenarios. Following the business process testing practice, functional test cases should be derived directly from the business process, where each step of the business process is clearly represented in the test case.

    For example, if the test plan objective is to validate support for the Manage Quotes Business Process, then there should be test cases specified based on the process definition. Typically, this means that each process or subprocess has one or more defined test cases, and each step in the process is specified within the test case definition. The following image illustrates the concept of a process-driven test case. Consideration must also be given to negative test cases that test behavior when unexpected actions are taken (for example, creation of a quote with a create date before the current date).

Business Process-Driven Test Case with its Corresponding Process Diagram

  • Structural test cases. Structural test cases are designed to verify that the application structure is correct. They differ from functional test cases in that structural test cases are based on the structure of the application, not on a scenario. Typically, each component has an associated structural test case that verifies that the component has the correct layout and definition (for example, verify that a view contains all the specified applets and controls).

  • Performance test cases. Performance test cases are designed to verify the performance of the system or a transaction. There are three categories of performance test cases commonly used:

    • Response time or throughput. Verifies the time for a set of specified actions. For example, test the time for a view to paint or a process to run. Response time tests are often called performance tests; a minimal response-time sketch follows this list.

    • Scalability. Verifies the capacity of a specified system or component. For example, test the number of users that the system can support. Scalability tests are often called load or stress tests.

    • Reliability. Verifies the duration for which a system or component can be run without the need for restarting. For example, test the number of days that a particular process can run without failing.
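
As a concrete example of the response time category, the following sketch (Python, with a hypothetical URL and KPI threshold) times a set of requests against a view and compares an approximate 95th percentile to the KPI. It is a minimal illustration rather than a replacement for a dedicated load testing tool; scalability and reliability tests follow the same pattern but vary concurrency and duration instead.

```python
import statistics
import time
import urllib.request

# Hypothetical target and KPI; substitute the view or transaction under test.
TARGET_URL = "http://testserver.example.com/sales/quotes"
KPI_SECONDS = 3.0   # maximum acceptable response time
SAMPLES = 20

def measure_response_times(url, samples):
    """Issue repeated requests and record the elapsed time for each."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        with urllib.request.urlopen(url) as response:
            response.read()               # include full download in the timing
        times.append(time.perf_counter() - start)
    return times

times = measure_response_times(TARGET_URL, SAMPLES)
p95 = statistics.quantiles(times, n=20)[18]   # approximate 95th percentile
print(f"95th percentile: {p95:.2f}s (KPI: {KPI_SECONDS}s)")
print("PASS" if p95 <= KPI_SECONDS else "FAIL")
```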

Test Phase

Each test case should have a primary testing phase identified. You can run a given test case several times in multiple testing phases, but typically the first phase in which you run it is considered the primary phase. The following describes how standard testing phases typically apply to Siebel business application deployments:

  • Unit test. The objective of the unit test is to verify that a unit (also called a component) functions as designed. The definition of a unit is discussed in Component Inventory. In this phase of testing, a single component is tested functionally and structurally in depth.

    For example, during the unit test the developer of a newly configured view verifies that the view structure meets specification and validates that common user scenarios, within the view, are supported.

  • Module test. The objective of the module test is to validate that related components fit together to meet specified application design criteria. In this phase of testing, functional scenarios are primarily used. For example, testers will test common navigation paths through a set of related views. The objective of this phase of testing is to verify that related Siebel components function correctly as a module.

  • Process test. The objective of the process test is to validate that business processes are supported by the Siebel application. During the process test, the previously-tested modules are strung together to validate an end-to-end business process. Functional test cases, based on the defined business processes, are used in this phase.

  • Data conversion test. The objective of the data conversion test is to validate that the data is properly converted and meets all requirements. This testing should be performed before the integration test phase.

  • Integration test. In the integration test phase, the integration of the Siebel business application with other back-end, middleware, or third-party components is tested. This phase includes functional test cases and system test cases specific to integration logic. For example, in this phase the integration of Siebel Orders with an ERP Order Processing system is tested.

  • Acceptance test. The objective of the acceptance test is to validate that the system is able to meet user requirements. This phase consists primarily of formal and ad-hoc functional tests.

  • Performance test. The objective of the performance test is to validate that the system will support specified performance KPIs, maintenance, and reliability requirements. This phase consists of performance test cases.
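
The primary-phase rule described at the start of this topic can be expressed very simply: a test case may run in several phases, and its primary phase is the earliest phase in which it runs. The following is a minimal Python sketch with hypothetical data.

```python
# Standard phases in execution order, as described in this topic.
PHASE_ORDER = [
    "Unit", "Module", "Process", "Data Conversion",
    "Integration", "Acceptance", "Performance",
]

def primary_phase(phases_run):
    """Return the earliest phase in which the test case runs."""
    return min(phases_run, key=PHASE_ORDER.index)

# Hypothetical example: a case run in both Integration and Acceptance.
print(primary_phase(["Acceptance", "Integration"]))  # Integration
```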

Component Inventory

The Component Inventory is a comprehensive list of the applications, modules, and components in the current project. Typically, the component inventory is done at the project level, and is not a testing-specific activity. There are two ways projects typically identify components. The first is to base component definition on the work that needs to be done (for example, specific configuration activities). The second method is to base the components on the functionality to be supported. In many cases, these two approaches produce similar results. A combination of the two methods is most effective in making sure that the test plan is complete and straightforward to execute. The worksheet shown in the following figure is an example of a component inventory.

Sample Component Inventory Document

Risk Assessment

A risk assessment is used to identify those components that carry higher risk and may require enhanced levels of testing. The following characteristics increase component risk:

  • High business impact. The component supports high business-impact business logic (for example, complex financial calculation).

  • Integration. This component integrates the Siebel application with an external or third-party system.

  • Scripting. This component includes the coding of browser script, eScript, or VB script.

  • Ambiguous or incomplete design. This component design is either ambiguous (for example, multiple implementation options described) or the design is not fully specified.

  • Availability of data. Performance testing requires production-like data (a data set that has the same shape and size as that of the production environment). This task requires planning, and the appropriate resources to stage the testing environment.

  • Downstream dependencies. This component is required by several downstream components.

As shown in the image in Component Inventory, one column of the component inventory assigns a risk score to each component based on the preceding guidelines. In this example, one risk point is given to a component for each criterion met. The scoring system should be defined to correctly represent the relative risk between components. Performing a risk assessment is important for completing a test plan, because the risk assessment provides guidance on the sequence and amount of testing required.
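
The one-point-per-criterion scoring described above can be kept very simple, as in the following Python sketch with hypothetical component data; a weighted scheme can be substituted if some criteria carry more risk than others.

```python
# Risk criteria from this topic; True means the criterion applies to the component.
RISK_CRITERIA = [
    "high_business_impact", "integration", "scripting",
    "ambiguous_design", "data_availability", "downstream_dependencies",
]

def risk_score(component):
    """One point per criterion met, as in the example scoring system."""
    return sum(1 for criterion in RISK_CRITERIA if component.get(criterion, False))

# Hypothetical component inventory entries.
components = [
    {"name": "Quote Pricing", "high_business_impact": True, "scripting": True,
     "downstream_dependencies": True},
    {"name": "Contact List Applet"},
]

for component in sorted(components, key=risk_score, reverse=True):
    print(f"{component['name']}: risk score {risk_score(component)}")
```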

Best Practice

Performing a risk assessment during the planning process allows you to design your test plan in a way that minimizes overall project risk.

Test Plan Schedule

For each test plan, a schedule of test case execution should be specified. The schedule is built using four different inputs:

  • Overall project schedule. The execution of all test plans must be consistent with the overall project schedule.

  • Component development schedule. The completion of component configuration is a key input to the testing schedule.

  • Environment availability. The availability of the required test environment needs to be considered in constructing schedules.

  • Test case risk. The risk associated with components under test is another important consideration in the overall schedule. Components with higher risk should be tested as early as possible, as shown in the sketch after this list.
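
The following sketch (Python, with hypothetical dates and risk scores) illustrates how these inputs can be combined: a test case is not scheduled before its component and environment are ready, and higher-risk cases are ordered earlier within those constraints.

```python
from datetime import date

# Hypothetical test cases carrying the scheduling inputs described above.
test_cases = [
    {"id": "TC2.3", "risk": 4, "component_ready": date(2024, 3, 1),
     "environment_available": date(2024, 3, 4)},
    {"id": "TC2.7", "risk": 1, "component_ready": date(2024, 2, 20),
     "environment_available": date(2024, 3, 4)},
]

def earliest_start(tc):
    """A test case cannot start before its component and environment are ready."""
    return max(tc["component_ready"], tc["environment_available"])

# Order by earliest possible start date, then by descending risk.
schedule = sorted(test_cases, key=lambda tc: (earliest_start(tc), -tc["risk"]))
for tc in schedule:
    print(tc["id"], earliest_start(tc), "risk", tc["risk"])
```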

Test Environments

The specified test objectives influence the test environment requirements. For example, service level test objectives (such as system availability, load, and responsiveness) often require an isolated environment to verify. In addition, controlled test environments can help:

  • Provide integrity of the application under test. During a project, at any given time there are multiple versions of a module or system configuration. Maintaining controlled environments can make sure that tests are executed on the appropriate versions. Without these controls, significant time can be wasted executing tests on incorrect versions of a module or debugging environment configuration issues.

  • Control and manage risk as a project nears rollout. There is always a risk associated with introducing configuration changes during the lifecycle of the project. For example, changing the configuration just before the rollout carries a significant amount of risk. Using controlled environments allows a team to isolate late-stage and risky changes.

It is typical to have established Development, Functional Testing, System Testing, User Acceptance Testing, Performance Testing, and Production environments to support testing. More complex projects often include more environments, or parallel environments to support parallel development. Many customers use standard code control systems to facilitate the management of code across environments.

The environment management approach includes the following components:

  • Named environments and migration process. A set of named test environments, each with a specific purpose (for example, an integration test environment), and a clear set of environment entry and exit criteria. Typically, the movement of components from one environment to the next requires that each component pass a predefined set of test cases, and is done with the appropriate level of controls (for example, code control and approvals). A gate-check sketch follows this list.

  • Environment audit. A checklist of system components and configuration for each environment. Audits are performed prior to any significant test activity. The Environment Verification Tool can be used to facilitate the audit of test environments. For help with the Environment Verification Tool, see Doc ID 477105.1 on My Oracle Support. This document was previously published as Siebel Technical Note 467.

  • Environment schedule. A schedule that outlines the dates when test cases will be executed in a given environment.
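
The following is a minimal sketch (Python, with hypothetical environment names, test case IDs, and results) of the kind of gate check implied by the migration process above: a component is promoted to the next environment only when all of its predefined exit-criteria test cases have passed.

```python
# Hypothetical exit criteria: test cases that must pass before a component
# can be promoted from one named environment to the next.
EXIT_CRITERIA = {
    ("Functional Test", "System Test"): ["TC2.1", "TC2.2", "TC3.1"],
}

def ready_to_promote(component_results, source, target):
    """Return True only if every required test case passed in the source environment."""
    required = EXIT_CRITERIA.get((source, target), [])
    return all(component_results.get(tc_id) == "PASS" for tc_id in required)

# Hypothetical results recorded for a component in the Functional Test environment.
results = {"TC2.1": "PASS", "TC2.2": "PASS", "TC3.1": "FAIL"}
print(ready_to_promote(results, "Functional Test", "System Test"))  # False
```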

Performance Test Environment

In general, the more closely the performance test environment reflects the production environment, the more applicable the test results will be. It is important that the performance test environment includes all of the relevant components to test all aspects of the system, including integration and third-party components. Often it is not feasible to build a full duplicate of the production configuration for testing purposes. In that case, the following scaled-down strategy should be employed for each tier:

  • Web Servers and Siebel Servers. To scale down the Web and application server tiers, the individual servers should be maintained in the production configuration and the number of servers scaled down proportionately. The overall performance of a server depends on a number of factors besides the number of CPUs, CPU speed, and memory size, so it is generally not accurate to map the capacity of one server to another, even within a single vendor’s product line.

    The primary tier of interest from an application scalability perspective is the application server tier. Scalability issues are very rarely found on the Web server tier. If further scale-down is required, it is reasonable to maintain a single Web server and continue to scale the application server tier down to a single server. The application server should still be of the same configuration as those used in the production environment, so that the tuning activity can be accurately reflected in the system test and production environments. A brief scale-down sketch follows this list.

  • Database server. If you want to scale down a database server, there is generally little alternative to using a system as close as possible to the production architecture, with CPU, memory, and I/O resources scaled down as appropriate.

  • Network. The network configuration is one area in which it is particularly difficult to replicate the same topology and performance characteristics that exist in the production environment. It is important that the system test includes any active network devices such as proxy servers and firewalls. The nature of these devices can impact not only the performance of the system, but also the functionality, because in some cases these devices manipulate the content that passes through them. The performance of the network can often be simulated using bandwidth and latency simulation tools, which are generally available from third-party vendors.
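
As a simple illustration of the proportional scale-down described for the Web and Siebel Server tiers, the following Python sketch computes a scaled topology from hypothetical production server counts, keeping at least one server per tier; the configuration of each individual server is left unchanged.

```python
import math

# Hypothetical production topology: number of servers per tier.
production = {"web_servers": 4, "siebel_app_servers": 8}

def scale_down(topology, factor):
    """Scale server counts proportionately, keeping at least one server per tier.
    Individual server configurations are kept identical to production."""
    return {tier: max(1, math.ceil(count * factor)) for tier, count in topology.items()}

# A test environment sized at roughly one quarter of production capacity.
print(scale_down(production, 0.25))  # {'web_servers': 1, 'siebel_app_servers': 2}
```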