
Execute Siebel Functional Tests

This chapter describes the process of executing Siebel functional tests. It includes the following topics:

  • Overview of Executing Siebel Functional Tests

  • Track Defects Subprocess

Overview of Executing Siebel Functional Tests

The process of executing Siebel functional tests is designed to deliver a functionally validated Siebel application into the system environment. For many customers, the Siebel application is one component of the overall system, which may also include back-end applications, integration infrastructure, and network infrastructure. Therefore, the objective of the Execute Siebel Functional Tests process is to verify that the Siebel application functions properly before it is inserted into the larger system environment. This process is illustrated in the following figure:

Execute Siebel Functional Tests Process

There are three phases in this process:

  • Unit test. The unit test validates the functionality of a single component (for example, an applet or a business service).

  • Module test. The module test validates the functionality of a set of related components that make up a module (for example, Contacts or Activities).

  • Process test. The process test validates that multiple modules can work together to enable a business process (for example, Opportunity Management or Quote to Order).

Application developers test their individual components for functional correctness and completeness before checking component code into the repository. The unit test cases should have been designed to test the low-level details of the component (for example, control behavior, layout, data handling).

Typical unit tests include structural tests of components, negative tests, boundary tests, and component-level scenarios. The unit test phase allows developers to fast-track fixes for obvious defects before check-in. A developer must demonstrate successful completion of all unit test cases before checking in a component. In some cases, unit testing identifies a defect that is not critical for the given component; these defects are logged in the defect tracking system for prioritization.
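
To make these categories concrete, the following is a minimal sketch of boundary and negative unit tests written in Python with the standard unittest module. The validate_discount routine and its 0-40 percent range are hypothetical stand-ins for a custom component's validation logic, not part of any Siebel API.

import unittest

# Hypothetical stand-in for a custom component's validation logic.
def validate_discount(percent):
    """Return True when the discount percentage falls inside the allowed range."""
    if not isinstance(percent, (int, float)):
        raise TypeError("discount must be numeric")
    return 0 <= percent <= 40

class DiscountUnitTest(unittest.TestCase):
    def test_boundary_values(self):
        # Boundary tests exercise the edges of the allowed range.
        self.assertTrue(validate_discount(0))
        self.assertTrue(validate_discount(40))
        self.assertFalse(validate_discount(40.01))

    def test_negative_input(self):
        # Negative tests confirm that invalid data is rejected rather than accepted.
        self.assertFalse(validate_discount(-5))
        with self.assertRaises(TypeError):
            validate_discount("ten percent")

if __name__ == "__main__":
    unittest.main()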

Once unit testing has been completed on a component, that component is moved into a controlled test environment, where it can be tested alongside other components at the module and process level.

    Reviews

    There are two types of reviews performed in conjunction with functional testing: configuration review and scripting code review.

    • Configuration review. This is a review of the Siebel application configuration using Siebel Tools. Configuration best practices should be followed. Some common recommendations include using optimized, built-in functionalities rather than developing custom scripts, and using primary joins to improve MVG performance.

    • Scripting code review. Custom scripting is the source of many potential defects, which typically result from poor design or inefficient code and can lead to severe performance problems. A code review can identify design flaws and recommend code optimizations that improve performance, as in the sketch after this list.
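
As an illustration of the kind of finding a scripting code review might produce, the sketch below contrasts a per-record lookup inside a loop with a single batched lookup. The example is generic Python, and the fetch_price and fetch_prices callables are hypothetical; the point is only the shape of the optimization a reviewer might recommend.

# Hypothetical pattern a code review might flag: one lookup per record inside a loop.
def total_price_slow(order_ids, fetch_price):
    total = 0
    for order_id in order_ids:
        total += fetch_price(order_id)  # one round trip per order
    return total

# Recommended rework: fetch everything once, then aggregate in memory.
def total_price_fast(order_ids, fetch_prices):
    prices = fetch_prices(order_ids)    # single batched lookup
    return sum(prices[order_id] for order_id in order_ids)

if __name__ == "__main__":
    catalog = {1: 10.0, 2: 15.5, 3: 7.25}
    ids = [1, 2, 3]
    assert total_price_slow(ids, catalog.get) == total_price_fast(
        ids, lambda wanted: {k: catalog[k] for k in wanted}
    )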

    Checking in a component allows the testing team to exercise that component alongside related components in an integration test environment. In this environment, the testing team executes the integration test cases based on the available list of components. Integration tests are typically modeled as actual usage scenarios, which allow testers to validate that a user can perform common tasks. In contrast to unit test cases, these tests are not concerned with the specific details of any one component, but rather with the way that logic is handled across multiple components.
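
In contrast with the unit test sketch shown earlier, the following sketch shows how a scenario-level test might chain components together and assert only on the end-to-end outcome. It is again plain Python with unittest, and the create_contact and log_activity helpers are hypothetical placeholders for whatever automation layer actually drives the application.

import unittest

# Hypothetical helpers standing in for an automation layer that drives the application.
def create_contact(first_name, last_name):
    return {"id": "1-ABC", "first": first_name, "last": last_name, "activities": []}

def log_activity(contact, description):
    activity = {"description": description, "status": "Planned"}
    contact["activities"].append(activity)
    return activity

class ContactActivityScenarioTest(unittest.TestCase):
    def test_user_can_log_activity_for_new_contact(self):
        # Scenario: a user creates a contact and then records a follow-up activity.
        contact = create_contact("Pat", "Lee")
        activity = log_activity(contact, "Follow-up call")
        # Assert on the cross-component outcome, not on component internals.
        self.assertEqual(len(contact["activities"]), 1)
        self.assertEqual(activity["status"], "Planned")

if __name__ == "__main__":
    unittest.main()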

      Track Defects Subprocess

      The Track Defects subprocess is designed to collect the data required to measure and monitor the quality of the application, and also to control project risk and scope. The process, illustrated in the following figure, is designed so that those with the best understanding of customer priorities control defect prioritization. Business analysts monitor the list of newly discovered issues using a defect tracking system such as the Siebel Quality module, and they prioritize and target defects at a regular frequency. This is typically done daily in the early stages of a project, and perhaps several times a day in later stages.

      Track Defects Subprocess

      The level of scrutiny is escalated for defects discovered after the project freeze date. At the project level, the business impact of a defect must be weighed very carefully against the risk associated with introducing a late change. Projects that do not have appropriate change management in place commonly have difficulty reaching a level of system stability adequate for deployment. Each change introduced carries some amount of regression risk. Late in a project, it is the responsibility of the entire project team, including the business unit, to carefully manage the amount of change introduced.

      Once a defect has been approved to be fixed, it is assigned to development, and a fix is designed, implemented, unit tested, and checked in. The testing team must then verify the fix by bringing the affected components back to the same testing phase in which the defect was discovered. This requires regression testing (reexecution of test cases from earlier phases). The defect is closed and verified when the component or module successfully passes the test cases in which the defect was discovered. Because validating a fix often requires the reexecution of past test cases, this is one activity where automated testing tools can provide significant savings. One best practice is to define regression suites of test cases that allow the team to reexecute a relevant, comprehensive set of test cases when a fix is checked in.
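
One simple way to organize such regression suites, sketched here as plain Python on top of unittest, is a registry that maps each module to its test case classes so that the relevant suite can be reexecuted whenever a fix for that module is checked in. The module names and the registry itself are illustrative assumptions, not features of any Siebel tool.

import unittest

# Illustrative registry mapping each module to its regression test case classes.
# In practice these entries would reference the project's real TestCase classes.
REGRESSION_SUITES = {
    "Contacts": [],   # e.g., ContactUnitTest, ContactActivityScenarioTest
    "Quotes": [],     # e.g., QuoteUnitTest, QuoteToOrderScenarioTest
}

def build_regression_suite(module):
    """Collect every registered test case for the module touched by a fix."""
    loader = unittest.TestLoader()
    suite = unittest.TestSuite()
    for case_class in REGRESSION_SUITES.get(module, []):
        suite.addTests(loader.loadTestsFromTestCase(case_class))
    return suite

# When a Contacts fix is checked in, rerun only the Contacts regression suite.
if __name__ == "__main__":
    unittest.TextTestRunner(verbosity=2).run(build_regression_suite("Contacts"))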

      Tracking defects also collects the data required to measure and monitor system quality. Important data inputs to the deployment readiness decision include the number of open defects and defect discovery rate. Also, it is important for the business customer to understand and approve the known open defects prior to system deployment.
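
As a simple illustration of those two inputs, the snippet below computes the open-defect count and a per-week discovery rate from a list of defect records. The record fields and sample values are assumptions made for the sketch; in practice this data would be exported from the defect tracking system.

from collections import Counter
from datetime import date

# Assumed defect records; a real project would export these from its tracking system.
defects = [
    {"id": 101, "status": "Open",   "found": date(2024, 5, 6)},
    {"id": 102, "status": "Closed", "found": date(2024, 5, 7)},
    {"id": 103, "status": "Open",   "found": date(2024, 5, 14)},
]

# Number of defects still open, one input to the deployment readiness decision.
open_count = sum(1 for d in defects if d["status"] == "Open")

# Discovery rate: how many new defects were found in each ISO week.
discovery_rate = Counter(d["found"].isocalendar()[1] for d in defects)

print(f"Open defects: {open_count}")
print(f"Defects discovered per week: {dict(discovery_rate)}")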

      Best Practice

      The use of a defect tracking system allows a project team to understand the current quality of the application, prioritize defect fixes based on business impact, schedule resources, and carefully control risk associated with configuration changes late in the project.