Testing Siebel Business Applications > Automating Functional Tests > Best Practices for Functional Test Automation >

Best Practices for Functional Test Design

The following best practices are provided to assist in your design of functional tests:

Review Test Plans to Identify Candidates for Automation

Use predefined criteria, such as the following, to identify test plans that are candidates for automation:

  • Can you automate the entire test plan using preconfigured automation functionality?
  • Do you need to rearrange the test plan to better suit automation?
  • Can you reuse existing scripts, or modify scripts from sample libraries?
Define a Common Structure and Templates for Creating Tests

Before testing begins, define templates, standards, and naming conventions for test plan documents and automation scripts. This makes it easier in the long run to correlate test plans with test scripts, to follow the logic of test steps, and to maintain test instructions.

Define Test Script Naming Conventions

When creating a standard template for test plan documents and test scripts, define naming conventions that reflect the test procedures. Make sure that the names are meaningful and illustrate the logical operations being performed. The use of naming conventions makes it easier to identify scripts, read the script logic, and understand how scripts are organized during the maintenance phase.

For script modules, names should be logically expressive of the purpose of the module and test steps. Additionally, the name can be correlated to the title of the corresponding test plan document. For script variables, follow standard naming guidelines. For example, use the g_ prefix for global variables and the str_ prefix for strings.
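As an illustration, a module and its variables might be named as follows. The module and variable names in this sketch are hypothetical, but they follow the g_ (global) and str_ (string) prefixes described above:

```python
# Illustrative naming conventions; the specific names are hypothetical,
# but they apply the g_ (global) and str_ (string) prefixes described above.

g_language = "enu"          # global: current language code
g_login_status = "NotRun"   # global: execution status of the login module

def create_service_request(str_account_name, str_priority):
    """Module name mirrors the test plan title "Create Service Request"."""
    str_sr_number = ""      # string variable populated by the test steps
    # ... UI-driving test steps would populate str_sr_number here ...
    return str_sr_number
```

A reviewer reading only the module name and signature can tell what operation is performed and which values are strings or globals.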

Design Flexible Scripts with Defined Purpose

Define the purpose and summarize the validation conditions for each test script in advance. The intent of the test is a key input into script design that determines how much flexibility you need to write into the script.

For a test that is intended to validate a business process, create a script that maps closely to the process, and divide the script into modules that correspond to process steps. This allows you to test each pass through the business process with minimal branching logic, and to modify the order of the script if the business process changes.

Additionally, you might need to add fail-safe logic to the script that ignores minor discrepancies in the UI configuration and proceeds with executing the business process. Consider, for example, a script that sets a value in a particular control as part of a business process. Your test should validate that the control is present and that it functions as expected; but it need not validate the font size and color of the control's text. You might also need to add logic so that the script proceeds even when the control is not present in the UI.
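A sketch of this kind of fail-safe step follows. The `ui.find_control` call and the returned control object are hypothetical stand-ins for whatever your test tool actually provides:

```python
def set_control_value(ui, control_name, value, log):
    """Fail-safe step: validate that the control works, but tolerate
    minor UI discrepancies so the business process can continue.

    `ui.find_control` and the control object are hypothetical stand-ins
    for the API of your test tool.
    """
    control = ui.find_control(control_name)
    if control is None:
        # Control missing: log a warning and let the business process
        # continue, rather than failing the whole script on a minor UI change.
        log.append(f"WARN: control '{control_name}' not found; step skipped")
        return False
    control.set_value(value)  # validates that the control is present and functions
    # Deliberately no checks on font size or color: cosmetic attributes
    # are out of scope for a business-process test.
    return True
```

For a UI-attribute test, the opposite choice applies: the missing-control branch would raise a failure instead of logging and continuing.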

For a test that is intended to validate specific attributes of a UI component (rather than a business process), create a script that checks UI properties explicitly. Avoid adding fail-safe logic to this type of test, because the script should detect when the UI has changed and fail accordingly.

Design Modular Scripts

A module is an independent, reusable test component composed of interrelated entities. Script modules may be defined by the native functionality of a particular test tool, or by a script developer who writes reusable test routines. Examples of modules include a login, a query, or the creation of a new Contact or Service Request.

Each automation script should consist of small logical modules rather than one continuous, unstructured sequence of test steps. This approach makes scripts easier to maintain in the long run and allows rapid creation of new scripts from a library of proven modules. Other script developers can use well-designed modules with minimal or no adaptation. Modules can also increase the independence of a script from the test environment.

You can categorize modules broadly as RequiredToPass, Critical, and Test Case:

  • RequiredToPass modules include setup scripts and scripts that launch the application. If a RequiredToPass module fails, the entire script should be aborted.
  • Critical modules perform steps such as navigating to the client area and inserting test data. Failure of a Critical module may affect certain test cases.
  • Test Case modules perform the test cases themselves. A test case should be able to fail without affecting the execution of the rest of the script, and you should be able to rerecord or insert a test case without changing other modules. Test Case modules can also run for a specified number of iterations; for example, a module that adds a new account can be constructed to execute an indefinite number of times without any script changes.

Module interdependency should be clearly established. Execution status for RequiredToPass and Critical modules should be stored in global variables or environment variables so that other modules can use this information for purposes such as error handling. Some test tools will skip an entire module when a failure occurs.
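The interplay between the three categories and the shared status variables can be sketched as follows. The runner and the status dictionary are illustrative, not a feature of any particular test tool:

```python
# Sketch of a module runner that honors the three module categories.
# The runner itself and g_module_status are illustrative constructs.

REQUIRED_TO_PASS, CRITICAL, TEST_CASE = "RequiredToPass", "Critical", "TestCase"

g_module_status = {}  # global execution status, visible to later modules

def run_modules(modules):
    """modules: list of (name, category, zero-arg callable returning bool)."""
    for name, category, step in modules:
        passed = step()
        g_module_status[name] = "Pass" if passed else "Fail"
        if not passed:
            if category == REQUIRED_TO_PASS:
                # Abort the entire script: nothing else can run meaningfully.
                break
            if category == CRITICAL:
                # Later modules can consult g_module_status for error handling.
                continue
            # A TEST_CASE failure never affects the rest of the script.
    return g_module_status
```

Because the statuses live in a global dictionary, a later module can check, for example, whether the data-insertion module passed before running a test case that depends on that data.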

Design Reusable Scripts

Reusability is necessary for building a library of test cases that can be shared among developers and reused in subsequent test cycles. You can improve reusability through a variety of strategies, including script modularization, parameterization, and the external definition of global functions and constants.

Design Multilingual Tests for Internationalized Applications

Parameterize all hard-coded data contained in internationalized test scripts, and then create script logic to switch the data table at runtime based on the current language setting. You can obtain the language setting for the Siebel application from a suffix on the URL (for example, callcenter_enu).

Parameterization is especially important for picklist values, dates, currencies, and numbers because their formats differ across languages. As a general rule, parameterize all test data and references to configurable UI components.

NOTE:  The column names and structure of the external data table must be consistent across all languages for the script to access the data successfully.
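For illustration, the runtime language switch described above might be sketched as follows. The URL shapes and data table file names here are hypothetical:

```python
import re

def language_from_url(url):
    """Extract the language code from a Siebel application URL suffix,
    for example ".../callcenter_enu" -> "enu".
    The URL shapes handled here are illustrative."""
    match = re.search(r"_([a-z]{3})(?:/|$)", url)
    return match.group(1) if match else "enu"  # assume ENU as a default

# One external data table per language; because the column names and
# structure are identical in every table, the same script logic can
# read any of them. File names are hypothetical.
DATA_TABLES = {
    "enu": "contacts_enu.csv",
    "deu": "contacts_deu.csv",
}

def select_data_table(url):
    return DATA_TABLES[language_from_url(url)]
```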

Use regular expressions to perform basic pattern matching and greater-than and less-than comparisons. This is especially important when inserting data validation conditions into the script. Do not directly compare string representations of picklist values, dates, currencies, or numbers.
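A sketch of such value-based (rather than string-based) validation follows. The parsing rules here are simplified illustrations, not a complete locale-handling solution:

```python
import re

def currency_amount(text):
    """Parse the numeric value out of a locale-formatted currency string.

    Simplified illustration: strips everything except digits and
    separators, then treats the last separator as the decimal point
    ("1.234,56" or "1,234.56").
    """
    digits = re.sub(r"[^\d.,]", "", text)
    if "," in digits and "." in digits:
        if digits.rfind(",") > digits.rfind("."):
            digits = digits.replace(".", "").replace(",", ".")
        else:
            digits = digits.replace(",", "")
    else:
        digits = digits.replace(",", ".")
    return float(digits)

def amount_at_least(text, minimum):
    """Greater-than-or-equal check on the parsed value, not on the string."""
    return currency_amount(text) >= minimum

def looks_like_date(text):
    """Basic pattern match: accepts common day/month/year layouts."""
    return re.fullmatch(r"\d{1,4}[./-]\d{1,2}[./-]\d{1,4}", text) is not None
```

Comparing `currency_amount("$1,234.56")` with `currency_amount("1.234,56 €")` yields equal values, whereas a direct string comparison of the two representations would fail.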

Scripts that perform only navigation, without getting or setting data, do not need to be modified to run in multiple languages.

Make Test Scripts Independent of the Operating Environment

Develop and use strategies to create environment-independent test scripts. Design your test scripts so that they are capable of running on disparate hardware configurations, operating systems, language locales, database platforms, and browser versions.

Make Test Scripts Independent of Test Data

When authoring a test script, do not leave any hard-coded data values in the script. Instead, replace the hard-coded data with variables that reference an external data source. This procedure is generally called parameterization. Parameterizing your test scripts makes them independent of the data structure of the application being tested. Without parameterization, scripts can stop running due to database conflicts and dependencies.

Parameterization also allows you to switch data tables dynamically, if necessary. Store the data used by test scripts in external data tables. Then use the data table import function within the test script to import data. This feature can be useful for multilingual testing, allowing you to switch the data table based on the current language environment.
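A minimal sketch of a data-driven module using Python's csv module follows. The column names and sample data are hypothetical, and the table text is inlined only to keep the sketch self-contained; a real run would import it from an external file:

```python
import csv
import io

def load_data_table(csv_text):
    """Import an external data table. The column names must match the
    variable names the script uses."""
    return list(csv.DictReader(io.StringIO(csv_text)))

# Hypothetical data; in a real run this would come from a file such as
# "contacts_enu.csv" rather than being inlined.
SAMPLE_TABLE = """first_name,last_name,account
Pat,Lee,Acme
Sam,Kim,Globex
"""

def create_contacts(table):
    """Data-driven module: no values are hard-coded in the script body."""
    created = []
    for row in table:
        # A real script would drive the UI here; we just record the inputs.
        created.append(f"{row['first_name']} {row['last_name']} ({row['account']})")
    return created
```

To switch data sets (for example, per language), only the file passed to `load_data_table` changes; the module body stays the same.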

NOTE:  The column names and structure of the external data table must match the variable names used in the script.
