Use comments in your scripts to describe the test steps. A test script that is thoroughly annotated is much easier to maintain. When sharing test scripts among a large team of testers and developers, it is often helpful to document conventions and guidelines for providing comments in test scripts.
Make sure to scope script variables and data tables appropriately. The scope, either modular or global, determines when a variable or data table is available. By scoping them appropriately, you can be sure that global variables are available across multiple scripts, while modular variables maintain their state within a specific script module.
The test tool typically allows you to store data values in tables and variables, but these tables and variables often have a scope that the tool defines for you. You might need to override that predefined scoping. Identify where variables are used within the script, and construct your test so that variables and data values are available when needed.
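The scoping distinction above can be sketched in plain Python. This is an illustrative pattern, not any specific test tool's API: cross-script values go through a shared global store, while modular variables live and die inside one script routine. All names here (`set_global`, `create_account_script`, and so on) are hypothetical.

```python
# Shared store standing in for the test tool's global scope: values placed
# here survive across script modules for the whole test run.
GLOBAL_SCOPE = {}

def set_global(name, value):
    GLOBAL_SCOPE[name] = value

def get_global(name, default=None):
    return GLOBAL_SCOPE.get(name, default)

def create_account_script():
    # Modular variable: used only inside this script module, discarded
    # when the module finishes.
    account_name = "Test Account 001"
    # Promote to global scope because a later script module needs it.
    set_global("ACCOUNT_NAME", account_name)

def verify_account_script():
    # A later script module retrieves the value the earlier module stored.
    return get_global("ACCOUNT_NAME")

create_account_script()
print(verify_account_script())  # -> Test Account 001
```

The point of the pattern is that nothing outside `create_account_script` can see `account_name` directly; any value with a longer lifetime must be promoted deliberately.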
For critical scripts that validate key application functionality, insert validation conditions and error event handling to decide whether to proceed with the script or abort it. If error events are not available in the test tool, you can write script logic to address error conditions. Set a global or environment variable on or off at the end of the script module, and then use a separate script module to check the variable before proceeding. Construct your test scripts so that for every significant defect in the product, only one test will fail. This is commonly called the one bug, one fail approach.
When an error condition is encountered, the script should report errors to the test tool's error logging system. When a script aborts, the error routine should clean up test data in the application before exiting.
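One way to express this flag-and-check pattern is sketched below, under stated assumptions: `log_error`, `cleanup_test_data`, and the `CRITICAL_MODULE_PASSED` variable name are placeholders, not part of any real test-tool API. An environment variable carries pass/fail status from one script module to the next, so a single upstream defect produces a single failure rather than a cascade.

```python
import os

def log_error(message):
    # Stand-in for reporting to the test tool's error logging system.
    print(f"ERROR: {message}")

def cleanup_test_data():
    # Stand-in for removing test records before the script exits.
    print("cleaning up test records")

def critical_module():
    try:
        # ... validation steps against the application go here ...
        os.environ["CRITICAL_MODULE_PASSED"] = "1"   # flag on: success
    except Exception as exc:
        log_error(str(exc))
        cleanup_test_data()                          # leave the app clean
        os.environ["CRITICAL_MODULE_PASSED"] = "0"   # flag off: abort
        raise

def dependent_module():
    # A separate module checks the flag before proceeding.
    if os.environ.get("CRITICAL_MODULE_PASSED") != "1":
        print("skipping: prerequisite module did not pass")
        return
    print("running dependent steps")

critical_module()
dependent_module()
```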
Construct your test scripts so that individual test modules use global setup modules to initialize the testing environment. This allows you to design tests that can restart the application being tested and continue with script execution (for example, in the event of a crash).
Some fields in Siebel applications require data values that have a defined format. Therefore, you must use data values that are formatted as the fields are configured in the Siebel application. For example, a date field that requires a value in the format 4/28/2003 02:00:00 PM causes an error if the data value supplied by the test script is 28 Apr 2003 2:00 PM. Test automation checkpoints should also use data that has been formatted correctly, or use regular expressions to do pattern matching.
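Before a script types a value into a formatted field, it can validate the value against the expected pattern. The regular expression below is an assumption matching the example format `4/28/2003 02:00:00 PM`; your application's configured format may differ.

```python
import re

# Assumed pattern for a date-time field configured as M/D/YYYY hh:mm:ss AM/PM.
DATE_PATTERN = re.compile(r"^\d{1,2}/\d{1,2}/\d{4} \d{2}:\d{2}:\d{2} (AM|PM)$")

def is_valid_date_value(value):
    """Return True if the test-data value matches the configured format."""
    return bool(DATE_PATTERN.match(value))

print(is_valid_date_value("4/28/2003 02:00:00 PM"))   # True
print(is_valid_date_value("28 Apr 2003 2:00 PM"))     # False
```

Running this check while loading test data catches format mismatches in the data table rather than as a runtime error mid-script.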
Some fields in Siebel applications are calculated automatically and are not directly modifiable by the user (for example, Today's Date). Construct your test scripts so that they remember calculated values in a local variable, or in an output value if the calculated value needs to be used later in the script. For example, you might need to use a calculated value to run a Siebel query.
When you set a checkpoint for a calculated value, you might not know the value ahead of time. Use a regular expression in your checkpoint, such as a pattern that matches any nonempty value (for example, .+), to verify that the field is not blank. When you are using a tabular checkpoint, you might want to omit the calculated field from the checkpoint.
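The two techniques above, capturing a calculated value into a variable and checkpointing it only for non-blankness, can be sketched as follows. `get_field_value` and the record layout are hypothetical stand-ins for the test tool's read call.

```python
import re

def get_field_value(record, field):
    # Stand-in for reading a field from the application UI.
    return record.get(field, "")

record = {"Name": "Acme", "Today's Date": "4/28/2003"}

# Remember the calculated value in a local variable for later use.
todays_date = get_field_value(record, "Today's Date")

# Checkpoint: assert the calculated field is non-blank without
# predicting its exact value.
assert re.match(r".+", todays_date), "calculated field is blank"

# Reuse the captured value later in the script, e.g. in a query.
query = f"[Created] = '{todays_date}'"
print(query)  # -> [Created] = '4/28/2003'
```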
Store generalized script functions and routines in a separate file. This allows you to maintain these pieces of script separately from specialized test code. In your test tool, use the import functionality (if available) to access the generalized scripts stored in the external file.
NOTE: When developing and debugging generalized functions, keep them in the specialized test script until they are ready to be extracted. This is because you might not be able to debug external files due to test tool limitations.
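One way to accommodate both layouts is an import with a local fallback, sketched below. The module name `test_library` and the routine `safe_click` are hypothetical; the fallback branch is the "keep it in the specialized script while debugging" stage described in the note.

```python
try:
    # Production layout: the generalized routine lives in an external file.
    from test_library import safe_click   # hypothetical shared module
except ImportError:
    # Development/debugging fallback: keep a local copy of the routine in
    # the specialized script until it is stable enough to extract, since
    # some test tools cannot step into external files.
    def safe_click(control):
        print(f"clicking {control}")
        return True

print(safe_click("OK button"))
```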
Before creating data that could potentially cause a conflict, run a query to verify that no record with the same information already exists. If a matching record is found, the script should delete it, rename it, or otherwise modify the record to mitigate the conflict condition.
Before accessing an existing record or record set, run a query to narrow the records that are available. Do not assume that the desired record is in the same place in a list applet because the test database can change over time.
You also should query for data ahead of time when you are in the process of developing test scripts. Check for data that should not be in the database, or that was left in the database by a previous test pass, and delete it before proceeding.
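The query-before-create pattern can be sketched like this. `execute_query` and `delete_record` are placeholders for the test tool's list-applet operations, and the in-memory list stands in for the test database.

```python
def execute_query(records, name):
    # Stand-in for querying a list applet by field value, never by row
    # position, since the test database can change over time.
    return [r for r in records if r["Name"] == name]

def delete_record(records, record):
    records.remove(record)

database = [{"Name": "Test Account 001"}, {"Name": "Other Account"}]

# Before creating "Test Account 001", query for conflicting leftovers
# from a previous test pass and remove them.
for leftover in execute_query(database, "Test Account 001"):
    delete_record(database, leftover)

database.append({"Name": "Test Account 001"})   # now safe to create

print(len(execute_query(database, "Test Account 001")))  # -> 1
```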
Create all test data necessary for running a script within the script itself. Avoid creating scripts that are dependent on preexisting data in a shared test database. Manage test data using setup scripts and script data tables, rather than database snapshots.
The general approach is to have the clean-up script perform a query for the records in a list applet and iterate through them until all of the associated test records are deleted or renamed. When the records need to be renamed, the initial query should be repeated after each record is renamed, until the row count is 0.
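The clean-up loop described above can be sketched as follows, with `query_list_applet` and `rename_record` as placeholders for tool-specific calls. The key detail is that the initial query is repeated after each rename until the row count is 0.

```python
def query_list_applet(records, prefix):
    # Stand-in for querying the list applet for remaining test records.
    return [r for r in records if r["Name"].startswith(prefix)]

def rename_record(record):
    # Rename so the record no longer matches the clean-up query.
    record["Name"] = "DONE-" + record["Name"]

records = [{"Name": "AUTO-1"}, {"Name": "AUTO-2"}, {"Name": "Keep Me"}]

# Repeat the query after each rename until no matching rows remain.
while True:
    matches = query_list_applet(records, "AUTO-")
    if not matches:          # row count is 0: clean-up complete
        break
    rename_record(matches[0])

print([r["Name"] for r in records])
# -> ['DONE-AUTO-1', 'DONE-AUTO-2', 'Keep Me']
```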
When recording a test script, perform all actions using the visual components as if you were a beginning user. This requires clicking on the UI components rather than using keyboard accelerators and other shortcuts.
Most shortcuts in Siebel applications are supported for test automation. However, the Tab key shortcut is not supported. Pressing the Tab key typically moves the focus from one control to another based on a preconfigured tab order; click the mouse to move focus instead of using the Tab key.
Testing Siebel Business Applications. Copyright © 2015, Oracle and/or its affiliates. All rights reserved.