Testing Siebel eBusiness Applications > Overview of Testing Siebel Applications > Application Software Testing Methodology >
Common Test Definitions
There are several common terms used to describe specific aspects of software testing. These testing classifications are used to break down the problem of testing into manageable pieces. Here are some of the common terms that are used throughout this book:
- Unit Testing. Developers test their code against predefined design specifications. A unit test is an isolated test that is often the first feature test that developers perform in their own environment before checking changes into the configuration repository. Unit testing prevents introducing unstable components (or units) into the larger system.
- Integration Testing. Validates that all programs and interfaces external to the Siebel application function correctly. Sometimes adding a new module, application, or interface may negatively affect the functionality of another module.
- Regression Testing. Code additions or changes may unintentionally introduce unexpected errors, or regressions, that did not exist previously. Regression tests are executed when a new build or release is available to make sure that both existing and new features function correctly.
- Interoperability Testing. Applications that support multiple platforms or devices need to be tested to verify that every combination of device and platform works properly.
- Usability Testing. User interaction with the graphical user interface (GUI) is tested to observe the effectiveness of the GUI when test users attempt to complete common tasks.
- System Testing. System testing is a complete system test in a controlled environment. Both users and the IT organization are involved to assess the system's readiness for general release.
- User Acceptance Test (UAT). Users test the complete, end-to-end business processes. Functional and performance requirements are verified to make sure there are no user task failures and no prohibitive system response times.
- Performance Testing. This test is usually performed using an automation tool to simulate user load while measuring system resources used. Client and server response times are both measured.
- Stress Testing. This test identifies the maximum load a given hardware configuration can handle. Test scenarios usually simulate expected peak loads.
- Reliability Testing. Reliability tests are performed over an extended period of time to determine the durability of an application as well as to capture any defects that become visible over time.
- Positive Testing. Verifies that the software functions correctly: a value known to be valid is entered to confirm that the expected data or view is returned.
- Negative Testing. Validates that the software fails appropriately: a value known to be invalid is entered to confirm that the action fails as expected. This allows you to identify and understand failures, and to verify that the unit displays the appropriate warning messages when given bad input.
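The unit, positive, and negative testing definitions above can be illustrated with a small sketch. The `validate_zip_code` routine below is hypothetical (not part of any Siebel API); it simply stands in for a developer's unit of code under test.

```python
import unittest

def validate_zip_code(value):
    """Hypothetical validation routine: accepts a 5-digit US ZIP code string."""
    if not (isinstance(value, str) and len(value) == 5 and value.isdigit()):
        raise ValueError("invalid ZIP code: %r" % (value,))
    return value

class ZipCodeUnitTests(unittest.TestCase):
    def test_positive(self):
        # Positive test: a known-good input returns the expected data.
        self.assertEqual(validate_zip_code("94065"), "94065")

    def test_negative(self):
        # Negative test: a known-bad input fails with the expected error.
        with self.assertRaises(ValueError):
            validate_zip_code("ABCDE")

if __name__ == "__main__":
    unittest.main()
```

A developer would run such a test in an isolated environment before checking the change into the configuration repository, which is what keeps unstable units out of the larger system.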
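Performance and stress testing, as defined above, are normally done with a dedicated load-generation tool, but the core idea can be sketched in a few lines: drive many concurrent simulated users and measure response times. The `simulated_transaction` function here is a placeholder assumption, not a real Siebel client call.

```python
import time
import statistics
from concurrent.futures import ThreadPoolExecutor

def simulated_transaction():
    """Stand-in for one client request; returns its measured response time."""
    start = time.perf_counter()
    time.sleep(0.01)  # placeholder for the actual server round trip
    return time.perf_counter() - start

def run_load(concurrent_users, transactions_per_user):
    """Drive the transaction under concurrent load and collect response times."""
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        futures = [pool.submit(simulated_transaction)
                   for _ in range(concurrent_users * transactions_per_user)]
        return [f.result() for f in futures]

if __name__ == "__main__":
    latencies = run_load(concurrent_users=10, transactions_per_user=5)
    print("median response time: %.3f s" % statistics.median(latencies))
```

For a stress test, the same harness would ramp `concurrent_users` upward until response times become prohibitive, identifying the maximum load the configuration can handle; a reliability test would run it for an extended period instead.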
Testing Siebel eBusiness Applications, published 21 July 2003