Common Test Definitions


There are several common terms used to describe specific aspects of software testing. These classifications break the problem of testing down into manageable pieces. The following terms are used throughout this book:

  • Business process testing. Validates the functionality of two or more components that are strung together to create a valid business process.
  • Data conversion testing. The testing of converted data used within the Siebel application. This is normally performed before system integration testing.
  • Functional testing. Testing that focuses on the functionality of an application, validating the output produced for selected input. Functional testing comprises unit, module, and business process testing.
  • Interoperability testing. Applications that support multiple platforms or devices need to be tested to verify that every combination of device and platform works properly.
  • Negative testing. Validates that the software fails appropriately: you input a value known to be incorrect and verify that the action fails as expected, for example by displaying the appropriate warning message. A unit that rejects invalid input correctly is operating as designed, and negative tests help you understand and identify failure modes.
  • Performance testing. This test is usually performed using an automation tool to simulate user load while measuring the system resources used. Client and server response times are both measured.
  • Positive testing. Verifies that the software functions correctly by inputting a value known to be correct to verify that the expected data or view is returned appropriately.
  • Regression testing. Code additions or changes can unintentionally introduce regressions, that is, errors in functionality that previously worked. Regression tests are executed against each new build or release to make sure that both existing and new features function correctly.
  • Reliability testing. Reliability tests are performed over an extended period of time to determine the durability of an application as well as to capture any defects that become visible over time.
  • Scalability testing. Validates that the application meets the key performance indicators with a predefined number of concurrent users.
  • Stress testing. This test identifies the maximum load a given hardware configuration can handle. Test scenarios usually simulate expected peak loads.
  • System integration testing. This is a complete system test in a controlled environment to validate the end-to-end functionality of the application and all other interface systems (for example, databases and third-party systems). Sometimes adding a new module, application, or interface may negatively affect the functionality of another module.
  • Test case. A test case contains the detailed steps and criteria (such as roles and data) for completing a test.
  • Test script. A test script is an automated test case.
  • Unit testing. Developers test their code against predefined design specifications. A unit test is an isolated test that is often the first feature test that developers perform in their own environment before checking changes into the configuration repository. Unit testing prevents introducing unstable components (or units) into the test environment.
  • Usability testing. User interaction with the graphical user interface (GUI) is tested to observe the effectiveness of the GUI when test users attempt to complete common tasks.
  • User acceptance test (UAT). Users test the complete, end-to-end business processes, verifying functional requirements (business requirements).
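As a minimal sketch of how several of these terms combine in practice, the following unit test exercises a hypothetical validation routine (the `validate_zip` function is illustrative only, not part of Siebel Business Applications) with one positive and one negative test case:

```python
import unittest

def validate_zip(value):
    """Hypothetical validation routine: accepts a five-digit US ZIP code string."""
    if not (isinstance(value, str) and len(value) == 5 and value.isdigit()):
        raise ValueError("ZIP code must be exactly five digits")
    return value

class ZipCodeUnitTest(unittest.TestCase):
    def test_positive(self):
        # Positive test: a value known to be correct returns the expected data.
        self.assertEqual(validate_zip("94065"), "94065")

    def test_negative(self):
        # Negative test: a value known to be incorrect fails as expected,
        # raising the appropriate error rather than passing silently.
        with self.assertRaises(ValueError):
            validate_zip("94o65")

if __name__ == "__main__":
    unittest.main(argv=["zip_unit_test"], exit=False)
```

A developer would typically run an isolated test like this in their own environment before checking changes into the configuration repository; the same cases can later be scripted as part of an automated regression suite.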
Testing Siebel Business Applications Copyright © 2015, Oracle and/or its affiliates. All rights reserved. Legal Notices.