
Overview of Testing Siebel Applications

This chapter provides an overview of the reasons for implementing testing in software development projects, and introduces a methodology for testing Oracle's Siebel Business Applications, with descriptions of the processes and types of testing used in this methodology. This chapter includes the following topics:

• About Testing Siebel Business Applications

• Introduction to Application Software Testing

• Overview of the Siebel Testing Process

About Testing Siebel Business Applications

This guide introduces and describes the processes and concepts of testing Siebel Business Applications. It is intended as a best-practices guide for Oracle customers who are currently deploying, or planning to deploy, Siebel Business Applications version IP2017 or later.

Although job titles and duties at your company may differ from those listed here, the audience for this guide consists primarily of employees in these categories:

Application Testers

Testers responsible for developing and executing tests. Functional testers focus on testing application functionality, while performance testers focus on system performance.

Business Analysts

Analysts responsible for defining business requirements and delivering relevant business functionality. Business analysts serve as the advocate for the business user community during application deployment.

Business Users

Actual users of the application. Business users are the customers of the application development team.

Database Administrators

Administrators who manage the database system, including data loading, system monitoring, backup and recovery, space allocation and sizing, and user account management.

Functional Test Engineers

Testers responsible for developing and executing manual and automated tests. Functional test engineers create test cases, automate test scripts, maintain the regression test library, and report issues and defects.

Performance Test Engineers

Testers responsible for developing and executing automated performance tests. Performance test engineers create automated test scripts, maintain regression test scripts, and report issues and defects.

Project Managers

Manager or management team responsible for planning, executing, and delivering application functionality. Project managers are responsible for project scope, schedule, and resource allocation.

Siebel Application Developers

Developers who plan, implement, and configure Siebel Business Applications, possibly adding new functionality.

Siebel System Administrators

Administrators responsible for the whole system, including installing, maintaining, and upgrading Siebel Business Applications.

Test Architect

Works with the Test Manager to design and build the test strategy and test plan.

Test Manager

Manages the day-to-day activities, testing resources, and test execution. Manages the reporting of test results and the defect management process. The Test Manager is the single point of contact (POC) for all testing activities.

Note: On simple projects, the Test Architect and Test Manager are normally combined into a single role.

How This Guide Is Organized

This book describes the processes for planning and executing testing activities for Siebel Business Applications. These processes are based on best practices and proven test methodologies. You can use this book as a guide to identify what tests to run, when to run tests, and who to involve in the quality assurance process.

The first two chapters of this book provide an introduction to testing and the test processes. You are encouraged to read the Overview of Testing Siebel Applications chapter, which describes the relationships between the seven high-level processes. Each of the chapters that follow describes a specific process in detail. In each of these chapters, a process diagram is presented to help you understand the important high-level steps. You are encouraged to modify the processes to suit your specific situation.

Depending on your role, experience, and current project phase, you will use the information in this book differently. Here are some suggestions about where you might want to focus your reading:

• Test manager. At the beginning of the project, review Chapters 2 through 8 to understand testing processes.

• Functional testing. If you are a functional tester, focus on Chapters 3 through 7 and 9. These chapters discuss the process of defining, developing, and executing functional test cases.

• Performance testing. If you are a performance tester, focus on Chapters 3, 4, 7, and 10. These chapters describe the planning, development, and execution of performance tests.

At certain points in this book, you will see information presented as a best practice. These tips are intended to highlight practices proven to improve the testing process.


Introduction to Application Software Testing

Testing is a key component of any application deployment project. The testing process determines the readiness of the application. Therefore, it must be designed to adequately inform deployment decisions. Without well-planned testing, project teams may be forced to make under-informed decisions and expose the business to undue risk. Conversely, well-planned and executed testing can deliver significant benefits to a project, including:

• Reduced deployment cost. Identifying defects early in the project is a critical factor in reducing the total cost of ownership. Research shows that the cost of resolving a defect increases dramatically in later deployment phases. A defect discovered in the requirements definition phase as a requirement gap can be a hundred times less expensive to address than the same defect discovered after the application has been deployed. Once in production, a serious defect can result in lost business and undermine the success of the project.

• Higher user acceptance. User perception of quality is extremely important to the success of a deployment. Functional testing, usability testing, system testing, and performance testing can provide insights into deficiencies from the users' perspective early enough that these deficiencies can be corrected before releasing the application to the larger user community.

• Improved deployment quality. Hardware and software components of the system must also meet a high level of quality. The ability of the system to perform reliably is critical in delivering consistent service to users and customers. A system outage caused by inadequate system resources can result in lost business. Performance, reliability, and stress testing can provide an early assessment of the system's ability to handle the production load, and allow IT organizations to plan accordingly.

Testing early and often is a key component of lowering the total cost of ownership. Software projects that attempt to save time and money by lowering their initial investment in testing find that the cost of not testing is much greater. Insufficient investment in testing may result in higher deployment costs, lower user adoption, and failure to achieve business returns.

Best Practice

Test early and often. The cost of resolving a defect detected early is much less than the cost of resolving the same defect in later project stages. Testing early and often is the key to identifying defects as early as possible and reducing the total cost of ownership.

Application Software Testing Methodology

The processes described in this book are based on common test definitions for application software. These definitions and methodologies have been proven in customer engagements, and demonstrate that testing must occur throughout the project lifecycle.

Common Test Definitions

Several common terms are used to describe specific aspects of software testing. These testing classifications break down the problem of testing into manageable pieces. Here are some of the common terms that are used throughout this book:

• Business process testing. Validates the functionality of two or more components that are strung together to create a valid business process.

• Data conversion testing. The testing of converted data used within the Siebel application. This is normally performed before system integration testing.

• Functional testing. Testing that focuses on the functionality of an application, validating output based on selected input. Functional testing consists of unit, module, and business process testing.

• Interoperability testing. Applications that support multiple platforms or devices are tested to verify that every combination of device and platform works properly.

• Negative testing. Validates that the software fails appropriately. You input a value known to be invalid and verify that the action fails as expected and that the appropriate warning or error messages appear. This allows you to understand and identify failure behavior.

• Performance testing. This test is usually performed using an automation tool to simulate user load while measuring the system resources used. Both client and server response times are measured.

• Positive testing. Verifies that the software functions correctly. You input a value known to be valid and verify that the expected data or view is returned.

• Regression testing. Code additions or changes may unintentionally introduce unexpected errors or regressions that did not exist previously. Regression tests are executed when a new build or release is available, to make sure that existing and new features function correctly.

• Reliability testing. Reliability tests are performed over an extended period of time to determine the durability of an application, and to capture any defects that become visible over time.

• Scalability testing. Validates that the application meets its key performance indicators with a predefined number of concurrent users.

• Stress testing. This test identifies the maximum load that a given hardware configuration can handle. Test scenarios usually simulate expected peak loads.

• System integration testing. A complete system test in a controlled environment to validate the end-to-end functionality of the application and all interfacing systems (for example, databases and third-party systems). Adding a new module, application, or interface may negatively affect the functionality of another module.

• Test case. A test case contains the detailed steps and criteria (such as roles and data) for completing a test.

• Test script. A test script is an automated test case. (A minimal scripted example follows this list.)

• Unit testing. Developers test their code against predefined design specifications. A unit test is an isolated test, often the first feature test that developers perform in their own environment before checking changes into the configuration repository. Unit testing prevents unstable components (or units) from being introduced into the test environment.

• Usability testing. User interaction with the graphical user interface (GUI) is tested to observe the effectiveness of the GUI when test users attempt to complete common tasks.

• User acceptance test (UAT). Users test the complete, end-to-end business processes, verifying functional requirements (business requirements).
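To make the distinction between a test case and a test script concrete, the following is a minimal sketch of an automated script that pairs a positive test with a negative test, using Python's standard unittest framework. The validate_opportunity function, its inputs, and its error type are illustrative assumptions for the example, not part of any Siebel API.

```python
import unittest

# Hypothetical unit under test: validates an opportunity record before save.
# In a real project, this would be replaced by the component being tested.
def validate_opportunity(record):
    if not record.get("name"):
        raise ValueError("Opportunity name is required")
    if record.get("revenue", 0) < 0:
        raise ValueError("Revenue cannot be negative")
    return True

class OpportunityValidationTest(unittest.TestCase):
    def test_valid_record_is_accepted(self):
        # Positive test: input known to be valid returns the expected result.
        record = {"name": "ACME Renewal", "revenue": 25000}
        self.assertTrue(validate_opportunity(record))

    def test_negative_revenue_is_rejected(self):
        # Negative test: input known to be invalid fails as expected.
        record = {"name": "ACME Renewal", "revenue": -1}
        with self.assertRaises(ValueError):
            validate_opportunity(record)

if __name__ == "__main__":
    unittest.main()
```

In practice, a functional test engineer would replace the in-line function with a call into the component under test, and add one test method per criterion in the written test case.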

Modular and Iterative Methodology

An IT project best practice that applies to both testing and development is to use a modular and incremental approach to developing and testing applications, so that potential defects are detected earlier rather than later. This approach provides component-based test design, test script construction (automation), execution, and analysis. It brings the defect management stage to the forefront, promoting communication between the test team and the development team. Beginning the testing process early in the development cycle helps reduce the cost of fixing defects.

This process begins with the test team working closely with the development team to develop a schedule for the delivery of functionality (a drop schedule). The test team uses this schedule to plan resources and tests. In the earlier stages, testing is commonly confined to unit and module testing. After one or more drops, there is enough functionality to begin stringing the modules together to test a business process.

After the development team completes the defined functionality, they compile and transfer the Siebel application into the test environment. Immediate functional testing by the test team provides early feedback to the development team about possible defects. The development team can then schedule and repair the defects, drop a new build of the Siebel application, and provide the opportunity for another functional test session after the test team updates the test scripts as necessary.

Best Practice

Iterative development introduces functionality to a release in incremental builds. This approach reduces risk by prompting early communication and allowing testing to occur more frequently and with fewer changes to all parts of the application.

Continuous Application Lifecycle

One deployment best practice is the continuous application lifecycle. In this approach, application features and enhancements are delivered in small packages on a continuous delivery schedule. New features are considered and scheduled according to a fixed release schedule (for example, once every quarter). This model of phased delivery provides an opportunity to evaluate the effectiveness of prebuilt application functionality, minimizes risk, and allows you to adapt the application to changing business requirements.

The continuous application lifecycle incorporates changing business requirements into the application on a regular timeline, so that business customers are not locked into functionality that does not quite meet their needs. Because there is always another delivery date on the horizon, features do not have to be rushed into production. This approach also allows an organization to separate multiple, possibly conflicting change activities. For example, the upgrade (repository merge) of an application can be separated from the addition of new configuration.

Best Practice

The continuous application lifecycle approach to deployment allows organizations to reduce the complexity and risk of any single release, and provides regular opportunities to enhance application functionality.

Testing and Deployment Readiness

The testing processes provide crucial inputs for determining deployment readiness. Determining whether an application is ready to deploy is an important decision that requires clear input from testing.

Part of the challenge in making a good decision is the frequent lack of well-planned testing and of testing data with which to gauge release readiness. To address this, it is important to plan and track testing activity for the purpose of informing the deployment decision. In general, you can measure testing coverage and defect trends, which provide a good indicator of quality. The following are some suggested analyses to compile; a minimal computation sketch for the first analysis appears after the figure:

• For each test plan, the number and percentage of test cases passed, in progress, failed, and blocked. This data illustrates the test objectives that have been met, versus those that are in progress or at risk.

• Trend analysis of open defects targeted at the current release, for each priority level.

• Trend analysis of defects discovered, defects fixed, and test cases executed. Application convergence (point A in the following figure) is demonstrated by a slowing of defect discovery and fix rates, while maintaining even levels of test case activity.

Trend Analysis of Testing and Defect Resolution
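The following is a minimal sketch that computes the number and percentage of test cases passed, in progress, failed, and blocked for a test plan. The record layout and status names are assumptions for the example, not the output format of any particular test management tool.

```python
from collections import Counter

# Hypothetical test plan results; the statuses follow the categories above.
results = [
    {"case": "TC-001", "status": "passed"},
    {"case": "TC-002", "status": "passed"},
    {"case": "TC-003", "status": "failed"},
    {"case": "TC-004", "status": "in_progress"},
    {"case": "TC-005", "status": "blocked"},
]

counts = Counter(r["status"] for r in results)
total = len(results)

# Report count and percentage for each status category.
for status in ("passed", "in_progress", "failed", "blocked"):
    pct = 100.0 * counts.get(status, 0) / total
    print(f"{status:12s} {counts.get(status, 0):3d} ({pct:5.1f}%)")
```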

Testing is a key input to the deployment readiness decision. However, it is not the only input to be considered. You must consider testing metrics together with business conditions and organizational readiness.

Overview of the Siebel Testing Process

Testing processes occur throughout the implementation lifecycle, and are closely linked to other configuration, deployment, and operations processes. The following information presents a high-level map of the testing processes.

High-Level Testing Process Map

Each of the seven testing processes described in this book is highlighted in bold in the diagram, and is outlined briefly in the following topics:

Plan Testing Strategy

The test planning process makes sure that the testing performed can inform the deployment decision process, minimize risk, and provide a structure for tracking progress. Without proper planning, many customers perform either too much or too little testing. The process is designed to identify key project objectives and develop plans based on those objectives.

It is important to develop a testing strategy early, and to use effective communications to coordinate among all stakeholders of the project.

Design and Develop Tests

In the test design process, the high-level test cases identified during the planning process are developed in detail (step by step). Developers and testers finalize the test cases based on approved technical designs. The written test cases can also serve as blueprints for developing automated test scripts. Test cases should be developed with strong participation from the business analyst, to capture the details of usage and less-common use cases.
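As a sketch of how a written test case can double as a blueprint for automation, the following captures a test case as structured data that a manual tester can read and an automation harness can iterate over. The field names and the sample business process steps are illustrative assumptions, not a prescribed Siebel test format.

```python
# A written test case captured as structured data, so it serves both as
# manual documentation and as a blueprint for an automated script.
test_case = {
    "id": "TC-QUOTE-001",
    "title": "Create a quote from an opportunity",
    "role": "Sales Representative",
    "steps": [
        {"action": "Navigate to the Opportunities view",
         "expected": "Opportunity list is displayed"},
        {"action": "Select opportunity 'ACME Renewal'",
         "expected": "Opportunity detail form is displayed"},
        {"action": "Click Create Quote",
         "expected": "A new quote is created with the opportunity's products"},
    ],
}

# A manual tester reads the steps; an automation harness iterates over them.
for i, step in enumerate(test_case["steps"], start=1):
    print(f"Step {i}: {step['action']} -> expect: {step['expected']}")
```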

Design evaluation is the first form of testing, and often the most effective. Unfortunately, this process is often neglected. In this process, business analysts and developers verify that the design meets the business unit requirements. Development work should not start in earnest until there is agreement that the designed solution meets requirements. The business analyst who defines the requirements should approve the design.

Preventing design defects or omissions at this stage is more cost effective than addressing them later in the project. If a design is flawed from the beginning, the cost to redesign after implementation can be high.

Execute Siebel Functional Tests

Functional testing is focused on validating the Siebel business application components of the system. Functional tests are performed progressively on components (units), modules, and business processes in order to verify that the Siebel application functions correctly. Test execution and defect resolution are the focus of this process. The development team is fully engaged in implementing features, and the defect-tracking process is used to manage quality.

Execute System Integration Tests

System integration testing verifies that the Siebel application validated earlier integrates with the other applications and infrastructure in your system. Integration with various back-end, middleware, and third-party systems is verified. Integration testing occurs on the system as a whole, to make sure that the Siebel application functions properly when connected to related systems and when running alongside system infrastructure components.

Execute Acceptance Tests

Acceptance testing is performed on the complete system and is focused on validating support for business processes, as well as verifying acceptability to the user community from both the lines of business and the IT organization. This is typically a very busy time in the project, when people, process, and technology are all preparing for the rollout.

Execute Performance Tests

Performance testing validates that the system can meet specified key performance indicators (KPIs) and service levels for performance, scalability, and reliability. In this process, tests are run on the complete system, simulating expected loads and verifying system performance.
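As a simplified illustration of simulating expected load, the following sketch spawns a fixed number of concurrent virtual users against a single URL and compares response times to a KPI. The URL, user count, and KPI threshold are assumptions for the example; real performance tests are normally driven by a dedicated load testing tool rather than a hand-rolled script.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

# Assumptions for the sketch: target URL, virtual user count, and KPI.
TARGET_URL = "http://testserver.example.com/siebel/app/start.swe"  # hypothetical
VIRTUAL_USERS = 20
KPI_SECONDS = 2.0  # example service-level target for response time

def one_request(_):
    # Time a single request from one simulated user.
    start = time.perf_counter()
    with urlopen(TARGET_URL, timeout=30) as resp:
        resp.read()
    return time.perf_counter() - start

# Run all virtual users concurrently and collect response times.
with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
    timings = list(pool.map(one_request, range(VIRTUAL_USERS)))

print(f"mean: {statistics.mean(timings):.2f}s  max: {max(timings):.2f}s")
print("KPI met" if max(timings) <= KPI_SECONDS else "KPI missed")
```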

Improve and Continue Testing

Testing is not complete when the application is rolled out. After the initial deployment, regular configuration changes are delivered in new releases. In addition, Oracle delivers regular maintenance and major software releases that may need to be applied. Both configuration changes and new software releases require regression testing to verify that the quality of the system is sustained.
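For example, regression testing can be wired to run automatically whenever a new build or release is delivered. The following minimal sketch assumes the regression suite lives in a tests/regression directory of unittest modules, and simply gates the pipeline on the result; the directory layout is an assumption for the example.

```python
import subprocess
import sys

# Run the regression suite (assumed to live in tests/regression) against
# the new build, and fail the pipeline if any regression test fails.
result = subprocess.run(
    [sys.executable, "-m", "unittest", "discover", "-s", "tests/regression"],
)
sys.exit(result.returncode)
```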

The testing process should be evaluated after deployment to identify opportunities for improvement. The testing strategy and its objectives should be reviewed to identify any inadequacies in planning. Test plans and test cases should be reviewed to determine their effectiveness. Test cases should be updated to include testing scenarios that were discovered during testing and were not previously identified, to reflect all change requests, and to support software releases.