26.6 Creating and Managing Test Cases and Scripts

Test management refers to the creation of repeatable tests that can be executed at any time by an individual Administrator or system. Quick spot checks are very useful and effective in troubleshooting current issues. However, a more predictable and repeatable approach to validating server and policy configuration is often necessary.

This approach can include testing OAM Server configuration for regressions after a product revision, or during a policy development and QA cycle.

To be useful, such tests must allow multiple use cases to be executed as a group. Once the test scripts have been designed and validated as correct, replaying them against the OAM Server helps identify regressions in a policy configuration.

This section provides the information you need to perform test management in the following topics:

26.6.1 About Test Cases and Test Scripts

A test case is created from the request sent to, and response data received from, the OAM Server using the Access Tester. Among other data elements, a test case includes request latency and other identifying information that enables analysis and comparison of old and new test cases. Test scripts can be configured, run, and generated from the Access Tester Console.

Once captured, the test case can be replayed without new input, and then new results can be compared with old results. If the old results are marked as "known good" then deviations from those results constitute failed test cases.

The test case workflow is illustrated by Figure 26-7.

Figure 26-7 Test Case Workflow


Task overview: Creating and managing a test case

From the Access Tester Console, you can connect to the OAM Server and manually conduct individual tests. You can save the request to the capture queue after a request is sent and the response is received from the OAM Server. You can continue capturing additional test cases before generating a test script and clearing the capture queue. If you exit the Access Tester before saving the capture queue, you are asked if the test cases should be saved to a script before exiting. Oracle recommends that you do not clear the queue until all your test cases have been captured.

Once you have the test script, you can run it from either the Access Tester Console or from the command line.

26.6.2 Capturing Test Cases

You can save each test case to a capture queue after sending the request from the Access Tester to the OAM Server and receiving the response. You can capture as many individual test cases as you need before generating a test script that will automate running the group of test cases.

For instance, the following outlines three test cases that must be captured individually:

  • A validation request and response

  • An authentication request and response

  • An authorization request and response

Table 26-10 describes the location of the capture options.

Table 26-10 Access Tester Capture Request Options

Location Description

Test menu

Capture last "..." request

Select this command from the Test menu to add the last request issued and results received to the capture queue (for inclusion in a test script later).

Blue up arrow

Select this command button from the tool bar to add the last request issued and results received to the capture queue (for inclusion in a test script later).

If you exit the Access Tester before saving the capture queue, you are asked if the test cases should be saved to a script before exiting. Do not clear the Access Tester capture queue until all your test cases have been captured.

To capture one or more test cases

  1. Initiate a request from the Access Tester Console, as described in "Testing Connectivity and Policies from the Access Tester Console".
  2. After receiving the response, click the Capture last "..." request command button in the tool bar (or choose it from the Test menu).
  3. Confirm the capture in the Status Messages panel and note the Capture Queue test case count at the bottom of the Console.
  4. Repeat steps 1, 2, and 3 to capture in the queue each test case that you need for your test script.
  5. Proceed to "Generating an Input Test Script".

26.6.3 Generating an Input Test Script

A test script is a collection of individual test cases that were captured using the Access Tester Console. When individual test cases are grouped together, it becomes possible to automate test coverage to validate policy configuration for a specific application or site.

You can create a test script to be used as input to the Access Tester and drive automated processing of multiple test cases. The Generate Script option enables you to create an XML test script file and clear the capture queue. If you exit the Access Tester before saving the capture queue, you are asked if the test cases should be saved to a script before exiting. The following sections provide more details:

Note:

Do not clear the capture queue until you have captured all the test cases you want to include in the script.

26.6.3.1 About Input Test Script

You can create a test script to be used as input to the Access Tester and drive automated processing of multiple test cases.

Such a script must follow these rules:

  • Allow replay by a person or system

  • Allow replay against different policy servers without changing the script, to enable sharing of test scripts to drive different Policy Servers

  • Allow comparison of test execution results against "Known Good" results

Table 26-11 describes the locations of the Generate Script command.

Table 26-11 Generate Script Command

Location of the Command Description

Test menu

Generate Script

Select Generate Script from the Test menu to initiate creation of the script containing your captured test cases.

Paper Script Scroll

Select the Generate Script command button from the tool bar to initiate creation of the script containing your captured test cases. After you specify or select a name for your script, you are asked if the capture queue should be cleared. Do not clear the capture queue until all your test cases are saved to a script.

26.6.3.2 Generating an Input Test Script

You can record the test cases that you have captured into a test script.

Prerequisites

Capturing Test Cases

To record a test script containing captured test cases

  1. Perform and capture each request that you want in the script, as described in "Capturing Test Cases".
  2. Click the Generate Script command button in the tool bar (or choose it from the Test menu) to include all captured test cases.
  3. In the new dialog box, select or enter the name of your new XML script file and then click Save.
  4. Click Yes to overwrite an existing file (or No to dismiss the window and give the file a new name).
  5. In the Save Warning dialog box, click No to retain the capture queue and continue adding test cases to your script (or click Yes to clear the queue of all test cases).
  6. Confirm the location of the test script before you exit the Access Tester.
  7. Personalize the test script to include details such as who, when, and why the script was developed, as described next.

26.6.4 Personalizing an Input Test Script

This section describes how to personalize and customize a test script.

26.6.4.1 Test Script Control Parameters

The control block of a test script is used to tag the script and specify information to be used during the execution of a test. You might want to include details about who created the script and when and why the script was created. You might also want to customize the script using one or more control parameters.

The Access Tester provides command line "control" parameters (test name, test number, and so on) that change processing of the script without changing the script itself. This enables you to configure test runs without having to change "known good" input test scripts. Table 26-12 describes the control elements and how to customize them.

Table 26-12 Test Script Control Parameters

Control Parameter Description

ignorecontent=true

Ignores differences in the Content section of the use case when comparing the original OAM Server response to the current response. The default is to compare the Content sections. This parameter can be overridden by a command line property when running in command line mode.

Default: false (Compare Content sections).

Values: true or false

In command line mode, use ignorecontent=true to override the value specified in the Control section of the input script.

testname="oamtest"

Specifies a prefix to add to file names in the "results bundle" as described in the previous section.

In command line mode, use Testname=name to override the value specified in the Control section.

configfile="config.xml"

Specifies the absolute path to a configuration XML file that was previously created by the Access Tester.

In command line mode, this file is used by the Access Tester to locate connection details to establish a server connection.

numthreads="1"

Indicates the number of threads (virtual clients) that will be started by the Access Tester to run multiple copies of the test script. Each thread opens its own pool of connections to the policy server. This feature is designed for stress testing the Policy Server, and is available only in command line mode.

Default: 1

Note that when running a test script in GUI mode, the number of threads is ignored and only one thread is started to perform a single iteration of the test script.

numiterations="1"

Indicates the number of iterations that will be performed by the Access Tester. This feature is designed for stress testing and longevity testing the Policy Server and is available only in command line mode.

Default: 1
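The parameters in Table 26-12 are carried in the Control block of a generated test script. The following sketch shows how they might appear together; the attribute names follow the parameter spellings in the table, but the exact schema of a script generated by your Access Tester version may differ:

```xml
<!-- Illustrative Control block; attribute names mirror Table 26-12 -->
<Control ignorecontent="true"
         testname="oamtest"
         configfile="/path/to/config.xml"
         numthreads="1"
         numiterations="1"/>
```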

26.6.4.2 Customizing a Test Script

You can personalize a test script generated by the Access Tester.

Prerequisites

Generating an Input Test Script

To customize a test script

  1. Locate and open the test script that was generated by the Access Tester.
  2. Add any details that you need to customize or personalize the script.
  3. Save the file and proceed to "Executing a Test Script".

26.6.5 Executing a Test Script

Once a test script has been created against a "Known Good" policy configuration and marked as "Known Good", it is important to drive the Access Tester using the script rather than specifying each test manually using the Console.

This section provides the following topics:

26.6.5.1 About Test Script Execution

You can interactively execute test scripts from within the Access Tester Console, or use automated test runs performed by command scripts.

Automated test runs can be scheduled by the operating system or a harness such as Apache JMeter, and executed without manual intervention. Other than lack of human input in command line mode, the two execution modes are identical.

Note:

A script such as .bat (Windows) or .sh (Unix) executes a test script in command line mode. Once a test script is created, it can be executed using either the Run Script menu command or the Access Tester command line.

Table 26-13 describes the commands to execute a test script.

Table 26-13 Run Test Script Commands

Location Description

Test menu

Run Script

Select the Run Script command from the Test menu to begin running a saved test script against the current policy server. The Status message panel is populated with the execution status as the script progresses.

Paper Script Scroll with green arrow

Select the Run Script command button from the tool bar to begin running a saved test script against the current policy server. The Status message panel is populated with the execution status as the script progresses.

Command line mode

A script such as .bat (Windows) or .sh (Unix) executes a test script in command line mode. Once a test script is created, it can be executed using either the Run Script menu command or the Access Tester command line.
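A minimal wrapper script for command line mode might look like the following sketch. The `-Dscript.scriptfile` and `-Dcontrol.ignorecontent` properties come from the command line example later in this section; the script path is an assumption, and the command is echoed rather than executed so you can confirm it first:

```shell
#!/bin/sh
# Illustrative wrapper for running the Access Tester in command line mode.
# SCRIPT is an assumed location for a script saved with Generate Script.
SCRIPT="/tests/script.xml"
OPTS="-Dscript.scriptfile=$SCRIPT -Dcontrol.ignorecontent=true"
# Print the full command; replace the echo with the command itself to run it.
echo java $OPTS -jar oamtest.jar
```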

The following overview describes how the Access Tester operates when running a test. Other than lack of human input in command line mode, the two execution modes are identical.

Process overview: Access Tester behavior when running a test script

  1. The Access Tester loads the input XML file.

    In command line mode, the Access Tester opens the configuration XML file defined within the input test script's Control element.

  2. The Access Tester connects to the primary and secondary OAM Proxy using information in the Server Connection panel of the Console.

    In command line mode, the Access Tester uses information in the Connection element of the configuration XML file.

  3. In command line mode, the Access Tester checks the Control elements in the input script XML file to ensure none have been overwritten on the command line (command line values take precedence).

  4. For each original test case defined in the script, the Access Tester:

    1. Creates a new target test case.

    2. Sends the original request to the OAM Server and collects the response.

    3. Makes the following comparisons:

      Compares the new response to the original response.

      Compares response codes and marks as "mismatched" any new target test case where response codes differ from the original test case. For instance, if the original Validate returned "Yes", and now returns "No", a mismatch is marked.

      When response codes are identical and the "ignorecontent" control parameter is "false", the Access Tester compares Content (the name of the Authentication scheme or post-authorization actions that are logged after each request). If Content sections differ, the new target test case is marked "mismatched".

    4. Collects the new elapsed time and stores it in the target test case.

    5. Builds a new target test case containing the full state of the last server request and the same unique ID (UUID) as the original test case.

    6. Updates the internal statistics table with statistics for the target test case (request type, elapsed time, mismatched, and so on).

  5. After completing all the input test cases, the Access Tester:

    1. Displays summary results.

    2. Obtains and combines the testname and testnumber, and generates a name for the "results bundle" (three files whose names start with <testname>_<testnumber>).

      Note:

      Shell scripts can automate generating the bundle by providing testname and testnumber command line parameters.

      Obtain testname from the command line parameter. If not specified in the command line, use the testname element of the input script's Control block.

      Obtain testnumber from the command line parameter. If not specified, testnumber defaults to a 7-character numeric string based on the current local time: 2 character minutes, 2 character seconds, 3 character hundredths.

    3. Generates the "results bundle": three files whose names start with <testname>_<testnumber>:

      The target XML script contains the new test cases: <testname>_<testnumber>_results.xml.

      The statistics XML file contains a summary and detailed statistics of the entire test run, plus those test cases marked as "mismatched": <testname>_<testnumber>_stats.xml.

      The execution log file contains information from the Status Message panel: <testname>_<testnumber>_log.log.

    4. When running in multi-threaded mode, only the statistics XML file and execution log file will be generated.

    5. In command line mode, the Access Tester exits with the exit code as described in "About the Access Tester Command Line Mode".
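As a worked example of the default naming described in step 5, the following sketch derives a 7-character testnumber from the current local time (2-digit minutes, 2-digit seconds, 3-digit sub-second fraction) and prints the three bundle file names; the exact composition used by the Access Tester is an assumption based on the description above:

```shell
#!/bin/sh
# Sketch only: mimic the documented default testnumber format.
TESTNAME="oamtest"
# %M%S gives minutes and seconds; the first 3 digits of %N give the fraction.
TESTNUMBER="$(date +%M%S)$(date +%N | cut -c1-3)"
# The results bundle: three files sharing the <testname>_<testnumber> prefix.
echo "${TESTNAME}_${TESTNUMBER}_results.xml"
echo "${TESTNAME}_${TESTNUMBER}_stats.xml"
echo "${TESTNAME}_${TESTNUMBER}_log.log"
```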

26.6.5.2 Running a Test Script

You can submit your test script for processing through Access Tester Console or opt for command line processing.

Prerequisites

Generating an Input Test Script

To run a test script

  1. Confirm the location of the saved test script, as described in "Generating an Input Test Script".
  2. Submit the test script for processing using one of the following methods:
    • From the Access Tester Console, click the Run Script command button in the tool bar (or select Run Script from the Test menu), then follow the prompts and observe messages in the Status Message panel as the script executes.

    • From the command line, specify your test script with the desired system properties, as described in "Starting the Access Tester with System Properties For Use in Command Line Mode".

      java -Dscript.scriptfile="\tests\script.xml" -Dcontrol.ignorecontent="true" 
      -jar oamtest.jar
      
  3. Review the log and output files and perform additional analysis after the Access Tester compares newly generated results with results captured in the input script, as described in "Evaluating Scripts, Log File, and Statistics".