Test management refers to the creation of repeatable tests that can be executed at any time by an individual Administrator or system. Quick spot checks are very useful and effective in troubleshooting current issues. However, a more predictable and repeatable approach to validating server and policy configuration is often necessary.
This approach can include testing OAM Server configuration for regressions after a product revision, or during a policy development and QA cycle.
To be useful, such tests must allow multiple use cases to be executed as a group. Once the test scripts have been designed and validated as correct, replaying them against the OAM Server helps identify regressions in a policy configuration.
This section provides the information you need to perform test management in the following topics:
A test case is created from the request sent to, and response data received from, the OAM Server using the Access Tester. Among other data elements, a test case includes request latency and other identifying information that enables analysis and comparison of old and new test cases. Test scripts can be configured, run, and generated from the Access Tester Console.
Once captured, the test case can be replayed without new input, and then new results can be compared with old results. If the old results are marked as "known good" then deviations from those results constitute failed test cases.
The test case workflow is illustrated by Figure 26-7.
Task overview: Creating and managing a test case
From the Access Tester Console, you can connect to the OAM Server and manually conduct individual tests. You can save the request to the capture queue after a request is sent and the response is received from the OAM Server. You can continue capturing additional test cases before generating a test script and clearing the capture queue. If you exit the Access Tester before saving the capture queue, you are asked if the test cases should be saved to a script before exiting. Oracle recommends that you do not clear the queue until all your test cases have been captured.
Once you have the test script, you can run it from either the Access Tester Console or from the command line.
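For example, a command-line run is launched through a Java invocation. The sketch below only assembles and prints the command rather than executing it; the jar name (oamtest.jar) and the script.scriptfile system property are assumptions based on common Access Tester usage and should be verified against your installation.

```shell
#!/bin/sh
# Sketch: build the command line for an automated Access Tester run.
# Assumptions (verify against your installation): the tool ships as
# oamtest.jar and reads the input test script from the
# script.scriptfile system property.
SCRIPT_FILE="known_good_script.xml"
CMD="java -Dscript.scriptfile=${SCRIPT_FILE} -jar oamtest.jar"

# Print the command; run it directly instead of echoing to execute.
echo "$CMD"
```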
You can save each test case to a capture queue after sending the request from the Access Tester to the OAM Server and receiving the response. You can capture as many individual test cases as you need before generating a test script that will automate running the group of test cases.
For instance, the following outlines three test cases that must be captured individually:
A validation request and response
An authentication request and response
An authorization request and response
Table 26-10 describes the location of the capture options.
Table 26-10 Access Tester Capture Request Options
| Location | Description |
|---|---|
| Test menu, Capture last "..." request | Select this command from the Test menu to add the last request issued and results received to the capture queue (for inclusion in a test script later). |
| Blue up arrow (tool bar) | Select this command button from the tool bar to add the last request issued and results received to the capture queue (for inclusion in a test script later). |
If you exit the Access Tester before saving the capture queue, you are asked if the test cases should be saved to a script before exiting. Do not clear the Access Tester capture queue until all your test cases have been captured.
To capture one or more test cases
A test script is a collection of individual test cases that were captured using the Access Tester Console. When individual test cases are grouped together, it becomes possible to automate test coverage to validate policy configuration for a specific application or site.
You can create a test script to be used as input to the Access Tester and drive automated processing of multiple test cases. The Generate Script option enables you to create an XML file test script and clear the capture queue. If you exit the Access Tester before saving the capture queue, you are asked if the test cases should be saved to a script before exiting. The following sections provide more details:
Note:
Do not clear the capture queue until you have captured all the test cases you want to include in the script.
You can create a test script to be used as input to the Access Tester and drive automated processing of multiple test cases.
Such a script must follow these rules:
Allows replay by either a person or an automated system
Allows replay against different policy servers without changing the script, so that test scripts can be shared to drive different Policy Servers
Allows comparison of test execution results against "Known Good" results
Following are the locations of the Generate Script command.
Table 26-11 Generate Script Command
| Location of the Command | Description |
|---|---|
| Test menu, Generate Script | Select Generate Script from the Test menu to initiate creation of the script containing your captured test cases. |
| Paper Script Scroll (tool bar) | Select the Generate Script command button from the tool bar to initiate creation of the script containing your captured test cases. After you specify or select a name for your script, you are asked if the capture queue should be cleared. Do not clear the capture queue until all your test cases are saved to a script. |
You can capture the test cases that you want in your test script, and then record the script.
This section describes how to personalize and customize a test script.
The control block of a test script is used to tag the script and specify information to be used during the execution of a test. You might want to include details about who created the script and when and why the script was created. You might also want to customize the script using one or more control parameters.
The Access Tester provides command-line "control" parameters (test name, test number, and so on) that change how a script is processed without modifying the script itself. This enables you to configure test runs without having to change "known good" input test scripts. Table 26-12 describes the control elements and how to customize them.
Table 26-12 Test Script Control Parameters
| Control Parameter | Description |
|---|---|
| ignorecontent="false" | Ignores differences in the Content section of the use case when comparing the original OAM Server response to the current response. The default is to compare the Content sections. This parameter can be overridden by a command-line property when running in command-line mode. Default: false (compare Content sections). Values: true or false. In command-line mode, use ignorecontent=true to override the value specified in the Control section of the input script. |
| testname="oamtest" | Specifies a prefix to add to file names in the "results bundle" as described in the previous section. In command-line mode, use Testname=name to override the value specified in the Control section. |
| configfile="config.xml" | Specifies the absolute path to a configuration XML file that was previously created by the Access Tester. In command-line mode, this file is used by the Access Tester to locate connection details to establish a server connection. |
| numthreads="1" | Indicates the number of threads (virtual clients) that will be started by the Access Tester to run multiple copies of the test script. Each thread opens its own pool of connections to the policy server. This feature is designed for stress testing the Policy Server and is available only in command-line mode. Default: 1. When running a test script in GUI mode, the number of threads is ignored and only one thread is started to perform a single iteration of the test script. |
| numiterations="1" | Indicates the number of iterations that will be performed by the Access Tester. This feature is designed for stress testing and longevity testing the Policy Server and is available only in command-line mode. Default: 1. |
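Taken together, these parameters appear on the Control element of the test script. The fragment below is a hypothetical sketch of such a block; the attribute names follow the table above, but the surrounding script elements are omitted and the values are placeholders.

```xml
<!-- Hypothetical Control block; attribute names follow Table 26-12. -->
<Control ignorecontent="false"
         testname="oamtest"
         configfile="/abs/path/to/config.xml"
         numthreads="1"
         numiterations="1"/>
```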
You can personalize a test script generated by the Access Tester.
Once a test script has been created against a "Known Good" policy configuration and marked as "Known Good", it is important to drive the Access Tester using the script rather than specifying each test manually using the Console.
This section provides the following topics:
You can interactively execute tests scripts from within the Access Tester Console, or use automated test runs performed by command scripts.
Automated test runs can be scheduled by the operating system or a harness such as Apache JMeter, and executed without manual intervention. Other than lack of human input in command line mode, the two execution modes are identical.
Note:
A script such as .bat (Windows) or .sh (Unix) executes a test script in command line mode. Once a test script is created, it can be executed using either the Run Script menu command or the Access Tester command line.
Table 26-13 describes the commands to execute a test script.
Table 26-13 Run Test Script Commands
| Location | Description |
|---|---|
| Test menu, Run Script | Select the Run Script command from the Test menu to begin running a saved test script against the current policy server. The Status message panel is populated with the execution status as the script progresses. |
| Paper Script Scroll with green arrow (tool bar) | Select the Run Script command button from the tool bar to begin running a saved test script against the current policy server. The Status message panel is populated with the execution status as the script progresses. |
| Command line mode | A script such as .bat (Windows) or .sh (Unix) executes a test script in command line mode. Once a test script is created, it can be executed using either the Run Script menu command or the Access Tester command line. |
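As a sketch of the command line mode described above, a minimal .sh wrapper might look like the following. The jar name, -D property names, and file names are assumptions to verify against your installation; the wrapper defaults to a dry run that only prints the command it would execute.

```shell
#!/bin/sh
# Hypothetical wrapper for running a saved test script in command line
# mode. DRY_RUN=1 (the default here) prints the command instead of
# running it; the jar and property names are assumptions.
DRY_RUN=${DRY_RUN:-1}
SCRIPT="nightly_script.xml"
CONFIG="config.xml"
CMD="java -Dscript.scriptfile=${SCRIPT} -Dcontrol.configfile=${CONFIG} -jar oamtest.jar"

if [ "$DRY_RUN" = "1" ]; then
    echo "$CMD"
elif $CMD; then
    echo "test script completed"
else
    # A non-zero exit code indicates a failed run (see the exit codes
    # described for command line mode).
    echo "Access Tester run failed (exit code $?)" >&2
fi
```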
The following overview describes how the Access Tester operates when running a test script.
Process overview: Access Tester behavior when running a test script
The Access Tester loads the input xml file.
In command line mode, the Access Tester opens the configuration XML file defined within the input test script's Control element.
The Access Tester connects to the primary and secondary OAM Proxy using information in the Server Connection panel of the Console.
In command line mode, the Access Tester uses information in the Connection element of the configuration XML file.
In command line mode, the Access Tester checks the Control elements in the input script XML file to ensure none have been overwritten on the command line (command line values take precedence).
For each original test case defined in the script, the Access Tester:
Creates a new target test case.
Sends the original request to the OAM Server and collects the response.
Makes the following comparisons:
Compares the new response to the original response.
Compares response codes and marks as "mismatched" any new target test case where response codes differ from the original test case. For instance, if the original Validate returned "Yes", and now returns "No", a mismatch is marked.
When response codes are identical and the "ignorecontent" control parameter is "false", the Access Tester compares Content (the name of the Authentication scheme or post-authorization actions that are logged after each request). If the Content sections differ, the new target test case is marked "mismatched".
Collects the new elapsed time and stores it in the target test case.
Builds a new target test case containing the full state of the last server request and the same unique ID (UUID) as the original test case.
Updates the internal statistics table with statistics for the target test case (request type, elapsed time, mismatched, and so on).
After completing all the input test cases, the Access Tester:
Displays summary results.
Obtains and combines the testname and testnumber, and generates a name for the "results bundle" (three files whose names start with <testname>_<testnumber>).
Note:
Shell scripts can automate generating the bundle by providing testname and testnumber command line parameters.
Obtains testname from the command line parameter. If it is not specified on the command line, the Access Tester uses the testname element of the input script's Control block.
Obtains testnumber from the command line parameter. If it is not specified, testnumber defaults to a 7-character numeric string based on the current local time: 2-character minutes, 2-character seconds, and 3-character hundredths.
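For illustration, the default testnumber shape described above can be reproduced in shell (a sketch assuming GNU date, whose %N format gives nanoseconds):

```shell
#!/bin/sh
# Reproduce the default testnumber shape: 2-character minutes,
# 2-character seconds, then a 3-character fraction of the current
# second (7 characters total). Assumes GNU date for %N.
MINSEC=$(date +%M%S)
FRACTION=$(date +%N | cut -c1-3)
TESTNUMBER="${MINSEC}${FRACTION}"
echo "$TESTNUMBER"
```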
Generates the "results bundle": three files whose names start with <testname>_<testnumber>:
The target XML script contains the new test cases: <testname>_<testnumber>_results.xml.
The statistics XML file contains a summary and detailed statistics of the entire test run, plus those test cases marked as "mismatched": <testname>_<testnumber>_stats.xml.
The execution log file contains information from the Status Message panel: <testname>_<testnumber>_log.log.
When running in multi-threaded mode, only the statistics XML file and execution log file will be generated.
In command line mode, the Access Tester exits with the exit code as described in "About the Access Tester Command Line Mode".