Sun Worklist Manager Service Engine User's Guide

Testing the Worklist Manager Composite Application

Testing a deployed Worklist Manager application involves creating test cases in the composite application project that act as remote partner services. These test cases send messages to the BPEL Service Engine. When a Worklist Manager task is invoked, the BPEL SE waits for you to complete the task before the test is completed.

All steps in this section assume the following:

Creating a Test Case

You create the test case in the composite application. When you create the test case, you need to select the WSDL operation you want to test.

To Create a Test Case

  1. In the NetBeans IDE Projects window, expand the composite application project to expose the Test folder.

  2. Right-click Test, and then select New Test Case.

    The New Test Case Wizard appears.

  3. Enter a name for the test case, and then click Next.

    The Select the WSDL Document window appears.

  4. Expand the BPEL Module project, select the WSDL file containing the operation to test, and then click Next.

    The Select the Operation to Test window appears.

  5. Select the operation to test, and then click Finish.

    In the Projects tree, a new folder is created under the Test node, containing two files: Input.xml and Output.xml.


    Note –

    If you view the test case in the Files window, you see Concurrent.properties as a third file.


Configuring Test Properties

To Configure Test Properties

  1. In the Projects window, expand the composite application and expand Test.

  2. Right-click the test case you created, and then select Properties.

  3. Set the properties of the test case as follows:

    • Description: A general description of the test case.

    • Destination: The URL from the WSDL file's <soap:address location="THIS"> tag. This identifies the location of the web service to be tested.

    • SoapAction: This can be left blank.

    • Concurrent Threads: The number of concurrent threads to run during the test. This value can be any integer. Each thread can invoke the test case multiple times (see the following property). For example, if Concurrent Threads is 2 and Invokes Per Thread is 3, the test case runs six times (two threads, each invoking the test three times).

    • Invokes Per Thread: The number of times each thread invokes the test case. This can be any integer.

    • Test Timeout (sec): The length of time, in seconds, that each thread has to finish (including completing the worklist portion of the test). If a thread does not finish in the allotted time, an exception is thrown. This can be any integer.

    • Calculate Throughput: An indicator of whether to calculate the throughput during testing.

    • Comparison Type: The method the tester uses to compare the expected output with the actual output. Select one of the following options:

      • identical: Compares the expected output and the actual output as streams of characters.

      • binary: Compares the expected output and the actual output as streams of bytes.

      • equals: Compares the expected output and the actual output as XML documents.

    • Feature Status: Select one of the following options:

      • progress: Marks test completion as "success", regardless of the actual outcome.

      • done: Records the actual outcome of the test.
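The three comparison types can be sketched as follows. This is an illustrative approximation in Python, not the tester's actual implementation; in particular, the `xml_equal` helper is an assumption about what "compares as XML documents" means (structural equality that ignores insignificant whitespace).

```python
import xml.etree.ElementTree as ET


def xml_equal(a: ET.Element, b: ET.Element) -> bool:
    """Structural XML equality, ignoring insignificant whitespace (a sketch)."""
    if a.tag != b.tag or a.attrib != b.attrib:
        return False
    if (a.text or "").strip() != (b.text or "").strip():
        return False
    if len(a) != len(b):
        return False
    return all(xml_equal(x, y) for x, y in zip(a, b))


def compare(expected: str, actual: str, mode: str) -> bool:
    if mode == "identical":
        # Character-by-character comparison of the two documents.
        return expected == actual
    if mode == "binary":
        # Byte-by-byte comparison after encoding.
        return expected.encode("utf-8") == actual.encode("utf-8")
    if mode == "equals":
        # Parse both documents and compare them structurally.
        return xml_equal(ET.fromstring(expected), ET.fromstring(actual))
    raise ValueError(f"unknown comparison type: {mode}")
```

Note the practical difference: under "equals", two documents that differ only in indentation still match, whereas "identical" and "binary" treat any character difference as a mismatch.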

Defining the Test Input

The test input file allows you to define the input data for the Worklist Manager application. You can modify this file after each test to run multiple tests.

To Define Test Input

  1. In the Projects window, expand the composite application, expand Test, and expand your test case.

  2. Right-click Input.xml and select Edit.

  3. Modify the contents as needed. For example, wherever you see <value>?string?</value> select ?string? and replace it with a string of any length.

    Do not include the characters “<” (less-than sign) or “&” (ampersand) unless you use them with their XML meanings; otherwise, escape them as &lt; and &amp;.

  4. When you are satisfied, click Save.

  5. Right-click Output.xml and select Edit.

    If this file is empty, a special operation is triggered when the test is run. Each time the test runs, the current output is compared to the contents of Output.xml, and any differences are stored in an Actual_yymmddhhmmss.xml file under the test case. However, when Output.xml starts out empty, the current output is instead written to Output.xml. On each subsequent run, assuming Output.xml is no longer empty, its contents are preserved and are never overwritten by new results.
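The record-or-compare behavior described above can be sketched as follows. This is an illustrative Python approximation, not the tester's actual code; the function name and return convention are assumptions, while the file names follow the documentation.

```python
import os
import time


def check_output(output_xml: str, actual_output: str) -> bool:
    """Return True if the run passes the output-comparison step (a sketch)."""
    if not os.path.exists(output_xml) or os.path.getsize(output_xml) == 0:
        # First run: Output.xml is empty, so the current output
        # becomes the baseline for all later runs.
        with open(output_xml, "w") as f:
            f.write(actual_output)
        return False  # the first run is reported as a failure
    with open(output_xml) as f:
        expected = f.read()
    if expected != actual_output:
        # Differences are stored in a timestamped file under the test case.
        stamp = time.strftime("%y%m%d%H%M%S")
        with open(f"Actual_{stamp}.xml", "w") as f:
            f.write(actual_output)
        return False
    return True
```

Because the first run always writes the baseline rather than matching it, the first run of any new test case is expected to fail; subsequent unchanged runs succeed.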

Running Test Cases

You can run each test case in a project individually, or you can run all tests at once.

To Run a Single Test Case

  1. Right-click the test case you want to run, and select Run.

  2. Launch the Worklist Manager Console from the GlassFish application server (click Web Applications, and then click Launch next to WorklistWebCompositeApp-WorklistWebApplication).

  3. If necessary, perform a search for the task.

  4. When you see a task in the list, select the task and click Checkout.

  5. Modify the task output if necessary, and then click Save.

  6. Click Complete to finish the task.

  7. Check the Output window for the results.

To Run All Test Cases in a Project

  1. Right-click the composite application project and select Test.

  2. Launch the Worklist Manager Console from the GlassFish application server (click Web Applications, and then click Launch next to WorklistWebCompositeApp-WorklistWebApplication).

  3. If necessary, perform a search for the task.

  4. When you see a task in the list, select the task and click Checkout.

  5. Modify the task output if necessary, and then click Save.

  6. Click Complete to finish the task.

  7. Check the Output window for the results.

Reviewing Test Case Results

The first time you run a test, the results correctly report that it failed. This happens because the output produced does not match the empty Output.xml file; the file's empty content is then replaced with the output of the first run. If you run the test again without changing the input, the second and subsequent runs report success, because the output matches the contents of Output.xml.

Test results appear in the NetBeans Output window. Detailed results appear in the JUnit Test Results window, which opens automatically when you run a test case. If you change a value in Input.xml and rerun the test, the new output no longer matches the contents of Output.xml, so the test reports a failure and the differences are stored in an Actual_yymmddhhmmss.xml file under the test case.

If you right-click the test case node and select Diff, the window displays the difference between the latest output and the contents of Output.xml.
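A comparison similar to the Diff view can be produced outside the IDE with a standard diff tool. The sketch below uses Python's difflib; the function name is illustrative, and the file labels mirror the documentation.

```python
import difflib


def show_diff(expected: str, actual: str) -> str:
    """Unified diff between the baseline and the latest output (a sketch)."""
    return "\n".join(difflib.unified_diff(
        expected.splitlines(),
        actual.splitlines(),
        fromfile="Output.xml",
        tofile="latest output",
        lineterm=""))
```

Lines prefixed with "-" come from the Output.xml baseline, and lines prefixed with "+" come from the latest output.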