The Skill Tester

The Skill Tester lets you simulate conversations with your skill to test the dialog flow, intent resolution, entity matching, and Q&A responses. You can also use it to find out how conversations would render in different channels.

You can test the various functions of your skill in an ad hoc manner, and, for Oracle Digital Assistant instances provisioned on Oracle Cloud Infrastructure (sometimes referred to as the Generation 2 cloud infrastructure), you can also create test cases by recording conversations. You can build an entire suite of test cases for the skill. When developers extend the skill, they can run these test cases to verify that the skill's core functionality is preserved.

Note:

You can't create test cases if your instance is provisioned on the Oracle Cloud Platform (as are all version 19.4.1 instances of Oracle Digital Assistant).

To start the Skill Tester:

  1. Open the skill that you want to test.

  2. At the top of the page near the Validate and Train buttons, click the icon for the Skill Tester.
    (Illustration: skill_tester_top_margin.png)

  3. To preview how the skill renders on a given channel, select a channel type from the Channel dropdown. The Skill Tester simulates how the skill behaves within the limitations of that channel. By default, the Skill Tester simulates the Webhook channel, which renders the UI per the Oracle Web SDK. If you select one of the channels supported by a client SDK (Oracle Android, Oracle iOS, or Oracle Web), then you can use the Speak and voice locale options to test the skill with ASR (Automatic Speech Recognition). The recognized text is transcribed in the input field and then sent automatically.

    Note:

    The Skill Tester does not simulate all of the features for a selected channel. For example, the Microsoft Teams channel simulation does not render adaptive cards. To find out about channel limitations in general, refer to Comparison of Channel Message Constraints.
  4. In the text field located at the bottom of the Skill Tester, enter some test text or dictate a message. For voice, choose a locale (French, Spanish, or English) for skills localized for those languages, and then click Speak for each voice command. Click Attach to test a file, audio, video, or image attachment response. For example, you can use Attach to test an attachment response rendered by the System.CommonResponse component.

    Tip:

    Clicking the Reset icon next to a skill response in the chat window replays all of the interactions up to that response.

Typically, you’d use the Skill Tester after you’ve created intents and defined a dialog flow. It’s where you actually chat with your skill or digital assistant to see how it functions as a whole, not where you build Q&A or intents.

As you are creating, testing, and refining intents, you may prefer to use the Try It Out! tester in the Intents and Q&A pages.

The Try It Out! feature helps you improve your training utterances iteratively.

Tip:

You should test each skill in your target channels early in the development cycle to make sure that your components render as intended.

Track Conversations

In the Conversation tab, the Tester tracks the current response in terms of the current state in the dialog flow. Depending on where you are in the dialog flow, the window shows you the postback actions or any context and system variables that have been set by a previous postback action. It also shows you any URL, call, or global actions.
In the Intent/Q&A tab, you can see the resolved intent that triggered the current path in the conversation. When the user input gets resolved to Q&A, the Routing window shows you the ranking for the returned answers. If the skill uses answer intents for FAQs, then only the resolved answer intent displays.
Finally, the JSON window shows you the complete details for the conversation, including the entities that match the user input and values returned from the backend. You can search this JSON object, or download it.

Test Cases

You can create a test case for each of your skill or digital assistant use cases by recording conversations in the Skill Tester. These test cases are part of the skill's metadata and therefore persist across versions.

Test cases are channel-specific: the test conversation, as it is handled by the selected channel, is what is recorded for a test case. For example, test cases recorded using one of the Skill Tester's text-based channels cannot be used to test the same conversation on the Oracle Web Channel.

When you extend a skill that you have pulled from the Skill Store, you can run these test cases to ensure that your modifications have not broken any of the skill's basic functions. In addition to preserving core functions, you can create test cases for new scenarios and use cases, or disable any inherited test cases that fail because of the changes that were introduced by the extension.

Note:

This feature is not supported in Oracle Digital Assistant version 19.4.1.

Manage Test Cases

The Test Cases page lists both the test cases that you've created and the test cases that were inherited (marked with the inherited test case icon) from a bot that you've extended, cloned, or imported from another instance. Using this page, you can add and run test cases. You can also delete the test cases that you've created, or exclude test cases from a test run by disabling them.
(Illustration: test_cases.png)

Along with the basic information for a selected test case, the page displays the JSON definition of the test case itself in the Conversation field. While you can update this definition, for example, to fix a test run by substituting placeholders for variables, we do not recommend making extensive changes to it.
[
    {
        "source": "user",
        "type": "text",
        "payload": {
            "message": "I would like a large veggie pizza on gluten-free crust delivered to my home at 8pm"
        }
    },
    {
        "source": "bot",
        "type": "text",
        "payload": {
            "message": "OK, let's get that order sorted."
        }
    },
    {
        "source": "bot",
        "type": "text",
        "payload": {
            "message": "OK, so we are getting you a large Veggie pizza at ${TIME}. This will be on our gluten free crust. We are delivering to Buckingham Palace, The Mall, Westminster, London SW1A 1AA."
        }
    }
]
Add Test Cases

Whether you're creating a skill from scratch, or extending a skill, you can create a test case for each use case. For example, you can create a test case for each payload type. You can build an entire suite of test cases for a skill by simply recording conversations or by creating JSON files that define message objects.

Create a Test Case from a Conversation
Recording conversations is quicker and less error prone than defining a JSON file. To create a test case from a conversation:
  1. Open the skill or digital assistant that you want to create the test for.
  2. In the toolbar for the bot at the top of the page, click the Tester icon.
  3. Click Bot Tester.
  4. Select the channel.

    Note:

    Test cases are channel-specific: the test conversation, as it is handled by the selected channel, is what is recorded for a test case. For example, test cases recorded using one of the Skill Tester's text-based channels cannot be used to test the same conversation on the Oracle Web Channel.
  5. Enter the utterances that are specific to the behavior or output that you want to test.
  6. Click Save As Test.
  7. Complete the Save Conversation as Test Case dialog:
    • Enter a name and display name that describe the test.
    • As an optional step, provide details in the Description field that help developers understand how the test validates the expected behavior by describing a scenario or a use case from a design document.
  8. Click Save Conversation.

Create a Test Case from a JSON Object
To create a test case from an array of message objects:
  1. Click + Test Case in the Test Cases page.
  2. Enter a name and display name that describe the function that's tested.
  3. As an optional step, provide details in the Description field that help developers understand how the test validates the expected behavior.
  4. Add the message objects within the array ([]). Here is a template for the different payload types (a complete, minimal example follows these steps):
        {
            source: "user",             // the text-only message format is kept simple yet extensible
            type: "text",
            payload: {
                message: "order pizza"
            }
        },
        {
            source: "bot",
            type: "text",
            payload: {
                message: "how old are you?",
                actions: [...],         // action types: postback, url, call, share. Bot messages can have actions and globalActions that, when clicked by the user, send specific JSON back to the bot
                globalActions: [...]
            }
        },
        {
            source: "user",
            type: "postback",
            payload: {                  // the payload object represents the postback JSON sent from the user back to the bot when the button is clicked
                variables: {
                    accountType: "credit card"
                },
                action: "credit card",
                state: "askBalancesAccountType"
            }
        },
        {
            source: "bot",
            type: "cards",
            payload: {
                message: "label",
                layout: "horizontal|vertical",
                cards: ["Thick","Thin","Stuffed","Pan"],    // in test files, cards can be strings that are matched against button labels...
                cards: [{                                   // ...or JSON objects that are matched field by field (use one form or the other)
                    title: "...",
                    description: "...",
                    imageUrl: "...",
                    url: "...",
                    actions: [...]                          // actions can be specific to a card or global
                }],
                actions: [...],
                globalActions: [...]
            }
        },
        {
            source: "bot|user",
            type: "attachment",         // an attachment message can be either a bot message or a user message
            payload: {
                attachmentType: "image|video|audio|file",
                url: "https://images.app.goo.gl/FADBknkmvsmfVzax9",
                title: "Title for Attachment"
            }
        },
        {
            source: "bot",
            type: "location",
            payload: {
                message: "optional label here",
                latitude: 52.2968189,
                longitude: 4.8638949
            }
        },
        {
            source: "user",
            type: "raw",
            payload: {
                ...                     // free-form, application-specific JSON for custom use cases; matched by exact JSON comparison
            }
        }
        ...                             // multiple bot messages per user message are possible
    
  5. Switch on the Enabled toggle.
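
For reference, here is a minimal, syntactically valid conversation array that combines the text and postback payload types from the template above. The utterances, variable, action, and state names are purely illustrative, not taken from a real skill:
[
    {
        "source": "user",
        "type": "text",
        "payload": {
            "message": "order pizza"
        }
    },
    {
        "source": "bot",
        "type": "text",
        "payload": {
            "message": "What size would you like?"
        }
    },
    {
        "source": "user",
        "type": "postback",
        "payload": {
            "variables": {
                "size": "Large"
            },
            "action": "Large",
            "state": "askPizzaSize"
        }
    }
]
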
Run Test Cases

You can run one or all of the test cases listed in the Test Cases page. When you expect an inherited test case to fail because of changes that were deliberately made to the skill, you can exclude it from the test run by disabling it. You can also temporarily disable a test case during ongoing development.

Note:

You can't delete an inherited test case; you can only disable it.

After the test run completes, click the Test Run Results tab to find out which of the test cases passed or failed.
(Illustration: test_run_results.png)

View Test Run Results

The Test Run Results page lists the recently executed test runs and the results of each run. You can review the results for a particular test run by selecting it from this list, which, by default, begins with the most recent run.

Note:

The test run results for each skill are maintained for 14 days. They are deleted after this time.

You can filter the results by clicking the Passed, Failed, or In Progress tiles. The page provides a summary for each test case that's included in the run. Test cases pass or fail according to a comparison of the expected output, which is recorded in the test case definition, against the actual output from the test run. If the two match, the test case passes. If they don't, the test case fails. By expanding the summary, you can identify the cause of the failure using the JSON Pointer that locates the error and the comparison of the actual and expected values.
(Illustration: json_bot_tester1.png)

Review Failed Test Cases

The summary's JSON Pointer locates the message object in the test case definition where the failure occurred. Along with pinpointing the error, the summary also presents the comparison of the actual value from the test run to the expected value set by the test case.

In the following example, the JSON Pointer indicates that the problem lies with the URL value that's expected in the payload of the message at index 8 in the test case conversation (/8/payload/url). At this point in the conversation, the expected URL value (photo1.png) does not match the actual URL (photo2.png) returned by the test run:
Json Pointer:
/8/payload/url
Expected Value:
https://www.example.com/photo1.png
Actual Value:
https://www.example.com/photo2.png
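
The message at index 8 is the ninth element of the conversation array (JSON Pointer indices are zero-based). Assuming that message is an attachment message of the kind shown in the payload template earlier (this fragment is an illustrative sketch, not taken from a real test case), the failing portion of the definition would look like this:
    {
        "source": "bot",
        "type": "attachment",
        "payload": {
            "attachmentType": "image",
            "url": "https://www.example.com/photo1.png"
        }
    }
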
Fix Failed Test Cases by Applying an Actual Value

Some changes, however small, can cause many of the test cases to fail within the same run. This is often the case with changes to text strings such as prompts. For example, changing a text prompt from "How big of a pizza do you want?" to "What pizza size?" will cause any test case that includes this prompt to fail, even though the skill's functionality remains unaffected. While you can accommodate this change by re-recording the test case entirely, you can instead quickly update the test case definition with the revised prompt by clicking Apply Actual. Because the test case is now in step with the new skill definition, it will pass (or at least not fail because of the changed wording).
(Illustration: add_actual.png)

Note:

While you can apply string values, such as prompts and URLs, you can't use the Apply Actual function to fix a test case when a change to an entity's values or its behavior (disabling the Out of Order Extraction function, for example) causes the values provided by the test case to become invalid. The test case will fail because the skill will continually prompt for a value that it will never receive, thus causing its responses to become out of step with the sequence defined by the test case.
Fix Test Cases by Adding Variable Value Placeholders

Responses from a skill or digital assistant can include dynamic information that causes test cases to fail when the actual and expected outputs are compared. You can exclude dynamic information from the comparison by substituting a placeholder, formatted as ${MY_VARIABLE_NAME}, into the JSON definition.

For example, a temporal value, such as one returned by the Apache FreeMarker date operation ${.now?string.full}, will cause test cases to continually fail because of the mismatch between the time when the test case was recorded and the time when the test case was run.
(Illustration: clashing_temporal_values.png)

To enable these test cases to pass, replace the clashing time value in the JSON definition with a placeholder. For example, replace Monday, December 9, 2019 5:27:27 PM UTC in the following payload with ${ORDER_TIME}.
    {
        "source": "bot",
        "type": "text",
        "payload": {
            "message": "You placed your order on Monday, December 9, 2019 5:27:27 PM UTC for a Small Meat Lovers pizza. Your pizza is on the way."
        }
    }
The revised message object looks like this:
    {
        "source": "bot",
        "type": "text",
        "payload": {
            "message": "You placed your order on ${ORDER_TIME} for a Small Meat Lovers pizza. Your pizza is on the way."
        }
    }
The variable placeholders that you create are listed in the Variables field. For newly recorded test cases, the Variables field also notes the SYSTEM_BOT_ID placeholder, which is substituted for the system.botId values that change when the skill is imported from another instance or cloned.
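
For example, a test case that uses both the ${ORDER_TIME} placeholder from above and the system-generated placeholder would carry a variables list like the following in its exported definition (this mirrors the variables array shown in the import example later in this topic; the exact pairing is illustrative):
    "variables" : [ "ORDER_TIME", "SYSTEM_BOT_ID" ]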

Import Test Cases

You can import a test case when you're developing parallel versions of the same skill or working with clones. To import a test case:
  1. Choose Export Tests from the menu in the skill's tile.
  2. Save, then extract, the DefaultTestSuite.zip file to your local system.
  3. In the extracted ZIP, navigate to the tests directory, then open the JSON file of the test case in an editor.
  4. Open the cloned or versioned skill.
  5. Manually create the test case by first clicking + Test Case in the Test Cases page. Then add the test case name and an optional description.
  6. Delete the array ([]) in the Conversation window.

  7. In the JSON file:
    • Copy the array of message objects that's defined for the conversation property:
         "conversation" : [ {
            "source" : "user",
            "type" : "text",
            "payload" : {
              "message" : "I want to order a pizza"
            }
          }, {
            "source" : "bot",
            "type" : "text",
            "payload" : {
              "message" : "What kind of pizza would you like to order?"
            }
          }, {
            "source" : "bot",
            "type" : "cards",
            "payload" : {
              "layout" : "horizontal",
              "cards" : [ {
                "title" : "CHEESE BASIC",
                "description" : "Classic marinara sauce topped with whole milk mozzarella cheese.",
                "imageUrl" : "https://cdn.pixabay.com/photo/2017/09/03/10/35/pizza-2709845__340.jpg",
                "actions" : [ {
                  "type" : "postback",
                  "label" : "Order Now",
                  "postback" : {
                    "variables" : {
                      "pizza" : "CHEESE BASIC"
                    },
                    "system.botId" : "${SYSTEM_BOT_ID}",
                    "system.state" : "orderPizza"
                  }
      ...
      {
            "source" : "bot",
            "type" : "attachment",
            "payload" : {
              "type" : "image",
              "url" : "https://cdn.pixabay.com/photo/2017/09/03/10/35/pizza-2709845__340.jpg"
            }
          } ]

      Note:

      Include only the array of message objects. Do not include the comma separator after this array, the variables array definition (if one exists), or the closing curly bracket, because they will make your test case definition syntactically invalid.
      {
            "source" : "bot",
            "type" : "attachment",
            "payload" : {
              "type" : "image",
              "url" : "https://cdn.pixabay.com/photo/2017/09/03/10/35/pizza-2709845__340.jpg"
            }
          } ],
          "variables" : [ "SYSTEM_BOT_ID" ]
        } 
    • Paste it into the Conversation window.

      If you included the comma separator after the conversation array, the variables array definition, or the closing curly bracket, delete them to avoid syntax errors.

  8. Switch on Enabled.
  9. Run the test case.