28 Insights

The Insights reports offer developer-oriented analytics that pinpoint issues with skills. Using these reports, you can address these issues before they cause problems.

These reports, which track voice and text conversations by time period and by channel, enable you to identify execution paths, determine the accuracy of your intent resolutions, and access entire conversation transcripts.

Note:

Voice Insights are tracked for skills routed to chat clients that have been configured for voice recognition and are running on Version 20.8 of the Oracle Web, iOS, or Android SDKs.

Report Types

Click the Insights icon in the left navbar to access the following reports.
  • Overview– Use this dashboard to quickly find out the total number of voice and text conversations by channel and by time period. The report's metrics break this total down by the number of complete, incomplete, and in-progress conversations. In addition, this report tells you how the skill completed, or failed to complete, conversations by ranking the usage of the skill's transactional and answer intents in bar charts and word clouds.
  • Intents–Provides intent-specific data and information for the execution metrics (states, conversation duration, and most- and least-popular paths).
  • Paths–Shows a visual representation of the conversation flow for an intent.
  • Conversations–Displays the actual transcript of the skill-user dialog, viewed in the context of the dialog flow and the chat window.
  • Retrainer–Lets you use live conversation data and the insights gained from it to improve your skill through moderated self-learning.
  • Export–Lets you download a CSV file of the Insights data collected by Oracle Digital Assistant. You can create a custom Insights report from the CSV.

Review the Summary Metrics and Graphs

The Overview report's metrics, graphs, charts, and word clouds depict overall usage. When the skill has handled both text and voice conversations, the default view of this dashboard includes both, a rendering enabled by the All option. Otherwise, the default view is just text or just voice.

You can adjust this view by toggling between the Voice and Text modes, or you can compare the two by enabling Compare text and voice conversations.

When you select Text, the report displays a set of common metrics. When you select Voice, the report includes additional voice-specific metrics. These metrics only apply to voice conversations, so they do not appear when you choose Compare text and voice conversations.

Note:

The Mode options depend on the presence of voice or text messages. If there are only text messages, for example, then only the Text option appears.

Common Metrics

The Overview report includes the following KPIs for both text and voice conversations:
  • Completed—The conversations that users have successfully completed.
  • Incomplete—Conversations that users didn't complete, because they abandoned the skill, or couldn't complete because of system-level errors, timeouts, or flaws in the skill's design.
  • In Progress—"In-flight" conversations (conversations that have not yet completed nor timed out). This metric tracks multi-turn conversations.
  • Error Conditions— The number of errors (system-handled, infinite loops, or timeouts) across all conversations.
  • Average Duration— The average length for all of the skill’s conversations.

Voice Metrics

Any conversation that begins with a voice interaction is considered a voice conversation. Any conversation that started in voice but was completed in text is considered a switched conversation. All other conversations are considered text. In addition to the standard metrics, the Overview report includes the following metrics that are specific to voice and switched conversations.

Note:

These metrics are for informational purposes only; you cannot act upon them.
To view these metrics, disable Compare text and voice conversations and select either All or Voice as the mode.
  • Average Duration—The average length of time of the voice conversations.
  • Average Real Time Factor (RTF)—The ratio of the CPU time taken to process the audio input to the duration of that audio. For example, if it takes one second of CPU time to process one second of audio, then the RTF is 1 (1/1). If it takes 500 milliseconds to process one second of audio, the RTF is 0.5 (1/2). Ideally, the RTF should be below 1 to ensure that processing does not lag behind the audio input. If the RTF is above 1, contact Oracle Support.
  • Average Voice Latency—The delay, in milliseconds, between detecting the end of the utterance and the generation of the final result (or transcription). If you observe latency, contact Oracle Support.
  • Average Audio Time—The average duration, in seconds, for all voice conversations.
  • Switched Conversations— The percentage of the skill's conversations that began with voice commands, but needed to be switched to text to complete the interaction. This metric indicates that there were multiple execution paths involved in switching from voice to text.

Review Conversation Trends Insights

The Conversation Trends chart plots the following for transactional intents (including agent transfer intents) and answer intents:
  • Completed—The conversations that users have successfully completed.
  • Incomplete—Conversations that users didn't complete, because they abandoned the skill, or couldn't complete because of system-level errors, timeouts, or flaws in the skill's design.
  • In Progress—"In-flight" conversations (conversations that have not yet completed nor timed out). This metric tracks multi-turn conversations.
View Intent Usage
The Intents bar chart enables you to spot not only the transactional and answer intents that completed conversations, but also the ones that caused incomplete conversations. You can also use this chart to find out if the overall usage of these intents bears out your use case. For example, does the number of completed conversations for an intent that serves a secondary purpose outpace the number of completed conversations for your primary intent? To put this in more practical terms, has your pizza ordering skill become a "file complaint" skill that routes most users to a live agent?

Note:

Not all conversations resolve to an intent. When No Intent displays in the Intents bar chart and word cloud, it indicates that the conversation was not initiated by an intent resolved from user input, but through a transition action or through routing from a digital assistant.

You can filter the Intents bar chart and the word cloud using the bar chart's All Intents, Answer Intents, and Transaction Intents options.
Description of the illustration all_intents.png

These options enable you to quickly break down usage. For example, for mixed skills – ones that have both transactional and answer intents – you can view usage for these two types of intents using the Answer Intents and Transaction Intents options.
Description of the illustration transactional_intents.png

The key phrases rendered in the word cloud reflect the selected option, so, for example, only the key phrases associated with answer intents display when you select Answer Intents.
Description of the illustration answer_intents.png

Review Intents and Retrain Using Key Phrase Clouds
The Most Popular Intents word cloud provides a companion view to the Intents bar chart by displaying the number of completed and incomplete conversations for each intent. It weights the most frequently invoked intents by size and by color. The size represents the number of invocations for the given period.

The color represents the level of success for the intent resolution:
  • Green represents a high average of requests resolving at, or exceeding, the Confidence Win Margin threshold within the given period.
  • Yellow represents intent resolutions that, on average, don't meet the Confidence Win Margin threshold within the given period. This color is a good indication that the intent needs retraining.
  • Red is reserved for unresolvedIntent. This is the collection of user requests that couldn't be matched to any intent and may be incorporated into the corpus.
The Most Popular Intents word cloud is the gateway to more detailed views of how the intents resolve user messages. Review Intents and Retrain Using Key Phrase Clouds describes how you can drill down from the Most Popular Intents word cloud to find out more about usage, user interactions, and retraining.
Beyond that, it gives you a more granular view of intent usage through key phrases, which are representations of actual user input, and access to the Retrainer.

Note:

The Most Popular Intents word cloud is not supported in Oracle Digital Assistant version 19.4.1.
Review Key Phrases

By clicking an intent, you can drill down to a set of key phrases. These phrases are abstractions of the original user message that preserve its original intent. For example, the key phrase cancel my order is rendered from the original message, I want to cancel my order. Similar messages can be grouped within a single key phrase. The phrases I want to cancel my order, can you cancel my order, and cancel my order please can be grouped within the cancel my order key phrase, for example. Like the intents, size represents the prominence for the time period in question and color reflects the confidence level.
Description of the illustration key_phrases_for_intent.png

You can see the actual user message (or the messages grouped within a key phrase) within the context of a conversation when you click a phrase and then choose View Conversations from the context menu.
Description of the illustration view_conversations_option.png

This option opens the Conversations Report.
Description of the illustration key_phrases_conversation_report.png

Retrain from the Word Cloud
In addition to viewing the message represented by the phrase in context, you can also add the message (or the messages grouped within a key phrase) to the training corpus by clicking Retrain.

This option opens the Retrainer, where you can add the actual phrase to the training corpus.

Review Intents Insights

The Intents report gives you a closer look at the user traffic for each intent for a given period. While you can already see the number of complete or incomplete conversations for each intent on the Overview page, the Intents report shows you how these conversations flowed through the dialog flow definition by displaying the paths taken and the average length of time it took to get to an ending point. Using the Intents report, you can isolate the problematic parts of your dialog flow that prevent conversations from completing. You can also use this report to refine dialog flow.

Note:

The Insights reports render the dialog flow as a topographical map. It’s similar to a transit map, but here each stop is a state. To show you how the conversation is progressing (and to help you debug), the map identifies the components for each state along the way. You can scroll through this path to see where the values slotted from the user input propelled the conversation forward, and where it stalled because of incorrect user input, timeouts resulting from no user input, system errors, or other problems. While the last stop in a completed path is green, for incomplete paths where these problems arise, it’s red.

Because the report returns the intents defined for a skill over a given time period, its contents change to reflect the intents that have been added, renamed, or removed from the skill at various points in time. For each intent, you can toggle the view between completed and incomplete conversations for a given period.
Description of the illustration insights_intents_selector.png

For the incomplete conversations, you can identify the states where these conversations ended using the Incomplete States horizontal bar chart. This chart lets you spot recurring problems, because it shows the number of conversations where a particular state was the point of failure. You can also scroll along the paths to see the preceding states. Using the Paths report, you can see the reasons why the flow ended at this state (meaning errors, timeouts, or bad user input).
You can use the Completed view’s statistics and paths as indicators of the user experience. For example, you can use this report to ascertain if the time spent is appropriate to the task, or if the shortest paths still result in an attenuated user experience, one that encourages users to drop off. Could you, for example, usher a user more quickly through the skill by slotting values with composite bag entities instead of prompts and value setting components?
In addition to the duration and routes for task-oriented intents, the Intents report also returns the messages that couldn’t get resolved. To see these messages, click unresolvedIntent.

This report doesn’t show paths or velocity because they don’t apply to this user input. Instead, the bar chart ranks each intent by the number of messages that either couldn’t be resolved to any intent, or had the potential of getting resolved (meaning the system could guess an intent) but were prevented from doing so because of low confidence scores. By clicking an intent in the bar graph, you can see these candidate messages, sorted by a probability score.

Note:

These are the same messages that get returned by the default search criteria in the Retrainer report, so you can add them there.

Review Paths Insights

The Pathing report lets you find out how many conversations flowed through the dialog flow's execution paths for any given period. This report renders the conversation in a path that's similar to a transit map. The stops along the path represent an intent, a state defined in the dialog flow definition, or an error.
Description of the illustration path_report.png

Through this report, you can find out where the number of conversations remained constant through each state and pinpoint where the conversations branched because of values getting set (or not set), or dead-ended because of some other problem like a malfunctioning custom component or a timeout.

Query the Pathing Report

The report renders a path according to your query parameters. You can query this report for both the complete and incomplete execution paths for an intent, dictate the length of the path by choosing a final state, or isolate one or more states to assess how they affect the functioning of the skill.

After you enter your query, you build the path dynamically by first clicking the green Begin arrow, then the blue intent icon, to find out which states are failing.

Initially, the path displays only the intent, custom components, and errors.

Note:

In this report, intent refers to the intent named in the query. intent can also refer to all of the intents when you query the report using the default All Intents setting.
Clicking the final state or error in the path opens the details panel, which displays the final error message or the last customer message.

The report displays Null Response for any customer message that's blank (or otherwise not in plain text) or contains unexpected input. For non-text responses that are postback actions, it displays the payload of the most recent action. For example:
{"orderAction":"confirm","system.state":"orderSummary"}
Clicking Conversations opens the Conversations report, where you can review the entire transcript and see all of the states in the conversation path.

Scenario: Querying the Pathing Report

Looking at the Overview report for a financial skill, you notice that there are 19 incomplete conversations. By adding up the values represented by the orange "incomplete" segments of the stacked bar charts, you deduce that seven conversations failed because of problems on the execution paths for the skill's intents, Send Money, Track Spending, Balances, Dispute, and Transactions. This implies that the remaining 12 errors occurred before intent resolution could complete.
Description of the illustration overview_pathing_scenario.png

To investigate the intent failures further, you open the pathing report and enter your first query: filter for all intents that have an incomplete outcome. Clicking Begin expands the path into two branches: intent and System.DefaultErrorHandler. The path traces seven incomplete conversations to the intent node and 12 to System.DefaultErrorHandler, bearing out the total of 19 incomplete conversations displayed on the Overview page.
Description of the illustration initial_path.png

By default, the path only renders intent (which in this case, represents all of the incomplete intents) and the error. To find out more about the intervening states (and their possible roles in causing these failures), you then refer to the dialog flow definition to identify the states that begin the execution paths for each of the intents.
states:
  intent:
    component: "System.Intent"
    properties:
      variable: "iResult"
    transitions:
      actions:
        Balances: "startBalances"
        Transactions: "startTxns"
        Send Money: "startPayments"
        Track Spending: "startTrackSpending"
        Dispute: "setDate"
        unresolvedIntent: "unresolved"
These states (referenced as transition actions for the System.Intent component) are startBalances, startTxns, startPayments, startTrackSpending, and setDate, so you add them to the query.

After clicking the Begin node, you expand the path by clicking intent to find out which states caused the seven incomplete conversations. The startPayments and startBalances states preceded one failure each, while the setDate state preceded the remaining five.

By expanding the state nodes, you can pinpoint the actual states where the failures occurred.

Custom components caused failures within the startPayments and startBalances execution paths. System errors were the culprits for the five incomplete conversations for the setDate state, which begins the execution path for the Dispute intent. Clicking the final node in the setDate path opens the details pane that's located to the right. It notes that five errors occurred and displays snippets of the user messages received by the skill before these errors occurred.
Description of the illustration insights_last_state.png

To see these snippets in context, you then click Conversations in the details panel to see the transcript and the entire path leading up to the error. This path includes each state involved in the selected conversation, not just the states defined for custom components.
Description of the illustration conversation_report_for_incomplete_conversations.png

The Conversation report has been filtered for all incomplete conversations routed through the Dispute intent. For each of the five user messages involved in these failed conversations, the skill was forced to respond with Unexpected Error Prompt (Oops! I'm encountering a spot of trouble…) because system errors prevented it from processing the user request.

Review the Skill Conversation Insights

Using the Conversations report, you can examine the actual transcripts of the conversations to review how the user input completed the intent-related paths, or why it didn’t. You can filter the conversations by channel, by mode (Voice, Text, All), and by time period.

You can review conversation transcripts by filtering this report by intents. You can add dimensions like conversation length and outcome, which is noted as completed, incomplete, or in progress. You can also toggle the view to include any system or custom component errors that might have interfered with the conversation. For conversations with messages that began as voice but ended up as text, you can also filter by Switched Conversations.

View Conversation Transcripts

Clicking View Transcripts opens the conversation in the context of a chat window. The Voice Metrics icon denotes a message as a voice interaction. Clicking it displays the voice metrics for that interaction.
Description of the illustration view_conversation_window.png

View Voice Metrics

Clicking View Metrics displays a subset of the voice metrics that are averaged across the entire conversation. To view these metrics broken down by the individual voice interactions, click the Voice Metrics icon in the transcript view that's enabled by clicking View Conversations.

How the Insights Reports Handle return Transitions

For a single intent, the Conversations report lists the different conversations that have completed. However, complete can mean different things depending on the user message and the return transition, which ends the conversation and destroys the conversation context. For an OrderPizza intent, for example, the Conversations report might show two successfully completed conversations. Only one of them culminates in a completed order. The other conversation ends successfully as well, but instead of fulfilling an order, it handles incorrect user input.
  startUnresolved:
    component: "System.Output"
    properties:
      text: "I can only order pizza for you today. Let me know what kind of pizza you'd like?"
      keepTurn: false
    transitions:
      return: "startUnresolved"
You can find out the different outcomes for the same intent using the Final State filter in the Paths report.

How the Insights Reports Handle Empty Transitions

A skill throws an exception when the final state in a flow either lacks a transition, or uses an empty transition (transitions: {}). Insights considers these conversations as incomplete, even when they've handled a transaction successfully. In the paths, these final states get classified as System.DefaultErrorHandler.
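For example, a final state like the following hypothetical orderDone state would be counted as incomplete even though the order was handled, because it ends with an empty transition. Giving it an explicit return transition, as shown in the previous section, avoids this (the state name and message text are illustrative):
  orderDone:
    component: "System.Output"
    properties:
      text: "Thanks! Your order has been placed."
    # An empty transition ends the flow but leaves the conversation classified
    # as incomplete and, in the paths, as System.DefaultErrorHandler.
    transitions: {}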

Mask Sensitive Data

By default, all numbers are obfuscated (**) within both the chat window and the conversation transcripts to protect confidential information like credit card numbers. To display obfuscated values, switch off the Enable Masking option (accessed from the Settings option). This option only masks the values in the UI. It does not mask the logging data.

Model the Dialog Flow

By default, Insights tracks all of the states in a conversation, but you may not want to include all of them in the reports. To focus on certain transactions, or exclude the states from the reporting entirely, you can model the dialog flow using the insightsInclude and insightsEndConversation properties. These properties, which you can add to any component, provide a finer level of control over the Insights reporting.

Note:

These properties are only supported on Oracle Digital Assistant instances provisioned on Oracle Cloud Infrastructure (sometimes referred to as the Generation 2 cloud infrastructure). They are not supported on instances provisioned on the Oracle Cloud Platform (as are all version 19.4.1 instances of Oracle Digital Assistant).

Mark the End of a Conversation

Instead of depending on the return transition to mark the end of a conversation, you can instead mark where you want to stop recording the conversation for insights reporting using the insightsEndConversation property. This property enables you to focus only on the aspects of the dialog flow that you're interested in. For example, you may only need to record a conversation to the point where a customer cancels an order, but no further (no subsequent confirmation messages or options that branch the conversation). By default, this property is set to false, meaning that Insights continues recording until a return transition, or until the insightsEndConversation property is set to true (insightsEndConversation: true).
  cancelOrder:
    component: "System.Output"
    properties:
      text: "Your order is canceled."
      insightsEndConversation: true 
    transitions:
      next: "intent"

Note:

Because this flag changes how the insights reporting views a completed conversation, conversation counts tallied after the introduction of this flag in the dialog flow may not be comparable to the conversation counts for previous versions of the skill.

Streamline the Data Collected by Insights

Use the insightsInclude property to exclude states that you consider extraneous from being recorded in the reports. To exclude a state from the Insights reporting, set this property to false:

...
  resolveSize:
    component: "System.SetVariable"
    properties:
      variable: "crust"
      value: "${iResult.value.entityMatches['PizzaSize'][0]}" 
      insightsInclude: false      
    transitions: {}
...

This property is specific to Insights reporting only. It does not prevent states from being rendered in the Tester.

Use Cases for Insights Markers

These typical use cases illustrate the best practices for making the reports easier to read by adding the conversation marker properties to the dialog flow.

Use Case 1: You Want to Separate Conversations by Intents or Transitions

Use the insightsEndConversation: true property to view the user interactions that occur within a single chat session as separate conversations. You can, for example, apply this property to a state that begins the execution path for a specific intent, yet branches the dialog flow.

The CrcPizzaBot skill's ShowMenu state, with its pizza, pasta, and textReceived transitions is such a state:
  ShowMenu:
    component: "System.CommonResponse"
    properties:
      processUserMessage: true
      metadata:
        responseItems:
          - type: "text"
            text: "Hello ${profile.firstName}, this is our menu today:"
            footerText: "${(textOnly.value=='true')?then('Enter number to make your choice','')}"
            name: "hello"
            separateBubbles: true
            actions:
              - label: "Pizzas"
                type: "postback"
                keyword: "${numberKeywords.value[0].keywords}"
                payload:
                  action: "pizza"
                name: "Pizzas"
              - label: "Pastas"
                keyword: "${numberKeywords.value[1].keywords}"
                type: "postback"
                payload:
                  action: "pasta"
                name: "Pastas"
    transitions:
      actions:
        pizza: "OrderPizza"
        pasta: "OrderPasta"
        textReceived: "Intent"
By adding the insightsEndConversation: true property to the ShowMenu state, you can break down the reporting by these transitions:
  ShowMenu:
    component: "System.CommonResponse"
    properties:
      processUserMessage: true
      insightsEndConversation: true
…
Because of the insightsEndConversation: true property, Insights considers any further interaction enabled by the pizza, pasta, or textReceived transitions as a separate conversation, meaning that two conversations, rather than one, are tallied in the Overview page's Conversations metric and, likewise, two separate entries are created in the Conversations report.

Note:

Keep in mind that conversation counts will be inconsistent with those tallied prior to adding this property.
The first entry is for the ShowMenu intent execution path where the conversation ends with the ShowMenu state.

The second is the transition-specific entry that names an intent when the textReceived action has been triggered or notes No Intent when there's no second intent in play.

When you choose either Pizzas or Pastas, the Conversation report contains a ShowMenu entry and a No Intent entry for the transition conversation because the user did not enter any text that needed to be resolved to an intent.
Description of the illustration no_intent_action_transition.png

The path rendered for the selected conversation begins with the state defined for the action transition, such as OrderPizza.
Description of the illustration conversation_details_path_no_intent.png

However, when you trigger the textReceived transition, the Conversation report names the resolved intent (OrderPizza, OrderPasta).
Description of the illustration two_intents_textreceived_transition.png

The paths rendered for these conversations begin with the intent resolution.
Description of the illustration conversation_details_path_intent.png

Use Case 2: You Want to Exclude Supporting States from the Insights Pathing Reports
The CrcPizzaBot skill uses a series of System.SetVariable states that begin with the setTextOnlyChannel state. Each path begins with these states, forcing you to scroll to reach the intent resolution and transactional portion of the dialog flow.

To remove these states, add the insightsInclude: false property to each one of them:
  setTextOnlyChannel:
    component: "System.SetVariable"
    properties:
      insightsInclude: false
      variable: "textOnly"
      value: "${(system.channelType=='webhook')?then('true','false')}"
  setAutoNumbering:
    component: "System.SetVariable"
    properties:
      insightsInclude: false
      variable: "autoNumberPostbackActions"
      value: "${textOnly}"
  setCardsRangeStart:
    component: "System.SetVariable"
    properties:
      insightsInclude: false
      variable: "cardsRangeStart"
      value: 0
    transitions: {}
…
After you add the insightsInclude: false property, the intent resolution and transactional states are now at the start of the path.

Note:

Adding the insightsInclude: false property not only changes how the paths are rendered, but will impact the sum reported for the Average States metric.
When you exclude a state, you can no longer use it to query the pathing report. That said, you can still use a state as a query parameter if you haven't added the insightsInclude: false property to it.

Apply the Retrainer

Customers can use different phrases to ask for the same request. When this user input can't be resolved to an intent (or was resolved to the wrong intent), you can direct it to the correct intent using the Retrainer. To help you out, the Retrainer suggests an intent for the user input. Because you're adding actual user input, you can improve the skill's performance with each new version.

You can filter the conversation history using various filters, including:
  • time period
  • language (if translation services have been configured)
  • intent, by intent resolution-related properties (Top Confidence, Win Margin)
  • channel, including the Agent Channel created for Oracle Service Cloud integrations
  • text or voice (which includes switched conversations)
You can combine these filters and link them together with less than, equal to, or greater than comparison operators. Each user message returned by the report is accompanied by a 100% stacked bar chart, a representation of the resolution confidence level for each intent, from highest to lowest. You can reference the chart’s segments to match the user input to an intent.
There are some things to keep in mind when you add user messages to your training corpus:
  • You can only add user input to the training corpus of a draft version of a skill, not to a published version.
  • You can’t add any user input that’s already present as an utterance in the training corpus.

Update Intents with the Retrainer

To update a transactional intent or an answer intent using the Retrainer:
  1. Because you cannot update a published skill, you must create a draft version before you can add new data to the corpus.

    Tip:

    Click Compare All Versions or switch off the Show Only Latest toggle to access both the draft and published versions of the skill.
    If you're reviewing a published version of the skill, select the draft version of the skill.

  2. In the draft version of the skill, apply a filter, if needed, then click Search.
  3. Select the user message, then choose the appropriate intent from the menu.

    Tip:

    You can add utterances to an intent on an individual basis, or you can select multiple utterances and then select the target intent from the Add To menu that's located at the upper left of the table. If you want to add all of the returned requests to an intent, select Utterances (located at the upper right of the table) and then choose an intent from the Add To menu.
  4. Retrain the skill.
  5. Republish the skill.
  6. Update the digital assistant with the new skill.
  7. Monitor the Overview report for changes to the metrics over time and also compare different versions of the skill to find out if new versions have actually added to the skill's overall success. Repeating the retraining process improves the skill's responsiveness for each new version. For skills integrated with Oracle Service Cloud Chat, for example, retraining should result in a downward trend in escalations, which is indicated by a downward trend in the usage of agent handoff intents.

Moderated Self-Learning

By setting the Top Confidence filter below the Confidence Threshold set for the skill, or through the default filter, Intent Matches unresolvedIntent, you can update your training corpus using the confidence ranking made by the intent processing framework. For example, if the unresolvedIntent search returns "someone used my credit card," you can assign it to an intent called Dispute. This is moderated self-learning – enhancing the intent resolution while preserving the integrity of the skill.

For instance, the default search criteria for the report show you the random user input that can’t get resolved at the skill’s Confidence Threshold because it’s inappropriate, off-topic, or contains misspellings. By referring to the bar chart’s segments and legend, you can assign this user input appropriately: you can strengthen the skill’s handling of unresolved input by assigning the gibberish to unresolvedIntent, or you can add misspelled entries to the appropriate task-oriented intent (“send moneey” to a Send Money intent, for example). If your skill has a Welcome intent, you can assign the irreverent, off-topic messages to it, so that your skill can return a rejoinder like, “I don’t know about that, but I can help you order some flowers.”

Support for Translation Services

If your skill uses a translation service, then the Retrainer displays the user messages in the target language. However, the Retrainer does not add translated messages to the training corpus. It instead adds them in English, the accepted language of the training model. Selecting the Click to Show Utterance icon reveals the English version that can potentially be added to the corpus. For example, clicking this icon for contester (French) reveals dispute (English).

Note:

The Retrainer adds messages in their native language (English or Simplified Chinese) when there's no translation service in use.
This feature is not supported in Oracle Digital Assistant version 19.4.1.

Export Insights Data

The various reports provide you with different perspectives, but if you need to view this data in another way, then you can create your own report from a CSV file of exported Insights data.

You can define the kind of data that you want to analyze by creating an export task. Once this task has completed, you download a ZIP that contains a CSV file with details like user messages, skill responses, component types, and state names. The Insights data may be spread across a series of CSVs when the task returns a large data set. In such cases, the ZIP file will contain a series of ZIP files, each containing a CSV.

Note:

The data format for both the skill and the data management export of the Insights data changed in Version 20.06. To convert the files into a readable format for analysis or other uses, download and run the Python script for extracting fields from Insights exports from Doc ID 2689677.1 on Oracle Knowledge Management, or contact Oracle Support.
The Exports page lists the tasks by:
  • Name: The name of the export task.
  • Last Run: The date when the task was most recently run.
  • Created By: The name of the user who created the task.
  • Export Status: Submitted, In Progress, Failed, No Data (when there's no data to export within the date range defined for the task), or Completed, a hyperlink that lets you download the exported data as a CSV file. Hovering over the Failed status displays an explanatory message.

Note:

An export task applies to the current version of the skill.

Create an Export Task

To create an export task:
  1. Open the Exports page and then click + Export.
  2. Enter a name for the report and then enter a date range.
  3. Click Export.
Description of the illustration insights_export_dialog.png

Scenario: Interpreting Insights Data

You’ve developed a skill called FlowerShopBot whose primary use case is ordering flowers for delivery, but after viewing the activity for the last 90 days, you can see right away from the Overview report that something’s gone wrong. Here are some things that jump out at you:
  • The KPIs reveal that the majority of conversations (about 57%) are incomplete.
  • The Intents bar graph shows the opposite of what you’d want to see: the execution paths for the OrderFlowers and Welcome intents are underutilized. They should be the top-ranking intents whose execution paths are heavily traversed, but instead they’re ranked below FileComplaint, the inverse of OrderFlowers.
  • Nearly all of the conversations for the primary use case, OrderFlowers, remained incomplete for the selected time period. The conversations for FileComplaint, on the other hand, have a 100% completion rate, as does OpenFranchise, an ancillary function.
  • The graph’s unresolvedIntent bar shows that the skill’s training might have some gaps because it’s failing to recognize the messages from half of the conversations during the period.
To get this skill back on track, you need to use the Intents and Paths reports to pinpoint the state (or states) where users have stumbled off the OrderFlowers execution path. Using the Insights predictions and the Retrainer, you can also leverage the unresolved messages for your training corpus.

Step 1: Review the OrderFlowers Execution Path in the Intents Report

First, click the Incomplete series of the OrderFlowers bar to open the Intents report.
Drilling down from the Incomplete series opens the Intents report in its Incomplete outcome mode for OrderFlowers. The report’s horizontal bar chart shows you how many conversations stopped because of a system error (the System.DefaultErrorHandler bar), but it also shows you the two states where conversations ended prematurely: makePayment and showFlowersMenu.

Scrolling along the paths gives you context for these states: you can see the states that immediately precede these problem areas and the icons show you which components were defined for each state in the flow. Of particular interest in this regard are the makePayment and showFlowersMenu states.

Step 2: Review the Paths Report for Errors and User Messages

The Intents report for OrderFlowers shows you where the conversations ended, but to find out why, you open the Paths report and filter by the OrderFlowers intent, Incomplete outcome, and makePayment as the final state. This report gives you the added dimension of seeing where the conversations branch off after a common starting point.

Here, the conversations branch because of the checkFlowerBouquetEntity state. Its System.Switch component and Apache FreeMarker expression route customers to either the orderFlowers or orderBouquet states when the user message explicitly mentions a flower type or a bouquet name, or to showOrderTypeMenu (a System.List component) when these details are missing.
  checkFlowerBouquetEntity:
    component: "System.Switch"
    properties:
      source: "<#if iResult.value?has_content><#if iResult.value.entityMatches.Bouquets?has_content>orderBouquet<#else><#if iResult.value.entityMatches.Flowers?has_content>orderFlowers<#else> none</#if></#if><#else>none</#if>"
      values:
      - "orderFlowers"
      - "orderBouquet"
    transitions:
      actions:
        orderFlowers: "orderFlowers"
        orderBouquet: "orderBouquet"
        NONE: "showOrderTypeMenu"
Check for System Errors
There are system errors on both execution paths. You can see the messages received by the skill prior to it throwing these errors by clicking the red System.Output stop in the path.
To see the transcript of the conversation, one that likely culminated in the standard “Oops” message that displays when the skill terminates a session, you click Conversations to open the Conversations report.

If the report indicates a significant occurrence of system errors on each execution path, then you might want to augment the dialog flow definition with error transition-related routing that allows customers to continue with the skill.
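A minimal sketch of this kind of error routing, assuming a hypothetical custom component and a hypothetical handleSystemError state (all state names, component names, properties, and messages here are illustrative):
  orderFlowers:
    component: "FlowerOrderService"
    properties:
      orderVariable: "flowerOrder"
    transitions:
      next: "makePayment"
      # Route component failures to a recovery state instead of letting the
      # conversation end with the standard "Oops" message.
      error: "handleSystemError"
  handleSystemError:
    component: "System.Output"
    properties:
      text: "Sorry, something went wrong on our end. Let's pick up where we left off."
    transitions:
      next: "intent"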

Troubleshoot Timeouts

You can also see the common point of failure for both these paths, the makePayment state that invokes a webview that provides the input form. (Or in this case, possibly didn’t invoke the webview). While system errors were blocking users elsewhere, here the Null Response indicates that users appear to be abandoning the skill when the webview gets invoked.

Clicking Conversations opens the transcript, which shows that users stopped short of the state that called the webview, or never bothered to complete it.

Because customers consistently abandon the skill when the webview is invoked, you investigate if the problem lies with the webview, the dialog flow definition, or a combination of both. If the skill-webview interaction functions properly, then customers might be losing interest at this point. Revisiting the Intents report for a completed OrderFlowers execution path during this period shows you that customers spent about three minutes to traverse 50 states. If this seems overlong, then you can revise the skill so that it collects user input more efficiently.

There’s also the failed showFlowersMenu state to look into. To see where customers left off, you open the Paths report and then filter by OrderFlowers, Incomplete, and showFlowersMenu as the Final State. Clicking showFlowersMenu shows you that customers have stopped using the skill at this state, which is defined using a System.CommonResponse component.

The skill times out because it isn’t answering the customer’s needs, which in this case, is a bouquet of red roses. By clicking Conversations to drill down to the transcript, you see that it instead automatically chooses daisies even after customers decline to make a selection.

Step 3: Update the Training Corpus with the Retrainer

Besides the problems with the OrderFlowers execution paths, you noticed that the Overview revealed that the skill can’t process 50% of the customer input. Instead of resolving to one of the task-oriented intents, the bulk of the user messages for the period are classified as unresolvedIntent. This might be appropriate in some cases, but others might provide you with messages that you can add to the training corpus. You can survey these messages by doing the following:
  1. Click Intents.
  2. Click unresolvedIntent.
  3. Click the unresolvedIntent bar in the Closest Predictions graph.
  4. Review the Unresolved Messages panel.
There are a couple of messages that catch your eye because they can help your skill fulfill its primary goal even if the customer input contains typos, slang, or unconventional shorthand:
  • "get flowerss" (68%)
  • "i wud like to order flwrs." (64%)

To add these messages as training data, you do the following:
  1. Click Retrainer.
  2. Filter the report for these messages by adding the following criteria:
    • Intent matches unresolvedIntent
    • Top Intent Confidence is greater than 62%
  3. Add these messages in bulk to the OrderFlowers intent by choosing Utterances, then OrderFlowers from the Add menu, and then clicking Add Example.
  4. Retrain the skill.

Using the Closest Predictions chart and the Retrainer, you can separate the gibberish from the useful content that you can use to round out the training corpus. They can also indicate directions that you may want to take to ruggedize your skill. For example, if there are a number of unresolved user messages that are negative, then you might consider adding an intent (or even creating a standalone skill) to handle user abuse.