Intents allow your skill to understand what the user wants it to do. An intent categorizes typical user requests by the tasks and actions that your skill performs. The PizzaBot's OrderPizza intent, for example, labels a direct request, I want to order a pizza, along with one that implies a request, I feel like eating a pizza.
Intents are composed of permutations of typical user requests and statements, which are also referred to as utterances. As described in Create an Intent, you create an intent by naming a compilation of utterances for a particular action. Because your skill's cognition is derived from these intents, each intent should be created from a data set that's robust (one to two dozen utterances) and varied, so that your skill can interpret ambiguous user input. A rich set of utterances enables a skill to understand what the user wants when it receives messages like “Forget this order!” or “Cancel delivery!”—messages that mean the same thing, but are expressed differently. To find out how sample user input allows your skill to learn, see Intent Training and Testing.
Create an Intent
- Click Intents in the left navbar.
- If you have already defined your intents in a CSV file, click Import Intents. Import Intents from a CSV File describes this file's format. Otherwise, click Add Intent. Your skill needs at least two intents.
- Click to enter a descriptive name or phrase for the intent in the Conversation Name field. For example, if the intent name is callAgent, the conversation name would be Talk to a customer representative. When the skill can't resolve a message to an intent, it outputs the user-friendly names and phrases that you enter into the Conversation Name field as the options that are listed in the Do you want to disambiguation messages described in How Confidence Win Margin Works and Configure the Intent and Q&A Routing.
- Add the intent name in the Name field. If you don't enter a conversation name, then the Name field value is used instead. Keep in mind that a short name with no end punctuation might not contribute to the user experience. The intent name displays in the Conversation Name field for skills built with prior versions of Digital Assistant.
Note: In naming your intents, do not use system. as a prefix. system. is a namespace that's reserved for the intents that we provide. Because intents with this prefix are handled differently by Trainer Tm, using it may cause your intents to resolve in unexpected ways.
- As an optional step, add a description of the intent. Your description should focus on what makes the intent unique and the task or actions it performs.
- If this is an answer intent, add a short answer to the Answer field. This feature is not supported in Oracle Digital Assistant version 19.4.1.
- Start building the training corpus by adding utterances that illustrate the meaning behind the intent. To ensure optimal intent resolution, use terms, wording, and phrasing specific to the individual intent. Ideally, you should base your training data on real-world phrases, but if you don't have any, aim for one to two dozen utterances for each intent. That said, you can get your skill up and running with fewer (three to five) when you train it with Trainer Ht. You can save your utterances by pressing Enter or by clicking outside of the input field. To manage the training set, select a row to access the Edit () and Delete () functions. If your skill supports multiple native languages, augment the training set with phrases in the secondary languages to strengthen the model's accuracy in this and all other native languages supported by the skill. You can make your skill more resilient by adding utterances that contain commonly misspelled and misused words. See Build Your Training Corpus. To allow your skill to cleanly distinguish between intents, create an intent that resolves inappropriate user input or gibberish.
- In the Auto-Complete Suggestions field, enter a set of suggested phrases that help the user enter an appropriately worded request. Do not add the entire set of training data; add a set of phrases that represent ideal user requests instead. Adding too broad a set of utterances may not only confuse users, but may also result in unexpected behavior. This is an optional step. This function is only supported by the Oracle Web Channel. This feature works with instances of Oracle Digital Assistant that were provisioned on Oracle Cloud Infrastructure (sometimes referred to as the Generation 2 cloud infrastructure). If your instance is provisioned on the Oracle Cloud Platform (as all version 19.4.1 instances are), then you can't use this feature.
- Add an entity if the intent needs one to resolve the user input. To find out how, see Add Entities to Intents.
- To teach your skill how to comprehend user input using the set of utterances that you’ve provided so far, click Train, choose a model, and then click Submit. As described in Which Training Model Should I Use?, we provide two models that learn from your corpus: Trainer Ht and Trainer Tm. Each uses a different algorithm to reconcile the user input against your intents. Trainer Ht uses pattern matching, while Trainer Tm uses a machine learning algorithm based on word vectors. Both skills that use Digital Assistant's native language support and skills with answer intents require Trainer Tm. You’d typically follow this process:
Create the initial training corpus.
Train with Trainer Ht. You should start with Trainer Ht because it doesn’t require a large set of utterances. As long as there are enough utterances to disambiguate the intents, your skill will be able to resolve user input.
If you get a Something’s gone wrong message when you try to train your skill, then you may not have added a sufficient number of utterances to support training. First, make sure that you have at least two intents with at least two (or preferably more) utterances each. If you haven’t added enough utterances, add a few more and then train your skill again.
Refine your corpus, retrain with Trainer Ht. Repeat as necessary—training is an iterative process.
Train with Trainer Tm. Use this trainer when you’ve accumulated a robust set of intents.
The Train button () activates whenever you add an intent or when you update an intent by adding, changing, or deleting its utterances. To bring the training up to date, choose a training model and then click Train. The model displays an exclamation point whenever it needs training. When its training is current, it displays a check mark.
- Click Try It Out!, then Intent (if it's not selected). Next, enter some phrases similar to those in your test set. To log your intent testing results, enable conversation intent logging (Settings > General > Enable Insights). Run a History Report describes how you use this data.
- Click Validate and review the validation messages for errors, such as too few utterances, and for guidance on applying best practices.
Add Entities to Intents
Alternatively, you can click New Entity to add an intent-specific entity.
Tip: Only intent entities that are included in the JSON payloads are sent to, and returned by, the Component Service. The ones that aren’t associated with an intent won’t be included, even if they contribute to the intent resolution by recognizing user input. If your custom component accesses entities through entity matches, then be sure to add the entity to your intent.
Import Intents from a CSV File
You can add your intents manually, or import them from a CSV file. You can create this file from a CSV of exported intents, or by creating it from scratch in a spreadsheet program or a text file.
The CSV file has six columns for skills that use the Natively-Supported language mode and five columns for those that don't. Here are the column names and what they represent:
query: An example utterance.
topIntent: The intent that the utterance should match to.
conversationName: The conversation name for the intent.
answer: For answer intents, the static answer for the intent.
enabled: When true, the intent is enabled in the skill.
nativeLanguageTag: (For skills with native-language support only) the language of the utterance. For values, use two-character language tags (such as en for English or fr for French).
- For skills with Digital Assistant's native language support, this column is required.
- For skills without the native language support, you can't import a CSV that has this column.
Here's an excerpt from a CSV file for a skill that does not have native language support and which doesn't use answer intents.
query,topIntent,conversationName,answer,enabled
I want to order a pizza,OrderPizza,Order a Pizza.,,true
I want a pizza,OrderPizza,Order a Pizza.,,true
I want a pizaa,OrderPizza,Order a Pizza.,,true
I want a pizzaz,OrderPizza,Order a Pizza.,,true
I'm hungry,OrderPizza,Order a Pizza.,,true
Make me a pizza,OrderPizza,Order a Pizza.,,true
I feel like eating a pizza,OrderPizza,Order a Pizza.,,true
Gimme a pie,OrderPizza,Order a Pizza.,,true
Give me a pizza,OrderPizza,Order a Pizza.,,true
pizza I want,OrderPizza,Order a Pizza.,,true
I do not want to order a pizza,CancelPizza,Cancel your order.,,true
I do not want this,CancelPizza,Cancel your order.,,true
I don't want to order this pizza,CancelPizza,Cancel your order.,,true
Cancel this order,CancelPizza,Cancel your order.,,true
Can I cancel this order?,CancelPizza,Cancel your order.,,true
Cancel my pizza,CancelPizza,Cancel your order.,,true
Cancel my pizaa,CancelPizza,Cancel your order.,,true
Cancel my pizzaz,CancelPizza,Cancel your order.,,true
I'm not hungry anymore,CancelPizza,Cancel your order.,,true
don't cancel my pizza,unresolvedIntent,unresolvedIntent,,true
Why is a cheese pizza called Margherita,unresolvedIntent,unresolvedIntent,,true
Here's an excerpt from a CSV file for a skill with native-language support that uses answer intents.
query,topIntent,conversationName,answer,enabled,nativeLanguageTag
Do you sell pasta,Products,Our Products,We sell only pizzas. No salads. No pasta. No burgers. Only pizza,true,en
Vendez-vous des salades,Products,Our Products,Nous ne vendons que des pizzas. Pas de salades. Pas de pâtes. Pas de hamburgers. Seulement pizza,true,fr
do you sell burgers,Products,Our Products,We sell only pizzas. No salads. No pasta. No burgers. Only pizza,true,en
Do you sell salads,Products,Our Products,We sell only pizzas. No salads. No pasta. No burgers. Only pizza,true,en
Vendez des hamburgers,Products,Our Products,Nous ne vendons que des pizzas. Pas de salades. Pas de pâtes. Pas de hamburgers. Seulement pizza,true,fr
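Before importing a file like either excerpt above, it can help to sanity-check it against the rules in this topic (the expected columns, at least two intents, and more than one utterance per intent). The following helper is an illustrative sketch, not part of Oracle Digital Assistant:

```python
import csv
import io
from collections import Counter

# Column layouts described above; nativeLanguageTag appears only for
# skills that use native-language support.
BASE_COLUMNS = ["query", "topIntent", "conversationName", "answer", "enabled"]

def check_intent_csv(text):
    """Sanity-check an intent CSV before importing it.

    Returns a list of warnings; an empty list means the basic checks passed.
    """
    reader = csv.DictReader(io.StringIO(text))
    cols = reader.fieldnames or []
    warnings = []
    if cols not in (BASE_COLUMNS, BASE_COLUMNS + ["nativeLanguageTag"]):
        warnings.append(f"unexpected columns: {cols}")
    counts = Counter(row["topIntent"] for row in reader)
    if len(counts) < 2:
        warnings.append("a skill needs at least two intents")
    for intent, n in counts.items():
        if n < 2:
            warnings.append(f"{intent} has only {n} utterance(s)")
    return warnings
```

Running the checker on a two-intent file with two utterances each returns no warnings; a file with a single intent triggers the "at least two intents" warning.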
To import a CSV file:
Click Intents () in the left navbar.
Click More, and then choose Import intents.
Browse to and select the .csv file, and then click Open.
Train your skill.
Export Intents to a CSV File
You can reuse your training corpus by exporting it to CSV. You can then import this file to another skill.
Click Intents () in the left navbar.
Click More, and then choose Export intents.
Save the file. This file has the following columns, which are described in Import Intents from a CSV File:
query, topIntent, conversationName, answer, enabled, nativeLanguageTag
You can create a file for batch testing the integrity of your intents from an export.
Which Training Model Should I Use?
You can't use Trainer Ht for skills with Answer Intents or a large number of intents. Use Trainer Tm for these skills.
If you want skills that have been trained with prior releases of Trainer Tm to use the 20.03 version of Trainer Tm, then you'll need to retrain them with the 20.03 Trainer Tm. Note that the confidence levels for your intents may change when you start using the new Tm.
You don't need to bulk up your training data with utterances that accommodate case sensitivity (Tm recognizes BlacK Friday as Black Friday, for example), punctuation, similar verbs and nouns, or misspellings. In the latter case, Trainer Tm uses context to resolve a phrase even when a user enters a key word incorrectly. Here are some general guidelines for building a training corpus when you're developing your skill with this model.
- Recognizing irrelevant content. For I'm really excited about the coming Black Friday deals, and can't wait for the deals. Can you tell me what's going to be on sale for Black Friday?, Trainer Tm:
- Discards the extraneous content (I'm really excited about the coming Black Friday deals...)
- Resolves the relevant content (Can you tell me what's going to be on sale for Black Friday?) to an intent. In this case, an intent called Black Friday Deals.
Trainer Tm can also distinguish between the relevant and irrelevant content in a message even when the irrelevant content could potentially be resolved to an intent. For example, I bought the new 80 inch TV on Black Friday for $2200, but now I see that the same set is available online for $2100. Do you offer price match? could be matched both to the Black Friday Deals intent and to a Price Matching intent, which is the appropriate one for this message. In this case, Trainer Tm:
- Recognizes that I bought the new 80 inch TV on Black Friday for $2200, but now I see that the same set is available online for $2100 is extraneous content.
- Resolves Do you offer price match? to the Price Matching intent.
- Resolving intents when a single word or name matches an entity. For example, Trainer Tm can resolve a message consisting of only Black Friday to an intent that's associated with an entity for Black Friday.
- Distinguishing between similar utterances (Cancel my order vs. Why did you cancel my order?).
- Recognizing out-of-scope utterances, such as Show me pizza recipes or How many calories in a Meat Feast for a skill for fulfilling a pizza order and nothing else.
- Recognizing out-of-domain utterances, such as What's the weather like
today for a pizza ordering skill.
Tip: While Trainer Tm can easily distinguish when a user message is unclassifiable because it's clearly dissimilar from the training data, you should still define an unresolvedIntent with utterances that represent the phrases that you do not want resolved to any of your skill's intents. These phrases can be within the domain of your skill, but are still out of scope, even though they may share some of the same words as the training data. For example, I want to order a car for a pizza skill, which has also been trained with I want to order a pizza.
- Distinguishing between similar entities – For example, Tm recognizes that mail is not the same as email in the context of an intent called Sign Up for Email Deals. Because it recognizes that an entity called regular mail would be out of scope, it would resolve the phrase I want to sign up for deals through regular mail at a lower confidence than it would for I want to sign up for email deals.
Trainer Ht is the default training model. It needs only a small training corpus, so use it as you develop the entities, intents, and the training corpus. When the training corpus has matured to the point where tests reveal highly accurate intent resolution, you’re ready to add a deeper dimension to your skill’s cognition by training Trainer Tm.
You can get a general understanding of how Trainer Ht resolves intents just from the training corpus itself. It forms matching rules from the sample sentences by tagging parts of speech and entities (both custom and built-in) and by detecting words that have the same meaning within the context of the intent. If an intent called SendMoney has both Send $500 to Mom and Pay Cleo $500, for example, Trainer Ht interprets pay as the equivalent of send. After training, Trainer Ht’s tagging reduces these sentences to templates (Send CURRENCY to PERSON, Pay PERSON CURRENCY) that it applies to the user input.
Because Trainer Ht draws on the sentences that you provide, you can predict its behavior: it will be highly accurate when tested with sentences similar to the ones that make up the training corpus (the user input that follows the rules, so to speak), but may fare less well when confronted with esoteric user input.
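As a toy illustration of this templating idea (not Oracle's actual Trainer Ht implementation), replacing entity matches with placeholders reduces the SendMoney sample sentences above to reusable templates; the entity patterns here are hypothetical stand-ins:

```python
import re

# Hypothetical patterns standing in for the CURRENCY and PERSON entities
# in the SendMoney example above.
ENTITY_PATTERNS = {
    "CURRENCY": re.compile(r"\$\d+"),
    "PERSON": re.compile(r"\b(?:Mom|Cleo)\b"),
}

def to_template(utterance):
    """Replace entity matches with placeholders to form a matching template."""
    for name, pattern in ENTITY_PATTERNS.items():
        utterance = pattern.sub(name, utterance)
    return utterance
```

With these patterns, Send $500 to Mom and Pay Cleo $500 both reduce to templates (Send CURRENCY to PERSON, Pay PERSON CURRENCY) that could be compared against similarly templated user input.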
Build Your Training Corpus
When you define an intent, you first give it a name that illustrates some user action and then follow up by compiling a set of real-life user statements, or utterances. Collectively, your intents, and the utterances that belong to them, make up a training corpus. The term corpus is just a quick way of saying “all of the intents and sample phrases that I came up with to make this skill smart”. The corpus is the key to your skill’s intelligence. By training a model with your corpus, you essentially turn that model into a reference tool for resolving user input to a single intent. Because your training corpus ultimately plays the key role in deciding which route the skill-human conversation will take, you need to choose your words carefully when building it.
Generally speaking, a large and varied set of sample phrases increases a model’s ability to resolve intents accurately. But building a robust training corpus doesn’t just begin with well-crafted sample phrases; it actually begins with intents that are clearly delineated. Not only should they clearly reflect your use case, but their relationship to their sample sentences should be equally clear. If you’re not sure where a sample sentence belongs, then your intents aren’t distinct from one another.
You probably have sample utterances in mind when you create your intents, but you can expand upon them by using these guidelines.
Guidelines for Trainer Tm
- Use a minimum Confidence Threshold of 0.7 for any skill that you plan to put into production.
- Use good naming conventions for your intent names so it's easy to review related intents.
- Create at least 5 utterances for a given intent. Ideally, each intent should have 10 - 15 utterances.
- If possible, use unmodified, real-world phrases that include:
- common misspellings
- standard abbreviations that a user might enter ("opty" for opportunity, for example)
- non-standard names, such as product names
- spelling variants ("check" and "cheque", for example)
- Create fully formed sentences that mention both the action and the entity on which the action is performed.
- If you expect two-word messages (like order status, price check, membership info, or ship internationally) that specify both the entity and action, add them to your training data. Be sure that your sample phrases have both an action and an entity.
- Be specific. For example, What is your store phone number? is better than What is your phone number? because it enables Trainer Tm to associate a phone number with a store. As a result of this learning, it will resolve What's your mom's phone number? to a lower confidence score.
- While Trainer Tm detects out-of-scope utterances, you can still improve confidence and accuracy by creating an unresolvedIntent for utterances that are in domain but still out of scope for the skill's intents. This enables Trainer Tm to learn the boundary of the domain intents. You can define an unresolvedIntent for phrases that you do not want resolved to any of your skill's intents. You may only want to define an unresolvedIntent when user messages have been resolved to a skill's intents even when they don't apply to any of them.
- Vary the words and phrases that surround the significant content as much as possible. For example, "I'd like a pizza, please", "Can you get me a pizza?", "A pizza, please".
- Some practices to avoid:
- Do not associate a single word or phrase with a specific intent unless that word or phrase indicates the intent. Repeated phrases can skew the intent resolution. For example, starting each OrderPizza utterance with "I want to …" and each ShowMenu utterance with "Can you help me to …" may increase the likelihood of the model resolving any user input that begins with "I want to" to OrderPizza and "Can you help me to" to ShowMenu.
- A high occurrence of one-word utterances in your intents. Use one-word utterances sparingly, if at all.
- Open-ended utterances that can easily apply to other domains or out-of-domain topics.
- Your corpus doesn't need to repeat the same utterance with different casing or with different word forms that have the same lemma. For example, because Trainer Tm can distinguish between manage, manages, and manager, it not only differentiates between "Who does Sam manage?" and "Who manages Sam?", but also understands that these words are related to one another.
- Keep a test corpus as a CSV file to batch test the intent resolution by clicking More and then Export Intents.
- Run utterance quality reports to maintain a set of intents that are distinct from one another.
- When you deploy your skill, you can continuously improve the training data by:
- Reviewing the Conversation Logs, summaries of all conversations that have occurred for a specified period. You enable the logging by switching Enable Insights on in Settings.
- Running Quality Reports and assigning (or reassigning) actual user messages to your intents with the Insights Retrainer. If these reports indicate that unresolvedIntent has a lot of misclassified utterances within the domain intents:
- Move the in-scope utterances from unresolvedIntent to the domain intents.
- Move the out-of-scope utterances from the domain intents to unresolvedIntent.
Guidelines for Trainer Ht
If possible, use unmodified, real-world phrases that include:
- common misspellings
- standard abbreviations that a user might enter ("opty" for "opportunity", for example)
- non-standard names, such as product names
- spelling variants ("check" and "cheque", for example)
Vary the vocabulary and sentence structure in these starter phrases by one or two permutations using:
- slang words (moolah, lucre, dough)
- common expressions (Am I broke? for an intent called AccountBalance)
- alternate wording (Send cash to savings, Send funds to savings, Send money to savings, Transfer cash to savings)
- different categories of objects (I want to order a pizza, I want to order some food)
- alternate spellings (check, cheque)
- common misspellings ("buisness" for business)
- unusual word order (To checking, $20 send)
- Use different concepts to express the same intent, like I am hungry and Make me a pizza
- Do not associate a single word or phrase with a specific intent unless that word or phrase indicates the intent. Repeated phrases can skew the intent resolution. For example, starting each OrderPizza utterance with "I want to …" and each ShowMenu intent with "Can you help me to …" may increase the likelihood of the model resolving any user input that begins with "I want to" with OrderPizza and "Can you help me to" with ShowMenu.
Avoid sentence fragments and single words. Instead, use complete sentences (which can be up to 255 characters) that include the action and the entity. If you must use single key word examples, choose them carefully.
- Keep a test corpus as a CSV file to batch test the intent resolution by clicking More and then Export Intents. Because adding new intent examples can cause regressions, you might end up adding several test phrases to stabilize the intent resolution behavior.
- Run utterance quality reports to maintain a set of intents that are distinct from one another.
Intent Training and Testing
Training a model with your training corpus allows your bot to discern what users say (or in some cases, are trying to say).
You can improve the acuity of the cognition through rounds of intent testing and intent training. You control the training through the intent definitions alone; the skill can’t learn on its own from the user chat.
We recommend that you set aside 20 percent of your corpus for testing your skill and train your skill with the remaining 80 percent. Keep these two sets separate so that the test set remains "unknown" to your skill.
Apply the 80/20 split to each intent's data set. Randomize your utterances before making this split so that the training models weigh the terms and patterns in the utterances equally.
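The per-intent split described above can be sketched in Python. The column names follow the CSV format used elsewhere in this topic; the function itself is illustrative, not part of the product:

```python
import random
from collections import defaultdict

def split_corpus(rows, test_fraction=0.2, seed=42):
    """Split utterances into training and test sets, per intent.

    rows: dicts with at least 'query' and 'topIntent' keys (two of the
    columns described in Import Intents from a CSV File).
    """
    by_intent = defaultdict(list)
    for row in rows:
        by_intent[row["topIntent"]].append(row)

    rng = random.Random(seed)  # a fixed seed keeps the split reproducible
    train, test = [], []
    for utterances in by_intent.values():
        rng.shuffle(utterances)  # randomize before splitting
        cut = max(1, round(len(utterances) * test_fraction))
        test.extend(utterances[:cut])
        train.extend(utterances[cut:])
    return train, test
```

For two intents with ten utterances each and the default test_fraction of 0.2, this yields sixteen training and four test utterances, with no overlap between the two sets.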
The Intent Tester
The Intent tester is your window into your skill’s cognition. By entering phrases that are not part of the training corpus (the utterances that you’ve maintained in your testing set), you can find out how well you’ve crafted your intents and entities through the ranking and the returned JSON. This ranking, which is the skill’s estimate for the best candidate to resolve the user input, demonstrates its acuity at the current time.
- Click Try It Out! (located at the right-hand side).
- If your skill supports multiple native languages, choose the testing language. Choosing this option ensures that the utterance will be added to the corresponding language version of the corpus. The skill's primary language is selected by default.
- Enter a string of text that is not part of the training set.
- Click Send and then take a look at the ranking.
- Expand the JSON window to find out how your skill ranked the intents and see the entities that matched the input. If your skill’s top-ranking candidate isn’t what you expect, you might need to retrain the intents after doing one or both of the following:
Update the better candidate’s corpus with the input text that you just entered—Select the appropriate intent and then click Add as <language> Example. By default, <language> is the primary language of the skill. If you've selected a different testing language, like French, then the tester prompts you with Add as French Example.
Caution: Consider the impact on your training data before you add a test phrase. Adding a test phrase can change how the utterances that are similar to it get classified after retraining. In addition, adding a test phrase invalidates the test, because the incorporation of a test phrase into the training set ensures that the test will be successful.
Correct the system by editing the corpus using the Edit () and Delete () functions. A FAQ intent, for example, might receive a top rank because of the scope and phrasing of its constituent utterances. If you don’t want your users to get a FAQ whenever they ask typical questions, you’ll need to revise the corpus.
You need to retrain an intent whenever you add, change, or delete an utterance. A dirty Train icon () indicates when your training becomes outdated. When the retraining completes, click Reset () and then send the test phrase again.
The Intent Testing History
You can export the training data into a CSV file so that you can find out how the intents were trained.
By examining these logs in a text editor or spreadsheet program like Microsoft Excel, you can see each user request and bot reply. You can sort through these logs to see where the bot matched the user request with the right intent and where it didn’t.
Export Intent Data
To log conversations, be sure to switch on Enable Insights in Settings > General before you test your intents. To export data for a skill:
- Click to open the side menu and select Development > Skills.
- In the tile for the skill, click and select Export Conversations.
- Choose Intent Conversation Log, set the logging period, and then click
- Review the user input by opening the CSV files in a spreadsheet program.
Batch Test Intents
Use your intent testing data set on each iteration of your skill to find out if changes made to the platform or to the skill itself have compromised the accuracy of its intent resolution.
To use the test data:
- Click Try It Out! and then switch on Batch.
- Click Load and then browse to, and select the batch test file (a CSV
file that has the columns described in Import Intents from a CSV File).
- To gauge the skill's responsiveness to the impacts that new versions of the Oracle Digital Assistant platform may have on intent resolution, or on real-world use cases, adjust the Intent Confidence Threshold that's applied to the entire batch. The default value is the same as the skill's Confidence Threshold that's set in the Configuration page in Settings. You can change this value only for a single run of the batch tests; the value gets reset to the default after the testing concludes. Intents, even the top-scoring intents, that resolve below the Confidence Threshold are considered unresolved. Keep in mind that lowering the Confidence Threshold can shift the focus of the testing from the Confidence Threshold to simply matching the expected intent. The results from low Confidence Threshold testing may be unrealistic and may include false positives.
- Choose the number of tests running in parallel. Increasing the number of concurrent tests may speed up testing, but may also burden the system.
- Click Test. The results display in the test window.
- Drill down () to see how the test results compare to the batch
Failure testing enables you to bulk test utterances that should never be resolved, either because they result in unresolvedIntent, or because they only resolve to other intents below the Intent Confidence Threshold.
- In your CSV test file, specify unresolvedIntent as the topIntent for all the utterances that you expect to be unresolved. Ideally, these "false" phrases will remain unresolved.
- Load the CSV.
- If needed, adjust the Intent Confidence Threshold to confirm that the false phrases (the ones with unresolvedIntent as their topIntent) can only resolve below the value that you set here. For example, increasing the threshold might result in the false phrases failing to resolve at the confidence level to any intent (including unresolvedIntent), which means they pass because they're considered unresolved.
Tip: If you assigned the false phrases to unresolvedIntent in your CSV, you can double-check that they continue to fail to resolve to other intents at the threshold by running another batch test at a lower Intent Confidence Threshold. Even though the lower requirement gives them more leeway to resolve to an intent, they should still pass the test by failing to resolve at the threshold for the unresolvedIntent.
- Run the test.
- Review the test results, checking that the false test phrases are
either matched to unresolvedIntent at the threshold, or failed to match any
intent (unresolvedIntent or otherwise) at the threshold.
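The pass/fail logic for a batch run, including the failure-test phrases, can be sketched as follows. The result-record field names here are assumptions for illustration, not the tester's actual output format:

```python
def score_batch_results(results, threshold):
    """Evaluate batch test results, including failure-test phrases.

    results: list of dicts with 'expected' (the topIntent column),
    'matched' (the highest-scoring intent), and 'confidence'.
    A phrase whose top score falls below the threshold counts as
    unresolved, which is a pass when unresolvedIntent was expected.
    """
    passed = failed = 0
    for r in results:
        resolved = r["matched"] if r["confidence"] >= threshold else "unresolvedIntent"
        if resolved == r["expected"]:
            passed += 1
        else:
            failed += 1
    return passed, failed
```

Note that a false phrase passes either way: when it resolves to unresolvedIntent outright, or when its best match stays below the threshold.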
Tutorial: Best Practices for Building and Training Intents
Use this tutorial to find out about batch testing and other testing and training tips: Best Practices for Building and Training Intents.
Reference Intents in the Dialog Flow
The System.Intent component enables navigation to dialog states.
intent:
  component: "System.Intent"
  properties:
    variable: "iResult"
  transitions:
    actions:
      OrderPizza: "resolvesize"
      CancelPizza: "cancelorder"
      unresolvedIntent: "unresolved"
Tune Intent Resolution Before Publishing
Before you publish a version of a skill (and thus freeze that version), you should thoroughly test it and, if necessary, adjust its settings to fine tune its intent resolution.
These settings help the System.Intent component resolve intents for the skill.
Confidence Threshold: Determines the minimum confidence level required for user input to match an intent. When the level falls below this minimum value for all of the skill's intents, the component triggers its unresolvedIntent action. It's recommended to set this value to 0.7 or higher.
Confidence Win Margin: When a skill has multiple intents that exceed the value of the Confidence Threshold, it displays a list of possible intents and prompts the user to choose one. This property helps the skill determine what intents should be in the list. Set the maximum level to use for the delta between the respective confidence levels for the top intents. The list includes the intents that are greater than or equal to this delta and exceed the value set for the Confidence Threshold.
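A minimal sketch of that selection rule, as an illustration of the behavior described above rather than the product's code:

```python
def disambiguation_candidates(scores, confidence_threshold, win_margin):
    """Return the intents to list in a disambiguation prompt.

    scores: intent name -> confidence (0.0 to 1.0).
    Only intents at or above the Confidence Threshold are eligible, and
    of those, only the ones within the win margin of the top score.
    """
    eligible = {i: s for i, s in scores.items() if s >= confidence_threshold}
    if not eligible:
        return []  # nothing qualifies: the unresolvedIntent action fires
    top = max(eligible.values())
    return sorted(
        (i for i, s in eligible.items() if top - s <= win_margin),
        key=lambda i: -eligible[i],
    )
```

With a threshold of 0.7 and a win margin of 0.1, scores of 0.8 (OrderPizza), 0.75 (CancelPizza), and 0.4 (ShowMenu) produce a two-item prompt; shrinking the margin narrows the prompt to the single top intent.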
To access these settings:
Click to open the side menu, select Development > Skills, and open your bot.
In the left navigation for the skill, click and select the Configuration tab.
Once you add a skill to a digital assistant, there is another range of settings that you may need to adjust to better handle intent resolution in the context of the digital assistant. See Tune Routing Behavior.
How Confidence Threshold Works
You use the Confidence Threshold property to adjust the likelihood that given user input will resolve to the skill's intents.
When you increase the Confidence Threshold, you increase the certainty that any matching intents are accurate (not false positives). However, this also increases the chance that intents that you want to match with certain input will not get high enough confidence scores for the matching to occur, thus resulting in matches to unresolvedIntent instead.
When you lower the value of the Confidence Threshold property, you reduce the chance that intents that you want to match will fail to match. However, the lower you set this threshold, the greater risk you have of generating false positives in your matches.
As a general rule, the underlying language model works better with higher confidence thresholds, so you should set the Confidence Threshold to 70% (0.70) or higher to get the best results.
To help decide on the value that you set for this parameter, run batch tests with Confidence Threshold set at different levels to see which level works best for your skill.
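As a rough illustration of such a sweep, the following sketch scores a labeled test set at several candidate thresholds. The data shape and scoring function are hypothetical stand-ins for your skill's exported batch-test results:

```python
def accuracy_at_threshold(results, threshold):
    """Fraction of test utterances resolved correctly at a given threshold.

    results: list of (expected_intent, predicted_intent, confidence) tuples,
    e.g. assembled from a batch test run. Hypothetical format, for
    illustration only.
    """
    correct = 0
    for expected, predicted, confidence in results:
        # Below the threshold, the skill falls back to unresolvedIntent.
        resolved = predicted if confidence >= threshold else "unresolvedIntent"
        if resolved == expected:
            correct += 1
    return correct / len(results)

batch_results = [
    ("OrderPizza", "OrderPizza", 0.91),
    ("CancelPizza", "OrderPizza", 0.55),       # misclassified at low thresholds
    ("unresolvedIntent", "OrderPizza", 0.45),  # out-of-scope input
]
for t in (0.40, 0.60, 0.70):
    print(f"threshold {t:.2f}: accuracy {accuracy_at_threshold(batch_results, t):.2f}")
```

In this toy data set, raising the threshold filters out the low-confidence false positives, which is why higher thresholds often score better overall.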
How Confidence Win Margin Works
This feature is not supported in Oracle Digital Assistant version 19.4.1.
How Do I Create an Answer Intent?
- Create the answer intent – You can create the intents manually in the intents page by adding the intent name, the user-friendly conversation name, the answer, and the training utterances, or you can create answer intents in bulk by importing a CSV file. This file is similar to the standard intent CSV file, but in addition to the query, topIntent, and conversationName columns, it also has an answer column. For example:
query,topIntent,conversationName,answer
What are your hours?,StoreHours,Our Store Hours,"We're open from 9-5, Mondays-Thursdays or by appointment."
When are you open?,StoreHours,Our Store Hours,"We're open from 9-5, Mondays-Thursdays or by appointment."
When do you close?,StoreHours,Our Store Hours,"We're open from 9-5, Mondays-Thursdays or by appointment."
What do you sell?,Products,Our Products,We sell only hammers. All types.
Do you sell brick hammers?,Products,Our Products,We sell only hammers. All types.
Do you sell claw hammers?,Products,Our Products,We sell only hammers. All types.
Do you deliver?,Delivery_and_Pickup,Pickup and Delivery options,"No delivery service, sorry. Purchases are in-store only"
Can I buy one of your hammers on the web?,Delivery_and_Pickup,Pickup and Delivery options,"No delivery service, sorry. Purchases are in-store only"
Can you mail me a hammer?,Delivery_and_Pickup,Pickup and Delivery options,"No delivery service, sorry. Purchases are in-store only"
Can I return a hammer?,Returns,Our Return Policy,You cannot return any items. All sales are final.
My hammer doesn't work,Returns,Our Return Policy,You cannot return any items. All sales are final.
Can I exchange my hammer,Returns,Our Return Policy,You cannot return any items. All sales are final.
Note
Here are some things to keep in mind when creating answer intents:
- Adding markup makes the answer intent channel-specific. Answer intents don't support channel-specific resource bundle references, which means the one answer must potentially work for multiple channels.
- Add the same number of utterances to an answer intent as you would to a transactional intent.
- Train the intent with Trainer Tm.
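If you assemble the import file programmatically, Python's standard csv module handles the quoting for answers that contain commas. This is a hedged sketch; the column names follow the example CSV above, and the rows are samples from it:

```python
import csv
import io

# Rows in the answer-intent CSV layout: query, topIntent, conversationName, answer.
rows = [
    ("What are your hours?", "StoreHours", "Our Store Hours",
     "We're open from 9-5, Mondays-Thursdays or by appointment."),
    ("Do you deliver?", "Delivery_and_Pickup", "Pickup and Delivery options",
     "No delivery service, sorry. Purchases are in-store only"),
]

# Write to an in-memory buffer; in practice you'd open a real file instead.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["query", "topIntent", "conversationName", "answer"])
writer.writerows(rows)
print(buf.getvalue())
```

Note that csv.writer automatically wraps the comma-containing answers in double quotes, matching the format shown in the example above.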
- Create the dialog flow. The dialog flow only needs the System.Intent component to resolve the answer intents. Unlike standard intents, you don't need to reference answer intents as System.Intent actions. For example, this simple dialog flow definition will get your answer intents up and running:
#metadata: information about the flow
# platformVersion: the version of the bots platform that this flow was written to work with
metadata:
  platformVersion: "1.1"
main: true
name: Answer_intents1
context:
  variables:
    iResult: "nlpresult"
states:
  # Note that even though answer intents don't have actions, you must have a System.Intent state
  # even if you have no other types of intents.
  #
  # QnA intents output the answer and restart the conversation.
  intent:
    component: "System.Intent"
    properties:
      variable: "iResult"
    transitions:
      actions:
        unresolvedIntent: "systemError"
  # This is to catch missing actions for non-answer intents and other system issues.
  systemError:
    component: "System.Output"
    properties:
      text: "I don't have an answer for your question. Is there anything else I can help you with?"
    transitions:
      return: "systemError"
- You can optionally store the answer intent in a resource bundle. The default resource bundle (which is in English) that's associated with the answer intent is listed in the resource bundle Q&A page. The intent name is the resource key. To add support for other languages, click + Language, complete the dialog by adding an IETF BCP 47 language tag (such as fr for French), and then add a translated version of the output string.
- After your skill has been published and deployed to a channel, review the Quality Reports and the Q&A Insights.
DO's and DON'Ts for Conversational Design
Creating a robust set of intents for a successful skill requires a lot of attention. Here are some best practices to keep in mind.
Intent Design and Training
| DO | DON'T |
| --- | --- |
| DO plan to add utterances until you get results you expect. Generally speaking, models perform well as you add more quality training utterances. The number of utterances you need depends on the model, the training data, and the level of accuracy that is realistic for your model. | DON'T over-train individual intents. Don't add excessive training data to some intents to make them work "perfectly". If intent resolution is not behaving as expected, evaluate your intent structure for overlap between intents. Intent resolution will NEVER be 100% accurate. |
| DO use real-world data. Using the actual language that your skill is most likely to encounter is critical. Fabricated utterances can only take you so far and will not prepare your skill for real-world engagement. | DON'T use just keywords in training data. While it is acceptable to use single words/short phrases for training, the training data should have the same structure as the user's inputs. The fewer the words in utterances, the less successful classification will be. |
| DO use whole sentences to train intents. While it's OK to use short training utterances, be sure to match the conversational style of your users as closely as possible. | DON'T inadvertently skew intents. Be careful of words which add no specific meaning (e.g. "please" and "thanks") or entity values within utterances, as they can inadvertently skew intent resolution if they are heavily used in one intent but not in another. |
| DO use similar numbers of utterances per intent. Some intents (e.g., "hello", "goodbye") may have fewer utterances in their training sets. However, ensure that your main intents have a similar number of utterances to avoid biasing your model. | DON'T rely ONLY on intent resolution. Use entities to disambiguate common intents. If there's linguistic overlap between intents, consider using entities to disambiguate the user's intentions (and corresponding unique conversational path). |
| DO handle small talk. Users will make requests that are not relevant to the skill's purpose, such as for jokes and weather reports. They may also do things like ask if the skill is human. Ensure that you have a small talk strategy and aggressively test how the skill responds at all steps of your conversational flow. | DON'T overuse unresolvedIntent. Create "out-of-scope" intents for the things you know you don't know (that you may or may not enable the skill to do later). |
| DO consider multiple intents for a single use case. Customers may express the same need in multiple ways, e.g. in terms of the solution they desire OR the symptom of their problem. Use multiple intents that all resolve to the same "answer". | DON'T ignore abusive interactions. Similar to small talk, have a plan for abuse. This plan may need to include measures to ensure any abusive input from the user is not reflected back by the skill, as well as provisions for immediate escalation. |
Conversational User Experience
| DO | DON'T |
| --- | --- |
| DO give indications of most likely responses (including help and exit). For example, "Hey, I'm Bob the Bot. Ask me about X, Y, or Z. If you run into any problems, just type 'help'." | DON'T delay conversational design until "later in the project". For all but the simplest skills, conversational design must be given the same priority and urgency as other development work. It should start early and proceed in parallel with other tasks. |
| DO consider a personality for your bot. You should consider the personality and tone of your bot. However, be careful of overdoing human-like interaction (humor and sympathy often don't resonate well from a bot) and never try to fool your users into thinking that they are interacting with a human. | DON'T say that the skill "is still learning". While well-intended, this bad practice signals to the user (consciously or subconsciously) that the skill is not up to the task. |
| DO guide the user on what is expected from them. The skill should try to guide the user toward an appropriate response and not leave questions open ended. Open-ended questions make the user more likely to fall off the happy path. | DON'T use "cute" or "filler" responses. See "DO guide the user on what is expected from them". |
| DO break up long responses into individual chat bubbles and/or use line breaks. Large blobs of text without visual breaks are hard to read and can lead to confusion. | DON'T say "I'm sorry, I don't understand. Would you please rephrase your question?" This lazy error-handling approach is, more often than not, inaccurate. No matter how many times a user rephrases an out-of-scope question, the skill will NEVER have anything intelligent to say. |
| | DON'T overuse "confirmation" phrases. Confirmation phrases have their place. However, don't overuse them. Consider dialog flows that are able to take confidence levels into account before asking users to confirm. |
| DO develop utterances cyclically. Developing a robust training corpus requires multiple iterations, testing cycles, and ongoing monitoring and tuning. Use a cyclical "build, test, deploy, monitor, update" approach. | DON'T neglect the need for a performance measurement and improvement plan. Lacking a plan for measuring and improving your skill, you'll have no way of knowing whether it's really working. |
| DO test utterances using the 80/20 rule. Always test the robustness of your intents against one another by conducting multiple 80/20 tests, where 80% of newly harvested utterances are used to train the model and 20% are added to your testing data. | DON'T test only the happy path. "Getting it working" is 20% of the work. The remaining 80% is testing and adjusting how the skill responds to incorrect input and user actions. |
| DO test skill failure. Aggressively try to break your skill to see what happens. Don't rely solely on positive testing. | DON'T ignore processing of out-of-order messages. Users will scroll back in conversation history and click on past buttons. Testing the results needs to be part of your 80% work (as noted in DON'T test only the happy path). |
| | DON'T forget to re-test as you update your intents. If you add more training data (e.g., as your bot gets more real-world usage) and/or you add new intents for new use cases, don't forget to retest your model. |
| DO select use cases that are enhanced by conversational UI (CUI). Enabling conversational UI (via skills and digital assistants) is work. Make sure that the use case will be truly enhanced by adding CUI. | DON'T fail to have an escalation path. Even if you don't plan on allowing escalation to a human, you must have a strategy for those interactions where the skill can't help. |
| DO anticipate the first day being the worst day. Even the best-tested skills and digital assistants require tuning on day 1. | DON'T disband the project team immediately after launch. When scheduling your skill project, ensure that you keep the skill's creators (Conversational Designer, Project Manager, Tech Lead, etc.) on the project long enough for adequate tuning and, ultimately, knowledge transfer. |
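The 80/20 testing rule recommended above can be sketched as a simple split of newly harvested utterances. This is a hypothetical helper for preparing the two sets; the actual training and batch testing happen inside the platform:

```python
import random

def split_80_20(utterances, seed=42):
    """Shuffle harvested utterances and split them 80% train / 20% test."""
    shuffled = list(utterances)
    random.Random(seed).shuffle(shuffled)  # seeded shuffle for reproducibility
    cut = int(len(shuffled) * 0.8)
    return shuffled[:cut], shuffled[cut:]

# Hypothetical harvested utterances; in practice these come from real user logs.
harvested = [f"utterance {i}" for i in range(20)]
train, test = split_80_20(harvested)
print(len(train), len(test))  # → 16 4
```

Repeating this split with different seeds across multiple test cycles gives you the "multiple 80/20 tests" the table recommends.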