Intents allow your skill to understand what the user wants it to do. An intent categorizes typical user requests by the tasks and actions that your skill performs. The PizzaBot's OrderPizza intent, for example, labels a direct request, I want to order a pizza, along with another that implies a request, I feel like eating a pizza.
Note: You can define your intents and custom entities in English and Simplified Chinese without having to first configure an auto-translation service. For other languages, you'll need to configure an auto-translation service.
Create an Intent
- Click Intents () in the left navbar.
- Click Add Intent. Your skill needs at least two intents.
- Enter a descriptive name or phrase for the intent in the Conversation Name field. For example, if the intent name is callAgent, the conversation name could be Talk to a customer representative. When the skill can't resolve a message to an intent, it outputs the user-friendly names and phrases that you enter in the Conversation Name field as the options listed in the "Do you want to..." disambiguation messages described in How Confidence Win Margin Works and Configure the Intent and Q&A Routing.
- Add the intent name in the Name field. If you don't enter a conversation name, then the Name field value is used instead. Keep in mind that a short name with no end punctuation might not contribute to the user experience. The intent name displays in the Conversation Name field for skills built with prior versions of Digital Assistant.
- Optionally, add a description of the intent. The description should focus on what makes the intent unique and the tasks or actions it performs.
- Start building the training corpus by adding utterances that illustrate the meaning behind the intent. To ensure optimal intent resolution, use terms, wording, and phrasing specific to the individual intent. Ideally, you should base your training data on real-world phrases, but if you don't have any, aim for one to two dozen utterances per intent. That said, you can get your skill up and running with fewer (three to five) when you train it with Trainer Ht. You can save your utterances by pressing Enter or by clicking outside of the input field. To manage the training set, select a row to access the Edit () and Delete () functions. Alternatively, you can add an entire set of intents and their respective utterances by importing them from a CSV file. You can make your skill more resilient by adding utterances that contain commonly misspelled and misused words. See Build Your Training Corpus. To allow your skill to cleanly distinguish between intents, create an intent that resolves inappropriate user input or gibberish.
- Add an entity if the intent needs one to resolve the user input. To find out how, see Add Entities to Intents.
- To teach your skill how to comprehend user input using the set of utterances that you've provided so far, click Train, choose a model, and then click Submit. As described in Which Training Model Should I Use?, we provide two models that learn from your corpus: Trainer Ht and Trainer Tm. Each uses a different algorithm to reconcile the user input against your intents. Trainer Ht uses pattern matching, while Trainer Tm uses a machine-learning algorithm based on word vectors. You'd typically follow this process:
Create the initial training corpus.
Train with Trainer Ht. You should start with Trainer Ht because it doesn’t require a large set of utterances. As long as there are enough utterances to disambiguate the intents, your skill will be able to resolve user input.
If you get a Something's gone wrong message when you try to train your skill, then you may not have added a sufficient number of utterances to support training. First, make sure that you have at least two intents with at least two (or preferably more) utterances each. If you haven't added enough utterances, add a few more, then train your skill again.
Refine your corpus, retrain with Trainer Ht. Repeat as necessary—training is an iterative process.
Train with Trainer Tm. Use this trainer when you’ve accumulated a robust set of intents.
The Train button () activates whenever you add an intent or when you update an intent by adding, changing, or deleting its utterances. To bring the training up to date, choose a training model and then click Train. The model displays an exclamation point whenever it needs training. When its training is current, it displays a check mark.
- Click Try it Out! then Intent (if it's not selected).
Next, enter some phrases similar to those in your test set. To log your intent testing results, enable conversation intent logging (Settings > General > Intent Conversation). Run a History Report describes how to use this data. Be sure to switch this (and all) logging options off when your skill is in a production environment.
Add Entities to Intents
Alternatively, you can click New Entity to add an intent-specific entity. See Custom Entity Types.
Tip: Only intent entities that are included in the JSON payloads are sent to, and returned by, the Component Service. The ones that aren't associated with an intent won't be included, even if they contribute to the intent resolution by recognizing user input. If your custom component accesses entities through entity matches, then be sure to add the entity to your intent.
Import Intents from a CSV File
You can add your intents manually, or import them from a CSV file. You can create this file by exporting the intents and entities from another skill, or by creating it from scratch in a spreadsheet program or a text file.
To import a CSV file:
query,topIntent,conversationName
I want to order a pizza,OrderPizza,Order a Pizza.
I want a pizza,OrderPizza,Order a Pizza.
I want a pizaa,OrderPizza,Order a Pizza.
I want a pizzaz,OrderPizza,Order a Pizza.
I'm hungry,OrderPizza,Order a Pizza.
Make me a pizza,OrderPizza,Order a Pizza.
I feel like eating a pizza,OrderPizza,Order a Pizza.
Gimme a pie,OrderPizza,Order a Pizza.
Give me a pizza,OrderPizza,Order a Pizza.
pizza I want,OrderPizza,Order a Pizza.
I do not want to order a pizza,CancelPizza,Cancel your order.
I do not want this,CancelPizza,Cancel your order.
I don't want to order this pizza,CancelPizza,Cancel your order.
Cancel this order,CancelPizza,Cancel your order.
Can I cancel this order?,CancelPizza,Cancel your order.
Cancel my pizza,CancelPizza,Cancel your order.
Cancel my pizaa,CancelPizza,Cancel your order.
Cancel my pizzaz,CancelPizza,Cancel your order.
I'm not hungry anymore,CancelPizza,Cancel your order.
don't cancel my pizza,unresolvedIntent,unresolvedIntent
Why is a cheese pizza called Margherita,unresolvedIntent,unresolvedIntent
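Before importing a file like this, it can help to sanity-check it by parsing it and grouping the utterances by intent. The following sketch uses only the Python standard library and the three-column format shown above; the sample rows here are an abbreviated, hypothetical subset, not the full corpus.

```python
import csv
import io
from collections import defaultdict

# Abbreviated sample in the same three-column format as the import file.
SAMPLE = """query,topIntent,conversationName
I want to order a pizza,OrderPizza,Order a Pizza.
Cancel this order,CancelPizza,Cancel your order.
don't cancel my pizza,unresolvedIntent,unresolvedIntent
"""

def utterances_by_intent(csv_text):
    """Group the query column by topIntent so you can eyeball per-intent coverage."""
    groups = defaultdict(list)
    for row in csv.DictReader(io.StringIO(csv_text)):
        groups[row["topIntent"]].append(row["query"])
    return dict(groups)

for intent, utterances in utterances_by_intent(SAMPLE).items():
    print(f"{intent}: {len(utterances)} utterance(s)")
```

A quick pass like this makes it easy to spot intents that fall short of the recommended utterance counts before you import.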
Click Intents () in the left navbar.
Click More, and then choose Import intents.
Select the .csv file and then click Open.
Note: You can import CSVs generated from prior versions of Digital Assistant. Digital Assistant populates the conversation name using the intent name.
Train your skill.
Export Intents to a CSV File
Which Training Model Should I Use?
We provide a duo of training models that mold your skill’s cognition. You can use one or both of these models, each of which uses a different approach to machine learning. Run a data quality report after the training has completed for either model.
Trainer Ht is the default training model. It needs only a small training corpus, so use it as you develop the entities, intents, and the training corpus. When the training corpus has matured to the point where tests reveal highly accurate intent resolution, you’re ready to add a deeper dimension to your bot’s cognition by training Trainer Tm.
You can get a general understanding of how Trainer Ht resolves intents just from the training corpus itself. It forms matching rules from the sample sentences by tagging parts of speech and entities (both custom and built-in) and by detecting words that have the same meaning within the context of the intent. If an intent called SendMoney has both Send $500 to Mom and Pay Cleo $500, for example, Trainer Ht interprets pay as the equivalent of send. After training, Trainer Ht's tagging reduces these sentences to templates (Send Currency to person, Pay person Currency) that it applies to the user input.
Note:Trainer Ht is the default model, but you can change this by clicking Settings > General and then by choosing another model from the list. The default model displays in the tile in the bot catalog.
Trainer Tm uses machine learning that's based on word vectors and other text-based features. It doesn't focus on matching rules as heavily as Trainer Ht. Instead, Trainer Tm performs hyperparameter testing.
Trainer Tm is intended for an English vocabulary and understands the semantic meaning of frequently used words. While its support of Out of Vocabulary (OOV) words allows it to comprehend foreign-language terms that are commonly used in English, it won't preserve the semantic integrity of non-English utterances.
Improve Trainer Tm’s Intent Classification with unresolvedIntent
To improve Trainer Tm’s utterance classification, you need to train it to recognize the kind of utterances that don’t belong to any intent. To do this, create an intent called unresolvedIntent that represents phrases that fall outside of the corpus and then train the skill with Trainer Tm.
For example, when training a skill with two intents, OrderPizza and CancelPizza, Trainer Tm classifies don't cancel my pizza order as the CancelPizza intent. In the absence of unresolvedIntent, Trainer Tm falls back to this intent even though don't cancel my pizza order doesn't belong with the CancelPizza utterances.
Build Your Training Corpus
When you define an intent, you first give it a name that illustrates some user action and then follow up by compiling a set of real-life user statements, or utterances. Collectively, your intents, and the utterances that belong to them, make up a training corpus. The term corpus is just a quick way of saying “all of the intents and sample phrases that I came up with to make this skill smart”. The corpus is the key to your skill’s intelligence. By training a model with your corpus, you essentially turn that model into a reference tool for resolving user input to a single intent. Because your training corpus ultimately plays the key role in deciding which route the skill-human conversation will take, you need to choose your words carefully when building it.
Generally speaking, a large and varied set of sample phrases increases a model’s ability to resolve intents accurately. But building a robust training corpus doesn’t just begin with well-crafted sample phrases; it actually begins with intents that are clearly delineated. Not only should they clearly reflect your use case, but their relationship to their sample sentences should be equally clear. If you’re not sure where a sample sentence belongs, then your intents aren’t distinct from one another.
Create 12 to 24 sample phrases per intent, if possible. Use unmodified, real-world phrases that include vernacular and spelling errors. Keep in mind that the more examples you add, the more resilient your skill becomes.
If you don't have any real-world phrases, create your own sample phrases. When you deploy your bot, you can use the conversation logs to improve your corpus. For your starter utterances, vary the vocabulary and sentence structure by one or two permutations using:
slang words (moolah, lucre, dough)
common expressions (Am I broke? for an intent called AccountBalance)
alternate words (Send cash to savings, Send funds to savings, Send money to savings, Transfer cash to savings.)
different categories of objects (I want to order a pizza, I want to order some food).
alternate spellings (check, cheque)
common misspellings (“buisness” for “business”)
unusual word order (To checking, $20 send)
Create parallel sample phrases for opposing intents. For intents like CancelPizza and OrderPizza, define contrasting sentences like I want to order a pizza and I do not want to order a pizza.
When certain words or phrases signify a specific intent, you can increase the probability for a correct match by bulking up the training data not only with the words and phrases themselves, but with synonyms and variations as well. For example, a training corpus for an OrderPizza intent might include a high concentration of “I want to” phrases, like I want to order a Pizza, I want to place an order, and I want to order some food. Use similar verbiage sparingly for other intents, because it might skew the training if used too freely (say, a CancelPizza intent with sample phrases like I want to cancel this pizza, I want to stop this order, and I want to order something else). When the high occurrence of unique words or phrases within an intent’s training set is unintended, however, you should revise the initial set of sentences or use the same verbiage for other intents.
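One way to spot this kind of unintended skew is to count how often each term appears in each intent's training set. The sketch below is illustrative only: the corpus and the whitespace tokenization are simplified assumptions, not part of the product.

```python
from collections import Counter

# Hypothetical training sets; the intents and utterances are illustrative only.
corpus = {
    "OrderPizza": ["I want to order a pizza", "I want to place an order",
                   "I want to order some food"],
    "CancelPizza": ["Cancel this order", "I do not want this pizza",
                    "I'm not hungry anymore"],
}

def term_frequencies(corpus):
    """Count lowercase tokens per intent to surface words concentrated in one intent."""
    return {intent: Counter(word for utterance in utterances
                            for word in utterance.lower().split())
            for intent, utterances in corpus.items()}

freqs = term_frequencies(corpus)
# "want" appears three times in OrderPizza but only once in CancelPizza,
# the kind of imbalance that can skew training if it is unintended.
print(freqs["OrderPizza"]["want"], freqs["CancelPizza"]["want"])
```

If a word dominates one intent unintentionally, either revise that intent's sentences or use similar verbiage in the other intents, as described above.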
Use different concepts to express the same intent, like I am hungry and Make me a pizza.
Avoid sentence fragments and single words. Instead, use complete sentences (which can be up to 255 characters). If you must use single key word examples, choose them carefully.
Watch the letter casing: use uppercase when your entities extract proper nouns, like Susan and Texas, but use lowercase everywhere else.
Grow the corpus by adding any mismatched sentence to the correct intent.
- Keep a test corpus as a CSV file to batch test intent resolution by clicking More and then Export Intents. Because adding a new intent example can cause regressions, you might end up adding several test phrases to stabilize the intent resolution behavior.
Intent Training and Testing
Training a model with your training corpus allows your bot to discern what users say (or in some cases, are trying to say).
You can improve the acuity of the cognition through rounds of intent testing and intent training. You control the training through the intent definitions alone; the skill can’t learn on its own from the user chat.
We recommend that you set aside 20% of your corpus for testing your skill and train your skill with the remaining 80%. Keep these two sets separate so that the test set remains "unknown" to your skill.
Apply the 80/20 split to each intent's data set. Randomize your utterances before making the split so that the training models weigh the terms and patterns in the utterances equally.
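The randomized per-intent 80/20 split described above can be sketched as follows. This is an illustrative helper, not a product feature; the seed, the ratio, and the corpus shape are assumptions made for the example.

```python
import random

def split_corpus(corpus, train_ratio=0.8, seed=42):
    """Shuffle each intent's utterances, then split them into train/test sets."""
    train, test = {}, {}
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    for intent, utterances in corpus.items():
        shuffled = utterances[:]
        rng.shuffle(shuffled)
        cut = int(len(shuffled) * train_ratio)
        train[intent] = shuffled[:cut]
        test[intent] = shuffled[cut:]
    return train, test

# Hypothetical ten-utterance intent: 8 utterances train, 2 are held out for testing.
corpus = {"OrderPizza": [f"utterance {i}" for i in range(10)]}
train, test = split_corpus(corpus)
print(len(train["OrderPizza"]), len(test["OrderPizza"]))  # 8 2
```

Keeping the held-out 20% entirely out of the training set is what makes the later accuracy numbers meaningful.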
The Intent Tester
The Intent tester is your window into your skill’s cognition. By entering phrases that are not part of the training corpus (the utterances that you’ve maintained in your testing set), you can find out how well you’ve crafted your intents and entities through the ranking and the returned JSON. This ranking, which is the skill’s estimate for the best candidate to resolve the user input, demonstrates its acuity at the current time.
- Click Try It Out! (located at the right-hand side).
- Enter a string of text that is not part of the training set.
- Click Send and then take a look at the ranking.
- Expand the JSON window to find out how your skill ranked the intents and see the entities that matched the input. If your skill's top-ranking candidate isn't what you expect, you might need to retrain the intents after doing one or both of the following:
Update the better candidate’s corpus with the input text that you just entered—Select the appropriate intent and then click Add Example.
Caution: Consider the impact on your training data before you add a test phrase. Adding a test phrase can change how the utterances that are similar to it get classified after retraining. In addition, adding a test phrase invalidates the test, because the incorporation of a test phrase into the training set ensures that the test will be successful.
Correct the system by editing the corpus using the Edit () and Delete () functions. A FAQ intent, for example, might receive a top rank because of the scope and phrasing of its constituent utterances. If you don’t want your users to get a FAQ whenever they ask typical questions, you’ll need to revise the corpus.
You need to retrain an intent whenever you add, change, or delete an utterance. A dirty Train icon () indicates when your training becomes outdated. When the retraining completes, click Reset () and then send the test phrase again.
The Intent Testing History
You can export the training data into a CSV file so that you can find out how the intents were trained.
By examining these logs in a text editor or spreadsheet program like Microsoft Excel, you can see each user request and bot reply. You can sort through these logs to see where the bot matched the user request with the right intent and where it didn’t.
Export Intent Data
To capture all of the intent testing data in a log, be sure to enable Intent Conversation in Settings > General before you test your intents. To export data:
- In the bots catalog, open the menu in the tile and then click Export Conversation Log.
- Choose Intent Conversation Log, set the logging period, and then click Export.
- Open the CSV file in a spreadsheet program to review it. You can see if your model matches intents consistently by filtering the rows by keyword.
Batch Test Intents
You can use the intent testing data that you’ve exported on new iterations of your skill to gauge the accuracy of its intent detection.
- Click Try It Out! and then switch on Batch.
- Click Load, then browse to and select the intents log (a CSV file).
- Choose the number of tests running in parallel. Increasing the number of concurrent tests may speed up testing, but may also burden the system.
- Click Test. The results display in the test window.
- Drill down () to see how the test results compare to the batch data.
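Conceptually, a batch test compares the intent that your skill predicts for each test utterance against the topIntent column of the test CSV. This hypothetical helper shows the accuracy calculation in isolation; it doesn't call any Digital Assistant API, and the sample labels are made up.

```python
def batch_accuracy(expected, predicted):
    """Fraction of test utterances whose predicted intent matches the expected label."""
    assert len(expected) == len(predicted)
    hits = sum(e == p for e, p in zip(expected, predicted))
    return hits / len(expected)

# Hypothetical results: expected labels come from the test CSV's topIntent
# column, predictions come from running the utterances through the skill.
expected = ["OrderPizza", "CancelPizza", "OrderPizza", "unresolvedIntent"]
predicted = ["OrderPizza", "CancelPizza", "unresolvedIntent", "unresolvedIntent"]
print(batch_accuracy(expected, predicted))  # 0.75
```

Tracking this number across iterations of your skill tells you whether a corpus change improved or regressed intent detection.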
Reference Intents in the Dialog Flow
You reference intents in your dialog flow definition through the System.Intent component's actions property, as shown in the PizzaBot's dialog flow:
intent:
  component: "System.Intent"
  properties:
    variable: "iResult"
  transitions:
    actions:
      OrderPizza: "resolvesize"
      CancelPizza: "cancelorder"
      unresolvedIntent: "unresolved"
Tune Intent Resolution Before Publishing
Before you publish a version of a skill (and thus freeze that version), you should thoroughly test it and, if necessary, adjust its settings to fine tune its intent resolution.
These settings help the System.Intent component resolve intents for the skill.
Confidence Threshold: The skill uses this property to steer the conversation by the confidence level of the resolved intent. Set the minimum confidence level required to match an intent. When the level falls below this minimum value, the component triggers its unresolvedIntent action.
Confidence Win Margin: When a skill can't determine a specific intent, it displays a list of possible intents and prompts the user to choose one. This property helps the skill determine which intents should be in the list. Set the maximum allowed delta between the confidence levels of the top intents. The list includes the intents whose confidence levels fall within this delta of the top intent and that also exceed the value set for Confidence Threshold.
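Together, the two settings act as a filter over the ranked intents. The following sketch models the behavior described above; the routing logic and the 10% win-margin default are illustrative assumptions, not the component's actual implementation (only the 40% threshold default is documented here).

```python
def route(ranking, confidence_threshold=0.4, win_margin=0.1):
    """Pick a winning intent, or a disambiguation list, per the rules above.

    ranking: dict of intent name -> confidence, e.g. from the NLP result.
    Returns (intent_or_action, disambiguation_candidates).
    """
    ranked = sorted(ranking.items(), key=lambda kv: kv[1], reverse=True)
    top_name, top_score = ranked[0]
    # Below the Confidence Threshold: fall through to unresolvedIntent.
    if top_score < confidence_threshold:
        return ("unresolvedIntent", [])
    # Intents within the win margin of the top score (and above the
    # threshold) become candidates for the "Do you want to..." prompt.
    candidates = [name for name, score in ranked
                  if top_score - score <= win_margin
                  and score >= confidence_threshold]
    if len(candidates) > 1:
        return ("disambiguate", candidates)
    return (top_name, [])

print(route({"OrderPizza": 0.9, "CancelPizza": 0.2}))   # clear winner
print(route({"OrderPizza": 0.62, "CancelPizza": 0.58})) # too close: disambiguate
print(route({"OrderPizza": 0.3}))                       # below threshold
```

Raising the threshold makes the skill stricter about matching; widening the margin makes it more likely to ask the user to choose.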
To access these settings:
Click to open the side menu, select Development > Skills, and open your bot.
In the left navigation for the skill, click Settings and select the Configuration tab.
Note:Once you add a skill to a digital assistant, there is another range of settings that you may need to calibrate to better handle intent resolution in the context of the digital assistant. See Tune Routing Behavior.
How Confidence Threshold Works
With the Confidence Threshold property (accessed through Settings > Configuration), you can steer the conversation by the confidence level of the resolved intent, which is held in the NLP result variable (noted as iResult in the sample skills). If the intent's ranking exceeds the Confidence Threshold property (which, by default, is 40%), then the action defined for that intent is triggered, setting the path for the Dialog Engine. In the opposite case, when the value set for the Confidence Threshold property is higher than the ranking for the resolved intent, the Dialog Engine moves to the state defined for the unresolvedIntent action. See The Intent Tester.
Taking the PizzaBot as an example, testing its intents with I want to order pizza resolves to 100%. When you enter the same phrase in the Intent tab of the tester, however, the bot replies with How Old Are You?, a seemingly inappropriate response. Within the context of the PizzaBot dialog flow definition, this is the expected response for an intent whose ranking (100%) exceeds the confidence threshold (40%). When you enter 18, the allow: "crust" action directs the Dialog Engine to the crust state. (Because there were no entities to extract from the initial user input, the Dialog Engine bypassed the resolveCrust states and ended up here after the age confirmation instead of completing the order.)
When the ranking falls below the confidence threshold, the Dialog Engine moves to the state defined for the unresolvedIntent action (unresolvedIntent: "unresolved"). As a result, the bot responds with "I don't understand. What do you want to do?"
unresolved:
  component: "System.Output"
  properties:
    text: "I don't understand. What do you want to do?"
  transitions:
    return: "unresolved"
How Confidence Win Margin Works
DO's and DON'Ts for Conversational Design
Creating a robust set of intents for a successful skill requires a lot of attention. Here are some best practices to keep in mind.
Intent Design and Training
| DO | DON'T |
| --- | --- |
| DO use real world data. Using the actual language that your skill is most likely to encounter is critical. Fabricated utterances can only take you so far and will not prepare your skill for real-world engagement. | DON'T over-train your intents. Don't add excessive training data to some intents to make them work "perfectly". If intent resolution is not behaving as expected, evaluate your intent structure for overlap between intents. Intent resolution will NEVER be 100% accurate. |
| DO use whole sentences to train intents. While it's OK to use short training utterances, be sure to match the conversational style of your users as closely as possible. | DON'T use just keywords in training data. While it is acceptable to use single words or short phrases for training, the training data should have the same structure as the user's inputs. The fewer the words in utterances, the less successful classification will be. |
| DO use similar numbers of utterances per intent. Some intents (e.g., "hello", "goodbye") may have fewer utterances in their training sets. However, ensure that your main intents have a similar number of utterances to avoid biasing your model. | DON'T inadvertently skew intents. Be careful of words which add no specific meaning (e.g., "please" and "thanks") or entity values within utterances, as they can inadvertently skew intent resolution if they are heavily used in one intent but not in another. |
| DO consider multiple intents for a single use case. Customers may express the same need in multiple ways, e.g., in terms of the solution they desire OR the symptom of their problem. Use multiple intents that all resolve to the same "answer". | DON'T rely ONLY on intent resolution. Use entities to disambiguate common intents. If there's linguistic overlap between intents, consider using entities to disambiguate the user's intentions (and corresponding unique conversational path). |
| | DON'T overuse unresolvedIntent. Create "out-of-scope" intents for the things you know you don't know (that you may or may not enable the skill to do later). |
| DO test utterances using the 80/20 rule. Always test the robustness of your intents against one another by conducting multiple 80/20 tests, where 80% of newly harvested utterances are used to train the model and 20% are added to your testing data. | DON'T forget to re-test as you update your intents. If you add more training data (e.g., as your bot gets more real-world usage) and/or you add new intents for new use cases, don't forget to retest your model. |
| DO handle small talk. Users will make requests that are not relevant to the skill's purpose, such as for jokes and weather reports. They may also do things like ask if the skill is human. Ensure that you have a small talk strategy and aggressively test how the skill responds at all steps of your conversational flow. | DON'T ignore abusive interactions. Similar to small talk, have a plan for abuse. This plan may need to include measures to ensure any abusive input from the user is not reflected back by the skill, as well as provisions for immediate escalation. |
Conversational User Experience
| DO | DON'T |
| --- | --- |
| DO give indications of most likely responses (including help and exit). For example, "Hey, I'm Bob the Bot. Ask me about X, Y, or Z. If you run into any problems, just type 'help'." | DON'T delay conversational design until "later in the project". For all but the simplest skills, conversational design must be given the same priority and urgency as other development work. It should start early and proceed in parallel with other tasks. |
| DO consider a personality for your bot. You should consider the personality and tone of your bot. However, be careful of overdoing human-like interaction (humor and sympathy often don't resonate well from a bot) and never try to fool your users into thinking that they are interacting with a human. | DON'T say that the skill "is still learning". While well-intended, this bad practice signals to the user (consciously or subconsciously) that the skill is not up to the task. |
| DO guide the user on what is expected from them. The skill should try to guide the user toward an appropriate response and not leave questions open ended. Open-ended questions make the user more likely to fall off the happy path. | DON'T use "cute" or "filler" responses. See "DO guide the user on what is expected from them". |
| DO break up long responses into individual chat bubbles and/or use line breaks. Large blobs of text without visual breaks are hard to read and can lead to confusion. | DON'T say "I'm sorry, I don't understand. Would you please rephrase your question?" This lazy error-handling approach is, more often than not, inaccurate. No matter how many times a user rephrases an out-of-scope question, the skill will NEVER have anything intelligent to say. |
| | DON'T overuse "confirmation" phrases. Confirmation phrases have their place. However, don't overuse them. Consider dialog flows that are able to take confidence levels into account before asking users to confirm. |
| DO develop utterances cyclically. Developing a robust training corpus requires multiple iterations and testing cycles and ongoing monitoring and tuning. Use a cyclical "build, test, deploy, monitor, update" approach. | DON'T neglect the need for a performance measurement and improvement plan. Lacking a plan for measuring and improving your skill, you'll have no way of knowing whether it's really working. |
| DO test utterances using the 80/20 rule. Always test the robustness of your intents against one another by conducting multiple 80/20 tests, where 80% of newly harvested utterances are used to train the model and 20% are added to your testing data. | DON'T test only the happy path. "Getting it working" is 20% of the work. The remaining 80% is testing and adjusting how the skill responds to incorrect input and user actions. |
| DO test skill failure. Aggressively try to break your skill to see what happens. Don't rely solely on positive testing. | DON'T ignore processing out of order messages. Users will scroll back in conversation history and click on past buttons. Testing the results needs to be part of your 80% work (as noted in DON'T test only the happy path). |
| DO select use cases that are enhanced by conversational UI (CUI). Enabling conversational UI (via skills and digital assistants) is work. Make sure that the use case will be truly enhanced by adding CUI. | DON'T fail to have an escalation path. Even if you don't plan on allowing escalation to a human, you must have a strategy for those interactions where the skill can't help. |
| DO anticipate the first day being the worst day. Even the best-tested skills and digital assistants require tuning on day 1. | DON'T disband the project team immediately after launch. When scheduling your skill project, ensure that you keep the skill's creators (Conversational Designer, Project Manager, Tech Lead, etc.) on the project long enough for adequate tuning and, ultimately, knowledge transfer. |