8 Intents

Intents allow your skill to understand what the user wants it to do. An intent categorizes typical user requests by the tasks and actions that your skill performs. The PizzaBot’s OrderPizza intent, for example, labels both a direct request, I want to order a pizza, and one that implies a request, I feel like eating a pizza.

Intents consist of permutations of typical user requests and statements, which are also referred to as utterances. As described in Create an Intent, you create an intent by naming a compilation of utterances for a particular action. Because your skill’s cognition is derived from these intents, each intent should be created from a data set that’s robust (one to two dozen utterances) and varied, so that your skill can interpret ambiguous user input. A rich set of utterances enables a skill to understand what the user wants when it receives messages like “Forget this order!” or “Cancel delivery!”—messages that mean the same thing but are expressed differently. To find out how sample user input allows your skill to learn, see Intent Training and Testing.

Note:

You can define your intents and custom entities in English and Simplified Chinese without having to first configure an auto-translation service. For other languages, you’ll need to configure an auto-translation service.

Create an Intent

To create an intent:
  1. Click Intents in the left navbar.
  2. If you have already defined your training data in a CSV file, click Import Intents. Import Intents from a CSV File describes this file's format. Otherwise, click Add Intent. Your skill needs at least two intents.
  3. Enter a descriptive name or phrase for the intent in the Conversation Name field. For example, if the intent name is callAgent, the conversation name would be Talk to a customer representative. When the skill can't resolve a message to an intent, it outputs the user-friendly names and phrases that you enter in the Conversation Name field as the options listed in the “Do you want to...” disambiguation messages described in How Confidence Win Margin Works and Configure the Intent and Q&A Routing.
  4. Enter the intent name in the Name field. If you don't enter a conversation name, then the Name field value is used instead. Keep in mind that a short name with no end punctuation might not contribute much to the user experience. The intent name displays in the Conversation Name field for skills built with prior versions of Digital Assistant.
  5. As an optional step, add a description of the intent. Your description should focus on what makes the intent unique and the tasks or actions it performs.
  6. Start building the training corpus by adding utterances that illustrate the meaning behind the intent. To ensure optimal intent resolution, use terms, wording, and phrasing specific to the individual intent. Ideally, you should base your training data on real-world phrases, but if you don’t have any, aim for one to two dozen utterances for each intent. That said, you can get your skill up and running with fewer (three to five) when you train it with Trainer Ht. You can save your utterances by pressing Enter or by clicking outside of the input field. To manage the training set, select a row to access the Edit and Delete functions.
    Alternatively, you can add an entire set of intents and their respective utterances by importing them from a CSV file.
    You can make your skill more resilient by adding utterances that contain commonly misspelled and misused words. See Build Your Training Corpus.
    To allow your skill to cleanly distinguish between intents, create an intent that resolves inappropriate user input or gibberish.
  7. Add an entity if the intent needs one to resolve the user input. To find out how, see Add Entities to Intents.
  8. To teach your skill how to comprehend user input using the set of utterances that you’ve provided so far, click Train, choose a model and then click Submit.
    As described in Which Training Model Should I Use?, we provide two models that learn from your corpus: Trainer Ht and Trainer Tm. Each uses a different algorithm to reconcile the user input against your intents. Trainer Ht uses pattern matching, while Trainer Tm uses a machine learning algorithm based on word vectors. You’d typically follow this process:
    1. Create the initial training corpus.

    2. Train with Trainer Ht. You should start with Trainer Ht because it doesn’t require a large set of utterances. As long as there are enough utterances to disambiguate the intents, your skill will be able to resolve user input.

      If you get a Something’s gone wrong message when you try to train your skill, then you may not have added enough utterances to support training. First off, make sure that you have at least two intents with at least two (or preferably more) utterances each. If you haven’t added enough utterances, add a few more and then train your skill.

    3. Refine your corpus, retrain with Trainer Ht. Repeat as necessary—training is an iterative process.

    4. Train with Trainer Tm. Use this trainer when you’ve accumulated a robust set of intents.

    The Train button activates whenever you add an intent or when you update an intent by adding, changing, or deleting its utterances. To bring the training up to date, choose a training model and then click Train. The model displays an exclamation point whenever it needs training. When its training is current, it displays a check mark.

  9. Click Try it Out!, then Intent (if it's not already selected). Next, enter phrases similar to those in your test set.
    To log your intent testing results, enable conversation intent logging (Settings > General > Intent Conversation). Run a History Report describes how to use this data. Be sure to switch this (and all other) logging options off when your skill is in a production environment.

Add Entities to Intents

Some intents require entities—both built-in and custom—to complete an action within the dialog flow or make a REST call to a backend API. The system uses only these entities, which are known as intent entities, to fulfill the intent that’s associated with them. You can associate an entity with an intent by clicking Add New Entity and then selecting from the custom or built-in entities.

Alternatively, you can click New Entity to add an intent-specific entity. See Custom Entity Types.

Tip:

Only intent entities are included in the JSON payloads that are sent to, and returned by, the Component Service. Entities that aren’t associated with an intent won’t be included, even if they contribute to intent resolution by recognizing user input. If your custom component accesses entities through entity matches, then be sure to add the entity to your intent.

Import Intents from a CSV File

You can add your intents manually, or import them from a CSV file. You can create this file by exporting the intents and entities from another skill, or by creating it from scratch in a spreadsheet program or a text file.

The CSV file has three columns: query, topIntent, and conversationName:
query,topIntent,conversationName
I want to order a pizza,OrderPizza,Order a Pizza.
I want a pizza,OrderPizza,Order a Pizza.
I want a pizaa,OrderPizza,Order a Pizza.
I want a pizzaz,OrderPizza,Order a Pizza.
I'm hungry,OrderPizza,Order a Pizza.
Make me a pizza,OrderPizza,Order a Pizza.
I feel like eating a pizza,OrderPizza,Order a Pizza.
Gimme a pie,OrderPizza,Order a Pizza.
Give me a pizza,OrderPizza,Order a Pizza.
pizza I want,OrderPizza,Order a Pizza.
I do not want to order a pizza,CancelPizza,Cancel your order.
I do not want this,CancelPizza,Cancel your order.
I don't want to order this pizza,CancelPizza,Cancel your order.
Cancel this order,CancelPizza,Cancel your order.
Can I cancel this order?,CancelPizza,Cancel your order.
Cancel my pizza,CancelPizza,Cancel your order.
Cancel my pizaa,CancelPizza,Cancel your order.
Cancel my pizzaz,CancelPizza,Cancel your order.
I'm not hungry anymore,CancelPizza,Cancel your order.
don't cancel my pizza,unresolvedIntent,unresolvedIntent
Why is a cheese pizza called Margherita,unresolvedIntent,unresolvedIntent
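
If you assemble this file by hand, a quick validation pass can catch malformed rows before you import them. Here is a minimal sketch in Python (the file name is hypothetical; the script is not part of Digital Assistant):

  import csv

  # Check a hand-built intents CSV against the three-column format shown above.
  EXPECTED_HEADER = ["query", "topIntent", "conversationName"]

  with open("pizza_intents.csv", newline="") as f:  # hypothetical file name
      reader = csv.reader(f)
      if next(reader) != EXPECTED_HEADER:
          raise ValueError("Unexpected header row")
      for line_number, row in enumerate(reader, start=2):
          if len(row) != 3 or not all(cell.strip() for cell in row):
              print(f"Line {line_number}: malformed row: {row}")
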
To import a CSV file:
  1. Click Intents in the left navbar.

  2. Click More, and then choose Import intents.

  3. Select the .csv file and then click Open.

    Note:

    You can import CSVs generated from prior versions of Digital Assistant. Digital Assistant populates the conversation name using the topIntent entries.
  4. Train your skill.

Export Intents to a CSV File

You can reuse your training corpus by exporting it to a CSV file, which you can then import into another bot.

To export your intents and their utterances:
  1. Click Intents in the left navbar.

  2. Click More, and then choose Export intents.

  3. Save the file.

Which Training Model Should I Use?

We provide two training models that mold your skill’s cognition. You can use one or both of these models, each of which takes a different approach to machine learning. Run a data quality report after the training has completed for either model.

Trainer Ht

Trainer Ht is the default training model. It needs only a small training corpus, so use it as you develop the entities, intents, and the training corpus. When the training corpus has matured to the point where tests reveal highly accurate intent resolution, you’re ready to add a deeper dimension to your bot’s cognition by training Trainer Tm.

You can get a general understanding of how Trainer Ht resolves intents just from the training corpus itself. It forms matching rules from the sample sentences by tagging parts of speech and entities (both custom and built-in) and by detecting words that have the same meaning within the context of the intent. If an intent called SendMoney has both Send $500 to Mom and Pay Cleo $500, for example, Trainer Ht interprets pay as the equivalent of send. After training, Trainer Ht’s tagging reduces these sentences to templates (Send Currency to person, Pay person Currency) that it applies to the user input.

Because Trainer Ht draws on the sentences that you provide, you can predict its behavior: it will be highly accurate when tested with sentences similar to the ones that make up the training corpus (the user input that follows the rules, so to speak), but may fare less well when confronted with esoteric user input.

Note:

Trainer Ht is the default model, but you can change this by clicking Settings > General and then by choosing another model from the list. The default model displays in the tile in the bot catalog.

Trainer Tm

Trainer Tm uses machine learning that's based on word vectors and other text-based features. It doesn't focus on matching rules as heavily as Trainer Ht. Instead, Trainer Tm performs hyperparameter testing.

Trainer Tm is intended for an English vocabulary and understands the semantic meaning of frequently used words. While its support of Out of Vocabulary (OOV) words allows it to comprehend foreign-language terms that are commonly used in English, it won't preserve the semantic integrity of non-English utterances.

Improve Trainer Tm’s Intent Classification with unresolvedIntent

To improve Trainer Tm’s utterance classification, you need to train it to recognize the kind of utterances that don’t belong to any intent. To do this, create an intent called unresolvedIntent that represents phrases that fall outside of the corpus and then train the skill with Trainer Tm.

For example, when training a skill with two intents, OrderPizza and CancelPizza, Trainer Tm classifies don’t cancel my pizza order as the CancelPizza intent. In the absence of unresolvedIntent, Trainer Tm resorts to this intent even though don't cancel my pizza order falls outside of the CancelPizza utterances.

Build Your Training Corpus

When you define an intent, you first give it a name that illustrates some user action and then follow up by compiling a set of real-life user statements, or utterances. Collectively, your intents, and the utterances that belong to them, make up a training corpus. The term corpus is just a quick way of saying “all of the intents and sample phrases that I came up with to make this skill smart”. The corpus is the key to your skill’s intelligence. By training a model with your corpus, you essentially turn that model into a reference tool for resolving user input to a single intent. Because your training corpus ultimately plays the key role in deciding which route the skill-human conversation will take, you need to choose your words carefully when building it.

Generally speaking, a large and varied set of sample phrases increases a model’s ability to resolve intents accurately. But building a robust training corpus doesn’t just begin with well-crafted sample phrases; it actually begins with intents that are clearly delineated. Not only should they clearly reflect your use case, but their relationship to their sample sentences should be equally clear. If you’re not sure where a sample sentence belongs, then your intents aren’t distinct from one another.

You probably have sample utterances in mind when you create your intents, but you can expand upon them by using these guidelines:
  • Create 12 to 24 sample phrases per intent, if possible. Use unmodified, real-world phrases that include vernacular and spelling errors. Keep in mind that the more examples you add, the more resilient your skill becomes.

  • If you don't have any real-world phrases, create your own sample phrases. When you deploy your bot, you can use the conversation logs to improve your corpus. For your starter utterances, vary the vocabulary and sentence structure by one or two permutations using:
    • slang words (moolah, lucre, dough)

    • common expressions (Am I broke? for an intent called AccountBalance)

    • alternate words (Send cash to savings, Send funds to savings, Send money to savings, Transfer cash to savings.)

    • different categories of objects (I want to order a pizza, I want to order some food).

    • alternate spellings (check, cheque)

    • common misspellings (“buisness” for “business”)

    • unusual word order (To checking, $20 send)

  • Create parallel sample phrases for opposing intents. For intents like CancelPizza and OrderPizza, define contrasting sentences like I want to order a pizza and I do not want to order a pizza.

  • When certain words or phrases signify a specific intent, you can increase the probability of a correct match by bulking up the training data not only with the words and phrases themselves, but with synonyms and variations as well. For example, a training corpus for an OrderPizza intent might include a high concentration of “I want to” phrases, like I want to order a Pizza, I want to place an order, and I want to order some food. Use similar verbiage sparingly for other intents, because it might skew the training if used too freely (say, a CancelPizza intent with sample phrases like I want to cancel this pizza, I want to stop this order, and I want to order something else). When the high occurrence of unique words or phrases within an intent’s training set is unintended, however, you should revise the initial set of sentences or use the same verbiage for other intents.

    Use different concepts to express the same intent, like I am hungry and Make me a pizza.

  • Avoid sentence fragments and single words. Instead, use complete sentences (which can be up to 255 characters). If you must use single-keyword examples, choose them carefully.

  • Watch the letter casing: use uppercase when your entities extract proper nouns, like Susan and Texas, but use lowercase everywhere else.

  • Grow the corpus by adding any mismatched sentence to the correct intent.

  • Keep a test corpus as a CSV file to batch test intent resolution by clicking More and then Export Intents. Because adding a new intent example can cause regressions, you might end up adding several test phrases to stabilize the intent resolution behavior.

Intent Training and Testing

Training a model with your training corpus allows your bot to discern what users say (or in some cases, are trying to say).

You can improve the acuity of the cognition through rounds of intent testing and intent training. You control the training through the intent definitions alone; the skill can’t learn on its own from the user chat.

Test Sets

We recommend that you set aside 20 percent of your corpus for testing your skill and train your skill with the remaining 80 percent. Keep these two sets separate so that the test set remains “unknown” to your skill.

Apply the 80/20 split to each intent’s data set. Randomize your utterances before making this split to allow the training models to weigh the terms and patterns in the utterances equally.
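
One way to produce this split is to shuffle each intent's utterances and hold out roughly every fifth one. The following is a minimal sketch in Python, assuming the utterances are grouped by intent in a dictionary (the sample data echoes the CSV example above):

  import random

  def split_corpus(utterances_by_intent, test_ratio=0.2, seed=7):
      """Randomly split each intent's utterances into training and test sets."""
      rng = random.Random(seed)
      train, test = {}, {}
      for intent, utterances in utterances_by_intent.items():
          shuffled = utterances[:]
          rng.shuffle(shuffled)           # randomize before splitting
          cut = max(1, int(len(shuffled) * test_ratio))
          test[intent] = shuffled[:cut]   # ~20% held out for testing
          train[intent] = shuffled[cut:]  # ~80% used for training
      return train, test

  corpus = {
      "OrderPizza": ["I want to order a pizza", "Make me a pizza", "I'm hungry",
                     "Gimme a pie", "I feel like eating a pizza"],
      "CancelPizza": ["Cancel this order", "I do not want this",
                      "Cancel my pizza", "I'm not hungry anymore"],
  }
  train_set, test_set = split_corpus(corpus)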

The Intent Tester

The Intent tester is your window into your skill’s cognition. By entering phrases that are not part of the training corpus (the utterances that you’ve maintained in your testing set), you can find out how well you’ve crafted your intents and entities through the ranking and the returned JSON. This ranking, which is the skill’s estimate for the best candidate to resolve the user input, demonstrates its acuity at the current time.

Intent Testing

To find out how your intents and entities work:
  1. Click Try It Out! (located at the right-hand side).
  2. Enter a string of text that is not part of the training set.
  3. Click Send and then take a look at the ranking.

  4. Expand the JSON window to find out how your skill ranked the intents and see the entities that matched the input.

    If your skill’s top-ranking candidate isn’t what you expect, you might need to retrain the intents after doing one or both of the following:
    • Update the better candidate’s corpus with the input text that you just entered—Select the appropriate intent and then click Add Example.

      Caution:

      Consider the impact on your training data before you add a test phrase. Adding a test phrase can change how the utterances that are similar to it get classified after retraining. In addition, adding a test phrase invalidates the test, because the incorporation of a test phrase into the training set ensures that the test will be successful.
    • Correct the system by editing the corpus using the Edit and Delete functions. An FAQ intent, for example, might receive a top rank because of the scope and phrasing of its constituent utterances. If you don’t want your users to get an FAQ whenever they ask typical questions, you’ll need to revise the corpus.

    You need to retrain an intent whenever you add, change, or delete an utterance. A “dirty” Train icon indicates when your training becomes outdated. When the retraining completes, click Reset and then send the test phrase again.

The Intent Testing History

You can export the training data into a CSV file so that you can find out how the intents were trained.

By examining these logs in a text editor or spreadsheet program like Microsoft Excel, you can see each user request and bot reply. You can sort through these logs to see where the bot matched the user request with the right intent and where it didn’t.

Export Intent Data

To capture all of the intent testing data in a log, be sure to enable Intent Conversation in Settings > General before you test your intents.

To export data:
  1. In the bots catalog, open the menu in the tile and then click Export Conversation Log.

  2. Choose Intent Conversation Log, set the logging period, and then click Export.

  3. Open the CSV file in a spreadsheet program to review it. You can see if your model matches intents consistently by filtering the rows by keyword.
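
If you prefer to scan the log outside of a spreadsheet, a short script can do the keyword filtering. This is a minimal sketch in Python; the file and column names are assumptions, so check them against your actual export:

  import csv

  KEYWORD = "pizza"  # hypothetical keyword to filter on

  # The column names below are assumptions; check the header row of your export.
  with open("intent_conversation_log.csv", newline="") as f:
      for row in csv.DictReader(f):
          utterance = row.get("utterance", "")
          if KEYWORD in utterance.lower():
              print(utterance, "->", row.get("topIntent", ""))
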
Batch Test Intents

You can use the intent testing data that you’ve exported on new iterations of your skill to gauge the accuracy of its intent detection.

To use that test data:
  1. Click Try It Out! and then switch on Batch.

  2. Click Load, and then browse to and select the intents log (a CSV file).
  3. Choose the number of tests running in parallel. Increasing the number of concurrent tests may speed up testing, but may also burden the system.
  4. Click Test.
    The results display in the test window.
  5. Drill down to see how the test results compare to the batch data.

Reference Intents in the Dialog Flow

Within your dialog flow, your intents can define the actions property of the System.Intent component, as shown in the PizzaBot’s intent state.
  intent:
    component: "System.Intent"
    properties:
      # Holds the NLP results (the resolved intent and its confidence).
      variable: "iResult"
    transitions:
      # Each action routes the Dialog Engine to a named state.
      actions:
        OrderPizza: "resolvesize"
        CancelPizza: "cancelorder"
        unresolvedIntent: "unresolved"

Tune Intent Resolution Before Publishing

Before you publish a version of a skill (and thus freeze that version), you should thoroughly test it and, if necessary, adjust its settings to fine-tune its intent resolution.

These settings help the System.Intent component resolve intents for the skill.

  • Confidence Threshold: The skill uses this property to steer the conversation by the confidence level of the resolved intent. Set the minimum confidence level required to match an intent. When the level falls below this minimum value, the component triggers its unresolvedIntent action.

  • Confidence Win Margin: When a skill can’t determine a specific intent, it displays a list of possible intents and prompts the user to choose one. This property helps the skill determine which intents should be in the list. Set the maximum delta allowed between the confidence levels of the top intents. The list includes the intents whose confidence levels fall within this delta of the top intent and that also exceed the value set for the Confidence Threshold.
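
Taken together, the two settings act roughly as in the following sketch. This is only an illustration of the documented behavior, not the platform's actual routing code; the function name and the default values are assumptions for the example:

  def resolve(scores, confidence_threshold=0.40, win_margin=0.10):
      """Sketch of how the two settings shape routing (not the actual code).

      scores maps each intent name to a confidence level from 0.0 to 1.0.
      Returns None (unresolved), a single intent, or a list to disambiguate.
      """
      ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
      top_intent, top_score = ranked[0]
      if top_score < confidence_threshold:
          return None  # triggers the unresolvedIntent action
      # Keep the intents that clear the threshold and whose scores fall
      # within the win margin of the top score.
      shortlist = [name for name, score in ranked
                   if score >= confidence_threshold
                   and top_score - score <= win_margin]
      return shortlist if len(shortlist) > 1 else top_intent

  resolve({"OrderPizza": 0.25, "CancelPizza": 0.20})  # None: below threshold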

To access these settings:

  • Click the icon to open the side menu, select Development > Skills, and open your bot.

  • In the left navigation for the skill, click Settings and select the Configuration tab.

Note:

Once you add a skill to a digital assistant, there is another range of settings that you may need to calibrate to better handle intent resolution in the context of the digital assistant. See Tune Routing Behavior.

How Confidence Threshold Works

With the Confidence Threshold property (accessed through Settings > Configuration), you can steer the conversation by the confidence level of the resolved intent, which is held in the NLP result variable (noted as iResult in the sample skills).

If the intent’s ranking exceeds the Confidence Threshold property (which is 40% by default), then the action defined for that intent is triggered, setting the path for the Dialog Engine. In the opposite case—when the value set for the Confidence Threshold property is higher than the ranking for the resolved intent—the Dialog Engine moves to the state defined for System.Intent’s unresolvedIntent action. See The Intent Tester.

Taking the PizzaBot as an example, testing its intents with I want to order pizza resolves to 100%. When you enter the same phrase in the Intent tab of the tester, however, the bot replies with How Old Are You?, a seemingly inappropriate response. Within the context of the PizzaBot dialog flow definition, however, this is the expected response for an intent whose ranking (100%) exceeds the confidence threshold (40%). When you enter 18, the checkage state’s allow: "crust" action directs the Dialog Engine to the crust state. (Because there were no entities to extract from the initial user input, the Dialog Engine bypassed the resolveSize and resolveCrust states and ended up here after the age confirmation instead of completing the order.)

If you entered a wholly inappropriate phrase for the PizzaBot like I want to buy a car, the intent testing window will rank the top intent at only 25%, which is below the 40% threshold. Because neither the OrderPizza nor the CancelPizza intent can resolve the user input satisfactorily, the Dialog Engine moves to the state defined for the unresolvedIntent action (unresolvedIntent: "unresolved"). As a result, the bot responds with "I don't understand, what do you want to do?"
  # The target of the unresolvedIntent action defined in the intent state.
  unresolved:
    component: "System.Output"
    properties:
      text: "I don't understand. What do you want to do?"
    transitions:
      return: "unresolved"

How Confidence Win Margin Works

With the Confidence Win Margin property (accessed through Settings > Configuration), you can enable your skill to prompt users for an intent when the confidence scores for multiple intents are close. For example, if a user asks the FinancialBot, “I want to check balance or send money,” the skill responds with a select list naming the top intents, Check Balances and Send Money. The skill offers these two intents in a select list because its confidence in them exceeds the value set for the Confidence Threshold property and the difference between their respective confidence levels (that is, the win margin) is within the value set for the Win Margin property.
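
To make the arithmetic concrete, suppose the threshold is 40% and the win margin is 10%. The scores below are invented for illustration:

  # Hypothetical confidence scores for "I want to check balance or send money":
  scores = {"CheckBalances": 0.62, "SendMoney": 0.55}
  threshold, win_margin = 0.40, 0.10  # example setting values

  top = max(scores.values())  # 0.62
  shortlist = [name for name, s in scores.items()
               if s >= threshold and top - s <= win_margin]
  # Both scores exceed 0.40, and 0.62 - 0.55 = 0.07 <= 0.10, so both intents
  # appear in the select list.
  print(shortlist)  # ['CheckBalances', 'SendMoney']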

DOs and DON'Ts for Conversational Design

Creating a robust set of intents for a successful skill requires a lot of attention. Here are some best practices to keep in mind.

Intent Design and Training

  • DO plan to add utterances until you get the results you expect. Generally speaking, models perform well as you add more quality training utterances. The number of utterances you need depends on the model, the training data, and the level of accuracy that is realistic for your model.

  • DON'T over-train individual intents. Don’t add excessive training data to some intents to make them work "perfectly". If intent resolution is not behaving as expected, evaluate your intent structure for overlap between intents. Intent resolution will NEVER be 100% accurate.

  • DO use real-world data. Using the actual language that your skill is most likely to encounter is critical. Fabricated utterances can only take you so far and will not prepare your skill for real-world engagement.

  • DON'T use just keywords in training data. While it is acceptable to use single words or short phrases for training, the training data should have the same structure as the user’s inputs. The fewer the words in utterances, the less successful classification will be.

  • DO use whole sentences to train intents. While it’s OK to use short training utterances, be sure to match the conversational style of your users as closely as possible.

  • DON'T inadvertently skew intents. Be careful of words that add no specific meaning (e.g., "please" and "thanks") or entity values within utterances, as they can inadvertently skew intent resolution if they are heavily used in one intent but not in another.

  • DO use similar numbers of utterances per intent. Some intents (e.g., "hello", "goodbye") may have fewer utterances in their training sets. However, ensure that your main intents have a similar number of utterances to avoid biasing your model.

  • DON’T rely ONLY on intent resolution. Use entities to disambiguate common intents. If there’s linguistic overlap between intents, consider using entities to disambiguate the user’s intentions (and corresponding unique conversational path).

  • DO train the unresolvedIntent intent. Training this intent will improve classification of utterances that are not part of the scope of your skill (or of your "out of scope" intent).

  • DON’T overuse unresolvedIntent. Create “out-of-scope" intents for the things you know you don't know (that you may or may not enable the skill to do later).

  • DO handle small talk. Users will make requests that are not relevant to the skill's purpose, such as for jokes and weather reports. They may also do things like ask if the skill is human. Ensure that you have a small talk strategy and aggressively test how the skill responds at all steps of your conversational flow.

  • DON’T ignore abusive interactions. Similar to small talk, have a plan for abuse. This plan may need to include measures to ensure any abusive input from the user is not reflected back by the skill, as well as provisions for immediate escalation.

  • DO consider multiple intents for a single use case. Customers may express the same need in multiple ways, e.g., in terms of the solution they desire OR the symptom of their problem. Use multiple intents that all resolve to the same "answer".

Conversational User Experience

  • DO give indications of most likely responses (including help and exit). For example, "Hey, I'm Bob the Bot. Ask me about X, Y, or Z. If you run into any problems, just type 'help'."

  • DON'T delay conversational design until "later in the project". For all but the simplest skills, conversational design must be given the same priority and urgency as other development work. It should start early and proceed in parallel with other tasks.

  • DO consider a personality for your bot. You should consider the personality and tone of your bot. However, be careful of overdoing human-like interaction (humor and sympathy often don't resonate well from a bot) and never try to fool your users into thinking that they are interacting with a human.

  • DON'T say that the skill "is still learning". While well-intended, this bad practice signals to the user (consciously or subconsciously) that the skill is not up to the task.

  • DO guide the user on what is expected from them. The skill should try to guide the user toward an appropriate response and not leave questions open ended. Open-ended questions make the user more likely to fall off the happy path.

  • DON'T use "cute" or "filler" responses. See "DO guide the user on what is expected from them".

  • DO break up long responses into individual chat bubbles and/or use line breaks. Large blobs of text without visual breaks are hard to read and can lead to confusion.

  • DON'T say "I’m sorry, I don’t understand. Would you please rephrase your question?" This lazy error-handling approach is, more often than not, inaccurate. No matter how many times a user rephrases an out-of-scope question, the skill will NEVER have anything intelligent to say.

  • DON'T overuse "confirmation" phrases. Confirmation phrases have their place. However, don’t overuse them. Consider dialog flows that are able to take confidence levels into account before asking users to confirm.

Test Strategies

  • DO develop utterances cyclically. Developing a robust training corpus requires multiple iterations, testing cycles, and ongoing monitoring and tuning. Use a cyclical "build, test, deploy, monitor, update" approach.

  • DON'T neglect the need for a performance measurement and improvement plan. Lacking a plan for measuring and improving your skill, you'll have no way of knowing whether it’s really working.

  • DO test utterances using the 80/20 rule. Always test the robustness of your intents against one another by conducting multiple 80/20 tests, where 80% of newly harvested utterances are used to train the model and 20% are added to your testing data.

  • DON'T test only the happy path. "Getting it working" is 20% of the work. The remaining 80% is testing and adjusting how the skill responds to incorrect input and user actions.

  • DO test skill failure. Aggressively try to break your skill to see what happens. Don’t rely solely on positive testing.

  • DON'T ignore out-of-order messages. Users will scroll back in the conversation history and click past buttons. Testing the results needs to be part of your 80% work (as noted in DON'T test only the happy path).

  • DON’T forget to re-test as you update your intents. If you add more training data (e.g., as your bot gets more real-world usage) and/or you add new intents for new use cases, don’t forget to retest your model.

Project Considerations

  • DO select use cases that are enhanced by conversational UI (CUI). Enabling conversational UI (via skills and digital assistants) is work. Make sure that the use case will be truly enhanced by adding CUI.

  • DON'T fail to have an escalation path. Even if you don’t plan on allowing escalation to a human, you must have a strategy for those interactions where the skill can’t help.

  • DO anticipate the first day being the worst day. Even the best-tested skills and digital assistants require tuning on day 1.

  • DON'T disband the project team immediately after launch. When scheduling your skill project, ensure that you keep the skill’s creators (Conversational Designer, Project Manager, Tech Lead, etc.) on the project long enough for adequate tuning and, ultimately, knowledge transfer.