Introduction
This 45-minute tutorial shows you how to integrate generative AI into your skill by connecting it to a large language model, or LLM. An LLM is an AI system that has been trained on vast amounts of text data. These models can be trained on various domains to perform a variety of tasks, including audience-specific text summarization and sentiment analysis of chat histories.
By integrating your skill with an LLM, you enable it to not only field a range of user input, but also to respond with context-appropriate answers in a human-like tone. To help the LLM predict the most likely words or phrases for its responses, you send it the appropriate context and instructions in a block of text known as a prompt. In response, the LLM generates a completion, a sequence of words or phrases that it believes are the most probable continuation of the prompt.
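In practice, a prompt is just text sent in the body of a REST request, and the completion comes back in the response. As a rough sketch of the request side, here is an illustrative payload builder for a /generate-style call (the field names `model`, `prompt`, `max_tokens`, and `temperature`, and the default values, are assumptions for illustration, not a definitive contract):

```javascript
// Illustrative sketch: build the body of a /generate-style request.
// The field names and defaults here are assumptions, not the exact
// contract of any particular provider.
function buildGeneratePayload(prompt) {
  return {
    model: "command",   // the Cohere Command model used in this tutorial
    prompt: prompt,     // the context and instructions for the LLM
    max_tokens: 300,    // upper bound on the completion length
    temperature: 0      // 0 = deterministic, 1 = more creative
  };
}

const payload = buildGeneratePayload("Generate a fact about our milky way");
console.log(JSON.stringify(payload, null, 2));
```

The prompt string is the only part the skill varies per request; the rest of the payload is fixed configuration.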
Scenario
For this tutorial, the LLM that you're going to integrate a skill with is the Cohere Command model. The skill will use this model to generate an email to a sales team about their progress on closing a deal with a company called Elemental Design.
Objectives
Create an LLM service that provides access to the Cohere Command model for your Oracle Digital Assistant instance.
Create an event handler that transforms the REST payloads between the Cohere Command format and the format used by Oracle Digital Assistant.
Add a state to the dialog flow that connects users to the LLM service provider.
Add the prompt that describes the email to the Cohere Command model.
Test the prompt.
What Do You Need?
Access to Oracle Digital Assistant Version 23.10 or higher
While Cohere has introduced both a new API (/chat) and model (cohere.command.R), this tutorial is based on the /generate API and the associated Cohere.command model. The API for the /generate endpoint is still valid, but if you wish to use the /chat endpoint instead, then you will need to manually update the request and response payloads.
A Cohere-provided bearer token
Sample provider-specific request and static response payloads
The following artifacts, which are included with this tutorial:
Task 1: Create the LLM Service for the Cohere Command Model
Your first task in integrating your skill with an LLM is to add a service to your instance that calls the /generate endpoint. If a service has already been configured for your instance, or if you're taking this tutorial in a lab setting where this service has already been provided, then take note of the service name and move on to the next task.
With the Oracle Digital Assistant UI open in your browser, click to open the side menu.
Expand Settings > API Services.
Open the LLM Services tab.
Click +Add LLM Service.
Complete the LLM Service dialog to create a POST operation to the endpoint.
Name: Enter an easily identifiable name. You'll reference this name later on.
Endpoint: Copy and paste the Co.Generate endpoint to enable access to the Command model. For example:
For Static Response, choose 200-OK. Then enter the following sample payload:
{
  "meta": {
    "api_version": {
      "version": "1"
    }
  },
  "generations": [
    {
      "id": "6c92c66b-36d2-41f7-a08e-363fa90a40f6",
      "text": " The largest black hole in the Milky Way is called Sagittarius A*, which is located at the center of our galaxy. It has a mass of approximately 4.3 million times that of the Sun."
    }
  ],
  "id": "72f07ada-a5b4-4e69-bd5b-5f1e98cd5784",
  "prompt": "Generate a fact about our milky way"
}
Click Test Request to check for a 200 response.
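The static payload above mirrors the shape of a real /generate response: the generated text lives under generations[0].text. A small sketch of pulling the completion out of that structure (using the same static payload configured above):

```javascript
// The static response configured for the LLM service above.
const staticResponse = {
  meta: { api_version: { version: "1" } },
  generations: [
    {
      id: "6c92c66b-36d2-41f7-a08e-363fa90a40f6",
      text: " The largest black hole in the Milky Way is called Sagittarius A*, which is located at the center of our galaxy. It has a mass of approximately 4.3 million times that of the Sun."
    }
  ],
  id: "72f07ada-a5b4-4e69-bd5b-5f1e98cd5784",
  prompt: "Generate a fact about our milky way"
};

// Each entry in generations is one candidate completion; trim the
// leading space that the model emits before the text.
function extractCompletions(response) {
  return response.generations.map(g => g.text.trim());
}

console.log(extractCompletions(staticResponse)[0]);
```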
Task 2: Import the Mock REST Service
In this task, you add a mock REST service to the instance that delivers static response data for the sales opportunity.
Note:
Because the URL for this service is a placeholder, not a real endpoint, this REST service cannot be run as a functional REST service.
If you're not still in the API Services page from the last step, reopen it by clicking Settings > API Services.
Open the Rest Services tab.
Click More. Then select Import REST Services.
Browse to, then select the RESTService-Tutorial_OpportunityDetails.yaml file from the RESTService-Tutorial_OpportunityDetails.zip file that you extracted to your computer.
Click Open.
Confirm that the REST service has been imported by clicking in the left menu.
Task 3: Import the Skill
With the REST and LLM services added to the instance, you now need to import a skill and connect users to it through its dialog flow definition.
To import this skill:
With the Oracle Digital Assistant UI open in your browser, click to open the side menu.
Click Development and then select Skills.
Click again to collapse the side menu.
Click Import Skill (located at the upper right).
Navigate to, and then select the Generate_Email_Skill_XXX(1.0).zip that you've downloaded to your computer. Then click Open.
Open the skill and take a look at its intents, flows, and entities.
Intents -- The GenerateEmail intent.
Entities -- The EmailDetails composite bag that contains the Topic item (a STRING) and Opportunity, a value list entity of company names. This entity is associated with the GenerateEmail intent. There is also the CustomerRequirement entity, a value list entity whose values include competitive pricing, high performance, and high quality product.
Flows -- The starter flow, GenerateEmail, which is mapped to the GenerateEmail intent. This flow has two states: GetEmailDetail, which resolves the EmailDetail composite bag entity, and the GetCompanyDetails state, which returns the payloads from the Tutorial_OpportunityDetails REST service. The flow also has the following flow-scoped variables:
EmailDetail -- References the EmailDetails composite bag entity to capture user input.
OpportunityDetail -- A map that stores the payload of the Tutorial_OpportunityDetails REST service.
EmailSignature -- A static string.
Note:
Check the status of the Train button to make sure that training has started.
Task 4: Connect the Skill to the Cohere Model
We're now going to enable the skill to access the Cohere service by creating a custom component with an event handler that transforms the REST payloads into formats that are accepted by the Command model and Oracle Digital Assistant.
Complete the following steps:
Click Components in the left navbar.
Click Add Service.
In the Create Service dialog:
Enter Cohere in the Name field.
Accept the default setting, Embedded Container.
Select New Component.
Select LLM Transformation from the Component Type drop down list.
Enter Cohere_Command in the Component Name field.
Select Cohere (located under other) from the Template drop down list.
Click Create.
The Edit Component page opens after the service has been deployed, displaying the template generated for the Cohere Command model.
Its transformation handlers implement the following methods, which map the model-specific payload format to the interface used by Oracle Digital Assistant, known as the Common LLM Interface (CLMI).
transformRequestPayload
transformResponsePayload
transformErrorResponsePayload
Because you selected the Cohere template, these handlers already include the Cohere.command-specific transformation code. No editing is required. If your skill calls a non-Cohere model (or the cohere.command.R model), then you'll need to manually update the handlers.
Note:
The handlers in this template support the /generate API and the associated Cohere.command model, not the /chat API that's used for the cohere.command.R model. If you wish to use the /chat endpoint for this tutorial, then you will need to manually update the request and response payloads in the generated template.
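Conceptually, the generated template's request and response handlers are two mappings between the CLMI format and the Cohere /generate format. The sketch below is illustrative only: the template already contains the real code, and the CLMI field names used here (messages, candidates, content, maxTokens) are assumptions for the sketch, not a substitute for the generated handlers.

```javascript
// Illustrative only: the Cohere template already contains the real
// handlers, and the CLMI field names below are assumptions.

// CLMI-style request -> Cohere /generate request.
function transformRequestPayload(clmiRequest) {
  return {
    // /generate accepts a single prompt string rather than a chat
    // history, so the messages are joined into one block of text.
    prompt: clmiRequest.messages.map(m => m.content).join("\n"),
    max_tokens: clmiRequest.maxTokens,
    temperature: clmiRequest.temperature
  };
}

// Cohere /generate response -> CLMI-style response.
function transformResponsePayload(cohereResponse) {
  return {
    // Each generation becomes one candidate completion.
    candidates: cohereResponse.generations.map(g => ({ content: g.text }))
  };
}
```

The error handler (transformErrorResponsePayload) follows the same pattern for failed calls.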
Click Close (located at the upper right) to return to the Components page.
Ensure that Service Enabled (the default setting) is switched on.
Task 5: Define the Cohere LLM Service for the Skill
To enable the skill to connect users to the Cohere endpoint through the dialog flow, you need to create an LLM Service. This is a skill-level service that combines the instance-wide Cohere LLM service with the Cohere_Command transformation event handler that you created in the previous step.
Click Settings in the left navbar.
Open the Configuration page.
In the Large Language Models Services section (located near the bottom of the page), click +New LLM Service.
Leave the remaining properties in their default settings. Note that Default is switched on (true) if this is the only service that you've created so far for this tutorial.
Important:
Be sure that Mock is switched off.
Click the Save icon (located at the right in the Action column).
Task 6: Integrate the Command Model with the Skill
Now that the skill is connected to the service for the Command model, you're going to connect your skill's users to the model by creating a dialog flow component that can call the model and tell it what to do. The component conveys these instructions using a prompt, which is a block of human-readable text.
Click Flows in the left navbar.
Select the GenerateEmail flow.
If there's an error on the GetCompanyDetails state, open the state's Component tab and make sure that the following properties have been configured:
Rest Service: Tutorial_OpportunityDetails
Authentication Type: No Authentication Required
Endpoint: https://www.opportunitysummary.com
Note:
This is the placeholder URL for the mock REST service that delivers a static response. This is not a functional REST service.
Method: GET
Parameters: Key: OPPORTUNITY; Value: ${EmailDetail.value.Opportunity.value}; Type: Query
Response Mode: Always Use Static Response
Result Variable (Flow Scope): OpportunityDetail
In the GetCompanyDetails state, click and then select Add State from the menu.
In the Description field, enter Email generator. Then click Insert.
Note:
As a best practice, always add descriptions to the invokeLLM states. These component descriptions enable multi-turn conversations when users access an LLM skill through a digital assistant.
The prompt text references the variable values that are passed in for the OPPORTUNITY, TOPIC, OPPORTUNITY_DETAILS, and EMAIL_SIGNATURE parameters. For example:
Draft an email to the ${OPPORTUNITY} sales team
For the LLM to incorporate these parameters, they need values. Because these values are missing, the editor flags errors.
You will add these values in the next step.
To provide the LLM with the parameter values it needs to generate the email, you need to provide FreeMarker expressions for each parameter. Because the parameters provide the LLM with values from various sources (composite bag entity items and the OpportunityDetails REST service), the FreeMarker syntax will vary.
To add these parameters:
Click Add next to Prompt Parameters.
Enter the parameter name in the Name field.
Enter the FreeMarker iterator expression in the Value field.
Set Use Streaming to False so that the message is delivered in its entirety, not incrementally. We recommend that you disable streaming for all Cohere models.
For the Standard Actions, remove all of the actions except for Undo.
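As an analogy for what the map-iterator expression in the steps above produces, flattening the OpportunityDetail map into prompt-ready text looks roughly like this in JavaScript terms (in the skill, a FreeMarker expression performs the real work, and the field names in the sample map below are invented for illustration):

```javascript
// Analogy only: FreeMarker performs this flattening in the skill.
// The field names in this sample map are invented for illustration.
const opportunityDetail = {
  Opportunity: "SO-12345",
  Status: "In Progress",
  CloseDate: "August 15th"
};

// Produce one "key: value" line per map entry, the way the
// OPPORTUNITY_DETAILS parameter appears inside the prompt.
function flattenMap(map) {
  return Object.entries(map)
    .map(([key, value]) => `${key}: ${value}`)
    .join("\n");
}

console.log(flattenMap(opportunityDetail));
```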
Task 7: Test the Prompt with the Prompt Builder
Before we test the prompt that you added in the previous step, let's take a quick look at it.
This prompt reflects good prompt design because:
It assigns a persona to the LLM that is use case-specific:
You are a professional email writer.
It provides brief and concise instructions:
Draft an email to the ${OPPORTUNITY} sales team for the following purpose: ${TOPIC} considering the following details about the opportunity:
Opportunity details:
${OPPORTUNITY_DETAILS}
Email Signature:
${EMAIL_SIGNATURE}
It defines clear acceptance criteria:
- Your email should be concise, and friendly yet remains professional.
- Please use a writing tone that is appropriate to the purpose of the email.
- Ensure to make it obvious when the opportunity is to close.
- Optionally include the things that are important to the customer when making their buying decisions
- If the purpose of the email is negative; for example to communicate miss or loss, do the following: { Step 1: please be very brief. Step 2: please and do not mention activities }
- If the purpose of the email is positive or neutral; for example to congratulate or follow-up on progress, do the following: { Step 1: the products section is the main team objective to achieve, please mention it with enthusiasm in your opening paragraph. Step 2: please motivate the team to finalize the pending activities. }
...
Writing prompts is an iterative process. In fact, continually refining your prompt is a best practice. It may take several revisions before a prompt returns the results that you expect. To help you through this revision cycle, you can use the Prompt Builder to incrementally test and modify your prompt until it functions properly.
To test the prompt, you need to add mock values for the referenced parameters. The tone and content of the model's output is based on these values. You can have the model generate random values by clicking Generate Mock Values, but to control the output, you need to add your own.
To add these values:
In the Component tab, scroll back to the top and click Build Prompt to open the Prompt Builder.
Click Edit.
Enter a value for each parameter in the Mock Value field. When you're done, click Apply.
Here are the example values:
EMAIL_SIGNATURE -- John Smith
OPPORTUNITY -- Elemental Design
OPPORTUNITY_DETAILS -- Completed
TOPIC -- Congratulations
After you've completed the mock values, click Generate Output.
Verify that the LLM output in the LLM Output field is an email that both adheres to the prompt guidelines and incorporates the parameter values. For example, the output may look like this:

Hi Team,

I just wanted to send a quick congratulations to all of you for closing this opportunity. The Elemental Design sales team worked tirelessly to finalize this opportunity and their efforts have paid off!

This opportunity was particularly noteworthy because of the following factors (if relevant): The customer was very hesitant about their needs at first, but your team showed exceptional patience and provided them with all the information they needed to make an informed decision. The customer also had a strict budget, and your team worked hard to provide a solution that fit their needs and their price range.

Going forward, I think it's important to keep in mind what contributed to our success with this opportunity. Your ability to provide thorough and thoughtful solutions to customers is key to our team's success, so keep up the great work! I also want to motivate the team to finalize the pending activities for this opportunity. We still have a long way to go this quarter and I know that you all can continue to reach new heights.

Thank you again for your hard work and dedication!

Sincerely,
John Smith

Is there anything else I can do to help draft an email to the Elemental Design sales team?
Optional step -- Here are a couple of things for you to experiment with before you click Close to return to the Component tab:
Replace the optional directive (around Line 14):
- Optionally include the things that are important to the customer when making their buying decisions
with
- Include the things that are important to the customer when making their buying decisions.
Then click Generate Output. Your results may differ, but here's an example:

Hi Team,

I just wanted to send a quick congratulations to all of you for closing the opportunity with [Client Name]. This was a big opportunity that required a coordinated effort from everyone on the team, and I'm really proud of what we were able to achieve together.

When looking at what was important to the client when making their buying decision, it's clear that our team's ability to demonstrate the value of our products was a key factor in closing this deal. Our portfolio of market-leading products provides unparalleled advantages to our clients, and I believe that this opportunity will be the first of many successes for us this year.

For the upcoming opportunities, let's continue to motivate each other to finalize the pending activities. I have no doubt that with our team's skills and cooperation, we will be able to repeat this success and achieve our sales goals this quarter.

Thanks again for your hard work and dedication. I'm excited to see what we can achieve next!

Sincerely,
John Smith

Is there anything else I can do to help draft an email to the Elemental Design sales team?
Allow the model to generate more creative responses by changing the Temperature from 0 (straightforward responses) to 1 (more randomized responses).
Click Generate Output. Here's an example (your results may again differ):

Hi Team,

I just wanted to send a quick congratulations to all of you for closing the opportunity with [Client Name]. This was a big opportunity that required a lot of effort and teamwork, and I am proud of everything that we have achieved.

Our products were a perfect fit for [Client Name]'s needs, particularly because [explain why]. Their decision to purchase from us rather than from our competitors is a testament to the hard work and dedication that goes into everything we do here at Elemental Design.

I want to give a special shout-out to [Name] for leading the charge on this one. We would not have closed this deal if it were not for their persistence in pursuing the client and making sure that we were delivering exactly what they needed. It was a team effort, however, and I want to thank everyone for their contributions.

Now, let's keep up the good work and keep pursuing new opportunities to make 2022 our best year yet!

Sincerely,
John Smith
[Your Name]
[Your Title]
[Company Name]
[Your Contact Information]

Would you like me to make any other changes to this email to better suit your needs?
Note:
Do not click Save Settings as this will overwrite the original prompt text.
Click Close to exit the Prompt Builder.
Task 8: Test the Prompt with the Skill Tester
Now that you've verified that the LLM can receive the skill's input, you're ready to interact with it in the Skill Tester.
Open the Skill Tester by clicking Preview (located at the upper right).
Enter the following request:
Send an email to the Elemental Design sales team that congratulates them on the progress and let them know that we should talk strategy for closing in tomorrow's meeting.
In the Conversation pane, notice that the conversation remains in the GenerateEmail state.
This is for two reasons: there is no error that would cause the dialog flow to move on to the showLLMError state, and the component is configured for multi-turn interactions that allow you to refine the LLM output.
To get a look at the outcome of the GenerateEmail state's processing, open the LLM Calls tab.
This view renders when the dialog flow lands on an LLM component state like GenerateEmail. From it, you can compare the outcome of the state's processing at each turn of the conversation with the result that's sent to skill users. You can also review the prompt populated with variable values. In our case, we want to find out which values were sent for the OPPORTUNITY, TOPIC, OPPORTUNITY_DETAILS, and EMAIL_SIGNATURE variables. To access the prompt in this form, hover over the text in the Request column, right-click, then choose Show Full Text.
By scrolling through the Request window, note, for example, that ${OPPORTUNITY} and ${TOPIC} in the original prompt text have been replaced by Elemental Design Team and the string that you just entered ("Send an email to the Elemental Design sales team that congratulates them...").
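Conceptually, the substitution you see in the Request window works like simple template replacement: each ${NAME} token in the prompt is replaced with the corresponding parameter value before the request is sent. A rough sketch of that idea (FreeMarker, which performs the real substitution, does far more, so treat this as an analogy):

```javascript
// Analogy only: FreeMarker performs the real substitution in the skill.
// Replace each ${NAME} token with the matching parameter value,
// leaving unknown tokens untouched.
function fillPrompt(template, params) {
  return template.replace(/\$\{(\w+)\}/g, (match, name) =>
    name in params ? params[name] : match
  );
}

const prompt = fillPrompt(
  "Draft an email to the ${OPPORTUNITY} sales team for the following purpose: ${TOPIC}",
  { OPPORTUNITY: "Elemental Design", TOPIC: "Congratulations" }
);
console.log(prompt);
```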
Click Close to exit the Request window.
Returning to the Bot Tester window, try refining the output by entering the following:
Include the things that are important to the customer when making their buying decisions
The LLM should incorporate this feedback into its response. For example, it might include something like the following:

With this opportunity coming to a close on August 15th, it's important to send out those product samples by June 10th and schedule a product demo by June 15th. As a team, it is crucial to meet to negotiate pricing and terms by June 25th. These are the things that are important to the customer when making their buying decisions, so it's vital that we complete these steps in order to give them the best experience possible and close the deal.
This new iteration of the message includes an Undo button, which reverts the output to the previous response. Even though a second turn has been executed, the conversation remains in the GenerateEmail state.
Open the LLM Calls tab. The view reflects the outcome of the processing for this second conversation turn.
Task 9: Validate the LLM Output
In this task, you use the declarative validation functions of the InvokeLLM component to test the LLM output for the presence (or absence) of one of the values defined for the CustomerRequirement entity, such as competitive pricing, customizable design, and so on.
Open the GenerateEmail state and select the Component tab.
Expand User Messaging again. Note the Use Streaming setting, which you set to False in Task 6: Integrate the Command Model with the Skill to accommodate the Cohere model. Regardless of which model you use, you must always disable streaming when validating the LLM output, because messages can't be validated in chunks; they can only be validated when they're complete. If you enable both streaming and validation, users may see multiple streams of output, which may confuse them.
Send an email to the Elemental Design sales team that congratulates them on the progress and let them know that we should talk strategy for closing in tomorrow's meeting.
The skill will reply with an "Enhancing the response. One moment, please..." message.
The message that follows it may be valid because it incorporates the CustomerRequirement entity values like competitive pricing or durability, or it may not be valid because values like these are missing. When the message is not valid, the dialog flow transitions to the showLLMError state, which outputs an error message that names the entity whose values were not matched (CustomerRequirement), the HTTP status code returned by the call to the LLM (200), and the CLMI (Common LLM Interface) error code noting that output failed validation (responseInvalid): An unexpected error occurred while invoking the Large Language Model: {"errorMessage":"The CustomerRequirement is not specified in the response.","errorStatus":200,"errorCode":"responseInvalid"}
The LLM Calls tab shows that this error was thrown after one retry attempt (the default number of attempts allotted for InvokeLLM component states like GenerateEmail).
If you look at the contents in the LLM Text Response field in the LLM Calls tab, you'll see the generated response. Because it's missing the CustomerRequirement entity values, it failed validation. For example:

Hi Elemental Design Sales Team,
Congratulations on the progress made on opportunity SO-12345 to win the deal worth $60,000 involving athletic footwear and apparel! You've done a great job, and it's time to talk strategy for closing. Tomorrow's meeting is a vital step toward achieving the goal, so keep it up!
I want to express my enthusiasm for the team's efforts in engaging the customer's interest in the products and moving forward with the deal. You are on track to close the opportunity on August 15th, and I want to ensure we are ready for the next steps.
I'll see you tomorrow at the meeting. Let's discuss further so we can make sure we're on the right track.
Sincerely,
John Smith,
Regional Sales Director
[Remove signature block]
Here's an example of valid output. In this case, the dialog flow doesn't transition to the showLLMError state, but remains in the GenerateEmail state as it did in Task 8: Test the Prompt with the Skill Tester.

Hi Elemental Design team,
Congratulations on the progress you've made on opportunity SO-12345! The team has been working tirelessly and it has paid off. I'm confident that we're going to be able to close this deal soon. These are the things that are important to the customer when making their buying decisions, namely, high-quality materials for durability and performance, competitive pricing to fit within their budget, responsive customer support and after-sales service, and customizable design options for team branding. It is vital that we keep these in mind when working on this opportunity going forward.
I'll see you tomorrow at our meeting where we can discuss our strategy for closing this opportunity. In the meantime, make sure you're focusing on sending out those product samples and scheduling that product demo. We need to move quickly to finalize the pending activities, notably those that are scheduled for June 10th and June 15th. There is no doubt that we will be able to achieve our objective if we remain focused.
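Under the hood, the entity-match check that separates the valid and invalid outputs above amounts to testing whether the LLM response contains at least one of the entity's values. The declarative validation feature handles this for you; as a conceptual sketch only (the value list is taken from the CustomerRequirement entity described earlier in this tutorial):

```javascript
// Conceptual sketch of entity-match validation; the InvokeLLM
// component's declarative validation performs the real check.
const customerRequirementValues = [
  "competitive pricing",
  "high performance",
  "high quality product"
];

// Return true if the output mentions at least one entity value,
// ignoring case.
function outputMentionsEntity(llmOutput, values) {
  const text = llmOutput.toLowerCase();
  return values.some(v => text.includes(v.toLowerCase()));
}

console.log(outputMentionsEntity(
  "Our competitive pricing sealed the deal.",
  customerRequirementValues
)); // true
```

When no value matches, the component retries and, if the retry also fails, raises the responseInvalid error shown above.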